Wednesday, 26 January 2011

Configuring a Database for Performance



Necessary Initialization Parameters Without Performance Impact

DB_NAME
    Name of the database. This should match the ORACLE_SID environment variable.
DB_DOMAIN
    Location of the database in Internet dot notation.
OPEN_CURSORS
    Limit on the maximum number of cursors (active SQL statements) for each session. The setting is application-dependent; 500 is recommended.
CONTROL_FILES
    Set to contain at least two files on different disk drives to prevent failures from control file loss.
DB_FILES
    Set to the maximum number of files that can be assigned to the database.
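
A minimal text initialization parameter file (pfile) sketch showing how these parameters might look; every name, path, and value here is a hypothetical example, not a recommendation for any specific system:

```
# Hypothetical pfile fragment -- values are illustrative only
db_name       = orcl
db_domain     = example.com
open_cursors  = 500
control_files = ('/u01/oradata/orcl/control01.ctl',
                 '/u02/oradata/orcl/control02.ctl')
db_files      = 200
```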

Important Initialization Parameters With Performance Impact

COMPATIBLE
    Specifies the release with which the Oracle server must maintain compatibility. It lets you take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality in your environment. If your application was designed for a specific release of Oracle and you are actually installing a later release, then you might want to set this parameter to the version of the previous release.

DB_BLOCK_SIZE
    Sets the size of the Oracle database blocks stored in the database files and cached in the SGA. The range of values depends on the operating system, but it is typically 8192 for transaction processing systems, with higher values for data warehouse systems.

SGA_TARGET
    Specifies the total size of all SGA components. If SGA_TARGET is specified, then the buffer cache (DB_CACHE_SIZE), Java pool (JAVA_POOL_SIZE), large pool (LARGE_POOL_SIZE), and shared pool (SHARED_POOL_SIZE) memory pools are automatically sized.

PGA_AGGREGATE_TARGET
    Specifies the target aggregate PGA memory available to all server processes attached to the instance.

PROCESSES
    Sets the maximum number of processes that can be started by the instance. This is the most important primary parameter to set, because many other parameter values are derived from it.

SESSIONS
    Set by default from the value of PROCESSES. However, if you are using the shared server, then the derived value is likely to be insufficient.

UNDO_MANAGEMENT
    Specifies which undo space management mode the system should use. AUTO mode is recommended.

UNDO_TABLESPACE
    Specifies the undo tablespace to be used when an instance starts up.
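
As a sketch, these parameters might appear in a pfile like the fragment below; the sizes and names are illustrative only, and note that DB_BLOCK_SIZE can only be chosen when the database is created:

```
# Hypothetical pfile fragment -- sizes are illustrative only
compatible           = '10.2.0'
db_block_size        = 8192
sga_target           = 1G
pga_aggregate_target = 512M
processes            = 150
undo_management      = AUTO
undo_tablespace      = UNDOTBS1
```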

Configuring Undo Space

The V$UNDOSTAT view contains statistics for monitoring and tuning undo space.
Using this view, you can better estimate the amount of undo space required for the
current workload. Oracle also uses this information to help tune undo usage in the
system. The V$ROLLSTAT view contains information about the behavior of the undo
segments in the undo tablespace.
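
As an illustrative sketch (assuming SELECT privilege on the V$ views), undo activity per collection interval can be inspected like this:

```sql
-- Undo blocks consumed, transactions, and longest query per
-- 10-minute interval; useful input when sizing the undo tablespace.
SELECT BEGIN_TIME, END_TIME, UNDOBLKS, TXNCOUNT, MAXQUERYLEN
  FROM V$UNDOSTAT
 ORDER BY BEGIN_TIME;
```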

Sizing Redo Log Files
The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; undersized log files increase checkpoint activity and reduce performance.
Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to undersized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column of the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control.
It may not always be possible to provide a specific size recommendation for redo log
files, but redo log files in the range of a hundred megabytes to a few gigabytes are
considered reasonable. Size your online redo log files according to the amount of redo
your system generates. A rough guide is to switch logs at most once every twenty
minutes.
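
The sizing advice mentioned above can be retrieved with a query along these lines:

```sql
-- Smallest online redo log size (in MB) that avoids checkpointing
-- beyond what FAST_START_MTTR_TARGET already requires.
SELECT OPTIMAL_LOGFILE_SIZE
  FROM V$INSTANCE_RECOVERY;
```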


Creating Subsequent Tablespaces

All databases should have several tablespaces in addition to the SYSTEM and
SYSAUX tablespaces. These additional tablespaces include:

- A temporary tablespace, which is used for operations such as sorting
- An undo tablespace to contain information for read consistency, recovery, and rollback statements
- At least one tablespace for actual application use


For permanent tables, the choice between local and global extent management on
tablespace creation can have a large effect on performance. For any permanent
tablespace that has moderate to large insert, modify, or delete operations compared to
reads, local extent management should be chosen.

Creating Permanent Tablespaces - Automatic Segment-Space Management
For permanent tablespaces, Oracle recommends using automatic segment-space
management. Such tablespaces, often referred to as bitmap tablespaces, are locally
managed tablespaces with bitmap segment space management.
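
A minimal sketch of such a tablespace; the tablespace name, datafile path, and size are hypothetical:

```sql
-- Locally managed tablespace with bitmap (automatic) segment
-- space management.
CREATE TABLESPACE apps_data
  DATAFILE '/u01/oradata/orcl/apps_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```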

Creating Temporary Tablespaces
Properly configuring the temporary tablespace helps optimize disk sort performance.
Temporary tablespaces can be dictionary-managed or locally managed. Oracle
Corporation recommends the use of locally managed temporary tablespaces with a
UNIFORM extent size of 1 MB.
You should monitor temporary tablespace activity to check how many extents are
being allocated for the temporary segment. If an application extensively uses
temporary tables, as in a situation when many users are concurrently using temporary
tables, the extent size could be set smaller, such as 256K, because every usage requires
at least one extent. The EXTENT MANAGEMENT LOCAL clause is optional for temporary
tablespaces because all temporary tablespaces are created with locally managed
extents of a uniform size. The default for SIZE is 1M.
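
A sketch following the recommendation above; the tablespace name and tempfile path are hypothetical:

```sql
-- Locally managed temporary tablespace with 1 MB uniform extents.
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u01/oradata/orcl/temp01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```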

Creating and Maintaining Tables for Good Performance
When installing applications, an initial step is to create all necessary tables and
indexes. When you create a segment, such as a table, Oracle allocates space in the
database for the data. If subsequent database operations cause the data volume to
increase and exceed the space allocated, then Oracle extends the segment.
When creating tables and indexes, note the following:
- Specify automatic segment-space management for tablespaces. This allows Oracle to automatically manage segment space for best performance.
- Set storage options carefully. Applications should carefully set storage options for the intended use of the table or index. This includes setting the value for PCTFREE. Note that using automatic segment-space management eliminates the need to specify PCTUSED.

Note: Use of free lists is no longer encouraged. To use automatic
segment-space management, create locally managed tablespaces,
with the segment space management clause set to AUTO.
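
As an illustration, a table created with an explicit PCTFREE; the table and tablespace names are hypothetical, and in an ASSM tablespace PCTFREE is still honored while PCTUSED is ignored:

```sql
CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  customer VARCHAR2(100),
  amount   NUMBER(12,2)
) PCTFREE 20                -- room left in each block for updates
  TABLESPACE apps_data;     -- hypothetical ASSM tablespace
```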

Table Compression
Heap-organized tables can be stored in a compressed format that is transparent for any
kind of application. Table compression was designed primarily for read-only
environments and can cause processing overhead for DML operations in some cases.
However, it increases performance for many read operations, especially when your
system is I/O bound.
Compressed data in a database block is self-contained, which means that all the information needed to re-create the uncompressed data in a block is available within that block. A block is also kept compressed in the buffer cache. Table compression
not only reduces the disk storage but also the memory usage, specifically the buffer
cache requirements. Performance improvements are accomplished by reducing the
amount of necessary I/O operations for accessing a table and by increasing the
probability of buffer cache hits.
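
A sketch of a compressed heap-organized table for a mostly-read workload; the names are hypothetical:

```sql
-- Block-level table compression; best suited to data that is
-- loaded in bulk and rarely updated.
CREATE TABLE sales_history (
  sale_date  DATE,
  product_id NUMBER,
  quantity   NUMBER
) COMPRESS;
```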

Reclaiming Unused Space
Over time, it is common for segment space to become fragmented or for a segment to
acquire a lot of free space as the result of update and delete operations. The resulting
sparsely populated objects can suffer performance degradation during queries and
DML operations.
Oracle Database provides a Segment Advisor that provides advice on whether an
object has space available for reclamation based on the level of space fragmentation
within an object.
If an object does have space available for reclamation, you can compact and shrink
database segments or you can deallocate unused space at the end of a database
segment.
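
A sketch of the shrink operation; the table name is hypothetical, and shrinking requires an ASSM tablespace and row movement:

```sql
ALTER TABLE sales_history ENABLE ROW MOVEMENT;
-- Compacts the segment and lowers the high water mark.
ALTER TABLE sales_history SHRINK SPACE;
```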

Indexing Data
The most efficient way to create indexes is to create them after data has been loaded.
By doing this, space management becomes much simpler, and no index maintenance
takes place for each row inserted. SQL*Loader automatically does this, but if you are using other methods to do the initial data load, you might need to do this manually. Additionally, index creation can be done in parallel using the PARALLEL clause of the CREATE INDEX statement. However, SQL*Loader is not able to do this, so you must manually create indexes in parallel after loading data.
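
A sketch of a parallel index build after the load; the names and degree of parallelism are hypothetical:

```sql
-- NOLOGGING speeds up the build by skipping most redo generation;
-- take a backup afterwards if recoverability matters.
CREATE INDEX sales_hist_ix
  ON sales_history (sale_date)
  PARALLEL 4 NOLOGGING;
```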

Specifying Memory for Sorting Data
During index creation on tables that contain data, the data must be sorted. This sorting is fastest when all available memory can be used for it. Oracle recommends that you enable automatic sizing of SQL working areas by setting the PGA_AGGREGATE_TARGET initialization parameter.
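
For example (the value is illustrative only; the right target depends on the memory available on your system):

```sql
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 512M;
```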

Performance Considerations for Shared Servers
Using shared servers reduces the number of processes and the amount of memory
consumed on the system. Shared servers are beneficial for systems where there are
many OLTP users performing intermittent transactions.
Using shared servers rather than dedicated servers is also generally better for systems
that have a high connection rate to the database. With shared servers, when a connect
request is received, a dispatcher is already available to handle concurrent connection
requests. With dedicated servers, on the other hand, a connection-specific dedicated
server is sequentially initialized for each connection request.
Performance of certain database features can improve when a shared server
architecture is used, and performance of certain database features can degrade slightly
when a shared server architecture is used. For example, a session can be prevented
from migrating to another shared server while parallel execution is active.
A session can remain nonmigratable even after a request from the client has been
processed, because not all the user information has been stored in the UGA. If a server
were to process the request from the client, then the part of the user state that was not
stored in the UGA would be inaccessible. To avoid this, individual shared servers
often need to remain bound to a user session.

Identifying Contention Using the Dispatcher-Specific Views
The following views provide dispatcher performance statistics:
V$DISPATCHER - general information about dispatcher processes
V$DISPATCHER_RATE - dispatcher processing statistics
The V$DISPATCHER_RATE view contains current, average, and maximum dispatcher
statistics for several categories. Statistics with the prefix CUR_ are statistics for the
current sample. Statistics with the prefix AVG_ are the average values for the statistics
since the collection period began. Statistics with the prefix MAX_ are the maximum
values for these categories since statistics collection began.
To assess dispatcher performance, query the V$DISPATCHER_RATE view and compare
the current values with the maximums. If your present system throughput provides
adequate response time and current values from this view are near the average and
less than the maximum, then you likely have an optimally tuned shared server
environment.
If the current and average rates are significantly less than the maximums, then
consider reducing the number of dispatchers. Conversely, if current and average rates
are close to the maximums, then you might need to add more dispatchers. A general
rule is to examine V$DISPATCHER_RATE statistics during both light and heavy system
use periods. After identifying your shared server load patterns, adjust your
parameters accordingly.
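
An illustrative comparison of current, average, and maximum rates; the EVENT_RATE columns shown are just one of the several rate categories in this view:

```sql
SELECT NAME, CUR_EVENT_RATE, AVG_EVENT_RATE, MAX_EVENT_RATE
  FROM V$DISPATCHER_RATE;
```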
If needed, you can also mimic processing loads by running system stress tests and
periodically polling the V$DISPATCHER_RATE statistics. Proper interpretation of these
statistics varies from platform to platform. Different types of applications can also cause significant variations in the statistical values recorded in V$DISPATCHER_RATE.
Reducing Contention for Dispatcher Processes
To reduce contention, consider the following:
Adding dispatcher processes
The total number of dispatcher processes is limited by the value of the
initialization parameter MAX_DISPATCHERS. You might need to increase this
value before adding dispatcher processes.
Enabling connection pooling
When system load increases and dispatcher throughput is maximized, it is not
necessarily a good idea to immediately add more dispatchers. Instead, consider
configuring the dispatcher to support more users with connection pooling.
Enabling session multiplexing
Multiplexing is used by a connection manager process to establish and maintain
network sessions from multiple users to individual dispatchers. For example,
several user processes can connect to one dispatcher by way of a single connection
from a connection manager process. Session multiplexing is beneficial because it
maximizes use of the dispatcher process connections. Multiplexing is also useful
for multiplexing database link sessions between dispatchers.
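
A hypothetical DISPATCHERS setting that enables both connection pooling and session multiplexing for TCP connections; the dispatcher count is illustrative:

```sql
ALTER SYSTEM SET DISPATCHERS =
  '(PROTOCOL=TCP)(DISPATCHERS=3)(POOL=ON)(MULTIPLEX=ON)';
```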

Identifying Contention for Shared Servers
This section discusses how to identify contention for shared servers.
Steadily increasing wait times in the requests queue indicate contention for shared
servers. To examine wait time data, use the dynamic performance view V$QUEUE. This
view contains statistics showing request queue activity for shared servers. By default,
this view is available only to the user SYS and to other users with SELECT ANY TABLE
system privilege, such as SYSTEM.

Monitor these statistics occasionally while your application is running by issuing the
following SQL statement:
SELECT DECODE(TOTALQ, 0, 'No Requests',
WAIT/TOTALQ || ' HUNDREDTHS OF SECONDS') "AVERAGE WAIT TIME PER REQUEST"
FROM V$QUEUE
WHERE TYPE = 'COMMON';
This query returns the results of a calculation that show the following:
AVERAGE WAIT TIME PER REQUEST
-----------------------------
.090909 HUNDREDTHS OF SECONDS
From the result, you can tell that a request waits an average of 0.09 hundredths of a
second in the queue before processing.
You can also determine how many shared servers are currently running by issuing the
following query:
SELECT COUNT(*) "Shared Server Processes"
FROM V$SHARED_SERVER
WHERE STATUS != 'QUIT';
The result of this query could look like the following:
Shared Server Processes
-----------------------
10
If you detect resource contention with shared servers, then first make sure that this is
not a memory contention issue by examining the shared pool and the large pool. If
performance remains poor, then you might want to create more resources to reduce
shared server process contention. You can do this by modifying the optional server
process initialization parameters:
MAX_DISPATCHERS
MAX_SHARED_SERVERS
DISPATCHERS
SHARED_SERVERS
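
For example, to raise the number of shared servers dynamically (the values are illustrative only):

```sql
ALTER SYSTEM SET SHARED_SERVERS = 10;
ALTER SYSTEM SET MAX_SHARED_SERVERS = 20;
```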
