Performance
- Metalink AMM Note 443746.1
- Oracle 12c documentation: Memory Management
- Memory Management through the versions
- Automatic Memory Management discussion with Tom Kyte
- Automatic Memory Management from ORACLE_BASE
- How Parsing in Oracle Causes Contention for Shared Pool and Library Cache Latches
- Tuning
- juliandyke.com - Advanced Diagnostics (including oradebug 10046 10053 etc...)
Tanel Poder's session snapper
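Snapper itself is not reproduced here; it is a free SQL*Plus script from Tanel Poder (tanelpoder.com) that samples v$session and v$sesstat without installing any objects. As a rough usage sketch, assuming snapper.sql has been downloaded to the current directory (check the script header for the exact parameters):
SQL> @snapper ash 5 1 all
SQL> @snapper stats 5 1 <sid>
The parameters are roughly: what to sample (ash/stats/all), seconds per snapshot, number of snapshots, and which sessions to watch.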
Show long-running SQL queries / statements
Long-running here means more than 10 minutes (last_call_et > 600); adjust as necessary.
set pages 1000 heading on underline off lines 1000
col username for a15
col program for a20
col osuser for a15
select distinct machine
     , nvl(s.osuser,'no user') osuser
     , s.username username
     , s.sid sid
     , s.serial# serial#
     , (s.last_call_et/60) mins
     , s.last_call_et secs
     , s.program program
     , s.sql_id sql_id
     , q.sql_text sql_text
from v$session s
   , v$sql q
where s.sql_id = q.sql_id
and s.status = 'ACTIVE'
and s.type != 'BACKGROUND'
and s.last_call_et > 600
order by sid, serial#
/
Is stats gathering enabled?
column client_name format A55
SELECT client_name, status from dba_autotask_operation;
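If the task is disabled, it can be switched on or off with dbms_auto_task_admin (11g onwards); a sketch, assuming the standard client name 'auto optimizer stats collection':
exec dbms_auto_task_admin.enable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);
exec dbms_auto_task_admin.disable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);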
Check when statistics were last gathered
select owner
     , table_name
     , tablespace_name
     , to_char(last_analyzed,'DD-MON-YY HH24:MI:SS') analyzed_at
from dba_tables
where owner not in ('SYS','SYSTEM')
order by last_analyzed
/
Gather more accurate statistics
The default level is 2; higher levels make the optimizer sample more blocks at parse time.
alter system set optimizer_dynamic_sampling = 4;
Check for stale statistics
select m.table_owner
     , m.table_name
     , m.timestamp
     , sum(m.inserts)
     , sum(m.updates)
     , sum(m.deletes)
     , t.num_rows
     , t.last_analyzed
from sys.dba_tab_modifications m
   , dba_tables t
where m.table_owner = t.owner
and m.table_name = t.table_name
group by m.table_owner
       , m.table_name
       , m.timestamp
       , t.num_rows
       , t.last_analyzed
order by 1,2,3
/
or
select dt.owner||'.'||dt.table_name owner_table
, dtm.partition_name
, inserts+updates+deletes modified_rows, num_rows, last_analyzed
, case when num_rows = 0 then null
else (inserts+updates+deletes) / num_rows * 100
end percent_modified
from dba_tab_modifications dtm
join dba_tables dt
on dtm.table_name = dt.table_name
and dtm.table_owner = dt.owner
order by 6 desc
/
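Note that dba_tab_modifications is only refreshed from memory periodically, so for up-to-date figures flush the monitoring info first:
exec dbms_stats.flush_database_monitoring_info;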
Gather database stats
#!/usr/bin/ksh
sqlplus / as sysdba << EOSQL
exec dbms_stats.gather_database_stats( estimate_percent=>dbms_stats.auto_sample_size, degree => 8 );
EOSQL
Gather dictionary and fixed objects statistics
How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects (Doc ID 457926.1)
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_fixed_objects_stats;
Gather schema statistics
begin
dbms_stats.gather_schema_stats ( ownname => 'FEUK_SCR'
, options => 'gather auto'
, estimate_percent => 100
, degree => 6
, method_opt => 'for all indexed columns'
, cascade => true
);
end;
/
Gather table statistics
begin
  dbms_stats.gather_table_stats ( ownname => 'FEUK_PRF'
                                , tabname => 'TD_NOTE'
                                , estimate_percent => 20
                                , cascade => true
                                , degree => 6
                                , method_opt => 'FOR ALL COLUMNS SIZE 100'
                                );
end;
/
Show any events set in database
set serveroutput on
declare
event_level number;
begin
dbms_output.enable(null);
dbms_output.put_line ('Event list');
dbms_output.put_line ('----------');
for i in 10000..10999 loop
sys.dbms_system.read_ev(i,event_level);
if (event_level > 0) then
dbms_output.put_line('Event '||to_char(i)||' set at level '|| to_char(event_level));
end if;
end loop;
end;
/
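For reference, the kind of event this block reports would typically have been set at session level with something like this (10046 level 12 shown as an example):
alter session set events '10046 trace name context forever, level 12';
alter session set events '10046 trace name context off';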
Top memory consumers in AIX
svmon -U -t 10 -O process=on -O sortentity=pgsp
Show all Oracle hidden parameters
set pages 100 lines 300
col ksppinm for a50
col ksppstvl for a50
select ksppinm
     , ksppstvl
from x$ksppi a
   , x$ksppsv b
where 1=1
and a.indx = b.indx
and substr(ksppinm,1,1) = '_'
order by ksppinm
/
How much memory is being used by processes (PGA memory)?
col PGA_USED_MEM for 999,999,999,990
col PGA_ALLOC_MEM for 999,999,999,990
col PGA_FREEABLE_MEM for 999,999,999,990
col PGA_MAX_MEM for 999,999,999,990
compute sum of PGA_USED_MEM on report
compute sum of PGA_ALLOC_MEM on report
compute sum of PGA_MAX_MEM on report
break on report
SELECT PROGRAM, PGA_USED_MEM, PGA_ALLOC_MEM, PGA_FREEABLE_MEM, PGA_MAX_MEM
FROM V$PROCESS
order by 2
/
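For the instance-wide picture rather than per process, v$pgastat can be summarised; a small sketch (these statistic names are from the standard view, and the values are in bytes):
select name
     , round(value/1024/1024) mb
from v$pgastat
where name in ('aggregate PGA target parameter'
              ,'total PGA allocated'
              ,'total PGA inuse'
              ,'maximum PGA allocated');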
Enable Automatic Memory Management (AMM)
AMM is enabled by setting one or both of the following memory parameters:
alter system set memory_max_target=2g scope=spfile;
alter system set memory_target=2g scope=spfile;
With these memory parameters set, AMM is enabled.
Memory will now be allocated automatically to where it is needed.
alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
If sga_target and/or pga_aggregate_target are set, these will be treated as minimum values.
If only one of sga_target or pga_aggregate_target is set, the other will be set to (memory_target - the value set).
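To see how AMM has currently divided memory_target between the components (11g onwards), something like this should work:
select component
     , round(current_size/1024/1024) current_mb
from v$memory_dynamic_components
where current_size > 0
order by current_size desc;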
Disable AMM
SQL> alter system reset memory_max_target scope=spfile sid='*';
SQL> alter system reset memory_target scope=spfile sid='*';
Enable ASMM
SQL> alter system set SGA_MAX_SIZE=1400m scope=spfile sid='*';
SQL> alter system set SGA_TARGET=1000m scope=spfile sid='*';
SQL> alter system set PGA_AGGREGATE_TARGET=480m scope=spfile sid='*';
Reboot database and verify that we have switched from AMM to ASMM
SQL> show parameter memory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_max_target                    big integer 0
memory_target                        big integer 0
--> AMM disabled
SQL> show parameter sga

NAME                     TYPE        VALUE
------------------------ ----------- ------------------------------
sga_max_size             big integer 1408M
sga_target               big integer 912M

SQL> show parameter pga

NAME                     TYPE        VALUE
------------------------ ----------- ------------------------------
pga_aggregate_target     big integer 480M
--> ASMM enabled !
SGA tuning
Problem: a 12c SGA kept growing too high and the database kept crashing due to memory issues. The SGA started at 2G, was increased to 4G, and is now at 5G.
An SR was raised with Oracle. This is the response:
You need to query the advice views such as V$MEMORY_TARGET_ADVICE, V$SGA_TARGET_ADVICE, V$SHARED_POOL_ADVICE etc., interpret the results and then make the changes to the memory components as indicated by the SIZE column. Please review Note 1323708.1: Tuning the SGA_TARGET using V$SGA_TARGET_ADVICE, then follow the version-specific advice.

Please note that Oracle recommends specifying a value for each of the tunable parameters, i.e. Shared Pool, Database Buffer Cache, Large Pool, Java Pool and Streams Pool, even when AMM or ASMM is enabled (you are using AMM). When a minimum value for each of the tunable parameters is set, Oracle will make sure that the amount of memory allocated to the corresponding pool will not shrink below the specified amount. Best practice with auto-tuning is to include explicit minimum settings for the various auto-tuned components; this helps the auto-tuner make "smarter" choices about moving memory within the SGA. While explicit settings are not required, the auto-tuner can get over-aggressive with memory moves when auto-tuned components are set to 0.

The parameter _kghdsidx_count controls the number of subpools used. Setting this parameter to 2 will be best; currently you have it set to 5.

SQL> alter system set "_kghdsidx_count"=2 scope=spfile;

or add this in the pfile: "_kghdsidx_count"=2

As a best practice for SGA components you can use the following recommendations:
- SHARED_POOL_SIZE >= 2G (because a minimum of 1G/subpool is recommended)
- LARGE_POOL_SIZE >= 150M (used for parallel executions)
- STREAMS_POOL_SIZE >= 150M (if you use Data Pump regularly)
- JAVA_POOL_SIZE >= 150M (if you use Java)
- SGA_TARGET >= 3G (because it should be greater by 10% than the sum of its components)
or another one from an 11g incident...
ACTION PLAN
===========
1. We can see that your shared pool is divided into 4 subpools. Set _kghdsidx_count to 2 in order to be sure the shared pool will be divided into only 2 subpools and not more. You currently have 4 subpools, which favours fragmentation.

connect / as sysdba
SQL> alter system set "_kghdsidx_count"=2 scope=spfile;
- restart the database

FYI: Starting with Oracle 9i, the shared pool is divided into multiple sub-heaps; until 9.0 it was always allocated as one large heap. Multiple subpools can give better performance, but they can also produce ORA-4031 errors when there is no space in a subpool as a result of fragmentation: a process will use only a specific subpool, and if an error is reported it won't be migrated to another subpool. Oracle 9i implemented multiple subpools to avoid shared pool latch contention, but the cost is that each subpool is smaller and hence more susceptible to an ORA-4031. The parameter _kghdsidx_count can be used to override the default number of subpools.

2. Decrease the number of open_cursors from 3000 to 1000. The open_cursors parameter is defined per session. Having that many cursors open per session is too much, and it means that cursors are not being correctly handled/closed at application level. 1000 cursors per session should be enough for any application; in some cases we recommend going as low as 600, which should still be enough.

3. Apply the workaround to disable the use of durations by setting "_enable_shared_pool_durations=FALSE". This will allow unpinned SQLA, KGLH0 and KGLHD memory to be freed to make room for new permanent allocations.

4. Set a minimum for the SGA within the memory_target size using the parameter sga_target, for example sga_target=13G. This will mean that at all times 13G of memory from memory_target is reserved for the SGA; more memory can be allocated to it if necessary and available. So please modify the parameters sga_max_size and sga_target accordingly. This will require a database restart.
But, from Note 1323708.1...
In Oracle11g (or higher), the information in the V$SGA_TARGET_ADVICE view is similar to that provided in the V$MEMORY_TARGET_ADVICE view for Automatic Shared Memory Management. If MEMORY_TARGET is set, then SGA_TARGET should be set to 0. See Document 443746.1 for more details on this. If MEMORY_TARGET is not set, use the same steps as for 10g.
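For example, a starting point for reading V$SGA_TARGET_ADVICE might be the following (standard view columns; compare how the estimated DB time and physical reads change relative to the SGA_SIZE_FACTOR = 1 row):
select sga_size
     , sga_size_factor
     , estd_db_time
     , estd_physical_reads
from v$sga_target_advice
order by sga_size;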
How much redo was generated / How many archivelog switches have occurred per hour over the past week?
set lines 200
col day for a9 hea "Date\Hour"
col "00" for a5
col "01" for a5
col "02" for a5
col "03" for a5
col "04" for a5
col "05" for a5
col "06" for a5
col "07" for a5
col "08" for a5
col "09" for a5
col "10" for a5
col "11" for a5
col "12" for a5
col "13" for a5
col "14" for a5
col "15" for a5
col "16" for a5
col "17" for a5
col "18" for a5
col "19" for a5
col "20" for a5
col "21" for a5
col "22" for a5
col "23" for a5
col "all" for a9 heading "All day"
alter session set nls_date_format='DD-MON-YY'
/
select * from (
select trunc(first_time) day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23",
to_char(count(to_char(first_time,'DD')),'9999') "all"
from v$log_history
group by trunc(first_time)
order by trunc(first_time) desc
)
where rownum < 8
order by 1
/
How much archive has been generated per day
Also shows how much RMAN has deleted.
set pages 100
select sum_arch.day
, sum_arch.generated_mb
, nvl(sum_arch_del.deleted_mb, 0) deleted_mb
, sum_arch.generated_mb - nvl(sum_arch_del.deleted_mb, 0) remaining_mb
from (
select to_char (completion_time, 'dd-mm-yyyy') day
, sum (round ( (blocks * block_size) / (1024 * 1024), 2)) generated_mb
from v$archived_log
where archived = 'YES'
group by to_char (completion_time, 'dd-mm-yyyy')) sum_arch
, (
select to_char (completion_time, 'dd-mm-yyyy') day
, sum (round ( (blocks * block_size) / (1024 * 1024), 2)) deleted_mB
from v$archived_log
where archived = 'YES' and deleted = 'YES'
group by to_char (completion_time, 'dd-mm-yyyy')) sum_arch_del
where sum_arch.day = sum_arch_del.day(+)
order by to_date (day, 'dd-mm-yyyy') desc
/
Top session activity
Enter READS, EXECS or CPU to order session activity by that column
set lines 500 pages 1000 verif off
col username for a15
col machine for a26
col module for a30
col logon_time for a20
col program for a30
col killer for a12
col osuser for a10
prompt Enter CPU, READS or EXECS
prompt (Press Enter for CPU default)
select nvl(a.username, '(oracle)') as username
, a.osuser
, a.sid||','||a.serial# killer
, c.value as value_&&TYPE
, a.lockwait
, a.status
, a.module
, a.machine
, a.program
, to_char(a.logon_time,'dd-mon-yyyy hh24:mi:ss') as logon_time
from v$session a
, v$sesstat c
, v$statname d
where 1=1
and a.sid = c.sid
and c.statistic# = d.statistic#
and d.name = decode(upper('&TYPE'), 'READS', 'session logical reads',
'EXECS', 'execute count',
'CPU', 'CPU used by this session',
'CPU used by this session'
)
order by c.value desc
/
undef TYPE
Show current used undo blocks for ongoing transactions
select vs.sid
     , vs.serial#
     , vs.username
     , vs.program
     , vt.used_ublk
     , vt.used_urec
from v$session vs
   , v$transaction vt
where vs.taddr = vt.addr
order by 5 desc, 6 desc
/
Pinpoint which sessions are using lots of undo
select a.sid
     , b.name
     , a.value
from v$sesstat a
   , v$statname b
where a.statistic# = b.statistic#
and b.name = 'undo change vector size'  -- statistic# 176 in some versions, but it varies
order by a.value desc
/
How many blocks have been changed by sessions?
High values indicate a session generating lots of redo.
Use this query to spot programs generating lots of redo: v$sess_io accumulates block changes across all of a session's transactions, so it works even when a program runs more than one transaction.
select vs.sid
     , vs.serial#
     , vs.username
     , vs.program
     , vi.block_changes
from v$session vs
   , v$sess_io vi
where vs.sid = vi.sid
order by 5 desc, 1, 2, 3, 4;
Top SQL
Shows the SQL statements that have caused the most disk reads per execution since the instance was last started
set lines 1000 pages 1000
set verif off
col sql_text for a150
select *
from (
select substr(a.sql_text,1,200) sql_text
, trunc(a.disk_reads/decode(a.executions,0,1,a.executions)) reads_per_execution
, a.buffer_gets
, a.disk_reads
, a.executions
, a.sorts
, a.address
from v$sqlarea a
order by 2 desc
)
where rownum <= 11
/
set lines 80
Top ten SQL statements with the greatest aggregate elapsed time
set lines 1000 pages 1000
col sql_text for a150
SELECT sql_id,child_number,sql_text, elapsed_time
FROM (SELECT sql_id, child_number, sql_text, elapsed_time,
cpu_time,disk_reads,
RANK () OVER (ORDER BY elapsed_time DESC) AS elapsed_rank
FROM v$sql)
WHERE elapsed_rank <= 10
/
set lines 80
See the explain plan of these SQLs by feeding the sql_id and child_no into this cracker!
SELECT * FROM TABLE (DBMS_XPLAN.display_cursor ('&sql_id', &child_no, 'TYPICAL -BYTES'));
Top waits
Displays a list of the events currently being waited on by active sessions.
The meaning of the wait_time and seconds_in_wait columns varies depending on their values, as follows:
- wait_time - A non-zero value represents the session’s last wait time, while a zero value indicates that the session is currently waiting.
- seconds_in_wait - When the wait_time is zero, the seconds_in_wait value represents the seconds spent in the current wait condition.
When the wait_time is greater than zero, the seconds_in_wait value represents the seconds since the start of the last wait,
and (seconds_in_wait - wait_time / 100) is the active seconds since the last wait ended.
set lines 200 pages 1000
col username for a20
col event for a30
col wait_class for a15
select nvl(s.username, '(oracle)') as username
     , s.sid
     , s.serial#
     , sw.event
     , sw.wait_class
     , sw.wait_time
     , sw.seconds_in_wait
     , sw.state
from v$session_wait sw
   , v$session s
where 1=1
and s.sid = sw.sid
order by sw.seconds_in_wait desc
/
Session wait history
Once a session of interest has been identified, we can display the history of events associated with that session
set lines 200 pages 1000
set verif off
col username for a20
col event for a40
select nvl(s.username, '(oracle)') as username
     , s.sid
     , s.serial#
     , se.event
     , se.total_waits
     , se.total_timeouts
     , se.time_waited
     , se.average_wait
     , se.max_wait
     , round(se.time_waited_micro/1000000,2) waited_secs
from v$session_event se
   , v$session s
where 1=1
and s.sid = se.sid
and s.sid = &1
order by se.time_waited desc
/
System waits
select event
     , total_waits
     , total_timeouts
     , time_waited
     , average_wait
     , time_waited_micro
from v$system_event
order by event
/
Oracle Log File Sync Wait Event
Reference: logicalread.solarwinds.com
Snippet:
When a user session waits on the 'log file sync' event, it is actually waiting for the LGWR process to write the log buffer to the redo log file and
return confirmation/control back to it. If the total wait time is significant, review the average wait time. If the average wait time is low but the
number of waits is high, reduce the number of commits by batching (for example, committing after every 'n' rows). If I/O is slow, investigate the following:
# Reduce contention on existing disks.
# Put log files on faster disks.
# Put alternate redo logs on different disks to minimise the effect of archive processes (log file switches).
# Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.
If wait times are still significant, review each component of the 'log file sync' and tune separately.
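A quick sanity check is to compare the average 'log file sync' wait with 'log file parallel write' (the LGWR I/O itself); if the two are close, the problem really is I/O, otherwise look at commit rates and CPU starvation:
select event
     , total_waits
     , round(time_waited_micro/1000000,2) time_waited_secs
     , average_wait
from v$system_event
where event in ('log file sync','log file parallel write');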
EXPLAIN PLAN
Usage (old school)
If you have no access to the system plan_table (or you want to keep plans for longer than a session), create your own by running $ORACLE_HOME/rdbms/admin/utlxplan.sql
explain plan [set statement_id = 'statement_id'] [into table_name ] for sql_statement
See the results
select rtrim(lpad(' ', 2*level) || rtrim(operation) || ' ' || rtrim(options) || ' ' || object_name) query_plan
, cost
, cardinality
from plan_table
connect by prior id = parent_id
start with id = 0
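Alternatively, dbms_xplan will format the same plan_table contents, which is easier to read than the connect-by query (using the statement_id from the explain plan above):
select * from table(dbms_xplan.display('plan_table', '&statement_id', 'TYPICAL'));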
Usage (new school)
1. Explain plan for the most recent SQL statement executed in the session
set pages 0
select * from table(dbms_xplan.display_cursor);
2. Execute an explain plan command on a SELECT statement
explain plan for
select *
from emp e
   , dept d
where d.deptno = e.deptno
and e.ename = 'benoit'
/
set pages 0 lines 150
select * from table(dbms_xplan.display);
3. Explain plan on a previously executed statement
Find the SQL_ID
select sql_id, child_number from v$sql where sql_text like '%<something distinctive to find the SQL statement>%';
Get the explain_plan
select * from table(dbms_xplan.display_cursor('&sql_id', &child_number));
Virtual Indexes
An index created to see if the optimiser would use it without actually having to build it
SQL> ALTER SESSION SET "_use_nosegment_indexes"=TRUE;

Session altered.

SQL> CREATE INDEX sh.sales_vi1 ON sh.sales(quantity_sold) NOSEGMENT;

Index created.
Now re-run the explain plan and see the difference.
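Tidy up afterwards, since the virtual index still exists in the dictionary and the session parameter remains set:
SQL> DROP INDEX sh.sales_vi1;
SQL> ALTER SESSION SET "_use_nosegment_indexes"=FALSE;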
Statspack
If you are using Standard Edition, you cannot use the Grid Control / AWR utilities to debug performance issues. You need to use the "old" method.
http://www.akadia.com/services/ora_statspack_survival_guide.html
Reproduced here in case the page disappears...
Overview
STATSPACK is a performance diagnosis tool, available since Oracle8i. STATSPACK can be considered BSTAT/ESTAT's successor, incorporating many new features. STATSPACK is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. STATSPACK can be used both proactively to monitor the changing load on a system, and also reactively to investigate a performance problem.
Remember to set timed_statistics to true for your instance. Setting this parameter provides timing data which is invaluable for performance tuning.
The «more is better» approach is not always better!
The single most common misuse of STATSPACK is the «more is better» approach. Often STATSPACK reports spans hours or even days. The times between the snapshots (the collection points) should, in general, be measured in minutes, not hours and never days.
The STATSPACK reports we like are from 15-minute intervals during a busy or peak time, when the performance is at its worst. That provides a very focused look at what was going wrong at that exact moment in time. The problem with a very large STATSPACK snapshot window, where the time between the two snapshots is measured in hours, is that the events that caused serious performance issues for 20 minutes during peak processing don't look so bad when they're spread out over an 8-hour window. It's also true with STATSPACK that measuring things over too long of a period tends to level them out over time. Nothing will stand out and strike you as being wrong. So, when taking snapshots, schedule them about 15 to 30 minutes (maximum) apart. You might wait 3 or 4 hours between these two observations, but you should always do them in pairs and within minutes of each other.
«Having a history of the good times is just as important as having a history of the bad; you need both»
Another common mistake with STATSPACK is to gather snapshots only when there is a problem. That is fine to a point, but how much better would it be to have a STATSPACK report from when things were going well to compare with when things are bad. A simple STATSPACK report that shows a tremendous increase in physical I/O activity or table scans (long tables) could help you track down that missing index. Or, if you see your soft parse percentage value went from 99% to 70%, you know that someone introduced a new feature into the system that isn't using bind variables (and is killing you). Having a history of the good times is just as important as having a history of the bad; you need both.
Architecture
To fully understand the STATSPACK architecture, we have to look at the basic nature of the STATSPACK utility. The STATSPACK utility is an outgrowth of the Oracle UTLBSTAT and UTLESTAT utilities, which have been used with Oracle since the very earliest versions.
UTLBSTAT - UTLESTAT
The BSTAT-ESTAT utilities capture information directly from the Oracle's in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the view: V$SYSSTAT;
insert into stats$begin_stats select * from v$sysstat;
insert into stats$end_stats select * from v$sysstat;
STATSPACK
When a snapshot is executed, the STATSPACK software will sample from the RAM in-memory structures inside the SGA and transfer the values into the corresponding STATSPACK tables. These values are then available for comparing with other snapshots.
Note that in most cases, there is a direct correspondence between the v$ view in the SGA and the corresponding STATSPACK table. For example, we see that the stats$sysstat table is similar to the v$sysstat view.
SQL> desc v$sysstat;
Name Null? Type
----------------------------------------- -------- -----------------------
STATISTIC# NUMBER
NAME VARCHAR2(64)
CLASS NUMBER
VALUE NUMBER
STAT_ID NUMBER
SQL> desc stats$sysstat;
Name Null? Type
----------------------------------------- -------- -----------------------
SNAP_ID NOT NULL NUMBER
DBID NOT NULL NUMBER
INSTANCE_NUMBER NOT NULL NUMBER
STATISTIC# NOT NULL NUMBER
NAME NOT NULL VARCHAR2(64)
VALUE NUMBER
It is critical to your understanding of the STATSPACK utility to realize that the information captured by a STATSPACK snapshot is a set of accumulated values. The v$ views start collecting database information at instance startup and continue to accumulate values until the instance is shut down. In order to get a meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots, as shown above. It is also critical to understand that a report will be invalid if the database is shut down between snapshots, because all of the accumulated values are reset, causing the second snapshot to have smaller values than the first.
Installing and Configuring STATSPACK
Create PERFSTAT Tablespace
The STATSPACK utility requires an isolated tablespace to obtain all of the objects and data. For uniformity, it is suggested that the tablespace be called PERFSTAT, the same name as the schema owner for the STATSPACK tables. It is important to closely watch the STATSPACK data to ensure that the stats$sql_summary table is not taking an inordinate amount of space.
SQL> CREATE TABLESPACE perfstat
DATAFILE '/u01/oracle/db/AKI1_perfstat.dbf' SIZE 1000M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
Run the Create Scripts
Now that the tablespace exists, we can begin the installation process of the STATSPACK software. Note that you must have performed the following before attempting to install STATSPACK.
Run catdbsyn.sql as SYS
Run dbmspool.sql as SYS
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> start spcreate.sql
Choose the PERFSTAT user's password
-----------------------------------
Not specifying a password will result in the installation FAILING
Enter value for perfstat_password: perfstat
Choose the Default tablespace for the PERFSTAT user
---------------------------------------------------
Below is the list of online tablespaces in this database which can
store user data. Specifying the SYSTEM tablespace for the user's
default tablespace will result in the installation FAILING, as
using SYSTEM for performance data is not supported.
Choose the PERFSTAT user's default tablespace. This is the tablespace
in which the STATSPACK tables and indexes will be created.
TABLESPACE_NAME CONTENTS STATSPACK DEFAULT TABLESPACE
------------------------------ --------- ----------------------------
PERFSTAT PERMANENT
SYSAUX PERMANENT *
USERS PERMANENT
Pressing <return> will result in STATSPACK's recommended default
tablespace (identified by *) being used.
Enter value for default_tablespace: PERFSTAT
Choose the Temporary tablespace for the PERFSTAT user
-----------------------------------------------------
Below is the list of online tablespaces in this database which can
store temporary data (e.g. for sort workareas). Specifying the SYSTEM
tablespace for the user's temporary tablespace will result in the
installation FAILING, as using SYSTEM for workareas is not supported.
Choose the PERFSTAT user's Temporary tablespace.
TABLESPACE_NAME CONTENTS DB DEFAULT TEMP TABLESPACE
------------------------------ --------- --------------------------
TEMP TEMPORARY *
Pressing <return> will result in the database's default Temporary
tablespace (identified by *) being used.
Enter value for temporary_tablespace: TEMP
.....
.....
Creating Package STATSPACK...
Package created.
No errors.
Creating Package Body STATSPACK...
Package body created.
No errors.
NOTE:
SPCPKG complete. Please check spcpkg.lis for any errors.
Check the Logfiles: spcpkg.lis, spctab.lis, spcusr.lis
Adjusting the STATSPACK Collection Level
STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table.
SQL> SELECT * FROM stats$level_description ORDER BY snap_level;
Level 0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.
Level 5 This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.
Level 6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.
Level 7 This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.
Level 10 This level includes capturing Child Latch statistics, along with all data captured by lower levels.
You can change the default level of a snapshot with the statspack.snap function; passing i_modify_parameter => 'true' changes the level permanently for all future snapshots.
SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true');
Create, View and Delete Snapshots
sqlplus perfstat/perfstat
SQL> exec statspack.snap;
SQL> select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS')
"Date/Time" from stats$snapshot,v$database;
NAME SNAP_ID Date/Time
--------- ---------- -------------------
AKI1 4 14.11.2004:10:56:01
AKI1 1 13.11.2004:08:48:47
AKI1 2 13.11.2004:09:00:01
AKI1 3 13.11.2004:09:01:48
SQL> @?/rdbms/admin/sppurge;
Enter the Lower and Upper Snapshot ID
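Snapshot collection can also be automated; the supplied spauto.sql script schedules an hourly statspack.snap job via dbms_job (edit the script to change the interval):
sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/spauto.sql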
Create the Report
sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/spreport.sql
Statspack at a Glance
What if you have this long STATSPACK report and you want to figure out if everything is running smoothly? Here, we will review what we look for in the report, section by section. We will use an actual STATSPACK report from our own Oracle 10g system.
Statspack Report Header
STATSPACK report for
DB Name DB Id Instance Inst Num Release RAC Host
------------ ----------- ------------ -------- ----------- --- ----------------
AKI1 2006521736 AKI1 1 10.1.0.2.0 NO akira
Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 5 14-Nov-04 11:18:00 15 14.3
End Snap: 6 14-Nov-04 11:33:00 15 10.2
Elapsed: 15.00 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 24M Std Block Size: 4K
Shared Pool Size: 764M Log Buffer: 1,000K
Note that this section may appear slightly different depending on your version of Oracle. For example, the Curs/Sess column, which shows the number of open cursors per session, is new with Oracle9i (an 8i Statspack report would not show this data).
Here, the item we are most interested in is the elapsed time. We want that to be large enough to be meaningful, but small enough to be relevant (15 to 30 minutes is OK). If we use longer times, we begin to lose the needle in the haystack.
Statspack Load Profile
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 425,649.84 16,600,343.64
Logical reads: 1,679.69 65,508.00
Block changes: 2,546.17 99,300.45
Physical reads: 77.81 3,034.55
Physical writes: 78.35 3,055.64
User calls: 0.24 9.55
Parses: 2.90 113.00
Hard parses: 0.16 6.27
Sorts: 0.76 29.82
Logons: 0.01 0.36
Executes: 4.55 177.64
Transactions: 0.03
% Blocks changed per Read: 151.59 Recursive Call %: 99.56
Rollback per transaction %: 0.00 Rows per Sort: 65.61
Here, we are interested in a variety of things, but if we are looking at a "health check", three items are important:
The Hard parses (we want very few of them)
Executes (how many statements we are executing per second / transaction)
Transactions (how many transactions per second we process).
This gives an overall view of the load on the server. In this case, we are looking at a very good hard parse number and a fairly light system load (1 - 4 transactions per second is low).
Statspack Instance Efficiency Percentage
Next, we move onto the Instance Efficiency Percentages section, which includes perhaps the only ratios we look at in any detail:
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 95.39 In-memory Sort %: 100.00
Library Hit %: 99.42 Soft Parse %: 94.45
Execute to Parse %: 36.39 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 59.15 % Non-Parse CPU: 99.31
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 10.28 10.45
% SQL with executions>1: 70.10 71.08
% Memory for SQL w/exec>1: 44.52 44.70
The three most important are Library Hit, Soft Parse % and Execute to Parse. All of these have to do with how well the shared pool is being utilized. Time after time, we find this to be the area of greatest payback, where we can achieve some real gains in performance.
Here, in this report, we are quite pleased with the Library Hit and the Soft Parse % values. If the library Hit ratio was low, it could be indicative of a shared pool that is too small, or just as likely, that the system did not make correct use of bind variables in the application. It would be an indicator to look at issues such as those.
OLTP System
The Soft Parse % value is one of the most important (if not the only important) ratio in the database. For a typical OLTP system, it should be as near to 100% as possible. You quite simply do not hard parse after the database has been up for a while in your typical transactional / general-purpose database. The way you achieve that is with bind variables. In a regular system like this, we are doing many executions per second, and hard parsing is something to be avoided.
Data Warehouse
In a data warehouse, we would like to generally see the Soft Parse ratio lower. We don't necessarily want to use bind variables in a data warehouse. This is because they typically use materialized views, histograms, and other things that are easily thwarted by bind variables. In a data warehouse, we may have many seconds between executions, so hard parsing is not evil; in fact, it is good in those environments.
The moral of this is ...
... to look at these ratios and look at how the system operates. Then, using that knowledge, determine if the ratio is okay given the conditions. If we just said that the execute-to-parse ratio for your system should be 95% or better, that would be unachievable in many web-based systems. If you have a routine that will be executed many times to generate a page, you should definitely parse once per page and execute it over and over, closing the cursor if necessary before your connection is returned to the connection pool.
Statspack Top 5 Timed Events
Moving on, we get to the Top 5 Timed Events section (in Oracle9i Release 2 and later) or Top 5 Wait Events (in Oracle9i Release 1 and earlier).
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Call Time
-------------------------------------------- ------------ ----------- ---------
CPU time 122 91.65
db file sequential read 1,571 2 1.61
db file scattered read 1,174 2 1.59
log file sequential read 342 2 1.39
control file parallel write 450 2 1.39
-------------------------------------------------------------
Wait Events DB/Inst: AKI1/AKI1 Snaps: 5-6
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
This section is among the most important and relevant sections in the Statspack report. Here is where you find out what events (typically wait events) are consuming the most time. In Oracle9i Release 2, this section is renamed and includes a new event: CPU time.
CPU time is not really a wait event (hence, the new name), but rather the sum of the CPU used by this session, or the amount of CPU time used during the snapshot window. In a heavily loaded system, if the CPU time event is the biggest event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan should have been used), which could be the cause of the bottleneck.
Db file sequential read - This wait event indicates single-block reads, typically index access; see the fuller discussion under 'Resolving Your Wait Events' below. (Waits while writing to TEMP space during direct loads and Parallel DML (PDML), such as parallel updates, show up as direct path read/write temp waits instead; tuning the PGA_AGGREGATE_TARGET parameter can reduce those.)
Db file scattered read - Next is the db file scattered read wait value. That generally happens during a full scan of a table. You can use the Statspack report to help identify the query in question and fix it.
SQL ordered by Gets
Here you will find the SQL statements that used the most buffer gets, which are typically also the most CPU-consuming statements.
SQL ordered by Gets DB/Inst: AKI1/AKI1 Snaps: 5-6
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> End Buffer Gets Threshold: 10000 Total Buffer Gets: 720,588
-> Captured SQL accounts for 3.1% of Total Buffer Gets
-> SQL reported below exceeded 1.0% of Total Buffer Gets
CPU Elapsd Old
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
16,926 1 16,926.0 2.3 2.36 3.46 1279400914
Module: SQL*Plus
create table test as select * from all_objects
Tablespace IO Stats
Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TAB 1,643 4 1.0 19.2 16,811 39 0 0.0
UNDO 166 0 0.5 1.0 5,948 14 0 0.0
SYSTEM 813 2 2.5 1.6 167 0 0 0.0
STATSPACK 146 0 0.3 1.1 277 1 0 0.0
SYSAUX 18 0 0.0 1.0 29 0 0 0.0
IDX 18 0 0.0 1.0 18 0 0 0.0
USER 18 0 0.0 1.0 18 0 0 0.0
-------------------------------------------------------------
Rollback Segment Stats
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
managment, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
------ -------------- ------- --------------- -------- -------- --------
0 8.0 0.00 0 0 0 0
1 3,923.0 0.00 14,812,586 15 0 14
2 5,092.0 0.00 19,408,996 19 0 19
3 295.0 0.00 586,760 1 0 0
4 1,312.0 0.00 4,986,920 5 0 5
5 9.0 0.00 0 0 0 0
6 9.0 0.00 0 0 0 0
7 9.0 0.00 0 0 0 0
8 9.0 0.00 0 0 0 0
9 9.0 0.00 0 0 0 0
10 9.0 0.00 0 0 0 0
-------------------------------------------------------------
Rollback Segment Storage
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- --------------- --------------- ---------------
0 364,544 0 364,544
1 17,952,768 8,343,482 17,952,768
2 25,292,800 11,854,857 25,292,800
3 4,321,280 617,292 6,418,432
4 8,515,584 1,566,623 8,515,584
5 126,976 0 126,976
6 126,976 0 126,976
7 126,976 0 126,976
8 126,976 0 126,976
9 126,976 0 126,976
10 126,976 0 126,976
-------------------------------------------------------------
Generate Execution Plan for given SQL statement
If you have identified one or more problematic SQL statements, you may want to check the execution plan. Remember the "Old Hash Value" from the report above (1279400914), then execute the script to generate the execution plan.
sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/sprepsql.sql
Enter the Hash Value, in this example: 1279400914
SQL Text
~~~~~~~~
create table test as select * from all_objects
Known Optimizer Plan(s) for this Old Hash Value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows all known Optimizer Plans for this database instance, and the Snap Id's
they were first found in the shared pool. A Plan Hash Value will appear
multiple times if the cost has changed
-> ordered by Snap Id
First First Plan
Snap Id Snap Time Hash Value Cost
--------- --------------- ------------ ----------
6 14 Nov 04 11:26 1386862634 52
Plans in shared pool between Begin and End Snap Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows the Execution Plans found in the shared pool between the begin and end
snapshots specified. The values for Rows, Bytes and Cost shown below are those
which existed at the time the first-ever snapshot captured this plan - these
values often change over time, and so may not be indicative of current values
-> Rows indicates Cardinality, PHV is Plan Hash Value
-> ordered by Plan Hash Value
--------------------------------------------------------------------------------
| Operation | PHV/Object Name | Rows | Bytes| Cost |
--------------------------------------------------------------------------------
|CREATE TABLE STATEMENT |----- 1386862634 ----| | | 52 |
|LOAD AS SELECT | | | | |
| VIEW | | 1K| 216K| 44 |
| FILTER | | | | |
| HASH JOIN | | 1K| 151K| 38 |
| TABLE ACCESS FULL |USER$ | 29 | 464 | 2 |
| TABLE ACCESS FULL |OBJ$ | 3K| 249K| 35 |
| TABLE ACCESS BY INDEX ROWID |IND$ | 1 | 7 | 2 |
| INDEX UNIQUE SCAN |I_IND1 | 1 | | 1 |
| NESTED LOOPS | | 5 | 115 | 16 |
| INDEX RANGE SCAN |I_OBJAUTH1 | 1 | 10 | 2 |
| FIXED TABLE FULL |X$KZSRO | 5 | 65 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| FIXED TABLE FULL |X$KZSPR | 1 | 26 | 14 |
| VIEW | | 1 | 13 | 2 |
| FAST DUAL | | 1 | | 2 |
--------------------------------------------------------------------------------
Resolving Your Wait Events
The following are 10 of the most common causes for wait events, along with explanations and potential solutions:
1. DB File Scattered Read
This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.
2. DB File Sequential Read
This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL. Check to ensure that index scans are necessary, and check join orders for multiple table joins. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that could cause high wait numbers for sequential reads. They can also show up as direct path read/write waits.
3. Free Buffer
This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the DB_BUFFER_CACHE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks.
4. Buffer Busy
This is a wait for a buffer that is being used in an unshareable way or is being read into the buffer cache. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out if the wait is on a segment header. If this is the case, increase the freelist groups or increase the pctused to pctfree gap.

If the wait is on an undo header, you can address this by adding rollback segments; if it's on an undo block, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. If the wait is on a data block, you can move data to another block to avoid this hot block, increase the freelists on the table, or use Locally Managed Tablespaces (LMTs). If it's on an index block, you should rebuild the index, partition the index, or use a reverse key index.

To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it's not as "hot." When a DML (insert/update/delete) occurs, Oracle Database writes information into the block, including all users who are "interested" in the state of the block (Interested Transaction List, ITL). To decrease waits in this area, you can increase the initrans, which will create the space in the block to allow multiple ITL slots. You can also increase the pctfree on the table where this block exists (this writes the ITL information up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified).
5. Latch Free
Latches are low-level queuing mechanisms (they're accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chain). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue.
6. Enqueue
An enqueue is a lock that protects a shared resource. Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO (first in, first out). Note that Oracle's latching mechanism is not FIFO.

Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue. The ST enqueue is used for space management and allocation for dictionary-managed tablespaces. Use LMTs, or try to preallocate extents or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait.

TX4s are the most common enqueue waits. TX4 enqueue waits are usually the result of one of three issues. The first issue is duplicates in a unique index; you need to commit/rollback to free the enqueue. The second is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third and most likely issue is when multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table.

Finally, TM enqueues occur during DML to prevent DDL to the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue.
7. Log Buffer Space
This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow. To address this problem, increase the size of the log files, or increase the size of the log buffer, or get faster disks to write to. You might even consider using solid-state disks, for their high speed.
8. Log File Switch
All commit requests are waiting for "logfile switch (archiving needed)" or "logfile switch (checkpoint incomplete)". Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if the DBWR is the problem.
9. Log File Sync
When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete. To reduce wait events here, try to commit more records (try to commit a batch of 50 instead of one at a time, for example). Put redo logs on a faster disk, or alternate redo logs on different physical disks, to reduce the archiving effect on LGWR. Don't use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information.
10. Idle Events
There are several idle wait events listed after the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.
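To see exactly which events this installation treats as idle, query the table directly:
SQL> select event from perfstat.stats$idle_event order by event;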
Remove STATSPACK from the Database
After a STATSPACK session, you may want to remove the STATSPACK tables:
sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/spdrop.sql
SQL> DROP TABLESPACE perfstat INCLUDING CONTENTS AND DATAFILES;