====== Performance ======
  * [[https://blogs.oracle.com/sql/improve-sql-query-performance-by-using-bind-variables|Improve SQL Query Performance by Using Bind Variables - All Things SQL]]
  * [[https://dach.tdsynnex.com/blog/ch/wp-content/uploads/sites/4/2023/04/2022.11.15_OracleOnPowerTuningConsiderations_CPU_Memory_PSUG2022.pdf|Oracle Database on IBM Power with AIX Best Practices (Part 1: Memory and CPU)]]
  * [[https://www.linkedin.com/pulse/19-great-things-you-can-do-vactivesessionhistory-part-merav-kedem|19 Great Things You Can Do With V$ACTIVE_SESSION_HISTORY (Part 1)]]
  * [[https://blogs.oracle.com/connect/post/beginning-performance-tuning-trace-your-steps|How to trace SQL sessions to identify Oracle Database bottlenecks - Arup Nanda]]
  * [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=33089.1|TROUBLESHOOTING: Possible Causes of Poor SQL Performance (Doc ID 33089.1)]]
  * [[https://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL95033|Histograms - Database SQL Tuning Guide - oracle.com]]
  * [[https://www.informit.com/articles/article.aspx?p=1400612&seqNum=5|Oracle Performance Tuning: A Methodical Approach - informit.com]]
  * [[https://oracle-base.com/articles/12c/adaptive-plans-12cr1|Adaptive Plans in Oracle Database 12c - oracle-base.com]]
  * [[https://asktom.oracle.com/pls/apex/f?p=100:11:0::::p11_question_id:9422487749968|Why full table scans are not always bad - asktom.com]]
  * [[https://magnusjohanssontuning.wordpress.com/2017/10/09/high-buffer-gets-for-small-delete/|High buffer gets for small delete - Oracle Tuning by Magnus Johansson]]
  * [[https://magnusjohanssontuning.wordpress.com|Oracle Tuning by Magnus Johansson]]
  * [[https://www.dba-career.com/2017/07/awr-addr-and-ash-reports-quick.html|Difference between AWR, ADDM and ASH Reports in Oracle Performance Tuning]]
  * [[http://www.petefinnigan.com/ramblings/how_to_set_trace.htm|How to set trace for others sessions, for your own session and at instance level]]
  * [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=199083.1|Master Note: SQL Query Performance Overview (Doc ID 199083.1)]]
  * [[http://juliandyke.com/Presentations/Presentations.php|Advanced Oracle presentations - Julian Dyke]]
  * [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=443746.1|Metalink AMM Note 443746.1]]
  * [[https://community.oracle.com/thread/858909|CBO not picking correct indexes or doing Full Scans]]
  * [[https://docs.oracle.com/database/121/ADMIN/memory.htm#ADMIN00207|Oracle 12c documentation: Memory Management]]
  * [[http://perfhints.blogspot.com/2009/02/oracle-11g-memory-management-made.html|Memory Management through the versions]]
  * [[http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1137385000346840516|Automatic Memory Management discussion with Tom Kyte]]
  * [[http://www.oracle-base.com/articles/11g/automatic-memory-management-11gr1.php|Automatic Memory Management from ORACLE_BASE]]
  * [[https://books.google.be/books/about/Troubleshooting_Oracle_Performance.html?id=-cjAAwAAQBAJ&printsec=frontcover&source=kp_read_button&redir_esc=y#v=onepage&q&f=false|Troubleshooting Oracle Performance - A methodical approach]]
  * [[https://www.oraclenext.com/2018/03/reading-and-understanding-awr-report.html|Reading and Understanding AWR Report for I/O or Disk latency]]
  * [[http://logicalread.solarwinds.com/oracle-contention-for-shared-pool-and-library-cache-latches-mc01/#.Vj80XJRNS9j|How Parsing in Oracle Causes Contention for Shared Pool and Library Cache Latches]]
==== Scripts ====
  * [[https://dbaclass.com/monitor-your-db/|Monitoring scripts from dbaclass.com]]
  * [[https://github.com/freddenis/oracle-scripts|Collection of scripts by Fred Denis, the unknowndba.blogspot.com]]
  * [[Snippets]]
  * [[Handy Scripts]]
  * [[Tuning]]
  * [[http://juliandyke.com/Presentations/Presentations.php#AdvancedDiagnosticsRevisited|juliandyke.com - Advanced Diagnostics (including oradebug 10046 10053 etc...)]]
  * [[https://www.informit.com/articles/article.aspx?p=1400612&seqNum=5|Oracle Performance Tuning: A Methodical Approach - Stage 3: Reducing Physical IO]]
  * [[https://renenyffenegger.ch/notes/development/databases/Oracle/installed/packages/dbms/xplan/api/display/cursor/index|Gather plan stats in a hint and show with DBMS_XPLAN.DISPLAY_CURSOR]]

==== Why was my query fast yesterday and super slow today? ====
  * [[https://mikesmithers.wordpress.com/2022/11/21/flipping-plans-retrieving-past-execution-plans-from-awr/|Execution plan has changed for a query. How to find the old and the new]]

Maybe the stats are stale, maybe something happened to global temporary tables, maybe something else changed, but the query that was super fast yesterday is now running like a pig. It could be that the execution plan has changed.
I'm not going to rehash everything said by the Anti-Kyte, so I'll just put the main points here in case his article is removed.
<code>
show parameter awr

NAME                        TYPE    VALUE
--------------------------- ------- -----
awr_pdb_autoflush_enabled   boolean FALSE
awr_pdb_max_parallel_slaves integer 10
awr_snapshot_time_offset    integer 0
</code>
Enable autoflush
<code>
alter system set awr_pdb_autoflush_enabled = true;
</code>

Check and adjust the snapshot interval
<code>
select * from cdb_hist_wr_control;
</code>
<code>
begin
    dbms_workload_repository.modify_snapshot_settings(
        interval => 10,
        topnsql => 'MAXIMUM');
end;
/
</code>
Make a big table
<code>
create table chunky as select * from dba_objects;

begin
    for i in 1..100 loop
        insert into chunky
        select * from dba_objects;
        -- commit after each iteration as we're a bit tight
        -- on resources
        commit;
    end loop;
end;
/

create index chunky_owner on chunky(owner);

exec dbms_stats.gather_table_stats(user, 'CHUNKY');
</code>

Start a new snapshot
<code>
exec dbms_workload_repository.create_snapshot;
</code>
Find the sql_id in v$sql
<code>
select sql_id, sql_text
from v$sql
where sql_text like 'select%/*%slide 1%*/%'
and sql_text not like '%v$sql%'
/
</code>
Create another snapshot and ensure a different plan is generated by hiding the index
<code>
exec dbms_workload_repository.create_snapshot;
alter index chunky_owner invisible;
</code>
Rerun the query; the two executions should then appear in different snapshots.

Check the AWR history tables to see whether more than one plan was captured
<code>
select
    snap.snap_id,
    snap.instance_number,
    begin_interval_time,
    sql_id,
    plan_hash_value,
    nvl(executions_delta,0) execs,
    (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
    (buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio
from dba_hist_sqlstat stat, dba_hist_snapshot snap
where sql_id = '&sql_id.'
and snap.snap_id = stat.snap_id
and snap.instance_number = stat.instance_number
and executions_delta > 0
order by 3
/
</code>
List the plans
<code>
select *
from dba_hist_sql_plan
where sql_id = '&sql_id.'
/
</code>
or
<code>
select *
from   table ( dbms_xplan.display_awr ( sql_id          => '&sql_id.'
                                      , plan_hash_value => &plan_hash_id
                                      )
             );
</code>

==== Top Program-Module-Action ====
From [[https://www.linkedin.com/pulse/19-great-things-you-can-do-vactivesessionhistory-part-merav-kedem|19 Great Things You Can Do With V$ACTIVE_SESSION_HISTORY (Part 1)]]
<code>
set lines 200 pages 100
col program for a42
col module  for a42
col action  for a42
select program
,      module
,      action
,      count(*) cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
from   v$active_session_history
where  1=1
and    sample_time > sysdate-1/24
group  by program
,      module
,      action
order  by count(*) desc
fetch  first 50 rows only
/
</code>

==== Top user activity ====
<code>
set lines 200 pages 100
col username for a20
select du.username
,      count(*) cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
from   v$active_session_history vash
join   dba_users du on vash.user_id = du.user_id
where  1=1
and    vash.sample_time > sysdate-1/24
group  by du.username
order  by count(*) desc
fetch  first 20 rows only
/
</code>
==== Top Events ====
<code>
set lines 200 pages 100
col wait_class for a20
select nvl(event,'CPU')      event
,      nvl(wait_class,'CPU') wait_class
,      count(*)              cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
from   v$active_session_history
where  1=1
and    sample_time > sysdate-1/24
group  by event
,      wait_class
order  by count(*) desc
fetch  first 20 rows only
/
</code>

==== Top SQL ====
<code>
set lines 200 pages 100
col sql_opname for a20
select vash.sql_id
,      vash.sql_opname
,      count(*) cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
,      vs.sql_text
from   v$active_session_history vash
left join v$sqlarea vs on vash.sql_id = vs.sql_id
where  1=1
and    sample_time > sysdate-10/60/24
and    vash.sql_id is not null
group  by vash.sql_id
,      vash.sql_opname
,      vs.sql_text
order  by count(*) desc
fetch  first 20 rows only
/
</code>

==== Top accessed objects ====
<code>
set lines 200 pages 100
col owner       for a32
col object_name for a32
select vash.current_obj# object_id
,      do.owner
,      do.object_name
,      do.object_type
,      count(*) cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
from   v$active_session_history vash
left join dba_objects do on vash.current_obj# = do.object_id
where  sample_time > sysdate-1/24
and    current_obj# > 0
group  by vash.current_obj#
,      do.owner
,      do.object_name
,      do.object_type
order  by count(*) desc
fetch  first 30 rows only
/
</code>

==== How much hard parsing ====
<code>
set lines 200 pages 100
col in_parse for a9
col in_hard_parse for a14
select in_parse
,      in_hard_parse
,      count(*) cnt
,      100*trunc(ratio_to_report(count(*)) over (),4) "%"
from   v$active_session_history
where  1=1
and    sample_time > sysdate-30/60/24
group  by in_parse
,      in_hard_parse
order  by count(*) desc
/
</code>

==== Everything the database did in the past n minutes ====
<code>
select vash.sample_time
,      vash.sql_id
,      vash.top_level_sql_id
,      nvl(vash.event,'CPU') event
,      in_parse
,      in_hard_parse
,      vash.force_matching_signature
,      t.exact_matching_signature
,      t.sql_text
from   v$active_session_history vash
left join v$sqlarea t on vash.sql_id=t.sql_id
where  1=1
and    vash.top_level_sql_id is not null
--and    vash.top_level_sql_id='c6xbvn0rt49n5' --in case you know the id
and    sample_time > sysdate-(&minutes_back./60/24)
order  by sample_time desc
/
</code>

==== Create a snapshot/dump of the current state of the database ====
Ref [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=1594905.1|Note 1594905.1]]
  * No connections possible to the database except as sysdba.
  * Nothing suspicious in the alert log.
  * Filesystems not full.
Before restarting, create hang analysis and systemstate trace files capturing the current state so that Oracle Support can have a look and see what is wrong.
Use the -prelim flag if even sysdba connections are not possible.
  * Collect Hang Analysis 1
<code>
sqlplus [-prelim] / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug hanganalyze 3
</code>
Wait 1 minute...
  * Collect Hang Analysis 2
<code>
sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug tracefile_name
oradebug hanganalyze 3
</code>
  * Collect Systemstate dump
<code>
sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug dump systemstate 258
-- or
oradebug dump systemstate 266
oradebug tracefile_name
</code>

  * Generate and check the ADDM report, implement findings, re-test
<code>
SQL> @?/rdbms/admin/addmrpt.sql
</code>
==== Explain plan for a recently executed statement ====
This example is from [[https://renenyffenegger.ch/notes/development/databases/Oracle/installed/packages/dbms/xplan/api/display/cursor/index|René Nyffenegger]], showing how to do it for a SQL statement in the current session, but it should work for any SQL.

To collect execution statistics for the SQL plan, the gather_plan_statistics hint is used
<code>
select
--+  gather_plan_statistics
--   find-this-statement-01
     count(*)       cnt
from
     dba_objects    a,
     dba_objects    b
where
     a.created     > b.created      and
     a.object_name > b.object_name
/
</code>
Find the SQL_ID and child number of the executed SQL statement
<code>
select
   sql_id,
   child_number,
   sql_text
from
   v$sql
where
   sql_text     like '%find-this-statement-01%' and
   sql_text not like '%v$sql%';
</code>
With the sql_id and child number from the previous statement, we can now execute dbms_xplan.display_cursor
<code>
select
   *
from
   table(dbms_xplan.display_cursor(
           sql_id          => '1cb036z54s4hu',
           cursor_child_no =>  0,
           format          => 'ALLSTATS LAST'
        ));
</code>
The data formatted by dbms_xplan.display_cursor comes from v$sql_plan_statistics_all, v$sql, v$sql_plan and v$sql_shard.

==== Trace all SQL in the current session, sending output to a tracefile (.trc) ====
<code>
set lines 1000 pages 0
col stmt for a500
col sid new_value sid
col ser new_value ser

select sid     sid
,      serial# ser
from   sys.v_$session
where  sid in (select distinct sid from sys.v_$mystat)
/

begin
  dbms_monitor.session_trace_enable (
    session_id => &sid,
    serial_num => &ser,
    waits      => true,
    binds      => true,
    plan_stat  => 'all_executions');
end;
/

select vr.value                                 ||'/diag/rdbms/'||
--     sys_context('USERENV','DB_NAME')         ||'/'||
--     sys_context('USERENV','INSTANCE_NAME')   ||'/trace/'||
--     sys_context('USERENV','DB_NAME')         ||'_ora_'||vp.spid||'.trc' tracefile_name
       vd.db_unique_name ||'/'||
       vi.instance_name  ||'/trace/'||
       vd.db_unique_name ||'_ora_'||vp.spid||'.trc' tracefile_name
from   v$session   vs
,      v$parameter vr
,      v$process   vp
,      v$database  vd
,      v$instance  vi
where  vr.name = 'diagnostic_dest'
and    vs.sid  = &sid
and    vp.addr = vs.paddr
/
</code>

==== SQLTXPLAIN ====
SQLTXPLAIN (SQLT) is a tool that helps diagnose SQL statements performing poorly: [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=1614107.1|Note 1614107.1]]
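Typical SQLT usage is roughly as sketched below, from memory; treat the script names and paths as assumptions and check Note 1614107.1 for the authoritative steps.
<code>
rem install SQLT (as SYS) after unzipping sqlt.zip; this creates the SQLTXPLAIN schema
@sqlt/install/sqcreate.sql

rem then diagnose a statement by SQL_ID with the XTRACT method
@sqlt/run/sqltxtract.sql <sql_id> <sqltxplain_password>
</code>
The output is a zip of reports (plans, statistics, metadata) that can be attached to a service request.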
==== Trace an SQL query ====
Useful additional information: [[https://oracleyogi.wordpress.com/2015/12/26/10053-trace-demystified/|10053 trace demystified]]
<code>
alter session set tracefile_identifier='10053_&your_ref';
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10053 trace name context forever, level 1';

explain plan for select count('x') from dba_segments;

alter session set events '10053 trace name context off';
</code>

<code>
alter session set tracefile_identifier='10046_&your_ref';
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever,level 12';

rem execute the query for 30 minutes, then stop it
select count('x') from dba_segments;

select 'Verify Close' from dual;
alter session set events '10046 trace name context off';
exit;
</code>
Then find the .trc files in the trace directory, probably in
<code>
ls -altr $ORACLE_BASE/diag/rdbms/${ORACLE_SID}/${ORACLE_SID}/trace/|egrep "(10046|10053)"
</code>
==== Set the sql_trace event for a specific SQL ID ====
<code>
alter system set events 'sql_trace [sql:&1] bind=&2, wait=&3'
/
</code>
With this event set, the statement is traced every time that SQL ID is executed. Supplying TRUE for binds and waits means they will be shown in the tracefiles as well. To disable the event, run
<code>
alter system set events 'sql_trace [sql:&1] off'
/
</code>
tkprof the tracefile
<code>
tkprof trace.trc /tmp/trace.trc.tkprof sys=y
</code>
sys=y means the output will include all SQL executed, including what is called recursive SQL: SQL executed internally on behalf of the traced SQL.

==== How to Diagnose Slow TNS Listener / Connection Performance ====
Ref: [[https://orasg.wordpress.com/2015/10/31/how-to-diagnose-slow-tns-listener-connection-performance/|How to Diagnose Slow TNS Listener / Connection Performance - Hanh Nguyen]]
  * [[Listener]]

==== Tracing the Cost Based Optimiser (CBO) ====
This is the cost-based optimizer trace. It tells you why the CBO chose a particular execution plan, and goes into considerable detail on the hard parsing process.

This is the syntax to start and stop the trace. The trace files are created in the BACKGROUND_DUMP_DEST location.
<code>
alter session set events '10053 trace name context forever';
</code>
Run the SQL here
<code>
alter session set events '10053 trace name context off';
</code>
==== Get the total amount of memory currently in use by databases on the server ====
Using a local helper script (all_db_do) that runs the statement against every database on the host:
<code>
./all_db_do "select instance_name,round((select max(p.pga_max_mem)/1024/1024 mem_used from v\$process p)+(select sum(value)/1024/1024 from v\$sga)) total_mem_size_in_mb from v\$instance;"
</code>
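For a single instance, without the wrapper script (so the v$ views need no shell escaping), the underlying query is:
<code>
select instance_name
       -- largest process PGA plus total SGA, in MB
,      round((select max(p.pga_max_mem)/1024/1024 from v$process p)
           + (select sum(value)/1024/1024 from v$sga)) total_mem_size_in_mb
from   v$instance;
</code>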
==== Tanel Poder's session snapper ====
  * [[http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper|e2sn.com]]
==== More from Tanel Poder ====
  * [[https://blog.tanelpoder.com/2008/08/07/the-simplest-query-for-checking-whats-happening-in-a-database/|The simplest query for checking what's happening in a database]]
  * [[https://github.com/tanelpoder/tpt-oracle/|Tanel Poder scripts on Github]]

==== Set trace in another session using DBMS_SYSTEM ====
First, let's set trace in SCOTT's session (sid 10, serial# 20 in this example) using the DBMS_SYSTEM package. Before we do, let's turn on timed statistics so that the trace files get timing info, and also set the dump file size so that there is plenty of room for the trace being generated.
<code>
exec dbms_system.set_bool_param_in_session(10,20,'timed_statistics',true);
exec dbms_system.set_int_param_in_session(10,20,'max_dump_file_size',2147483647);
exec dbms_system.set_sql_trace_in_session(10,20,true);
exec dbms_system.set_sql_trace_in_session(10,20,false);
</code>
A second way to set trace in another session - this time setting the trace level as well\\
Events and trace levels:
  * Level 0 = No statistics generated
  * Level 1 = Standard trace output including parsing, executes and fetches plus more
  * Level 2 = Same as level 1
  * Level 4 = Same as level 1 but includes bind information
  * Level 8 = Same as level 1 but includes wait information
  * Level 12 = Same as level 1 but includes binds and waits
Turn on tracing
<code>
exec dbms_system.set_ev(10,20,10046,8,'');
</code>
Turn off tracing
<code>
exec dbms_system.set_ev(10,20,10046,0,'');
</code>
Yet another way - with the dbms_support package\\
If not installed, do so with @?/rdbms/admin/dbmssupp.sql
<code>
exec dbms_support.start_trace_in_session(10,20,waits=>true,binds=>false);
exec dbms_support.stop_trace_in_session(10,20);
</code>

==== Shared pool loaded objects ====
<code>
set lines 1000 pages 100
col owner for a20
col name for a30
col type for a12
select owner
,      name
,      type
,      round(sharable_mem/1024,2) size_kb
,      loads
,      kept
,      executions
,      locks
,      pins
from   v$db_object_cache
where  type in ('PROCEDURE', 'PACKAGE BODY', 'PACKAGE', 'FUNCTION', 'TRIGGER', 'VIEW')
order  by sharable_mem desc
/
</code>
==== See the state of the SGA (which components are shrinking and growing) ====
<code>
set lines 1000 pages 100
col component  for a25
col oper_type  for a15
col oper_mode  for a15
col parameter  for a25
col status     for a10
col start_time for a18
col end_time   for a18

select component
,      oper_type
,      oper_mode
,      parameter
,      initial_size/1024/1024 init_MB
,      target_size/1024/1024 target_MB
,      final_size/1024/1024 final_MB
,      status
,      start_time
,      end_time
from   v$sga_resize_ops
/
set lines 80
</code>
On a system in trouble, errors will be seen when trying to grow a component...
<code>
COMPONENT                      OPER_TYPE     OPER_MODE PARAMETER               INIT_MB  TARGET_MB   FINAL_MB STATUS    START_TIME         END_TIME
------------------------------ ------------- --------- -------------------- ---------- ---------- ---------- --------- ------------------ ------------------
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:44 01-MAY-19 05:56:44
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
DEFAULT buffer cache           SHRINK        IMMEDIATE db_cache_size               224        208        224 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
shared pool                    GROW          IMMEDIATE shared_pool_size            928        944        928 ERROR     01-MAY-19 05:56:45 01-MAY-19 05:56:45
</code>
On a freshly started system everything is good...
<code>
COMPONENT                 OPER_TYPE     OPER_MODE PARAMETER                    INIT_MB  TARGET_MB   FINAL_MB STATUS    START_TIME         END_TIME
------------------------- ------------- --------- ------------------------- ---------- ---------- ---------- --------- ------------------ ------------------
shared pool               STATIC                  shared_pool_size                         1024       1024 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
In-Memory Area            STATIC                  inmemory_size                      0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
large pool                STATIC                  large_pool_size                    0       1056       1056 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
RECYCLE buffer cache      STATIC                  db_recycle_cache_size              0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
KEEP buffer cache         STATIC                  db_keep_cache_size                          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
java pool                 STATIC                  java_pool_size                             48         48 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
streams pool              STATIC                  streams_pool_size                  0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT buffer cache      STATIC                  db_cache_size                      0        928        928 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
ASM Buffer Cache          STATIC                  db_cache_size                      0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT buffer cache      INITIALIZING            db_cache_size                    928        928        928 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT 2K buffer cache   STATIC                  db_2k_cache_size                            0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT 4K buffer cache   STATIC                  db_4k_cache_size                            0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT 8K buffer cache   STATIC                  db_8k_cache_size                            0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT 16K buffer cache  STATIC                  db_16k_cache_size                  0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT 32K buffer cache  STATIC                  db_32k_cache_size                  0          0          0 COMPLETE  01-MAY-19 12:04:40 01-MAY-19 12:04:40
DEFAULT buffer cache      SHRINK        IMMEDIATE db_cache_size                    928        864        864 COMPLETE  01-MAY-19 12:04:52 01-MAY-19 12:04:52
DEFAULT buffer cache      GROW          DEFERRED  db_cache_size                    864        944        944 COMPLETE  01-MAY-19 12:05:24 01-MAY-19 12:05:24
large pool                SHRINK        DEFERRED  large_pool_size                 1056        976        976 COMPLETE  01-MAY-19 12:05:24 01-MAY-19 12:05:24
</code>
 + 
 +==== How to create an ADDM report quickly ==== 
 +From [[http://www.runningoracle.com/product_info.php?cPath=1_67|Running Oracle - AWR]] 
 +<code> 
 +set lines 1000 pages 100 
 +select a.execution_end, b.type, b.impact, d.rank, d.type,  
 +'Message           : '||b.message MESSAGE, 
 +'Command To correct: '||c.command COMMAND, 
 +'Action Message    : '||c.message ACTION_MESSAGE 
 +From dba_advisor_tasks a, dba_advisor_findings b, 
 +     Dba_advisor_actions c, dba_advisor_recommendations d 
 +Where a.owner=b.owner and a.task_id=b.task_id 
 +And b.task_id=d.task_id and b.finding_id=d.finding_id 
 +And a.task_id=c.task_id and d.rec_id=c.rec_Id 
 +And a.task_name like 'ADDM%' and a.status='COMPLETED'  
 +and a.execution_end > sysdate - 1 
 +Order by a.execution_end desc,  rank desc; 
 +</code> 
 +==== See how evenly the physical reads and writes are spread over the datafiles ==== 
 +From [[http://www.runningoracle.com/product_info.php?cPath=1_68&products_id=85|runningoracle.com - Datafiles Disk I/O]] 
 +The physical design of the database affects disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique for minimizing disk contention for I/O.
 +<code> 
 +set lines 1000 
 +col perc_writes for a12 
 +col perc_reads  for a12 
 +col name        for a55 
 +select name 
 +,      phyrds physical_reads 
 +,      round((ratio_to_report(phyrds) over ())*100, 2)|| '%' perc_reads 
 +,      phywrts physical_writes 
 +,      round((ratio_to_report(phywrts) over ())*100, 2)|| '%' perc_writes 
 +,      phyrds + phywrts total 
 +from   v$datafile df 
 +,      v$filestat fs 
 +where  df.file# = fs.file# 
 +order  by phyrds desc 
 +
 +</code> 
 +==== How I/O is spread per filesystem ==== 
 +<code> 
 +select filesystem 
 +,      round((ratio_to_report(reads) over ())*100, 2) || '%' perc_reads 
 +,      round((ratio_to_report(writes) over ())*100, 2) || '%' perc_writes 
 +,      round((ratio_to_report(total) over ())*100, 2) || '%' perc_total 
 +from   ( 
 +       select filesystem 
 +       ,      sum(physical_reads)  reads 
 +       ,      sum(physical_writes) writes 
 +       ,      sum(total)           total 
 +       from   ( 
 +              select substr(name, 0, 25)                                    filesystem 
 +              ,      phyrds                                                 physical_reads 
 +              ,      round((ratio_to_report(phyrds) over ())*100, 2)|| '%'  perc_reads 
 +              ,      phywrts                                                physical_writes 
 +              ,      round((ratio_to_report(phywrts) over ())*100, 2)|| '%' perc_writes 
 +              ,      phyrds + phywrts                                       total 
 +              from   v$datafile df 
 +              ,      v$filestat fs 
 +              where  df.file# = fs.file# 
 +              order  by phyrds desc 
 +              ) a 
 +       group  by filesystem 
 +       ) b 
 +order  by round((ratio_to_report(total) over ())*100, 2) desc; 
 +</code> 
 +==== How I/O is spread for the datafiles of a specific tablespace ==== 
 +<code> 
 +select df.name 
 +,      phyrds physical_reads 
 +,      round((ratio_to_report(phyrds) over ())*100, 2)|| '%'  perc_reads 
 +,      phywrts physical_writes 
 +,      round((ratio_to_report(phywrts) over ())*100, 2)|| '%' perc_writes 
 +,      phyrds + phywrts total 
 +from   v$datafile df 
 +,      v$filestat fs 
 +,      ts$ t 
 +where  df.file# = fs.file# 
 +and    df.ts# = t.ts# 
 +and    t.name = 'tablespace_name' 
 +order  by phyrds desc; 
 +</code> 
 +==== Spotting I/O intensive SQL statements ==== 
 +From [[https://blog.rackspace.com/tuning-oracles-buffer-cache|rackspace.com - tuning oracles buffer cache]]
 +<code> 
 +select * 
 +from   ( 
 +       select executions 
 +       ,      buffer_gets 
 +       ,      disk_reads 
 +       ,      first_load_time 
 +       ,      sql_text 
 +       from   v$sqlarea 
 +       order  by disk_reads desc 
 +       ) 
 +where  rownum < 11 
 +
 +</code> 
 +==== Enable and disable session tracing using PL/SQL package ==== 
 +See also tracing a particular session_id in the [[Datapump]] section.
 +<code> 
 +exec dbms_monitor.session_trace_enable; 
 +exec dbms_monitor.session_trace_disable; 
 + 
 +exec dbms_monitor.database_trace_enable; 
 +exec dbms_monitor.database_trace_disable; 
 + 
 +exec dbms_monitor.client_id_trace_enable; 
 +exec dbms_monitor.client_id_trace_disable; 
 + 
 +exec dbms_monitor.serv_mod_act_trace_enable; 
 +exec dbms_monitor.serv_mod_act_trace_disable; 
 + 
 +</code> 
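 +A hedged example of targeted tracing (the sid/serial# values below are placeholders - look them up in v$session first), followed by finding the trace file via v$diag_info:
 +<code>
 +-- trace a specific session, including wait events and bind values
 +exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => true, binds => true);
 +
 +-- ...reproduce the problem, then switch tracing off again
 +exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
 +
 +-- locate the trace file for the current session
 +select value from v$diag_info where name = 'Default Trace File';
 +</code>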
 +==== Show database resource limits and current usage ==== 
 +<code> 
 +set lines 1000 pages 100 
 +select resource_name 
 +,      current_utilization 
 +,      max_utilization 
 +,      limit_value 
 +from   v$resource_limit 
 +
 +</code> 
 +==== Show long-running SQL queries / statements ====
 Long running means more than 10 minutes if last_call_et > 600 (adjust as necessary!)
 +<code>
 +set pages 1000 headi on underline off lines 1000 
 +col username for a15 
 +col program  for a20 
 +col osuser   for a15 
 +select distinct machine 
 +,      nvl(s.osuser,'no user') osuser 
 +,      s.username              username 
 +,      s.sid                   sid 
 +,      s.serial#               serial# 
 +,      (s.last_call_et/60)     mins 
 +,      s.last_call_et          secs 
 +,      s.program               program 
 +,      s.sql_id                sql_id 
 +,      q.sql_text              sql_text 
 +from   v$session s 
 +,      v$sql     q 
 +where  s.sql_id     = q.sql_id 
 +and    status       = 'ACTIVE' 
 +and    type        != 'BACKGROUND' 
 +and    last_call_et > 600 
 +order  by sid 
 +,      serial# 
 +
 +</code>
  
 +==== Enable block change tracking (BCT) ====
 +<code>
 +alter database enable block change tracking; 
 +</code>
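 +By default the change tracking file is created in db_create_file_dest. To place it explicitly, the USING FILE clause can be added (the path below is only an example):
 +<code>
 +alter database enable block change tracking using file '/u01/app/oracle/bct/change_tracking.f' reuse;
 +</code>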
  
 +==== Disable block change tracking (BCT) ====
 +<code>
 +alter database disable block change tracking;
 +</code> 
 + 
 +==== Where is the block change tracking file? ==== 
 +<code> 
 +select filename from v$block_change_tracking 
 +
 +</code> 
 + 
 +==== Check if block change tracking (BCT) is enabled ==== 
 +<code> 
 +select status from v$block_change_tracking 
 +
 +</code> 
 +or 
 +<code> 
 +ps -ef | grep [c]twr 
 +</code> 
 +==== Show CPU Usage for Active Sessions ==== 
 +<code> 
 +SET PAGESIZE 60 
 +SET LINESIZE 300 
 +  
 +COLUMN username FORMAT A30 
 +COLUMN sid FORMAT 999,999,999 
 +COLUMN serial# FORMAT 999,999,999 
 +COLUMN "cpu usage (seconds)"  FORMAT 999,999,999.0000 
 +  
 +SELECT 
 +   s.username, 
 +   t.sid, 
 +   s.serial#, 
 +   SUM(VALUE/100) as "cpu usage (seconds)" 
 +FROM 
 +   v$session s, 
 +   v$sesstat t, 
 +   v$statname n 
 +WHERE 
 +   t.STATISTIC# = n.STATISTIC# 
 +AND 
 +   NAME like '%CPU used by this session%' 
 +AND 
 +   t.SID = s.SID 
 +AND 
 +   s.status='ACTIVE' 
 +AND 
 +   s.username is not null 
 +GROUP BY username,t.sid,s.serial# 
 +/
 +</code>
 +==== Show the Bind Variables for a Given sqlid ==== 
 +<code> 
 +SET PAGESIZE 60 
 +SET LINESIZE 300 
 +  
 +COLUMN sql_text FORMAT A120 
 +COLUMN sql_id FORMAT A13 
 +COLUMN bind_name FORMAT A10 
 +COLUMN bind_value FORMAT A26 
 +  
 +SELECT  
 +  sql_id, 
 +  t.sql_text sql_text,   
 +  b.name bind_name, 
 +  b.value_string bind_value  
 +FROM 
 +  v$sql t  
 +JOIN 
 +  v$sql_bind_capture b  using (sql_id) 
 +WHERE 
 +  b.value_string is not null   
 +AND 
 +  sql_id='&sqlid' 
 +
 +</code> 
 + 
 +==== List the Most Resource Hungry SQL Statements ==== 
 +<code> 
 +SET PAGESIZE 60 
 +SET LINESIZE 300 
 +  
 +COLUMN sql_text FORMAT A50 
 +COLUMN reads_per_execution FORMAT 999,999,999 
 +COLUMN buffer_gets FORMAT 999,999,999 
 +COLUMN disk_reads FORMAT 999,999,999 
 +COLUMN executions FORMAT 999,999,999 
 +COLUMN sorts FORMAT 999,999,999 
 +  
 +SELECT * 
 +FROM   (SELECT Substr(a.sql_text,1,50) sql_text, 
 +               Trunc(a.disk_reads/Decode(a.executions,0,1,a.executions)) reads_per_execution,  
 +               a.buffer_gets,  
 +               a.disk_reads,  
 +               a.executions,  
 +               a.sorts, 
 +               a.address, 
 +               a.sql_id 
 +        FROM   v$sqlarea a 
 +        ORDER BY 2 DESC) 
 +WHERE  rownum <= &rows_to_be_displayed 
 +
 +</code> 
 +==== Is stats gathering enabled? ==== 
 +<code> 
 +column client_name format A55 
 +SELECT client_name, status from dba_autotask_operation; 
 +</code> 
 + 
 +==== Check when the statistics advisor tasks ran ==== 
 +This shows the details of various advisor jobs 
 +<code> 
 +col owner_name for a10 
 +col name       for a40 
 +select name 
 +,      ctime 
 +,      how_created 
 +,      owner_name 
 +,      name 
 +from   sys.wri$_adv_tasks 
 +where  1=1 
 +--and    owner_name = 'SYS' 
 +--and    name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK') 
 +order  by 2 
 + 
 +</code> 
 +gives something like... 
 +<code> 
 +NAME                                CTIME                       HOW_CREATED                    OWNER_NAME NAME 
 +----------------------------------- --------------------------- ------------------------------ ---------- ----------------------------------- 
 +SYS_AUTO_SPCADV339041814082021      14-AUG-21 18:04:39          CMD                            SYS        SYS_AUTO_SPCADV339041814082021 
 +SYS_AUTO_SPCADV216002216082021      16-AUG-21 22:00:16          CMD                            SYS        SYS_AUTO_SPCADV216002216082021 
 +ADDM:141033772_1_15823              21-AUG-21 00:00:23          AUTO                           SYS        ADDM:141033772_1_15823 
 +SYS_AUTO_SPCADV522010021082021      21-AUG-21 00:01:23          CMD                            SYS        SYS_AUTO_SPCADV522010021082021 
 +ADDM:141033772_1_15831              21-AUG-21 08:00:41          AUTO                           SYS        ADDM:141033772_1_15831 
 +ADDM:141033772_1_15874              23-AUG-21 03:00:43          AUTO                           SYS        ADDM:141033772_1_15874 
 +ADDM:141033772_1_15880              23-AUG-21 09:01:01          AUTO                           SYS        ADDM:141033772_1_15880 
 +ADDM:141033772_1_15884              23-AUG-21 13:00:09          AUTO                           SYS        ADDM:141033772_1_15884 
 +ADDM:141033772_1_15906              24-AUG-21 11:00:09          AUTO                           SYS        ADDM:141033772_1_15906 
 +ADDM:141033772_1_15922              25-AUG-21 03:00:56          AUTO                           SYS        ADDM:141033772_1_15922 
 +ADDM:141033772_1_15932              25-AUG-21 13:00:20          AUTO                           SYS        ADDM:141033772_1_15932 
 +ADDM:141033772_1_15939              25-AUG-21 20:00:36          AUTO                           SYS        ADDM:141033772_1_15939 
 +SYS_AUTO_SPCADV151060126082021      26-AUG-21 01:06:52          CMD                            SYS        SYS_AUTO_SPCADV151060126082021 
 +ADDM:141033772_1_15946              26-AUG-21 03:00:56          AUTO                           SYS        ADDM:141033772_1_15946 
 +ADDM:141033772_1_15957              26-AUG-21 14:00:22          AUTO                           SYS        ADDM:141033772_1_15957 
 +... 
 +ADDM:141033772_1_16360              12-SEP-21 09:00:53          AUTO                           SYS        ADDM:141033772_1_16360 
 +SYS_AUTO_SPCADV951040912092021      12-SEP-21 09:04:51          CMD                            SYS        SYS_AUTO_SPCADV951040912092021 
 +ADDM:141033772_1_16361              12-SEP-21 10:00:57          AUTO                           SYS        ADDM:141033772_1_16361 
 +SYS_AUTO_SPCADV557041012092021      12-SEP-21 10:04:57          CMD                            SYS        SYS_AUTO_SPCADV557041012092021 
 +ADDM:141033772_1_16362              12-SEP-21 11:00:59          AUTO                           SYS        ADDM:141033772_1_16362 
 +SYS_AUTO_SPCADV155041112092021      12-SEP-21 11:04:55          CMD                            SYS        SYS_AUTO_SPCADV155041112092021 
 +ADDM:141033772_1_16363              12-SEP-21 12:01:01          AUTO                           SYS        ADDM:141033772_1_16363 
 +SYS_AUTO_SPCADV759041212092021      12-SEP-21 12:04:59          CMD                            SYS        SYS_AUTO_SPCADV759041212092021 
 +ADDM:141033772_1_16364              12-SEP-21 13:00:04          AUTO                           SYS        ADDM:141033772_1_16364 
 +SYS_AUTO_SPCADV302051312092021      12-SEP-21 13:05:02          CMD                            SYS        SYS_AUTO_SPCADV302051312092021 
 + 
 +958 rows selected. 
 +</code> 
 + 
 +==== Check when statistics were last gathered ==== 
 +<code> 
 +select owner
 +,      table_name
 +,      tablespace_name
 +,      to_char(last_analyzed,'DD-MON-YY HH24:MI:SS') last_analyzed
 +from   dba_tables
 +where  owner not in ('SYS','SYSTEM')
 +order  by last_analyzed;
 +</code> 
 +==== Gather more accurate statistics ====
 The default value for optimizer_dynamic_sampling is 2.
 +<code>
 +alter system set optimizer_dynamic_sampling = 4;
 +</code>
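 +The same value can be tried at session level first, or per statement via the DYNAMIC_SAMPLING hint, before changing it system-wide (some_table and its alias t below are placeholders):
 +<code>
 +alter session set optimizer_dynamic_sampling = 4;
 +
 +select /*+ dynamic_sampling(t 4) */ count(*)
 +from   some_table t;
 +</code>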
 +==== Check for stale statistics ==== 
 +<code> 
 +set lines 200 pages 100 
 +col table_owner for a15 
 +col table_name  for a40 
 +select m.table_owner 
 +,      m.table_name 
 +,      m.timestamp 
 +,      sum(m.inserts) 
 +,      sum(m.updates) 
 +,      sum(m.deletes) 
 +,      t.num_rows 
 +,      t.last_analyzed 
 +from   sys.dba_tab_modifications m 
 +,      dba_tables t 
 +where  m.table_owner = t.owner 
 +and    m.table_name  = t.table_name 
 +group  by m.table_owner 
 +,      m.table_name 
 +,      m.timestamp 
 +,      t.num_rows 
 +,      t.last_analyzed 
 +order  by 1,2,3 
 +
 +set lines 80 
 + 
 +</code>
 or
 +<code>
 +set lines 200 pages 200 
 +col owner_table    for a30 
 +col partition_name for a20 
 +select dt.owner||'.'||dt.table_name owner_table 
 +,      dtm.partition_name 
 +,      inserts+updates+deletes modified_rows, num_rows, last_analyzed 
 +,      case when num_rows = 0 then null 
 +            else (inserts+updates+deletes) / num_rows * 100 
 +       end percent_modified 
 +from   dba_tab_modifications dtm 
 +join   dba_tables dt 
 +on     dtm.table_name  = dt.table_name 
 +and    dtm.table_owner = dt.owner 
 +order  by 6 desc 
 +
 +</code>
  
 +==== Export dictionary and fixed statistics ====
 +<code> 
 +begin 
 +    dbms_stats.create_stat_table ( stattab => 'dic_stats' 
 +                                 , ownname => 'system' 
 +                                 ); 
 +    dbms_stats.export_dictionary_stats ( stattab => 'dic_stats' 
 +                                       , statown => 'system' 
 +                                       ); 
 + 
 + 
 +    dbms_stats.create_stat_table ( stattab => 'fix_stats' 
 +                                 , ownname => 'system' 
 +                                 ); 
 +    dbms_stats.export_fixed_objects_stats ( stattab => 'fix_stats' 
 +                                          , statown => 'system' 
 +                                          ); 
 +end; 
 +
 +</code> 
 +<code> 
 +expdp userid='system/' dumpfile=rpainse_dic_stats.dmp reuse_dumpfiles=true logfile=rpainse_dic_stats.log tables='DIC_STATS' directory=data_pump_dir 
 +expdp userid='system/' dumpfile=rpainse_fix_stats.dmp reuse_dumpfiles=true logfile=rpainse_fix_stats.log tables='FIX_STATS' directory=data_pump_dir 
 +</code> 
 + 
 +==== Delete existing fixed and dictionary stats and import new ones ==== 
 +This can be done before importing stats from another database - for example, when queries run faster on production and the stats need copying over to dev or acceptance.
 +<code> 
 +begin 
 +    dbms_stats.delete_dictionary_stats; 
 +    dbms_stats.delete_fixed_objects_stats; 
 +end; 
 +
 +</code> 
 +<code> 
 +impdp userid='system/' dumpfile=rpainse_dic_stats.dmp logfile=impdp_rpainse_dic_stats.log directory=data_pump_dir 
 +impdp userid='system/' dumpfile=rpainse_fix_stats.dmp logfile=impdp_rpainse_fix_stats.log directory=data_pump_dir 
 +</code> 
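 +Note that impdp only brings back the staging tables - the stats still have to be loaded from them into the dictionary. A sketch, assuming the staging table names used in the export above:
 +<code>
 +begin
 +    dbms_stats.import_dictionary_stats ( stattab => 'dic_stats'
 +                                       , statown => 'system'
 +                                       );
 +    dbms_stats.import_fixed_objects_stats ( stattab => 'fix_stats'
 +                                          , statown => 'system'
 +                                          );
 +end;
 +/
 +</code>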
 + 
 +==== Copy the statistics from a single query from one database to another ==== 
 +  * [[https://martincarstenbach.wordpress.com/2020/06/09/copying-a-sql-plan-baseline-from-one-database-to-another/|Copying a SQL Plan Baseline from one database to another - martincarstenbach.wordpress.com]] 
 + 
 +=== Pack up the baseline === 
 +<code> 
 +set serveroutput on 
 +prompt prepare a SQL Plan Baseline for transport 
 +prompt 
 +accept v_table_name  prompt 'enter the name of the staging table to be created: '  
 +accept v_table_owner prompt 'enter the schema name where the staging table is to be created: '  
 +accept v_sql_handle  prompt 'which SQL handle should be exported? '  
 +accept v_plan_name prompt 'enter the corresponding plan name: '  
 + 
 +declare 
 +  v_packed_baselines number; 
 +begin 
 +  dbms_spm.create_stgtab_baseline(  
 +    table_name => '&v_table_name',  
 +    table_owner => '&v_table_owner'); 
 + 
 +  v_packed_baselines := dbms_spm.pack_stgtab_baseline( 
 +    table_name => '&v_table_name', 
 +    table_owner => '&v_table_owner', 
 +    sql_handle => '&v_sql_handle', 
 +    plan_name => '&v_plan_name'); 
 +  dbms_output.put_line(v_packed_baselines || ' baselines have been staged in &v_table_owner..&v_table_name'); 
 +end; 
 +/  
 +</code> 
 + 
 +  * Export the table, copy it over and import it into the destination database. 
 + 
 +=== Unpack the baseline === 
 +<code> 
 +var num_unpacked number 
 + 
 +begin 
 +  :num_unpacked := dbms_spm.unpack_stgtab_baseline( 
 +    table_name => '&table_name', 
 +    table_owner => '&table_owner'); 
 +end; 
 +
 + 
 +print :num_unpacked  
 +</code> 
 + 
 +  * Check it is there 
 +<code> 
 +select sql_handle, plan_name, origin, enabled, accepted, fixed from dba_sql_plan_baselines; 
 +</code> 
 + 
 +==== Gather database stats ====
 Some helpful stuff on statistics gathering [[http://dba-tips.blogspot.co.uk/2012/11/all-about-statistics-in-oracle.html|here]]
 +<code>
 +#!/usr/bin/ksh
 +sqlplus / as sysdba << EOSQL
 +exec dbms_stats.gather_database_stats( estimate_percent=>dbms_stats.auto_sample_size, degree => 8 );
 +EOSQL
 +</code> 
 + 
 +==== Setup a job to run gather statistics as a one-off job ==== 
 +<code> 
 +begin 
 +    dbms_scheduler.create_job ( job_name   => '"sys"."gather_fixed_object_stats_one_time"' 
 +                              , job_type   => 'PLSQL_BLOCK' 
 +                              , job_action => 'begin dbms_stats.gather_fixed_object_stats; end;' 
 +                              , start_date => sysdate+7 
 +                              , auto_drop  => true 
 +                              , comments   => 'gather fixed object stats one time' 
 +                              ); 
 +    dbms_scheduler.enable ( name => '"sys"."gather_fixed_object_stats_one_time"' ); 
 +end; 
 +
 +</code> 
 + 
 +==== Gather dictionary and fixed objects statistics ==== 
 +A common problem when certain queries take a long time - a particular culprit is dba_free_space.\\ 
 +In this case also check the dba_recyclebin and purge it or disable it (at least on production) if not required.\\ 
 +The problem stems from queries doing a full table scan on x$ktfbue (and executing gather_fixed_object_stats doesn't help, as this table is specifically excluded from the gather!). The full story can be read [[https://jonathanlewis.wordpress.com/2019/08/08/free-space-3/|here by Jonathan Lewis]]
 +A rejigged query (using hints) was used in the past to speed up dba_free_space. This was provided by Oracle for 12.1.0.2 
 +<code> 
 +create or replace view DBA_FREE_SPACE  
 +(TABLESPACE_NAME, FILE_ID, BLOCK_ID,  
 +BYTES, BLOCKS, RELATIVE_FNO)  
 +as  
 +select ts.name, fi.file#, f.block#,  
 +f.length * ts.blocksize, f.length, f.file#  
 +from sys.ts$ ts, sys.fet$ f, sys.file$ fi  
 +where ts.ts# = f.ts#  
 +and f.ts# = fi.ts#  
 +and f.file# = fi.relfile#  
 +and ts.bitmapped = 0  
 +union all  
 +select /*+ ordered use_nl(f) use_nl(fi) */  
 +ts.name, fi.file#, f.ktfbfebno,  
 +f.ktfbfeblks * ts.blocksize, f.ktfbfeblks, f.ktfbfefno  
 +from sys.ts$ ts, sys.x$ktfbfe f, sys.file$ fi  
 +where ts.ts# = f.ktfbfetsn  
 +and f.ktfbfetsn = fi.ts#  
 +and f.ktfbfefno = fi.relfile#  
 +and ts.bitmapped <> 0 and ts.online$ in (1,4) and ts.contents$ = 0  
 +union all  
 +select /*+ ordered use_nl(u) use_nl(fi) */  
 +ts.name, fi.file#, u.ktfbuebno,  
 +u.ktfbueblks * ts.blocksize, u.ktfbueblks, u.ktfbuefno  
 +from sys.recyclebin$ rb, sys.ts$ ts, sys.x$ktfbue u, sys.file$ fi  
 +where ts.ts# = rb.ts#  
 +and rb.ts# = fi.ts#  
 +and u.ktfbuefno = fi.relfile#  
 +and u.ktfbuesegtsn = rb.ts#  
 +and u.ktfbuesegfno = rb.file#  
 +and u.ktfbuesegbno = rb.block#  
 +and ts.bitmapped <> 0 and ts.online$ in (1,4) and ts.contents$ = 0  
 +union all  
 +select ts.name, fi.file#, u.block#,  
 +u.length * ts.blocksize, u.length, u.file#  
 +from sys.ts$ ts, sys.uet$ u, sys.file$ fi, sys.recyclebin$ rb  
 +where ts.ts# = u.ts#  
 +and u.ts# = fi.ts#  
 +and u.segfile# = fi.relfile#  
 +and u.ts# = rb.ts#  
 +and u.segfile# = rb.file#  
 +and u.segblock# = rb.block#  
 +and ts.bitmapped = 0  
 +/  
 +</code> 
 + 
 +Another table that struggles can be rman_backup_job_details. 
 + 
 +Stats on the SYS X$ tables under the V$ views are not gathered automatically. Furthermore, once gathered, they persist across instance restarts. Having no stats (the defaults are used) is better than having bad stats. Fixed object stats should be gathered while the database is under load, so that the optimiser chooses plans based on a representative workload.\\ 
 +Noticeable performance issues on DBA_FREE_SPACE, V$RMAN_STATUS and V$RMAN_BACKUP_JOB_DETAILS can be fixed by gathering these stats.\\
 +How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects ([[https://support.oracle.com/epmos/faces/DocumentDisplay?id=457926.1|Doc ID 457926.1]])\\ 
 +Fixed Objects Statistics (GATHER_FIXED_OBJECTS_STATS) Considerations ([[https://support.oracle.com/epmos/faces/DocumentDisplay?id=798257.1|Doc ID 798257.1]])\\ 
 +Other relevant MOS notes relating to gathering fixed object stats include:
 + 
 +NOTE:375386.1 - Rman Backup is Very Slow selecting from V$RMAN_STATUS\\ 
 +NOTE:748251.1 - EM Agent DBSNMP Using Up Excessive Temp Space In Database\\ 
 +BUG:7430745 - ORA-1422 DBMS_STATS.GATHER_TABLE_STATS ON X$KTFBUE ON 10.2.0.4\\ 
 +NOTE:1392603.1 - AWR or STATSPACK Snapshot collection extremely slow in 11gR2\\ 
 +BUG:5259025 - THE FIXED TABLE X$KTFBUE HAS NO STATISTICS\\ 
 +BUG:5880432 - QUERYING V$ACCESS CONTENTS ON LATCH: LIBRARY CACHE DRAWBACKS PERFORMANCE\\ 
 +NOTE:247611.1 - Known RMAN Performance Problems\\ 
 +NOTE:373152.1 - Rman Slow Performance on Exit\\ 
 +BUG:5247609 - RMAN SLOW PERFORMANCE DURING REGISTER DATABASE/OPEN RESETLOGS\\ 
 +NOTE:465787.1 - How to: Manage CBO Statistics During an Upgrade from 10g or 9i into 11g/12c\\ 
 +NOTE:743507.1 - How to Benefit from Automatic Maintenance Tasks Following the Removal of the GATHER_STATS_JOB in 11g and Onward?\\ 
 +NOTE:357765.1 - Rman uses a lot Of Temporary Segments ORA-1652: Unable To Extend Temp Segment\\ 
 +NOTE:1637294.1 - Some Fixed Tables(X$) are Missing CBO Statistics\\ 
 +NOTE:1355608.1 - Why Do Some Fixed Tables Not Have Optimizer Statistics?\\ 
 +NOTE:294346.1 - Running GATHER_FIXED_OBJECTS_STATS is very slow\\ 
 + 
 +The current stats can be exported first in case the results of gathering are worse! 
 +<code> 
 +exec dbms_stats.drop_stat_table('system','fixed_stats_table'); 
 +exec dbms_stats.create_stat_table('system','fixed_stats_table'); 
 +exec dbms_stats.export_fixed_objects_stats ( statown => 'system', stattab => 'fixed_stats_table' ); 
 + 
 +expdp userid=system/****** tables=system.fixed_stats_table dumpfile=20200120_EBST_fixed_stats_table.dmp logfile=expdp_20200120_EBST_fixed_stats_table.log 
 +</code> 
 +<code> 
 +exec dbms_stats.gather_dictionary_stats; 
 +</code> 
 +<code> 
 +exec dbms_stats.gather_fixed_objects_stats; 
 +</code> 
 + 
 +==== Stats are not gathered on fixed table X$KTFBUE ==== 
 +From a study by Jonathan Lewis... [[https://jonathanlewis.wordpress.com/2019/08/08/free-space-3/]]
 +... 
 +Part of the problem, of course, is that x$ktfbue is one of the objects that Oracle skips when you gather “fixed object” stats – it can be a bit expensive for exactly the reason that querying it can be expensive, all those single block segment header reads. 
 +... 
 +<code> 
 +select table_name, num_rows, avg_row_len, sample_size, last_analyzed 
 +from   dba_tab_statistics 
 +where  owner = 'SYS' 
 +and    table_name = 'X$KTFBUE' 
 +
 +  
 +begin 
 +    dbms_stats.gather_table_stats('SYS','X$KTFBUE'); 
 +end; 
 +
 +</code> 
 + 
 +==== Compress indexes to take up less space in the buffer cache ==== 
 +<code> 
 +select blocks from dba_segments where segment_name = '&INDEX_NAME'; 
 +</code> 
 +<code> 
 +select index_name, compression from dba_indexes where index_name = '&INDEX_NAME'; 
 +</code> 
 +<code> 
 +alter index &INDEX_NAME rebuild compress advanced high; 
 +</code> 
 + 
 +==== Gather schema statistics ==== 
 +<code> 
 +begin 
 +dbms_stats.gather_schema_stats ( ownname          => 'FEUK_SCR' 
 +                               , options          => 'gather auto' 
 +                               , estimate_percent => 100 
 +                               , degree           => 6 
 +                               , method_opt       => 'for all indexed columns' 
 +                               , cascade          => true 
 +                               ); 
 +end; 
 +
 +</code> 
 + 
 +==== Gather table statistics ==== 
 +<code> 
 +begin
 +    dbms_stats.gather_table_stats ( ownname          => 'FEUK_PRF'
 +                                  , tabname          => 'TD_NOTE'
 +                                  , estimate_percent => 20
 +                                  , cascade          => true
 +                                  , degree           => 6
 +                                  , method_opt       => 'FOR ALL COLUMNS SIZE 100'
 +                                  );
 +end;
 +/
 +</code> 
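 +Before hard-coding values such as estimate_percent and method_opt, it can be worth checking the current DBMS_STATS defaults with dbms_stats.get_prefs:
 +<code>
 +select dbms_stats.get_prefs('METHOD_OPT')       method_opt
 +,      dbms_stats.get_prefs('ESTIMATE_PERCENT') estimate_percent
 +,      dbms_stats.get_prefs('DEGREE')           degree
 +from   dual;
 +</code>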
 +==== Show any events set in database ==== 
 +<code> 
 +set serveroutput on 
 +declare 
 +event_level number; 
 +begin 
 +    dbms_output.enable(null); 
 +    dbms_output.put_line ('Event list'); 
 +    dbms_output.put_line ('----------'); 
 +    for i in 10000..10999 loop 
 +    sys.dbms_system.read_ev(i,event_level); 
 +    if (event_level > 0) then 
 +        dbms_output.put_line('Event '||to_char(i)||' set at level '|| to_char(event_level)); 
 +    end if; 
 +    end loop; 
 +end; 
 +
 +</code> 
 + 
 +==== Show all Oracle hidden parameters ==== 
 +<code> 
 +set pages 100 lines 300 
 +col ksppinm  for a50 
 +col ksppstvl for a50 
 + 
 +select ksppinm 
 +,      ksppstvl 
 +from   x$ksppi  a 
 +,      x$ksppsv b 
 +where  1=1 
 +and    a.indx=b.indx 
 +and    substr(ksppinm,1,1) = '_' 
 +order  by ksppinm 
 +
 +</code> 
 +==== How Can we Run SQL Tuning Advisor For A SQL ID In Oracle Database? ==== 
 +  * [[https://www.funoracleapps.com/2022/01/how-can-we-run-sql-tuning-advisor-for.html|How Can we Run SQL Tuning Advisor For A SQL ID In Oracle Database? - funoracleapps.com]] 
 +Catch it while the statement is running...\\ 
 +1. Create Tuning Task 
 +<code> 
 +DECLARE 
 +l_sql_tune_task_id VARCHAR2(100); 
 +BEGIN 
 +l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task ( 
 +sql_id => '4g5ah8zr6thnb', 
 +scope => DBMS_SQLTUNE.scope_comprehensive, 
 +time_limit => 500, 
 +task_name => '4g5ah8zr6thnb_tuning_task_cur', 
 +description => 'Tuning task_cur for statement 4g5ah8zr6thnb'); 
 +DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id); 
 +END; 
 +
 +</code> 
 + 
 +2. Execute Tuning task: 
 +<code> 
 +EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '4g5ah8zr6thnb_tuning_task_cur'); 
 +</code> 
 + 
 +3. Get the Tuning advisor report. 
 + 
 +<code> 
 +set long 65536 
 +set longchunksize 65536 
 +set linesize 100 
 +select dbms_sqltune.report_tuning_task('4g5ah8zr6thnb_tuning_task_cur') from dual; 
 +</code> 
 + 
 +4. Get the list of tuning tasks present in the database:\\ 
 +The list of tuning tasks can be found in DBA_ADVISOR_LOG 
 + 
 +<code> 
 +SELECT TASK_NAME, STATUS FROM DBA_ADVISOR_LOG WHERE TASK_NAME='4g5ah8zr6thnb_tuning_task_cur'
 +</code> 
 + 
 +5. Drop a tuning task: 
 +<code> 
 +execute dbms_sqltune.drop_tuning_task('4g5ah8zr6thnb_tuning_task_cur');
 +</code>
 +
 +=== Method when the SQL is not running ===
 +We can get the information from the AWR snapshots taken while the query was running (here for SQL_ID 4g5ah8zr6thnb).
 +
 +Find the begin snap and end snap of the sql_id:
 +<code>
 +select a.instance_number inst_id
 +,      a.snap_id
 +,      a.plan_hash_value
 +,      to_char(begin_interval_time,'dd-mon-yy hh24:mi') btime
 +,      abs(extract(minute from (end_interval_time-begin_interval_time))
 +         + extract(hour   from (end_interval_time-begin_interval_time))*60
 +         + extract(day    from (end_interval_time-begin_interval_time))*24*60) minutes
 +,      executions_delta executions
 +,      round(elapsed_time_delta/1000000/greatest(executions_delta,1),4) "avg duration (sec)"
 +from   dba_hist_sqlstat  a
 +,      dba_hist_snapshot b
 +where  sql_id='&sql_id'
 +and    a.snap_id=b.snap_id
 +and    a.instance_number=b.instance_number
 +order  by snap_id desc, a.instance_number;
 +</code>
 +From here we can get the begin snap and end snap of the sql_id, e.g.
 +<code>
 +begin_snap -> 6377
 +end_snap -> 6380
 +</code>
 + 
 +1. Create the tuning task: 
 + 
 +<code> 
 +DECLARE 
 +  l_sql_tune_task_id  VARCHAR2(100); 
 +BEGIN 
 +  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task ( 
 +                          begin_snap  => 6377, 
 +                          end_snap    => 6380, 
 +                          sql_id      => '4g5ah8zr6thnb', 
 +                          scope       => DBMS_SQLTUNE.scope_comprehensive, 
 +                          time_limit  => 600, 
 +                          task_name   => '4g5ah8zr6thnb_AWR_tuning_task', 
 +                          description => 'Tuning task for statement 4g5ah8zr6thnb  in AWR'); 
 +  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id); 
 +END; 
 +
 +2. Execute the tuning task: 
 + 
 +EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '4g5ah8zr6thnb_AWR_tuning_task'); 
 + 
 +3. Get the tuning task recommendation report 
 + 
 +SET LONG 10000000; 
 +SET PAGESIZE 100000000 
 +SET PAGESIZE 24 
 +SET LINESIZE 200 
 +SELECT DBMS_SQLTUNE.report_tuning_task('4g5ah8zr6thnb_AWR_tuning_task') AS recommendations FROM dual; 
 +</code> 
 +==== Find the sessions consuming a high amount of temporary space ====
 +  * [[https://www.funoracleapps.com/2021/06/query-to-find-session-consuming-high.html|Query to find the session consuming high Temp TableSpace - funoracleapps.com]] 
 +Check the current status of the temporary tablespaces 
 +<code> 
 +select tablespace_name 
 +,      tablespace_size/1024/1024 "Total Space MB" 
 +,      allocated_space/1024/1024 "Alloc Space MB" 
 +,      free_space/1024/1024      "Free Space MB" 
 +from   dba_temp_free_space 
 +
+</code> 
+Then list the sessions using space in that temporary tablespace 
+<code> 
+select s.sid 
 +,      s.username 
 +,      u.tablespace 
 +,      s.sql_hash_value||' or '||u.sqlhash hash_value 
 +,      u.segtype 
 +,      u.contents 
 +,      u.blocks 
 +from   v$session s 
 +,      v$tempseg_usage u 
 +where  s.saddr=u.session_addr 
+and    u.tablespace=upper('&tablespace_name') 
 +order  by u.blocks desc 
 +
 +</code> 
 + 
+Then, for that SID and parsing schema, run this: it lists the open cursors that perform sorts, together with rows processed per execution, which points to the statements most likely to be consuming the temp space. 
 +<code> 
 +select hash_value 
 +,      sorts 
+,      rows_processed/decode(executions,0,1,executions) rows_per_exec 
 +from   v$sql 
 +where  hash_value in (select hash_value from v$open_cursor where sid=&sid) 
 +and    sorts > 0 
+and    parsing_schema_name=upper('&schema_name') 
+order  by rows_processed/decode(executions,0,1,executions) 
 +
 +</code> 
 + 
 + 
 +==== How much memory is being used by processes (PGA memory)? ==== 
 +  *  [[https://www.spotonoracle.com/?p=267|Tracking down PGA memory leak – a walkthrough - www.spotonoracle.com]] 
 +  *  [[https://tanelpoder.com/2014/03/26/oracle-memory-troubleshooting-part-4-drilling-down-into-pga-memory-usage-with-vprocess_memory_detail/|Oracle Memory Troubleshooting, Part 4: Drilling down into PGA memory usage with V$PROCESS_MEMORY_DETAIL - Tanel Poder]] 
 +  *  [[https://github.com/tanelpoder/tpt-oracle/blob/master/pmem.sql|Show process memory usage breakdown - lookup by process SPID]] 
 +  *  [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=822527.1|How To Find Where The Memory Is Growing For A Process (Doc ID 822527.1)]] 
 +  *  [[https://dioncho.wordpress.com/2009/06/09/707/#comments|Troubleshooting PGA leak with PGA heap dump - Dion Cho – Oracle Performance Storyteller]] 
 +<code> 
 +set lines 1000 pages 100 
 +col PGA_USED_MEGS for 999,999,990 
 +col PGA_ALLOC_MEGS for 999,999,990 
 +col PGA_FREEABLE_MEGS for 999,999,990 
 +col PGA_MAX_MEGS for 999,999,990 
 +compute sum of PGA_USED_MEGS on report 
 +compute sum of PGA_ALLOC_MEGS on report 
 +compute sum of PGA_MAX_MEGS on report 
 +break on report 
 +SELECT PROGRAM, (PGA_USED_MEM/1024/1024) PGA_USED_MEGS, (PGA_ALLOC_MEM/1024/1024) PGA_ALLOC_MEGS, (PGA_FREEABLE_MEM/1024/1024) PGA_FREEABLE_MEGS, (PGA_MAX_MEM/1024/1024) PGA_MAX_MEGS 
 +FROM V$PROCESS 
 +order by 2 
 +
 +</code> 
+==== PGA memory with a bit more surrounding info, including temporary tablespace usage ==== 
 +<code> 
+set lines 1000 pages 2000 trims on feed on newpage none 
 +col username        head "Username"  for a18 
 +col osuser          head "O/S User"  for a12 
 +col sid_ser         head "sid_ser"   for a12 
 +col ospid           head "O/S pid"   for a8 
 +col logontime       head "Logged on" for a15 
 +col timenow         head "Time now"  for a15 
 +col program         head "Program"   for a30 trunc 
 +col machine         head "Machine"   for a18 trunc 
 +col port            head "Port"      for 999999 
 +col status          head "Status"    for a10 
 +col cpu_secs        head "CPU (s)"   for 99990D9 
 +col max_megs        head "Max MB"    for 99990D9 
 +col alloc_megs      head "Alloc MB"  for 99990D9 
 +col used_megs       head "Used MB"   for 99990D9 
 +col freeable_megs   head "Freeable"  for 99990D9 
 +col temp_used       head "Temp MB"   for 99990D9 
 + 
 +col timenow nopri 
 + 
 +compute sum of alloc_megs    on report 
 +compute sum of used_megs     on report 
 +compute sum of freeable_megs on report 
 +break on report 
 + 
 +with temp_usage as ( 
 +select su.session_addr                          session_addr 
 +,      s.sid||'_'||s.serial#                    sid_ser 
 +,      sum(su.blocks)*tbs.block_size/1024/1024  mb_used 
 +,      su.tablespace                            tablespace 
 +from   v$session       s 
 +,      v$sort_usage    su 
 +,      dba_tablespaces tbs 
 +where  1=1 
 +and    s.saddr       = su.session_addr 
 +and    su.tablespace = tbs.tablespace_name 
 +group  by su.session_addr 
 +,      s.sid 
 +,      s.serial# 
 +,      tbs.block_size 
 +,      su.tablespace 
 +
+SELECT nvl(s.username,'BACKGROUND')         username 
 +,      s.osuser                               osuser 
 +,      s.sid||','||s.serial#                  sid_ser 
 +,      p.spid                                 ospid 
 +,      to_char(logon_time,'YYYYMMDDHH24MISS') logontime 
+,      to_char(sysdate,'YYYYMMDDHH24MISS')  timenow 
 +,      s.program                              program 
 +,      s.machine                              machine 
 +,      s.port                                 port 
 +,      s.status                               status 
+,      round((ss.value/100),2)                cpu_secs 
 +,      round(pga_max_mem/1024/1024)           max_megs 
 +,      round(pga_alloc_mem/1024/1024)         alloc_megs 
 +,      round(pga_used_mem/1024/1024)          used_megs 
 +,      round(pga_freeable_mem/1024/1024)      freeable_megs 
 +,      tu.mb_used                             temp_used 
+from   v$process   p 
+,      v$session   s 
 +,      v$sesstat  ss 
 +,      v$statname sn 
 +,      temp_usage tu 
 +where  p.addr        = s.paddr 
 +and    s.sid         = ss.sid 
 +and    ss.statistic# = sn.statistic# 
 +and    sn.name       = 'CPU used by this session' 
 +and    s.saddr       = tu.session_addr(+) 
 +order  by pga_used_mem desc 
 +
 + 
 +set lines 80 
 +</code> 
 + 
 +==== SGA and PGA historic usage per hour (or snapshot interval) ==== 
 +<code> 
 +set lines 1000 pages 5000 trims on 
 +col instance_name head "Instance"   for a9 
+col sga           head "SGA(Gb)"    for 9,990.9 
+col pga           head "PGA(Gb)"    for 9,990.9 
+col tot           head "Total(Gb)"  for 99,990.9 
 +col datetime      head "Date"       for a21 
 + 
 +select i.instance_name                  instance_name 
 +,      sga.allocated                    sga 
 +,      pga.allocated                    pga 
 +,      (sga.allocated+pga.allocated)    tot 
 +,      '   '||to_char(trunc(sn.end_interval_time,'mi'),'DD-MON-YY HH24:MI:SS') datetime 
 +from   ( 
 +       select snap_id 
 +       ,      instance_number 
 +       ,      round(sum(bytes)/1024/1024/1024,3) allocated 
 +       from   dba_hist_sgastat 
+       group  by snap_id 
+       ,      instance_number 
+       ) sga 
 +,      ( 
 +       select snap_id 
 +       ,      instance_number 
 +       ,      round(sum(value)/1024/1024/1024,3) allocated 
 +       from   dba_hist_pgastat where name = 'total PGA allocated' 
+       group  by snap_id 
+       ,      instance_number 
+       ) pga 
 +,      dba_hist_snapshot sn 
 +,      v$instance        i 
 +where  sn.snap_id         = sga.snap_id 
 +and    sn.instance_number = sga.instance_number 
 +and    sn.snap_id         = pga.snap_id 
 +and    sn.instance_number = pga.instance_number 
 +and    sn.instance_number = i.instance_number 
 +order  by sn.end_interval_time desc 
 +,      sn.instance_number 
 +
 + 
 +set lines 80 
 +</code> 
 + 
 +==== Tuning PGA_AGGREGATE_TARGET ==== 
 + 
 +  *  [[https://docs.oracle.com/database/121/TGDBA/tune_pga.htm#GUID-C8C6D009-CF38-4996-AD0E-D97CE0CECE3D]] 
 + 
 +<code> 
 +SELECT ROUND(pga_target_for_estimate/1024/1024) target_mb, 
 +       estd_pga_cache_hit_percentage cache_hit_perc, 
 +       estd_overalloc_count 
 +  FROM V$PGA_TARGET_ADVICE; 
 +</code>
  
+==== Find the total PGA memory used by processes ==== 
+<code> 
 +select round(sum(pga_used_mem)/(1024*1024),2) pga_used_mb from v$process; 
 +</code> 
 +==== To calculate the amount of memory that you may need for PGA ==== 
 +<code> 
 +select * from v$pgastat; 
 +select * from v$pga_target_advice; 
 +</code> 
 +See [[https://docs.oracle.com/cd/E11882_01/server.112/e40402/dynviews_2096.htm]] for details on v$pgastat\\ 
 +See [[https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/V-PGA_TARGET_ADVICE.html]] for details on v$pga_target_advice\\ 
+See [[https://dba010.com/2011/05/31/how-to-determine-if-pga-is-set-properly/]] for the source of the queries below. 
 +<code> 
 +col name for a40 
 +select b.name 
 +,      sum(a.value) value 
 +from   v$sysstat a 
 +,      v$statname b 
 +where  a.statistic# = b.statistic# 
 +and    b.name like '%workarea executions - %' 
 +group  by b.name 
 +
 +</code> 
 +<code> 
 +NAME                                                    VALUE 
 +---------------------------------------- -------------------- 
 +workarea executions - onepass                         1081618 
 +workarea executions - multipass                         34847 
 +workarea executions - optimal                       469688227 
 +</code> 
 +The goal is to get rid of all passes and have everything optimal.\\ 
 +One pass means that data has had to be swapped out to the temporary tablespace to get the job done. Multipass means data had to be swapped out several times to get the job done.\\ 
+In this example the onepass and multipass counts are far too high. 
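+The three counters above can be condensed into a single percentage; a small sketch using the same v$sysstat statistics (the alias pct_optimal is made up): 
+<code> 
+-- percentage of work area executions that completed entirely in memory 
+select round(100 * sum(decode(name,'workarea executions - optimal',value,0)) 
+             / sum(value), 2) pct_optimal 
+from   v$sysstat 
+where  name like 'workarea executions - %'; 
+</code> 
+For the figures above this works out at roughly 99.8% optimal, yet the absolute onepass/multipass counts still justify a larger PGA. 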
 +<code> 
 +col pga_size_gb for 99999 
 +col estd_pga_cache_hit_percentage for 99999 head "ESTD_PGA_CACHE|HIT_PERCENTAGE" 
 +col estd_overalloc_count for 9999999999 head "ESTD_OVER|ALLOC_COUNT" 
 +select round(pga_target_for_estimate/1024/1024/1024) pga_size_gb 
 +,      estd_pga_cache_hit_percentage 
 +,      estd_overalloc_count 
 +from   v$pga_target_advice 
 +
 +</code> 
 +<code> 
 +            ESTD_PGA_CACHE   ESTD_OVER 
 +PGA_SIZE_GB HIT_PERCENTAGE ALLOC_COUNT 
 +----------- -------------- ----------- 
 +          0             94      324950 
 +          1             94      324929 
 +          2             94      324870 
 +          2             94      324084 
 +          3             98      322445 
 +          4             98      319425 
 +          4             98      301580 
 +          5             98      272987 
 +          5             98      249429 
 +          6             98      244328 
+          9             99      192640 
 +         12             99      122051 
 +         18            100           0 
 +         24            100           0 
 +</code> 
+Setting pga_aggregate_target to 18GB would keep all work areas in memory (estd_overalloc_count drops to zero) instead of spilling to the temporary tablespace. 
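+To compare the current target with what is actually being allocated, the v$pgastat rows referenced earlier can be queried directly (these statistic names are as documented for v$pgastat): 
+<code> 
+select name, round(value/1024/1024/1024,2) gb 
+from   v$pgastat 
+where  name in ('aggregate PGA target parameter' 
+               ,'total PGA allocated' 
+               ,'maximum PGA allocated'); 
+</code> 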
  
+==== Check the table size and percentage of fragmentation ==== 
+  * [[https://orahow.com/how-to-find-and-remove-table-fragmentation-in-oracle-database/|How to Find and Remove Table Fragmentation in Oracle Database]] 
+<code> 
+select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME'; 
+ 
+select table_name,avg_row_len,round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE", 
+round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE", 
+round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE", 
+(round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "PERCENTAGE" 
+from all_tables WHERE table_name='&TABLE_NAME'; 
+</code> 
+Note: the arithmetic assumes a 16k block size (blocks*16/1024 gives MB); adjust the multiplier to match your db_block_size. 
  
+==== Enable Automatic Memory Management (AMM) ==== 
AMM is enabled by setting one or both of the following memory parameters:
+<code> 
+alter system set memory_max_target=2g scope=spfile; 
+alter system set memory_target=2g scope=spfile; 
+</code> 
+With these memory parameters set, AMM is enabled.\\ 
+Memory will now be allocated automatically to where it is needed.\\ 
+<code> 
+alter system set sga_target=0 scope=spfile; 
+alter system set pga_aggregate_target=0 scope=spfile; 
 +</code> 
 +If sga_target and/or pga_aggregate_target are set, these will be treated as minimum values.\\ 
 +If only one of sga_target or pga_aggregate_target is set, the other will be set to (memory_target - the value set).\\
  
+=== Disable AMM === 
+<code> 
 +  SQL> alter system reset memory_max_target scope=spfile  sid='*'; 
 +  SQL> alter system reset memory_target  scope=spfile  sid='*'; 
 +</code>
  
+=== Enable ASMM === 
+<code> 
+  SQL> alter system set SGA_MAX_SIZE=1400m scope=spfile  sid='*'; 
 +  SQL> alter system set SGA_TARGET=1000m scope=spfile  sid='*'; 
 +  SQL> alter system set PGA_AGGREGATE_TARGET=480m scope=spfile  sid='*'; 
 +</code>
  
+=== Reboot database and verify that we have switched from AMM to ASMM === 
 +<code> 
 +SQL> show parameter memory 
 +NAME                     TYPE     VALUE 
 +------------------------------------ ----------- ------------------------------ 
 +memory_max_target             big integer 0 
 +memory_target                 big integer 0 
 +</code> 
 +--> AMM disabled 
 + 
 +<code> 
 +SQL> show parameter sga 
 +NAME                     TYPE          VALUE 
 +------------------------ ----------- ------------------------------ 
 +sga_max_size             big integer 1408M 
 +sga_target               big integer 912M 
 + 
 +SQL> show parameter pga 
 +NAME                     TYPE     VALUE 
 +------------------------ ----------- ------------------------------ 
 +pga_aggregate_target     big integer 480M 
 +</code>
--> ASMM enabled !
  
+==== Show size of the large pool ==== 
+<code> 
 +select name, sum(bytes) from v$sgastat where upper(pool) = 'LARGE POOL' group by rollup (name); 
 +</code> 
 + 
 +==== Quick SGA sizing advice ==== 
 +The MMON background process gathers statistics about sga usage and updates the V$SGA_TARGET_ADVICE view. 
 +<code> 
 +--- DB_CACHE_ADVICE should be ON 
 +  
 +SQL> show parameter db_cache_advice 
 +  
 +NAME                                 TYPE                             VALUE 
 +------------------------------------ -------------------------------- ------------------ 
 +db_cache_advice                      string                           ON 
 +  
 +-- STATISTICS_LEVEL should be TYPICAL/ALL. 
 +  
 +SQL> show parameter statistics_level 
 +  
 +NAME                                 TYPE                             VALUE 
 +------------------------------------ -------------------------------- -------------------------- 
 +statistics_level                     string                           TYPICAL 
 +</code> 
 +<code> 
 +select sga_size 
 +,      sga_size_factor 
 +,      estd_physical_reads 
 +,      estd_db_time 
 +from   v$sga_target_advice 
 +order  by sga_size; 
 +</code> 
+Note where the ESTD_PHYSICAL_READS values stop dropping significantly 
 +<code> 
 +  SGA_SIZE SGA_SIZE_FACTOR ESTD_PHYSICAL_READS ESTD_DB_TIME 
 +---------- --------------- ------------------- ------------ 
 +      3756            .375          2.0808E+10     11162201 
 +      5008              .5          2.0808E+10     11156419 
 +      6260            .625          2.0808E+10     11156419 
 +      7512             .75          2.0808E+10     11156419 
 +      8764            .875          2.0808E+10     11156419 
+     10016               1          1.3837E+10     19275084 
 +     11268           1.125          7146720999     59698790 
 +     12520            1.25          5065662261     22754237 
 +     13772           1.375          4939747138      8321054 
 +     15024             1.5          4939747138      8319126 
 +     16276           1.625          4939747138      8319126 
 +     17528            1.75          4939747138      8319126 
 +     18780           1.875          4939747138      8319126 
+     20032               2          4400110896    148965559 
 + 
 +14 rows selected. 
 +</code> 
+Suggestion would be to increase the SGA target by a factor of 1.25, as beyond that there is no longer a significant decrease in ESTD_PHYSICAL_READS for the increase in SGA size. 
 + 
 +==== SGA tuning ==== 
 +Problem with 12c SGA growing too high. Database keeps crashing due to memory issues. Started at 2G, then to 4G and now at 5G.\\
Raised SR with Oracle. This is the response
-<code>21@@</code>+<code> 
 +You need to query the advice views like V$MEMORY_TARGET_ADVICE, V$SGA_TARGET_ADVICE, V$SHARED_POOL_ADVICE etc. 
 +Interpret the results and then make the changes to memory components as indicated by the SIZE column. 
 +Please review 
 +Note 1323708.1 : Tuning the SGA_TARGET using V$SGA_TARGET_ADVICE then follow the version specific advice. 
 + 
 +Please note that Oracle recommend to specify a value for the each of the tunable parameters 
 +i.e Shared Pool, Database Buffer Cache, Large Pool, Java Pool and Streams Pool even when AMM or ASMM is enabled (you are using AMM). 
 +When minimum value for each of the tunable parameters is set, Oracle will make sure that the amount of memory being allocated for the corresponding pool will not shrink below the specified amount. 
 +Best practices with auto-tuning are to include explicit, minimum settings for the various auto-tuned components. 
 +This will help the auto-tuner make "smarter" choices about moving memory within the SGA. 
 +While it is not required to set explicit settings, the auto-tuner can get over aggressive with memory moves when auto-tuned components are set to 0. 
 + 
 +The parameter _kghdsidx_count controls the number of subpools used. Setting this parameter to 2 will be the best. Currently you have this parameter set to 5. 
 + 
 +SQL> alter system set "_kghdsidx_count"=2 scope=spfile; 
 +or add this in the pfile 
 +"_kghdsidx_count"=2 
 + 
 +As a best practice for SGA components you can use the following recommendations: 
 + 
 +- SHARED_POOL_SIZE >= 2G (because minimum 1G/subpool is recommended) 
 +- LARGE_POOL_SIZE >= 150M (used for parallel executions) 
 +- STREAMS_POOL_SIZE >= 150M (if you use datapump regularly) 
 +- JAVA_POOL_SIZE >= 150M (if you use java) 
 +- SGA_TARGET >= 3G (because it should be greater by 10% of the sum of it's components) 
 +</code>
  
or another one from an 11g incident...
-<code>22@@</code>+<code> 
 +ACTION PLAN 
 +=========== 
 +1. We can see that your shared pool is divided intro 4 subpools. 
 +Set _kghdsidx_count to 2 in order to be sure the shared pool will be divided only in 2 subpools and not more. You currently have 4 subpools which favor fragmentation. 
 + 
 +connect / as sysdba 
 +SQL > alter system set "_kghdsidx_count"=2 scope=spfile; 
 +- restart the database 
 + 
 +FYI: Starting with Oracle 9i the shared_pool will be divided into multiple sub-heaps. Until 9.0, the shared pool was always allocated in one large heap rather than multiple subheaps. The number of subpools even if produced a better performance, also it 
 +could produce ora-4031 errors when there is not space in the subpool as a result of fragmentation. One process will use only a specific subpool and if an error is reported, it wont be migrated to other subpool. Oracle 9i implemented multiple subpools to avoid shared pool latch contention. But, the cost of the multiple pools is that the size of each subpool will be smaller and hence more susceptible to an ORA-4031. So, the parameter _kghdsidx_count can be used to override the default for the number of sub pools. 
 + 
 +2. Decrease the number of open_cursors from 3000 to 1000. 
 + 
 +The open_cursors parameter is defined per session. Having many cursors open per session is too much and it means that the cursors are not correctly handled/closed at application level. 
 +1000 cursors created per sessions should be enough for any application. We recommend in some cases to go as low as 600 which should still be enough. 
 + 
 +3. Apply the workaround to disable the use of durations by setting "_enable_shared_pool_durations=FALSE"
 + 
 +This will allow unpinned SQLA, KGLH0, and KGLHD memory to be freed to make room for new permanent allocations. 
 + 
 +4. Set a minimum for the SGA within the memory_target size using the parameter sga_target. For example set sga_target=13G. This will mean that at all times 13G of memory from memory_target will be reserved for the SGA. More memory can be allocated to it if necessary and available. 
 +So please modify accordingly the parameters sga_max_size and sga_target. This will require a database restart. 
 +</code>
  
But.. From 1323708.1 ...
-<code>23@@</code>+<code> 
 +In Oracle11g (or higher), the information in the V$SGA_TARGET_ADVICE view is similar to that provided in the V$MEMORY_TARGET_ADVICE view for Automatic Shared Memory Management. 
 +If MEMORY_TARGET is set, then SGA_TARGET should be set to 0. See Document 443746.1 for more details on this. 
 +If MEMORY_TARGET is not set, use the same steps as for 10g. 
 +</code> 
 + 
 +==== Redo log file size and Database Performance ==== 
 +  *  [[https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9540730000346298496]] 
+See how much time sessions have spent waiting on control file writes (checkpoint activity); to see time spent writing the redo logs themselves, change the event to 'log file parallel write' 
 +<code> 
 +select /*+ ordered */  
 + a.sid,  
 + decode(a.type, 'BACKGROUND', substr (a.program,instr(a.program,'(',1,1)), 'FOREGROUND') type,  
 + b.time_waited,  
 + round(b.time_waited/b.total_waits,4) average_wait,  
 + round((sysdate - a.logon_time)*24) hours_connected  
 + from   v$session_event b, v$session a  
 + where  a.sid   = b.sid  
 + and    b.event = 'control file parallel write'  
 + order by type, time_waited; 
 +</code> 
 + 
 +==== How much redo was generated / How many archivelog switches have occurred per hour over the past week? ==== 
 +<code> 
 +set lines 200 pages 100 
 +col day   for a9 hea "Date\Hour" 
 +col "00"  for a5 
 +col "01"  for a5 
 +col "02"  for a5 
 +col "03"  for a5 
 +col "04"  for a5 
 +col "05"  for a5 
 +col "06"  for a5 
 +col "07"  for a5 
 +col "08"  for a5 
 +col "09"  for a5 
 +col "10"  for a5 
 +col "11"  for a5 
 +col "12"  for a5 
 +col "13"  for a5 
 +col "14"  for a5 
 +col "15"  for a5 
 +col "16"  for a5 
 +col "17"  for a5 
 +col "18"  for a5 
 +col "19"  for a5 
 +col "20"  for a5 
 +col "21"  for a5 
 +col "22"  for a5 
 +col "23"  for a5 
 +col "all" for a9 heading "All day" 
 +alter session set nls_date_format='DD-MON-YY' 
 +
 +select * from ( 
 +    select trunc(first_time) day, 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'9999') "00", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'9999') "01", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'9999') "02", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'9999') "03", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'9999') "04", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'9999') "05", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'9999') "06", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'9999') "07", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'9999') "08", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'9999') "09", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'9999') "10", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'9999') "11", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'9999') "12", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'9999') "13", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'9999') "14", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'9999') "15", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'9999') "16", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'9999') "17", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'9999') "18", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'9999') "19", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'9999') "20", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'9999') "21", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'9999') "22", 
 +       to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'9999') "23", 
 +       to_char(count(to_char(first_time,'DD')),'9999') "all" 
 +    from   v$log_history 
 +    group  by trunc(first_time) 
 +    order  by trunc(first_time) desc 
 +
 +where  rownum < 8 
 +order  by 1 
 +
 +set lines 80
  
+</code> 
+==== How much archive (redo volume) has been generated per day ==== 
Also shows how much RMAN has deleted
+<code> 
 +set lines 1000 pages 100 
 +col day for a15 head "Date" 
 +select sum_arch.day 
 +,      round(sum_arch.generated_gb,1) generated_gb 
 +,      round(sum_arch_del.archived_gb,1) archived_gb 
 +,      round((sum_arch.generated_gb - sum_arch_del.archived_gb),1) remaining_gb 
 +from   ( 
 +       select to_char (completion_time, 'dd-mm-yyyy') day 
 +       ,      sum (round ( (blocks * block_size) / (1024 * 1024 * 1024), 2)) generated_gb 
 +       from   v$archived_log 
 +       where  archived = 'YES' 
 +       group by to_char (completion_time, 'dd-mm-yyyy')) sum_arch 
 +,      ( 
 +       select to_char (completion_time, 'dd-mm-yyyy') day 
 +       ,      sum (round ( (blocks * block_size) / (1024 * 1024 * 1024), 2)) archived_gb 
 +       from   v$archived_log 
 +       where  archived = 'YES' and deleted = 'YES' 
 +       group  by to_char (completion_time, 'dd-mm-yyyy')) sum_arch_del 
 +where  sum_arch.day = sum_arch_del.day(+) 
 +order  by to_date (day, 'dd-mm-yyyy') desc 
 +
 +set lines 80 
 +</code>
  
+==== Top session activity ==== 
+  * [[https://support.oracle.com/epmos/faces/DocumentDisplay?id=232443.1|How to Identify Resource Intensive SQL ("TOP SQL") (Doc ID 232443.1)]] 
Enter READS, EXECS or CPU to order session activity by that column
+<code> 
 +set lines 500 pages 1000 verif off
  
+col username   for a15 
+col machine    for a26 
 +col module     for a30 
 +col logon_time for a20 
 +col program    for a30 
 +col killer     for a12 
 +col osuser     for a10
  
+prompt Enter CPU, READS or EXECS 
+prompt (Press Enter for CPU default) 
+ 
+select nvl(a.username, '(oracle)') as username 
 +,      a.osuser 
 +,      a.sid||','||a.serial# killer 
 +,      c.value as &&TYPE 
 +,      a.lockwait 
 +,      a.status 
 +,      a.module 
 +,      a.machine 
 +,      a.program 
 +,      to_char(a.logon_time,'dd-mon-yyyy hh24:mi:ss') as logon_time 
 +from   v$session a 
 +,      v$sesstat c 
 +,      v$statname d 
 +where  1=
 +and    a.sid        = c.sid 
 +and    c.statistic# = d.statistic# 
 +and    d.name       = decode(upper('&TYPE'), 'READS', 'session logical reads', 
 +                                             'EXECS', 'execute count', 
 +                                             'CPU',   'CPU used by this session', 
 +                                                      'CPU used by this session' 
 +                      ) 
 +order by c.value desc 
 +
 + 
 +undef TYPE 
 +</code> 
 + 
 +  * [[https://www.funoracleapps.com/2019/10/query-to-check-session-details-in.html|A session collecting query from funoracleapps.com]] 
 +<code> 
 +select s.ecid  
 +,      s.inst_id 
 +,      s.sid 
 +,      s.serial# 
 +,      p.spid 
 +,      s.status 
 +,      s.machine 
 +,      s.action 
 +,      s.module 
 +,      s.terminal 
 +,      s.sql_id 
 +,      s.last_call_et 
 +,      s.event 
 +,      s.client_info 
 +,      s.plsql_subprogram_id 
 +,      s.program 
 +,      s.client_identifier 
 +,      ( select max( substr( sql_text , 1, 40 )) FROM gv$sql sq      WHERE sq.sql_id = s.sql_id )                                                           sql_text 
 +,      ( select object_name                      FROM dba_procedures WHERE object_id = plsql_entry_object_id AND subprogram_id = 0)                         plsql_entry_object 
 +,      ( select procedure_name                   FROM dba_procedures WHERE object_id = plsql_entry_object_id AND subprogram_id = plsql_entry_subprogram_id) plsql_entry_subprogram 
+,      ( select object_name                      FROM dba_procedures WHERE object_id = plsql_object_id       AND subprogram_id = 0)                         plsql_object 
+,      ( select procedure_name                   FROM dba_procedures WHERE object_id = plsql_object_id       AND subprogram_id = plsql_subprogram_id)       plsql_subprogram 
 +,      'alter system kill session ' || '''' || s.SID || ',' || s.serial# ||',@'|| s.inst_id||''''|| ' immediate;'                                           kill_session 
 +from   gv$session s 
 +,      gv$process p 
+where  p.addr    = s.paddr 
+and    p.inst_id = s.inst_id 
+--and  s.client_identifier like '%HIMANSHU.SINGH%' 
+--and  s.ecid like '%ZnHWOPoUDWbG%' 
+--and  s.sid = 4361 
+--and  p.spid = '1196' 
+--and  s.program like '%ICM%' 
+/ 
 +</code> 
 + 
 + 
 +==== Show current used undo blocks for ongoing transactions ==== 
 +<code> 
 +select vs.sid 
 +,      vs.serial# 
 +,      vs.username 
 +,      vs.program 
 +,      vt.used_ublk 
 +,      vt.used_urec 
 +from   v$session     vs 
 +,      v$transaction vt 
 +where  vs.taddr = vt.addr 
 +order  by 5 desc, 6 desc 
 +</code> 
 + 
 +==== Pinpoint which sessions are using lots of undo ==== 
 +<code> 
 +select a.sid 
 +,      b.name 
 +,      a.value 
 +from   v$sesstat a 
 +,      v$statname b 
 +where  a.statistic# = b.statistic# 
+and    b.name       = 'undo change vector size'  -- statistic# varies by version, so join on name 
 +order  by a.value desc 
 +</code> 
 +==== How many blocks have been changed by sessions? ==== 
 +High values indicate a session generating lots of redo\\
Use this query to check for programs generating lots of redo when these programs activate more than one transaction.
-<code>29@@</code> +<code> 
-=====Top SQL=====+select vs.sid 
 +,      vs.serial# 
 +,      vs.username 
 +,      vs.program 
 +,      vi.block_changes 
 +from   v$session vs 
 +,      v$sess_io vi 
 +where  vs.sid = vi.sid 
 +order  by 5 desc 
 +,      1, 2, 3, 4; 
 +</code> 
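+Block changes are only a proxy for redo volume; the actual bytes of redo per session can be read from the 'redo size' statistic. A sketch along the same lines as the query above: 
+<code> 
+select s.sid 
+,      s.username 
+,      s.program 
+,      st.value redo_bytes 
+from   v$session  s 
+,      v$sesstat  st 
+,      v$statname sn 
+where  s.sid         = st.sid 
+and    st.statistic# = sn.statistic# 
+and    sn.name       = 'redo size' 
+order  by st.value desc; 
+</code> 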
 +==== Top SQL ====
Shows the SQL statements that have caused the most disk reads per execution since the instance was last started
-<code>30@@</code>+<code> 
 +set lines 1000 pages 1000 
 +set verif off 
 +col sql_text for a150 
 + 
 +select * 
 +from   ( 
 +       select substr(a.sql_text,1,200) sql_text 
 +       ,      trunc(a.disk_reads/decode(a.executions,0,1,a.executions)) reads_per_execution 
 +       ,      a.buffer_gets 
 +       ,      a.disk_reads 
 +       ,      a.executions 
 +       ,      a.sorts 
 +       ,      a.address 
 +       from   v$sqlarea a 
 +       order  by 2 desc 
 +       ) 
 +where  rownum <= 11 
 +
 +set lines 80 
 +</code>
 Top ten SQL statements with the greatest aggregate elapsed time
 +<code>
 +set lines 1000 pages 1000 
 +col sql_text for a150 
 +SELECT sql_id,child_number,sql_text, elapsed_time 
 +FROM (SELECT sql_id, child_number, sql_text, elapsed_time, 
 +              cpu_time,disk_reads, 
 +      RANK () OVER (ORDER BY elapsed_time DESC) AS elapsed_rank 
 +         FROM v$sql) 
 +WHERE elapsed_rank <= 10 
 +
 +set lines 80 
 +</code>
 See the explain plan of these SQLs by feeding the sql_id and child_no into this cracker!
 +<code>
 +SELECT * FROM TABLE (DBMS_XPLAN.display_cursor('&sql_id', &child_no, 'TYPICAL -BYTES'));
 +</code>
  
 +==== Top waits ====
 +Displays a list of the events currently being waited on by active sessions.\\
 +The meaning of the wait_time and seconds_in_wait columns varies depending on their values as follows:\\
 +  *  wait_time - A non-zero value represents the session’s last wait time, while a zero value indicates that the session is currently waiting.\\
 +  *  seconds_in_wait - When the wait_time is zero, the seconds_in_wait value represents the seconds spent in the current wait condition.\\
 +When the wait_time is greater than zero, the seconds_in_wait value represents the seconds since the start of the last wait,\\
 and (seconds_in_wait - wait_time / 100) is the active seconds since the last wait ended.
 +<code>
 +set lines 200 pages 1000
 +col username   for a20 
 +col event      for a30 
 +col wait_class for a15 
 + 
 +select nvl(s.username, '(oracle)') as username 
 +,      s.sid 
 +,      s.serial# 
 +,      sw.event 
 +,      sw.wait_class 
 +,      sw.wait_time 
 +,      sw.seconds_in_wait 
 +,      sw.state 
 +from   v$session_wait sw 
 +,      v$session s 
 +where  1=1 
 +and    s.sid = sw.sid 
 +order  by sw.seconds_in_wait desc 
 +
 +</code> 
 +and with the SQL... 
 +<code> 
 +select a.sid 
 +,      a.event 
 +,      a.wait_time 
 +,      round(c.physical_read_bytes/1024/1024/1024,2) gb_read 
 +,      round(c.physical_write_bytes/1024/1024/1024,2) gb_write 
 +,      c.sql_text 
 +from   v$session_wait a 
 +,      v$session b 
 +,      v$sql c 
 +where  1=1
 +and    a.sid = b.sid 
 +and    b.sql_id = c.sql_id 
 +order  by wait_time desc 
 +
 +</code> 
 +==== Session wait history ====
 Once a session of interest has been identified, we can display the history of events associated with that session
 +<code>
 +set lines 200 pages 1000 
 +set verif off 
 +col username for a20 
 +col event    for a40
  
 +select nvl(s.username, '(oracle)') as username
 +,      s.sid
 +,      s.serial#
 +,      se.event
 +,      se.total_waits 
 +,      se.total_timeouts 
 +,      se.time_waited 
 +,      se.average_wait 
 +,      se.max_wait 
 +,      round(se.time_waited_micro/1000000,2) waited_secs 
 +from   v$session_event se 
 +,      v$session s 
 +where  1=1
 +and    s.sid = se.sid 
 +and    s.sid = &1 
 +order  by se.time_waited desc 
 +
 +</code> 
 + 
 +==== System waits ==== 
 +<code> 
 +select event 
 +,       total_waits 
 +,       total_timeouts 
 +,       time_waited 
 +,       average_wait 
 +,       time_waited_micro 
 +from v$system_event 
 +order by event 
 +
 +</code> 
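 +v$system_event accumulates from instance startup, and average_wait is reported in centiseconds; for rates over a recent window, take two samples and diff them. A sketch that derives the average wait in milliseconds from the microsecond column:
 +<code>
 +select event
 +,      total_waits
 +,      round(time_waited_micro/1000/greatest(total_waits,1), 2) as avg_wait_ms
 +from   v$system_event
 +where  total_waits > 0
 +order  by time_waited_micro desc
 +/
 +</code>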
 +==== ASH queries ==== 
 +<code> 
 +SELECT NVL(a.event, 'ON CPU') AS event,
 +       COUNT(*) AS total_wait_time 
 +FROM   v$active_session_history a 
 +WHERE  a.sample_time > SYSDATE - 5/(24*60) -- 5 mins 
 +GROUP BY a.event 
 +ORDER BY total_wait_time DESC; 
 +</code> 
 + 
 +and 
 + 
 +<code> 
 +SELECT NVL(a.event, 'ON CPU') AS event, 
 +       COUNT(*)*10 AS total_wait_time 
 +FROM   dba_hist_active_sess_history a 
 +WHERE  a.sample_time > SYSDATE - 1 
 +GROUP BY a.event 
 +ORDER BY total_wait_time DESC; 
 +</code> 
 + 
 +==== Oracle Log File Sync Wait Event ==== 
 +Reference: [[http://logicalread.solarwinds.com/oracle-log-file-sync-wait-event-dr01/|logicalread.solarwinds.com]]\\
 Snippet:
 <blockquote>
 +When a user session waits on the 'log file sync' event, it is actually waiting for the LGWR process to write the log buffer to the redo log file and\\
 +return confirmation/control back to it. If the total wait time is significant, review the average wait time. If the average wait time is low but the\\
 number of waits is high, reduce the number of commits by batching (or committing after 'n') rows.
  
 If slow I/O, investigate the following:
  
 +<code>
 +    1. Reduce contention on existing disks.
 +    2. Put log files on faster disks.
 +    3. Put alternate redo logs on different disks to minimise the effect of archive processes (log file switches).
 +    4. Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.
 +</code>
  
 If wait times are still significant, review each component of the 'log file sync' and tune separately.
 </blockquote>
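 +As the snippet suggests, committing after every row makes each row pay a 'log file sync' wait. A hedged PL/SQL sketch of batching commits (the table t, its processed column, and the batch size of 1000 are illustrative, not from the source):
 +<code>
 +declare
 +  l_rows pls_integer := 0;
 +begin
 +  for r in (select id from t) loop            -- t is an illustrative table
 +    update t set processed = 'Y' where id = r.id;
 +    l_rows := l_rows + 1;
 +    if mod(l_rows, 1000) = 0 then             -- commit per batch, not per row
 +      commit;
 +    end if;
 +  end loop;
 +  commit;                                     -- final partial batch
 +end;
 +/
 +</code>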
  
 +==== Shared Memory Problem (unable to allocate shared memory ...) - and how to avoid it using bind variables ====
 +  * [[https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:528893984337|use bind variables - asktom]]\\
 +Just to give you a tiny idea of how huge of a difference this can make performance wise, you only need to run a very small test: 
 +<code> 
 +tkyte@TKYTE816> alter system flush shared_pool; 
 +System altered. 
 + 
 +tkyte@TKYTE816> declare 
 +2 type rc is ref cursor; 
 +3 l_rc rc; 
 +4 l_dummy all_objects.object_name%type; 
 +5 l_start number default dbms_utility.get_time; 
 +6 begin 
 +7 for i in 1 .. 1000 
 +8 loop 
 +9 open l_rc for 
 +10 'select object_name 
 +11 from all_objects 
 +12 where object_id = ' || i;
 +13 fetch l_rc into l_dummy; 
 +14 close l_rc; 
 +15 end loop; 
 +16 dbms_output.put_line 
 +17 ( round( (dbms_utility.get_time-l_start)/100, 2 ) || 
 +18 ' seconds...' ); 
 +19 end; 
 +20 / 
 +14.86 seconds... 
 + 
 +PL/SQL procedure successfully completed. 
 + 
 +tkyte@TKYTE816> declare 
 +2 type rc is ref cursor; 
 +3 l_rc rc; 
 +4 l_dummy all_objects.object_name%type; 
 +5 l_start number default dbms_utility.get_time; 
 +6 begin 
 +7 for i in 1 .. 1000 
 +8 loop 
 +9 open l_rc for 
 +10 'select object_name 
 +11 from all_objects 
 +12 where object_id = :x'
 +13 using i; 
 +14 fetch l_rc into l_dummy; 
 +15 close l_rc; 
 +16 end loop; 
 +17 dbms_output.put_line 
 +18 ( round( (dbms_utility.get_time-l_start)/100, 2 ) || 
 +19 ' seconds...' ); 
 +20 end; 
 +21 / 
 +1.27 seconds... 
 + 
 +PL/SQL procedure successfully completed. 
 +</code> 
 + 
 +That is pretty dramatic. The fact is that not only does this execute much faster (we spent more time PARSING our queries than actually EXECUTING them!), it will let more users use your system simultaneously.
 + 
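 +The same cursor reuse is available from plain SQL*Plus with a bind variable; each execution below shares one parsed cursor (a sketch, object_id values chosen arbitrarily):
 +<code>
 +SQL> variable x number
 +SQL> exec :x := 42
 +SQL> select object_name from all_objects where object_id = :x;
 +SQL> exec :x := 43
 +SQL> select object_name from all_objects where object_id = :x;  -- same cursor, no hard parse
 +</code>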
 +==== EXPLAIN PLAN ==== 
 +=== Usage (old school) ===
 If no access to the system plan_table, create your own (also to keep the plans longer than a session), run $ORACLE_HOME/rdbms/admin/utlxplan.sql
 +<code>
 +explain plan 
 +[set statement_id = 'statement_id']
 +[into table_name]
 +for sql_statement 
 +</code>
  
 +=== See the results ===
 +<code>
 +select rtrim(lpad(' ', 2*level) || rtrim(operation) || ' ' || rtrim(options) || ' ' || object_name) query_plan 
 +,      cost 
 +,      cardinality 
 +from   plan_table 
 +connect by prior id = parent_id 
 +start  with id = 0 
 +</code>
  
 +=== Usage (new school) ===
 1. Explain plan for the most recent SQL statement executed in the session
 +<code>
 +set pages 0 
 +select * from table(dbms_xplan.display_cursor); 
 +</code>
 2. Execute an explain plan command on a SELECT statement
 +<code>
 +explain plan for
 +select *
 +from   emp e
 +,      dept d 
 +where  d.deptno = e.deptno 
 +and    e.ename  = 'benoit' 
 +
 + 
 +set pages 0 lines 150 
 +select * from table(dbms_xplan.display); 
 +</code> 
 +3. Explain plan on a previously executed statement\\ 
 +Find the SQL_ID\\ 
 +<code
 +select sql_id, child_number 
 +from   v$sql 
 +where  sql_text like '%<something distinctive to find the SQL statement>%'; 
 +</code>
 Get the explain_plan
 +<code>
 +select * from table(dbms_xplan.display_cursor('&sql_id', &child_number));
 +</code>
  
 +==== Virtual Indexes ====
 An index created to see if the optimiser would use it without actually having to build it
 +<code>
 +SQL> ALTER SESSION SET "_use_nosegment_indexes"=TRUE; 
 +Session altered. 
 + 
 +SQL> CREATE INDEX sh.sales_vi1 ON sh.sales(quantity_sold) NOSEGMENT; 
 +Index created. 
 +</code>
 Now re-run the explain plan and see the difference.
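 +When finished testing, both the virtual index and the session parameter are easily reverted (a sketch, using the index name created above):
 +<code>
 +SQL> DROP INDEX sh.sales_vi1;
 +Index dropped.
 +
 +SQL> ALTER SESSION SET "_use_nosegment_indexes"=FALSE;
 +Session altered.
 +</code>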
  
 +==== Statspack ====
 +If you are using Standard Edition, you cannot use the Grid Control utilities to debug performance issues. You need to use the "old" method.\\
 +[[http://www.akadia.com/services/ora_statspack_survival_guide.html|http://www.akadia.com/services/ora_statspack_survival_guide.html]]\\
 Reproduced here in case the page disappears...
 +<code>
 +Overview
 +
 +    STATSPACK is a performance diagnosis tool, available since Oracle8i. STATSPACK can be considered BSTAT/ESTAT's successor, incorporating many new features. STATSPACK is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. STATSPACK can be used both proactively to monitor the changing load on a system, and also reactively to investigate a performance problem. 
 + 
 +    Remember to set timed_statistics to true for your instance. Setting this parameter provides timing data which is invaluable for performance tuning. 
 + 
 +    The «more is better» approach is not always better! 
 + 
 +    The single most common misuse of STATSPACK is the «more is better» approach. Often STATSPACK reports spans hours or even days. The times between the snapshots (the collection points) should, in general, be measured in minutes, not hours and never days. 
 + 
 +    The STATSPACK reports we like are from 15-minute intervals during a busy or peak time, when the performance is at its worst. That provides a very focused look at what was going wrong at that exact moment in time. The problem with a very large STATSPACK snapshot window, where the time between the two snapshots is measured in hours, is that the events that caused serious performance issues for 20 minutes during peak processing don't look so bad when they're spread out over an 8-hour window. It's also true with STATSPACK that measuring things over too long of a period tends to level them out over time. Nothing will stand out and strike you as being wrong. So, when taking snapshots, schedule them about 15 to 30 minutes (maximum) apart. You might wait 3 or 4 hours between these two observations, but you should always do them in pairs and within minutes of each other.
 + 
 +    «Having a history of the good times is just as important as having a history of the bad; you need both» 
 + 
 +    Another common mistake with STATSPACK is to gather snapshots only when there is a problem. That is fine to a point, but how much better would it be to have a STATSPACK report from when things were going good to compare it with when things are bad. A simple STATSPACK report that shows a tremendous increase in physical I/O activity or table scans (long tables) could help you track down that missing index. Or, if you see your soft parse percentage value went from 99% to 70%, you know that someone introduced a new feature into the system that isn't using bind variables (and is killing you). Having a history of the good times is just as important as having a history of the bad; you need both.
 + 
 +Architecture 
 + 
 +    To fully understand the STATSPACK architecture, we have to look at the basic nature of the STATSPACK utility. The STATSPACK utility is an outgrowth of the Oracle UTLBSTAT and UTLESTAT utilities, which have been used with Oracle since the very earliest versions. 
 + 
 +UTLBSTAT - UTLESTAT 
 + 
 +    The BSTAT-ESTAT utilities capture information directly from the Oracle's in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the view: V$SYSSTAT; 
 + 
 +    insert into stats$begin_stats select * from v$sysstat; 
 +    insert into stats$end_stats select * from v$sysstat; 
 + 
 +STATSPACK 
 + 
 +    When a snapshot is executed, the STATSPACK software will sample from the RAM in-memory structures inside the SGA and transfer the values into the corresponding STATSPACK tables. These values are then available for comparing with other snapshots. 
 + 
 +    Note that in most cases, there is a direct correspondence between the v$ view in the SGA and the corresponding STATSPACK table. For example, we see that the stats$sysstat table is similar to the v$sysstat view. 
 + 
 +    SQL> desc v$sysstat; 
 +     Name                                      Null?    Type 
 +     ----------------------------------------- -------- ----------------------- 
 +     STATISTIC#                                         NUMBER 
 +     NAME                                               VARCHAR2(64) 
 +     CLASS                                              NUMBER 
 +     VALUE                                              NUMBER 
 +     STAT_ID                                            NUMBER 
 + 
 +    SQL> desc stats$sysstat; 
 +     Name                                      Null?    Type 
 +     ----------------------------------------- -------- ----------------------- 
 +     SNAP_ID                                   NOT NULL NUMBER 
 +     DBID                                      NOT NULL NUMBER 
 +     INSTANCE_NUMBER                           NOT NULL NUMBER 
 +     STATISTIC#                                NOT NULL NUMBER 
 +     NAME                                      NOT NULL VARCHAR2(64) 
 +     VALUE                                              NUMBER 
 + 
 +    It is critical to your understanding of the STATSPACK utility that you realize the information captured by a STATSPACK snapshot is accumulated values. The information from the V$VIEWS collects database information at startup time and continues to add the values until the instance is shutdown. In order to get a meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots as shown above. It is critical to understand that a report will be invalid if the database is shut down between snapshots. This is because all of the accumulated values will be reset, causing the second snapshot to have smaller values than the first snapshot. 
 + 
 +Installing and Configuring STATSPACK 
 + 
 +Create PERFSTAT Tablespace 
 + 
 +    The STATSPACK utility requires an isolated tablespace to obtain all of the objects and data. For uniformity, it is suggested that the tablespace be called PERFSTAT, the same name as the schema owner for the STATSPACK tables. It is important to closely watch the STATSPACK data to ensure that the stats$sql_summary table is not taking an inordinate amount of space. 
 + 
 +    SQL> CREATE TABLESPACE perfstat 
 +         DATAFILE '/u01/oracle/db/AKI1_perfstat.dbf' SIZE 1000M REUSE 
 +         EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K 
 +         SEGMENT SPACE MANAGEMENT AUTO 
 +         PERMANENT 
 +         ONLINE; 
 + 
 +Run the Create Scripts 
 + 
 +    Now that the tablespace exists, we can begin the installation process of the STATSPACK software. Note that you must have performed the following before attempting to install STATSPACK. 
 + 
 +        Run catdbsyn.sql as SYS 
 + 
 +        Run dbmspool.sql as SYS 
 + 
 +    $ cd $ORACLE_HOME/rdbms/admin 
 +    $ sqlplus "/ as sysdba" 
 +    SQL> start spcreate.sql 
 + 
 +    Choose the PERFSTAT user's password 
 +    ----------------------------------- 
 +    Not specifying a password will result in the installation FAILING 
 + 
 +    Enter value for perfstat_password: perfstat 
 + 
 +    Choose the Default tablespace for the PERFSTAT user 
 +    --------------------------------------------------- 
 +    Below is the list of online tablespaces in this database which can 
 +    store user data. Specifying the SYSTEM tablespace for the user'
 +    default tablespace will result in the installation FAILING, as 
 +    using SYSTEM for performance data is not supported. 
 + 
 +    Choose the PERFSTAT users's default tablespace. This is the tablespace 
 +    in which the STATSPACK tables and indexes will be created. 
 + 
 +    TABLESPACE_NAME CONTENTS STATSPACK DEFAULT TABLESPACE 
 +    ------------------------------ --------- ---------------------------- 
 +    PERFSTAT PERMANENT 
 +    SYSAUX PERMANENT * 
 +    USERS PERMANENT 
 + 
 +    Pressing <return> will result in STATSPACK's recommended default 
 +    tablespace (identified by *) being used. 
 + 
 +    Enter value for default_tablespace: PERFSTAT 
 + 
 +    Choose the Temporary tablespace for the PERFSTAT user 
 +    ----------------------------------------------------- 
 +    Below is the list of online tablespaces in this database which can 
 +    store temporary data (e.g. for sort workareas). Specifying the SYSTEM 
 +    tablespace for the user's temporary tablespace will result in the 
 +    installation FAILING, as using SYSTEM for workareas is not supported. 
 + 
 +    Choose the PERFSTAT user's Temporary tablespace. 
 + 
 +    TABLESPACE_NAME CONTENTS DB DEFAULT TEMP TABLESPACE 
 +    ------------------------------ --------- -------------------------- 
 +    TEMP TEMPORARY * 
 + 
 +    Pressing <return> will result in the database's default Temporary 
 +    tablespace (identified by *) being used. 
 + 
 +    Enter value for temporary_tablespace: TEMP 
 + 
 +    ..... 
 +    ..... 
 +    Creating Package STATSPACK... 
 + 
 +    Package created. 
 + 
 +    No errors. 
 +    Creating Package Body STATSPACK... 
 + 
 +    Package body created. 
 + 
 +    No errors. 
 + 
 +    NOTE: 
 +    SPCPKG complete. Please check spcpkg.lis for any errors. 
 + 
 +    Check the Logfiles: spcpkg.lis, spctab.lis, spcusr.lis 
 + 
 +Adjusting the STATSPACK Collection Level 
 + 
 +    STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table. 
 + 
 +    SQL> SELECT * FROM stats$level_description ORDER BY snap_level; 
 +    Level 0   This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.
 +    Level 5   This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.
 +    Level 6   This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.
 +    Level 7   This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.
 +    Level 10  This level includes capturing Child Latch statistics, along with all data captured by lower levels.
 + 
 +    You can change the default level of a snapshot with the statspack.snap function. The i_modify_parameter => 'true' changes the level permanent for all snapshots in the future. 
 + 
 +    SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true'); 
 + 
 +Create, View and Delete Snapshots 
 + 
 +    sqlplus perfstat/perfstat 
 +    SQL> exec statspack.snap; 
 +    SQL> select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS'
 +         "Date/Time" from stats$snapshot,v$database; 
 + 
 +    NAME         SNAP_ID Date/Time 
 +    --------- ---------- ------------------- 
 +    AKI1               4 14.11.2004:10:56:01 
 +    AKI1               1 13.11.2004:08:48:47 
 +    AKI1               2 13.11.2004:09:00:01 
 +    AKI1               3 13.11.2004:09:01:48 
 + 
 +    SQL> @?/rdbms/admin/sppurge; 
 +    Enter the Lower and Upper Snapshot ID 
 + 
 +Create the Report 
 + 
 +    sqlplus perfstat/perfstat 
 +    SQL> @?/rdbms/admin/spreport.sql 
 + 
 +Statspack at a Glance 
 + 
 +    What if you have this long STATSPACK report and you want to figure out if everything is running smoothly? Here, we will review what we look for in the report, section by section. We will use an actual STATSPACK report from our own Oracle 10g system. 
 + 
 +Statspack Report Header 
 + 
 +    STATSPACK report for 
 + 
 +    DB Name         DB Id    Instance     Inst Num Release     RAC Host 
 +    ------------ ----------- ------------ -------- ----------- --- ---------------- 
 +    AKI1          2006521736 AKI1                1 10.1.0.2.0  NO  akira 
 + 
 +                  Snap Id     Snap Time      Sessions Curs/Sess Comment 
 +                --------- ------------------ -------- --------- ------------------- 
 +    Begin Snap:         5 14-Nov-04 11:18:00       15      14.3 
 +      End Snap:         6 14-Nov-04 11:33:00       15      10.2 
 +       Elapsed:                15.00 (mins) 
 + 
 +    Cache Sizes (end) 
 +    ~~~~~~~~~~~~~~~~~ 
 +                   Buffer Cache:        24M      Std Block Size:         4K 
 +               Shared Pool Size:       764M          Log Buffer:     1,000K 
 + 
 +    Note that this section may appear slightly different depending on your version of Oracle. For example, the Curs/Sess column, which shows the number of open cursors per session, is new with Oracle9i (an 8i Statspack report would not show this data). 
 + 
 +    Here, the item we are most interested in is the elapsed time. We want that to be large enough to be meaningful, but small enough to be relevant (15 to 30 minutes is OK). If we use longer times, we begin to lose the needle in the haystack. 
 + 
 +Statspack Load Profile 
 + 
 +    Load Profile 
 +    ~~~~~~~~~~~~                            Per Second       Per Transaction 
 +                                       ---------------       --------------- 
 +                      Redo size:            425,649.84         16,600,343.64 
 +                  Logical reads:              1,679.69             65,508.00 
 +                  Block changes:              2,546.17             99,300.45 
 +                 Physical reads:                 77.81              3,034.55 
 +                Physical writes:                 78.35              3,055.64 
 +                     User calls:                  0.24                  9.55 
 +                         Parses:                  2.90                113.00 
 +                    Hard parses:                  0.16                  6.27 
 +                          Sorts:                  0.76                 29.82 
 +                         Logons:                  0.01                  0.36 
 +                       Executes:                  4.55                177.64 
 +                   Transactions:                  0.03 
 + 
 +      % Blocks changed per Read:  151.59    Recursive Call %:    99.56 
 +     Rollback per transaction %:    0.00       Rows per Sort:    65.61 
 + 
 +    Here, we are interested in a variety of things, but if we are looking at a "health check", three items are important: 
 + 
 +        The Hard parses (we want very few of them) 
 +        Executes (how many statements we are executing per second / transaction) 
 +        Transactions (how many transactions per second we process). 
 + 
 +    This gives an overall view of the load on the server. In this case, we are looking at a very good hard parse number and a fairly light system load (1 - 4 transactions per second is low). 
 + 
 +Statspack Instance Efficiency Percentage 
 + 
 +    Next, we move onto the Instance Efficiency Percentages section, which includes perhaps the only ratios we look at in any detail: 
 + 
 +    Instance Efficiency Percentages (Target 100%) 
 +    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
 +                Buffer Nowait %:  100.00       Redo NoWait %:   99.99 
 +                Buffer  Hit   %:   95.39    In-memory Sort %:  100.00 
 +                Library Hit   %:   99.42        Soft Parse %:   94.45 
 +             Execute to Parse %:   36.39         Latch Hit %:  100.00 
 +    Parse CPU to Parse Elapsd %:   59.15     % Non-Parse CPU:   99.31 
 + 
 +     Shared Pool Statistics        Begin   End 
 +                                   ------  ------ 
 +                 Memory Usage %:   10.28   10.45 
 +        % SQL with executions>1:   70.10   71.08 
 +      % Memory for SQL w/exec>1:   44.52   44.70 
 + 
 +    The three in bold are the most important: Library Hit, Soft Parse % and Execute to Parse. All of these have to do with how well the shared pool is being utilized. Time after time, we find this to be the area of greatest payback, where we can achieve some real gains in performance. 
 + 
 +    Here, in this report, we are quite pleased with the Library Hit and the Soft Parse % values. If the library Hit ratio was low, it could be indicative of a shared pool that is too small, or just as likely, that the system did not make correct use of bind variables in the application. It would be an indicator to look at issues such as those. 
 + 
 +OLTP System 
 + 
 +    The Soft Parse % value is one of the most important (if not the only important) ratio in the database. For a typical OLTP system, it should be as near to 100% as possible. You quite simply do not hard parse after the database has been up for a while in your typical transactional / general-purpose database. The way you achieve that is with bind variables. In a regular system like this, we are doing many executions per second, and hard parsing is something to be avoided. 
 + 
 +Data Warehouse 
 + 
 +    In a data warehouse, we would like to generally see the Soft Parse ratio lower. We don't necessarily want to use bind variables in a data warehouse. This is because they typically use materialized views, histograms, and other things that are easily thwarted by bind variables. In a data warehouse, we may have many seconds between executions, so hard parsing is not evil; in fact, it is good in those environments. 
 + 
 +The moral of this is ... 
 + 
 +    ... to look at these ratios and look at how the system operates. Then, using that knowledge, determine if the ratio is okay given the conditions. If we just said that the execute-to-parse ratio for your system should be 95% or better, that would be unachievable in many web-based systems. If you have a routine that will be executed many times to generate a page, you should definitely parse once per page and execute it over and over, closing the cursor if necessary before your connection is returned to the connection pool. 
 + 
 +Statspack Top 5 Timed Events 
 + 
 +    Moving on, we get to the Top 5 Timed Events section (in Oracle9i Release 2 and later) or Top 5 Wait Events (in Oracle9i Release 1 and earlier). 
 + 
 +    Top 5 Timed Events 
 +    ~~~~~~~~~~~~~~~~~~                                                      % Total 
 +    Event                                               Waits    Time (s) Call Time 
 +    -------------------------------------------- ------------ ----------- --------- 
 +    CPU time                                                          122     91.65 
 +    db file sequential read                             1,571                1.61 
 +    db file scattered read                              1,174                1.59 
 +    log file sequential read                              342                1.39 
 +    control file parallel write                           450                1.39 
 +              ------------------------------------------------------------- 
 +    Wait Events  DB/Inst: AKI1/AKI1  Snaps: 5-6 
 + 
 +    -> s  - second 
 +    -> cs - centisecond -     100th of a second 
 +    -> ms - millisecond -    1000th of a second 
 +    -> us - microsecond - 1000000th of a second 
 +    -> ordered by wait time desc, waits desc (idle events last) 
 + 
 +    This section is among the most important and relevant sections in the Statspack report. Here is where you find out what events (typically wait events) are consuming the most time. In Oracle9i Release 2, this section is renamed and includes a new event: CPU time. 
 + 
 +        CPU time is not really a wait event (hence, the new name), but rather the sum of the CPU used by this session, or the amount of CPU time used during the snapshot window. In a heavily loaded system, if the CPU time event is the biggest event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan should have been used), which could be the cause of the bottleneck. 
 + 
  +        Db file sequential read - This wait event is generated by single-block reads, typically index lookups or row fetches by ROWID. A large value here usually points to unselective indexes or poor join orders rather than to TEMP activity. (Waits on writes to TEMP space during direct loads or parallel DML (PDML) show up as direct path write temp instead; those can be reduced by tuning the PGA_AGGREGATE_TARGET parameter.)
 + 
 +        Db file scattered read - Next is the db file scattered read wait value. That generally happens during a full scan of a table. You can use the Statspack report to help identify the query in question and fix it. 
 + 
 +SQL ordered by Gets 
 + 
  +    Here you will find the SQL statements that performed the most buffer gets (logical reads). Statements with a high number of gets per execution are usually the best tuning candidates.
 + 
 +    SQL ordered by Gets  DB/Inst: AKI1/AKI1  Snaps: 5-6 
 +    -> Resources reported for PL/SQL code includes the resources used by all SQL 
 +       statements called by the code. 
 +    -> End Buffer Gets Threshold:     10000 Total Buffer Gets:         720,588 
 +    -> Captured SQL accounts for    3.1% of Total Buffer Gets 
 +    -> SQL reported below exceeded  1.0% of Total Buffer Gets 
 + 
 +                                                         CPU      Elapsd     Old 
 +      Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value 
 +    --------------- ------------ -------------- ------ -------- --------- ---------- 
 +             16,926            1       16,926.0    2.3     2.36      3.46 1279400914 
 +    Module: SQL*Plus 
 +    create table test as select * from all_objects 
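  + 
  +    The same information can be pulled live from the shared pool. A sketch against V$SQL (column names as of 10g and later):
  + 
  +    SELECT sql_id, buffer_gets, executions,
  +           ROUND(buffer_gets / NULLIF(executions, 0), 1) AS gets_per_exec,
  +           SUBSTR(sql_text, 1, 60) AS sql_text
  +      FROM (SELECT * FROM v$sql ORDER BY buffer_gets DESC)
  +     WHERE ROWNUM <= 10;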
 + 
 +Tablespace IO Stats 
 + 
 +    Tablespace 
 +    ------------------------------ 
 +                     Av      Av     Av                    Av        Buffer Av Buf 
  +             Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s    Waits Wt(ms) 
 +    -------------- ------- ------ ------- ------------ -------- ---------- ------ 
 +    TAB      1,643          1.0    19.2       16,811       39          0    0.0 
 +    UNDO       166          0.5     1.0        5,948       14          0    0.0 
 +    SYSTEM     813          2.5     1.6          167        0          0    0.0 
 +    STATSPACK  146          0.3     1.1          277        1          0    0.0 
 +    SYSAUX      18          0.0     1.0           29        0          0    0.0 
 +    IDX         18          0.0     1.0           18        0          0    0.0 
 +    USER        18          0.0     1.0           18        0          0    0.0 
 +              ------------------------------------------------------------- 
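  + 
  +    To drill down from tablespace to datafile level, V$FILESTAT can be joined to DBA_DATA_FILES. A sketch (READTIM is in centiseconds, hence the multiplication by 10 to get milliseconds):
  + 
  +    SELECT d.tablespace_name, d.file_name, f.phyrds, f.phywrts,
  +           ROUND(f.readtim * 10 / NULLIF(f.phyrds, 0), 1) AS avg_read_ms
  +      FROM v$filestat f
  +      JOIN dba_data_files d ON d.file_id = f.file#
  +     ORDER BY f.phyrds DESC;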
 + 
 +Rollback Segment Stats 
 + 
 +    ->A high value for "Pct Waits" suggests more rollback segments may be required 
 +    ->RBS stats may not be accurate between begin and end snaps when using Auto Undo 
  +      management, as RBS may be dynamically created and dropped as needed 
 + 
 +            Trans Table       Pct   Undo Bytes 
 +    RBS No      Gets        Waits     Written        Wraps  Shrinks  Extends 
 +    ------ -------------- ------- --------------- -------- -------- -------- 
  +         0            8.0    0.00                       0        0        0 
  +         1        3,923.0    0.00      14,812,586      15        0       14 
  +         2        5,092.0    0.00      19,408,996      19        0       19 
  +         3          295.0    0.00         586,760       1        0        0 
  +         4        1,312.0    0.00       4,986,920       5        0        5 
  +         5            9.0    0.00                       0        0        0 
  +         6            9.0    0.00                       0        0        0 
  +         7            9.0    0.00                       0        0        0 
  +         8            9.0    0.00                       0        0        0 
  +         9            9.0    0.00                       0        0        0 
  +        10            9.0    0.00                       0        0        0 
 +              ------------------------------------------------------------- 
 + 
 +Rollback Segment Storage 
 + 
 +    ->Optimal Size should be larger than Avg Active 
 + 
 +    RBS No    Segment Size      Avg Active    Optimal Size    Maximum Size 
 +    ------ --------------- --------------- --------------- --------------- 
  +         0         364,544                                     364,544 
  +         1      17,952,768       8,343,482                  17,952,768 
  +         2      25,292,800      11,854,857                  25,292,800 
  +         3       4,321,280         617,292                   6,418,432 
  +         4       8,515,584       1,566,623                   8,515,584 
  +         5         126,976                                     126,976 
  +         6         126,976                                     126,976 
  +         7         126,976                                     126,976 
  +         8         126,976                                     126,976 
  +         9         126,976                                     126,976 
  +        10         126,976                                     126,976 
 +              ------------------------------------------------------------- 
 + 
 +Generate Execution Plan for given SQL statement 
 + 
  +    If you have identified one or more problematic SQL statements, you may want to check their execution plans. Note the "Old Hash Value" from the report above (1279400914), then execute the script to generate the execution plan.
 + 
 +    sqlplus perfstat/perfstat 
 +    SQL> @?/rdbms/admin/sprepsql.sql 
 +    Enter the Hash Value, in this example: 1279400914 
 + 
 +    SQL Text 
 +    ~~~~~~~~ 
 +    create table test as select * from all_objects 
 + 
 +    Known Optimizer Plan(s) for this Old Hash Value 
 +    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
 +    Shows all known Optimizer Plans for this database instance, and the Snap Id's 
 +    they were first found in the shared pool.  A Plan Hash Value will appear 
 +    multiple times if the cost has changed 
 +    -> ordered by Snap Id 
 + 
 +      First        First          Plan 
 +     Snap Id     Snap Time     Hash Value        Cost 
 +    --------- --------------- ------------ ---------- 
 +            6 14 Nov 04 11:26   1386862634        52 
 + 
 +    Plans in shared pool between Begin and End Snap Ids 
 +    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
 +    Shows the Execution Plans found in the shared pool between the begin and end 
 +    snapshots specified.  The values for Rows, Bytes and Cost shown below are those 
 +    which existed at the time the first-ever snapshot captured this plan - these 
 +    values often change over time, and so may not be indicative of current values 
 +    -> Rows indicates Cardinality, PHV is Plan Hash Value 
 +    -> ordered by Plan Hash Value 
 + 
 +    -------------------------------------------------------------------------------- 
  +    | Operation                      | PHV/Object Name     | Rows | Bytes |  Cost |
  +    --------------------------------------------------------------------------------
  +    |CREATE TABLE STATEMENT          |----- 1386862634 ----|      |       |    52 |
  +    |LOAD AS SELECT                  |                     |      |       |       |
  +    | VIEW                           |                     |   1K |  216K |    44 |
  +    |  FILTER                        |                     |      |       |       |
  +    |   HASH JOIN                    |                     |   1K |  151K |    38 |
  +    |    TABLE ACCESS FULL           |USER$                |   29 |   464 |     2 |
  +    |    TABLE ACCESS FULL           |OBJ$                 |   3K |  249K |    35 |
  +    |   TABLE ACCESS BY INDEX ROWID  |IND$                 |    1 |     7 |     2 |
  +    |    INDEX UNIQUE SCAN           |I_IND1               |    1 |       |     1 |
  +    |   NESTED LOOPS                 |                     |    5 |   115 |    16 |
  +    |    INDEX RANGE SCAN            |I_OBJAUTH1           |    1 |    10 |     2 |
  +    |    FIXED TABLE FULL            |X$KZSRO              |    5 |    65 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   FIXED TABLE FULL             |X$KZSPR              |    1 |    26 |    14 |
  +    |   VIEW                         |                     |    1 |    13 |     2 |
  +    |    FAST DUAL                   |                     |    1 |       |     2 |
  +    --------------------------------------------------------------------------------
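  + 
  +    On Oracle 10g and later, you can also get the plan of a statement still in the shared pool directly with DBMS_XPLAN, using the sql_id instead of the old hash value; a sketch:
  + 
  +    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));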
 + 
 +Resolving Your Wait Events 
 + 
 +    The following are 10 of the most common causes for wait events, along with explanations and potential solutions: 
 + 
 +1. DB File Scattered Read 
 + 
 +    This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list. 
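  + 
  +    One way to see which objects are currently being full-scanned by statements in the shared pool is to query V$SQL_PLAN. A sketch (10g and later; 9i has HASH_VALUE instead of SQL_ID):
  + 
  +    SELECT object_owner, object_name, COUNT(DISTINCT sql_id) AS statements
  +      FROM v$sql_plan
  +     WHERE operation = 'TABLE ACCESS' AND options = 'FULL'
  +       AND object_owner NOT IN ('SYS', 'SYSTEM')
  +     GROUP BY object_owner, object_name
  +     ORDER BY statements DESC;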
 + 
 +2. DB File Sequential Read 
 + 
 +    This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL. Check to ensure that index scans are necessary, and check join orders for multiple table joins. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that could cause high wait numbers for sequential reads. They can also show up as direct path read/write waits. 
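  + 
  +    Segment-level statistics (available in 9i Release 2 and later) show which tables and indexes the single-block reads are actually hitting; a sketch:
  + 
  +    SELECT owner, object_name, object_type, value AS physical_reads
  +      FROM (SELECT * FROM v$segment_statistics
  +             WHERE statistic_name = 'physical reads'
  +             ORDER BY value DESC)
  +     WHERE ROWNUM <= 10;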
 + 
 +3. Free Buffer 
 + 
 +    This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the DB_BUFFER_CACHE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks. 
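  + 
  +    If all SQL is tuned and DBWR throughput is the limit, one knob is the number of database writer processes; a sketch (DB_WRITER_PROCESSES is a static parameter, so this assumes an spfile is in use and requires an instance restart):
  + 
  +    ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;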
 + 
 +4. Buffer Busy 
 + 
  +    This is a wait for a buffer that is being used in an unshareable way or is being read into the buffer cache. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out if the wait is on a segment header. If this is the case, increase the freelist groups or increase the pctused to pctfree gap. If the wait is on an undo header, you can address this by adding rollback segments; if it's on an undo block, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. If the wait is on a data block, you can move data to another block to avoid this hot block, increase the freelists on the table, or use Locally Managed Tablespaces (LMTs). If it's on an index block, you should rebuild the index, partition the index, or use a reverse key index. 
  + 
  +    To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it's not as "hot." When DML (insert/update/delete) occurs, Oracle Database writes information into the block, including a list of all users who are "interested" in the state of the block (the Interested Transaction List, ITL). To decrease waits in this area, you can increase the initrans, which creates space in the block for multiple ITL slots. You can also increase the pctfree on the table where this block exists (this allows the ITL information to grow up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified). 
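  + 
  +    The class of block behind the buffer busy waits can be read directly from V$WAITSTAT; a sketch:
  + 
  +    SELECT class, count, time
  +      FROM v$waitstat
  +     WHERE count > 0
  +     ORDER BY count DESC;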
 + 
 +5. Latch Free 
 + 
  +    Latches are low-level queuing mechanisms (more accurately, they are mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chain). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue. 
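  + 
  +    Per-latch miss ratios can be checked in V$LATCH; a sketch:
  + 
  +    SELECT name, gets, misses, sleeps,
  +           ROUND(misses * 100 / NULLIF(gets, 0), 2) AS miss_pct
  +      FROM v$latch
  +     WHERE misses > 0
  +     ORDER BY misses DESC;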
 + 
 +6. Enqueue 
 + 
 +    An enqueue is a lock that protects a shared resource. Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO (first in, first out). Note that Oracle's latching mechanism is not FIFO. Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue. The ST enqueue is used for space management and allocation for dictionary-managed tablespaces. Use LMTs, or try to preallocate extents or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait. TX4s are the most common enqueue waits. TX4 enqueue waits are usually the result of one of three issues. The first issue is duplicates in a unique index; you need to commit/rollback to free the enqueue. The second is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third and most likely issue is when multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table. Finally, TM enqueues occur during DML to prevent DDL to the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue. 
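  + 
  +    A per-type breakdown of enqueue activity is available in V$ENQUEUE_STAT (9i and later); a sketch:
  + 
  +    SELECT eq_type, total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time
  +      FROM v$enqueue_stat
  +     ORDER BY cum_wait_time DESC;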
 + 
 +7. Log Buffer Space 
 + 
 +    This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow. To address this problem, increase the size of the log files, or increase the size of the log buffer, or get faster disks to write to. You might even consider using solid-state disks, for their high speed. 
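  + 
  +    The statistic "redo log space requests" in V$SYSSTAT counts how often sessions had to wait for space in the log buffer or for a log switch; a sketch:
  + 
  +    SELECT name, value
  +      FROM v$sysstat
  +     WHERE name IN ('redo log space requests', 'redo buffer allocation retries');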
 + 
 +8. Log File Switch 
 + 
  +    All commit requests are waiting for "log file switch (archiving needed)" or "log file switch (checkpoint incomplete)." Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if the DBWR is the problem. 
 + 
 +9. Log File Sync 
 + 
 +    When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete. To reduce wait events here, try to commit more records (try to commit a batch of 50 instead of one at a time, for example). Put redo logs on a faster disk, or alternate redo logs on different physical disks, to reduce the archiving effect on LGWR. Don't use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information. 
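  + 
  +    Comparing log file sync with log file parallel write shows whether the time is really being spent in the redo log I/O itself; a sketch (TIME_WAITED is in centiseconds):
  + 
  +    SELECT event, total_waits, time_waited,
  +           ROUND(time_waited * 10 / NULLIF(total_waits, 0), 1) AS avg_wait_ms
  +      FROM v$system_event
  +     WHERE event IN ('log file sync', 'log file parallel write');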
 + 
  +10. Idle Events 
 + 
 +    There are several idle wait events listed after the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table. 
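  + 
  +    The list of events Statspack treats as idle can be inspected in the PERFSTAT schema; a sketch:
  + 
  +    SELECT event FROM perfstat.stats$idle_event ORDER BY event;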
 + 
 +Remove STATSPACK from the Database 
 + 
  +    Once you are done with STATSPACK, you can remove its objects and the PERFSTAT tablespace from the database. 
 + 
 +    sqlplus "/ as sysdba" 
 +    SQL> @?/rdbms/admin/spdrop.sql 
 +    SQL> DROP TABLESPACE perfstat INCLUDING CONTENTS AND DATAFILES; 
 +</code> 
 +=== References === 
 +  *  [[http://www.dba-oracle.com/plsql/t_plsql_v$.htm|http://www.dba-oracle.com/plsql/t_plsql_v$.htm]] 
 + 
 +==== Help with Oracle Trace File Analyser (TFA) ==== 
 + 
  +<code>
  +find current running sqls 
 + 
 +Find active sessions in oracle database 
  +Find wait events in database 
 +Find sessions generating undo 
 +Find the temp usage of the sessions  
 +Find sessions generating lot of redo 
 +Monitor tablespace usage  
 +Script to Monitor undo tablespace usage 
 +Monitor TEMP tablespace usage  
 +Find blocking sessions  
 +Find long running operations 
 +Find locks present in database 
 +Find queries triggered from a procedure 
 +Get sid from os pid  
 +Kill all sessions of a sql_id 
  +Kill all sessions of a user 
 +get parallel query detail 
 +Kill snipped session in db 
 +Top Query with high elapsed time  
 +Monitor parallel queries 
 +Find the locked objects 
 +Check open cursors 
 +Session login history from ASH 
 +Buffer Cache hit ratio 
 +Find top disk_reads by an user 
 +Get os pid from sid 
 +Get active sid of a pl/sql object 
 +Find buffer cache usage 
  +Monitor rollback transactions 
 +Find column usage statistics 
 +Get background process details 
  +Is Oracle DB 32-bit or 64-bit? 
 +oracle license usage info 
 +db optimizer processing rate 
 +Purge recyclebin in database 
 +DB MONITORING 
 + 
  +Explain plan of sql_id from cursor 
 + 
  +Explain plan of sql_id from AWR 
  +Get sql_text from sid 
  +Explain plan of a sql statement 
  +Explain plan of a sql baseline 
 +Get bind values of a sql_id 
 +Flush a sql query from cursor 
 +Enable trace for a sql_id 
 +10053 OPTIMIZER TRACE  
 +Enable trace for a session  
  +Tracing all sessions of a user  
 +Enable tracing for a listener 
 +execution detail of a sql_id in cursor 
 +Pga usage by sessions 
 +segments with high physical read 
 +I/O usage of each tempfile 
 +Current SGA usage  
 +Top running queries from ASH 
 +Find blocking sessions from ASH 
 +Top cpu consuming sessions 
 +Sessions holding library cache lock 
 +Objects locked by library cache 
 +Sessions accessing an object 
 +Sqls doing full table scan 
 +Dictionary cache hit ratio 
 +Top sql queries using literal values 
 +Objects causing flushing of shared pool 
 +Latch type and sql hash value 
 +Objects causing latch contention 
 +Queries causing high physical read 
 +Mutex sleep in database 
 +Sql tuning advisor for sql_id from cursor 
 +run sga target advisory 
 +Run shared pool advisory 
 +Generate addm report 
 +DATABASE INFO 
 + 
 +Get redo log member info 
 + 
 +Get DDL of all tablespaces  
 +Get DDL of all privileges granted to user 
 +Get size of the database  
 +View hidden parameter setting 
 +Get ACL details in database  
 +Archive generation per hour 
 +Find active transactions in db 
 +Find who locked your account 
 +Find duplicate rows in table 
 +Database growth per month 
 +generate resize datafile script without ORA-03297 error 
 +Get database uptime 
  +SCN to timestamp and vice versa 
 +Disable/enable all triggers of schema 
  +Get row_count of all the tables of a schema 
 +Spool sql query output to HTML 
 +Monitor index usage 
 +Get installed sqlpatches in db  
 +Cleanup orphaned datapump jobs 
 +Get Alert log location in db 
 +Installed RDBMS components 
 +Characterset info of database 
 +View/modify AWR retention 
 +Find optimal undo retention size 
 +Purge old awr snapshots 
 +Modify moving window size 
 +Open database link information 
 +utilization of current redo log ( in % ) 
 +Generate multiple AWR report 
 +Table not having index on fk column 
 +Get cpu memory info of db server 
 +Get database incarnation info 
 +View timezone info in db 
 +</code> 
 + 