Statspack explanation

Does anyone know where I can find an explanation of most of the statistics that come with the Statspack report?
Thanks in advance
Hekan

Hi,
You could check out the Statspack Viewer utility, which provides charting, a GUI and a number of other useful features for STATSPACK, here:
http://www.statsviewer.narod.ru

Similar Messages

  • Importing Statistics - Oracle 11g upgrade

    Hi,
    We are in the middle of planning the migration of an Oracle 9.2.0.8 database hosted on HP-UX to Oracle 11gR2 on Linux. The database size is 1 TB (400GB of table data & 600GB of index data).
    Please let us know whether we can use the option of importing/exporting the statistics from the Oracle 9i database to the Oracle 11g database. The database is highly OLTP/batch. Will there be any query performance problems due to the statistics being imported from Oracle 9i into Oracle 11g?
    Any suggestions are welcome, and let me know if you need any more information.
    thanks,
    Mahesh

    Hello,
    Please let us know whether we can use the option of import/export the statistics from the Oracle 9i to the Oracle 11g database?
    If I were you, once the data are imported into the 11g database I would refresh the statistics by using the DBMS_STATS package.
    Then, you can test the most common queries on the new database. If some performance troubles appear, you can use classical tools (Statspack, Explain Plan, TKPROF, ...) or some useful tools you have in 11g such as the SQL Tuning Advisor or SQL Access Advisor:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/sql_tune.htm#PFGRF028
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/advisor.htm#PFGRF008
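    As a minimal sketch of that DBMS_STATS refresh (the schema name APPUSER and the parallel degree are placeholders, adjust to your environment):
    BEGIN
      -- gather fresh optimizer statistics for the imported schema, indexes included
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APPUSER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE,
        degree           => 4);
    END;
    /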
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Statspack with explain plan.

    The goal is to obtain the overall system statistics and explain plans for the entire system.
    1. The only tool that I can think of is "StatsPack".
    2. With AWR there are 2 issues:-
    a. It is not free
    b. It does not give Explain plan output.
    3. We can use 10046 / 10053 traces; however, they won't give you as comprehensive an output as Statspack, and you would also have to enable tracing for each individual application while it is running, grab the trace file, and look at the report.
    4. We can also use stored outlines to preserve the plan; this was primarily for Oracle version 10g.
    5. The last option we have is SQL plan baselines. This is good; however, it does not tell me conclusively how my system was executing a SQL statement, let's say, 10 days back. It won't accept the new SQL plan unless we promote it, and even then there is no guarantee that the underlying plan will not change.
    So far statspack with level 6 appears to be the only solution.
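    For reference, a minimal sketch of taking a level 6 snapshot, which is the level at which STATSPACK starts collecting plan information for high-load SQL (run as PERFSTAT; defaults as created by spcreate.sql assumed):
    -- one-off snapshot that captures execution plans
    exec statspack.snap(i_snap_level => 6);
    -- or make level 6 the default for all future snapshots
    exec statspack.modify_statspack_parameter(i_snap_level => 6);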

    Personally I'd spend the money and get AWR.
    The problem with StatsPack is that when you run it after an issue, it is too late to capture anything of value. With AWR, snapshots are taken as often as you wish, 24 hours a day, and then when someone tells you there was a problem at 4:15 in the morning ... you have something to work with.

  • Explain statspack values for tablespace & file IO

    10.2.0.2 aix 5.2 64bit
    In the Tablespace IO Stats & File IO Stats sections of Statspack and AWR reports, can someone help clear up a little confusion I have with the values for AV Reads/s & AV Rd(ms)? I'll reference some values I have from one of my reports over a 1 hour snapshot period, with the first three columns being reads, av rd/s, av rd(ms) respectively for both sections.
    For Tablespace IO I have the following.
    PRODGLDTAI
    466,879 130 3.9 1.0 8,443 2 0 0.0
    For File IO I have the following for each file within this tablespace.
    PRODGLDTAI /jdb10/oradata/jde/b7333/prodgldtai04.dbf
    113,530 32 2.6 1.0 1,302 0 0 0.0
    PRODGLDTAI /jdb14/oradata/jde/b7333/prodgldtai03.dbf
    107,878 30 1.6 1.0 1,898 1 0 0.0
    PRODGLDTAI /jdb5/oradata/jde/b7333/prodgldtai01.dbf
    114,234 32 5.8 1.0 2,834 1 0 0.0
    PRODGLDTAI /jdb5/oradata/jde/b7333/prodgldtai02.dbf
    131,237 36 5.2 1.0 2,409 1 0 0.0
    From this I can calculate that there were on average 129.68 reads every second for the tablespace and that matches what is listed. But where does the av rd(ms) come from? If there are 1000 milli-seconds in a second and there were 130 reads per second, doesn't that work out to 7.6 ms per read?
    What exactly is av rd(ms)? Is it how many milliseconds it takes on average for 1 read? I've read in the Oracle Performance Tuning doc that it shouldn't be higher than 20. What exactly is this statistic? Also, we are currently looking at the purchase of a SAN and we were told that value shouldn't be above 10; is that just a matter of opinion? Would these values be kind of useless on tablespaces and datafiles that aren't very active over an hour's period of time?
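    For what it's worth, a hedged sketch of where those columns come from: the report works off the deltas of the cumulative file I/O counters (PHYRDS, READTIM) between the two snapshots, with READTIM kept in centiseconds, so Av Rd(ms) is total read time divided by the number of reads, not 1000 divided by reads-per-second. Something along these lines against the live counters (column names from V$FILESTAT; the Statspack table STATS$FILESTATXS is similar):
    SELECT df.name,
           fs.phyrds,
           ROUND(fs.readtim * 10 / NULLIF(fs.phyrds, 0), 1) AS av_rd_ms  -- readtim is in centiseconds
    FROM   v$filestat fs, v$datafile df
    WHERE  fs.file# = df.file#;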

  • Unique constraint violation error while executing statspack.snap

    Hi,
    I have configured a job to run the statspack snap at an interval of 20 minutes from 6:00 PM to 3:00 AM. To perform this task, I have put 2 scripts in crontab: one to execute the job at 6 PM and another to break the job at 3 AM. My Oracle version is 9.2.0.7 and the OS is AIX 5.3.
    My execute script looks like:
    sqlplus perfstat/perfstat <<EOF
    exec dbms_job.broken(341,FALSE);
    exec dbms_job.run(341);
    exit
    EOF
    The problem is that the job works fine on weekdays, but on weekends it gets aborted with the error:
    ORA-12012: error on auto execute of job 341
    ORA-00001: unique constraint (PERFSTAT.STATS$SQL_SUMMARY_PK) violated
    ORA-06512: at "PERFSTAT.STATSPACK", line 1361
    ORA-06512: at "PERFSTAT.STATSPACK", line 2471
    ORA-06512: at "PERFSTAT.STATSPACK", line 91
    ORA-06512: at line 1
    After looking on Metalink, I came to know that this is listed as bug 2784796, which was fixed in 10g.
    My question is: why is there no issue on weekdays using the same script? There is no activity on the db on weekends, and the online backup starts quite late at night.
    Thanks
    Anky

    The reasons for hitting this bug are explained in Metalink, "...cursors with same sql text (at least 31 first characters), same hash_value but a different parent cursor...", you can also find the workaround in Note:393300.1.
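    If you want to check whether you are hitting that condition, a hedged diagnostic (not the official workaround from Note:393300.1) is to look for hash values that currently have more than one parent cursor:
    SELECT hash_value, COUNT(DISTINCT address) AS parent_cursors
    FROM   v$sql
    GROUP  BY hash_value
    HAVING COUNT(DISTINCT address) > 1;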
    Enrique

  • Understanding statspack report (CPU time in top timed events)

    Hi,
    I am using Oracle 9.2.0.8 RAC on the Sun Solaris platform. I am trying to understand my DB statistics using the Statspack report below. Can you please comment on it?
    My questions/thoughts are:
    1) CPU time is in the top timed events. Does that mean I need to add CPU? Was there a CPU bottleneck?
    2) Parse CPU to Parse Elapsd %: 80.28. Does this mean I am hard parsing most of the time? How can I identify which queries are doing more hard parses? And what is meant by % Non-Parse CPU: 98.76?
    3) Memory Usage %: 96.25 96.64. It seems there is very high memory usage. Can you elaborate on what could be the reasons for this?
    4) global cache cr request shows up in both the top wait events and the top timed events. Is there some issue with RAC?
    5) Can you please explain the 5 CR Blocks Served (RAC) and 5 CU Blocks Served (RAC) sections, and Top 5 ITL Waits per
    Your help is appreciated!!
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 2,101,521.49 18,932.15
    Logical reads: 91,525.82 824.54
    Block changes: 6,720.68 60.55
    Physical reads: 5,644.92 50.85
    Physical writes: 464.97 4.19
    User calls: 922.79 8.31
    Parses: 342.37 3.08
    Hard parses: 1.52 0.01
    Sorts: 324.18 2.92
    Logons: 2.66 0.02
    Executes: 2,131.75 19.20
    Transactions: 111.00
    % Blocks changed per Read: 7.34 Recursive Call %: 78.48
    Rollback per transaction %: 22.43 Rows per Sort: 15.89
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.66 Redo NoWait %: 100.00
    Buffer Hit %: 93.86 In-memory Sort %: 100.00
    Library Hit %: 99.95 Soft Parse %: 99.56
    Execute to Parse %: 83.94 Latch Hit %: 99.79
    Parse CPU to Parse Elapsd %: 80.28 % Non-Parse CPU: 98.76
    Shared Pool Statistics Begin End
    Memory Usage %: 96.25 96.64
    % SQL with executions>1: 34.19 32.67
    % Memory for SQL w/exec>1: 39.87 40.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 10,406 42.54
    db file sequential read 1,707,372 4,282 17.51
    global cache cr request 2,566,822 2,369 9.68
    db file scattered read 1,109,892 1,719 7.03
    SQL*Net break/reset to client 17,287 1,348 5.51
    Wait Events for DB: Instance:
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    db file sequential read 1,707,372 0 4,282 3 8.5
    global cache cr request 2,566,822 3,356 2,369 1 12.8
    db file scattered read 1,109,892 0 1,719 2 5.5
    SQL*Net break/reset to clien 17,287 0 1,348 78 0.1
    buffer busy waits 312,198 11 1,082 3 1.6

    This statspack was taken for a 30-minute interval. We have 16 CPUs. We never got ORA-4031 errors.
    That means you had 16 * 30 * 60 = 28,800 seconds of CPU available during the interval but you only used 10,406, so you don't have a CPU problem.
    For Statspack documentation, you can have a look at <ORACLE_HOME>/rdbms/admin/spdoc.txt, Metalink note 228913.1, Jonathan Lewis' Scratchpad, the books recommended by Rajesh Kumar Yogi and also http://www.oracle.com/technology/deploy/performance/index.html
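    As a hedged aside on question 2: with a Soft Parse % of 99.56 you are not hard parsing much, but if you do want to find the usual culprits (statements that differ only in literal values), grouping the shared pool on a leading substring of the SQL text is a rough way to spot them; the 40-character prefix and the threshold of 50 are arbitrary:
    SELECT SUBSTR(sql_text, 1, 40) AS sql_prefix,
           COUNT(*)                AS copies
    FROM   v$sqlarea
    GROUP  BY SUBSTR(sql_text, 1, 40)
    HAVING COUNT(*) > 50
    ORDER  BY COUNT(*) DESC;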

  • Statspack Best Practices

    Hello Everyone:
    Common sense tells me that (within reason) statspack snapshots should be run fairly frequently. I have a set of users who are challenging that notion, saying that Statspack is spiking the system and slowing them down, and so they want me to take snapshots only every 12 hours.
    I remember seeing a document (I thought it was on MetaLink, but I dunno...) that spoke of best practices for Statspack snapshots. My customers want to limit me to one snapshot every 12 hours, and I contend that I might as well not run it with that window.
    Can someone point me to some best practice or other documentation that will support my contentions that:
    1) Statspack is NOT a resource hog, and
    2) twice-a-day is not going to provide meaningful data.
    Thanks,
    Mike
    P.S. If I'm all wet, and you know it, I'd like to see that documentation, too!

    Hi Mike,
    saying that Statspack is spiking the system and slowing them down
    I wrote both of the Oracle Press STATSPACK books and I've NEVER seen STATSPACK cause a burden. Remember, a "snapshot" is a simple dump of the X$ memory structures into tables, very fast . . .
    they want me to only take snapshots every 12 hours.
    Why bother? STATSPACK and AWR reports are elapsed-time reports, and long-term reports are seldom useful . . . .
    An important thing to remember is that even if statistics are gathered too frequently with STATSPACK, reporting can always be done on a larger time window. For example, if snapshots are at five-minute intervals and there is a report that takes 30 minutes to run, that report may or may not be slow during any given five-minute period.
    After looking at the five-minute windows, the DBA can decide to look at a 30-minute window and then run a report that spans six individual five-minute windows. The moral of the story is to err on the side of sampling too often rather than not often enough.
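    The snapshot job itself is trivial to set up at whatever interval you settle on; a minimal sketch along the lines of the supplied spauto.sql, here submitting hourly snapshots (run as PERFSTAT, job number assigned by dbms_job):
    VARIABLE jobno NUMBER;
    BEGIN
      DBMS_JOB.SUBMIT(:jobno,
                      'statspack.snap;',
                      TRUNC(SYSDATE + 1/24, 'HH'),      -- first run at the top of the next hour
                      'TRUNC(SYSDATE + 1/24, ''HH'')'); -- then every hour, on the hour
      COMMIT;
    END;
    /
    PRINT jobno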
    I have over 600 pages dedicated to STATSPACK and AWR analysis at the link below, if you want a super-detailed explanation:
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
    I'm not as authoritative as the documentation, but even hourly snapshot durations can cause loss of performance details.
    Ah, this Oracle Best Practices document may help:
    http://www.oracle.com/technology/products/manageability/database/pdf/ow05/PS_S998_273998_106-1_FIN_v1.pdf
    "By default, every hour a snapshot of all workload and statistics information is taken and stored in the AWR. The data is retained for 7 days by default and both snapshot interval and retention settings are user-configurable."
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author

  • Explain Plan and other methods (tools) to improve performance

    Hi
    How can I use Explain Plan and other methods to improve performance?
    Where can I find a tutorial about it?
    thank you in advance

    Hi
    How can I use Explain Plan and other methods to improve performance?
    Internally there are potentially several hundred 'procedures' that can be assembled in different ways to access data. For example, when getting one row from a table, you could use an index or a full table scan.
    Explain Plan shows the [proposed] access path, or complete list of the procedures, in the order called, to do what the SQL statement is requesting.
    The objective with Explain Plan is to review the proposed access path and determine whether alternates, through the use of hints or statistics or indexes or materialized views, might be 'better'.
    You often use Wait analysis, through StatsPack, AWR/ADDM, TKProf, Trace, etc. to determine which SQL statement is likely causing a performance issue.
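    As a hedged, minimal example of the mechanics (the table and predicate are made up; DBMS_XPLAN is available from 9iR2 onwards):
    EXPLAIN PLAN FOR
      SELECT * FROM emp WHERE empno = 7788;   -- any statement you want to examine
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- formats the contents of PLAN_TABLE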
    Where can I find a tutorial about it?
    Ah ... the $64K question. If we only knew ...
    There are so many variables involved, that most tutorials are nearly useless. The common approach therefore is to read - a lot. And build up your own 'interpretation' of the reading.
    Personal suggestion is to read (in order)
    1) Oracle's Database Concepts manual (described some of 'how' this is happening)
    2) Oracle's Performance Tuning manual (describes more of 'how' as related to performance and also describes some of the approaches)
    3) Tom Kyte's latest book (has a lot of demos and 'proofs' about how specific things work)
    4) Don Burleson's Statspack book (shows how to set up and do some basic interpretation)
    5) Jonathan's book (how the optimizer works - tough reading, though)
    6) any book by the Oak Table (http://oaktable.net)
    Beyond that is any book that contains the words 'Oracle' and 'Performance' in the title or description. BUT ... when reading, use truck-loads, not just grains, of salt.
    Verify everything. I have seen an incredible amount of mistakes ... I make 'em myself all the time, so I tend to recognize them when I see them. Believe nothing unless you have proven it for yourself. Even then, realize there are exceptions and boundary conditions and bugs and patches and statistics and CPU and memory and disk speed issues that will change what you have proven.
    It's not hopeless. But it is a lot of work and effort. And well rewarded, if you decide to get serious.

  • Can I get analysis of all statements in a procedure when I run statspack

    Hi,
    I am running Statspack to find out which SQL statements are taking a long time. I also have some procedures to analyze. Can I get statistics for every statement within a procedure individually, or will I get only overall statistics for the procedure? If I want stats for all statements in a procedure, which level do I need to run at? And if I want the explain plan in the stats report as well, which level of snapshots do I need to take? Please advise me in this regard.
    Thanks and Regards
    Anand

    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm#1018
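    In short, that link is about SQL trace: trace the session that runs the procedure, then format the trace file with TKPROF to get per-statement statistics and row-source plans. A hedged sketch follows (the procedure name and trace file name are placeholders). For the Statspack side of the question, snapshot level 6 or higher collects plans for high-load SQL, but it still reports statement by statement rather than per procedure call.
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;   -- or event 10046, level 8/12, to include waits/binds
    EXEC my_procedure;                    -- placeholder for the procedure you want to analyze
    ALTER SESSION SET sql_trace = FALSE;
    -- then, on the server, from the user_dump_dest directory:
    -- tkprof ora_12345.trc my_procedure.prf sys=no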

  • Oracle rownum wrong explain plan

    SCOTT@oracle10g>create table t as select * from dba_objects;
    Table created.
    SCOTT@oracle10g>alter table t modify CREATED date not null;
    Table altered.
    SCOTT@oracle10g>insert into t select * from t;
    50416 rows created.
    SCOTT@oracle10g>insert into t select * from t;
    100832 rows created.
    SCOTT@oracle10g>insert into t select * from t;
    201664 rows created.
    SCOTT@oracle10g>commit;
    Commit complete.
    SCOTT@oracle10g>create index t_created on t(created) nologging;
    Index created.
    SCOTT@oracle10g>select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    PL/SQL Release 10.2.0.3.0 - Production
    CORE    10.2.0.3.0      Production
    TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    SCOTT@oracle10g>set autot trace
    SCOTT@oracle10g>select t.owner,t.object_name   from
      2  (select rid from (
      3  select rownum rn,rid from
      4  (select rowid rid from t order by created)
      5  where rownum<100035)
      6  where rn>100000) h, t
      7  where t.rowid=h.rid;
    34 rows selected.
    Execution Plan
    Plan hash value: 3449471415
    | Id  | Operation           | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| T
    ime     |
    |   0 | SELECT STATEMENT    |           |   100K|    11M|       |  4776   (2)| 0
    0:00:58 |
    |*  1 |  HASH JOIN          |           |   100K|    11M|  3616K|  4776   (2)| 0
    0:00:58 |
    |*  2 |   VIEW              |           |   100K|  2442K|       |  1116   (2)| 0
    0:00:14 |
    |*  3 |    COUNT STOPKEY    |           |       |       |       |            |
            |
    |   4 |     VIEW            |           |   440K|  5157K|       |  1116   (2)| 0
    0:00:14 |
    |   5 |      INDEX FULL SCAN| T_CREATED |   440K|  9024K|       |  1116   (2)| 0
    0:00:14 |
    |   6 |   TABLE ACCESS FULL | T         |   440K|    39M|       |  1237   (2)| 0
    0:00:15 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="RID")
       2 - filter("RN">100000)
       3 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
           5814  consistent gets
              0  physical reads
              0  redo size
           1588  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed   
    Here Oracle doesn't choose the best execution plan. I think it's because Oracle computes a cardinality of 100K, so it doesn't choose a nested loop. Why can't Oracle compute a cardinality of 35 here?
    |*  2 |   VIEW              |           |   100K|  2442K|       |  1116   (2)| 0
    SCOTT@oracle10g>select  t.owner,t.object_name   from t where rowid in
      2      (select rid from (
      3      select rownum rn,rid from
      4      (select rowid rid from t order by created)
      5      where rownum<100035)
      6      where rn>100000)
      7 
    SCOTT@oracle10g>/
    34 rows selected.
    Execution Plan
    Plan hash value: 1566335206
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| T
    ime     |
    |   0 | SELECT STATEMENT            |           |     1 |   107 |  1586   (2)| 0
    0:00:20 |
    |   1 |  NESTED LOOPS               |           |     1 |   107 |  1586   (2)| 0
    0:00:20 |
    |   2 |   VIEW                      | VW_NSO_1  |   100K|  1172K|  1116   (2)| 0
    0:00:14 |
    |   3 |    HASH UNIQUE              |           |     1 |  2442K|            |
            |
    |*  4 |     VIEW                    |           |   100K|  2442K|  1116   (2)| 0
    0:00:14 |
    |*  5 |      COUNT STOPKEY          |           |       |       |            |
            |
    |   6 |       VIEW                  |           |   440K|  5157K|  1116   (2)| 0
    0:00:14 |
    |   7 |        INDEX FULL SCAN      | T_CREATED |   440K|  9024K|  1116   (2)| 0
    0:00:14 |
    |   8 |   TABLE ACCESS BY USER ROWID| T         |     1 |    95 |     1   (0)| 0
    0:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("RN">100000)
       5 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
            301  consistent gets
              0  physical reads
              0  redo size
           1896  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed
    SCOTT@oracle10g>select /*+ordered use_nl(t)*/ t.owner,t.object_name   from
      2  (select rid from (
      3  select rownum rn,rid from
      4  (select rowid rid from t order by created)
      5  where rownum<100035)
      6  where rn>100000) h, t
      7  where t.rowid=h.rid;
    34 rows selected.
    Execution Plan
    Plan hash value: 3976541160
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| T
    ime     |
    |   0 | SELECT STATEMENT            |           |   100K|    11M|   101K  (1)| 0
    0:20:16 |
    |   1 |  NESTED LOOPS               |           |   100K|    11M|   101K  (1)| 0
    0:20:16 |
    |*  2 |   VIEW                      |           |   100K|  2442K|  1116   (2)| 0
    0:00:14 |
    |*  3 |    COUNT STOPKEY            |           |       |       |            |
            |
    |   4 |     VIEW                    |           |   440K|  5157K|  1116   (2)| 0
    0:00:14 |
    |   5 |      INDEX FULL SCAN        | T_CREATED |   440K|  9024K|  1116   (2)| 0
    0:00:14 |
    |   6 |   TABLE ACCESS BY USER ROWID| T         |     1 |    95 |     1   (0)| 0
    0:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("RN">100000)
       3 - filter(ROWNUM<100035)
    Note
       - dynamic sampling used for this statement
    Statistics
              0  recursive calls
              0  db block gets
            304  consistent gets
              0  physical reads
              0  redo size
           1588  bytes sent via SQL*Net to client
            422  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             34  rows processed  

    jinyu wrote:
    Thanks for your great reply and posting. Could you tell me why the subquery version has the least cost here?
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 107 | 1586 (2)| 00:00:20 |
    | 1 | NESTED LOOPS | | 1 | 107 | 1586 (2)| 00:00:20 |
    | 2 | VIEW | VW_NSO_1 | 100K| 1172K| 1116 (2)| 00:00:14 |
    | 3 | HASH UNIQUE | | 1 | 2442K| | |
    |* 4 | VIEW | | 100K| 2442K| 1116 (2)| 00:00:14 |
    |* 5 | COUNT STOPKEY | | | | | |
    | 6 | VIEW | | 440K| 5157K| 1116 (2)| 00:00:14 |
    | 7 | INDEX FULL SCAN | T_CREATED | 440K| 9024K| 1116 (2)| 00:00:14 |
    | 8 | TABLE ACCESS BY USER ROWID| T | 1 | 95 | 1 (0)| 00:00:01 |
    ----------------------------------------------------------------------------------------->
    You'll notice that as a result of a "driving" IN subquery Oracle has done a hash unique operation (line 3) on the rowids produced by the subquery. At this point the optimizer has lost all knowledge of the number of distinct values for that data column in the subquery and come back with the cardinality of one. The re-appearance of 100K as the cardinality in line 2 is an error, but I don't think the optimizer has used that value in later arithmetic.
    Given the cardinality of one, the obvious path into the T table is a nested loop.
    The same type of problem appears when you use the table() operator in joins - you can use the cardinality() hint to try and tell Oracle how many rows the table() will produce, but that doesn't tell it how many distinct values there are in the join columns - and that's an important detail when you work out the join cardinality and method.
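    For reference, a rough sketch of that cardinality() hint with the table() operator (the hint is undocumented; the function name, bind and column are made up):
    select /*+ cardinality(t 100) */ t.*
    from   table(my_pipelined_fn(:p)) t   -- my_pipelined_fn and :p are hypothetical
    where  t.some_col = 1;                -- hypothetical column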
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • STATSPACK Performance Question / Discrepancy

    I'm trying to troubleshoot a performance issue and I'm having trouble interpreting the STATSPACK report. It seems like the STATSPACK report is missing information that I expect to be there. I'll explain below.
    Header
    STATSPACK report for
    Database    DB Id    Instance     Inst Num  Startup Time   Release     RAC
    ~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
              2636235846 testdb              1 30-Jan-11 16:10 11.2.0.2.0  NO
    Host Name             Platform                CPUs Cores Sockets   Memory (G)
    ~~~~ ---------------- ---------------------- ----- ----- ------- ------------
         TEST             Microsoft Windows IA (     4     2       0          3.4
    Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
    ~~~~~~~~    ---------- ------------------ -------- --------- ------------------
    Begin Snap:       3427 01-Feb-11 06:40:00       65       4.4
      End Snap:       3428 01-Feb-11 07:00:00       66       4.1
       Elapsed:      20.00 (mins) Av Act Sess:       7.3
       DB time:     146.39 (mins)      DB CPU:       8.27 (mins)
    Cache Sizes            Begin        End
    ~~~~~~~~~~~       ---------- ----------
        Buffer Cache:       192M       176M   Std Block Size:         8K
         Shared Pool:       396M       412M       Log Buffer:    10,848K
    Load Profile              Per Second    Per Transaction    Per Exec    Per Call
    ~~~~~~~~~~~~      ------------------  ----------------- ----------- -----------
          DB time(s):                7.3                2.0        0.06        0.04
           DB CPU(s):                0.4                0.1        0.00        0.00
           Redo size:            6,366.0            1,722.1
       Logical reads:            1,114.6              301.5
       Block changes:               35.8                9.7
      Physical reads:               44.9               12.1
    Physical writes:                1.5                0.4
          User calls:              192.2               52.0
              Parses:              101.5               27.5
         Hard parses:                3.6                1.0
    W/A MB processed:                0.1                0.0
              Logons:                0.1                0.0
            Executes:              115.1               31.1
           Rollbacks:                0.0                0.0
        Transactions:                3.7
    As you can see a significant amount of time was spent in database calls (DB Time) with relatively little time on CPU (DB CPU). Initially that made me think there were some significant wait events.
    Top 5 Timed Events                                                    Avg %Total
    ~~~~~~~~~~~~~~~~~~                                                   wait   Call
    Event                                            Waits    Time (s)   (ms)   Time
    log file sequential read                        48,166         681     14    7.9
    CPU time                                                       484           5.6
    db file sequential read                         35,357         205      6    2.4
    control file sequential read                    50,747          23      0     .3
    Disk file operations I/O                        16,518          18      1     .2
              -------------------------------------------------------------
    However, looking at the Top 5 Timed Events I don't see anything out of the ordinary given my normal operations. The log file sequential read may be a little slow but it doesn't make up a significant portion of the execution time.
    Based on an Excel/VB spreadsheet I wrote, which converts STATSPACK data to graphical form, I suspected that there was a wait event not listed here. So I decided to query the data directly. Here is the query and result.
    SQL> SELECT wait_class
      2       , event
      3       , delta/POWER(10,6) AS delta_sec
      4  FROM
      5  (
      6          SELECT syev.snap_id
      7               , evna.wait_class
      8               , syev.event
      9               , syev.time_waited_micro
    10               , syev.time_waited_micro - LAG(syev.time_waited_micro) OVER (PARTITION BY syev.event ORDER BY syev.snap_id) AS delta
    11          FROM   perfstat.stats$system_event syev
    12          JOIN   v$event_name                evna  ON  evna.name     = syev.event
    13          WHERE  syev.snap_id IN (3427,3428)
    14  )
    15  WHERE delta > 0
    16  ORDER BY delta DESC
    17  ;
    WAIT_CLASS               EVENT                                                                        DELTA_SEC
    Idle                      SQL*Net message from client                                                  21169.742
    Idle                      rdbms ipc message                                                            19708.390
    Application               enq: TM - contention                                                       7199.819
    Idle                      Space Manager: slave idle wait                                             3001.719
    Idle                      DIAG idle wait                                                             2382.943
    Idle                      jobq slave wait                                                            1258.829
    Idle                      smon timer                                                                 1220.902
    Idle                      Streams AQ: qmn coordinator idle wait                                      1204.648
    Idle                      Streams AQ: qmn slave idle wait                                            1204.637
    Idle                      pmon timer                                                                 1197.898
    Idle                      Streams AQ: waiting for messages in the queue                              1197.484
    Idle                      Streams AQ: waiting for time management or cleanup tasks                    791.803
    System I/O                log file sequential read                                                    681.444
    User I/O                  db file sequential read                                                     204.721
    System I/O                control file sequential read                                                 23.168
    User I/O                  Disk file operations I/O                                                     17.737
    User I/O                  db file parallel read                                                        14.536
    System I/O                log file parallel write                                                       7.618
    Commit                    log file sync                                                                 7.150
    User I/O                  db file scattered read                                                        3.488
    Idle                      SGA: MMAN sleep for component shrink                                          2.461
    User I/O                  direct path read                                                              1.621
    Other                     process diagnostic dump                                                       1.418
    ... snip ...
    So based on the above it looks like there was a significant amount of time spent in enq: TM - contention.
    Question 1
    Why does this wait event not show up in the Top 5 Timed Events section? Note that this wait event is also not listed in any of the other wait events sections either.
    Moving on, I decided to look at the Time Model Statistics
    Time Model System Stats  DB/Inst: testdb  /testdb    Snaps: 3427-3428
    -> Ordered by % of DB time desc, Statistic name
    Statistic                                       Time (s) % DB time
    sql execute elapsed time                         8,731.0      99.4
    PL/SQL execution elapsed time                    1,201.1      13.7
    DB CPU                                             496.3       5.7
    parse time elapsed                                  26.4        .3
    hard parse elapsed time                             21.1        .2
    PL/SQL compilation elapsed time                      2.8        .0
    connection management call elapsed                   0.6        .0
    hard parse (bind mismatch) elapsed                   0.5        .0
    hard parse (sharing criteria) elaps                  0.5        .0
    failed parse elapsed time                            0.0        .0
    repeated bind elapsed time                           0.0        .0
    sequence load elapsed time                           0.0        .0
    DB time                                          8,783.2
    background elapsed time                             87.1
    background cpu time                                  2.4
    Great, so it looks like I spent >99% of DB Time in SQL calls. I decided to scroll to the SQL ordered by Elapsed time section. The header information surprised me.
    SQL ordered by Elapsed time for DB: testdb    Instance: testdb    Snaps: 3427 -3
    -> Total DB Time (s):           8,783
    -> Captured SQL accounts for    4.1% of Total DB Time
    -> SQL reported below exceeded  1.0% of Total DB Time
    If I'm spending > 99% of my time in SQL, I would have expected the captured % to be higher.
    Question 2
    Am I correct in assuming that a long running SQL that started before the first snap and is still running at the end of the second snap would not display in this section?
    Question 3
    Would that answer my wait event question above? Ala, are wait events not reported until the action that is waiting (execution of a SQL statement for example) is complete?
    So I looked a few snaps past what I have posted here. I still haven't determined why the enq: TM - contention wait is not displayed anywhere in the STATSPACK reports. I did end up finding an interesting PL/SQL block that may have been causing the issues. Here is the SQL ordered by Elapsed time for a snapshot that was taken an hour after the one I posted.
    SQL ordered by Elapsed time for DB: testdb    Instance: testdb    Snaps: 3431 -3
    -> Total DB Time (s):           1,088
    -> Captured SQL accounts for ######% of Total DB Time
    -> SQL reported below exceeded  1.0% of Total DB Time
      Elapsed                Elap per            CPU                        Old
      Time (s)   Executions  Exec (s)  %Total   Time (s)  Physical Reads Hash Value
      26492.65           29     913.54 ######    1539.34             480 1013630726
    Module: OEM.CacheModeWaitPool
    BEGIN EMDW_LOG.set_context(MGMT_JOB_ENGINE.MODULE_NAME, :1); BEG
    IN MGMT_JOB_ENGINE.process_wait_step(:2);END; EMDW_LOG.set_conte
    xt; END;
    I'm still not sure if this is the problem child or not.
    I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
    Thanks!

    Centinul wrote:
    I'm still not sure if this is the problem child or not.
    I just wanted to post this to get your thoughts on how I correctly/incorrectly attacked this problem and to see if you can fill in any gaps in my understanding.
    I think you've attacked the problem well.
    It has prompted me to take a little look at what's going on, running 11.1.0.6 in my case, and something IS broken.
    The key predicate in statspack for reporting top 5 is:
                      and e.total_waits         > nvl(b.total_waits,0)
    In other words, an event gets reported if total_waits increased across the period.
    So I've been taking snapshots of v$system_event and looking at 10046 trace files at level 8. The basic test was as simple as:
    <ul>
    Session 1: lock table t1 in exclusive mode
    Session 2: lock table t1 in exclusive mode
    </ul>
    About three seconds after session 2 started to wait, v$system_event incremented total_waits (for the "enq: TM - contention" event). When I committed in session 1 the total_waits figure did not change.
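    (A minimal version of the check repeated between steps, assuming the standard v$system_event columns; time_waited is in centiseconds:)
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  event = 'enq: TM - contention';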
    Now do this after waiting across a snapshot:
    We start to wait, after three seconds we record a wait, a few minutes later perfstat takes a snapshot.
    30 minutes later "session 1" commits and our wait ends; we do not increment total_waits, but we do record 30+ minutes of wait time.
    30 minutes later perfstat takes another snapshot
    The total_waits has not changed between the start and end snapshot even though we have added 30 minutes to the "enq: TM - contention" in the interim.
    The statspack report loses our 30 minutes from the Top N.
    It's a bug - raise an SR.
    Edit: The AWR will have the same problem, of course.
    Regards
    Jonathan Lewis

  • CPU Time in Load Profile of STATSPACK

    All,
    I need a clarification on how CPU time is calculated. I have done a bit of R&D, but I didn't get a clear understanding. In SPDOC.txt (%ORACLE_HOME%\rdbms\admin\spdoc.txt) it is explained as in the text below:
    ===========================================================
    Additionally, instead of the percentage calculation being the % Total
    Wait Time (which is time for each wait event divided by the total wait
    time), the percentage calculation is now the % Total Call Time.
    Call Time is the total time spent in database calls (i.e. the total
    non-idle time spent within the database either on the CPU, or actively
    waiting).
    We compute 'Call Time' by adding the time spent on the CPU ('CPU used by
    this session' statistic) to the time used by all non-idle wait events.
    i.e.
    total call time = total CPU time + total wait time for non-idle events
    The % Total Call Time shown in the 'Top 5' heading on the summary page
    of the report, is the time for each timed event divided by the total call
    time (i.e. non-idle time).
    i.e.
    previously the calculation was:
    time for each wait event / total wait time for all events
    now the calculation is:
    time for each timed event / total call time
    MY STATSPACK REPORT:
    =============================================
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 2,741 53.50
    db file sequential read 299,449 1,063 20.75
    db file scattered read 69,389 337 6.59
    log file sync 43,220 334 6.53
    log file parallel write 86,222 246 4.81
    What is "total wait time for non-idle events" in the above definition...
    I have good idea on interpreting the Report at basic level...
    Please DO NOT treat this as an assignment or anything like that...
    Any help, would be great...
    Regards,
    ~ORA

    The time that you see in this report is a cumulative time for all the sessions that were running on this database, during the snap period.
    What is "total wait time for non-idle events" in the above definition...As you know, not all wait events are idle in nature. F.ex. SQL*Net message from client wait event is idle event (to certain extent) but db file sequential read is not (as CPU needs to generate the address and perform other computation. All such events, where the CPU processing is still needed but is not directly servicing the end user request, contribute to the total time for non-idle events.

  • Db Time in Statspack report...

    Hi ,
    I generated a statspack report of 22 minutes duration.
    In the Instance Activity Stats DB/Inst portion of the report, there are the following figures regarding the DB time statistic:
    Total Per Second Per Transaction
    530,488 400.7 1,449.4
    whereas in the Time Model System Stats DB/Inst portion of the report , there is the following:
    Time (s)
    Db Time 94.2
    When the report was generated, there were the typical sessions that Oracle creates (sys, system, etc.) and just one application user who has been running some forms developed in Oracle Forms 10g.
    How can the above figures be explained?
    Thanks , a lot
    Simon

    The figure 530,488 shows the total amount of time since the instance startup.
    The question is in which unit the Per Second value in the Instance Activity Stats DB/Inst section (i.e. the value of 400.7) is expressed - seconds, milliseconds...?
    Simon

  • Statspack-doubt about Module

    Hi,
    I am analyzing a statspack report. All queries start with a
    'Module' clause, like
    MIS_VFPRIME.exe
    Module: MIS_VFS.exe
    Module: VF_Rec_UnAdj.exe
    FC_TopUp.exe
    RUU-Utility
    Module: oracle@acdgcbsodb1b (TNS V1-V3)
    Module: C:\Documents and Settings\Administrator\Desktop\
    Module: f90runm@acdgcbsfas3a (TNS V1-V3)
    Module: oracle@PROJDBSDB2 (S003)
    Module: NPA-DPD-LPP PROCESS =
    How do I identify from where the query is executed?
    regards,
    Mat

    Hi,
    Thank you for the replay.
    We are working on Oracle9i on RH Linux AS4.
    Since the SQL is hidden within the module, you likely cannot see the originating source within a STATSPACK or AWR report, sorry.
    Then please explain: what is the meaning of the Module clause?
    MIS_VFPRIME.exe
    Module: MIS_VFS.exe
    Module: VF_Rec_UnAdj.exe
    FC_TopUp.exe
    RUU-Utility
    Module: oracle@acdgcbsodb1b (TNS V1-V3)
    Module: C:\Documents and Settings\Administrator\Desktop\
    Module: f90runm@acdgcbsfas3a (TNS V1-V3)
    Module: oracle@PROJDBSDB2 (S003)
    Module: NPA-DPD-LPP PROCESS =
    Regards,
    Mathew
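    For what it's worth, the Module value is typically whatever the client program registered for its session - many tools report their executable name automatically, and an application can set it explicitly through DBMS_APPLICATION_INFO. A hedged sketch (the module and action strings are just examples):
    BEGIN
      DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'MIS_VFS.exe',
                                       action_name => 'nightly load');  -- action text is made up
    END;
    /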

  • How to provide tuning solution from explain plan only

    Dear all,
    If I do not have any kind of access to the database and only have the explain plan with me, how can I provide performance or query tuning solutions from that?
    Regards
    Anirban

    958657,
    If I do not have any kind of access to the database and only have the explain plan with me, how can I provide performance or query tuning solutions from that?
    This is contradictory: you say you don't have access, but you have the explain plan. You won't get an explain plan until you connect to the database and run an "EXPLAIN PLAN FOR" statement for the query. So how did you get the explain plan? If it was provided to you by someone, you might request the "execution plan" for the query instead.
    Keep in mind that "explain plan" and "execution plan" are not the same.
    An explain plan is not enough for predicting the elapsed/response time of a query, as the explain plan is static, whereas the execution plan is dynamic and describes the query as it actually executed.
    Oracle provides following things for a query to diagnose its performance :
    1. Static - Explain plan  - Not enough
    2. Dynamic:  Execution plan - Run time Plan
    3. AWR / Statspack execution plan - run time from the past; this is again the dynamic execution plan of runs of the query in the past
    A tuning recommendation is possible by comparing the run time of the same query in the past with today's run time, and based on further analysis.
    A tuning recommendation is not possible if you have only the explain plan.
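    A hedged illustration of the difference (10g and later; the sql_id is a placeholder, and the run-time row counts only appear if statistics_level or the gather_plan_statistics hint allows them):
    -- the static estimate:
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE empno = :b1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- the plan actually used at run time:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0abc123def456', NULL, 'ALLSTATS LAST'));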
