Async disk IO in Top 5 timed events

Hi all,
I took a Statspack report and found that "async disk IO" is showing up in the Top 5 timed events.
Tablespace IO Stats for DB: ac  Instance: ac11  Snaps: 101 -102
->ordered by IOs (Reads + Writes) desc
Tablespace
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
data
       252,567      87    4.9     2.2       17,615        6        393   18.4
dataNRO
        97,326      34    3.1     3.7          255        0          0    0.0
dataWRO
        18,237       6    3.6     3.5          595        0          0    0.0
dataCRO
        10,157       4    4.8     1.8           63        0         22    3.2
UNDOTBS1
             3       0   10.0     1.0        4,733        2          1    0.0
dataERO
         2,906       1    4.2     4.7          237        0          0    0.0
MFXPIMA
           895       0    6.3     1.0          300        0          0    0.0
INDX
           852       0   15.7     1.0          250        0          0    0.0
SYSTEM
           228       0   11.3     1.8           72        0          0    0.0
dataHO
             1       0   50.0     1.0            1        0          0    0.0
dataOTH
             1       0    0.0     1.0            1        0          0    0.0
CWMLITE
             1       0    0.0     1.0            1        0          0    0.0
DRSYS
             1       0    0.0     1.0            1        0          0    0.0
ODM
             1       0    0.0     1.0            1        0          0    0.0
RACCONFIG
             1       0  100.0     1.0            1        0          0    0.0
TEMPSCHEMA
             1       0   90.0     1.0            1        0          0    0.0
TOOLS
             1       0    0.0     1.0            1        0          0    0.0
UNDOTBS2
             1       0   80.0     1.0            1        0          0    0.0
UNDOTBS3
             1       0   10.0     1.0            1        0          0    0.0
USERS
             1       0    0.0     1.0            1        0          0    0.0
XDB
             1       0    0.0     1.0            1        0          0    0.0
File IO Stats for DB: ac  Instance: ac11  Snaps: 101 -102
->ordered by Tablespace, File
Tablespace               Filename
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
data                   /dev/vx/rdsk/racdg/orcl_raw_data01
       252,567      87    4.9     2.2       17,615        6        393   18.4
dataCRO                /dev/vx/rdsk/racdg/orcl_raw_caddatacro
        10,157       4    4.8     1.8           63        0         22    3.2
dataERO                /dev/vx/rdsk/racdg/orcl_raw_dataero
         2,906       1    4.2     4.7          237        0          0
dataHO                 /dev/vx/rdsk/racdg/orcl_raw_dataho
             1       0   50.0     1.0            1        0          0
dataNRO                /dev/vx/rdsk/racdg/orcl_raw_datanro
        97,326      34    3.1     3.7          255        0          0
dataOTH                /dev/vx/rdsk/racdg/orcl_raw_dataoth
             1       0    0.0     1.0            1        0          0
dataWRO                /dev/vx/rdsk/racdg/orcl_raw_datawro
        18,237       6    3.6     3.5          595        0          0
CWMLITE                  /dev/vx/rdsk/racdg/orcl_raw_cwmlite
             1       0    0.0     1.0            1        0          0
DRSYS                    /dev/vx/rdsk/racdg/orcl_raw_drsys
             1       0    0.0     1.0            1        0          0
INDX                     /dev/vx/rdsk/racdg/orcl_raw_indx01
           852       0   15.7     1.0          250        0          0
MFXPIMA                  /dev/vx/rdsk/racdg/orcl_raw_mfxpima
           895       0    6.3     1.0          300        0          0
ODM                      /dev/vx/rdsk/racdg/orcl_raw_odm
             1       0    0.0     1.0            1        0          0
RACCONFIG                /dev/vx/rdsk/racdg/orcl_raw_racconfig
             1       0  100.0     1.0            1        0          0
SYSTEM                   /dev/vx/rdsk/racdg/orcl_raw_system01
           228       0   11.3     1.8           72        0          0
TEMPSCHEMA               /dev/vx/rdsk/racdg/orcl_raW_tempschema
             1       0   90.0     1.0            1        0          0
TOOLS                    /dev/vx/rdsk/racdg/orcl_raw_tools
             1       0    0.0     1.0            1        0          0
UNDOTBS1                 /dev/vx/rdsk/racdg/orcl_raw_undotbs1
             3       0   10.0     1.0        4,733        2          1    0.0
UNDOTBS2                 /dev/vx/rdsk/racdg/orcl_raw_undotbs2
             1       0   80.0     1.0            1        0          0
UNDOTBS3                 /dev/vx/rdsk/racdg/orcl_raw_example
             1       0   10.0     1.0            1        0          0
USERS                    /dev/vx/rdsk/racdg/orcl_raw_users
             1       0    0.0     1.0            1        0          0
XDB                      /dev/vx/rdsk/racdg/orcl_raw_xdb
             1       0    0.0     1.0            1        0          0
          -------------------------------------------------------------
Can anybody suggest why we are facing this problem and what the solution is? We are using a SAN, and the DB is Oracle 9.2.0.6 on a Sun box.

We have 2 CPUs on the system.
STATSPACK report for
DB Name         DB Id    Instance     Inst Num Release     Cluster Host
ac          1372079993 ac11              1 9.2.0.6.0   YES     ac1
              Snap Id     Snap Time      Sessions Curs/Sess Comment
Begin Snap:       101 01-Apr-09 13:10:01    1,112     130.8
  End Snap:       102 01-Apr-09 13:58:14    1,112     132.3
   Elapsed:               48.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:     4,288M      Std Block Size:          8K
           Shared Pool Size:       608M          Log Buffer:        977K
Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                  Redo size:             28,267.55              2,373.20
              Logical reads:              5,172.08                434.22
              Block changes:                195.00                 16.37
             Physical reads:                351.31                 29.49
            Physical writes:                  8.34                  0.70
                 User calls:                109.01                  9.15
                     Parses:                 13.71                  1.15
                Hard parses:                  0.29                  0.02
                      Sorts:                  4.53                  0.38
                     Logons:                  0.06                  0.01
                   Executes:                142.12                 11.93
               Transactions:                 11.91
  % Blocks changed per Read:    3.77    Recursive Call %:     72.11
Rollback per transaction %:    0.64       Rows per Sort:    104.78
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:    100.00
            Buffer  Hit   %:   93.21    In-memory Sort %:    100.00
            Library Hit   %:   99.62        Soft Parse %:     97.87
         Execute to Parse %:   90.35         Latch Hit %:     99.94
Parse CPU to Parse Elapsd %:   52.84     % Non-Parse CPU:     99.69
Shared Pool Statistics        Begin   End
             Memory Usage %:   90.10   90.77
    % SQL with executions>1:   71.19   72.64
  % Memory for SQL w/exec>1:   72.21   73.13
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
CPU time                                                        4,356    55.57
async disk IO                                     233,930         986    12.58
db file sequential read                           185,633         984    12.55
global cache cr request                           487,188         524     6.68
db file scattered read                            180,026         428     5.46
          -------------------------------------------------------------

Similar Messages

  • AWR Top Timed Events

    Hi,
    when I generate an AWR report I find:
    db file sequential read
    db file scattered read
    log file sync
    buffer busy waits
    How can I resolve the above wait events? Please suggest.

    Hi Srini,
    you don't need to resolve anything merely because it appears in the top-5 wait list of an AWR report. As Tanel Poder puts it, "don't let your AWR report tell you what your problem is".
    In your case: db file sequential/scattered reads are disk I/O, and it is perfectly normal for a database to be waiting on such events. However, you should check "top SQL by reads" to see if there are any statements that are causing a significant fraction of these reads (a sketch of such a query follows at the end of this reply).
    Log file sync is normally related to frequent commits, but can have other causes as well. Much more information is needed to investigate.
    Buffer busy wait indicates contention for hot blocks. If you want to reduce these events, you need to find the source of this contention, this is something AWR cannot tell you -- you have to turn to ASH and other high-resolution diagnostic tools instead.
    However, a fact that an event made it to the top-5 list is not meaningful in itself. What % of database time is it consuming? Is it big enough to justify any tuning effort?
    Best regards,
    Nikolay
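    The sketch mentioned above, run against v$sql (an AWR-based variant would use DBA_HIST_SQLSTAT); the views and columns are standard, but the top-10 cut-off is an arbitrary illustration:
    -- statements responsible for the most physical reads since they were loaded
    select *
      from (select sql_id,
                   disk_reads,
                   executions,
                   round(disk_reads / greatest(executions, 1)) reads_per_exec,
                   substr(sql_text, 1, 60) sql_text
              from v$sql
             order by disk_reads desc)
     where rownum <= 10;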

  • Understanding statspack report(CPU time in top time events)

    Hi,
    I am using Oracle 9.2.0.8 RAC on the SUN Solaris platform. I am trying to understand my DB statistics using the statspack report below. Can you please comment on it?
    My questions/thoughts are:
    1) CPU time is in the top timed events. Does that mean we need to add CPU? Was there a CPU bottleneck?
    2) Parse CPU to Parse Elapsd %: 80.28. Does this mean I am hard parsing most of the time? How can I identify which queries are doing more hard parses? What is meant by % Non-Parse CPU: 98.76?
    3) Memory Usage %: 96.25 96.64. It seems there is too much memory usage. Can you elaborate on what could be the reasons for this?
    4) global cache cr request is coming up in the top wait events and top timed events. Is there some issue with RAC?
    5) Can you please explain the "5 CR Blocks Served (RAC)", "5 CU Blocks Served (RAC)" and "Top 5 ITL Waits per" sections?
    Your help is appreciated!!
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 2,101,521.49 18,932.15
    Logical reads: 91,525.82 824.54
    Block changes: 6,720.68 60.55
    Physical reads: 5,644.92 50.85
    Physical writes: 464.97 4.19
    User calls: 922.79 8.31
    Parses: 342.37 3.08
    Hard parses: 1.52 0.01
    Sorts: 324.18 2.92
    Logons: 2.66 0.02
    Executes: 2,131.75 19.20
    Transactions: 111.00
    % Blocks changed per Read: 7.34 Recursive Call %: 78.48
    Rollback per transaction %: 22.43 Rows per Sort: 15.89
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.66 Redo NoWait %: 100.00
    Buffer Hit %: 93.86 In-memory Sort %: 100.00
    Library Hit %: 99.95 Soft Parse %: 99.56
    Execute to Parse %: 83.94 Latch Hit %: 99.79
    Parse CPU to Parse Elapsd %: 80.28 % Non-Parse CPU: 98.76
    Shared Pool Statistics Begin End
    Memory Usage %: 96.25 96.64
    % SQL with executions>1: 34.19 32.67
    % Memory for SQL w/exec>1: 39.87 40.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 10,406 42.54
    db file sequential read 1,707,372 4,282 17.51
    global cache cr request 2,566,822 2,369 9.68
    db file scattered read 1,109,892 1,719 7.03
    SQL*Net break/reset to client 17,287 1,348 5.51
    Wait Events for DB: Instance:
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    db file sequential read 1,707,372 0 4,282 3 8.5
    global cache cr request 2,566,822 3,356 2,369 1 12.8
    db file scattered read 1,109,892 0 1,719 2 5.5
    SQL*Net break/reset to clien 17,287 0 1,348 78 0.1
    buffer busy waits 312,198 11 1,082 3 1.6

    This statspack was taken over a 30 minute interval. We have 16 CPUs. We never got ORA-4031 errors.
    That means you had 16 * 30 * 60 = 28,800 seconds of CPU available during the interval but you only used 10,406, so you don't have a CPU problem.
    For Statspack documentation, you can have a look at <ORACLE_HOME>/rdbms/admin/spdoc.txt, Metalink note 228913.1, the Jonathan Lewis Scratchpad, the books recommended by Rajesh Kumar Yogi, and also http://www.oracle.com/technology/deploy/performance/index.html

  • CPU Time in Top 5 Timed Events

    Hi,
    We have a 2 node RAC database(10.2.0.3) on Solaris 10.
    There is a performance issue with the CMRO application (R12).
    In the database I see CPU time consistently as the top wait event in the Top 5 Timed Events.
    This is mostly followed by db file sequential read.
    For one of the days:
    Top 5 Timed Events
    Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
    CPU time 8,383 82.8
    db file sequential read 173,417 838 5 8.3 User I/O
    SQL*Net break/reset to client 26,015 651 25 6.4 Application
    enq: TX - row lock contention 1,063 356 335 3.5 Application
    gcs log flush sync 37,747 88 2 .9 Other
    For other day:
    Top 5 Timed Events
    Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
    CPU time 25,286 62.0
    db file sequential read 2,644,332 8,267 3 20.3 User I/O
    gc buffer busy 1,358,725 3,830 3 9.4 Cluster
    read by other session 438,494 1,169 3 2.9 User I/O
    SQL*Net more data to client 19,423 879 45 2.2 Network
    Any idea of the bottleneck?
    Thanks

    8 CPUs, load average 4, runqueue 0 and usage 30-35%. Does this indicate any issue with system resources?
    NO. Not at all.
    However a poor schema design or inefficient SQL execution can mean that a query that should do 100 'consistent gets' is doing 10,000 'consistent gets' -- in the buffer cache, consuming CPU and not waiting for I/O. This is a scenario where you have idle CPU but CPU usage is inefficient. (Thus, for example, adding more CPUs will not help your users at all).
    So you should look at the queries and see if they can be improved (a sketch of one way to list the top statements by buffer gets follows below).
    If, on the other hand, users are not complaining about performance and all response times are within expectations, then you have no issue at all.
    Hemant K Chitale
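    The sketch referred to above: ordering v$sql by buffer gets to spot statements doing far more 'consistent gets' than they should (standard views and columns; on RAC, gv$sql would cover all instances, and the row limit is arbitrary):
    -- statements doing the most logical I/O, with gets per execution as a sanity check
    select *
      from (select sql_id,
                   buffer_gets,
                   executions,
                   round(buffer_gets / greatest(executions, 1)) gets_per_exec,
                   substr(sql_text, 1, 60) sql_text
              from v$sql
             where executions > 0
             order by buffer_gets desc)
     where rownum <= 10;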

  • StatsPack Report Top 5 Timed Events

    Here is my StatsPack report. The 1st one is the baseline; performance looks good there, but in the 2nd and 3rd it is not good. I need suggestions on what this report means and what I should do next.
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 1,920M Std Block Size: 16K
    Shared Pool Size: 304M Log Buffer: 20,480K
    Performance is good here
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    enqueue 15,551 25,630 41.78
    db file sequential read 1,281,988 10,205 16.64
    CPU time 7,781 12.68
    log file sync 325,921 6,482 10.57
    buffer busy waits 199,959 2,591 4.22
    Performance is not good here
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    db file sequential read 1,180,605 10,883 23.99
    CPU time 8,012 17.66
    log file sync 365,098 7,788 17.17
    enqueue 8,906 6,413 14.13
    db file scattered read 137,606 2,930 6.46
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    enqueue 14,413 18,029 27.04
    db file sequential read 1,281,801 12,383 18.57
    CPU time 10,002 15.00
    log file sync 356,982 8,488 12.73
    row cache lock 1,765 3,510 5.27

    >
    > Top 5 Timed Events
    > ~~~~~~~~~~~~~~~~~~ % Total
    > Event Waits Time (s) Ela Time
    > enqueue 14,413 18,029 27.04
    > db file sequential read 1,281,801 12,383 18.57
    > CPU time 10,002 15.00
    > log file sync 356,982 8,488 12.73
    > row cache lock 1,765 3,510 5.27
    > buffer busy waits 199,959 2,591 4.22
    Look into these events: enqueue, row cache lock, buffer busy waits.
    Run
    select * from V$ENQUEUE_STAT
    to find out which enqueue the database has been waiting on.
    The three events could be related. Most likely you have one or more hot objects that a lot of queries are updating at the same time.
    Also check v$session_wait while the performance issue is happening to find out the hot objects.
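    A sketch of that v$session_wait check, mapping the file# and block# of a buffer busy wait back to a segment through dba_extents (for buffer busy waits, P1 is the file# and P2 the block#; the join against dba_extents can be slow on databases with many extents):
    -- run while the contention is happening to see which segments are hot
    select sw.sid, sw.event, e.owner, e.segment_name, e.segment_type
      from v$session_wait sw,
           dba_extents    e
     where sw.event = 'buffer busy waits'
       and e.file_id = sw.p1
       and sw.p2 between e.block_id and e.block_id + e.blocks - 1;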

  • Reliable message - Top first timed events

    Hi All,
    One of my DBs has "reliable message" as the top timed event. Does anyone have an idea about the cause of this and the solution? Please do the needful.
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    reliable message 685,139 24,625 36 29.58 Other
    Thanks

    http://arulselvaraj.blogspot.com/2011/01/drop-tablespace-waiting-on-reliable.html

  • Oracle 10.2.0.4.3 - Statspack top 5 timed events show resmgr:become active

    Hello All,
    In my statspack I am seeing resmgr:become active with 20,009 seconds of waiting. We have 8 queue tables and 8 resource groups. Are the resource groups or the queue tables causing this wait?
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time
    resmgr:become active 1,189 20,009 16828 97.4
    CPU time 141 .7
    control file sequential read 29,319 74 3 .4
    os thread startup 73 59 803 .3
    DFS lock handle 527 55 104 .3
    Regards,
    Rashida

    You should have a look at bug #6774317 to see if it applies to your case.
    Nicolas.
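    Before (or while) checking the bug, it may also be worth confirming which resource plan is actually active and which consumer groups are piling up CPU waits; a sketch using standard views (exact column lists vary a little between releases):
    show parameter resource_manager_plan
    -- the currently active plan(s)
    select name, is_top_plan from v$rsrc_plan;
    -- consumer groups ordered by time spent waiting for CPU under the plan
    select name, active_sessions, cpu_waits, cpu_wait_time
      from v$rsrc_consumer_group
     order by cpu_wait_time desc;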

  • PX Deq Credit: send blkd At AWR "Top 5 Timed Events"

    PX Deq Credit: send blkd At Top 5 Timed Events
    Hi ,
    Below are examples of "Top 5 Timed Events" in my Staging data warehouse database.
    ALWAYS, at the very top of the Top 5 Timed Events is the event PX Deq Credit: send blkd.
    Oracle says that it is an idle event, but since it is always at the top of my AWR reports and all the other events are far behind it, I have a feeling that it may indicate a problem.
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    PX Deq Credit: send blkd 3,152,038 255,152 81 95.6 Other
    direct path read 224,839 4,046 18 1.5 User I/O
    CPU time 3,217 1.2
    direct path read temp 109,209 2,407 22 0.9 User I/O
    db file scattered read 31,110 1,436 46 0.5 User I/O
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    PX Deq Credit: send blkd 6,846,579 16,359 2 50.4 Other
    direct path read 101,363 5,348 53 16.5 User I/O
    db file scattered read 105,377 4,991 47 15.4 User I/O
    CPU time 3,795 11.7
    direct path read temp 70,208 940 13 2.9 User I/O
    Here is some more information:
    It's a 500GB database on Linux Red Hat 4 with 8 CPUs and 16GB of memory.
    It's based on an ASM file system.
    From the spfile:
    SQL> show parameter parallel
    NAME_COL_PLUS_SHOW_PARAM VALUE_COL_PLUS_SHOW_PARAM
    parallel_adaptive_multi_user TRUE
    parallel_automatic_tuning FALSE
    parallel_execution_message_size 4096
    parallel_instance_group
    parallel_max_servers 240
    parallel_min_percent 0
    parallel_min_servers 0
    parallel_server FALSE
    parallel_server_instances 1
    parallel_threads_per_cpu 2
    recovery_parallelism 0
    Thanks.

    >
    Metalink Note:280939.1 said:
    "Consider the use of different number for the DOP on your tables.
    On large tables and their indexes use high degree like #CPU.
    For smaller tables use DOP (#CPU)/2 as start value.
    Question 1:
    "On large tables"--> Does Metalink mean to a large
    table by its size (GB) or by number of rows ?
    That's one of those vague things that people say without thinking that it
    could have different meanings. Most people assume that a table that is
    large in Gb is also large in number of rows.
    As far as PQ is concerned I think that large numbers of rows may be more significant than large size, because (a) in multi-layer queries you pass rows around and (b) although the initial rows may be big you might not need all the columns to run the query, so Gb become less relevant once the data scan is complete
    As a strategy for keeping DOP on the tables, by the way, it sounds quite
    good. The difficulty is in the fine-tuning.
    Question 2:
    I checked how many parallel operations had been
    downgraded and found that less than 4% had been
    downgraded. Do you think that I still have to consider reducing the DOP?
    Having lots of slaves means you are less likely to get downgrades. But it's the number of slaves active for a single query that introduce the dequeue waits - so yes, I think you do need to worry about the DOP. (Counter-intuitively, the few downgraded queries may have been performing better than the ones running at full DOP).
    The difficulty is this - do you need to choose a strategy, or do you just need to fix a couple of queries.
    Strategy 1: set DOP to 1 on all tables and indexes, then hint all queries that you think need to run parallel, possibly identifying a few tables and indexes that could benefit from an explicit setting for DOP.
    Strategy 2: set DOP to #CPUs on all very large tables and their indexes and #CPUs/2 on the less large tables and their indexes. Check for any queries that perform very badly and either hint different degrees, or fine-tune the degree on a few tables.
    Strategy 3: leave parallelism at default, identify particularly badly performing queries and either put in hints for DOP, or use them to identify any tables that need specific settings for DOP.
    Starting from scratch, I would want to adopt strategy 1.
    Starting from where you are at present, I would spend a little time checking to see if I could get some clues from any extreme queries - i.e. following strategy 3; but if under a lot of time pressure and saw no improvement I would switch to strategy 2.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
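    A sketch of the checks that sit behind these strategies: which tables currently carry a non-default DOP, and how often parallel operations are being downgraded (the statistic names come from v$sysstat; the LIKE pattern keeps the query tolerant of the exact wording):
    -- tables with an explicit degree of parallelism
    select owner, table_name, trim(degree) as degree
      from dba_tables
     where trim(degree) not in ('1', 'DEFAULT')
     order by owner, table_name;
    -- how often PX operations ran at full, reduced or serial degree since startup
    select name, value
      from v$sysstat
     where name like 'Parallel operations%';
    -- Strategy 1 then sets the table-level DOP to 1 and relies on hints, e.g.
    -- alter table some_owner.some_big_table parallel 1;   (hypothetical names)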

  • How to retrieve 'top 5 timed events' from AWR

    Hello,
    I have a 10g database and awr is running every hour. How do I get the 'Top 5 Timed Events' from the awr report for every single hour on the last five days worth of awr reports? I can see the awr reports through GRID but where is the actual location of the snapshots? Instead of getting this information from awr reports, can I get 'Top 5 timed events' from the data dictionary for every hour for the last five days? Thank you in advance.

    watson2000 wrote:
    Hello,
    I have a 10g database and AWR is running every hour. How do I get the 'Top 5 Timed Events' from the AWR report for every single hour of the last five days' worth of AWR reports? I can see the AWR reports through GRID but where is the actual location of the snapshots? Instead of getting this information from AWR reports, can I get 'Top 5 timed events' from the data dictionary for every hour for the last five days?
    The "Top 5" comes from combining information about system events and the time model statistics. The information is stored in a number of tables in the SYS schema with names starting with WRH$, but also exposed in a set of views starting with DBA_HIST.
    To reconstruct the Top 5 hourly for the last five days, you will need to access DBA_HIST_SYSTEM_EVENT (system events), DBA_HIST_SYS_TIME_MODEL (time model) and DBA_HIST_SNAPSHOT (the list of snapshot ids and times).
    It won't be a trivial query, as you need to collect the "DB CPU" entry from DBA_HIST_SYS_TIME_MODEL, union it with DBA_HIST_SYSTEM_EVENT, then take the top 5 for each snapshot from the result. You'll probably need to look at subquery factoring and analytic functions to make this as efficient as possible (a sketch follows at the end of this reply).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
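    The sketch mentioned above, assuming the standard DBA_HIST views: it unions the "DB CPU" time-model figure with the per-snapshot wait-event deltas (computed with LAG) and ranks them within each snapshot. It deliberately ignores instance restarts, which reset the cumulative counters, and keeps only non-idle wait classes:
    with ev as (
        -- per-snapshot increase in time waited, per event
        select dbid, instance_number, snap_id, event_name as name,
               time_waited_micro
                 - lag(time_waited_micro) over
                     (partition by dbid, instance_number, event_name
                      order by snap_id) as micros
          from dba_hist_system_event
         where wait_class <> 'Idle'
        union all
        -- per-snapshot increase in DB CPU from the time model
        select dbid, instance_number, snap_id, stat_name,
               value - lag(value) over
                         (partition by dbid, instance_number, stat_name
                          order by snap_id)
          from dba_hist_sys_time_model
         where stat_name = 'DB CPU'
    )
    select begin_interval_time, name, round(micros / 1e6) as seconds
      from (select s.begin_interval_time, ev.name, ev.micros,
                   rank() over (partition by ev.dbid, ev.instance_number, ev.snap_id
                                order by ev.micros desc) as rnk
              from ev
              join dba_hist_snapshot s
                on  s.dbid            = ev.dbid
                and s.instance_number = ev.instance_number
                and s.snap_id         = ev.snap_id
             where s.begin_interval_time > systimestamp - interval '5' day
               and ev.micros is not null)
     where rnk <= 5
     order by begin_interval_time, seconds desc;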
    

  • Slow DB performance; Top 5 timed events provided

    DB version:10gR2
    OS: aix
    Due to the slow performance of our DB, we took an AWR snapshot.
    The Top 5 timed events from our instance are shown below (shown vertically due to formatting problems).
    Event that ranked first
    <font color="red"><b>CPU time</b></font>
    Waits - Blank
    Time(s)-29,387
    Avg Wait(ms) - Blank
    % Total Call Time - Blank
    Wait Class - Blank
    Event that ranked second
    <font color="red"><b>read by other session</b></font>
    Waits - 31,138,754
    Time(s)-9,768
    Avg Wait(ms) - 0
    % Total Call Time - 15.5
    Wait Class - User I/O
    Event that ranked third
    <font color="red"><b>db file sequential read</b></font>
    Waits - 22,618,061
    Time(s)-7,768
    Avg Wait(ms) - 0
    % Total Call Time - 12.5
    Wait Class - User I/O
    Event that ranked fourth
    <font color="red"><b>db file scattered read</b></font>
    Waits - 16,763,238
    Time(s)-7,768
    Avg Wait(ms) - 0
    % Total Call Time - 12.5
    Wait Class - User I/O
    Event that ranked fifth
    <font color="red"><b>enq: TM - contention</b></font>
    Waits - 539
    Time(s)-1,548
    Avg Wait(ms) - 2,898
    % Total Call Time - 2.5
    Wait Class - Application
    Do you guys have any suggestions based upon the above mentioned Top 5 timed events? Do you need any other info from our AWR snapshot?

    DB version:10gR2
    OS: aix
    Due to the slow performance of our DB, we took an AWR
    snapshot.
    You mean "slow performance of your application", not "slow performance of your DB"? The database is just a storage room where you store stuff. Storing and retrieving items from the storage room may or may not be too slow, but the storage room itself is neither fast nor slow.
    The Top 5 timed events from our instance is shown
    below(Shown vertically due to formatting problems)
    Event that ranked first
    <font color="red"><b>CPU time</b></font>
    Waits - Blank
    Time(s)-29,387
    Avg Wait(ms) - Blank
    % Total Call Time - Blank
    Wait Class - Blank
    This is a strong indication that there is a problem. Your SQL is burning CPU, which is something that doesn't happen on a normal DB server. Oracle is probably doing hash joins of small tables or something of that nature.
    Event that ranked second
    <font color="red"><b>read by other
    session</b></font>This happens when several sessions start requesting the same block to be read into SGA.
    The definition of the event can be found here:
    http://www.confio.com/English/Collaterals/Newsletter/2006/200601_TheOracleResource.pdf
    My experience with this wait event is that I see a lot of it when CURSOR_SHARING is set to "FORCE". In addition to CONFIO's solution, you can also switch to Oracle 11g and use client caching. Also, you can rebuild your hottest indexes as reverse key indexes, as long as you don't use them for range scans (the rebuild syntax is sketched at the end of this reply).
    Waits - 31,138,754
    Time(s)-9,768
    Avg Wait(ms) - 0
    % Total Call Time - 15.5
    Wait Class - User I/O
    Event that ranked third
    <font color="red"><b>db file sequential
    read</b></font>This is usually a sign of Oracle doing lots of index reads. Normally, this event is not something to be concerned about.
    Waits - 22,618,061
    Time(s)-7,768
    Avg Wait(ms) - 0
    % Total Call Time - 12.5
    Wait Class - User I/O
    Event that ranked fourth
    <font color="red"><b>db file scattered
    read</b></font>This event is waited on during a full table scan. This is perfectly in line with high CPU consumtion. You probably have quite a few hash joins.
    Waits - 16,763,238
    Time(s)-7,768
    Avg Wait(ms) - 0
    % Total Call Time - 12.5
    Wait Class - User I/O
    Event that ranked fifth
    <font color="red"><b>enq: TM - contention</b></font>Now, this is strange. TM locks are table locks. TX locks are transaction locks. You
    are waiting for the TABLE locks. It frequently happens when a report is ported from
    another database which doesn't normally enforce ACID requirements. Report tools
    cope with that by locking tables in the shared mode. If that kind of thing was allowed
    to run unchanged on an Oracle RDBMS, it will cause endless concurrency problems.
    I believe that this might be the root of the evil. However, without the insight into your
    application a definite "maybe" is all I can tell you.
    Do you guys have any suggestions based upon the above mentioned Top 5 timed events? Do you need any other info from our AWR snapshot?
    Gals should not respond?
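    For the reverse key index suggestion above, the rebuild itself is a one-liner; the owner and index name below are purely hypothetical, and remember that a reverse key index cannot be used for index range scans:
    -- rebuild an existing hot index with reversed key bytes to spread out contention
    alter index app_owner.orders_pk rebuild reverse;
    -- the setting the reply associates with heavy "read by other session" waits
    show parameter cursor_sharing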

  • Differences between "top 5 timed events" and "Top 5 Timed Foreground Events

    Dear all,
    I want to know what the difference is between "Top 5 Timed Events" and "Top 5 Timed Foreground Events" in AWR reports.
    Is the meaning the same?
    Thanks to all.

    chijar wrote:
    Dear all,
    I want to know what the difference is between "Top 5 Timed Events" and "Top 5 Timed Foreground Events" in AWR reports. Is the meaning the same? Thanks to all.
    What is the difference between foreground and background sessions?
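    The distinction can be seen directly in v$session, where the TYPE column separates background processes (LGWR, DBWn, and so on) from user sessions; "Top 5 Timed Foreground Events" simply excludes the background portion of the timed events. A trivial check:
    -- how many foreground (USER) vs background sessions are connected right now
    select type, count(*) as sessions
      from v$session
     group by type;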

  • Statspack : Top 5 Timed Events - CPU time

    Hi,
    I just got some statspack reports on my 10.2.0.2 database (HP-UX 11i).
    I'm just surprised about the CPU time in the Top 5 events.
    Top 5 Timed Events                                                    Avg %Total
    ~~~~~~~~~~~~~~~~~~                                                   wait   Call
    Event                                            Waits    Time (s)   (ms)   Time
    CPU time 4,263 97.3
    latch: cache buffers chains                    197,925          42      0    1.0
    log file parallel write                          8,982          22      2     .5
    log file sync                                    8,620          22      3     .5
    wait list latch free                               399           7     17     .2
    What does CPU time mean here? Is it a problem?
    Thanks in advance for your lights.

    Hi,
    it seems that your database experiences a high SQL workload. The section of the statspack report called "Top SQLs by Buffer Gets" might give you an idea of which SQLs caused this CPU workload.
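    On 10.2 the same idea can also be checked outside the report with v$sqlstats, ordering by CPU time; a sketch with an arbitrary top-10 cut-off:
    -- statements that have burned the most CPU since they were loaded
    select *
      from (select sql_id,
                   round(cpu_time / 1e6, 1) as cpu_seconds,
                   buffer_gets,
                   executions
              from v$sqlstats
             order by cpu_time desc)
     where rownum <= 10;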

  • Os thread startup in Top 5 Timed Events AWR

    Hi all,
    I have Oracle 10.2.0.5 for HP UX
    I'm experiencing some slowness. While checking AWR I see the following:
    Top 5 Timed Events
    Event     Waits     Time(s)     Avg Wait(ms)     % Total Call Time     Wait Class
    CPU time          732          28.0     
    os thread startup      983      665      676      25.4     Concurrency
    log file switch (checkpoint incomplete)     1,279     617     482     23.6     Configuration
    row cache lock     98,641     577     6     22.1     Concurrency
    latch: session allocation     1,377     253     184     9.7     Other
    What could be the reason for os thread startup?
    Too many processes due to parallelism?
    I have all tables set to NOPARALLEL.
    And regarding log file switch (checkpoint incomplete), I changed the redo log size from 100 MB to 200 MB to reduce the frequency of log switching (a way to check the switch rate is sketched below).
    Thanks in advance.

    GOOGLE is your friend, but only when you actually use it!
    http://karlarao.wordpress.com/2009/04/06/os-thread-startup/
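    Regarding the redo log resize mentioned in the question, the switch rate before and after the change can be summarised per hour from v$log_history; a minimal sketch over the last day:
    -- number of log switches per hour for the last 24 hours
    select trunc(first_time, 'HH') as hour,
           count(*)                as log_switches
      from v$log_history
     where first_time > sysdate - 1
     group by trunc(first_time, 'HH')
     order by hour;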

  • Log file sequential read  and RFS ping/write - among Top 5 event

    I have a situation here to discuss. In a 3-node RAC setup that is a logical standby DB, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but nine days ago it jumped and has consistently stayed at double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization; it shows the top events below:
    EVENT WAITS TIME(S) AVG WAIT(MS) %TOTAL CALL TIME WAIT CLASS
    CPU time 5,802 34.9
    RFS ping 15 5,118 33,671 30.8 Other
    Log file sequential read 234,831 5,036 21 30.3 System I/O
    SQL*Net more data from client 24,171 1,087 45 6.5 Network
    Db file sequential read 130,939 453 3 2.7 User I/O
    Findings:
    On the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for 30% of the waits that occur in the database.
    Environment: (Oracle 10.2.0.4.0, O/S - AIX .3)
    1) The other node's AWR shows "log file sync" - is it due to an oversized log buffer?
    2) Network wait events can be reduced by tweaking SDU & TDU values based on the MTU.
    3) Why are the ARCH processes taking so long to archive filled redo logs; is it an issue with slow disk I/O?
    Regards
    WORKLOAD REPOSITORY report for
    DB Name DB Id Instance Inst Num Release RAC Host
    XXXPDB 4123595889 XXX2p2 2 10.2.0.4.0 YES sipd207
    Snap Id Snap Time Sessions Curs/Sess
    Begin Snap: 1053 04-Apr-11 18:00:02 59 7.4
    End Snap: 1055 04-Apr-11 20:00:35 56 7.5
    Elapsed: 120.55 (mins)
    DB Time: 233.08 (mins)
    Cache Sizes
    ~~~~~~~~~~~ Begin End
    Buffer Cache: 3,728M 3,728M Std Block Size: 8K
    Shared Pool Size: 4,080M 4,080M Log Buffer: 14,332K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 245,392.33 10,042.66
    Logical reads: 9,080.80 371.63
    Block changes: 1,518.12 62.13
    Physical reads: 7.50 0.31
    Physical writes: 44.00 1.80
    User calls: 36.44 1.49
    Parses: 25.84 1.06
    Hard parses: 0.59 0.02
    Sorts: 12.06 0.49
    Logons: 0.05 0.00
    Executes: 295.91 12.11
    Transactions: 24.43
    % Blocks changed per Read: 16.72 Recursive Call %: 94.18
    Rollback per transaction %: 4.15 Rows per Sort: 53.31
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 99.92 In-memory Sort %: 100.00
    Library Hit %: 99.83 Soft Parse %: 97.71
    Execute to Parse %: 91.27 Latch Hit %: 99.79
    Parse CPU to Parse Elapsd %: 15.69 % Non-Parse CPU: 99.95
    Shared Pool Statistics Begin End
    Memory Usage %: 83.60 84.67
    % SQL with executions>1: 97.49 97.19
    % Memory for SQL w/exec>1: 97.10 96.67
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 4,503 32.2
    RFS ping 168 4,275 25449 30.6 Other
    log file sequential read 183,537 4,173 23 29.8 System I/O
    SQL*Net more data from client 21,371 1,009 47 7.2 Network
    RFS write 25,438 343 13 2.5 System I/O
    RAC Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
    Begin End
    Number of Instances: 3 3
    Global Cache Load Profile
    ~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
    Global Cache blocks received: 0.78 0.03
    Global Cache blocks served: 1.18 0.05
    GCS/GES messages received: 131.69 5.39
    GCS/GES messages sent: 139.26 5.70
    DBWR Fusion writes: 0.06 0.00
    Estd Interconnect traffic (KB) 68.60
    Global Cache Efficiency Percentages (Target local+remote 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer access - local cache %: 99.91
    Buffer access - remote cache %: 0.01
    Buffer access - disk %: 0.08
    Global Cache and Enqueue Services - Workload Characteristics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg global enqueue get time (ms): 0.5
    Avg global cache cr block receive time (ms): 0.9
    Avg global cache current block receive time (ms): 1.0
    Avg global cache cr block build time (ms): 0.0
    Avg global cache cr block send time (ms): 0.1
    Global cache log flushes for cr blocks served %: 2.9
    Avg global cache cr block flush time (ms): 4.6
    Avg global cache current block pin time (ms): 0.0
    Avg global cache current block send time (ms): 0.1
    Global cache log flushes for current blocks served %: 0.1
    Avg global cache current block flush time (ms): 5.0
    Global Cache and Enqueue Services - Messaging Statistics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg message sent queue time (ms): 0.1
    Avg message sent queue time on ksxp (ms): 0.6
    Avg message received queue time (ms): 0.0
    Avg GCS message process time (ms): 0.0
    Avg GES message process time (ms): 0.1
    % of direct sent messages: 31.57
    % of indirect sent messages: 5.17
    % of flow controlled messages: 63.26
    Time Model Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
    -> Total time in database user-calls (DB Time): 13984.6s
    -> Statistics including the word "background" measure background process
    time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name Time (s) % of DB Time
    sql execute elapsed time 7,270.6 52.0
    DB CPU 4,503.1 32.2
    parse time elapsed 506.7 3.6
    hard parse elapsed time 497.8 3.6
    sequence load elapsed time 152.4 1.1
    failed parse elapsed time 19.5 .1
    repeated bind elapsed time 3.4 .0
    PL/SQL execution elapsed time 0.7 .0
    hard parse (sharing criteria) elapsed time 0.3 .0
    connection management call elapsed time 0.3 .0
    hard parse (bind mismatch) elapsed time 0.0 .0
    DB time 13,984.6 N/A
    background elapsed time 869.1 N/A
    background cpu time 276.6 N/A
    Wait Class DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
    Avg
    %Time Total Wait wait Waits
    Wait Class Waits -outs Time (s) (ms) /txn
    System I/O 529,934 .0 4,980 9 3.0
    Other 582,349 37.4 4,611 8 3.3
    Network 279,858 .0 1,009 4 1.6
    User I/O 54,899 .0 317 6 0.3
    Concurrency 136,907 .1 58 0 0.8
    Cluster 60,300 .0 41 1 0.3
    Commit 80 .0 10 130 0.0
    Application 6,707 .0 3 0 0.0
    Configuration 17,528 98.5 1 0 0.1
    Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    RFS ping 168 .0 4,275 25449 0.0
    log file sequential read 183,537 .0 4,173 23 1.0
    SQL*Net more data from clien 21,371 .0 1,009 47 0.1
    RFS write 25,438 .0 343 13 0.1
    db file sequential read 54,680 .0 316 6 0.3
    DFS lock handle 97,149 .0 214 2 0.5
    log file parallel write 104,808 .0 157 2 0.6
    db file parallel write 143,905 .0 149 1 0.8
    RFS random i/o 25,438 .0 86 3 0.1
    RFS dispatch 25,610 .0 56 2 0.1
    control file sequential read 39,309 .0 55 1 0.2
    row cache lock 130,665 .0 47 0 0.7
    gc current grant 2-way 35,498 .0 23 1 0.2
    wait for scn ack 50,872 .0 20 0 0.3
    enq: WL - contention 6,156 .0 14 2 0.0
    gc cr grant 2-way 16,917 .0 11 1 0.1
    log file sync 80 .0 10 130 0.0
    Log archive I/O 3,986 .0 9 2 0.0
    control file parallel write 3,493 .0 8 2 0.0
    latch free 2,356 .0 6 2 0.0
    ksxr poll remote instances 278,473 49.4 6 0 1.6
    enq: XR - database force log 2,890 .0 4 1 0.0
    enq: TX - index contention 325 .0 3 11 0.0
    buffer busy waits 4,371 .0 3 1 0.0
    gc current block 2-way 3,002 .0 3 1 0.0
    LGWR wait for redo copy 9,601 .2 2 0 0.1
    SQL*Net break/reset to clien 6,438 .0 2 0 0.0
    latch: ges resource hash lis 23,223 .0 2 0 0.1
    enq: WF - contention 32 6.3 2 62 0.0
    enq: FB - contention 660 .0 2 2 0.0
    enq: PS - contention 1,088 .0 2 1 0.0
    library cache lock 869 .0 1 2 0.0
    enq: CF - contention 671 .1 1 2 0.0
    gc current grant busy 1,488 .0 1 1 0.0
    gc current multi block reque 1,072 .0 1 1 0.0
    reliable message 618 .0 1 2 0.0
    CGS wait for IPC msg 62,402 100.0 1 0 0.4
    gc current block 3-way 998 .0 1 1 0.0
    name-service call wait 18 .0 1 57 0.0
    cursor: pin S wait on X 78 100.0 1 11 0.0
    os thread startup 16 .0 1 53 0.0
    enq: RO - fast object reuse 193 .0 1 3 0.0
    IPC send completion sync 652 99.2 1 1 0.0
    local write wait 194 .0 1 3 0.0
    gc cr block 2-way 534 .0 0 1 0.0
    log file switch completion 17 .0 0 20 0.0
    SQL*Net message to client 258,483 .0 0 0 1.5
    undo segment extension 17,282 99.9 0 0 0.1
    gc cr block 3-way 286 .7 0 1 0.0
    enq: TM - contention 76 .0 0 4 0.0
    PX Deq: reap credit 15,246 95.6 0 0 0.1
    kksfbc child completion 5 100.0 0 49 0.0
    enq: TT - contention 141 .0 0 2 0.0
    enq: HW - contention 203 .0 0 1 0.0
    RFS create 2 .0 0 115 0.0
    rdbms ipc reply 339 .0 0 1 0.0
    PX Deq Credit: send blkd 452 20.1 0 0 0.0
    gcs log flush sync 128 32.8 0 2 0.0
    latch: cache buffers chains 128 .0 0 1 0.0
    library cache pin 441 .0 0 0 0.0
    Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)

    We only apply on one node in a cluster so I would expect that the node running SQL Apply would have much higher usage and waits. Is this what you are asking?
    Larry

  • Difference between wait event and timed event

    Hi,
    Does anyone have an idea what the difference is between wait events and timed events in a Statspack report? I couldn't find it on Google.
    Thanks.

    It's 10.2.0.1 on Linux.
    (I couldn't run a query, because the Linux machine is inside VMware and is not accessible from the base Windows machine.)
    Top 5 Timed Events                                                    Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time
    db file scattered read 9,750,617 34,611 4 44.7
    CPU time 14,248 18.4
    read by other session 1,532,282 8,984 6 11.6
    db file sequential read 4,514,494 5,588 1 7.2
    latch: cache buffers lru chain 277,245 4,823 17 6.2
    Wait Events  DB/Inst: ABCD/ABCD  Snaps: 1-2
    -> s - second, cs - centisecond, ms - millisecond, us - microsecond
    -> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> ordered by Total Wait Time desc, Waits desc (idle events last)
    Avg
    %Time Total Wait wait Waits
    Event Waits -outs Time (s) (ms) /txn
    db file scattered read 9,750,617 0 34,611 4 24.2
    read by other session 1,532,282 0 8,984 6 3.8
    db file sequential read 4,514,494 0 5,588 1 11.2
    latch: cache buffers lru chain 277,245 0 4,823 17 0.7
    latch free 121,466 0 3,291 27 0.3
    ----------------------------------------------------------------------------------------------------
