Interpret DB CPU wait event (top 5 wait events in AWR)

Hi,
Can anyone tell me how to read the table below, especially the "DB CPU" section?
Is it right to say that 41.71% of the time was consumed waiting for CPU? This is urgent.
Event                           Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read       300,835    1,483              5      58.42  User I/O
DB CPU                                   1,059                     41.71
reliable message                9,499       18              2       0.71  Other
PX Deq: Slave Session Stats     6,506       11              2       0.43  Other
gc cr grant 2-way              26,218        6              0       0.25  Cluster

user589420 wrote:
Hi,
Can anyone tell me how to read the table below, especially the "DB CPU" section?
Is it right to say that 41.71% of the time was consumed waiting for CPU? This is urgent.
Event                           Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read       300,835    1,483              5      58.42  User I/O
DB CPU                                   1,059                     41.71
reliable message                9,499       18              2       0.71  Other
PX Deq: Slave Session Stats     6,506       11              2       0.43  Other
gc cr grant 2-way              26,218        6              0       0.25  Cluster
When posting information to the forum that includes critical spaces, like the above, use a { code } tag (without spaces) before and after the information.
I do not understand why this question is an urgent problem.
It is incorrect to state that 41.71% of the time was consumed waiting for the CPU. When an Oracle process is running on the CPU, it is officially not waiting. It causes a bit of confusion having the CPU time consumed listed among the top 5 wait events, but as long as you understand why it is in the top 5 list, it almost makes sense for it to be included.
The DB CPU statistic is listed as 1,059 seconds. If the duration of this report is 1 hour, that is 3,600 seconds of total time. If there is a single CPU in the server, there are 3,600 CPU seconds available in the time period, indicating that the server's CPU on average was 29.4% busy. If there were 12 CPUs in the server, there were 43,200 CPU seconds available in the time period, indicating that on average the CPUs were 2.5% busy. Does this mean that there was a problem, or was this OK, or is there not enough information? Just because on average the CPUs are not busy, that does not mean that there were not periods of intense CPU competition, where in fact there was a temporary shortage of available CPU time for processing.
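As a quick illustration of this arithmetic, here is a minimal sketch, assuming a 1-hour report interval and the 1,059 seconds of DB CPU shown above (NUM_CPUS is a standard V$OSSTAT statistic; the view reflects the running instance, not a past AWR interval):
-- CPU seconds available in a 1-hour interval versus the DB CPU consumed.
SELECT value                                 AS num_cpus,
       value * 3600                          AS cpu_seconds_available,
       ROUND(100 * 1059 / (value * 3600), 1) AS pct_of_cpu_capacity_used
  FROM v$osstat
 WHERE stat_name = 'NUM_CPUS';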
The DB Time statistic is supposed to be an indication of work performed by the instance on behalf of the user sessions. It is the accumulation of CPU time consumed by foreground sessions plus the accumulated sum of all non-idle wait events consumed by foreground sessions. Blog articles that might be of interest to you:
http://hoopercharles.wordpress.com/2010/01/13/working-with-oracle-time-model-data/
http://hoopercharles.wordpress.com/2010/02/05/faulty-quotes-6-cpu-utilization/
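The time-model figures themselves can be pulled from V$SYS_TIME_MODEL (values are in microseconds and cumulative since instance startup; an AWR report shows the difference between two snapshots). A minimal sketch:
-- 'DB time' and 'DB CPU' cover foreground sessions; background work is reported separately.
SELECT stat_name,
       ROUND(value / 1e6) AS seconds
  FROM v$sys_time_model
 WHERE stat_name IN ('DB time', 'DB CPU',
                     'background elapsed time', 'background cpu time');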
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

Similar Messages

  • Oracle RAC 9i LMD library cache lock top wait event

    We are experiencing the library cache lock as our top wait event. Even though the box is currently idle, the Global Enqueue Service Daemon (LMD) is taking up CPU cycles. The background process is also logging to its trace file: "skgxpdocon: warning outstanding accept handle count has reached new high water mark 245000".
    Any help would be appreciated.
    Thanks

    There is a new patch for this - check out p4673610 on Metalink. We have also experienced the problem in 9.2.0.8.

  • Top wait events in awr

    hi,
    We are using 11.2.0.3.0 on Solaris 10 and are facing slow performance. The following are the wait events from the AWR report; we need assistance to overcome this. Also, is there any specific document on analyzing an AWR report to pinpoint the performance bottleneck?
    Foreground Wait Events
    Event                             Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % DB time
    direct path read                308,729           0               21,191             69       58.0       39.5
    db file sequential read         208,754                            3,742             18       39.2        7.0
    cursor: pin S                19,541,899                            2,561              0    3,668.5        4.8
    Background Wait Events
    Event                             Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % bg time
    log file parallel write          26,479           0                  942             36        5.0       40.3
    db file parallel write          216,823           0                  809              4       40.7       34.6
    control file sequential read     11,673           0                   56                       2.2        2.4
    control file parallel write       6,280           0                   35                       1.2        1.5
    direct path read                    534           0                   26             49        0.1        1.1

    You need to identify if you are excessively running Parallel Query -- too many queries being parallelised and doing direct path reads bypassing the buffer cache.
    In 11gR2, you might also find full table scans of large tables becoming direct path reads.
    See this thread :  https://forums.oracle.com/thread/2552571
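    One rough way to see which statements are behind the direct path read time is a sample count from ASH. This is a sketch only; V$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack licence and counts samples, not exact wait time:
    -- Which SQL ids were most often sampled waiting on 'direct path read'
    -- in the in-memory ASH buffer.
    SELECT *
      FROM (SELECT sql_id, COUNT(*) AS ash_samples
              FROM v$active_session_history
             WHERE event = 'direct path read'
             GROUP BY sql_id
             ORDER BY COUNT(*) DESC)
     WHERE ROWNUM <= 10;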
    Hemant K Chitale

  • Top Wait Events Query is needed

    Hi,
    I hope I'm asking this question in right place.
    I need a script and its output should give me the top 5 wait events in last 1 hour for an instance.

    986330 wrote:
    Hi,
    I hope I'm asking this question in right place.
    I need a script and its output should give me the top 5 wait events in last 1 hour for an instance.
    which Top 5? Top number of Waits? Top Total time Waited? Top Avg Wait Time?
    why don't you just run AWR report?
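    If you do want a quick script rather than a full AWR report, a sample-based approximation over the last hour is possible with ASH. A sketch only (requires the Diagnostics Pack licence; it counts samples rather than exact wait time):
    -- Approximate top 5 wait events over the last hour; each ASH sample
    -- represents roughly one second of wait time.
    SELECT *
      FROM (SELECT event, COUNT(*) AS ash_samples
              FROM v$active_session_history
             WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
               AND session_state = 'WAITING'
             GROUP BY event
             ORDER BY COUNT(*) DESC)
     WHERE ROWNUM <= 5;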

  • Tuning top wait events

    hi,
    why does the following wait event occurs.how to tune these wait events
    control file parallel write and direct path load
    With Regards
    Boo

    "control file parallel write" event occurs while the session is writing physical blocks to all control files.
    This happens when:
    * The session starts a control file transaction (to make sure that the control files
    are up to date in case the session crashes before committing the control file
    transaction)
    * The session commits a transaction to a control file
    * Changing a generic entry in the control file, the new value is being written to all
    control files
    The wait time is the time it takes to finish all writes to all control files
    To reduce this wait event, you can decrease the number of your control files (if this number too high) or place your control files to faster disks.
    "direct path load" event occurs when a session waits for completion of direct load operations to database files. For example, if you are using SQL*Loader direct path load operation or import dump file made in direct mode.
    To reduce this wait event, you can also place your control files to faster disks.
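    Regarding the control file advice above: before dropping or moving any control files, it is worth checking how many there are and where they sit (V$CONTROLFILE is a standard view). A minimal sketch:
    -- List the current control files; fewer copies, or copies on faster storage,
    -- reduce the time spent in 'control file parallel write'.
    SELECT name FROM v$controlfile;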

  • Top Wait Events

    hi gurus,
    3 node rac 10.2.0.4 serving a packaged application.
    Top 5 timed events in awr shown as
    Event                           Waits      Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
    CPU time                                     1,950                             45.3
    gc cr multi block request   6,551,055        1,396             0               38.9  Cluster
    db file scattered read        186,295          719             4               18.2  User I/O
    db file parallel read          43,383          241             6                5.9  User I/O
    log file sync                  71,064           83             1                3.1  Commit
    db_block_size=8KB
    db_file_multiblock_read_count = default setting of 128
    question:
    are the high wait values of gc cr multi block request and db file scattered read due to db_file_multiblock_read_count?
    if that's the case, is there a way to find the optimum value for db_file_multiblock_read_count?
    or any other findings please?
    experts, appreciate your valuable help
    thanks in advance,
    charles

    user570138 wrote:
    there are queries going for full table scans with outer joins (millions of records). those are the same SQLs at the top of "sql order by cluster time" in AWR, with high CPU utilization.
    is there any way to fine-tune the instance to reduce the "gc cr multi block request" waits,
    apart from changing the code, as the code belongs to a packaged application?
    Do you have a performance problem ?
    You are doing some large tablescans; these are (probably) the root cause of the gc cr multiblock read, the db file scattered reads, and the CPU, but if the queries are necessary and the execution paths are the best that can be done then maybe you just have to recognise that the resource you're using is reasonable for the queries you have to run.
    Otherwise
    (a) can you find a more efficient access path for any of these queries
    (b) can you make sure that all these queries run on the same node so that you get some benefit from node-affinity (possibly the object(s) will be remastered to that single node) and reduce the interconnect traffic (a per-node check of the gc waits is sketched below).
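    As a rough check of how the interconnect waits are spread across the nodes, a sketch (the figures are cumulative since each instance started):
    -- Per-instance totals for the multi-block cache-fusion waits discussed above.
    SELECT inst_id,
           total_waits,
           ROUND(time_waited_micro / 1e6) AS time_waited_s
      FROM gv$system_event
     WHERE event = 'gc cr multi block request'
     ORDER BY inst_id;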
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • How do I interpret the "Disk file operations I/O" wait event?

    I have a large and very busy batch database. All of a sudden the "Disk file operations I/O" wait event is in the top 5 in AWR.
    The manual page isn't very helpful:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/waitevents003.htm#insertedID40
    Disk file operations I/O
    This event is used to wait for disk file operations (for example, open, close, seek, and resize). It is also used for miscellaneous I/O operations such as block dumps and password file accesses.
    So here is my question: What exactly is going on when I see this wait event? Why doesn't it show up as one of the other I/O events? Can I make it go away? Should I make it go away?
    DR

    sb92075 wrote:
    All of a sudden the "Disk file operations I/O" wait event is in the top 5 in AWR.
    In EVERY Top Wait Event list, one wait event will ALWAYS be on top as #1, by definition of the list.
    Simply because an item, even #1, appears on this list does not mean it is a problem that needs to be fixed.
    If the Top Wait Event accounts for only 5 seconds out of a 1-hour sample,
    then reducing it to ZERO won't measurably improve overall application performance.
    The actual Time Waited is required to determine if it is a problem or not.
    It's taking 20% of the time in a 15-minute sample. Anything that takes 20% of the time deserves to be understood... So: what actually causes it?
    DR
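    One way to dig further is to look at the event's parameters: V$EVENT_NAME shows what P1/P2/P3 mean for this event (typically the file operation, file number and file type), and ASH can then show which operations and files were involved. A minimal sketch:
    -- What do the wait parameters of this event represent?
    SELECT name, parameter1, parameter2, parameter3
      FROM v$event_name
     WHERE name = 'Disk file operations I/O';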

  • Top 5 wait event

    I need some guidance on my AWR top 5 wait events.
    I have 10gR2 on Solaris 9.
    The top 5 events in my AWR (run hourly) always contain the following (not necessarily in order):
    CPU time
    control file parallel write
    db file parallel write
    log file parallel write
    log file sync.
    Is this an indication of an undersized log buffer ?
    My value for log buffer is 14,258,176
    I have 4 CPUs
    I'd appreciate any help

    Hi!
    I have the same problem and am trying to figure it out.
    Top 5 Timed Events
    Event                           Waits  Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
    CPU time                                     932                            71.3
    reliable message                2,828        509            180             38.9  Other
    control file parallel write     8,759        300             34             23.0  System I/O
    db file parallel write         19,813        238             12             18.2  System I/O
    control file sequential read   65,435        193              3             14.7  System I/O
    Please share your thoughts.
    Ravi

  • About wait events of awr

    Is there anyone who can help advise me on the meaning of:
    1.SQL*Net break/reset to client
    2. Streams AQ: waiting for messages in the queue
    3. wait for unread message on broadcast channel
    We are running a performance test; thanks for your kind advice.

    You can ignore these events as they are idle events rather than indications of real work being blocked. In short, they mean the user is connected but is not doing anything.
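    The class of any event can be verified directly with a quick sketch against V$EVENT_NAME. Note that 'SQL*Net break/reset to client' is normally reported in the Application wait class rather than Idle, and a large amount of it can point to errors being raised during client calls:
    -- Check how Oracle itself classifies these events.
    SELECT name, wait_class
      FROM v$event_name
     WHERE name IN ('SQL*Net break/reset to client',
                    'Streams AQ: waiting for messages in the queue',
                    'wait for unread message on broadcast channel');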

  • Need help to analysis "foreground and background wait events" on statspack report for oracle database 11.2.0.4 on AIX

    Hi: I'm analyzing this STATSPACK report. It is a "volume test" on our UAT server, so most input uses bind variables. Our shared pool is well utilized. The Oracle redo logs are not appropriately configured on this server, as two of the 'Top 5 wait events' are redo-related.
    I need to know what other information can be dug out of the 'foreground wait events' and 'background wait events' sections, and what, in combination with the 'Top 5 wait events', can help us better understand how the server/test went. The number of wait events can be overwhelming, so I would appreciate any helpful diagnostics or analysis. The database is Oracle 11.2.0.4 (upgraded from 11.2.0.3) on IBM AIX POWER 64-bit, level 6.x.
    STATSPACK report for
    Database    DB Id    Instance     Inst Num  Startup Time   Release     RAC
    ~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
    700000XXX   XXX              1 22-Apr-15 12:12 11.2.0.4.0  NO
    Host Name             Platform                CPUs Cores Sockets   Memory (G)
    ~~~~ ---------------- ---------------------- ----- ----- ------- ------------
         dXXXX_XXX    AIX-Based Systems (64-     2     1       0         16.0
    Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
    ~~~~~~~~    ---------- ------------------ -------- --------- ------------------
    Begin Snap:       5635 22-Apr-15 13:00:02      114       4.6
      End Snap:       5636 22-Apr-15 14:00:01      128       8.8
       Elapsed:      59.98 (mins) Av Act Sess:       0.6
       DB time:      35.98 (mins)      DB CPU:      19.43 (mins)
    Cache Sizes            Begin        End
    ~~~~~~~~~~~       ---------- ----------
        Buffer Cache:     2,064M              Std Block Size:         8K
         Shared Pool:     3,072M                  Log Buffer:    13,632K
    Load Profile              Per Second    Per Transaction    Per Exec    Per Call
    ~~~~~~~~~~~~      ------------------  ----------------- ----------- -----------
          DB time(s):                0.6                0.0        0.00        0.00
           DB CPU(s):                0.3                0.0        0.00        0.00
           Redo size:          458,720.6            8,755.7
       Logical reads:           12,874.2              245.7
       Block changes:            1,356.4               25.9
      Physical reads:                6.6                0.1
    Physical writes:               61.8                1.2
          User calls:            2,033.7               38.8
              Parses:              286.5                5.5
         Hard parses:                0.5                0.0
    W/A MB processed:                1.7                0.0
              Logons:                1.2                0.0
            Executes:              801.1               15.3
           Rollbacks:                6.1                0.1
        Transactions:               52.4
    Instance Efficiency Indicators
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.98  Optimal W/A Exec %:  100.00
                Library Hit   %:   99.77        Soft Parse %:   99.82
             Execute to Parse %:   64.24         Latch Hit %:   99.98
    Parse CPU to Parse Elapsd %:   53.15     % Non-Parse CPU:   98.03
    Shared Pool Statistics        Begin   End
                 Memory Usage %:   10.50   12.79
        % SQL with executions>1:   69.98   78.37
      % Memory for SQL w/exec>1:   70.22   81.96
    Top 5 Timed Events                                                    Avg %Total
    ~~~~~~~~~~~~~~~~~~                                                   wait   Call
    Event                                            Waits    Time (s)   (ms)   Time
    CPU time                                                       847          50.2
    enq: TX - row lock contention                    4,480         434     97   25.8
    log file sync                                  284,169         185      1   11.0
    log file parallel write                        299,537         164      1    9.7
    log file sequential read                           698          16     24    1.0
    Host CPU  (CPUs: 2  Cores: 1  Sockets: 0)
    ~~~~~~~~              Load Average
                          Begin     End      User  System    Idle     WIO     WCPU
                           1.16    1.84     19.28   14.51   66.21    1.20   82.01
    Instance CPU
    ~~~~~~~~~~~~                                       % Time (seconds)
                         Host: Total time (s):                  7,193.8
                      Host: Busy CPU time (s):                  2,430.7
                       % of time Host is Busy:      33.8
                 Instance: Total CPU time (s):                  1,203.1
              % of Busy CPU used for Instance:      49.5
            Instance: Total Database time (s):                  2,426.4
      %DB time waiting for CPU (Resource Mgr):       0.0
    Memory Statistics                       Begin          End
    ~~~~~~~~~~~~~~~~~                ------------ ------------
                      Host Mem (MB):     16,384.0     16,384.0
                       SGA use (MB):      7,136.0      7,136.0
                       PGA use (MB):        282.5        361.4
        % Host Mem used for SGA+PGA:         45.3         45.8
    Foreground Wait Events  DB/Inst: XXXXXs  Snaps: 5635-5636
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> ordered by Total Wait Time desc, Waits desc (idle events last)
                                                                 Avg          %Total
                                              %Tim Total Wait   wait    Waits   Call
    Event                               Waits  out   Time (s)   (ms)     /txn   Time
    enq: TX - row lock contentio        4,480    0        434     97      0.0   25.8
    log file sync                     284,167    0        185      1      1.5   11.0
    Disk file operations I/O            8,741    0          4      0      0.0     .2
    direct path write                  13,247    0          3      0      0.1     .2
    db file sequential read             6,058    0          1      0      0.0     .1
    buffer busy waits                   1,800    0          1      1      0.0     .1
    SQL*Net more data to client        29,161    0          1      0      0.2     .1
    direct path read                    7,696    0          1      0      0.0     .0
    db file scattered read                316    0          1      2      0.0     .0
    latch: shared pool                    144    0          0      2      0.0     .0
    CSS initialization                     30    0          0      3      0.0     .0
    cursor: pin S                          10    0          0      9      0.0     .0
    row cache lock                         41    0          0      2      0.0     .0
    latch: row cache objects               19    0          0      3      0.0     .0
    log file switch (private str            8    0          0      7      0.0     .0
    library cache: mutex X                 28    0          0      2      0.0     .0
    latch: cache buffers chains            54    0          0      1      0.0     .0
    latch free                            290    0          0      0      0.0     .0
    control file sequential read        1,568    0          0      0      0.0     .0
    log file switch (checkpoint             4    0          0      6      0.0     .0
    direct path sync                        8    0          0      3      0.0     .0
    latch: redo allocation                 60    0          0      0      0.0     .0
    SQL*Net break/reset to clien           34    0          0      1      0.0     .0
    latch: enqueue hash chains             45    0          0      0      0.0     .0
    latch: cache buffers lru cha            7    0          0      2      0.0     .0
    latch: session allocation               5    0          0      1      0.0     .0
    latch: object queue header o            6    0          0      1      0.0     .0
    ASM file metadata operation            30    0          0      0      0.0     .0
    latch: In memory undo latch            15    0          0      0      0.0     .0
    latch: undo global data                 8    0          0      0      0.0     .0
    SQL*Net message from client     6,362,536    0    278,225     44     33.7
    jobq slave wait                     7,270  100      3,635    500      0.0
    SQL*Net more data from clien        7,976    0         15      2      0.0
    SQL*Net message to client       6,362,544    0          8      0     33.7
    Background Wait Events  DB/Inst: XXXXXs  Snaps: 5635-5636
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> ordered by Total Wait Time desc, Waits desc (idle events last)
                                                                 Avg          %Total
                                              %Tim Total Wait   wait    Waits   Call
    Event                               Waits  out   Time (s)   (ms)     /txn   Time
    log file parallel write           299,537    0        164      1      1.6    9.7
    log file sequential read              698    0         16     24      0.0    1.0
    db file parallel write              9,556    0         13      1      0.1     .8
    os thread startup                     146    0         10     70      0.0     .6
    control file parallel write         2,037    0          2      1      0.0     .1
    Log archive I/O                        35    0          1     30      0.0     .1
    LGWR wait for redo copy             2,447    0          0      0      0.0     .0
    db file async I/O submit            9,556    0          0      0      0.1     .0
    db file sequential read               145    0          0      2      0.0     .0
    Disk file operations I/O              349    0          0      0      0.0     .0
    db file scattered read                 30    0          0      4      0.0     .0
    control file sequential read        5,837    0          0      0      0.0     .0
    ADR block file read                    19    0          0      4      0.0     .0
    ADR block file write                    5    0          0     15      0.0     .0
    direct path write                      14    0          0      2      0.0     .0
    direct path read                        3    0          0      7      0.0     .0
    latch: shared pool                      3    0          0      6      0.0     .0
    log file single write                  56    0          0      0      0.0     .0
    latch: redo allocation                 53    0          0      0      0.0     .0
    latch: active service list              1    0          0      3      0.0     .0
    latch free                             11    0          0      0      0.0     .0
    rdbms ipc message                 314,523    5     57,189    182      1.7
    Space Manager: slave idle wa        4,086   88     18,996   4649      0.0
    DIAG idle wait                      7,185  100      7,186   1000      0.0
    Streams AQ: waiting for time            2   50      4,909 ######      0.0
    Streams AQ: qmn slave idle w          129    0      3,612  28002      0.0
    Streams AQ: qmn coordinator           258   50      3,612  14001      0.0
    smon timer                             43    2      3,605  83839      0.0
    pmon timer                          1,199   99      3,596   2999      0.0
    SQL*Net message from client        17,019    0         31      2      0.1
    SQL*Net message to client          12,762    0          0      0      0.1
    class slave wait                       28    0          0      0      0.0
    thank you very much!

    Hi: I just learned that a large amount of concurrent transactions is designed into this "volume test" (to simulate a large incoming transaction volume), so I guess the wait on enq: TX - row lock contention is expected.
    The facts: (1) the redo logs on the UAT server are known to be not well configured; (2) the volume test is about 5% slower, even though the team keeps the amount of data the same by importing production data each time. So why did it slow down by 5% this year?
    The wait histogram is pasted below; is anyone interested in taking a look? Any ideas?
    Wait Event Histogram  DB/Inst: XXXX/XXXX  Snaps: 5635-5636
    -> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
    -> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
    -> % of Waits - value: .0 indicates value was <.05%, null is truly 0
    -> Ordered by Event (idle events last)
                               Total ----------------- % of Waits ------------------
    Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
    ADR block file read          19   26.3   5.3  10.5  57.9
    ADR block file write          5                     40.0        60.0
    ADR file lock                 6  100.0
    ARCH wait for archivelog l   14  100.0
    ASM file metadata operatio   30  100.0
    CSS initialization           30              100.0
    Disk file operations I/O   9090   97.2   1.4    .6    .4    .2    .1    .1
    LGWR wait for redo copy    2447   98.5    .5    .4    .2    .2    .2    .1
    Log archive I/O              35   40.0         8.6  25.7   2.9        22.9
    SQL*Net break/reset to cli   34   85.3   8.8         5.9
    SQL*Net more data to clien   29K  99.9    .0    .0    .0          .0    .0
    buffer busy waits          1800   96.8    .7    .7    .6    .3    .4    .5
    control file parallel writ 2037   90.7   5.0   2.1    .8   1.0    .3    .1
    control file sequential re 7405  100.0                      .0
    cursor: pin S                10   10.0                    90.0
    db file async I/O submit   9556   99.9    .0                .0          .0
    db file parallel read         1  100.0
    db file parallel write     9556   62.0  32.4   1.7    .8   1.5   1.3    .1
    db file scattered read      345   72.8   3.8   2.3  11.6   9.0    .6
    db file sequential read    6199   97.2    .2    .3   1.6    .7    .0    .0
    direct path read           7699   99.1    .4    .2    .1    .1    .0
    direct path sync              8   25.0  37.5  12.5  25.0
    direct path write            13K  97.8    .9    .5    .4    .3    .1    .0
    enq: TX - row lock content 4480     .4    .7   1.3   3.0   6.8  12.3  75.4    .1
    latch free                  301   98.3    .3    .7    .7
    latch: In memory undo latc   15   93.3   6.7
    latch: active service list    1              100.0
    latch: cache buffers chain   55   94.5                     3.6   1.8
    latch: cache buffers lru c    9   88.9                    11.1
    latch: call allocation        6  100.0
    latch: checkpoint queue la    3  100.0
    latch: enqueue hash chains   45   97.8                     2.2
    latch: messages               4  100.0
    latch: object queue header    7   85.7        14.3
    latch: redo allocation      113   97.3               1.8    .9
    latch: row cache objects     19   89.5                           5.3   5.3
    latch: session allocation     5   80.0              20.0
    latch: shared pool          147   90.5   1.4   2.7   1.4    .7   1.4   2.0
    latch: undo global data       8  100.0
    library cache: mutex X       28   89.3         3.6         3.6         3.6
    log file parallel write     299K  95.6   2.6   1.0    .4    .3    .2    .0
    log file sequential read    698   29.5    .1               4.6  46.8  18.9
    log file single write        56  100.0
    log file switch (checkpoin    4               25.0  50.0  25.0
    log file switch (private s    8         12.5        37.5  50.0
    log file sync               284K  93.3   3.7   1.4    .7    .5    .3    .1
    os thread startup           146                                      100.0
    row cache lock               41   85.4   9.8               2.4         2.4
    DIAG idle wait             7184                                      100.0
    SQL*Net message from clien 6379K  86.6   5.1   2.9   1.3    .7    .3   2.8    .3
    SQL*Net message to client  6375K 100.0    .0    .0    .0    .0    .0    .0
    Wait Event Histogram  DB/Inst: XXXX/xxxx  Snaps: 5635-5636
    -> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
    -> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
    -> % of Waits - value: .0 indicates value was <.05%, null is truly 0
    -> Ordered by Event (idle events last)
                               Total ----------------- % of Waits ------------------
    Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
    SQL*Net more data from cli 7976   99.7    .1    .1    .0                      .1
    Space Manager: slave idle  4086     .1    .2    .0    .0    .3         3.2  96.1
    Streams AQ: qmn coordinato  258   49.2                .8                    50.0
    Streams AQ: qmn slave idle  129                                            100.0
    Streams AQ: waiting for ti    2   50.0                                      50.0
    class slave wait             28   92.9   3.6   3.6
    jobq slave wait            7270     .0                               100.0
    pmon timer                 1199                                            100.0
    rdbms ipc message           314K  10.3   7.3  39.7  15.4  10.6   5.3   8.2   3.3
    smon timer                   43                                            100.0
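    If the 'enq: TX - row lock contention' time in the Top 5 needs to be tied to specific statements and objects, a sketch against the AWR ASH history for this snapshot window could be used (Diagnostics Pack licence required; snap ids 5635-5636 as in the report header):
    -- Which SQL and which objects were being waited on for row locks in this window.
    SELECT sql_id,
           current_obj#,
           COUNT(*) AS ash_samples
      FROM dba_hist_active_sess_history
     WHERE event = 'enq: TX - row lock contention'
       AND snap_id BETWEEN 5635 AND 5636
     GROUP BY sql_id, current_obj#
     ORDER BY ash_samples DESC;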

  • What is ges reusing os pid wait event

    What is the wait event "ges reusing os pid"? In our RAC environment it is one of the top wait events. How can we minimize it?

    This is a wait event in Oracle 10g for Global Enqueue Services (ges) waiting on an operating system process id (os pid).
    How to resolve this issue? I checked the bug list on Metalink and there is a patch set for the issue that may help.
    Question: what version and patch release are you running for Oracle RAC?
    Also, you probably want to tune your public network and private interconnects between the nodes in your Oracle RAC cluster.
    Regards,
    Ben Prusinski
    http://oracle-magician.blogspot.com/

  • CXPACKET Wait events

    Hi,
    I am facing the CXPACKET wait event in my top wait events. We are using SQL Server 2012; the DOP is set to 1.
    Please suggest how to resolve this.
    Regards

    Hi,
    I am facing the CXPACKET wait event in my top wait events. We are using SQL Server 2012; the DOP is set to 1.
    Any reason why DOP is set to 1? Have you tested this scenario? Instead of jumping straight to resolving the CXPACKET wait, note that it is not an actual cause but just a symptom: it happens when parallel threads are waiting to synchronize after doing their part of the task. What other major wait stats can you see? Can you paste the output of the query below here?
    -- By Jonathan Kehayias
    SELECT TOP 10
           wait_type,
           max_wait_time_ms wait_time_ms,
           signal_wait_time_ms,
           wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms,
           100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
               AS percent_total_waits,
           100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
               AS percent_total_signal_waits,
           100.0 * ( wait_time_ms - signal_wait_time_ms )
               / SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0 -- remove zero wait_time
      AND wait_type NOT IN -- filter out additional irrelevant waits
          ( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
            'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
            'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
            'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
            'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
            'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
            'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
            'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
            'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
            'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
            'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
            'RESOURCE_QUEUE' )
    ORDER BY wait_time_ms DESC
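    To confirm the instance-wide setting being asked about, a quick check of the configured MAXDOP (sp_configure is the standard way; 'show advanced options' has to be enabled to display it):
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';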
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Top 5 wait events in AWR Report

    Hi,
    The following are the top 5 wait events in my AWR reports...
    Whenever I take a report, these are always the top 5 events.
    Top 5 Timed Events
    =============================================================================================================
    Event                            Waits   Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
    CPU time                                   4,717                             62.0
    log file sync                   64,963     1,362            21               17.9  Commit
    log file parallel write         63,485     1,004            16               13.2  System I/O
    enq: TX - row lock contention      348       984         2,828               12.9  Application
    db file parallel write          29,305       561            19                7.4  System I/O
    ------------------------------------------------------------------------------------------------------------

    Start with Performance Tuning Guide
    10.2.3 Table of Wait Events and Potential Causes

  • Awr report showing "Undo segment recovery" in top 1st wait event.

    Hi all.
    This evening oracle.exe is hitting 100% CPU on Windows Server 2003.
    In the AWR report, "undo segment recovery" is listed in the top 5 wait events (1st place), and
    Enterprise Manager also shows details like the following:
    ACTION 1:
    Action Investigate the cause for high "undo segment recovery" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation.
    Rationale The SQL statement with SQL_ID "0x63ctfjb1m1j" was found waiting for "undo segment recovery" wait event.
    SQL Text UPDATE PF_SubjectVEChapterPage SET NeedsRecalcState = NULL, NeedsUnsignState = ...
    SQL ID 0x63ctfjb1m1j
    Rationale The SQL statement with SQL_ID "0x6uvufcw5umh" was found waiting for "undo segment recovery" wait event.
    SQL Text
    SQL ID 0x6uvufcw5umh
    Rationale The SQL statement with SQL_ID "2dvmt5mhr3m10" was found waiting for "undo segment recovery" wait event.
    SQL Text UPDATE PF_SubjectVEChapterPage SET NeedsRecalcState = NULL, NeedsUnsignState = ...
    SQL ID 2dvmt5mhr3m10
    Rationale The SQL statement with SQL_ID "gx5pummu20jzb" was found waiting for "undo segment recovery" wait event.
    SQL Text UPDATE PF_SubjectVEChapterPage SET NeedsRecalcState = NULL, NeedsUnsignState = ...
    SQL ID gx5pummu20jzb
    Rationale The SQL statement with SQL_ID "1rxk3vt41zg1u" was found waiting for "undo segment recovery" wait event.
    SQL Text
    SQL ID 1rxk3vt41zg1u
    ACTION 2:
    Investigate the cause for high "undo segment recovery" waits in Module "dllhost.exe".
    ACTION 3:
    Investigate the cause for high "undo segment recovery" waits in Service "SYS$USERS".
    I'm not sure exactly what action I need to take. Please provide your valuable suggestions on how to proceed further.
    Thanks, Muhammed Thameem.

    http://download.oracle.com/docs/cd/A97630_01/server.920/a96536/apa5.htm
    "undo segment recovery
    PMON is rolling back a dead transaction. The wait continues until rollback finishes.
    Wait Time: 3 seconds
    Parameters:
    segment# -> The ID of the rollback segment that contains the transaction that is being rolled back
    tx flags -> The transaction flags (options) set for the transaction that is being rolled back"
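    Since the event means a dead transaction is being rolled back, one place to watch the rollback progress while it runs is V$FAST_START_TRANSACTIONS, which tracks transactions being recovered. A minimal sketch:
    -- Undo blocks done versus total per transaction being recovered.
    SELECT usn, state, undoblocksdone, undoblockstotal
      FROM v$fast_start_transactions;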

  • DB CPU event in top 5 of AWR report question

    The top 5 foreground events in my AWR report are as follows. I am trying to understand if my database (system) is CPU bound. The elapsed time is 30 minutes and DB time is 675 in the load profile section. There are 32 CPUs, so the available CPU time is 30 * 60 * 32 = 57,600 seconds. The DB CPU below is 35,277 seconds; this is about 61%. At what percentage of DB CPU should I consider my system CPU bound? I also want to make sure the method I used to arrive at this is correct. Please help.
    Event                        Waits    Time(s)  Avg wait (ms)  % DB time  Wait Class
    DB CPU                                 35,277                     87.12
    DBMS_LDAP: LDAP operation    3,683     10,061            366       9.1   Other
    db file sequential read    233,584        933              4       2.3   User I/O
    read by other session       41,686        190              5       0.47  User I/O
    log file sync               70,932        166              2       0.41  Commit

    Why does the Top 5 Foreground Events section indicate 87% in the % DB time column, while my calculation shows 61%? Which one is correct for deciding whether the system is CPU bound? The two lines of the top 5 events are as follows:
    Event                 Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
    DB CPU                         35,277                       87.12
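    For comparison, the two percentages measure different things. A rough worked sketch of both ratios, assuming the 675 DB time figure in the load profile is in minutes (675 min = 40,500 s):
    -- Share of DB time spent on CPU (what the % DB time column reports):
    --   35,277 / 40,500 ~= 87%
    -- Share of the host's CPU capacity consumed by the instance:
    --   35,277 / 57,600 ~= 61%
    SELECT ROUND(100 * 35277 / 40500, 2) AS pct_of_db_time,
           ROUND(100 * 35277 / 57600, 2) AS pct_of_cpu_capacity
      FROM dual;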
