Free buffer waits

We are seeing some free buffer waits contention and I wanted to get some input from the forum participants if you have time to address it.
It is on an HP-UX 11.31 Itanium system running Oracle 11.2.0.3. This is a data warehouse staging database, and there are about 60 merge statements running in parallel (themselves also doing parallel DML), doing several million updates against 60 different tables with a total row count of around 2 billion.
The dev team is putting in a more efficient update statement, so that may in itself resolve these issues, but I'd still like to know whether the number of db writer processes we have makes sense. As far as I know we do not have asynchronous I/O configured on our OS, due to some bugs we have seen in the past. The server has 14 CPUs and 95 GB of memory. Here are the top 5 events from our AWR report:
Top 5 Timed Foreground Events
Event                   Waits      Time(s)  Avg wait (ms) % DB time Wait Class
free buffer waits       319,324    261,188            818     46.08 Configuration
db file parallel read   134,710    62,404             463     11.01 User I/O
DB CPU                             60,818                     10.73  
db file sequential read 11,783,603 26,032               2      4.59 User I/O
write complete waits    4,015      13,828            3444      2.44 Configuration

Does it make sense that I should increase the number of db writers? We have db_writer_processes=4 on this system. With 14 CPUs that is supposed to be enough, as I understand it. But we have 60 dedicated server processes doing updates and only four db writer processes doing writes, so it does seem to make sense to increase the number of writers.
I'm researching this on my own but I would appreciate any input you have on this issue.
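For reference, a quick way to see whether DBWR itself looks slow is to compare its own wait times with the foreground symptoms; a minimal sketch against v$system_event (the event list and any interpretation thresholds are only illustrative):

select event, total_waits,
       round(time_waited_micro / 1000000)                             as time_waited_sec,
       round(time_waited_micro / 1000 / nullif(total_waits, 0), 1)    as avg_wait_ms
from   v$system_event
where  event in ('db file parallel write', 'free buffer waits', 'write complete waits')
order by time_waited_micro desc;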
Thanks,
Bobby

Bobby Durrett wrote:
Well, I think I'm back to increasing the number of db writers. Here is part of Oracle note 139272.1: HP-UX: Asynchronous i/o
It may help - but I'm doubtful.
If nothing else, the comments in the note look as if they're at least 10 years old. (Note, in particular, that one of the references at the end of the note refers to a document about Oracle 8.1.7; I'm also slightly surprised by the lack of reference to I/O Slaves, which were introduced to emulate async I/O for platforms that didn't support it; (update) and then there's a reference to an HP-UX patch that is for 11.00, and descriptions of what to do for Oracle versions 7.3 and earlier!)
This serial operation can lead to an I/O bottleneck. There are two ways to counteract this:
a. configure multiple DBWR processes
b. use asynchronous I/O

You haven't told us anything about the db file parallel write times yet. If they're slow then increasing the number of db writers is likely to make them slower.
On the information you've given so far, I'd investigate setting filesystemio_options to setall, in combination with mounting the file system with convosync={whatever it was}, as I suspect locking at the file level within the filesystem cache. (I'm not sure whether you need to set mincache=direct as well to see the write benefit - the problem is that your 11M single block reads might slow down because they would be bypassing the filesystem cache.)
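As a sketch of the sort of change under discussion (setall is the value mentioned above, but whether it is appropriate depends on the filesystem and mount options, so treat it as something to test with the sysadmin rather than a recommendation):

-- current setting: NONE, ASYNCH, DIRECTIO or SETALL
show parameter filesystemio_options

-- candidate change; this is a static parameter, so it needs an instance restart
alter system set filesystemio_options = setall scope=spfile;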
It's at this point on a client site that I tend to get into detailed discussion with the System Admin to understand exactly what filesystem and O/S options are currently available and exactly what they do. These things are too variable (and often badly understood) between operating systems and versions.
Regards
Jonathan Lewis
Edited by: Jonathan Lewis on Apr 24, 2013 9:38 AM

Similar Messages

  • Free Buffer Waits Problem

    Hi there,
    (OS= OL, DB=11.2, RAM=128g, CPUs=16)
    I have a large bulk DML operation (a MERGE statement) that populates a cube-like table with about 3,000,000 records per day from about 600,000 fact-table records. It takes approximately 7 minutes to complete. OEM shows that Free Buffer Waits is the main wait event during this operation.
    I spent some time googling the issue to find ways to get rid of it, but I've been disappointed. I don't know whether there is a problem in my DB parameters, or whether it is related to OS-level configuration, or something else. The DB is using AMM with MEMORY_TARGET set to 100 GB, and the cube-like table doesn't use ASSM.
    Please help me learn how to handle the problem, whether through some checks I can run against the environment or in any other way.
    TNX.
    SMSK.
    Edited by: f9smsk on Dec 18, 2012 5:52 AM
    Edited by: f9smsk on Dec 18, 2012 5:55 AM

    @Nikolay Savvinov
    No, I don't have that. Any other monitoring tools?
    @Rob_J
    Can you provide me with some links to illustrate this?
    @Jonathan Lewis
    I found that wait event in the OEM performance curves. In OEM, I think the performance curves show the amount of time being spent on the various kinds of activity. In other words, I don't know how to tell you what fraction of the 7 minutes was consumed by that wait event. I can tell you some things I think are useful, even if they may be unscientific:
    As I mentioned before, I'm using a merge statement. As you know, this statement starts by evaluating the USING clause, and then the database decides whether to update or insert. I think the wait event is not due to that first sub-operation but to the insert operation, and with lower probability to the update. The curve of "Free Buffer Waits" in the Configuration class in OEM is consistent with this assumption. It does not appear in the first minute of the 7 minutes - we can assume the CPUs are evaluating the USING clause then. Then it suddenly rises to its peak and, after some 30-40 seconds, falls dramatically to a very small amount, staying that way until it eventually fades out around the 5th minute. Please help me if there is a more scientific way to express the event.
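    Perhaps a more scientific way would be to query the session-level waits for the session running the merge; a minimal sketch, where :merge_sid stands for the SID of that session (a placeholder, not something from my system):
    select event, total_waits, round(time_waited_micro / 1000000) as time_waited_sec
    from   v$session_event
    where  sid = :merge_sid
    and    event in ('free buffer waits', 'write complete waits', 'log buffer space')
    order by time_waited_micro desc;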
    Tnx,
    SMSK.

  • What is the difference between buffer busy waits and free buffer waits

    What is the difference between the buffer busy waits event and the free buffer waits event in an Oracle database?
    select *
    from
       v$system_event
    where
       event like '%wait%';
    EVENT                       TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
    buffer busy waits                636528           1557      549700   .863591232
    write complete waits               1193              0       14799   12.4048617
    free buffer waits                  1601              0         622   .388507183

    jetq wrote:
    Buffer busy waits occur when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the data block is locked. This buffer busy wait condition can happen for either of the following reasons:
    * The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete. If the OP is running 10g, that would be recorded as "read by other session" not "buffer busy waits" - and unfortunately he didn't tell us the version.
    * Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.
    The free buffer waits event indicates that a server process was unable to find a free buffer and has posted the database writer to make free buffers by writing out dirty buffers.
    There is another possibility - if the OP is using a keep and recycle pool: see http://jonathanlewis.wordpress.com/2006/11/21/free-buffer-waits/
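    As an aside, a quick way to see which class of block the buffer busy waits are hitting (data block, segment header, undo header, and so on) is v$waitstat; a minimal sketch:
    select class, count, time
    from   v$waitstat
    where  count > 0
    order by time desc;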
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "For every expert there is an equal and opposite expert"
    Arthur C. Clarke

  • Busy buffer wait

    Hi
    I am getting huge numbers of buffer busy waits events on my database and they are increasing.
    The following is the result of a query on my database (9.2.0.8.0):
    SQL> select event,total_waits from v$system_event where event in ('free buffer waits','buffer busy waits')
    2 ;
    EVENT TOTAL_WAITS
    free buffer waits 118
    buffer busy waits 12827
    Also my "segment space management" is on "auto" on the tablespaces
    please someone let me know even when "segment space management" is on "auto" , why buffer busy waits is so huge
    And how to reduce this event
    Regards

    Hi,
    Try to post the system-wide output if possible:
    SELECT time, count, class
    FROM V$WAITSTAT
    ORDER BY time,count
    First of all, check how many sessions have been impacted by this event so far; execute the query below:
    SELECT count(*), event
    FROM v$session_wait
    WHERE wait_time = 0
         AND event NOT IN ('smon timer','pmon timer','rdbms ipc message',
                        'SQL*Net message from client')
    GROUP BY event
    ORDER BY 1 DESC;
    Then check the p1, p2 and p3 values from the query below; from there we can find the cause:
    SELECT count(*) n_w , p1 FILE#, p2 BLK#, p3 CLASS
    FROM v$session_wait
    WHERE event = 'buffer busy waits'
    GROUP BY p1, p2, p3
    From the above query you will get the absolute file number, the block number and the wait class; use those as inputs to identify the segment that is the actual cause, then we can see what we need to do:
    SELECT owner,segment_name,segment_type
    FROM dba_extents
    WHERE file_id=&file
    AND &blockid BETWEEN block_id AND block_id + blocks
    - Pavan Kumar N
    Updated with queries

  • High redo log buffer wait

    Hi,
    I can see "high redo log buffer wait" event. The instance spent 23% of its resources waiting for this event. Any suggestion to tune redo log buffer?
    DB version : 10.2.0.4.0
    Os : AIX
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';
    NAME VALUE
    redo log space requests 3542
    SQL> sho parameter buffer
    NAME TYPE VALUE
    buffer_pool_keep string
    buffer_pool_recycle string
    db_block_buffers integer 0
    log_buffer integer 14238720
    use_indirect_data_buffers boolean FALSE
    SQL> select GROUP#,BYTES from v$log;
    GROUP# BYTES
    1 1073741824
    4 1073741824
    3 1073741824
    2 1073741824
    SQL> show parameter sga
    NAME TYPE VALUE
    lock_sga boolean FALSE
    pre_page_sga boolean FALSE
    sga_max_size big integer 5G
    sga_target big integer 5G
    Thanks

    Gowin_dba wrote:
    I can see "high redo log buffer wait" event. The instance spent 23% of its resources waiting for this event. Any suggestion to tune redo log buffer?
    SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';
    NAME VALUE
    redo log space requests 3542

    How are you getting from 3,542 "redo log space requests" to 23% of the instance resources waiting for "high redo log buffer wait" (which is not a wait event that can be found in v$event_name in any version of Oracle)?
    "redo log space requests" is about log FILE space, by the way, not about log BUFFER space.
    Regards
    Jonathan Lewis

  • Should Bootable Backup HD's Have Free Buffer Space?

    Question: when creating the bootable backups, can each partition be equal in size to the HD it is going to back up, or should there be some amount of free buffer space? If so, what percentage larger would you recommend the backup HD be than the one containing the OS?
    Thanks
    brae

    Brae wrote:
    when creating the bootable backups, can each partition be equal in size to the HD it is going to back up, or should there be some amount of free buffer space?
    The backup partition doesn't have to be any larger than the partition it's backing up.
    You posted your question in the Airport/Time Capsule section. Is there a reason you considered that an appropriate place for it?

  • Buffer Waits on undo tablespace

    Hi,
    I am running Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    After creating awr report under the section:
    Tablespace IO Stats
    ordered by IOs (Reads + Writes) desc
    I noticed that I have 32 Buffer Waits for my undo tablespace.
    UNDOTBS Buffer Waits = 32
    Does anybody know how to reduce the buffer waits for my undo tablespace?
    Thanks!

    F. Munoz Alvarez wrote:
    My idea was to give the OP some links to read and do some research to learn about the topic.

    Generally an admirable strategy. However, even though you have changed the list of references, none of them addresses the point already made by Hermant that the OP has not considered the scale of the issue.
    Again, though, if you follow the links you have posted, one seems to be unreachable, one has nothing to do with undo header waits, and the other three are limited to offering the same (out of date) suggestion about adding rollback segments.
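    For a sense of scale, a sketch along these lines compares the undo-related buffer waits with the total buffer activity over the same period (a handful of waits against millions of gets is usually noise):
    select class, count, time
    from   v$waitstat
    where  class like 'undo%';
    select name, value
    from   v$sysstat
    where  name in ('db block gets', 'consistent gets');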
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." (Stephen Hawking)

  • HT1977 I have free games waiting on my new iPhone 5S - why are they taking so long to load?

    Hi, I installed 3 games from the App Store on my iPhone 5S and they are still "waiting". Anyone know why it takes so long? They were free apps.

    Are you on wifi or cellular data?

  • Performance problem - event : cursor: pin S wait on X

    Hi,
    Below is a 17-minute AWR report from an Oracle PeopleSoft database on a 10.2.0.4 instance on an HP-UX machine.
    During this time the customers complained of poor performance.
    There were 4,104.23 executions per second and 3,784.95 parses per second, which means that almost every statement was parsed; since Soft Parse % = 99.77, it seems that most of the parses were soft parses.
    During those 17 minutes, DB Time = 721.74 minutes, and the "Top 5 Timed Events" section shows "cursor: pin S wait on X" at the top.
    Attached are some details from the AWR report.
    Could you please suggest where to focus?
    Thanks
    WORKLOAD REPOSITORY report for
    DB Name         DB Id    Instance     Inst Num Release     RAC Host
    xxxx          2993006132 xxxx                1 10.2.0.4.0  NO  xxxx
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     18085 25-Mar-10 10:30:41       286      14.9
      End Snap:     18086 25-Mar-10 10:48:39       301      15.1
       Elapsed:               17.96 (mins)
       DB Time:              721.74 (mins)
    Cache Sizes
    ~~~~~~~~~~~                       Begin        End
                   Buffer Cache:     4,448M     4,368M  Std Block Size:         8K
               Shared Pool Size:     2,736M     2,816M      Log Buffer:     2,080K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:          3,831,000.13            271,096.84
                  Logical reads:            164,733.47             11,657.20
                  Block changes:             17,757.42              1,256.59
                 Physical reads:                885.19                 62.64
                Physical writes:                504.92                 35.73
                     User calls:              5,775.09                408.67
                         Parses:              3,784.95                267.84
                    Hard parses:                  8.55                  0.60
                          Sorts:                212.37                 15.03
                         Logons:                  0.77                  0.05
                       Executes:              4,104.23                290.43
                   Transactions:                 14.13
      % Blocks changed per Read:   10.78    Recursive Call %:    24.14
    Rollback per transaction %:    0.18       Rows per Sort:    57.86
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.98       Redo NoWait %:   99.97
                Buffer  Hit   %:   99.47    In-memory Sort %:  100.00
                Library Hit   %:   99.73        Soft Parse %:   99.77
             Execute to Parse %:    7.78         Latch Hit %:   99.77
    Parse CPU to Parse Elapsd %:    3.06     % Non-Parse CPU:   89.23
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   34.44   34.78
        % SQL with executions>1:   76.52   60.40
      % Memory for SQL w/exec>1:   73.75   99.18
    Top 5 Timed Events                                         Avg %Total
    ~~~~~~~~~~~~~~~~~~                                        wait   Call
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    cursor: pin S wait on X           1,378,354      13,462     10   31.1 Concurrenc
    db file sequential read             878,684       8,779     10   20.3   User I/O
    CPU time                                          4,998          11.5
    local write wait                      2,692       2,442    907    5.6   User I/O
    cursor: pin S                     1,932,830       2,270      1    5.2      Other
    Time Model Statistics                  DB/Inst: xxxx/xxxx  Snaps: 18085-18086
    Statistic Name                                       Time (s) % of DB Time
    sql execute elapsed time                             21,690.6         50.1
    parse time elapsed                                   17,504.9         40.4
    DB CPU                                                4,998.0         11.5
    hard parse elapsed time                                 372.1           .9
    connection management call elapsed time                 183.9           .4
    sequence load elapsed time                              125.8           .3
    PL/SQL execution elapsed time                            89.2           .2
    PL/SQL compilation elapsed time                           9.2           .0
    inbound PL/SQL rpc elapsed time                           5.5           .0
    hard parse (sharing criteria) elapsed time                5.5           .0
    hard parse (bind mismatch) elapsed time                   0.5           .0
    failed parse elapsed time                                 0.1           .0
    repeated bind elapsed time                                0.0           .0
    DB time                                              43,304.1          N/A
    background elapsed time                               3,742.3          N/A
    background cpu time                                     114.8          N/A
                                                                      Avg
                                           %Time       Total Wait    wait     Waits
    Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
    Concurrency                 1,413,633   97.5           14,283      10      92.8
    User I/O                      925,010     .3           11,485      12      60.7
    Other                       1,984,969     .2            2,858       1     130.3
    Application                     1,342   46.4            1,873    1396       0.1
    Configuration                  12,116   63.6            1,857     153       0.8
    System I/O                    582,094     .0            1,444       2      38.2
    Commit                         17,253     .6            1,057      61       1.1
    Network                     6,180,701     .0               68       0     405.9
    Wait Events                            DB/Inst: xxxx/xxxx  Snaps: 18085-18086
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    cursor: pin S wait on X           1,378,354  100.0      13,462      10      90.5
    db file sequential read             878,684     .0       8,779      10      57.7
    local write wait                      2,692   91.2       2,442     907       0.2
    cursor: pin S                     1,932,830     .0       2,270       1     126.9
    log file switch (checkpoint           2,669   49.1       1,510     566       0.2
    enq: RO - fast object reuse             542   86.5       1,398    2580       0.0
    log file sync                        17,253     .6       1,057      61       1.1
    control file sequential read        450,043     .0         579       1      29.6
    log file parallel write              17,903     .0         558      31       1.2
    enq: TX - row lock contentio            295   52.2         475    1610       0.0
    buffer busy waits                     7,338    4.4         348      47       0.5
    buffer exterminate                      322   92.5         302     938       0.0
    read by other session                24,694     .0         183       7       1.6
    library cache lock                       59   94.9         167    2825       0.0
    log file sequential read            109,494     .0         161       1       7.2
    latch: cache buffers chains          18,662     .0         149       8       1.2
    log buffer space                      2,493     .0         139      56       0.2
    Log archive I/O                       3,592     .0         131      37       0.2
    free buffer waits                     6,420   99.1         130      20       0.4
    latch free                           42,812     .0         121       3       2.8
    Streams capture: waiting for            845    6.0         106     125       0.1
    latch: library cache                  2,074     .0          96      46       0.1
    db file scattered read               12,437     .0          80       6       0.8
    enq: SQ - contention                    150   14.0          71     471       0.0
    SQL*Net more data from clien        331,961     .0          41       0      21.8
    latch: shared pool                      320     .0          32     100       0.0
    LGWR wait for redo copy               5,307   49.1          29       5       0.3
    SQL*Net more data to client         254,217     .0          17       0      16.7
    control file parallel write           1,038     .0          15      14       0.1
    latch: library cache lock               477     .4          14      29       0.0
    latch: row cache objects              6,013     .0          10       2       0.4
    SQL*Net message to client         5,587,878     .0          10       0     366.9
    latch: redo allocation                1,274     .0           9       7       0.1
    log file switch completion               62     .0           6      92       0.0
    Streams AQ: qmn coordinator               1  100.0           5    4882       0.0
    latch: cache buffers lru cha            434     .0           4       9       0.0
    block change tracking buffer            111     .0           4      35       0.0
    wait list latch free                    135     .0           3      21       0.0
    enq: TX - index contention              132     .0           2      17       0.0
    latch: session allocation               139     .0           2      14       0.0
    latch: object queue header o            379     .0           2       4       0.0
    row cache lock                           15     .0           2     107       0.0
    latch: redo copy                         56     .0           1      17       0.0
    latch: library cache pin                184     .0           1       5       0.0
    write complete waits                     14   28.6           1      51       0.0
    latch: redo writing                     251     .0           1       3       0.0
    enq: MN - contention                      3     .0           1     206       0.0
    enq: CF - contention                     16     .0           0      23       0.0
    log file single write                    24     .0           0      13       0.0
    os thread startup                         3     .0           0     102       0.0
    reliable message                         66     .0           0       4       0.0
    enq: JS - queue lock                      2     .0           0     136       0.0
    latch: cache buffer handles              46     .0           0       5       0.0
    buffer deadlock                          65  100.0           0       4       0.0
    latch: undo global data                  73     .0           0       3       0.0
    change tracking file synchro             24     .0           0       6       0.0
    change tracking file synchro             30     .0           0       3       0.0
    kksfbc child completion                   2  100.0           0      52       0.0
    SQL*Net break/reset to clien            505     .0           0       0       0.0
    db file parallel read                     3     .0           0      30       0.0
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    SQL*Net more data from dblin            127     .0           0       0       0.0
    SQL*Net more data to dblink             319     .0           0       0       0.0
    latch: enqueue hash chains               20     .0           0       2       0.0
    latch: checkpoint queue latc              5     .0           0       5       0.0
    SQL*Net message to dblink             6,199     .0           0       0       0.4
    enq: TX - allocate ITL entry              1     .0           0      22       0.0
    direct path read                      5,316     .0           0       0       0.3
    latch: messages                          24     .0           0       1       0.0
    enq: US - contention                      3     .0           0       4       0.0
    direct path write                     1,178     .0           0       0       0.1
    rdbms ipc reply                           1     .0           0       1       0.0
    library cache load lock                   2     .0           0       0       0.0
    direct path write temp                    3     .0           0       0       0.0
    direct path read temp                     3     .0           0       0       0.0
    SQL*Net message from client       5,587,890     .0     135,002      24     366.9
    wait for unread message on b          7,809   21.8       3,139     402       0.5
    LogMiner: client waiting for        262,604     .1       3,021      12      17.2
    LogMiner: wakeup event for b      1,405,104    2.4       2,917       2      92.3
    Streams AQ: qmn slave idle w            489     .0       2,650    5420       0.0
    LogMiner: wakeup event for p        123,723   32.1       2,453      20       8.1
    Streams AQ: waiting for time              9   55.6       1,790  198928       0.0
    LogMiner: reader waiting for         45,193   51.3       1,526      34       3.0
    Streams AQ: waiting for mess            297   99.3       1,052    3542       0.0
    Streams AQ: qmn coordinator             470   33.8       1,050    2233       0.0
    Streams AQ: delete acknowled            405   32.3       1,049    2591       0.0
    jobq slave wait                         379   77.8         958    2529       0.0
    LogMiner: wakeup event for r         16,591   10.6         125       8       1.1
    SGA: MMAN sleep for componen          3,928   99.3          35       9       0.3
    SQL*Net message from dblink           6,199     .0          31       5       0.4
    single-task message                     108     .0           8      74       0.0
    class slave wait                          3     .0           0       0       0.0
    Background Wait Events                 DB/Inst: xxxx/xxxx  Snaps: 18085-18086
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    log file parallel write              17,916     .0         558      31       1.2
    Log archive I/O                       3,592     .0         131      37       0.2
    log file sequential read              3,636     .0          47      13       0.2
    events in waitclass Other             6,149   42.4          40       7       0.4
    log file switch (checkpoint              30   53.3          19     619       0.0
    control file parallel write           1,038     .0          15      14       0.1
    db file sequential read               1,166     .0           6       5       0.1
    control file sequential read          2,986     .0           6       2       0.2
    latch: shared pool                        4     .0           4     917       0.0
    latch: library cache                      5     .0           3     646       0.0
    free buffer waits                       160   98.8           2      10       0.0
    buffer busy waits                         2     .0           1     404       0.0
    latch: redo writing                      19     .0           0      23       0.0
    log file single write                    24     .0           0      13       0.0
    os thread startup                         3     .0           0     102       0.0
    log buffer space                          7     .0           0      35       0.0
    latch: cache buffers chains              16     .0           0       8       0.0
    log file switch completion                1     .0           0      71       0.0
    latch: library cache lock                 3   66.7           0      11       0.0
    latch: redo copy                          1     .0           0      20       0.0
    direct path read                      5,316     .0           0       0       0.3
    latch: row cache objects                  3     .0           0       1       0.0
    direct path write                     1,174     .0           0       0       0.1
    latch: library cache pin                  3     .0           0       0       0.0
    rdbms ipc message                    20,401   24.2      11,112     545       1.3
    Streams AQ: qmn slave idle w            489     .0       2,650    5420       0.0
    Streams AQ: waiting for time              9   55.6       1,790  198928       0.0
    pmon timer                              379   94.5       1,050    2771       0.0
    Streams AQ: delete acknowled            406   32.3       1,050    2586       0.0
    Streams AQ: qmn coordinator             470   33.8       1,050    2233       0.0
    smon timer                              146     .0       1,039    7118       0.0
    SGA: MMAN sleep for componen          3,928   99.3          35       9       0.3
    Operating System Statistics             DB/Inst: xxxx/xxxx  Snaps: 18085-18086
    Statistic                                       Total
    AVG_BUSY_TIME                                  68,992
    AVG_IDLE_TIME                                  37,988
    AVG_IOWAIT_TIME                                28,529
    AVG_SYS_TIME                                   11,748
    AVG_USER_TIME                                  57,214
    BUSY_TIME                                     552,209
    IDLE_TIME                                     304,181
    IOWAIT_TIME                                   228,489
    SYS_TIME                                       94,253
    USER_TIME                                     457,956
    LOAD                                                2
    OS_CPU_WAIT_TIME                      147,872,604,500
    RSRC_MGR_CPU_WAIT_TIME                              0
    VM_IN_BYTES                                    49,152
    VM_OUT_BYTES                                        0
    PHYSICAL_MEMORY_BYTES                  25,630,269,440
    NUM_CPUS                                            8
    NUM_CPU_SOCKETS                                     8

    mbobak wrote:
    So, this is a parsing-related wait. You already mentioned that you're doing lots of parsing, mostly soft. Do you have the session_cached_cursors parameter set to a reasonable value? In 10g, I believe the default is 50, which is probably not a bad starting point. You may get additional benefits with moderate increases, perhaps to the 100-200 range, though it can be costly to do so. Can the extra parsing be addressed in the application? Is there anything you can do to reduce parsing in the application? When the problem occurs, how is the CPU consumption on the box? Are the CPUs pegged? Are you bottlenecked on CPU resources? Finally, there are bugs around 10.2.0.x and mutexes, so you may want to open an SR with Oracle support and determine whether the root cause is actually a bug.
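    As an aside on the parameter mentioned above, checking and adjusting it might look like this (the value 100 is purely illustrative; it is a static parameter, so the system-wide change takes effect after a restart):
    show parameter session_cached_cursors
    alter system set session_cached_cursors = 100 scope=spfile;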
    Mark,
    I think you might read a little more into the stats than you have done - averaging etc. notwithstanding.
    There are 8.55 "hard" parses per second - which in 17.96 minutes is about 9,500 hard parses - and there are 1.3M pin S wait on X: which is about 130 per hard parse (and 1.9M pin S). So the average statistics might be showing an interesting impact on individual actions.
    The waits on "local write wait" are worth nothing. There are various reasons for this, one of which is the segment header block writes and index root block writes when you truncate a table - which could also be a cause of the "enq: RO - fast object reuse" waits in the body of the report.
    Truncating tables tends to invalidate cursors and cause hard parsing.
    So I would look for code that is popular, executed from a number of sessions, and truncates tables.
    There were some bugs in this area relating to global temporary tables - but they should have been fixed in 10.2.0.4.
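    A starting point for that search might be a couple of sketch queries like these (the filters are assumptions to be tailored to the application, and truncates issued from PL/SQL may need a different search):
    -- cursors that are being invalidated frequently (truncates invalidate dependent cursors)
    select sql_id, invalidations, loads, executions, substr(sql_text, 1, 60) sql_text
    from   v$sqlarea
    where  invalidations > 0
    order by invalidations desc;
    -- and the truncate statements themselves
    select sql_id, executions, substr(sql_text, 1, 60) sql_text
    from   v$sqlarea
    where  upper(sql_text) like 'TRUNCATE TABLE%'
    order by executions desc;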
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    +"Science is more than a body of knowledge; it is a way of thinking"+
    +Carl Sagan+

  • Concept and Usage of the MULTIPLE BUFFER POOL (ORACLE8)

    Product: ORACLE SERVER
    Date written: 1999-05-13
    Concept and Usage of the Multiple Buffer Pool
    1. Why it is needed
    Segments such as tables and indexes may need to be buffered in memory differently depending on how frequently they are used and how important they are. Oracle8 introduces a new buffer cache feature called the multiple buffer pool, which allows different segments to use different buffer pools.
    The multiple buffer pool consists of the 'keep', 'recycle' and 'default' buffer pools, and the internal algorithms that control them are largely the same as with a single buffer pool. That is, the existing CACHE option and the placement of full-table-scan blocks at the LRU end are unchanged; those mechanisms are simply applied to each buffer pool separately.
    2. Types of buffer pool
    The main purpose of the multiple buffer pool is to separate segments that are used in different ways so that they do not interfere with each other. Each pool is intended for the following cases:
    (1) KEEP buffer pool: for segments that should stay in memory as long as possible. Segments that are accessed frequently and whose size is around 10% of the cache size are good candidates for this pool.
    However, just as with the CACHE option in Oracle 7.3, blocks can still be pushed towards the LRU end by newly accessed segments, so permanent caching cannot be guaranteed.
    Choosing an appropriate size is important: it must obviously be larger than the combined size of the objects you want to keep in memory at the same time.
    (2) RECYCLE buffer pool: for segments that are used infrequently, or for segments more than about twice the size of the buffer pool that are accessed through index lookups.
    (3) DEFAULT buffer pool: whatever is not allocated to the two pools above becomes the default buffer pool, so even if there is no KEEP or RECYCLE buffer pool, a default buffer pool always exists.
    This buffer pool behaves like the single buffer pool of Oracle7.
    3. How to configure the buffer pools
    The parameters BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE define these pools, and the DB_BLOCK_BUFFERS and DB_BLOCK_LRU_LATCHES parameters must be considered together with them.
    The syntax is as follows:
    BUFFER_POOL_KEEP=(buffers:<value>,lru_latches:<value>) or
    BUFFER_POOL_KEEP=<value>
    BUFFER_POOL_RECYCLE=(buffers:<value>,lru_latches:<value>) or
    BUFFER_POOL_RECYCLE=<value>
    As the syntax shows, you can specify not only the number of buffers for each pool but also the number of LRU latches. If the latch count is not specified, one latch is allocated to that pool.
    For the DEFAULT pool you cannot explicitly specify the number of blocks or latches; instead it receives the total number of blocks (DB_BLOCK_BUFFERS) and the total number of LRU latches (DB_BLOCK_LRU_LATCHES) minus the values allocated to KEEP and RECYCLE.
    A simple example: assume the initSID.ora file contains the following parameters.
    DB_BLOCK_BUFFERS=1000
    DB_BLOCK_LRU_LATCHES=6
    BUFFER_POOL_KEEP=(buffers:400,lru_latches:2)
    BUFFER_POOL_RECYCLE=100
    In this case the KEEP pool is allocated 400 blocks and 2 LRU latches, and the RECYCLE pool is allocated 100 blocks and 1 LRU latch. The DEFAULT pool then receives 500 (1000-400-100) blocks and 3 (6-2-1) LRU latches.
    Blocks are distributed evenly across the LRU queues of each pool.
    In this example, the DEFAULT pool's LRU 1 gets 167 blocks, LRU 2 gets 167, and LRU 3 gets 166 blocks; the KEEP pool's two latches get 200 blocks each; and the RECYCLE pool's single latch gets 100 blocks.
    This information can be checked through v$buffer_pool; in this example the query returns the following, where set_count is the number of latches allocated to each pool and lo_bnum and hi_bnum give the buffer range.
    SQL> select * from v$buffer_pool;
    NAME      LO_SETID HI_SETID SET_COUNT BUFFERS LO_BNUM HI_BNUM
                     0        0         0       0       0       0
    KEEP             4        5         2     400       0     399
    RECYCLE          6        6         1     100     400     499
    DEFAULT          1        3         3     500     500     999
    Each queue must be allocated at least 50 blocks, otherwise an error occurs. For example, if you set BUFFER_POOL_KEEP=(buffers:100, lru_latches:3), the message "Incorrect parameter specification for BUFFER_POOL_KEEP" is written to the alert.log file, because at most two LRU latches are allowed for 100 blocks.
    4. How to assign a buffer pool
    BUFFER_POOL, a storage-clause parameter new in Oracle8, specifies the default pool that a segment will use. All blocks of the segment then use the specified pool, as in the examples below:
    CREATE TABLE keep_table(t NUMBER(10)) STORAGE (BUFFER_POOL KEEP);
    ALTER TABLE recycle_table STORAGE (BUFFER_POOL RECYCLE);
    BUFFER_POOL cannot be specified for tablespaces or rollback segments, and for clustered tables it can only be specified at the cluster level. For partitioned tables a pool can be specified for each partition.
    Once segments have been assigned to the appropriate pools, various statistics such as the logical hit ratio and free buffer waits can be examined per pool.
    The view that holds these statistics is v$buffer_pool_statistics, which is created by running $ORACLE_HOME/rdbms/admin/catperf.sql.
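    For example, a per-pool check of the hit ratio and free buffer waits against that view might look like this (a sketch using the documented columns of v$buffer_pool_statistics):
    select name,
           db_block_gets, consistent_gets, physical_reads, free_buffer_wait,
           1 - physical_reads / decode(db_block_gets + consistent_gets, 0, null,
                                       db_block_gets + consistent_gets) as hit_ratio
    from   v$buffer_pool_statistics;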

  • Db buffer 9i

    Dear all,
    We are running 9.2.0.4 on Linux (RHEL).
    As per the Statspack report, we are facing low buffer hit ratios and the number of dirty buffers is high. So we are increasing the db buffer cache. Currently dbwr_io_slaves is set to 8 and db_writer_processes is set to 1, and we are planning to increase db_writer_processes to 8 - but if dbwr_io_slaves is already set, then we should not increase the value of db_writer_processes.
    Is that so?
    Kai

    KaiS wrote:
    We are running 9.2.0.4 on Linux (RHEL).
    As per the Statspack report, we are facing low buffer hit ratios and the number of dirty buffers is high. So we are increasing the db buffer cache. Currently dbwr_io_slaves is set to 8 and db_writer_processes is set to 1, and we are planning to increase db_writer_processes to 8 - but if dbwr_io_slaves is already set, then we should not increase the value of db_writer_processes.
    You can't set multiple I/O slaves and multiple database writers at the same time. It's one or the other. Since I/O slaves were an early invention to overcome problems on systems that couldn't handle async writes, it's likely that multiple database writers would be more appropriate. However, check Kevin Closson's blog for discussions on configuring these two features - start at http://kevinclosson.wordpress.com/kevin-closson-index/general-performance-and-io-topics/
    I note that you have said nothing about performance symptoms such as "free buffer waits" that might suggest you have an I/O problem. A low "buffer hit ratio" is a meaningless indicator when it comes to performance; you need to find where you are doing excess work and losing unreasonable amounts of time.
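    As a sketch of the either/or choice (the values are purely illustrative; both parameters are static, and scope=spfile assumes an spfile is in use):
    -- multiple database writers, no I/O slaves ...
    alter system set db_writer_processes = 4 scope=spfile;
    alter system set dbwr_io_slaves = 0 scope=spfile;
    -- ... or a single writer with I/O slaves, but not both
    -- alter system set db_writer_processes = 1 scope=spfile;
    -- alter system set dbwr_io_slaves = 8 scope=spfile;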
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • My wait events - can anyone see a problem?

    Hi,
    This is what I have - can anyone see a problem?
    Thanks
    EVENT                               TOTAL_WAITS  PCT_WAITS TIME_WAIT_SEC PCT_TIME_WAITED TOTAL_TIMEOUTS PCT_TIMEOUTS AVERAGE_WAIT_SEC
    Streams AQ: qmn slave idle wait          148147         .3    4051461.88           38.04           3478          .07            27.35
    Streams AQ: qmn coordinator idle wait    291006        .59    3962890.53           37.21         148370         3.13            13.62
    Streams AQ: waiting for time management or cleanup tasks    948          0     2021434.2           18.98            948          .02          2132.31
    control file parallel write             1292057       2.64     266839.64            2.51              0            0              .21
    log file parallel write                28433394      58.02     134658.55            1.26              0            0                0
    db file sequential read                 8307195      16.95      69830.07             .66              0            0              .01
    free buffer waits                       3117839       6.36      43374.04             .41        3106093        65.55              .01
    log buffer space                          55520        .11       20810.2              .2          20235          .43              .37
    db file scattered read                   583604       1.19      18169.58             .17              0            0              .03
    write complete waits                      17946        .04      17536.66             .16          17941          .38              .98
    log file sync                            282268        .58      10005.35             .09           9369           .2              .04
    enq: RO - fast object reuse               26602        .05       6623.44             .06           2171          .05              .25
    enq: CF - contention                       1839          0       5178.14             .05           1723          .04             2.82
    Streams AQ: qmn coordinator waiting for slave to start    999          0       4311.01             .04            883          .02             4.32
    buffer busy waits                         32464        .07       3898.51             .04           3950          .08              .12
    control file sequential read            2199199       4.49       3558.34             .03              0            0                0
    SGA: MMAN sleep for component shrink     234330        .48       2523.65             .02         234216         4.94              .01
    buffer exterminate                         1583          0       1539.72             .01           1573          .03              .97
    library cache pin                           317          0        927.71             .01            316          .01             2.93
    enq: CI - contention                       1829          0        570.84             .01            159            0              .31
    log file switch completion                 1658          0        517.18               0            425          .01              .31
    enq: TX - row lock contention               257          0         438.8               0            149            0             1.71
    read by other session                     27269        .06        355.17               0             52            0              .01
    os thread startup                          3869        .01        338.67               0             98            0              .09
    latch: shared pool                          760          0        285.87               0              0            0              .38
    latch: row cache objects                    664          0           250               0              0            0              .38
    Data file init write                      16324        .03        231.59               0              0            0              .01
    reliable message                          19189        .04        218.16               0            170            0              .01
    latch: library cache                        483          0        172.51               0              0            0              .36
    SQL*Net message from dblink             1143086       2.33        128.69               0              0            0                0
    latch free                                 6091        .01         121.1               0              0            0              .02
    library cache load lock                      90          0         89.48               0             18            0              .99
    log file single write                      1894          0         69.76               0              0            0              .04
    cursor: pin S wait on X                    5183        .01         55.87               0           5165          .11              .01
    local write wait                           6732        .01         42.58               0              2            0              .01
    log file switch (checkpoint incomplete)          95          0         42.11               0             30            0              .44
    row cache lock                              119          0         30.96               0             10            0              .26
    SQL*Net more data from dblink             17198        .04         25.92               0              0            0                0
    log file switch (private strand flush incomplete)          69          0         17.54               0              5            0              .25
    enq: HW - contention                        180          0         16.53               0              5            0              .09
    enq: PR - contention                          9          0          14.5               0              2            0             1.61
    enq: JS - queue lock                         51          0         12.36               0              0            0              .24
    SQL*Net more data to client               48311         .1         11.66               0              0            0                0
    enq: TM - contention                         12          0         10.66               0              3            0              .89
    class slave wait                           3128        .01          7.03               0              1            0                0
    JS coord start wait                          68          0          6.42               0             68            0              .09
    direct path write                         92712        .19          6.06               0              0            0                0
    control file heartbeat                        1          0          3.91               0              1            0             3.91
    PX Deq: Par Recov Execute                   100          0           3.8               0              0            0              .04
    log file sequential read                   1900          0          2.88               0              0            0                0
    single-task message                          50          0          2.61               0              0            0              .05
    enq: TX - contention                         11          0          2.38               0              0            0              .22
    undo segment extension                  1181001       2.41          1.95               0        1180981        24.92                0
    db file single write                        165          0           1.3               0              0            0              .01
    enq: TX - index contention                   97          0          1.27               0              0            0              .01
    LGWR wait for redo copy                   20840        .04           .66               0              0            0                0
    JS kgl get object wait                        8          0           .63               0              8            0              .08
    SQL*Net message to dblink               1143086       2.33           .55               0              0            0                0
    kksfbc child completion                      14          0           .55               0             11            0              .04
    direct path read temp                    217237        .44           .41               0              0            0                0
    latch: cache buffers chains                2138          0           .37               0              0            0                0
    latch: messages                            1245          0           .27               0              0            0                0
    latch: redo writing                         786          0           .15               0              0            0                0
    PX Deq: Par Recov Reply                      65          0           .09               0              0            0                0
    latch: checkpoint queue latch               171          0           .09               0              0            0                0
    latch: redo allocation                     1029          0           .08               0              0            0                0
    latch: cache buffers lru chain              268          0           .07               0              0            0                0
    SGA: allocation forcing component growth           5          0           .05               0              2            0              .01
    db file parallel read                        83          0           .04               0              0            0                0
    latch: In memory undo latch                 558          0           .04               0              0            0                0
    latch: object queue header operation         338          0           .04               0              0            0                0
    direct path read                           5042        .01           .03               0              0            0                0
    PX Deque wait                                 7          0           .02               0              0            0                0
    direct path write temp                     4691        .01           .02               0              0            0                0
    enq: SQ - contention                          1          0           .02               0              0            0              .02
    latch: session allocation                   190          0           .02               0              0            0                0
    PX Deq: Join ACK                             15          0           .01               0              0            0                0
    cursor: pin S                               894          0           .01               0              0            0                0
    enq: TX - allocate ITL entry                 37          0           .01               0              0            0                0
    kkdlgon                                      15          0           .01               0              0            0                0
    latch: enqueue hash chains                   37          0           .01               0              0            0                0
    library cache lock                            1          0           .01               0              0            0              .01
    Log archive I/O                               1          0             0               0              0            0                0
    PX Deq: Par Recov Change Vector               2          0             0               0              0            0                0
    PX Deq: Signal ACK                            3          0             0               0              0            0                0
    PX Deq: Test for msg                          1          0             0               0              0            0                0
    PX qref latch                                 1          0             0               0              1            0                0
    SQL*Net break/reset to dblink                 5          0             0               0              0            0                0
    SQL*Net more data to dblink                   1          0             0               0              0            0                0
    buffer deadlock                              27          0             0               0             27            0                0
    checkpoint completed                          4          0             0               0              0            0                0
    cursor: mutex S                               3          0             0               0              0            0                0
    cursor: mutex X                               1          0             0               0              0            0                0
    enq: JS - q mem clnup lck                     1          0             0               0              0            0                0
    enq: PS - contention                          2          0             0               0              0            0                0
    enq: US - contention                          1          0             0               0              0            0                0
    instance state change                         2          0             0               0              0            0                0
    latch: library cache lock                     4          0             0               0              0            0                0
    latch: library cache pin                      1          0             0               0              0            0                0
    latch: object queue header heap               8          0             0               0              0            0                0
    latch: undo global data                       3          0             0               0              0            0                0
    recovery read                                39          0             0               0              0            0                0

    Hi,
    If it's for a week then I wouldn't bother. You should probably try to get the same report for these wait events over a much smaller period, say 20-30 minutes, when your db is fully operational. If the wait events (these or any others) still shoot up to high wait times over that short interval, things can be investigated more deeply.
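    As a rough sketch of bracketing such a 20-30 minute window, assuming the Diagnostics Pack (AWR) is licensed (Statspack's spreport.sql works analogously), something like this could be run from SQL*Plus:
    -- take a manual snapshot just before the busy period
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- ... let the workload run for 20-30 minutes ...
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
    -- then generate the report between the two new snapshot ids
    @?/rdbms/admin/awrrpt.sql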
    HTH
    Aman....

  • Latches on Buffer Cache

    Hi Everyone,
    Please throw some light on the latches for the buffer cache. How do they work? Let's say some 10 users are trying to select data from the same block, maybe even the same rows, through different sessions. How does the latch come into play in this case? As it is a simple select statement (shared lock), do the 10 users get the rows at the same time, or is it something like one user gets the latch and the other 9 users spin for it? Please clarify how Oracle handles such situations.
    Thanks for your help in advance.

    user8710159 wrote:
    Hi Everyone,
    Please throw some light on the latches for the buffer cache. How do they work? Let's say some 10 users are trying to select data from the same block, maybe even the same rows, through different sessions. How does the latch come into play in this case? As it is a simple select statement (shared lock), do the 10 users get the rows at the same time, or is it something like one user gets the latch and the other 9 users spin for it? Please clarify how Oracle handles such situations. Leave the 10 users aside for a moment, since that may confuse things a little too much.
    There are two latches that come into the picture where the buffer cache is concerned: the cache buffers lru chain latch and the cache buffers chains latch. I hope you know that the buffer cache is maintained by an LRU algorithm, using the LRU and CKPTQ linked lists. In addition, the buffer cache is maintained with the help of working sets, which divide the buffer cache into partitions, and each cache buffers lru chain latch maintains one set (the number is 2 per set in 11.2, I believe).
    When access to a buffer in the buffer cache is requested, first of all the extent map of the segment is loaded; this tells Oracle which blocks it is looking for. Now that Oracle knows which blocks it needs to read, it starts to look for them in the cache. For this purpose the DBA is used. The DBA is the data block address, which consists of the file# and block#. Oracle hashes this DBA to generate a hash value, which is looked up in a hash table consisting of hash buckets; the cached buffers are linked into these buckets according to their hash values. To do this search, the server process takes the cache buffers chains latch and scans the hash chain. If the buffer is found, it counts as a logical IO and the buffer is given back to you. If not, the cache buffers chains latch is released and the cache buffers lru chain latch is used to find a free buffer in the LRU lists. The free buffer search is carried out across the working sets one by one if no free buffers were found in the previous set. If the required free buffers are found, the data is read from disk with a physical IO and loaded into those buffers. If not, DBWR is asked to flush dirty buffers immediately, and while it is doing that, a free buffer wait is reported for your server process.
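    If you want to see that mapping for yourself, here is a small sketch (it has to be run as SYS because X$BH is an internal view, and the file# and block# below are made-up placeholder values) joining each buffer header to the 'cache buffers chains' child latch that protects its hash chain:
    -- which child latch covers the hash chain holding a given block,
    -- and how contended that latch has been (file 5, block 1234 are placeholders)
    SELECT b.dbarfil, b.dbablk, b.tch AS touch_count,
           l.addr AS latch_addr, l.gets, l.misses, l.sleeps
    FROM   x$bh b, v$latch_children l
    WHERE  b.hladdr  = l.addr
    AND    l.name    = 'cache buffers chains'
    AND    b.dbarfil = 5
    AND    b.dbablk  = 1234;
    Several hash buckets (and therefore several buffers) share one child latch, and readers normally hold it only for the instant needed to pin the buffer, so ten concurrent selects on the same block do not usually spin on it for long.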
    Surely things would get more complex if DMLs got involved as well, so let's leave that aside and hope this clears some doubts for you.
    HTH
    Aman....

  • Wait Class

    Hi,
    As per the documentation, "In general, the addition of wait classes helps direct the DBA more quickly toward the root cause of performance problems."
    How could I trace the root cause of performance problems if it is related to a wait class?
    Thanks,

    userpat wrote:
    Hi,
    As per the documentation, "In general, the addition of wait classes helps direct the DBA more quickly toward the root cause of performance problems."
    How could I trace the root cause of performance problems if it is related to a wait class?
    Thanks, I am not completely sure that I understand your question. The wait class gives you an approximate idea of where the performance problem will be found. You must then further investigate the wait events in that wait class. There are of course potential problems with starting at the wait class (some wait classes have 2 wait events, while others have many, and that could throw off the search for the problem that is impacting performance the most), but at least it provides a starting point. To give you an idea of the wait events in each wait class, here is a SQL statement that was executed on Oracle Database 11.1.0.7:
    SQL> DESC V$EVENT_NAME
    Name                                      Null?    Type
    EVENT#                                             NUMBER
    EVENT_ID                                           NUMBER
    NAME                                               VARCHAR2(64)
    PARAMETER1                                         VARCHAR2(64)
    PARAMETER2                                         VARCHAR2(64)
    PARAMETER3                                         VARCHAR2(64)
    WAIT_CLASS_ID                                      NUMBER
    WAIT_CLASS#                                        NUMBER
    WAIT_CLASS                                         VARCHAR2(64)
    SELECT
      SUBSTR(NAME,1,30) EVENT_NAME,
      SUBSTR(WAIT_CLASS,1,20) WAIT_CLASS
    FROM
      V$EVENT_NAME
    ORDER BY
      SUBSTR(WAIT_CLASS,1,20),
      SUBSTR(NAME,1,30);
    EVENT_NAME                     WAIT_CLASS
    ASM COD rollback operation com Administrative
    ASM mount : wait for heartbeat Administrative
    Backup: sbtbackup              Administrative
    Backup: sbtbufinfo             Administrative
    Backup: sbtclose               Administrative
    Backup: sbtclose2              Administrative
    OLAP DML Sleep                 Application
    SQL*Net break/reset to client  Application
    SQL*Net break/reset to dblink  Application
    Streams capture: filter callba Application
    Streams: apply reader waiting  Application
    WCR: replay lock order         Application
    Wait for Table Lock            Application
    enq: KO - fast object checkpoi Application
    enq: PW - flush prewarm buffer Application
    enq: RC - Result Cache: Conten Application
    enq: RO - contention           Application
    enq: RO - fast object reuse    Application
    enq: TM - contention           Application
    enq: TX - row lock contention  Application
    enq: UL - contention           Application
    ASM PST query : wait for [PM][ Cluster
    gc assume                      Cluster
    gc block recovery request      Cluster
    enq: BB - 2PC across RAC insta Commit
    log file sync                  Commit
    Shared IO Pool Memory          Concurrency
    Streams apply: waiting for dep Concurrency
    buffer busy waits              Concurrency
    cursor: mutex S                Concurrency
    cursor: mutex X                Concurrency
    cursor: pin S wait on X        Concurrency
    Global transaction acquire ins Configuration
    Streams apply: waiting to comm Configuration
    checkpoint completed           Configuration
    enq: HW - contention           Configuration
    enq: SQ - contention           Configuration
    enq: SS - contention           Configuration
    enq: ST - contention           Configuration
    enq: TX - allocate ITL entry   Configuration
    free buffer waits              Configuration
    ASM background timer           Idle
    DIAG idle wait                 Idle
    EMON slave idle wait           Idle
    HS message to agent            Idle
    IORM Scheduler Slave Idle Wait Idle
    JOX Jit Process Sleep          Idle
    ARCH wait for flow-control     Network
    ARCH wait for net re-connect   Network
    ARCH wait for netserver detach Network
    ARCH wait for netserver init 1 Network
    ARCH wait for netserver init 2 Network
    ARCH wait for netserver start  Network
    ARCH wait on ATTACH            Network
    ARCH wait on DETACH            Network
    ARCH wait on SENDREQ           Network
    LGWR wait on ATTACH            Network
    LGWR wait on DETACH            Network
    LGWR wait on LNS               Network
    LGWR wait on SENDREQ           Network
    LNS wait on ATTACH             Network
    LNS wait on DETACH             Network
    LNS wait on LGWR               Network
    LNS wait on SENDREQ            Network
    SQL*Net message from dblink    Network
    SQL*Net message to client      Network
    SQL*Net message to dblink      Network
    SQL*Net more data from client  Network
    SQL*Net more data from dblink  Network
    AQ propagation connection      Other
    ARCH wait for archivelog lock  Other
    ARCH wait for process death 1  Other
    ARCH wait for process death 2  Other
    ARCH wait for process death 3  Other
    ARCH wait for process death 4  Other
    ARCH wait for process death 5  Other
    ARCH wait for process start 1  Other
    Streams AQ: enqueue blocked du Queueing
    Streams AQ: enqueue blocked on Queueing
    Streams capture: waiting for s Queueing
    Streams: flow control          Queueing
    Streams: resolve low memory co Queueing
    resmgr:I/O prioritization      Scheduler
    resmgr:become active           Scheduler
    resmgr:cpu quantum             Scheduler
    ARCH random i/o                System I/O
    ARCH sequential i/o            System I/O
    Archiver slave I/O             System I/O
    DBWR slave I/O                 System I/O
    LGWR random i/o                System I/O
    BFILE read                     User I/O
    DG Broker configuration file I User I/O
    Data file init write           User I/O
    Datapump dump file I/O         User I/O
    Log file init write            User I/O
    Shared IO Pool IO Completion   User I/O
    buffer read retry              User I/O
    cell multiblock physical read  User I/O
    cell single block physical rea User I/O
    cell smart file creation       User I/O
    cell smart index scan          User I/O
    cell smart table scan          User I/O
    cell statistics gather         User I/O
    db file parallel read          User I/O
    db file scattered read         User I/O
    db file sequential read        User I/O
    db file single write           User I/O
    ...So, if the User I/O wait class floats to the top of the wait classes between a known start time and end time, and the Commit wait class is at the bottom of the wait classes when comparing accumulated time, it probably would not make much sense to spend time investigating the wait events in the Commit class... until you realize that there is a single event in the Commit wait class that typically contributes wait time, while there are many in the User I/O wait class.
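    As a rough sketch of that "rank the classes, then drill into the events" approach, the cumulative figures can be pulled straight from V$SYSTEM_WAIT_CLASS and V$SYSTEM_EVENT (these are instance-lifetime totals, with times in centiseconds, so in practice you would difference two snapshots taken at your known start and end times rather than read the raw values):
    -- accumulated wait time per class since instance startup
    SELECT wait_class, total_waits, time_waited
    FROM   v$system_wait_class
    ORDER BY time_waited DESC;
    -- then drill into the individual events of whichever class floats to the top
    SELECT event, total_waits, time_waited
    FROM   v$system_event
    WHERE  wait_class = 'User I/O'
    ORDER BY time_waited DESC;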
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Dbwr consuming high CPU after enabling DirectIO

    Hi,
    DBWR is consuming high CPU. After enabling DirectIO on Solaris SPARC 10, dbwr is eating up almost 1 CPU on a v440 machine, i.e. 19% throughout the day. None of "buffer busy waits", "write complete waits" or "free buffer waits" is in the top 5 wait events, which, to me, means that there is no buffer contention.
    What I understand is that after enabling DirectIO, it takes longer for the IO to complete because pre-DirectIO it would return from the file system cache whereas now it has to return from the disk (and I do see at the OS level that IO has become slow), but should that result in dbwr consuming more CPU?
    In fact, after enabling DirectIO, IO has become very slow, which is another problem, and as a result log file writes have also become slow, which is a third problem. By the way, I am aware that if there were many full table scans (FTS), DirectIO could make the system slow, but there are no FTS in my case. I am also aware that the SGA should be increased after enabling DirectIO, which has been done.
    Thanks

    user12022918 wrote:
    DBWR is consuming high CPU. After enabling DirectIO on Solaris SPARC 10, dbwr is eating up almost 1 CPU on a v440 machine, i.e. 19% throughout the day. 19% is less than 1/5th of a CPU. Or are you referring to 100% being all 4 CPUs?
    What I understand is that after enabling DirectIO, it takes longer for the IO to complete because pre-DirectIO it would return from the file system cache whereas now it has to return from the disk Incorrect. See directio for details.
    Yes, removing the file system cache from the I/O layer for a device can reduce I/O performance if the caller does not perform its own caching. However, direct I/O will eliminate the system cache overheads (and associated CPU resources needed) from a caller (like Oracle) that implements its own sophisticated buffer cache.
    Direct I/O should therefore increase Oracle I/O performance and decrease resource footprint as it eliminates the need for the kernel to maintain a cache for that device.
    In fact, after enabling DirectIO, IO has become very slow, which is another problem, and as a result log file writes have also become slow, which is a third problem. Direct I/O, as per the Sun docs, is an advisory call. It may not place that device in direct I/O mode. It may result in partial direct I/O. So you need to make sure exactly what happened and how successful (partial or complete) this setting was.
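    As a quick sanity check from the Oracle side (only a sketch; whether a given setting actually results in direct and/or asynchronous I/O still depends on the filesystem and mount options, so the OS-level verification above is still needed), the relevant init parameters can be checked from SQL*Plus:
    -- how Oracle has been asked to open datafiles on file systems
    SHOW PARAMETER filesystemio_options
    SHOW PARAMETER disk_asynch_io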
    By the way, I am aware that if there were many full table scans (FTS), DirectIO could make the system slow, but there are no FTS in my case. FTS (multi-block reads/large sequential reads) is slower? This is contrary to Sun's docs, which state:
    Large sequential I/O generally performs best with DIRECTIO_ON, except when a file is sparse or is being extended and is opened with O_SYNC or O_DSYNC.
