'ksu process alloc latch yield'

Hi,
One of my stored procedures gets stuck for a long time (and I have to kill the process), and the 'Event' and 'State' in v$session_wait are 'ksu process alloc latch yield' and 'WAIT' respectively. What can be the reason, and what is the solution?
Thanks,
Surya.

Surya
Google is your friend: for 'ksu process alloc latch yield' it reveals:
"Well known bug 4658188. Also see Note:343180.1"
And on ORACLE-L list, a slightly less helpful comment here:
http://www.freelists.org/archives/oracle-l/08-2005/msg00074.html
If you have access to Metalink, look that up (sorry, I don't have access at the moment).
HTH
Regards Nigel

Similar Messages

  • About the REDO ALLOCATION LATCH and the REDO COPY LATCH

    Product: ORACLE SERVER
    Date: 2000-08-28
    About the REDO ALLOCATION LATCH and the REDO COPY LATCH
    ========================================================================
    When the wait ratio on the redo log buffer is high, performance can be
    improved not only by resizing the log buffer but also by tuning the
    redo-related latches. This note explains how the redo allocation latch and
    the redo copy latch work, how to tune them, and what changed in 8i.
    1. How redo entries are allocated in the redo log buffer
    A server process that changes the database builds a redo entry describing
    the change in its PGA. It then moves the entry into the redo log buffer:
    first it allocates the portion of the redo log buffer to be used, and then
    it copies the redo entry built in the PGA into the allocated space.
    Oracle uses latches to serialize actions on structures inside the SGA.
    That is, allocating the space in the log buffer where a redo entry will be
    saved is done by the process that first acquires the latch, and copying
    the redo entry into the allocated space can likewise only be performed
    after a latch has been acquired.
    2. redo allocation latch
    All redo entries must be written to the redo log file in the order in
    which the work was performed in the database. Allocating redo entries in
    the redo log buffer must therefore always remain serialized, and for that
    reason there is only one redo allocation latch per instance.
    A server process takes this redo allocation latch and reserves the
    location in the redo buffer where its redo entry will be stored.
    In environments where the redo copy latch (described next) cannot be used,
    the copy of the redo entry into the reserved space is also performed while
    holding this redo allocation latch.
    3. redo copy latch (LOG_SIMULTANEOUS_COPIES)
    Using the single redo allocation latch both to allocate space in the redo
    buffer and to copy the redo entry serializes all work on the redo buffer
    and can hurt performance.
    Reserving the space where a redo entry will go must happen strictly in
    time order, so that step has to stay serialized; but once space in the log
    buffer has been reserved, copying redo entries into it can proceed
    concurrently without any problem. For this, multiple redo copy latches can
    be configured so that redo entries can be copied into their reserved areas
    simultaneously. Note, however, that copying is CPU work: with only one CPU
    concurrent work is impossible, so the redo copy latch is pointless there
    and cannot be used.
    When the redo copy latch is in use, work on the redo buffer proceeds as
    follows:
    (1) Server process A takes the redo allocation latch, allocates space in
    the redo buffer, and releases the redo allocation latch.
    (2) Server process B takes the redo allocation latch, allocates new space
    in the redo buffer, and releases the redo allocation latch.
    (3) Process A takes a redo copy latch and copies the redo entry it built
    in its PGA into the space reserved in step (1). At the same time, process
    B takes another redo copy latch and copies its redo entry into the space
    reserved in step (2).
    The number of redo copy latches is set by the log_simultaneous_copies
    parameter. With a single CPU this value is 0, so redo allocation and redo
    copy are performed in one step under the redo allocation latch; the
    default is the value of CPU_COUNT. CPU_COUNT is determined automatically
    by the operating system layer and is not a parameter you specify
    explicitly in the init.ora file.
    In other words, in a multi-CPU environment the redo copy latches are used
    automatically. To use a value other than the default, specify the
    following in the $ORACLE_HOME/dbs/initSID.ora file; there is no maximum
    limit.
    log_simultaneous_copies=n
    To summarize the benefits of the redo copy latch: a process does not have
    to wait for the process that previously allocated redo buffer space to
    finish copying its redo entry, and the copy operations themselves can run
    in parallel.
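    The two-phase allocate-then-copy scheme above can be sketched as a toy
    model. This is only an illustration, assuming Python threading locks stand
    in for latches; `RedoBuffer` and all its names are hypothetical, not
    Oracle internals:

```python
import threading

# Toy model of the redo write path: one allocation latch per instance,
# a pool of copy latches (log_simultaneous_copies of them).
class RedoBuffer:
    def __init__(self, simultaneous_copies=2):
        self.allocation_latch = threading.Lock()   # only one per instance
        self.copy_latches = [threading.Lock() for _ in range(simultaneous_copies)]
        self.buffer = []

    def write_entry(self, entry):
        # Phase 1: reserve a slot under the single allocation latch (serialized,
        # so slots are handed out strictly in order).
        with self.allocation_latch:
            slot = len(self.buffer)
            self.buffer.append(None)
        # Phase 2: copy into the reserved slot under any free copy latch;
        # several copies may run concurrently.
        for latch in self.copy_latches:
            if latch.acquire(blocking=False):      # no-wait attempt
                try:
                    self.buffer[slot] = entry
                finally:
                    latch.release()
                return slot
        # All copy latches busy: fall back to waiting on one of them.
        with self.copy_latches[0]:
            self.buffer[slot] = entry
        return slot

buf = RedoBuffer()
threads = [threading.Thread(target=buf.write_entry, args=(f"entry-{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    The point of the model is that allocation stays serialized while the
    copies overlap, which is exactly why the single-latch scheme hurts on
    multiple CPUs.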
    4. LOG_SMALL_ENTRY_MAX_SIZE
    As shown above, on systems with multiple CPUs, using the redo copy latch
    reduces waits on work inside the redo log buffer. For a very small redo
    entry, however, the copy takes almost no time, and splitting redo
    allocation and redo copy into two steps under separate latches may not
    help performance. If the copy is nearly instantaneous, it can actually be
    better to hold the single redo allocation latch and perform both the
    allocation and the copy under it.
    We therefore need a threshold defining redo entries small enough that the
    redo copy latch need not be used even when it is available. The parameter
    that determines this is log_small_entry_max_size: redo entries larger than
    log_small_entry_max_size use a redo copy latch for the copy, while entries
    smaller than this size perform both the allocation and the copy under the
    redo allocation latch.
    To make even the smallest redo entries use the redo copy latch, set this
    value to 0.
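    The size cutoff can be sketched as a small decision function. A minimal
    sketch, assuming an illustrative threshold of 80 bytes (the real default
    is platform-dependent):

```python
# Which latch handles the copy phase for a redo entry of a given size?
# The 80-byte default here is illustrative, not an Oracle-documented value.
def latch_for_copy(entry_size, log_small_entry_max_size=80):
    if log_small_entry_max_size == 0:
        return "redo copy latch"          # 0 forces the copy latch for any size
    if entry_size > log_small_entry_max_size:
        return "redo copy latch"          # large entry: copy in parallel
    return "redo allocation latch"        # small entry: both phases under one latch

print(latch_for_copy(200))   # redo copy latch
print(latch_for_copy(40))    # redo allocation latch
```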
    5. Tuning points
    On a multi-CPU system, the time to increase the number of redo copy
    latches or to lower log_small_entry_max_size is when contention occurs on
    the redo buffer. To check whether the redo buffer is experiencing
    contention:
    SQL> select name, gets, misses from v$latch where name = 'redo allocation';
    If the ratio of misses to gets in this result exceeds 1%, contention can
    be considered present. This is not an absolute threshold, of course; even
    below 1% you may want to reduce the miss ratio further.
    When contention occurs, raising log_simultaneous_copies to about twice the
    number of CPUs is a reasonable recommendation; log_small_entry_max_size
    should also be reduced toward 0 while you keep monitoring the contention.
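    The 1% rule of thumb above can be checked with a small helper. The
    gets/misses figures below are made up, standing in for the output of the
    v$latch query:

```python
# Apply the 1% rule of thumb to gets/misses from v$latch for 'redo allocation'.
def redo_allocation_contention(gets, misses, threshold=0.01):
    """Return (miss_ratio, contended) using the 1% rule of thumb."""
    miss_ratio = misses / gets
    return miss_ratio, miss_ratio > threshold

# Hypothetical figures: 50,000 misses out of 2,000,000 gets.
ratio, contended = redo_allocation_contention(gets=2_000_000, misses=50_000)
print(f"miss ratio = {ratio:.2%}, contention = {contended}")  # 2.50%, True
```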
    6. Changes in 8i
    In Oracle 8.1 neither of the parameters described above appears. The
    log_small_entry_max_size parameter has been removed entirely, and
    log_simultaneous_copies is always set to twice CPU_COUNT. As explained
    earlier, CPU_COUNT is a parameter determined automatically by the
    operating system layer and represents the number of CPUs.
    Setting log_simultaneous_copies automatically to twice the CPU count can
    be considered close to optimal in almost all environments, so the
    parameter is hidden from display to keep users from changing it.


  • High session allocation latch

    Hi,
    in my 9.2.0.8 database I've found a high 'latch: session allocation'.
    In addition, my tnslsnr process consumes high CPU and paging space, due to the problem described in ML note 557397.1:
    Name            PID  CPU%  PgSp Owner
    tnslsnr     3256468   6.1 780.6 ora9R2
    oracle      5017828   2.9   4.6 ora9R2
    oracle      1953934   2.7   4.3 ora9R2
    Can these two problems be related? Or how can I find the cause of this latch contention?
    Thanks to all.

    Hi,
    I've reduced the number of packages compiled in debug mode.
    Before
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    latch free                                        400,134      12,316    33.21
    CPU time                                                       10,641    28.69
    db file sequential read                           902,778       5,363    14.46
    buffer busy waits                                 211,727       3,456     9.32
    db file scattered read                            322,748       2,011     5.42
                                          Get                            Spin &
    Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
    session allocation            201,688,557   6,075,154      96,070 5981249/91772/2101/32/0
    library cache                 244,452,480   1,392,840     161,791 1234527/154974/3220/119/0
    cache buffers chains        4,075,484,598   1,060,603      79,117 998924/53273/4232/4177/0
    redo allocation                60,138,195     519,535      19,525 500326/18895/312/2/0
    After
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    CPU time                                                       10,684    49.91
    db file sequential read                           775,861       2,898    13.54
    latch free                                        230,618       2,531    11.82
    db file scattered read                            294,938       1,318     6.16
    db file parallel write                            643,377       1,115     5.21
                                          Get                            Spin &
    Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
    session allocation            205,203,998   5,141,166      60,448 5081610/58670/880/6/0
    library cache                 250,958,548   1,390,770     102,917 1289488/99715/1521/46/0
    cache buffers chains        4,023,791,844   1,305,675      34,095 1274720/29256/967/732/0
    library cache pin             225,720,010     407,695       9,712 398039/9600/56/0/0
    There is an improvement, but the session allocation latch is still high.
    Edited by: 842366 on 27-dic-2011 5.53

  • Process order-Variance yield error

    Dear all,
    Kindly see the first image below: in the short/exc tab a yield variance exists.
    But in the second image, in the short/exc tab, no yield variance exists.
    Regards
    Rajasekaran

    Hi Raja,
    What is the order qty? Is it 110? For how much qty has confirmation been done? Is it 100, with scrap entered as 10 during confirmation? What is the status of the process order? Is it DLV?
    Are you using any assembly scrap? Check the material master in the MRP1 view. Can you check all this and confirm?
    Thanks
    Kumar

  • AAA PPP background processes allocation

    Greetings,
    What is the appropriate number of background processes, defined by the command 'aaa processes number', that can satisfy authentication and authorization for 350 concurrent PPP access sessions, using 3725 and 7206 NAS and AAA engine solutions?
    TIA

    If your question is about the limit, there is no theoretical limit as such.

  • How to find Latch and what actions need to be taken when there is a latch

    Hi
    Can you please tell me how to find a latch and what actions need to be taken when there is latch contention?
    Thanks
    Regards,
    RJ.

    1. What is a latch?
    Latches are low level serialization mechanisms used to protect shared
    data structures in the SGA. The implementation of latches is operating
    system dependent, particularly in regard to whether a process will wait
    for a latch and for how long.
    A latch is a type of a lock that can be very quickly acquired and freed.
    Latches are typically used to prevent more than one process from
    executing the same piece of code at a given time. Associated with each
    latch is a cleanup procedure that will be called if a process dies while
    holding the latch. Latches have an associated level that is used to
    prevent deadlocks. Once a process acquires a latch at a certain level it
    cannot subsequently acquire a latch at a level that is equal to or less
    than that level (unless it acquires it nowait).
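    The level rule described above can be sketched as a toy model. This is a
    hypothetical illustration of the ordering discipline, not Oracle's actual
    implementation; the class and level numbers are made up:

```python
# Toy model of deadlock prevention via latch levels: a process holding a latch
# at level N may only request latches at levels strictly greater than N,
# unless it requests the latch nowait.
class LevelledLatchSet:
    def __init__(self):
        self.held_levels = []              # levels currently held by this process

    def acquire(self, level, nowait=False):
        if self.held_levels and level <= max(self.held_levels) and not nowait:
            raise RuntimeError(
                f"would risk deadlock: level {level} <= held {max(self.held_levels)}")
        self.held_levels.append(level)

    def release(self, level):
        self.held_levels.remove(level)

p = LevelledLatchSet()
p.acquire(3)           # first latch, level 3
p.acquire(5)           # higher level: allowed
try:
    p.acquire(4)       # equal-or-lower level: refused (would be allowed nowait)
except RuntimeError:
    blocked = True
```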
    2. Latches vs Enqueues
    Enqueues are another type of locking mechanism used in Oracle.
    An enqueue is a more sophisticated mechanism which permits several concurrent
    processes to have varying degree of sharing of "known" resources. Any object
    which can be concurrently used, can be protected with enqueues. A good example
    is of locks on tables. We allow varying levels of sharing on tables e.g.
    two processes can lock a table in share mode or in share update mode etc.
    One difference is that the enqueue is obtained using an OS specific
    locking mechanism. An enqueue allows the user to store a value in the lock,
    i.e the mode in which we are requesting it. The OS lock manager keeps track
    of the resources locked. If a process cannot be granted the lock because it
    is incompatible with the mode requested and the lock is requested with wait,
    the OS puts the requesting process on a wait queue which is serviced in FIFO.
    Another difference between latches and enqueues is that
    in latches there is no ordered queue of waiters like in enqueues. Latch
    waiters may either use timers to wakeup and retry or spin (only in
    multiprocessors). Since all waiters are concurrently retrying (depending on
    the scheduler), anyone might get the latch, and conceivably the first one to
    try might be the last one to get it.
    3. When do we need to obtain a latch?
    A process acquires a latch when working with a structure in the SGA
    (System Global Area). It continues to hold the latch for the period
    of time it works with the structure. The latch is dropped when the
    process is finished with the structure. Each latch protects a different
    set of data, identified by the name of the latch.
    Oracle uses atomic instructions like "test and set" for operating on latches.
    Processes waiting to execute a part of code for which a latch has
    already been obtained by some other process will wait until the
    latch is released. Examples are redo allocation latches, copy
    latches, archive control latch etc. The basic idea is to block concurrent
    access to shared data structures. Since the instructions to
    set and free latches are atomic, the OS guarantees that only one process gets
    it. Since it is only one instruction, it is quite fast. Latches are held
    for short periods of time and provide a mechanism for cleanup in case
    a holder dies abnormally while holding it. This cleaning is done using
    the services of PMON.
    4. Latches request modes?
    Latches request can be made in two modes: "willing-to-wait" or "no wait". Normally,
    latches will be requested in "willing-to-wait" mode. A request in "willing-to-wait" mode
    will loop, wait, and request again until the latch is obtained. In "no wait" mode the process
    requests the latch, and if one is not available, instead of waiting, another one is requested. Only
    when all fail does the server process have to wait.
    Examples of "willing-to-wait" latches are: shared pool and library cache latches
    An example of a "no wait" latch is the redo copy latch.
    5. What causes latch contention?
    If a required latch is busy, the process requesting it spins, tries again
    and if still not available, spins again. The loop is repeated up to a maximum
    number of times determined by the initialization parameter SPINCOUNT.
    If after this entire loop the latch is still not available, the process must yield
    the CPU and go to sleep. Initially it sleeps for one centisecond. This time is
    doubled in every subsequent sleep.
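    The spin-then-sleep loop described above can be sketched as follows. A
    minimal sketch, assuming a hypothetical `try_get` callable for the atomic
    latch get and illustrative defaults; the real SPINCOUNT and sleep times
    are platform- and version-dependent:

```python
import itertools
import time

# Willing-to-wait latch get: spin up to spin_count times, then sleep and
# retry, doubling the sleep each round (starting at one centisecond).
def get_latch_willing_to_wait(try_get, spin_count=2000, max_sleeps=10):
    sleep_cs = 1                           # first sleep: one centisecond
    for _ in range(max_sleeps):
        for _ in range(spin_count):        # spin without yielding the CPU
            if try_get():
                return True
        time.sleep(sleep_cs / 100.0)       # yield the CPU and sleep
        sleep_cs *= 2                      # sleep time doubles each round
    return False

# Usage: a latch that becomes free on the third attempt.
attempts = itertools.count()
acquired = get_latch_willing_to_wait(lambda: next(attempts) >= 2, spin_count=1)
```

    The spinning is what shows up as extra CPU usage during latch contention;
    the doubling sleeps are what show up as latch free waits.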
    This causes a slowdown to occur and results in additional CPU usage,
    until a latch is available. The CPU usage is a consequence of the
    "spinning" of the process. "Spinning" means that the process continues to
    look for the availability of the latch after certain intervals of time,
    during which it sleeps.
    6. How to identify contention for internal latches?
    Relevant data dictionary views to query
    V$LATCH
    V$LATCHHOLDER
    V$LATCHNAME
    Each row in the V$LATCH table contains statistics for a different type
    of latch. The columns of the table reflect activity for different types
    of latch requests. The distinction between these types of requests is
    whether the requesting process continues to request a latch if it
    is unavailable:
    willing-to-wait  If the latch requested with a willing-to-wait request is
                     not available, the requesting process waits a short time
                     and requests the latch again. The process continues
                     waiting and requesting until the latch is available.
    no wait          If the latch requested with an immediate request is not
                     available, the requesting process does not wait, but
                     continues processing.
    V$LATCHNAME key information:
    GETS Number of successful willing-to-wait requests for
    a latch.
    MISSES Number of times an initial willing-to-wait request
    was unsuccessful.
    SLEEPS Number of times a process waited and requested a latch
    again after an initial willing-to-wait request.
    IMMEDIATE_GETS Number of successful immediate requests for each latch.
    IMMEDIATE_MISSES Number of unsuccessful immediate requests for each latch.
    Calculating latch hit ratio
    To get the Hit ratio for latches apply the following formula:
    "willing-to-wait" Hit Ratio=(GETS-MISSES)/GETS
    "no wait" Hit Ratio=(IMMEDIATE_GETS-IMMEDIATE_MISSES)/IMMEDIATE_GETS
    This number should be close to 1. If it is not, tune according to the latch name.
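    As a quick sanity check, the two formulas can be evaluated on made-up
    V$LATCH figures:

```python
# Compute both hit ratios from the formulas above.
def latch_hit_ratios(gets, misses, immediate_gets, immediate_misses):
    willing_to_wait = (gets - misses) / gets
    no_wait = (immediate_gets - immediate_misses) / immediate_gets
    return willing_to_wait, no_wait

# Hypothetical V$LATCH figures for one latch.
w2w, nw = latch_hit_ratios(gets=1_000_000, misses=8_000,
                           immediate_gets=500_000, immediate_misses=1_000)
print(f"willing-to-wait hit ratio: {w2w:.4f}")   # 0.9920
print(f"no-wait hit ratio:         {nw:.4f}")    # 0.9980
```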
    7. Useful SQL scripts to get latch information
    ** Display System-wide latch statistics.
    column name format A32 truncate heading "LATCH NAME"
    column pid heading "HOLDER PID"
    select c.name,a.addr,a.gets,a.misses,a.sleeps,
    a.immediate_gets,a.immediate_misses,b.pid
    from v$latch a, v$latchholder b, v$latchname c
    where a.addr = b.laddr(+)
    and a.latch# = c.latch#
    order by a.latch#;
    ** Given a latch address, find out the latch name.
    column name format a64 heading 'Name'
    select a.name from v$latchname a, v$latch b
    where b.addr = '&addr'
    and b.latch#=a.latch#;
    ** Display latch statistics by latch name.
    column name format a32 heading 'LATCH NAME'
    column pid heading 'HOLDER PID'
    select c.name,a.addr,a.gets,a.misses,a.sleeps,
    a.immediate_gets,a.immediate_misses,b.pid
    from v$latch a, v$latchholder b, v$latchname c
    where a.addr = b.laddr(+) and a.latch# = c.latch#
    and c.name like '&latch_name%' order by a.latch#;
    8. List of all the latches
    Oracle versions might differ in the latch# assigned to the existing latches.
    The following query will help you to identify all latches and the number assigned.
    column name format a40 heading 'LATCH NAME'
    select latch#, name from v$latchname;

  • Performance of one process is slow (statspack report attached)

    Hi,
    My version is 9.2.0.7 (HP-UX Itanium)
    we have recently migrated the DB from windows 2003 to Unix (HP-UX Itanium 11.23).
    we have one process which usually took 15 mins before migration; now it is taking 25 mins to complete. I did not change anything at the db level: same init.ora parameters, and table and index statistics are up to date.
    Please guide me on what might be wrong at the instance level. I am omitting the SQL query portion of the statspack report for security reasons.
    These statspack snapshots were taken before starting the process and after its completion.
    STATSPACK report for
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    UAT        496948094 UAT             1 9.2.0.7.0   NO      dbt
                  Snap Id     Snap Time      Sessions Curs/Sess Comment
    Begin Snap:         2 15-Jul-09 10:59:05       11       2.7
      End Snap:         3 15-Jul-09 12:42:18       17       4.4
       Elapsed:              103.22 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
                   Buffer Cache:       400M      Std Block Size:          8K
               Shared Pool Size:       160M          Log Buffer:        512K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:             44,830.27            435,162.76
                  Logical reads:             15,223.37            147,771.73
                  Block changes:                198.12              1,923.15
                 Physical reads:                 47.02                456.37
                Physical writes:                  7.05                 68.45
                     User calls:                 50.01                485.42
                         Parses:                 25.99                252.26
                    Hard parses:                  0.24                  2.38
                          Sorts:                  3.40                 33.00
                         Logons:                  0.02                  0.16
                       Executes:                 34.64                336.27
                   Transactions:                  0.10
      % Blocks changed per Read:    1.30    Recursive Call %:     27.05
    Rollback per transaction %:   33.70       Rows per Sort:   1532.57
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:    100.00
                Buffer  Hit   %:   99.69    In-memory Sort %:    100.00
                Library Hit   %:   99.38        Soft Parse %:     99.06
             Execute to Parse %:   24.98         Latch Hit %:    100.00
    Parse CPU to Parse Elapsd %:   48.39     % Non-Parse CPU:     99.53
    Shared Pool Statistics        Begin   End
                 Memory Usage %:   94.56   94.19
        % SQL with executions>1:   74.01   62.51
      % Memory for SQL w/exec>1:   52.89   54.29
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    CPU time                                                          895    48.10
    db file sequential read                           195,597         443    23.83
    log file parallel write                             1,706         260    13.97
    log buffer space                                      415         122     6.54
    control file parallel write                         2,074          66     3.53
    Wait Events for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    db file sequential read           195,597          0        443      2    306.6
    log file parallel write             1,706          0        260    152      2.7
    log buffer space                      415          0        122    293      0.7
    control file parallel write         2,074          0         66     32      3.3
    log file sync                         678          4         51     75      1.1
    db file scattered read              6,608          0         21      3     10.4
    log file switch completion              9          0          2    208      0.0
    SQL*Net more data to client        24,072          0          1      0     37.7
    log file single write                  18          0          0     19      0.0
    db file parallel read                   9          0          0     13      0.0
    control file sequential read          928          0          0      0      1.5
    SQL*Net break/reset to clien          292          0          0      0      0.5
    latch free                             25          2          0      3      0.0
    log file sequential read               18          0          0      2      0.0
    LGWR wait for redo copy                37          0          0      0      0.1
    direct path read                       45          0          0      0      0.1
    direct path write                      45          0          0      0      0.1
    SQL*Net message from client       308,861          0     30,960    100    484.1
    SQL*Net more data from clien       26,217          0          3      0     41.1
    SQL*Net message to client         308,867          0          0      0    484.1
    Background Wait Events for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    log file parallel write             1,706          0        260    152      2.7
    control file parallel write         2,074          0         66     32      3.3
    log buffer space                       10          0          1    149      0.0
    db file scattered read                 90          0          1      7      0.1
    db file sequential read               104          0          1      5      0.2
    log file single write                  18          0          0     19      0.0
    control file sequential read          876          0          0      0      1.4
    log file sequential read               18          0          0      2      0.0
    latch free                              4          2          0      9      0.0
    LGWR wait for redo copy                37          0          0      0      0.1
    direct path read                       45          0          0      0      0.1
    direct path write                      45          0          0      0      0.1
    rdbms ipc message                   7,222      5,888     21,416   2965     11.3
    pmon timer                          2,079      2,079      6,044   2907      3.3
    smon timer                             21         21      6,002 ######      0.0
    Instance Activity Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    Statistic                                      Total     per Second    per Trans
    CPU used by this session                      89,478           14.5        140.3
    CPU used when call started                    89,478           14.5        140.3
    CR blocks created                                148            0.0          0.2
    DBWR buffers scanned                         158,122           25.5        247.8
    DBWR checkpoint buffers written               11,909            1.9         18.7
    DBWR checkpoints                                   3            0.0          0.0
    DBWR free buffers found                      136,228           22.0        213.5
    DBWR lru scans                                    53            0.0          0.1
    DBWR make free requests                           53            0.0          0.1
    DBWR summed scan depth                       158,122           25.5        247.8
    DBWR transaction table writes                     43            0.0          0.1
    DBWR undo block writes                        19,283            3.1         30.2
    SQL*Net roundtrips to/from client            308,602           49.8        483.7
    active txn count during cleanout               6,812            1.1         10.7
    background checkpoints completed                   3            0.0          0.0
    background checkpoints started                     3            0.0          0.0
    background timeouts                            7,204            1.2         11.3
    branch node splits                                 4            0.0          0.0
    buffer is not pinned count                35,587,689        5,746.4     55,780.1
    buffer is pinned count                   202,539,737       32,704.6    317,460.4
    bytes received via SQL*Net from c        106,536,068       17,202.7    166,984.4
    bytes sent via SQL*Net to client          98,286,059       15,870.5    154,053.4
    calls to get snapshot scn: kcmgss            346,517           56.0        543.1
    calls to kcmgas                               42,563            6.9         66.7
    calls to kcmgcs                                7,735            1.3         12.1
    change write time                             12,666            2.1         19.9
    cleanout - number of ktugct calls              9,698            1.6         15.2
    cleanouts and rollbacks - consist                  0            0.0          0.0
    cleanouts only - consistent read               1,161            0.2          1.8
    cluster key scan block gets                   15,789            2.6         24.8
    cluster key scans                              6,534            1.1         10.2
    commit cleanout failures: block l                199            0.0          0.3
    commit cleanout failures: buffer                  69            0.0          0.1
    commit cleanout failures: callbac                  0            0.0          0.0
    commit cleanouts                              40,688            6.6         63.8
    commit cleanouts successfully com             40,420            6.5         63.4
    commit txn count during cleanout               4,652            0.8          7.3
    consistent changes                               150            0.0          0.2
    consistent gets                           93,071,913       15,028.6    145,880.7
    consistent gets - examination              1,487,526          240.2      2,331.6
    cursor authentications                           322            0.1          0.5
    data blocks consistent reads - un                 51            0.0          0.1
    db block changes                           1,226,967          198.1      1,923.2
    db block gets                              1,206,448          194.8      1,891.0
    deferred (CURRENT) block cleanout             13,478            2.2         21.1
    dirty buffers inspected                        9,876            1.6         15.5
    enqueue conversions                               41            0.0          0.1
    enqueue releases                              12,783            2.1         20.0
    enqueue requests                              12,785            2.1         20.0
    enqueue waits                                      0            0.0          0.0
    execute count                                214,538           34.6        336.3
    free buffer inspected                          9,879            1.6         15.5
    free buffer requested                        349,615           56.5        548.0
    hot buffers moved to head of LRU             141,298           22.8        221.5
    immediate (CR) block cleanout app              1,161            0.2          1.8
    immediate (CURRENT) block cleanou             23,894            3.9         37.5
    Instance Activity Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    Statistic                                      Total     per Second    per Trans
    index fast full scans (full)                      19            0.0          0.0
    index fetch by key                           671,512          108.4      1,052.5
    index scans kdiixs1                       56,328,309        9,095.5     88,288.9
    leaf node 90-10 splits                            16            0.0          0.0
    leaf node splits                               2,187            0.4          3.4
    logons cumulative                                105            0.0          0.2
    messages received                              1,653            0.3          2.6
    messages sent                                  1,653            0.3          2.6
    no buffer to keep pinned count                     0            0.0          0.0
    no work - consistent read gets            35,118,594        5,670.7     55,044.8
    opened cursors cumulative                      4,036            0.7          6.3
    parse count (failures)                            43            0.0          0.1
    parse count (hard)                             1,516            0.2          2.4
    parse count (total)                          160,939           26.0        252.3
    parse time cpu                                   421            0.1          0.7
    parse time elapsed                               870            0.1          1.4
    physical reads                               291,165           47.0        456.4
    physical reads direct                             45            0.0          0.1
    physical writes                               43,672            7.1         68.5
    physical writes direct                            45            0.0          0.1
    physical writes non checkpoint                41,379            6.7         64.9
    pinned buffers inspected                           3            0.0          0.0
    prefetched blocks                             88,896           14.4        139.3
    prefetched blocks aged out before                 22            0.0          0.0
    process last non-idle time                    75,777           12.2        118.8
    recursive calls                              114,829           18.5        180.0
    recursive cpu usage                           11,704            1.9         18.3
    redo blocks written                          275,521           44.5        431.9
    redo buffer allocation retries                   419            0.1          0.7
    redo entries                                 623,735          100.7        977.6
    redo log space requests                           10            0.0          0.0
    redo log space wait time                         192            0.0          0.3
    redo ordering marks                                3            0.0          0.0
    redo size                                277,633,840       44,830.3    435,162.8
    redo synch time                                5,185            0.8          8.1
    redo synch writes                                675            0.1          1.1
    redo wastage                                 818,952          132.2      1,283.6
    redo write time                               26,562            4.3         41.6
    redo writes                                    1,705            0.3          2.7
    rollback changes - undo records a                395            0.1          0.6
    rollbacks only - consistent read                  49            0.0          0.1
    rows fetched via callback                    553,910           89.4        868.2
    session connect time                          74,797           12.1        117.2
    session logical reads                     94,278,361       15,223.4    147,771.7
    session pga memory                         2,243,808          362.3      3,516.9
    session pga memory max                     1,790,880          289.2      2,807.0
    session uga memory                         2,096,104          338.5      3,285.4
    session uga memory max                    32,637,856        5,270.1     51,156.5
    shared hash latch upgrades - no w         56,430,882        9,112.0     88,449.7
    sorts (memory)                                21,055            3.4         33.0
    sorts (rows)                              32,268,330        5,210.5     50,577.3
    summed dirty queue length                     53,238            8.6         83.5
    switch current to new buffer                  37,071            6.0         58.1
    table fetch by rowid                      90,385,043       14,594.7    141,669.4
    table fetch continued row                    104,336           16.9        163.5
    table scan blocks gotten                     376,181           60.7        589.6
    table scan rows gotten                     5,103,693          824.1      7,999.5
    table scans (long tables)                         97            0.0          0.2
    table scans (short tables)                    53,485            8.6         83.8
    transaction rollbacks                            247            0.0          0.4
    user calls                                   309,698           50.0        485.4
    user commits                                     423            0.1          0.7
    user rollbacks                                   215            0.0          0.3
    workarea executions - opt                     37,753            6.1         59.2
    write clones created in foregroun                718            0.1          1.1
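As a quick sanity check (a back-of-envelope calculation, not part of the report itself), the buffer cache hit ratio implied by the Instance Activity figures above agrees with the 99.7% shown later in the Buffer Pool Statistics section:

```python
# Figures copied from the Instance Activity Stats above.
consistent_gets = 93_071_913
db_block_gets = 1_206_448
physical_reads = 291_165

logical_reads = consistent_gets + db_block_gets   # = session logical reads
hit_ratio_pct = (1 - physical_reads / logical_reads) * 100
print(f"{hit_ratio_pct:.1f}%")  # -> 99.7%
```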
    Tablespace IO Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    ->ordered by IOs (Reads + Writes) desc
    Tablespace
                     Av      Av     Av                    Av        Buffer Av Buf
             Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
    USERS
           200,144      32    2.3     1.4       22,576        4          0    0.0
    UNDOTBS1
                38       0    9.5     1.0       19,348        3          0    0.0
    SYSTEM
             2,016       0    4.7     1.5          505        0          0    0.0
    TOOLS
                14       0    9.3     1.3        1,237        0          0    0.0
    IMAGES
                 3       0    6.7     1.0            3        0          0    0.0
    INDX
                 3       0    6.7     1.0            3        0          0    0.0
    Buffer Pool Statistics for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> Standard block size Pools  D: default,  K: keep,  R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
                                                               Free    Write  Buffer
         Number of Cache      Buffer    Physical   Physical  Buffer Complete    Busy
    P      Buffers Hit %        Gets       Reads     Writes   Waits    Waits   Waits
    D       49,625  99.7  94,278,286     291,074     43,627       0        0       0
    Instance Recovery Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> B: Begin snapshot,  E: End snapshot
      Targt Estd                                    Log File   Log Ckpt   Log Ckpt
      MTTR  MTTR   Recovery    Actual     Target      Size     Timeout    Interval
       (s)   (s)   Estd IOs  Redo Blks  Redo Blks  Redo Blks  Redo Blks  Redo Blks
    B    38     9       2311      13283      13021      92160      13021
    E    38     7        899       4041       3767      92160       3767
    Buffer Pool Advisory for DB: UAT  Instance: UAT  End Snap: 3
    -> Only rows with estimated physical reads >0 are displayed
    -> ordered by Block Size, Buffers For Estimate (default block size first)
            Size for  Size      Buffers for  Est Physical          Estimated
    P   Estimate (M) Factr         Estimate   Read Factor     Physical Reads
    D             32    .1            3,970          2.94          2,922,389
    D             64    .2            7,940          2.54          2,524,222
    D             96    .2           11,910          2.38          2,365,570
    D            128    .3           15,880          2.27          2,262,338
    D            160    .4           19,850          2.19          2,183,287
    D            192    .5           23,820          1.97          1,962,758
    D            224    .6           27,790          1.30          1,293,415
    D            256    .6           31,760          1.21          1,203,737
    D            288    .7           35,730          1.10          1,096,115
    D            320    .8           39,700          1.06          1,056,077
    D            352    .9           43,670          1.04          1,036,708
    D            384   1.0           47,640          1.02          1,012,912
    D            400   1.0           49,625          1.00            995,426
    D            416   1.0           51,610          0.99            982,641
    D            448   1.1           55,580          0.97            966,874
    D            480   1.2           59,550          0.89            890,749
    D            512   1.3           63,520          0.88            879,062
    D            544   1.4           67,490          0.87            864,539
    D            576   1.4           71,460          0.80            800,284
    D            608   1.5           75,430          0.76            756,222
    D            640   1.6           79,400          0.75            749,473
    PGA Aggr Target Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> B: Begin snap   E: End snap (rows identified with B or E contain data
       which is absolute i.e. not diffed over the interval)
    -> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
    -> Auto PGA Target - actual workarea memory target
    -> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
    -> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
    -> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
    -> %Man W/A Mem    - percentage of workarea memory under manual control
    PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
              100.0              851                         0
                                                 %PGA  %Auto   %Man
      PGA Aggr  Auto PGA   PGA Mem    W/A PGA    W/A    W/A    W/A   Global Mem
      Target(M) Target(M)  Alloc(M)   Used(M)    Mem    Mem    Mem    Bound(K)
    B       320       282       12.6        0.0     .0     .0     .0     16,384
    E       320       281       15.3        0.0     .0     .0     .0     16,384
    PGA Aggr Target Histogram for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> Opt Executions are purely in-memory operations
        Low    High
    Opt Opt    Total Execs Opt Execs 1-Pass Execs M-Pass Execs
         8K     16K         37,010        37,010            0            0
        16K     32K             70            70            0            0
        32K     64K             11            11            0            0
        64K    128K             34            34            0            0
       128K    256K              9             9            0            0
       256K    512K             54            54            0            0
       512K   1024K            536           536            0            0
         1M      2M              7             7            0            0
         2M      4M             24            24            0            0
    PGA Memory Advisory for DB: UAT  Instance: UAT  End Snap: 3
    -> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
       where Estd PGA Overalloc Count is 0
                                           Estd Extra    Estd PGA   Estd PGA
    PGA Target    Size           W/A MB   W/A MB Read/      Cache  Overalloc
      Est (MB)   Factr        Processed Written to Disk     Hit %      Count
            40     0.1          3,269.7             98.2     97.0          0
            80     0.3          3,269.7              9.6    100.0          0
           160     0.5          3,269.7              9.6    100.0          0
           240     0.8          3,269.7              0.0    100.0          0
           320     1.0          3,269.7              0.0    100.0          0
           384     1.2          3,269.7              0.0    100.0          0
           448     1.4          3,269.7              0.0    100.0          0
           512     1.6          3,269.7              0.0    100.0          0
           576     1.8          3,269.7              0.0    100.0          0
           640     2.0          3,269.7              0.0    100.0          0
           960     3.0          3,269.7              0.0    100.0          0
         1,280     4.0          3,269.7              0.0    100.0          0
         1,920     6.0          3,269.7              0.0    100.0          0
         2,560     8.0          3,269.7              0.0    100.0          0
              -------------------------------------------------------------
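Per the advisory's own guidance, you would pick the smallest pga_aggregate_target whose estimated overallocation count is 0 and where no workarea data spills to disk. A quick reading of the table above (values copied from the report):

```python
# (Target MB, Estd Extra W/A MB Read/Written, Estd PGA Overalloc Count)
advisory = [
    (40, 98.2, 0), (80, 9.6, 0), (160, 9.6, 0),
    (240, 0.0, 0), (320, 0.0, 0), (384, 0.0, 0),
]
smallest_ok = min(mb for mb, extra, over in advisory if extra == 0 and over == 0)
print(smallest_ok)  # -> 240
```

So 240 MB already gives a 100% PGA cache hit with no overallocation, meaning the current 320 MB target has headroom.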

    Rollback Segment Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    ->A high value for "Pct Waits" suggests more rollback segments may be required
    ->RBS stats may not be accurate between begin and end snaps when using Auto Undo
      management, as RBS may be dynamically created and dropped as needed
            Trans Table       Pct   Undo Bytes
    RBS No      Gets        Waits     Written        Wraps  Shrinks  Extends
         0           22.0    0.00               0        0        0        0
         1          650.0    0.00       1,868,300        0        0        0
         2        1,987.0    0.00       4,613,768        9        0        7
         3        6,070.0    0.00      24,237,494       37        0       36
         4          223.0    0.00         418,942        3        0        1
         5          621.0    0.00       1,749,086       11        0       11
         6        8,313.0    0.00      48,389,590       54        0       52
         7        7,248.0    0.00      14,477,004       19        0       17
         8        1,883.0    0.00      12,332,646       14        0       12
         9        2,729.0    0.00      17,820,450       19        0       19
        10        1,009.0    0.00       2,857,150        5        0        3
    Rollback Segment Storage for DB: UAT  Instance: UAT  Snaps: 2 -3
    ->Opt Size should be larger than Avg Active
    RBS No    Segment Size      Avg Active    Opt Size    Maximum Size
         0         450,560               0                         450,560
         1       8,511,488           6,553                       8,511,488
         2       8,511,488       4,592,363                      18,997,248
         3      29,351,936      14,755,792                      29,483,008
         4       2,220,032         105,188                       2,220,032
         5       3,137,536       3,416,104                      54,648,832
         6      55,697,408      21,595,184                      55,697,408
         7      26,337,280       9,221,107                      26,337,280
         8      13,754,368       5,142,374                      13,754,368
         9      22,011,904      10,220,526                      22,011,904
        10       4,317,184       3,810,892                      13,754,368
    Undo Segment Summary for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> Undo segment block stats:
    -> uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
    -> eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed
    Undo           Undo        Num  Max Qry     Max Tx Snapshot Out of uS/uR/uU/
    TS#         Blocks      Trans  Len (s)   Concurcy  Too Old  Space eS/eR/eU
       1         19,305    109,683      648          3        0      0 0/0/0/0/0/0
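One common use of the Undo Segment Summary is estimating how much undo space a given undo_retention requires: undo blocks generated per second × retention × block size. A sketch using the 19,305 undo blocks from above; the snapshot interval, retention, and block size here are assumptions, not values taken from the report:

```python
undo_blocks  = 19_305   # from the Undo Segment Summary above
interval_s   = 6_190    # assumed snapshot interval (~103 minutes)
retention_s  = 900      # assumed undo_retention setting
block_size   = 8_192    # assumed db_block_size

blocks_per_s = undo_blocks / interval_s
undo_bytes   = blocks_per_s * retention_s * block_size
print(f"{undo_bytes / 2**20:.0f} MB")
```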
    Undo Segment Stats for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> ordered by Time desc
                         Undo      Num Max Qry   Max Tx  Snap   Out of uS/uR/uU/
    End Time           Blocks    Trans Len (s)    Concy Too Old  Space eS/eR/eU
    15-Jul 12:32           10   13,451       3        2       0      0 0/0/0/0/0/0
    15-Jul 12:22           87   13,384       6        1       0      0 0/0/0/0/0/0
    15-Jul 12:12        3,746   13,229      91        1       0      0 0/0/0/0/0/0
    15-Jul 12:02        8,949   13,127     648        1       0      0 0/0/0/0/0/0
    15-Jul 11:52        1,496   10,476      24        1       0      0 0/0/0/0/0/0
    15-Jul 11:42        3,895   10,441       6        1       0      0 0/0/0/0/0/0
    15-Jul 11:32          531    9,155       1        3       0      0 0/0/0/0/0/0
    15-Jul 11:22            0    8,837       3        0       0      0 0/0/0/0/0/0
    15-Jul 11:12            4    8,817       3        1       0      0 0/0/0/0/0/0
    15-Jul 11:02          587    8,766       2        2       0      0 0/0/0/0/0/0
    Latch Activity for DB: UAT  Instance: UAT  Snaps: 2 -3
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
      willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
                                               Pct    Avg   Wait                 Pct
                                  Get          Get   Slps   Time       NoWait NoWait
    Latch                       Requests      Miss  /Miss    (s)     Requests   Miss
    Consistent RBA                    1,708    0.0             0            0
    FIB s.o chain latch                  40    0.0             0            0
    FOB s.o list latch                  467    0.0             0            0
    SQL memory manager latch              1    0.0             0        2,038    0.0
    SQL memory manager worka        174,015    0.0             0            0
    active checkpoint queue           2,081    0.0             0            0
    archive control                       1    0.0             0            0
    cache buffer handles            162,618    0.0             0            0
    cache buffers chains        190,111,507    0.0    0.2      0      426,778    0.0
    cache buffers lru chain         425,142    0.0    0.2      0       65,895    0.0
    channel handle pool latc            202    0.0             0            0
    channel operations paren          4,405    0.0             0            0
    checkpoint queue latch          228,932    0.0    0.0      0       41,321    0.0
    child cursor hash table          18,320    0.0             0            0
    commit callback allocati              4    0.0             0            0
    dml lock allocation               2,482    0.0             0            0
    dummy allocation                    204    0.0             0            0
    enqueue hash chains              25,615    0.0             0            0
    enqueues                         15,416    0.0             0            0
    event group latch                   104    0.0             0            0
    hash table column usage             410    0.0             0      191,319    0.0
    internal temp table obje          1,048    0.0             0            0
    job_queue_processes para            103    0.0             0            0
    ktm global data                      21    0.0             0            0
    lgwr LWN SCN                      3,215    0.0    0.0      0            0
    library cache                 1,657,451    0.0    0.0      0        1,479    0.1
    library cache load lock           1,126    0.0             0            0
    library cache pin             1,112,420    0.0    0.0      0            0
    library cache pin alloca        670,952    0.0    0.0      0            0
    list of block allocation          2,748    0.0             0            0
    loader state object free             36    0.0             0            0
    longop free list parent               1    0.0             0            1    0.0
    messages                         19,427    0.0             0            0
    mostly latch-free SCN             3,229    0.3    0.0      0            0
    multiblock read objects          15,022    0.0             0            0
    ncodef allocation latch              99    0.0             0            0
    object stats modificatio             28    0.0             0            0
    post/wait queue                   1,810    0.0             0        1,102    0.0
    process allocation                  202    0.0             0          104    0.0
    process group creation              202    0.0             0            0
    redo allocation                 629,175    0.0    0.0      0            0
    redo copy                             0                    0      623,865    0.0
    redo writing                     11,487    0.0             0            0
    row cache enqueue latch         197,626    0.0             0            0
    row cache objects               201,089    0.0             0          642    0.0
    sequence cache                      348    0.0             0            0
    session allocation                3,634    0.1    0.0      0            0
    session idle bit                621,031    0.0             0            0
    session switching                    99    0.0             0            0
    session timer                     2,079    0.0             0            0
    shared pool                     786,331    0.0    0.1      0            0
    sim partition latch                   0                    0          193    0.0
    simulator hash latch          5,885,552    0.0             0            0
    simulator lru latch              12,981    0.0             0       66,129    0.0
    sort extent pool                    120    0.0             0            0
    transaction allocation              249    0.0             0            0
    transaction branch alloc             99    0.0             0            0
    undo global data                 27,867    0.0             0            0
    user lock                           396    0.0             0            0
    Latch Sleep breakdown for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> ordered by misses desc
                                          Get                            Spin &
    Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
    cache buffers lru chain           425,142          82          15 67/15/0/0/0
    library cache                   1,657,451          76           3 73/3/0/0/0
    shared pool                       786,331          37           2 35/2/0/0/0
    redo allocation                   629,175          31           1 30/1/0/0/0
    cache buffers chains          190,111,507          21           4 19/0/2/0/0
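The sleep breakdown above can be turned into ratios: a miss means the latch was busy on the first get, and sleeps per miss shows how often spinning was not enough and the process had to sleep. A small sketch (numbers copied from the table above):

```python
# (Get Requests, Misses, Sleeps) from the Latch Sleep breakdown above.
latches = {
    "cache buffers lru chain": (425_142, 82, 15),
    "library cache":           (1_657_451, 76, 3),
    "shared pool":             (786_331, 37, 2),
    "redo allocation":         (629_175, 31, 1),
    "cache buffers chains":    (190_111_507, 21, 4),
}
for name, (gets, misses, sleeps) in latches.items():
    miss_pct = misses / gets * 100   # how often the first get failed
    slps = sleeps / misses           # how often spinning did not succeed
    print(f"{name:25s} miss {miss_pct:.4f}%  sleeps/miss {slps:.2f}")
```

All miss rates here are well under 0.05%, consistent with the report's own guidance that "Pct Misses" should be very close to 0.0.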
    Latch Miss Sources for DB: UAT  Instance: UAT  Snaps: 2 -3
    -> only latches with sleeps are shown
    -> ordered by name, sleeps desc
                                                         NoWait              Waiter
    Latch Name               Where                       Misses     Sleeps   Sleeps
    cache buffers chains     kcbget: pin buffer               0          2        0
    cache buffers chains     kcbgtcr: fast path               0          2        0
    cache buffers lru chain  kcbbiop: lru scan                2         12        0
    cache buffers lru chain  kcbbwlru                         0          2        0
    cache buffers lru chain  kcbbxsv: move to being wri       0          1        0
    library cache            kgllkdl: child: cleanup          0          1        0
    library cache            kglpin: child: heap proces       0          1        0
    library cache            kglpndl: child: before pro       0          1        0
    redo allocation          kcrfwi: more space               0          1        0
    shared pool              kghalo                           0          2        0
              -------------------------------------------------------------

  • PMON failed to acquire latch, see PMON dump

    Hi,
    The database alert log shows the following errors.
    PMON failed to acquire latch, see PMON dump
    (the same message is repeated many times)
    During this time we were executing
    'exec sys.utl_recomp.recomp_parallel(8)' in two sessions.
    One session hung while the other kept running in the background.
    Kindly guide us in tracking down this issue.
    Thanks and Regards
    Perumal Swamy.R

    Hi, I have the same problem on a Windows 2003 server, and in my alert log file there is this message:
    Wed Aug 10 11:20:40 2005
    kwqiclode: Error 1403 happened during loading of queue
    ORA-1403 encountered when generating server alert SMG-3503
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:21:43 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:22:46 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:23:49 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:24:49 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:25:52 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:26:55 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:27:55 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:28:58 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:29:40 2005
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:30:01 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:31:04 2005
    kwqiclode: Error 1403 happened during loading of queue
    ORA-1403 encountered when generating server alert SMG-3503
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:32:07 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    Wed Aug 10 11:33:10 2005
    kwqiclode: Error 1403 happened during loading of queue
    kwqiclode: Error 1403 happened during loading of queue
    and in my pmon trace file d:\syges\bdump\syges_pmon_240.trc there is the following message:
    *** 2005-08-09 19:27:50.312
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.343
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.359
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.375
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.390
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.406
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.421
    PMON unable to acquire latch 2f0b508 process allocation
    possible holder pid = 17 ospid=2216
    *** 2005-08-09 19:27:50.437
    PMON unable to acquire latch 2f0b508 process allocation
    *** 2005-08-09 19:27:50.453
    The version of my database:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Prod
    PL/SQL Release 10.1.0.4.0 - Production
    CORE 10.1.0.4.0 Production
    TNS for 32-bit Windows: Version 10.1.0.4.0 - Production
    NLSRTL Version 10.1.0.4.0 - Production
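    The trace shows PMON repeatedly failing to get the "process allocation" latch held by pid 17 (ospid 2216). As a rough diagnostic sketch (assuming the instance is still responsive enough to query from another session; `:holder_sid` is a placeholder you would bind yourself), the current holder can be cross-checked via v$latchholder:

    ```sql
    -- Hedged diagnostic sketch: find the session holding the
    -- "process allocation" latch (address 2f0b508 in the trace above).
    SELECT h.pid, h.sid, h.laddr, h.name
      FROM v$latchholder h
     WHERE h.name = 'process allocation';

    -- Then check what that session is waiting on (sid from the query above):
    SELECT sid, event, state, seconds_in_wait
      FROM v$session_wait
     WHERE sid = :holder_sid;
    ```

    This only locates the blocker; per the bug/note referenced above, the underlying cause on 10.1.0.4 is likely the known defect rather than anything that session is doing wrong.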

  • OPP(Output Post Processor) not processing the report (XML Publisher)

    Hi,
    I have defined a concurrent program (XML Publisher report) and ran it, but it failed with the errors below. I am running the report in Oracle E-Business Suite 11.5.10.2, through a concurrent manager other than the Standard Manager. My guess is that the OPP (Output Post Processor) is not processing request output coming from a different manager/work shift, since requests run through the Standard Manager are all OK.
    In OAM (Oracle Applications Manager) -> OPP, there is only 1 process allocated for both Actual and Target. If we increase the number of processes, will it work?
    /home/app/oracle/prodcomn/temp/pasta19356_0.tmp:
    /home/app/oracle/prodcomn/temp/pasta19356_1.tmp: No such file or directory
    Pasta: Error: Print failed. Command=lp -c -dkonica4 /home/app/oracle/prodcomn/temp/pasta19356_1.tmp
    Pasta: Error: Check printCommand/ntPrintCommand in pasta.cfg
    Pasta: Error: Preprocess or Print command failed!!!
    Anybody who has experienced similar issue?
    Thanks in advance.
    Rownald

    Hello,
    Just some additional test info. We have 2 concurrent managers which I think are affecting the XML report output - the Standard Manager (running 24 hours) and a Warehouse manager (9am-4:15pm).
    When I run the report before or after the Warehouse manager work shift (9am-4:15pm), the output is fine - meaning the PDF is generated and Pasta printing is OK. However, when the report is run between 9am and 4:15pm, it only shows XML output with the Pasta printing error above. I also found that re-opening the output (from a run prior to the Warehouse work shift) during the 9am-4:15pm period also results in just XML output instead of the previous PDF.
    Has anybody experienced a similar issue like this? Any ideas? The report is not directly defined as an "inclusion" in the Warehouse manager, only the program calling it. Could multiple concurrent managers have any effect on the XML Publisher output?
    Thanks in advance for your ideas..
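    On the original question about increasing OPP processes: as a sketch (assuming a standard 11.5.10.2 environment, where the OPP service queue is named FNDCPOPP), the current Actual/Target counts can be checked from SQL before adjusting them in OAM:

    ```sql
    -- Hedged sketch: inspect the Output Post Processor queue (FNDCPOPP).
    -- max_processes corresponds to Target, running_processes to Actual.
    SELECT concurrent_queue_name, max_processes, running_processes
      FROM fnd_concurrent_queues
     WHERE concurrent_queue_name = 'FNDCPOPP';
    ```

    Raising the Target (and restarting the OPP service) is a common first step when requests queue up behind a single OPP process, though it may not by itself explain the work-shift interaction described above.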

  • Return Process in IS-Retail

    Hi,
    I have a doubt about the return processes in allocation.
    Process:
    - Allocation Table created
    - WA08 (Follow on doc created) PO & STO
    - WF10 Collective PO
    - PGR in DC --> Distributes to store
    The above is the Procurement Process.
    Now the question is -
    What are the different return processes from store to vendor?
    Thanks,
    Kallol.

    Hi,
    There are two processes that come to my mind; there can be others depending on business needs.
    1. Return PO from Store to Vendor.
    2. Return Allocation --> Return STO --> GR in DC --> Return PO to Vendor --> GI from DC
    Cheers
    Barry

  • Process Log - All Processes generates no list on 10.1.3.4

    Hi,
    We just upgraded to 10.1.3.4 and, unlike our previous version - 10.1.3.1 - when I select "All Processes" on the "Bpel Processes" pull down, no events are listed.
    I have to select a specific BPEL process in order to see any events.
    Is there any way to list the latest chronological list of events for all Processes in 10.1.3.4 (as in 10.1.3.1)?
    Thank you,
    AG

    Our DBAs upgraded another server last night, to 10.1.3.4, MLR#8, and, still, when I go to the "Administration" - "Process Log", select "All Processes" and click Filter, nothing is shown.
    Previous versions would list all activities in chronological order. This one shows a blank.
    If I select a specific Bpel process, a list is generated as expected - that part is working ok.
    It's the "All Processes" choice that yields no information.
    Thank you for any help,
    AG

  • SC3.1/2xV40z: java process eating memory

    I've got a 2-node cluster (SC 3.1) running on Solaris 10 x86 6/06. Uptime is about 171 days. I noticed a java process allocating about 1.3 GB of RAM. This happens only on one node; the other one shows at most 256 MB allocated by a particular java process.
    Is there a memory leak in the SC 3.1 software?

    Which java process is it? If it is cacao, then you've hit CR 6359362. It doesn't look like there is a fix just now, but the workaround is to disable cacao from starting up. See the CR for details.
    Tim
    ---

  • Reg - Process chain

    Hello Gurus,
    I am working on process chains and I am new to this.
    I have some basic doubts. Can anyone clarify them?
    1. What's the difference between an event and a job?
    2. How can we find out whether a particular chain is a meta chain or a local chain?
    3. In case a chain fails, how can we restart the process? What are the steps?
    4. Where can we find the process chain status (i.e., whether it is running successfully or unsuccessfully)? Is there a transaction code to know the status?
    5. Tell me what things I shouldn't do (as precautionary steps).
    Thanks in advance. It would be a great help if somebody could clarify my doubts and forward docs to my email id, if you have any real-time docs.
    I will assign points.
    email id - [email protected]
    Brgds
    Kumar

    hi,
    1. An event can trigger a job. After a job has finished, an event is raised and a new job is triggered (or not).
    2. Check the starter. Local chains have the option "started by other chain" and can only be started directly after changing the starter.
    3. It depends: in NW2004 you process the chain manually or restart the chain completely. Only a few process types can be repaired; recalling the settings from where the chain crashed is not possible (with the standard types ...)
    In NW2004s with BW 7.0 it is possible to start a chain from where it failed.
    4. You have 3 views for process chains in RSPC: create, check and protocol (the paper-like button). Here you can see the protocols for your chain: whether it has been activated, whether it is running, etc. You can also go to transaction SLG1 to look for process chain protocols.
    5. Transportation fails in most cases, so allow maintenance in production.
    Never put a process chain live untested. Especially monitor the number of processes allocated.
    (only basic rules)
    hth
    cheers
    sven

  • Set loss quantity to a fixed percentage in Process Order Confirmation - COR6N

    Hi Experts,
    How is the loss quantity percentage fixed at the time of process order confirmation?
    I want to limit the loss quantity to a certain percentage of the total process order quantity.
    For example:
    If the total processed quantity = 100
    Yield quantity = 20
    Loss = 80
    This kind of scenario should not happen, since the loss quantity is much higher than the yield quantity. I want to limit the loss quantity to a particular percentage (say 20-30%) of the total processed quantity so that the yield quantity will be higher than the loss quantity by a substantial amount. Are there any possibilities of doing this through configuration? If not, what are the other options?
    NB: Scrap quantity is not maintained at the time of  process order confirmation.
    Regards,
    Umapathy.

    Hi Umapathy,
    Yield and scrap are user entry fields; you can restrict the entered quantity by changing the underdelivery and overdelivery tolerance messages to errors in the OPK4 configuration. That said, any quantity can still be entered in either field (yield or scrap).
    OR
    You can define the operation scrap. Read the below doc for more details.
    http://scn.sap.com/docs/DOC-52834
    OR
    Another option is to use exit and write your logic based on input yield and loss.
    Thanks & Regards,
    Ramagiri

  • Increase of work processes for data transfer for specific tables

    Hi,
    We are using TDMS for transfer of around 1 month's data from our ECC Quality system to ECC test system. The test system was built using shell creation and the transfer started.
    We have allocated 13 batch processes in the central system and the receiver system. There are currently two tables for which the transfer is running; the transfer is complete for all other tables. But even though there are 13 processes available, only 3 processes are allocated to one table and 2 processes to the other for the transfer.
    Since the time is dependent on the complete transfer of these two tables and since there are 8 free processes available, we would like to assign the remaining processes on these tables so that the transfer is faster.
    Please guide me if there is any way to change the same.
    Regards,
    Ragav

    Hi,
    Thanks to Sandeep and Pankaj for the replies.
    I've started data transfer in CRM landscape and in that i was able to define the read method, write method and change some technical settings for transfer.
    In the Technical settings for transfer option, i was able to increase the parallel transfer batch processes to be run in parallel.
    Regards,
    Ragav
