Locking in RAC

Hi. In my RAC database I am frequently facing problems with locked objects. At the moment I kill the sessions of the users holding the locks, but is there a permanent solution?
Also, can anyone give me a script to find the locking/locked objects in a two-node cluster environment (RAC)?

user8943492 wrote:
Hi. In my RAC database I am frequently facing problems with locked objects. At the moment I kill the sessions of the users holding the locks, but is there a permanent solution?
Locks are caused by application code, e.g. a client issuing an "update" SQL, or a client issuing a "lock table" SQL, etc.
Locks can also be caused by the data model implementation - this becomes a problem when the design's implementation is lacking, e.g. a referenced foreign key column is not indexed, or bitmap indexes are used incorrectly.
These are not really Oracle problems - they relate to implementation and coding, and need to be addressed at the design and application level. That is the correct permanent solution.
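For the requested script, here is a minimal sketch that lists blocked sessions and their blockers cluster-wide; it assumes 10gR2 or later, where gv$session exposes blocking_session and blocking_instance:
-- Minimal sketch: blocked sessions and their blockers across all instances.
-- Assumes 10.2+ (blocking_instance was added in 10gR2).
select s.inst_id,
       s.sid,
       s.serial#,
       s.username,
       s.event,
       s.blocking_instance,
       s.blocking_session,
       s.row_wait_obj#,    -- object being waited on (-1 if not row-related)
       s.seconds_in_wait
from gv$session s
where s.blocking_session is not null
order by s.blocking_instance, s.blocking_session;
Killing the holders remains a stopgap; as noted above, the permanent fix belongs in the application and data model.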

Similar Messages

  • Get the rowid of the row locked by a session in a RAC database

    I am a developer, not a DBA, and I need the correct query to find the exact rowid of a locked record in a table. This is for a RAC database, and the locked record can come from a web form in Oracle Application Server. When I try to get the rowid, I get the following error:
    ORA-01410 - Invalid row id
    For my criteria the output is Dbms_Rowid.rowid_create(1, -1, 36, 7845, 0) - why do I get -1 for ROW_WAIT_OBJ#?
    Additional information: the lock type is DML, the lock mode is Row Exclusive, the table is locked, and the program is a web Oracle Forms execution.
    I am running the query on Oracle Database 11g Enterprise Edition Release 11.1.0.7.0.
    How can I get the correct rowid? Below is the query I have:
    select vs.inst_id,
    vs.audsid audsid,
    locks.sid sid,
    locks.type,
    locks.id1 id1,
    locks.id2 id2,
    locks.lmode lmode,
    locks.request request,
    locks.ctime ctime,
    locks.block block,
    vs.serial# serial#,
    vs.username oracle_user,
    vs.osuser os_user,
    vs.program program,
    vs.module module,
    vs.action action,
    vs.process process,
    decode(locks.lmode,
    0, '0 None',
    1, '1 NULL',
    2, '2 Row Share',
    3, '3 Row Exclusive',
    4, '4 Share',
    5, '5 Share Row Exclusive',
    6, '6 Exclusive', '?') lock_mode_held,
    decode(locks.request,
    0, '0 None',
    1, '1 NULL',
    2, '2 Row Share',
    3, '3 Row Exclusive',
    4, '4 Share',
    5, '5 Share Row Exclusive',
    6, '6 Exclusive', '?') lock_mode_requested,
    decode(locks.type,
    'MR', 'Media Recovery',
    'RT', 'Redo Thread',
    'UN', 'User Name',
    'TX', 'Transaction',
    'TM', 'DML',
    'UL', 'PL/SQL User Lock',
    'DX', 'Distributed Xaction',
    'CF', 'Control File',
    'IS', 'Instance State',
    'FS', 'File Set',
    'IR', 'Instance Recovery',
    'ST', 'Disk Space Transaction',
    'TS', 'Temp Segment',
    'IV', 'Library Cache Invalidation',
    'LS', 'Log Start or Log Switch',
    'RW', 'Row Wait',
    'SQ', 'Sequence Number',
    'TE', 'Extend Table',
    'TT', 'Temp Table',
    locks.type) lock_type,
    vs.row_wait_obj# row_wait_obj#,
    vs.row_wait_file# row_wait_file,
    vs.row_wait_block# row_wait_block#,
    vs.row_wait_row# row_wait_row#,
    dbms_rowid.rowid_create ( 1, vs.ROW_WAIT_OBJ#, vs.ROW_WAIT_FILE#, vs.ROW_WAIT_BLOCK#, vs.ROW_WAIT_ROW# ) rowid_created,
    objs.owner object_owner,
    objs.object_name object_name,
    objs.object_type object_type,
    round( locks.ctime/60, 2 ) lock_time_in_minutes
    from gv$session vs,
    gv$lock locks,
    dba_objects objs,
    dba_tables tbls
    where locks.id1 = objs.object_id
    and vs.sid = locks.sid
    and vs.inst_id = locks.inst_id -- in RAC the same SID exists on every instance, so join on inst_id too
    and objs.owner = tbls.owner
    and objs.object_name = tbls.table_name
    and objs.owner != 'SYS'
    -- and locks.type in ('TM', 'TX')
    order by lock_time_in_minutes;

    Firstly, read this thread
    Identifying locked rows
    And the last bit from Randolf
    >
    It is a common misconception that you can locate a locked row in Oracle via a query. The point is that the information that you're querying only gets populated in case of a blocking lock, and even then not in every case, since you might have blocking locks that do not refer to a particular row.
    Oracle stores the lock information within the block, so if you identified in which block the row is located that you've attempted to lock, you could get detailed information about the row locks of that block by performing a block dump.
    Other than that, Oracle doesn't maintain this information anywhere else, and it is only externalized for blocking situations. It is a matter of design: a central lock manager in Oracle would inherently limit scalability, and the downside of not having one is that there is no central information pool where you could obtain detailed information about row-level locks.
    >
    However, if you see support note "Sample Code to Select from a Table EXCLUDING Locked Rows [ID 186531.1]"
    You can take the same code from the script in that note to identify the rowids of the locked rows.
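    For reference, here is a minimal sketch of that technique (illustrative only, not the note's exact script; EMP is a stand-in table name). FOR UPDATE SKIP LOCKED returns only the rows no other session has locked, so any rowid missing from that set is locked elsewhere:
    -- Illustrative sketch, loosely based on the idea in note 186531.1.
    -- EMP is a stand-in table name.
    declare
      type t_seen is table of boolean index by varchar2(100);
      l_free t_seen;
    begin
      -- rows we can lock ourselves, i.e. rows nobody else has locked
      for r in (select rowid rid from emp for update skip locked) loop
        l_free(rowidtochar(r.rid)) := true;
      end loop;
      -- anything not in that set is currently locked by another session
      for r in (select rowid rid from emp) loop
        if not l_free.exists(rowidtochar(r.rid)) then
          dbms_output.put_line('locked row: ' || rowidtochar(r.rid));
        end if;
      end loop;
      rollback;  -- release the locks this block took with FOR UPDATE
    end;
    /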

  • Deadlock on RAC

    Hello All,
    I am using Oracle 10g RAC.
    I am facing some deadlocks in the db.
    I checked the alert logs of both instances and did not find any ORA-00060 errors.
    Is there another ORA error to search for in the alert log to check for deadlocks in RAC environments? Is there any deadlock ORA error specific to RAC?
    Regards,

    NB wrote:
    Hello All,
    I am using Oracle 10g RAC.
    I am facing some deadlocks in the db.
    I checked the alert logs of both instances and did not find any ORA-00060 errors.
    Is there another ORA error to search for in the alert log to check for deadlocks in RAC environments? Is there any deadlock ORA error specific to RAC?
    Looks like you have sessions waiting for locks, but not a deadlock situation.
    Please check the AWR report for that period, or use the OEM console, to check the wait events.
    In the AWR report, find the top wait events and diagnose the sessions causing them.
    Regards
    Rajesh
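    If it turns out to be ordinary blocking rather than a deadlock, a quick cluster-wide check like this minimal sketch can confirm it (in gv$lock, block = 2 marks a lock that may be blocking a session on another instance):
    -- Minimal sketch: current blockers and waiters on both instances.
    select inst_id, sid, type, id1, id2, lmode, request, block, ctime
    from gv$lock
    where block > 0 or request > 0
    order by id1, id2, request;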

  • [RAC] Understanding Cache Fusion in 9i Real Application Clusters

    Product: ORACLE SERVER
    Date written: 2004-08-13
    [RAC] Understanding Cache Fusion in 9i Real Application Clusters
    ==========================================================
    PURPOSE
    This document explains the features and benefits of Cache Fusion in an
    Oracle Real Application Clusters environment.
    SCOPE
    The Real Application Clusters (RAC) option is not supported by 9i
    Standard Edition.
    Explanation
    The concept of Cache Fusion was introduced in Oracle 8i OPS. It was added
    to reduce the pinging of blocks through disk when a read-consistent view
    was needed of a block mastered by a remote instance. It dramatically
    reduced the time needed to select data locked by another instance.
    Instead of forcing the locking instance to write its changes to disk
    (forcing I/O) so that the remote instance could read them, Cache Fusion
    creates a copy of the buffer and ships it over the high-speed interconnect
    to the side querying the data. This relieved the performance problems of
    read/write contention, but when the block's contents had to be modified,
    the same ping mechanism as before was still used: the remote instance had
    to write the block before it could be read and used.
    In 9i Real Application Clusters, Cache Fusion was designed to also resolve
    write/write contention, improving performance and scalability. With RAC
    Cache Fusion, Oracle reduced I/O further by conceptually eliminating disk
    'pings' for blocks locked by another instance. Instead, a RAC instance can
    grant a remote instance write access to a copy of a dirty buffer. This
    applies only to releasable locking with 1:1 dba coverage (available when
    gc_files_to_locks is set to 0 for a data file, or when gc_files_to_locks
    is not set at all). Hashed locks use the ping mechanism as before, and
    fixed locks are no longer available in 9i RAC (they offer no benefit).
    As mentioned above, RAC Cache Fusion allows instances to use copies of a
    dirty buffer. A new concept in RAC Cache Fusion is the past image: a prior
    copy of a buffer that has not yet been written to disk. To track past
    images, Oracle uses global and local lock roles and BWRs (block written
    redo). To make the global and local lock roles clear, think of Oracle 8i
    locks as local locks. In Oracle 8i there are three lock modes:
    N - Null
    S - Shared
    X - Exclusive
    When describing Oracle 9i RAC lock modes, three characters identify the
    lock. The first character is the lock mode, the second is the lock role,
    and the third (a digit) indicates whether the local instance holds a past
    image. Lock states can therefore be written as follows:
    NL0 - Null Local 0 - same as N in 8i (no past image)
    SL0 - Shared Local 0 - same as S in 8i (no past image)
    XL0 - Exclusive Local 0 - same as X in 8i (no past image)
    SG0 - Shared Global 0 - global S lock, instance holds the current block image
    XG0 - Exclusive Global 0 - global X lock, instance holds the current block image
    NG1 - Null Global 1 - global N lock, instance holds a past image
    SG1 - Shared Global 1 - global S lock, instance holds a past image
    XG1 - Exclusive Global 1 - global X lock, instance holds a past image
    When a lock is first acquired, it is acquired with the local role. If
    dirty buffers exist on a remote instance at the time the lock is acquired,
    it is acquired with the global role, and in that case all remote instances
    treat their dirty buffers as 'past images' of the buffer. For recovery
    purposes, an instance holding a past image keeps it in its buffer cache
    until the master instance holding the lock announces that the lock has
    been released. When the block is discarded from the buffer, the instance
    holding the past image writes a BWR, or 'block written redo', to its redo
    stream, indicating that the block has already been written to disk and is
    not needed to recover this instance.
    Assume node 3 of a RAC cluster holds lock element 123, which covers blocks
    of the EMP table:
    User C, connected to instance 3, selects from the EMP table, opening an
    SL0 lock:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held: | | Lock Held: |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | | | SL0 |
    | | | | | |
    Acquiring a shared lock has no effect on the lock role. So if user B,
    connected to instance 2, selects from the same EMP table, the lock mode on
    lock element 123 on instance 2 is the same (it is an S lock), and the lock
    role is also the same, because nothing in the buffer has been dirtied:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | SL0 | | SL0 |
    | | | | | |
    Acquiring the first exclusive lock does not affect the lock role either,
    because in this case too there is no dirty buffer for the lock element. So
    if user B updates a row of EMP covered by lock element 123, an XL0 lock is
    acquired and the existing SL0 lock is released. This is the same behavior
    as 8i OPS:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | XL0 | | |
    | | | | | |
    Acquiring an exclusive lock on a lock element that has a dirty buffer on a
    remote node takes Cache Fusion into its second phase. When user A,
    connected to instance 1, updates a row of the EMP table covered by lock
    element 123, the block is dirty in instance 2's buffer cache, so instance
    2 creates a copy of the block and ships it to instance 1, after which
    instance 2 holds a null global lock (past image)*. At the same time,
    instance 1 acquires an exclusive, globally dirtied lock, and instance 2
    keeps the past image.
    * An instance keeps its own past image of the block because the block has
    not been written to disk, and the image must not be discarded until the
    master node says so.
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | XG0 | | NG1 | | |
    | | | | | |
    Now suppose user C, connected to instance 3, wants to select from the EMP
    table, which belongs to lock element 123. When user C runs the select,
    instance 1's lock is downgraded to an S lock and it keeps the most recent
    copy of the buffer, while instance 2 keeps a past image of the earlier
    buffer:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | SG1 | | NG1 | | SG0 |
    | | | | | |
    Now suppose user B on instance 2 selects from EMP lock element 123.
    Instance 2 requests a read-consistent copy of the buffer from the other
    instances. The instance that will ship the buffer contents to instance 2
    is chosen in the following order:
    1. The master instance for the lock.
    2. The instance holding the most recent past image under an S lock.
    3. An instance holding the lock in shared local state.
    4. The instance most recently granted an S lock.
    Assume instance 1 is the master instance for the lock (and holds the most
    recent past image). Instance 2 then receives a copy of the block from
    instance 1's buffer cache and acquires an SG1 lock (a read-consistent copy
    with a past image). The other nodes stay in the same state:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | SG1 | | SG1 | | SG0 |
    | | | | | |
    Now suppose user C, connected to instance 3, wants to update a row of the
    table in lock element 123. User C requests an exclusive lock, and the
    locks held by instances 1 and 2 are downgraded. As a result, instance 3
    holds an XG0 lock and instances 1 and 2 each hold NG1:
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | NG1 | | NG1 | | XG0 |
    | | | | | |
    If a checkpoint occurs on instance 3, all dirty buffers are written to
    disk. Instance 3 then informs the other nodes, via the master node, that
    the write has taken place. Instances 1 and 2 can then discard their past
    images, and instance 3 alone holds an XL0 lock. Note that instances 1 and
    2 write BWRs (block written redos) to their respective redo streams,
    recording that the block has already been written to disk and is not
    needed for instance recovery.
    | Instance 1 | | Instance 2 | | Instance 3 |
    | | | | | |
    | Lock Held | | Lock Held | | Lock Held |
    | on LENUM 123: | | on LENUM 123: | | on LENUM 123: |
    | | | | | XL0 |
    | | | | | |
    Example
    Reference Documents
    Note 139436.1 - Understanding 9i Real Application Clusters Cache Fusion
    Note 144152.1 - Understanding 9i Real Application Clusters Cache Fusion Recovery
    Note 139435.1 - Fast Reconfiguration in 9i Real Application Clusters

    Hi, how are you?
    Well, Oracle8i greatly improved scalability for read/write applications through the introduction of Cache Fusion. Oracle9i improved Cache Fusion for write/write applications by further minimizing much of the disk write activity used to control data locking.
    That's it.
    If you still have doubts, please call me.
    Regina Vidal
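    The write/write behavior the note describes surfaces as global cache wait events. A minimal sketch for watching them, assuming 10g+ event names (9i used the older 'global cache ...' names):
    -- Minimal sketch: cumulative global cache waits per instance.
    select inst_id, event, total_waits, time_waited
    from gv$system_event
    where event like 'gc %' or event like 'global cache%'
    order by time_waited desc;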

  • XA overhead in call to prepare, taking up to 1000 ms

    Hello everyone.
    In a particular use case in our load-test environment (similar to production), where customer data is updated via a SOAP call from WebLogic 10.3 (JDBC driver 11.2.0.2.0) against a two-node 11gR2 RAC cluster (which leads to a lot of SQL queries, including DML, and a JMS message), we experience execution times for oracle.jdbc.xa.client.OracleXAResource.prepare(Xid) (which is called once at the end of the service call) that are far from acceptable: about 300-1000 ms.
    We measured the execution times with java profilers (dynaTrace, MissionControl). To ensure these values are valid we put the ojdbc6_g.jar in place and saw the long times in the logs.
    Example:
    <record>
    <date>2011-07-27T16:48:45</date>
    <millis>1311785325858</millis>
    <sequence>7265</sequence>
    <logger>oracle.jdbc.xa.client</logger>
    <level>FINE</level>
    <class>oracle.jdbc.xa.client.OracleXAResource</class>
    <method>prepare</method>
    <thread>11</thread>
    <message>41B70007 Exit [354.443ms]</message>
    </record>
    We took a TCP dump in order to see what is being sent to the database, but couldn't decode what exactly is being transferred via the NET8 protocol.
    From what I've read (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/xadistra.htm) the thin driver should be using the native XA by default so this should not be a reason for the poor performance.
    We have many other services that do similar DML but don't show this behavior, so it must be something specific.
    From the profiling and TCP dumping we are pretty sure the time is being spent on the DB side.
    This assumption was strengthened by the odd fact that this Monday, after no usage of the system over the weekend, the overhead suddenly just disappeared! The execution times were as low as one would expect (~5-10 ms). We saw that an out-of-memory error (ORA-4030) occurred on Saturday, which is still being investigated by the provider.
    I suspected that the long prepare times would come back after some load, so I initiated a load test which executes these use cases and simulates a real-life scenario. After 1 or 2 hours, this was the case. Now we are in the same situation as before. Again it is reproducible with single calls and no other load on the DB. I imagined there might have been restarts of the instances or something similar in order to recover from the ORA-4030, so I initiated restarts of all instances, but without success.
    This is where we are right now. The experience so far leads imho to the following conclusions/assumptions:
    1. The time is being spent on the DB (maybe partly somewhere in the network)
    2. We are most probably experiencing erroneous behavior, because we had a situation where the issue did not occur, but we don't know why (yet)
    3. Maybe it was by accidental circumstances on Monday that the problem disappeared, and it had nothing to do with our load test later on that it is back now (the physical hardware (DB server and storage) is shared, but we see no contention on CPU, RAM or I/O)
    4. JMS should not be the issue because we see a dedicated prepare call which is fast and it's handled locally on the AppServer
    The big question is, how can we pin down where exactly on the DB the time is being spent? Is there a way to find out how long each participating RM takes in order to handle the prepare-call?
    Any help would be greatly appreciated, these execution times can threaten our SLA.
    Kind regards,
    Thomas
    PS: We've opened an SR as well, but there has not been a lot of useful information so far. This statement is not very promising: "There is no specific mechanism to find out why the prepare state takes time."

    Hi Thomas, you can do some tests before recommending enabling XA at RAC level.
    (Please check whether the JDBC driver needs access to the PL/SQL XA procedures, or whether it just uses the native XA API of Oracle 11.)
    Check that you are using the JDBC driver for 11g.
    - As a simple test of response time, do a shutdown abort on one node and check the response time on the other node.
    - After that test, shut down and start both databases to get a clean scenario and run some tests. If you feel the system going slow, check the locks at RAC level: if you see the same SID locking the same object on both nodes, you need to run the XA scripts on your database; if not, keep looking. If you don't have the script to check locks at RAC level, just let me know and I can publish the scripts for you. On 10g RAC I ran the XA scripts all the time, because some clients need the PL/SQL XA API, e.g. COM or .NET on Windows 2003 or Windows 2008.
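    One more cheap check worth adding to those tests: in-doubt distributed/XA transactions left behind by failed prepare or commit phases are visible in the dictionary. A minimal sketch:
    -- Minimal sketch: look for in-doubt (pending) XA transactions.
    select local_tran_id, global_tran_id, state, fail_time
    from dba_2pc_pending;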

  • Bubbling exceptions up from sub packages

    Hi. We run 2012 Std. We are trying to get our heads around the options we would have for customizing the capture of exceptions that bubble up (if that is possible) to our master package from sub-packages. This question is independent of SSIS's system logging.
    My recollection is that SSIS tells us a lot (maybe too much repetition) about exceptions. Our goal is to customize, in the master package itself, how we deal with exceptions from lower levels (perhaps in the .NET service that calls the master, if exceptions can be bubbled up that far), so that we aren't incorporating the same custom logging at every level of the SSIS call tree/structure.
    We already capture some exceptions in variables by incorporating try/catch blocks in our C# scripts, but we haven't addressed yet how those might be bubbled up. We hope that whatever we do with these can mimic what we do with exceptions thrown by other SSIS components, where there is no C# code to catch the exception.
    Can the community get us started?

    This is what I'm concluding from what I see and what the community has said thus far.
    If our goal is a tool that can answer most inquiries about what our ETL has done:
    1) Forget about dbo.sysssislog. It doesn't record exceptions. Traceability may be present, but at what cost? It is susceptible to the different approaches used by different programmers working on different packages. It's not a question of if, but rather when, it will be truncated and thus be out of sync with the supporting custom log tables. It will probably behave differently from release to release of SSIS.
    2) Keep that flat-file SSIS log around. I'm not sure what its formal name is, but we often append a datestamp to the file name we give it when running packages from the command line. As much of a challenge as it is to read that file, it seems more informative and friendly to me than dbo.sysssislog. And it kills two birds with one stone in the event that custom logging can't make a connection to the db where custom logging takes place.
    3) Get serious about SSIS component naming conventions. How informative is "For Each Loop" when trying to make heads or tails of what a log is trying to tell you?
    4) Don't allow parallel processing (neither in control flow nor data flow) until SSIS gets better at the whole variable locking/unlocking race-condition thing. Handle locking yourself. Incorporate multiple packages if both parallel processing and useful logging are important.
    5) Bake record counts, start times and end times into your logging solution. Reject counts as well, if applicable.
    6) Use pre-, post- and error handlers to do the logging, with a standard toolbox item if possible, thus leaving packages more readable. Record the parent/child component traceability info that is exposed in these handlers through system variables. Supplement the latter with exceptions caught in C# scripts. Use views and indexes for the dashboards that will need to look at this stuff from different perspectives and permissions after the fact. Prepare for repetition where errors bubble up through parent SSIS components.
    7) Get an answer on GUIDs when SSIS components are copied from place to place. If a new GUID isn't created, find a way (I think there is a tool for this) to generate a new GUID for the new component. I think a component GUID column would be germane to a custom logging tool. Similarly, watch out for changing a component's GUID once logging has begun.

  • RMAN, RAC, NFS, and server lock ups

    Good day. My environment is:
    --a 2-node RAC
    --Enterprise Edition 11.2.0.3
    --RHEL 5.1
    The goal is to use RMAN to push backups to a shared NFS mount (on a different server). Both nodes will have access to this location (in the event one node goes down, the other can still run backups). Easy, right?
    Wrong.
    I've tried every NFS mount option in the book. Most work just fine, some don't. When I use the recommended NFS mount options:
    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp, vers=3,timeo=600, actimeo=0
    or
    rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio, vers=3,suid
    The mount works normally. I can "ls" and "mkdir" and "touch" and "vi" and "cp" files back and forth from the NFS backup location to the RAC node all day long. No problems. However, when I try to do almost anything in RMAN which requires writing to the NFS backup location, such as the command "backup archivelog all delete input;" (or even something as simple as a Crosscheck or an RMAN configuration change, which writes changes back to the ControlFile autobackup), the node locks up. There are no errors (or if there are, I don't know where to find them), even when I use an RMAN log.
    Just to recap: I run a Crosscheck (or any RMAN process that writes to the NFS backup location), the node locks up, and I can let it sit for a day, inaccessible, with CRSCTL on the other node saying it's offline, and it never comes out of its "frozen" state. It cannot be pinged or connected to.
    I think I can safely rule out NFS mount options at this point.
    I understand (after extensive reading of MOS docs and testing) that RMAN in RAC can and does suffer from inefficient I/O when writing to an NFS mount. I don't think that's the culprit either. The ControlFile autobackup is not that big, and I cannot see how running a simple Crosscheck would lock up an entire node.
    I am hoping someone has encountered this in the past; hopefully it's just a simple misconfiguration somewhere.

    My NFS line in /etc/fstab is (these options are for supporting 11.2.0.3, 11.1.0.7, and 10.2.0.4/5 simultaneously): server.domain:/NFS_Export /backup nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
    Before you installed GI, did you by chance do a yum update? I've encountered a similar issue which ended up being due to mkinitrd creating a corrupted kernel image; mkinitrd is invoked during the GI installation when the ADVM drivers are added, and in my case mkinitrd created a new kernel image prior to the new kernel being installed. Second to that, make sure your kernel headers match your kernel version. If they are different, you could probably get away with just creating a new kernel image with mkinitrd and relinking the GI/RDBMS homes, but be prepared to wipe GI and reinstall.

  • High enq: TX - row lock contention on RAC database

    Hi Gurus,
    I have SAP applications running on 5 Oracle 10g (10.2.0.5) RAC nodes.
    I could observe high row lock contention in the database.
    Event                          Waits           Time           Avg Wait  % DB Time
    db file sequential read        13.555.789.712  7.148.542.630  5.27      65
    enq: TX - row lock contention  45.685.386      1.622.457.531  355.14    15
    CPU                            0               1.123.793.901            10
    gc buffer busy                 969.769.720     365.874.242    3.77      3
    gc cr grant 2-way              7.565.517.708   161.443.528    .21       1
    log file sync                  244.392.565     155.406.980    6.36      1
    gc current block busy          86.643.267      139.935.394    16.15     1
    db file parallel read          80.779.109      124.238.490    15.38     1
    gc current block 3-way         2.412.777.861   98.748.193     .41       1
    read by other session          227.935.152     95.543.751     4.19      1
    I can see one or two update/insert statements in this state.
    I would appreciate your help on how to proceed with analyzing and finding the problematic SQL statements.
    Though there is no performance issue at the moment, I would like to investigate this proactively.
    Database parameters are set consistently with the latest patches for 10.2.0.5.
    Br,
    Venky

    If you are licensed for diagnostic pack, look at the ASH data in v$active_session_history and dba_hist_active_sess_history.
    Using the p1/p2/p3 columns and the blocking information, you should be able to see what the sessions were waiting on and what SQL was being run by the waiting sessions.
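    For example, a minimal ASH sketch along those lines (Diagnostics Pack license required; column names as of 10.2):
    -- Minimal sketch: which SQL and which blockers sit behind the
    -- 'enq: TX - row lock contention' samples.
    select inst_id, sql_id, blocking_session, count(*) samples
    from gv$active_session_history
    where event = 'enq: TX - row lock contention'
    group by inst_id, sql_id, blocking_session
    order by samples desc;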

  • What are deadlocks in Oracle, and what is a race condition with respect to Oracle?

    What are deadlocks in Oracle, and what is a race condition with respect to Oracle?

    > And do you know what a race condition is all about?
    It is a term used to indicate several processes attempting to use the same resource, where that resource is not capable of servicing all of them at the same time. This could be due to the resource not being thread safe, or being implemented as a serialised resource.
    It is often easy to look up definitions on Google. In the Google search field, type "define:race condition".
    The following web definitions page is displayed: http://www.google.co.za/search?hl=en&q=define%3Arace+condition&btnG=Google+Search&meta=

  • Control file lock time in rac

    Hi all,
    Database version 11.2.0.2
    Os version Red hat 5.3
    Two node RAC
    What is the default control file lock time in a RAC environment? I am getting an error after 120 seconds.
    The default lock time in a single instance is 900 seconds.
    Is this time lowered to 120 seconds in RAC?
    error in alert log file
    Errors in file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_arc2_27691.trc (incident=15905):
    ORA-00240: control file enqueue held for more than 120 seconds
    After hitting this error, I have observed that the other node evicts this node.

    The link above states that it is a bug in previous versions, but I am on the recent version 11.2.0.2.5, in which this bug is supposed to be resolved.
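    When ORA-00240 fires, it can help to see who holds or wants the control file (CF) enqueue at that moment. A minimal sketch, cluster-wide:
    -- Minimal sketch: holders of and waiters for the CF enqueue.
    select inst_id, sid, type, lmode, request, ctime, block
    from gv$lock
    where type = 'CF'
    order by inst_id, sid;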

  • Locks after migrating to a 2-node RAC

    After migrating a standalone database to a two-node RAC (11.2.0.4), we are facing locks in the database.
    It always occurs on the same tables. I asked the developers if a commit was issued after the update command, and they confirmed a commit is issued after the update statement.
    Is there something we have to configure on the database side to eliminate these locks?
    We have just 3 active sessions in this database, so there is no highly concurrent DML (initrans parameter).
    I'd appreciate any comment.
    Thanks
    KZ.

    After migrating a standalone database to a two-node RAC (11.2.0.4), we are facing locks in the database.
    It always occurs on the same tables. I asked the developers if a commit was issued after the update command, and they confirmed a commit is issued after the update statement.
    Is there something we have to configure on the database side to eliminate these locks?
    We have just 3 active sessions in this database, so there is no highly concurrent DML (initrans parameter)
    Unless you can show what this question has to do with Oracle 12c and multitenancy, please mark it ANSWERED and repost it in the General Database forum.
    General Database Discussions
    When you repost, you need to SHOW us what you did, not just tell us. Post the actual commands executed.
    What do you mean by 'migrating'? What 'same tables' are you talking about? What 'after the update command' are you talking about?

  • How does locking take place in an Oracle RAC environment?

    How does locking take place in an Oracle RAC environment?
    Suppose a user is updating rows from one session while the same rows are being selected from another session: how does locking work in Oracle RAC?

    user11936985 wrote:
    How does locking take place in an Oracle RAC environment?
    Suppose a user is updating rows from one session while the same rows are being selected from another session: how does locking work in Oracle RAC?
    In the case of one session updating and the other selecting, there is no locking issue, regardless of whether it's single instance or RAC.
    The update will take the appropriate table-level (TM) and row-level (TX) locks, but the select will not take any locks (unless it's a select for update), so there should be no problem.
    Oracle will use read consistency to guarantee that the selected results are self-consistent and consistent with the point in time of the start of the query.
    Hope that helps,
    -Mark
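    A two-session illustration of Mark's point (SCOTT's EMP table used as a stand-in; run each part in a separate session, on the same or different instances):
    -- Session 1 (either instance): takes TM (table) and TX (transaction)
    -- locks; the row stays locked until commit or rollback.
    update emp set sal = sal * 1.1 where empno = 7369;
    -- Session 2 (same or other instance): takes no lock and does not wait;
    -- read consistency returns the pre-update value of SAL.
    select sal from emp where empno = 7369;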

  • Oracle RAC 9i LMD library cache lock top wait event

    We are experiencing library cache lock as our top wait event. Even though the box is currently idle, the Global Enqueue Service Daemon (LMD) is taking up CPU cycles. The background process is also logging to trace: "skgxpdocon: warning outstanding accept handle count has reached new high water mark 245000".
    Any help would be appreciated.
    Thanks

    There is a new patch for this - check out p4673610 on Metalink. We have also experienced the problem in 9.2.0.8.
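    If it recurs before the patch is applied, a minimal sketch for spotting the waiters (for 'library cache lock', p1 carries the library cache handle address):
    -- Minimal sketch: sessions currently waiting on 'library cache lock'.
    select inst_id, sid, event, p1, seconds_in_wait
    from gv$session_wait
    where event = 'library cache lock';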

  • Failed root.sh on 1st Node of 11.2.0.2.0 RAC on HP-UX 11.31 Itanium 64

    Started with 11.2.0.2.0 Grid Installation for 2 Node RAC on HP-UX 11.31 Itanium 64.
    Copying the software to the remote node & linking libraries completed successfully without any issue (up to 76%), but I got an issue while executing root.sh on Node 1:
    sph1erp:/oracle/11.2.0/grid #sh root.sh
    Running Oracle 11g root script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/11.2.0/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /oracle/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'sys'..
    Operation successful.
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding daemon to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'sph1erp'
    CRS-2676: Start of 'ora.mdnsd' on 'sph1erp' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'sph1erp'
    CRS-2676: Start of 'ora.gpnpd' on 'sph1erp' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'sph1erp'
    CRS-2672: Attempting to start 'ora.gipcd' on 'sph1erp'
    CRS-2676: Start of 'ora.gipcd' on 'sph1erp' succeeded
    CRS-2676: Start of 'ora.cssdmonitor' on 'sph1erp' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'sph1erp'
    CRS-2672: Attempting to start 'ora.diskmon' on 'sph1erp'
    CRS-2676: Start of 'ora.diskmon' on 'sph1erp' succeeded
    CRS-2676: Start of 'ora.cssd' on 'sph1erp' succeeded
    ASM created and started successfully.
    Disk Group OCRVOTE created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'sys'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk ab847ed2b4f04f2dbfb875226d2bb194.
    Successful addition of voting disk 85c05a5b30384f8dbff48cc069de7a7c.
    Successful addition of voting disk 649196fbdd614f9cbf26a9a0e6670a6e.
    Successful addition of voting disk 8815dfcee2e64f64bf00b9c76626ab41.
    Successful addition of voting disk 8ce55fe5534f4f77bfa9f54187592707.
    Successfully replaced voting disk group with +OCRVOTE.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE ab847ed2b4f04f2dbfb875226d2bb194 (/dev/oracle/ocrvote1) [OCRVOTE]
    2. ONLINE 85c05a5b30384f8dbff48cc069de7a7c (/dev/oracle/ocrvote2) [OCRVOTE]
    3. ONLINE 649196fbdd614f9cbf26a9a0e6670a6e (/dev/oracle/ocrvote3) [OCRVOTE]
    4. ONLINE 8815dfcee2e64f64bf00b9c76626ab41 (/dev/oracle/ocrvote4) [OCRVOTE]
    5. ONLINE 8ce55fe5534f4f77bfa9f54187592707 (/dev/oracle/ocrvote5) [OCRVOTE]
    Located 5 voting disk(s).
    Start of resource "ora.cluster_interconnect.haip" failed
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'sph1erp'
    CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
    Start action for HAIP aborted
    CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'sph1erp' failed
    CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'sph1erp'
    CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'sph1erp' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Clusterware stack
    Failed to start High Availability IP at /oracle/11.2.0/grid/crs/install/crsconfig_lib.pm line 1046.
    /oracle/11.2.0/grid/perl/bin/perl -I/oracle/11.2.0/grid/perl/lib -I/oracle/11.2.0/grid/crs/install /oracle/11.2.0/grid/crs/install/rootcrs.pl execution failed
    sph1erp:/oracle/11.2.0/grid #
    Last few lines from the CRS log for node 1, where the error appeared:
    [ctssd(6467)]CRS-2401:The Cluster Time Synchronization Service started on host sph1erp.
    2011-02-25 23:04:16.491
    [oracle/11.2.0/grid/bin/orarootagent.bin(6423)]CRS-5818:Aborted command 'start for resource: ora.cluster_interconnect.haip 1 1' for resource 'ora.cluster_int
    erconnect.haip'. Details at (:CRSAGF00113:) {0:0:178} in /oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log.
    2011-02-25 23:04:20.521
    [ohasd(5513)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.cluster_interconnect.haip'. Details at (:CRSPE00111:) {0:0:178} in
    /oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log.
    A few lines from /oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log:
    =====================================================================================================
    2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Starting Probe for ip 169.254.74.54
    2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Transitioning to Probe State
    2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} [NetHAMain] thread stopping
    2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} Thread:[NetHAMain]isRunning is reset to false here
    2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAMain]stop }
    2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} thread cleaning up
    2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} pausing thread
    2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} posting thread
    2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
    2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} [NetHAWork] thread stopping
    2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} Thread:[NetHAWork]isRunning is reset to false here
    2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
    2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
    2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
    2011-02-25 23:04:17.891: [ora.cluster_interconnect.haip][12] {0:0:178} [start] Start of HAIP aborted
    2011-02-25 23:04:17.892: [ AGENT][12] {0:0:178} UserErrorException: Locale is
    2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsnUtils::error Exception type=2 string=
    CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
    Start action for HAIP aborted
    2011-02-25 23:04:17.893: [ AGFW][12] {0:0:178} sending status msg [CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the foll
    owing error:
    Start action for HAIP aborted
    ] for start for resource: ora.cluster_interconnect.haip 1 1
    2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsn_agent::start }
    2011-02-25 23:04:17.894: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
    2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check {
    2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check } - 0
    2011-02-25 23:04:19.573: [ AGFW][10] {0:0:154} Agent received the message: AGENT_HB[Engine] ID 12293:669
    2011-02-25 23:04:20.510: [ora.cluster_interconnect.haip][18] {0:0:178} [start] got lock
    2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] tryActionLock }
    2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] abort }
    2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] clsn_agent::abort }
    2011-02-25 23:04:20.511: [ AGFW][18] {0:0:178} Command: start for resource: ora.cluster_interconnect.haip 1 1 completed with status: TIMEDOUT
    2011-02-25 23:04:20.512: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init enter {
    2011-02-25 23:04:20.513: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init exit }
    2011-02-25 23:04:20.517: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
    2011-02-25 23:04:20.519: [ USRTHRD][8] {0:0:178} Ocr Context init default level 23886304
    2011-02-25 23:04:20.519: [ default][8]clsvactversion:4: Retrieving Active Version from local storage.
    [ CLWAL][8]clsw_Initialize: OLR initlevel [70000]
    A few lines from /oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log:
    =====================================================================================================
    2011-02-25 23:04:21.627: [UiServer][30] {0:0:180} Done for ctx=6000000002604ce0
    2011-02-25 23:04:21.642: [UiServer][31] Closed: remote end failed/disc.
    2011-02-25 23:04:26.139: [ CLSINET][33]Returning NETDATA: 1 interfaces
    2011-02-25 23:04:26.139: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
    erconnect'
    2011-02-25 23:04:26.973: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
    2011-02-25 23:04:26.973: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
    DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
    2011-02-25 23:04:26.992: [UiServer][30] {0:0:181} processMessage called
    2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending message to PE. ctx= 6000000001b440f0
    2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending command to PE: 67
    2011-02-25 23:04:26.994: [ CRSPE][29] {0:0:181} Processing PE command id=173. Description: [Stat Resource : 600000000135f760]
    2011-02-25 23:04:26.997: [UiServer][30] {0:0:181} Done for ctx=6000000001b440f0
    2011-02-25 23:04:27.012: [UiServer][31] Closed: remote end failed/disc.
    2011-02-25 23:04:31.135: [ CLSINET][33]Returning NETDATA: 1 interfaces
    2011-02-25 23:04:31.135: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
    erconnect'
    2011-02-25 23:04:32.318: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
    2011-02-25 23:04:32.318: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
    DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
    2011-02-25 23:04:32.332: [UiServer][30] {0:0:182} processMessage called
    2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending message to PE. ctx= 6000000001b45ef0
    2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending command to PE: 68
    2011-02-25 23:04:32.334: [ CRSPE][29] {0:0:182} Processing PE command id=174. Description: [Stat Resource : 600000000135f760]
    2011-02-25 23:04:32.338: [UiServer][30] {0:0:182} Done for ctx=6000000001b45ef0
    2011-02-25 23:04:32.352: [UiServer][31] Closed: remote end failed/disc.
    2011-02-25 23:04:36.155: [ CLSINET][33]Returning NETDATA: 1 interfaces
    2011-02-25 23:04:36.155: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
    erconnect'
    2011-02-25 23:04:37.683: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
    2011-02-25 23:04:37.683: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
    DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
    2011-02-25 23:04:37.702: [UiServer][30] {0:0:183} processMessage called
    2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending message to PE. ctx= 6000000002604ce0
    2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending command to PE: 69
    2011-02-25 23:04:37.704: [ CRSPE][29] {0:0:183} Processing PE command id=175. Description: [Stat Resource : 600000000135f760]
    2011-02-25 23:04:37.708: [UiServer][30] {0:0:183} Done for ctx=6000000002604ce0
    2011-02-25 23:04:37.722: [UiServer][31] Closed: remote end failed/disc.
    2011-02-25 23:04:41.156: [ CLSINET][33]Returning NETDATA: 1 interfaces
    2011-02-25 23:04:41.156: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
    erconnect'
    What could be the issue?
    Experts, please help; I am doing the setup for the production environment, so a quick response would be much appreciated. Thanks.
    Regards,
    Manish

    Thanks Sebastian for your input.
    Yes, my lan2 is used for the cluster_interconnect and has subnet mask 255.255.255.240.
    Below are IPs used for RAC
    Public
    Node1: 10.10.1.173/255.255.240.0
    Node2: 10.10.1.174/255.255.240.0
    Private
    Node1: 10.10.16.50/255.255.255.240
    Node2: 10.10.16.51/255.255.255.240
    Virtual
    Node1: 10.10.1.191/255.255.240.0
    Node2: 10.10.1.192/255.255.240.0
    SCAN (Defined in DNS)
    10.10.1.193/255.255.240.0
    10.10.1.194/255.255.240.0
    10.10.1.195/255.255.240.0
    As you said, I will scrap the GI software again and will try with 255.255.255.0.
    I believe this redundant interconnect and ora.cluster_interconnect.haip are present in version 11.2.0.2.0.
    Oracle says:
    Redundant Interconnect without any 3rd-party IP failover technology (bond, IPMP or similar) is supported natively by Grid Infrastructure starting from 11.2.0.2. Multiple private network adapters can be defined either during the installation phase or afterward using the oifcfg. Oracle Database, CSS, OCR, CRS, CTSS, and EVM components in 11.2.0.2 employ it automatically.
    Grid Infrastructure can activate a maximum of four private network adapters at a time even if more are defined. The ora.cluster_interconnect.haip resource will start one to four link local HAIP on private network adapters for interconnect communication for Oracle RAC, Oracle ASM, and Oracle ACFS etc.
    Grid automatically picks link local addresses from the reserved 169.254.*.* subnet for HAIP, and it will not attempt to use any 169.254.*.* address if it's already in use for another purpose. With HAIP, by default, interconnect traffic will be load balanced across all active interconnect interfaces, and the corresponding HAIP address will be failed over transparently to other adapters if one fails or becomes non-communicative.
    The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster . If there's only one active private network, Grid will create one; if two, Grid will create two; and if more than two, Grid will create four HAIPs. The number of HAIPs won't change even if more private network adapters are activated later, a restart of clusterware on all nodes is required for new adapters to become effective.
    In my setup I have NIC teaming for the public & private interfaces. So I am thinking of breaking the NIC teaming, because HAIP internally searches for the next available NIC and does not find one, as all 4 are already in use by the OS-level NIC teaming.
    My only concern is: since I am going to change the subnet for the private IPs, should I also change the private IP addresses?
    Thanks for the Support...
    Regards,
    Manish

  • Active session Spike on Oracle RAC 11G R2 on HP UX

    Dear Experts,
    We need urgent help please, as we are facing very low performance in the production database.
    We have Oracle 11g RAC in an HP-UX environment. Below is the ADDM report. Please help us figure out the issue and resolve it as soon as possible.
    ---------Instance 1---------------
              ADDM Report for Task 'TASK_36650'
    Analysis Period
    AWR snapshot range from 11634 to 11636.
    Time period starts at 21-JUL-13 07.00.03 PM
    Time period ends at 21-JUL-13 09.00.49 PM
    Analysis Target
    Database 'MCMSDRAC' with DB ID 2894940361.
    Database version 11.2.0.1.0.
    ADDM performed an analysis of instance mcmsdrac1, numbered 1 and hosted at
    mcmsdbl1.
    Activity During the Analysis Period
    Total database time was 38466 seconds.
    The average number of active sessions was 5.31.
    Summary of Findings
       Description           Active Sessions      Recommendations
                             Percent of Activity  
    1  CPU Usage             1.44 | 27.08         1
    2  Interconnect Latency  .07 | 1.33           1
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Findings and Recommendations
    Finding 1: CPU Usage
    Impact is 1.44 active sessions, 27.08% of total activity.
    Host CPU was a bottleneck and the instance was consuming 99% of the host CPU.
    All wait times will be inflated by wait for CPU.
    Host CPU consumption was 99%.
       Recommendation 1: Host Configuration
       Estimated benefit is 1.44 active sessions, 27.08% of total activity.
       Action
          Consider adding more CPUs to the host or adding instances serving the
          database on other hosts.
       Action
          Session CPU consumption was throttled by the Oracle Resource Manager.
          Consider revising the resource plan that was active during the analysis
          period.
    Finding 2: Interconnect Latency
    Impact is .07 active sessions, 1.33% of total activity.
    Higher than expected latency of the cluster interconnect was responsible for
    significant database time on this instance.
    The instance was consuming 110 kilo bits per second of interconnect bandwidth.
    20% of this interconnect bandwidth was used for global cache messaging, 21%
    for parallel query messaging and 7% for database lock management.
    The average latency for 8K interconnect messages was 42153 microseconds.
    The instance is using the private interconnect device "lan2" with IP address
    172.16.200.71 and source "Oracle Cluster Repository".
    The device "lan2" was used for 100% of interconnect traffic and experienced 0
    send or receive errors during the analysis period.
       Recommendation 1: Host Configuration
       Estimated benefit is .07 active sessions, 1.33% of total activity.
       Action
          Investigate cause of high network interconnect latency between database
          instances. Oracle's recommended solution is to use a high speed
          dedicated network.
       Action
          Check the configuration of the cluster interconnect. Check OS setup like
          adapter setting, firmware and driver release. Check that the OS's socket
          receive buffers are large enough to store an entire multiblock read. The
          value of parameter "db_file_multiblock_read_count" may be decreased as a
          workaround.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Cluster" was not consuming significant database time.
    Wait class "Commit" was not consuming significant database time.
    Wait class "Concurrency" was not consuming significant database time.
    Wait class "Configuration" was not consuming significant database time.
    Wait class "Network" was not consuming significant database time.
    Wait class "User I/O" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    Hard parsing of SQL statements was not consuming significant database time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    ----------------Instance 2 --------------------
              ADDM Report for Task 'TASK_36652'
    Analysis Period
    AWR snapshot range from 11634 to 11636.
    Time period starts at 21-JUL-13 07.00.03 PM
    Time period ends at 21-JUL-13 09.00.49 PM
    Analysis Target
    Database 'MCMSDRAC' with DB ID 2894940361.
    Database version 11.2.0.1.0.
    ADDM performed an analysis of instance mcmsdrac2, numbered 2 and hosted at
    mcmsdbl2.
    Activity During the Analysis Period
    Total database time was 2898 seconds.
    The average number of active sessions was .4.
    Summary of Findings
        Description                 Active Sessions      Recommendations
                                    Percent of Activity  
    1   Top SQL Statements          .11 | 27.65          5
    2   Interconnect Latency        .1 | 24.15           1
    3   Shared Pool Latches         .09 | 22.42          1
    4   PL/SQL Execution            .06 | 14.39          2
    5   Unusual "Other" Wait Event  .03 | 8.73           4
    6   Unusual "Other" Wait Event  .03 | 6.42           3
    7   Unusual "Other" Wait Event  .03 | 6.29           6
    8   Hard Parse                  .02 | 5.5            0
    9   Soft Parse                  .02 | 3.86           2
    10  Unusual "Other" Wait Event  .01 | 3.75           4
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Findings and Recommendations
    Finding 1: Top SQL Statements
    Impact is .11 active sessions, 27.65% of total activity.
    SQL statements consuming significant database time were found. These
    statements offer a good opportunity for performance improvement.
       Recommendation 1: SQL Tuning
       Estimated benefit is .05 active sessions, 12.88% of total activity.
       Action
          Investigate the PL/SQL statement with SQL_ID "d1s02myktu19h" for
          possible performance improvements. You can supplement the information
          given here with an ASH report for this SQL_ID.
          Related Object
             SQL statement with SQL_ID d1s02myktu19h.
             begin dbms_utility.validate(:1,:2,:3,:4); end;
       Rationale
          The SQL Tuning Advisor cannot operate on PL/SQL statements.
       Rationale
          Database time for this SQL was divided as follows: 13% for SQL
          execution, 2% for parsing, 85% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "d1s02myktu19h" was executed 48 times and had
          an average elapsed time of 7 seconds.
       Rationale
          Waiting for event "library cache pin" in wait class "Concurrency"
          accounted for 70% of the database time spent in processing the SQL
          statement with SQL_ID "d1s02myktu19h".
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "63wt8yna5umd6" are responsible for 100% of the database time spent on
          the PL/SQL statement with SQL_ID "d1s02myktu19h".
          Related Object
             SQL statement with SQL_ID 63wt8yna5umd6.
             begin DBMS_UTILITY.COMPILE_SCHEMA( 'TPAUSER', FALSE ); end;
       Recommendation 2: SQL Tuning
       Estimated benefit is .02 active sessions, 4.55% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "fk3bh3t41101x".
          Related Object
             SQL statement with SQL_ID fk3bh3t41101x.
             SELECT MEM.MEMBER_CODE ,MEM.E_NAME,Pol.Policy_no
             ,pol.date_from,pol.date_to,POL.E_NAME,MEM.SEX,(SYSDATE-MEM.BIRTH_DATE
             ) AGE,POL.SCHEME_NO FROM TPAUSER.MEMBERS MEM,TPAUSER.POLICY POL WHERE
             POL.QUOTATION_NO=MEM.QUOTATION_NO AND POL.BRANCH_CODE=MEM.BRANCH_CODE
             and endt_no=(select max(endt_no) from tpauser.members mm where
             mm.member_code=mem.member_code AND mm.QUOTATION_NO=MEM.QUOTATION_NO)
             and member_code like '%' || nvl(:1,null) ||'%' ORDER BY MEMBER_CODE
       Rationale
          The SQL spent 92% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "fk3bh3t41101x" was executed 14 times and had
          an average elapsed time of 4.9 seconds.
       Rationale
          At least one execution of the statement ran in parallel.
       Recommendation 3: SQL Tuning
       Estimated benefit is .02 active sessions, 3.79% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "7mhjbjg9ntqf5".
          Related Object
             SQL statement with SQL_ID 7mhjbjg9ntqf5.
             SELECT SUM(CNT) FROM (SELECT COUNT(PROC_CODE) CNT FROM
             TPAUSER.TORBINY_PROCEDURE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
             :B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND PR_EFFECTIVE_DATE<=
             :B2 AND PROC_CODE = :B1 UNION SELECT COUNT(MED_CODE) CNT FROM
             TPAUSER.TORBINY_MEDICINE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
             :B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND M_EFFECTIVE_DATE<= :B2
             AND MED_CODE = :B1 UNION SELECT COUNT(LAB_CODE) CNT FROM
             TPAUSER.TORBINY_LAB WHERE BRANCH_CODE = :B6 AND QUOTATION_NO = :B5
             AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND L_EFFECTIVE_DATE<= :B2 AND
             LAB_CODE = :B1 )
       Rationale
          The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 0% for SQL execution,
          0% for parsing, 100% for PL/SQL execution and 0% for Java execution.
       Rationale
          SQL statement with SQL_ID "7mhjbjg9ntqf5" was executed 31 times and had
          an average elapsed time of 3.4 seconds.
       Rationale
          Top level calls to execute the SELECT statement with SQL_ID
          "a11nzdnd91gsg" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "7mhjbjg9ntqf5".
          Related Object
             SQL statement with SQL_ID a11nzdnd91gsg.
             SELECT POLICY_NO,SCHEME_NO FROM TPAUSER.POLICY WHERE QUOTATION_NO
             =:B1
       Recommendation 4: SQL Tuning
       Estimated benefit is .01 active sessions, 3.03% of total activity.
       Action
          Investigate the SELECT statement with SQL_ID "4uqs4jt7aca5s" for
          possible performance improvements. You can supplement the information
          given here with an ASH report for this SQL_ID.
          Related Object
             SQL statement with SQL_ID 4uqs4jt7aca5s.
             SELECT DISTINCT USER_ID FROM GV$SESSION, USERS WHERE UPPER (USERNAME)
             = UPPER (USER_ID) AND USERS.APPROVAL_CLAIM='VC' AND USER_ID=:B1
       Rationale
          The SQL spent only 0% of its database time on CPU, I/O and Cluster
          waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
          Look at performance data for the SQL to find potential improvements.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "4uqs4jt7aca5s" was executed 261 times and had
          an average elapsed time of 0.35 seconds.
       Rationale
          At least one execution of the statement ran in parallel.
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "91vt043t78460" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "4uqs4jt7aca5s".
          Related Object
             SQL statement with SQL_ID 91vt043t78460.
              begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V00004); end;
       Recommendation 5: SQL Tuning
       Estimated benefit is .01 active sessions, 3.03% of total activity.
       Action
          Run SQL Tuning Advisor on the SELECT statement with SQL_ID
          "7kt28fkc0yn5f".
          Related Object
             SQL statement with SQL_ID 7kt28fkc0yn5f.
             SELECT COUNT(*) FROM TPAUSER.APPROVAL_MASTER WHERE APPROVAL_STATUS IS
             NULL AND (UPPER(CODED) = UPPER(:B1 ) OR UPPER(PROCESSED_BY) =
             UPPER(:B1 ))
       Rationale
          The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
          This part of database time may be improved by the SQL Tuning Advisor.
       Rationale
          Database time for this SQL was divided as follows: 100% for SQL
          execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
          execution.
       Rationale
          SQL statement with SQL_ID "7kt28fkc0yn5f" was executed 1034 times and
          had an average elapsed time of 0.063 seconds.
       Rationale
          Top level calls to execute the PL/SQL statement with SQL_ID
          "91vt043t78460" are responsible for 100% of the database time spent on
          the SELECT statement with SQL_ID "7kt28fkc0yn5f".
          Related Object
             SQL statement with SQL_ID 91vt043t78460.
              begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V00004); end;
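
For the "Run SQL Tuning Advisor" actions above (Recommendations 2, 3 and 5 name concrete SQL_IDs; Recommendation 1 is PL/SQL, which the advisor cannot tune), the advisor can be driven manually with DBMS_SQLTUNE. A minimal sketch, assuming the statement is still in the cursor cache and the session has the ADVISOR privilege; the task name is arbitrary:

    -- Create and run a tuning task for one of the SQL_IDs from the report
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id     => 'fk3bh3t41101x',
                  time_limit => 600,                   -- seconds
                  task_name  => 'tune_fk3bh3t41101x');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /

    -- Display the advisor's findings (SET LONG 1000000 first in SQL*Plus)
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_fk3bh3t41101x') FROM dual;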
    Finding 2: Interconnect Latency
    Impact is .1 active sessions, 24.15% of total activity.
    Higher than expected latency of the cluster interconnect was responsible for
    significant database time on this instance.
    The instance was consuming 128 kilobits per second of interconnect bandwidth.
    17% of this interconnect bandwidth was used for global cache messaging, 6% for
    parallel query messaging and 8% for database lock management.
    The average latency for 8K interconnect messages was 41863 microseconds.
    The instance is using the private interconnect device "lan2" with IP address
    172.16.200.72 and source "Oracle Cluster Repository".
    The device "lan2" was used for 100% of interconnect traffic and experienced 0
    send or receive errors during the analysis period.
       Recommendation 1: Host Configuration
       Estimated benefit is .1 active sessions, 24.15% of total activity.
       Action
          Investigate cause of high network interconnect latency between database
          instances. Oracle's recommended solution is to use a high speed
          dedicated network.
       Action
          Check the configuration of the cluster interconnect. Check OS setup like
          adapter setting, firmware and driver release. Check that the OS's socket
          receive buffers are large enough to store an entire multiblock read. The
          value of parameter "db_file_multiblock_read_count" may be decreased as a
          workaround.
       Symptoms That Led to the Finding:
          Inter-instance messaging was consuming significant database time on this
          instance.
          Impact is .06 active sessions, 14.23% of total activity.
             Wait class "Cluster" was consuming significant database time.
             Impact is .06 active sessions, 14.23% of total activity.
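
The interconnect details quoted above can be cross-checked from the database itself; a quick sketch against the standard GV$CLUSTER_INTERCONNECTS view (the report's "lan2" / 172.16.200.72 should appear here for this instance):

    -- Which device each instance uses for the interconnect, and where
    -- that configuration came from (OCR, init.ora, OS dependent)
    SELECT inst_id, name, ip_address, is_public, source
      FROM gv$cluster_interconnects
     ORDER BY inst_id;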
    Finding 3: Shared Pool Latches
    Impact is .09 active sessions, 22.42% of total activity.
    Contention for latches related to the shared pool was consuming significant
    database time.
    Waits for "library cache lock" amounted to 5% of database time.
    Waits for "library cache pin" amounted to 17% of database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .09 active sessions, 22.42% of total activity.
       Action
          Investigate the cause for latch contention using the given blocking
          sessions or modules.
       Rationale
          The session with ID 17 and serial number 15595 in instance number 1 was
          the blocking session responsible for 34% of this recommendation's
          benefit.
       Symptoms That Led to the Finding:
          Wait class "Concurrency" was consuming significant database time.
          Impact is .1 active sessions, 24.96% of total activity.
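
The blocking session named above (SID 17, serial number 15595 on instance 1) can be examined while the contention is occurring; a minimal sketch against GV$SESSION:

    -- What the blocker is running, plus every session currently blocked by another
    SELECT inst_id, sid, serial#, username, program, module,
           event, sql_id, blocking_instance, blocking_session
      FROM gv$session
     WHERE (inst_id = 1 AND sid = 17)
        OR blocking_session IS NOT NULL;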
    Finding 4: PL/SQL Execution
    Impact is .06 active sessions, 14.39% of total activity.
    PL/SQL execution consumed significant database time.
       Recommendation 1: SQL Tuning
       Estimated benefit is .05 active sessions, 12.5% of total activity.
       Action
          Tune the entry point PL/SQL "SYS.DBMS_UTILITY.COMPILE_SCHEMA" of type
          "PACKAGE" and ID 6019. Refer to the PL/SQL documentation for addition
          information.
       Rationale
          318 seconds spent in executing PL/SQL "SYS.DBMS_UTILITY.VALIDATE#2" of
          type "PACKAGE" and ID 6019.
       Recommendation 2: SQL Tuning
       Estimated benefit is .01 active sessions, 1.89% of total activity.
       Action
          Tune the entry point PL/SQL
          "SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS" of type "PACKAGE" and
           ID 68654. Refer to the PL/SQL documentation for additional information.
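
Findings 1, 3 and 4 all point back to the same cause: DBMS_UTILITY.COMPILE_SCHEMA('TPAUSER', FALSE) recompiles every object in the schema and provokes the library cache pin/lock waits seen above. If the goal is only to clean up invalid objects, recompiling just those is usually far cheaper. A hedged alternative using the standard UTL_RECOMP package (typically run as SYS):

    -- Recompile only INVALID objects in the schema, serially
    BEGIN
      UTL_RECOMP.RECOMP_SERIAL(schema => 'TPAUSER');
    END;
    /

    -- Verify nothing is left invalid
    SELECT object_name, object_type
      FROM dba_objects
     WHERE owner = 'TPAUSER'
       AND status = 'INVALID';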
    Finding 5: Unusual "Other" Wait Event
    Impact is .03 active sessions, 8.73% of total activity.
    Wait event "DFS lock handle" in wait class "Other" was consuming significant
    database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 8.73% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .03 active sessions, 8.27% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 5.05% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Module "TOAD
          9.7.2.5".
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 3.21% of total activity.
       Action
          Investigate the cause for high "DFS lock handle" waits in Module
          "toad.exe".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
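
For "DFS lock handle" waits, the enqueue type actually being requested is encoded in the wait's P1 value using the standard name|mode encoding. A sketch that decodes it from ASH data; note that GV$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack:

    -- Decode the two-character enqueue name and mode hidden in P1
    SELECT chr(bitand(p1, -16777216) / 16777215) ||
           chr(bitand(p1, 16711680) / 65535)  AS lock_type,
           bitand(p1, 65535)                  AS lock_mode,
           count(*)                           AS samples
      FROM gv$active_session_history
     WHERE event = 'DFS lock handle'
     GROUP BY chr(bitand(p1, -16777216) / 16777215) ||
              chr(bitand(p1, 16711680) / 65535),
              bitand(p1, 65535);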
    Finding 6: Unusual "Other" Wait Event
    Impact is .03 active sessions, 6.42% of total activity.
    Wait event "reliable message" in wait class "Other" was consuming significant
    database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 6.42% of total activity.
       Action
          Investigate the cause for high "reliable message" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .03 active sessions, 6.42% of total activity.
       Action
          Investigate the cause for high "reliable message" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 4.13% of total activity.
       Action
          Investigate the cause for high "reliable message" waits in Module "TOAD
          9.7.2.5".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    Finding 7: Unusual "Other" Wait Event
    Impact is .03 active sessions, 6.29% of total activity.
    Wait event "enq: PS - contention" in wait class "Other" was consuming
    significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .03 active sessions, 6.29% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits. Refer to
          Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .02 active sessions, 6.02% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Service
          "mcmsdrac".
       Recommendation 3: Application Analysis
       Estimated benefit is .02 active sessions, 4.93% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits with
          P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
          "3599" respectively.
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 2.74% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Module
          "Inbox Reader_92.exe".
       Recommendation 5: Application Analysis
       Estimated benefit is .01 active sessions, 2.74% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits in Module
          "TOAD 9.7.2.5".
       Recommendation 6: Application Analysis
       Estimated benefit is .01 active sessions, 1.37% of total activity.
       Action
          Investigate the cause for high "enq: PS - contention" waits with
          P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
          "3598" respectively.
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
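
"enq: PS - contention" is the parallel-slave enqueue; under the same name|mode encoding, the P1 value 1347616774 quoted above decodes to exactly 'PS' requested in mode 6, so these waits track the parallel executions already noted in Finding 1. A quick sketch of current parallel activity via GV$PX_SESSION:

    -- Query-coordinator / slave relationships for running parallel statements
    SELECT qcinst_id, qcsid, inst_id, sid, serial#, degree, req_degree
      FROM gv$px_session
     ORDER BY qcinst_id, qcsid;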
    Finding 8: Hard Parse
    Impact is .02 active sessions, 5.5% of total activity.
    Hard parsing of SQL statements was consuming significant database time.
    Hard parses due to cursor environment mismatch were not consuming significant
    database time.
    Hard parsing SQL statements that encountered parse errors was not consuming
    significant database time.
    Hard parses due to literal usage and cursor invalidation were not consuming
    significant database time.
    The Oracle instance memory (SGA and PGA) was adequately sized.
       No recommendations are available.
       Symptoms That Led to the Finding:
          Contention for latches related to the shared pool was consuming
          significant database time.
          Impact is .09 active sessions, 22.42% of total activity.
             Wait class "Concurrency" was consuming significant database time.
             Impact is .1 active sessions, 24.96% of total activity.
    Finding 9: Soft Parse
    Impact is .02 active sessions, 3.86% of total activity.
    Soft parsing of SQL statements was consuming significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .02 active sessions, 3.86% of total activity.
       Action
          Investigate application logic to keep open the frequently used cursors.
          Note that cursors are closed by both cursor close calls and session
          disconnects.
       Recommendation 2: Database Configuration
       Estimated benefit is .02 active sessions, 3.86% of total activity.
       Action
          Consider increasing the session cursor cache size by increasing the
          value of parameter "session_cached_cursors".
       Rationale
          The value of parameter "session_cached_cursors" was "100" during the
          analysis period.
       Symptoms That Led to the Finding:
          Contention for latches related to the shared pool was consuming
          significant database time.
          Impact is .09 active sessions, 22.42% of total activity.
             Wait class "Concurrency" was consuming significant database time.
             Impact is .1 active sessions, 24.96% of total activity.
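
For Recommendation 2 above: "session_cached_cursors" cannot be changed on a running instance with ALTER SYSTEM alone, so the change goes into the SPFILE and takes effect at the next restart. A sketch; the value 200 is only an illustration, not a figure from the report:

    -- Double the session cursor cache from the current 100; needs a restart
    ALTER SYSTEM SET session_cached_cursors = 200 SCOPE = SPFILE SID = '*';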
    Finding 10: Unusual "Other" Wait Event
    Impact is .01 active sessions, 3.75% of total activity.
    Wait event "IPC send completion sync" in wait class "Other" was consuming
    significant database time.
       Recommendation 1: Application Analysis
       Estimated benefit is .01 active sessions, 3.75% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits. Refer
          to Oracle's "Database Reference" for the description of this wait event.
       Recommendation 2: Application Analysis
       Estimated benefit is .01 active sessions, 3.75% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits with P1
          ("send count") value "1".
       Recommendation 3: Application Analysis
       Estimated benefit is .01 active sessions, 2.59% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits in
          Service "mcmsdrac".
       Recommendation 4: Application Analysis
       Estimated benefit is .01 active sessions, 1.73% of total activity.
       Action
          Investigate the cause for high "IPC send completion sync" waits in
          Module "TOAD 9.7.2.5".
       Symptoms That Led to the Finding:
          Wait class "Other" was consuming significant database time.
          Impact is .15 active sessions, 38.29% of total activity.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              Additional Information
    Miscellaneous Information
    Wait class "Application" was not consuming significant database time.
    Wait class "Commit" was not consuming significant database time.
    Wait class "Configuration" was not consuming significant database time.
    CPU was not a bottleneck for the instance.
    Wait class "Network" was not consuming significant database time.
    Wait class "User I/O" was not consuming significant database time.
    Session connect and disconnect calls were not consuming significant database
    time.
    The database's maintenance windows were active during 100% of the analysis
    period.
    Please help.

    Hello experts, could you please advise on the above? It is really urgent.
    Thanks,
    Syed
