SQL statement consuming significant database time

This SQL statement is consuming significant elapsed time (2,446 seconds) and is also causing significant user I/O. Is this because the query materializes a temp result via the WITH clause, or is there an alternative to this query?
WITH TEMP AS (
    SELECT ROOT, CHILD, LEVEL LEV
    FROM catagory V
    START WITH CHILD = :B1
    CONNECT BY PRIOR ROOT = CHILD
)
SELECT CHILD FROM TEMP
WHERE LEV = (SELECT MAX(LEV) FROM TEMP);
Thanks
Renjith

Click on the FAQ sticky posting (at the top of this forum). Find the "when my query is slow" question and read the FAQ response to it.
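For what it's worth, one commonly suggested rewrite (a sketch only, reusing the table and bind names from the post) computes the maximum level with an analytic function in the same pass, so the factored TEMP result is not scanned twice:

```sql
-- Sketch: rank the hierarchy in a single pass instead of re-scanning TEMP.
-- Table CATAGORY and bind :B1 are taken from the original post; verify the
-- plan and the results against the real data before adopting this.
SELECT child
FROM  (SELECT child,
              LEVEL              lev,
              MAX(LEVEL) OVER () max_lev  -- analytic MAX replaces the subquery
       FROM   catagory v
       START WITH child = :B1
       CONNECT BY PRIOR root = child)
WHERE  lev = max_lev;
```

Whether this is faster depends on how the optimizer handles the original factored subquery (it may materialize TEMP once anyway), so compare the two plans.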

Similar Messages

  • JAVA execution consumed significant database time

    Dear all,
    DB 11g on Solaris.
    I have a time-consuming package. This package unloads data in the current DB and loads data from another remote DB on the same network. Below is the recommendation from the Oracle ADDM report.
    This session is a workflow session. What can I do about "JAVA execution consumed significant database time"?
    JAVA execution consumed significant database time.
       Recommendation 1: SQL Tuning
       Estimated benefit is .5 active sessions, 43.98% of total activity.
       Action
          Tune the PL/SQL block with SQL_ID "6bd4fvsx8n42v". Refer to the "Tuning
          PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
          Reference".
          Related Object
             SQL statement with SQL_ID 6bd4fvsx8n42v.
             DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
             broken BOOLEAN := FALSE; BEGIN OWF_USER.START_METS_REFRESH(SYSDATE );
             :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF;
         END;
    Please advise
    Kai


  • Cluster multi-block requests were consuming significant database time

    Hi,
    DB : 10.2.0.4 RAC ASM
    OS : AIX 5.2 64-bit
    We are facing severe performance issues and CPU idle time is dropping to 20%. Based on the AWR report, the top 5 events show that the problem is on the cluster side. I have placed the first node's AWR report here for your suggestions.
    WORKLOAD REPOSITORY report for
    DB Name DB Id Instance Inst Num Release RAC Host
    PROD 1251728398 PROD1 1 10.2.0.4.0 YES msprod1
    Snap Id Snap Time Sessions Curs/Sess
    Begin Snap: 26177 26-Jul-11 14:29:02 142 37.7
    End Snap: 26178 26-Jul-11 15:29:11 159 49.1
    Elapsed: 60.15 (mins)
    DB Time: 915.85 (mins)
    Cache Sizes
    ~~~~~~~~~~~ Begin End
    Buffer Cache: 23,504M 23,504M Std Block Size: 8K
    Shared Pool Size: 27,584M 27,584M Log Buffer: 14,248K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 28,126.82 2,675.18
    Logical reads: 526,807.26 50,105.44
    Block changes: 3,080.07 292.95
    Physical reads: 962.90 91.58
    Physical writes: 157.66 15.00
    User calls: 1,392.75 132.47
    Parses: 246.05 23.40
    Hard parses: 11.03 1.05
    Sorts: 42.07 4.00
    Logons: 0.68 0.07
    Executes: 930.74 88.52
    Transactions: 10.51
    % Blocks changed per Read: 0.58 Recursive Call %: 32.31
    Rollback per transaction %: 9.68 Rows per Sort: 4276.06
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.87 Redo NoWait %: 100.00
    Buffer Hit %: 99.84 In-memory Sort %: 99.99
    Library Hit %: 98.25 Soft Parse %: 95.52
    Execute to Parse %: 73.56 Latch Hit %: 99.51
    Parse CPU to Parse Elapsd %: 9.22 % Non-Parse CPU: 99.94
    Shared Pool Statistics Begin End
    Memory Usage %: 68.11 71.55
    % SQL with executions>1: 94.54 92.31
    % Memory for SQL w/exec>1: 98.79 98.74
    Top 5 Timed Events Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 18,798 34.2
    gc cr multi block request 46,184,663 18,075 0 32.9 Cluster
    gc buffer busy 2,468,308 6,897 3 12.6 Cluster
    gc current block 2-way 1,826,433 4,422 2 8.0 Cluster
    db file sequential read 142,632 366 3 0.7 User I/O
    RAC Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
    Begin End
    Number of Instances: 2 2
    Global Cache Load Profile
    ~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
    Global Cache blocks received: 14,112.50 1,342.26
    Global Cache blocks served: 619.72 58.94
    GCS/GES messages received: 2,099.38 199.68
    GCS/GES messages sent: 23,341.11 2,220.01
    DBWR Fusion writes: 3.43 0.33
    Estd Interconnect traffic (KB) 122,826.57
    Global Cache Efficiency Percentages (Target local+remote 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer access - local cache %: 97.16
    Buffer access - remote cache %: 2.68
    Buffer access - disk %: 0.16
    Global Cache and Enqueue Services - Workload Characteristics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg global enqueue get time (ms): 0.6
    Avg global cache cr block receive time (ms): 2.8
    Avg global cache current block receive time (ms): 3.0
    Avg global cache cr block build time (ms): 0.0
    Avg global cache cr block send time (ms): 0.0
    Global cache log flushes for cr blocks served %: 11.3
    Avg global cache cr block flush time (ms): 1.7
    Avg global cache current block pin time (ms): 0.0
    Avg global cache current block send time (ms): 0.0
    Global cache log flushes for current blocks served %: 0.0
    Avg global cache current block flush time (ms): 4.1
    Global Cache and Enqueue Services - Messaging Statistics
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Avg message sent queue time (ms): 0.1
    Avg message sent queue time on ksxp (ms): 2.4
    Avg message received queue time (ms): 0.0
    Avg GCS message process time (ms): 0.0
    Avg GES message process time (ms): 0.0
    % of direct sent messages: 6.27
    % of indirect sent messages: 93.48
    % of flow controlled messages: 0.25
    Time Model Statistics DB/Inst: PROD/PROD1 Snaps: 26177-26178
    -> Total time in database user-calls (DB Time): 54951s
    -> Statistics including the word "background" measure background process
    time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name Time (s) % of DB Time
    sql execute elapsed time 54,618.2 99.4
    DB CPU 18,798.1 34.2
    parse time elapsed 494.3 .9
    hard parse elapsed time 397.4 .7
    PL/SQL execution elapsed time 38.6 .1
    hard parse (sharing criteria) elapsed time 27.3 .0
    sequence load elapsed time 5.0 .0
    failed parse elapsed time 3.3 .0
    PL/SQL compilation elapsed time 2.1 .0
    inbound PL/SQL rpc elapsed time 1.2 .0
    repeated bind elapsed time 0.8 .0
    connection management call elapsed time 0.6 .0
    hard parse (bind mismatch) elapsed time 0.3 .0
    DB time 54,951.0 N/A
    background elapsed time 1,027.9 N/A
    background cpu time 518.1 N/A
    Wait Class DB/Inst: PROD/PROD1 Snaps: 26177-26178
    -> s - second
    -> cs - centisecond - 100th of a second
    -> ms - millisecond - 1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
    Avg
    %Time Total Wait wait Waits
    Wait Class Waits -outs Time (s) (ms) /txn
    Cluster 50,666,311 .0 30,236 1 1,335.4
    User I/O 419,542 .0 811 2 11.1
    Network 4,824,383 .0 242 0 127.2
    Other 797,753 88.5 208 0 21.0
    Concurrency 212,350 .1 121 1 5.6
    Commit 16,215 .0 53 3 0.4
    System I/O 60,831 .0 29 0 1.6
    Application 6,069 .0 6 1 0.2
    Configuration 763 97.0 0 0 0.0
    Second node top 5 events are as below,
    Top 5 Timed Events
              Avg %Total
    ~~~~~~~~~~~~~~~~~~ wait Call
    Event Waits Time (s) (ms) Time Wait Class
    CPU time 25,959 42.2
    db file sequential read 2,288,168 5,587 2 9.1 User I/O
    gc current block 2-way 822,985 2,232 3 3.6 Cluster
    read by other session 345,338 1,166 3 1.9 User I/O
    gc cr multi block request 991,270 831 1 1.4 Cluster
    My RAM is 95 GB per node; the SGA is 51 GB and the PGA is 14 GB.
    Any inputs from your side would be greatly appreciated.
    Thanks,
    Sunand

    Hi Forstmann,
    Thanks for your update.
    Even i have collected ADDM report, extract of Node1 report as below
    FINDING 1: 40% impact (22193 seconds)
    Cluster multi-block requests were consuming significant database time.
    RECOMMENDATION 1: SQL Tuning, 6% benefit (3313 seconds)
    ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
    "59qd3x0jg40h1". Look for an alternative plan that does not use
    object scans.
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Inter-instance messaging was consuming significant database
    time on this instance. (55% impact [30269 seconds])
    SYMPTOM: Wait class "Cluster" was consuming significant database
    time. (55% impact [30271 seconds])
    FINDING 3: 13% impact (7008 seconds)
    Read and write contention on database blocks was consuming significant
    database time.
    NO RECOMMENDATIONS AVAILABLE
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Inter-instance messaging was consuming significant database
    time on this instance. (55% impact [30269 seconds])
    SYMPTOM: Wait class "Cluster" was consuming significant database
    time. (55% impact [30271 seconds])
    Any help from your side, please?
    Thanks,
    Sunand
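When cluster waits dominate like this, it can help to confirm which statements accumulate them. A sketch against v$sql (column names as documented for 10g; check them on your version):

```sql
-- Sketch: statements ranked by time spent on "Cluster" class waits.
SELECT sql_id,
       cluster_wait_time / 1e6 AS cluster_secs,  -- microseconds to seconds
       executions,
       sql_text
FROM   v$sql
ORDER  BY cluster_wait_time DESC;
-- Inspect the top rows; SQL_ID 59qd3x0jg40h1 from the ADDM finding should
-- appear near the top if that finding is still current.
```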

  • Wait class 'commit' consuming significant database time.

    Hi
    My AWR report shows log file sync as the top wait event. I can also see an additional message saying that wait class 'commit' is consuming significant database time. Can anyone suggest what tuning steps I need to consider for the 'log file sync' wait event?
    Thanks

    Be very careful about this.
    Follow this only if you can afford to lose some data in case of instance failure
    (eg death of the instance from a bug, server panic/reboot, power failure etc).
    Oracle's normal behaviour is to guarantee that every committed transaction
    IS available by ensuring that it is in the redologs and reapplying it if necessary
    in case of an instance failure and recovery or media recovery.
    A Commit NOWAIT means that there is a possibility, however slight, that
    the last few transaction(s) might not have gotten into the redo logs at the time
    of instance failure.
    Your application / analysts must be able to identify transactions that are 'lost'
    and reapply them after you restart a crashed instance.
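For reference, the asynchronous commit that the warning above refers to looks like this in 10gR2 and later (a sketch; confirm the syntax and the commit_write parameter on your release, and re-read the data-loss caveat before enabling it):

```sql
-- Per-statement form: do not wait for LGWR to flush the redo for this commit.
COMMIT WRITE BATCH NOWAIT;

-- Session-wide form: every commit in this session becomes asynchronous.
ALTER SESSION SET commit_write = 'BATCH,NOWAIT';
```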

  • Contention on index block splits  consuming significant database time

    Hi Guys,
    Can anybody suggest how to remove contention on index block splits? This is causing many issues on my production DB: CPU usage shoots up and the application hangs for a few minutes.
    DB is 10.2.0.3 and OS is IBM AIX 5.3

    I found this; it might be useful:
    One possibility is that this is caused by shared CBC latching peculiarities:
    1) during normal selects your index root block can be examined under a
    shared cache buffers chains latch.
    So as long as everybody is only reading the index root block, everybody can
    do it concurrently (without pinning the block). The "current holder count"
    in the CBC latch structure is just increased by one for every read only
    latch get and decreased by one on every release. 0 value means that nobody
    has this latch taken currently.
    Nobody has to wait for others when reading the index root block in the all-read-only case. That greatly helps to combat hot index root issues.
    2) Now if a branch block split happens a level below the root block, the
    root block has to be pinned in exclusive mode for reflecting this change in
    it. In order to pin a block you need to get the corresponding CBC latch in
    exclusive mode.
    If there are already a bunch of readers on the latch, then the exclusive
    latch getter will just flip a bit in the CBC latch structure, signalling its
    interest in an exclusive get.
    Every read-only latch get will check for this bit; if it is set, the
    getters will just spin instead, waiting for the bit to be cleared (they may
    yield or sleep immediately as well, I haven't checked). Now the exclusive
    getter has to spin/wait until all the shared getters have released the latch
    and the "current holder count" drops to zero. Once it's zero (and the getter
    manages to get onto a CPU) it can get the latch, do its work and release the
    latch.
    During all that time, starting from when the "exclusive interest" bit was
    set, nobody could access this index's root block except the processes which
    already had the latch in shared mode. Depending on latch spin/sleep strategy
    for this particular case and OSD implementation, this could mean that all
    those "4000 readers per second" start just spinning on that latch, causing
    heavy spike in CPU usage and they all queue up.
    How to diagnose it:
    You could sample v$latch_misses to see whether the "kcbgtcr: kslbegin
    shared" nowait-fails/sleeps counter takes an exceptional jump
    once you observe this hiccup.
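The sampling suggested above could look like this (a sketch; the view and the location string are as described in the post, so verify the column names on your version):

```sql
-- Sketch: snapshot the miss counters for the shared-read code path.
-- Run it twice, a few seconds apart, during the hiccup and compare.
SELECT parent_name, location, nwfail_count, sleep_count
FROM   v$latch_misses
WHERE  location LIKE 'kcbgtcr%'
ORDER  BY sleep_count DESC;
```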
    How to fix that once diagnosed:
    The usual stuff, like partitioning if possible or creating a single table
    hash cluster instead.
    If you see that the problem comes from excessive spinning, think about
    reducing the spinning overhead (by reducing spincount for example). This
    could affect your other database functions though..
    If you can't do the above - then if you have off-peak time, then analyse
    indexes (using treedump for start) and if you see a block split coming in a
    branch below root block, then force the branch block to split during
    off-peak time by inserting carefully picked values into the index tree,
    which go exactly in the range which cause the proper block to split. Then
    you can just roll back your transaction - the block splits are not rolled
    back nor coalesced somehow, as this is done in a separate recursive
    transaction.
    And this
    With indexes, the story is more complicated since you can't just insert a
    row into any free block available like with tables. Multiple freelists with
    tables help us to spread up inserts to different datablocks, since every
    freelist has its distinct set of datablocks in it. With indexes, the
    inserted key has to go exactly to the block where the structure of the B-tree
    index dictates, so multiple freelists can't help to spread contention here.
    When any of the index blocks has to split, a new block has to be allocated
    from the freelist (and possibly unlinked from previous location in index),
    causing an update to freelist entry in segment header block. Now if you had
    defined multiple freelists for your segment, they'd still remain in the
    single segment header block and if you'd have several simultaneous block
    splits, the segment header would become the bottleneck.
    You could relieve this by having multiple freelist groups (spreading up
    freelists into multiple blocks after the segment header), but this approach has
    its problems as well - like a server process which maps to freelist group 1
    doesn't see free blocks in freelist group 2, thus possibly wasting space in
    some cases...
    So, if you have huge contention on regular index blocks, then you should
    rethink the design (avoid right hand indexes for example), or physical
    design (partition the index), increasing freelists won't help here.
    But if you have contention on index segment's header block because of block
    splits/freelist operations, then either partition the index or have multiple
    freelist groups, adding freelists again won't help here. Note that adding
    freelist groups require segment rebuild.

  • Where to run SQL statements in Oracle Database 11gR1 ?

    Folks,
    Hello. I have just installed Oracle Database 11gR1 and logged in to the Database Control page. There are tabs at the top: Database, Setup, Preferences, Help and Logout.
    I just created a table "table1" in the "Database" tab, but I don't see anywhere to run SQL statements such as SELECT, INSERT, UPDATE.
    Can anyone tell me where to run SQL statements in Oracle Database 11gR1?
    Or can anyone point me to a user manual for Oracle DB 11gR1?

    You can run SQL from a terminal, or install a SQL client and connect from there.
    http://www.articlesbase.com/databases-articles/how-to-install-oracle-11g-client-1793770.html
    Best Regards
    mseberg
    Assuming you have an oracle OS user on Linux, you can try typing sqlplus at your OS command prompt. Generally you will have a .bash_profile with settings like this:
    # User specific environment and startup programs
    PATH=$PATH:$HOME/bin
    export PATH
    # Oracle Settings
    TMP=/tmp; export TMP
    TMPDIR=$TMP; export TMPDIR
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0
    #export DISPLAY=localhost:0.0
    export TZ=CST6CDT 
    export ORACLE_SID=ORCL
    export ORACLE_TERM=xterm
    #export TNS_ADMIN= Set if sqlnet.ora, tnsnames.ora, etc. are not in $ORACLE_HOME/network/admin
    export NLS_LANG=AMERICAN;
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
    export LD_LIBRARY_PATH
    # Set shell search paths
    PATH=/usr/sbin:$PATH; export PATH
    export PATH=$PATH:$ORACLE_HOME/bin
    # CLASSPATH:
    CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
    export EDITOR=vi
    set -o vi
    PS1='$PWD:$ORACLE_SID >'

  • Can the format of a SQL Statement modify the execution time of an SQL ....

    Can the format of a SQL Statement modify the execution time of an SQL statement?
    Thanks in advance

    It depends on:
    1) Which Oracle version you are using
    2) What you mean by "format"
    For example: if you're on Oracle9i, and changing the format means changing the order of the tables in the FROM clause, and you're using the Rule Based Optimizer, then the execution plan and the execution time can be very different...
    Max
    [My Italian Oracle blog|http://oracleitalia.wordpress.com/2009/12/29/estrarre-i-dati-in-formato-xml-da-sql/]

  • Look for history of SQL statements executed in the database

    Is there a way to look at the history or list of SQL statements executed in the database?
    Similar to the history command in Linux or a bash shell.

    The newer v$sqlstats view (10g; see http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_2131.htm) is recommended over v$sql as (according to the documentation) it is "faster, more scalable, and has a greater data retention (the statistics may still appear in this view, even after the cursor has been aged out of the shared pool)", although it's missing a couple of the columns v$sql has.
    The history version (if you are licenced for AWR, which is part of the extra-cost Diagnostics Pack - you may not be licenced to use it even if the dictionary views are installed) is DBA_HIST_SQLSTAT.
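A minimal sketch of both options (column names per the 10g documentation; the DBA_HIST query assumes the Diagnostics Pack licence mentioned above):

```sql
-- Statements still cached in the shared pool (no extra licence needed):
SELECT sql_id, executions, elapsed_time / 1e6 AS elapsed_secs, sql_text
FROM   v$sqlstats
ORDER  BY last_active_time DESC;

-- Historical statements captured by AWR (Diagnostics Pack required):
SELECT s.sql_id, s.executions_delta, t.sql_text
FROM   dba_hist_sqlstat s
JOIN   dba_hist_sqltext t ON t.sql_id = s.sql_id AND t.dbid = s.dbid
ORDER  BY s.elapsed_time_delta DESC;
```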

  • Workspace to parse SQL statements from multiple database schemas

    Hi,
    When going through the following page as the workspace admin user:
    Home > HTML DB Workspace Administration > Manage Services > Schema Reports > Workspace Schemas, I saw this in the right pane: "Your workspace has the privilege to parse SQL statements using the following database schemas. Note that the terms database schema and database user can be used interchangeably."
    So how can I make my workspace parse SQL statements from multiple database schemas? They should be listed there, but I have only one schema in view. Can I make it multiple? If so, how?
    Does anybody have an idea?
    ROSY

    Assign as many schemas to your workspace as you want using the administration app. Read about it in the doc.
    Scott

  • Why has the SQL statement been executed two times in the shell script?

    I tried to test RAC load balancing using the following shell script on SUSE 10 + Oracle 10g RAC.
    oracle@SZDB:~> more load_balance.sh
    #!/bin/bash
    for i in {1..20}
    do
    echo $i
    sqlplus -S system/oracle@ORA10G <<EOF
    select instance_name from v\$instance;
    EOF
    sleep 1
    done
    exit 0
    After executing the shell script, I got the following result:
    oracle@SZDB:~> ./load_balance.sh
    1
    INSTANCE_NAME
    ora10g2
    INSTANCE_NAME
    ora10g2
    2
    INSTANCE_NAME
    ora10g1
    INSTANCE_NAME
    ora10g1
    3
    INSTANCE_NAME
    ora10g1
    INSTANCE_NAME
    ora10g1
    It seems the SQL statement has been executed twice in each loop. Please help to have a look if you feel free. Thanks in advance.
    Robinson

    You can end a SQL command in one of three ways:
    * with a semicolon (;)
    * with a slash (/) on a line by itself
    * with a blank line
    A semicolon (;) tells SQL*Plus that you want to run the current command that was entered. SQL*Plus processes the command and also stores the command in the SQL buffer.
    A blank line in a SQL statement or script tells SQL*Plus that you have finished entering the command but do not want to run it yet; the command is still stored in the SQL buffer.
    A slash (/) on a line by itself tells SQL*Plus that you wish to run the command stored in the SQL buffer.
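To illustrate the buffer behaviour described above, here is a sketch of an interactive session (the instance name is just the one from the post's output):

```sql
SQL> SELECT instance_name FROM v$instance;  -- ';' runs it and buffers it

INSTANCE_NAME
----------------
ora10g1

SQL> /                                      -- '/' re-runs the buffered command

INSTANCE_NAME
----------------
ora10g1
```

So a stray slash, or anything that re-submits the buffer, produces the doubled output seen in the script above.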

  • SQL statement taking a long time

    Hi friends,
    The query below is taking a very long time to execute. Please give some advice on optimising it.
    INSERT INTO AFMS_ATT_DETL (
        ATT_NUM,
        ATT_REF_CD,
        ATT_REF_TYP_CD,
        ATT_DOC_CD,
        ATT_FILE_NAM,
        ATT_FILE_VER_NUM_TXT,
        ATT_DOC_LIB_NAM,
        ATT_DESC_TXT,
        ATT_TYP_CD,
        ACTIVE_IND,
        CRT_BY_USR_NUM,
        CRT_DTTM,
        UPD_BY_USR_NUM,
        UPD_DTTM,
        APP_ACC_CD,
        ARCH_CRT_BTCH_NUM,
        ARCH_CRT_DTTM,
        ARCH_UPD_BTCH_NUM,
        ARCH_UPD_DTTM
    )
      (SELECT
        K.ATT_NUM,
        K.ATT_REF_CD,
        K.ATT_REF_TYP_CD,
        K.ATT_DOC_CD,
        K.ATT_FILE_NAM,
        K.ATT_FILE_VER_NUM_TXT,
        K.ATT_DOC_LIB_NAM,
        K.ATT_DESC_TXT,
        K.ATT_TYP_CD,
        K.ACTIVE_IND,
        K.CRT_BY_USR_NUM,
        K.CRT_DTTM,
        K.UPD_BY_USR_NUM,
        K.UPD_DTTM ,
        L_APP_ACC_CD1,
        L_ARCH_BTCH_NUM,
        SYSDATE ,
        L_ARCH_BTCH_NUM ,
        SYSDATE
        FROM
            FMS_ATT_DETL K
        WHERE
         ( ( K.ATT_REF_CD IN
               (SELECT CSE_CD FROM T_AFMS_CSE_DETL) AND
             K.ATT_REF_TYP_CD = 'ATTREF03'
           ) OR
           ( ATT_REF_CD IN
               (SELECT TO_CHAR(CMNT_PROC_NUM) FROM AFMS_CMNT_PROC) AND
             ATT_REF_TYP_CD = 'ATTREF02'
           ) OR
           ( ATT_REF_CD IN
               (SELECT TO_CHAR(CSE_RPLY_NUM) FROM AFMS_CSE_RPLY) AND
             ATT_REF_TYP_CD = 'ATTREF01'
           ) )
        AND NOT EXISTS (SELECT ATT_NUM FROM (
          SELECT B.CSE_RPLY_NUM CSE_RPLY_NUM,B.ATT_NUM ATT_NUM FROM
            FMS_CSE_RPLY_ATT_MAP B
          WHERE
          NOT EXISTS
             (SELECT A.CSE_RPLY_NUM CSE_RPLY_NUM FROM AFMS_CSE_RPLY A WHERE A.CSE_RPLY_NUM  = B.CSE_RPLY_NUM)
        ) X WHERE X.ATT_NUM = K.ATT_NUM)
       ) ;

    The explain plan for above query is as below:
    PLAN_TABLE_OUTPUT
    Plan hash value: 871385851
    | Id  | Operation                | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT         |                         |     9 |  1188 | 6  (17)| 00:00:01 |
    |   1 |  LOAD TABLE CONVENTIONAL | AFMS_ATT_DETL           |       |       |        |          |
    |*  2 |   FILTER                 |                         |       |       |        |          |
    |*  3 |    HASH JOIN RIGHT ANTI  |                         |   167 | 22044 | 6  (17)| 00:00:01 |
    |   4 |     VIEW                 | VW_SQ_1                 |     1 |    13 | 1   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS ANTI   |                         |     1 |    12 | 1   (0)| 00:00:01 |
    |   6 |       INDEX FULL SCAN    | FMS_CSE_RPLY_ATT_MAP_PK |    25 |   200 | 1   (0)| 00:00:01 |
    |*  7 |       INDEX UNIQUE SCAN  | AFMS_CSE_RPLY_PK        |   162 |   648 | 0   (0)| 00:00:01 |
    |   8 |     TABLE ACCESS FULL    | FMS_ATT_DETL            |   167 | 19873 | 4   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN     | T_AFMS_CSE_DETL_PK      |     1 |     9 | 0   (0)| 00:00:01 |
    |* 10 |    INDEX FULL SCAN       | AFMS_CSE_RPLY_PK        |     1 |     4 | 1   (0)| 00:00:01 |
    |* 11 |    INDEX FULL SCAN       | AFMS_CMNT_PROC_PK       |     1 |     5 | 1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("K"."ATT_REF_TYP_CD"='ATTREF03' AND  EXISTS (SELECT 0 FROM "T_AFMS_CSE_DETL"
                  "T_AFMS_CSE_DETL" WHERE "CSE_CD"=:B1) OR "ATT_REF_TYP_CD"='ATTREF01' AND  EXISTS (SELECT 0
                  FROM "AFMS_CSE_RPLY" "AFMS_CSE_RPLY" WHERE TO_CHAR("CSE_RPLY_NUM")=:B2) OR
                  "ATT_REF_TYP_CD"='ATTREF02' AND  EXISTS (SELECT 0 FROM "AFMS_CMNT_PROC" "AFMS_CMNT_PROC"
                  WHERE TO_CHAR("CMNT_PROC_NUM")=:B3))
       3 - access("ITEM_1"="K"."ATT_NUM")
       7 - access("A"."CSE_RPLY_NUM"="B"."CSE_RPLY_NUM")
       9 - access("CSE_CD"=:B1)
      10 - filter(TO_CHAR("CSE_RPLY_NUM")=:B1)
      11 - filter(TO_CHAR("CMNT_PROC_NUM")=:B1)
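One classic tuning option for an OR chain like this (a sketch only; the branches may overlap, so verify the result set matches before using it) is to rewrite the source query as UNION ALL legs so each branch can drive its own access path instead of the single FILTER over a full scan:

```sql
-- Sketch: one leg per ATT_REF_TYP_CD value; the third leg is analogous.
SELECT k.*
FROM   FMS_ATT_DETL k
WHERE  k.ATT_REF_TYP_CD = 'ATTREF03'
AND    k.ATT_REF_CD IN (SELECT CSE_CD FROM T_AFMS_CSE_DETL)
UNION ALL
SELECT k.*
FROM   FMS_ATT_DETL k
WHERE  k.ATT_REF_TYP_CD = 'ATTREF02'
AND    k.ATT_REF_CD IN (SELECT TO_CHAR(CMNT_PROC_NUM) FROM AFMS_CMNT_PROC);
-- The NOT EXISTS filter from the original statement still needs to be
-- applied to the appropriate leg(s) once the intended parenthesisation
-- is confirmed.
```

Note also that TO_CHAR() on the subquery columns prevents index use on those keys unless a matching function-based index exists.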

  • Contention for latches related to the shared pool was consuming significant database time

    We are having a performance issue on our database. When I look at the report, I see that there is contention for latches. Below is the ADDM report.
    ADDM Report for Task 'ADDM:1775307360_12808'
    Analysis Period
    AWR snapshot range from 12807 to 12808.
    Time period starts at 10-MAY-11 01.00.15 PM
    Time period ends at 10-MAY-11 02.00.23 PM
    Analysis Target
    Database 'ADVFDWP' with DB ID 1775307360.
    Database version 11.1.0.7.0.
    ADDM performed an analysis of all instances.
    Activity During the Analysis Period
    Total database time was 27827 seconds.
    The average number of active sessions was 7.71.
    Summary of Findings
    Description Active Sessions Recommendations
    Percent of Activity
    1 Shared Pool Latches 6.43 | 83.42 0
    2 Top SQL by DB Time 2.41 | 31.24 3
    3 "Concurrency" Wait Class 2.18 | 28.22 0
    4 PL/SQL Execution 1.53 | 19.86 1
    5 "User I/O" wait Class 1.33 | 17.24 0
    6 Hard Parse 1.24 | 16.14 0
    7 Undersized Buffer Cache .83 | 10.73 0
    8 CPU Usage .7 | 9.02 0
    9 Top SQL By I/O .31 | 4.04 1
    10 Top Segments by I/O .24 | 3.12 1
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Findings and Recommendations
    Finding 1: Shared Pool Latches
    Impact is 6.43 active sessions, 83.42% of total activity.
    Contention for latches related to the shared pool was consuming significant
    database time in some instances.
    Instances that were significantly affected by this finding:
    Number Name Percent Impact ADDM Task Name
    1 ADVFDWP1 99.31 ADDM:1775307360_1_12808
    Check the ADDM analysis of affected instances for recommendations.
    Finding 2: Top SQL by DB Time
    Impact is 2.41 active sessions, 31.24% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 1.07 active sessions, 13.82% of total activity.
    Action
    Run SQL Tuning Advisor on the SQL statement with SQL_ID "fdk73nhpt93a5".
    Related Object
    SQL statement with SQL_ID fdk73nhpt93a5.
    INSERT INTO SFCDM.F_LOAN_PTFL_MOL_SNPSHT SELECT * FROM
    F_LOAN_PTFL_MOL_SNPSHT_STG
    Recommendation 2: SQL Tuning
    Estimated benefit is 1 active sessions, 12.96% of total activity.
    Action
    Tune the PL/SQL block with SQL_ID "7nvgzsgy9ydn9". Refer to the "Tuning
    PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
    Reference".
    Related Object
    SQL statement with SQL_ID 7nvgzsgy9ydn9.
    begin
    insert into SFCDM.F_LOAN_PTFL_MOL_SNPSHT select * from
    F_LOAN_PTFL_MOL_SNPSHT_STG;
    end;
    Recommendation 3: SQL Tuning
    Estimated benefit is .4 active sessions, 5.2% of total activity.
    Action
    Investigate the SQL statement with SQL_ID "fcvfq2gzmxu0t" for possible
    performance improvements.
    Related Object
    SQL statement with SQL_ID fcvfq2gzmxu0t.
    select
    a11.DT_YR_MO DT_YR_MO,
    a11.IND_SCRTZD IND_SCRTZD,
    a13.CD_LNSTAT CD_LNSTAT_INTGRTD,
    sum(a11.CNT_LOAN) WJXBFS1,
    sum(a11.AMT_PART_EOP_UPB) WJXBFS2,
    sum(a11.AMT_LST_VLD_PART_UPB) WJXBFS3
    from
    SFCDM.F_LOAN_PTFL_MOL_SNPSHT
    a11
    join
    SFCDM.D_DETD_LNSTAT_CURR
    a12
    on
    (a11.ID_CYCL_CLOS_DETD_LNSTAT_SRGT = a12.ID_DETD_LNSTAT_SRGT)
    join
    SFCDM.D_LNSTAT_CD
    a13
    on
    (a12.ID_LNSTAT_CD_SRGT = a13.ID_LNSTAT_CD_SRGT)
    join
    SFCDM.D_LOAN_CHARTC_CURR_MINI
    a14
    on
    (a11.ID_LOAN_CHARTC_SRGT = a14.ID_LOAN_CHARTC_SRGT)
    where
    (a11.DT_YR_MO in (201103)
    and a14.CD_SFCRM_LOAN_BUS_LI not in ('L', 'T', 'W')
    and a13.CD_LNSTAT in (14)
    and not exists
    (select * from SFCDM.F_LOAN_PTFL_MOL_SNPSHT s
    where s.id_loan_syst_gend = a11.id_loan_syst_gend
    and s.dt_yr_mo

    It is worth checking the actual size of the shared pool e.g.
    select pool,sum(bytes)/1024/1024/1024 from v$sgastat group by pool;
    the parameters you have posted suggest you have set a minimum but no maximum, so it could be very large.
    Next up is looking for unshared SQL, i.e.
    select column1 from some_table where column2='A_VALUE';
    select column1 from some_table where column2='Another_Value';
    where the code should be using binds instead of literals, for security and performance reasons. A simple way to find this is to look in v$sql for SQL having the same plan_hash_value but different sql_ids, and compare the sql_fulltext of each statement.
    Also a possibility is SQL with many child cursors; this is trickier, as the cause may vary and may not be easy to fix. Check the contents of v$sql for statements that have high values in the child_number column, and investigate the contents of v$sql_shared_cursor for the reason there are multiple child cursors.
    Chris
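The two checks Chris describes can be sketched as follows (v$sql column names as documented; the thresholds are arbitrary starting points):

```sql
-- Literal-instead-of-bind candidates: same plan, many distinct SQL_IDs.
SELECT plan_hash_value,
       COUNT(DISTINCT sql_id) AS variants
FROM   v$sql
WHERE  plan_hash_value <> 0
GROUP  BY plan_hash_value
HAVING COUNT(DISTINCT sql_id) > 10
ORDER  BY variants DESC;

-- Statements with many child cursors; v$sql_shared_cursor explains why.
SELECT sql_id,
       MAX(child_number) + 1 AS child_cursors
FROM   v$sql
GROUP  BY sql_id
HAVING MAX(child_number) > 20;
```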

  • SQL fetch taking so much time

    Hi,
    I am using Oracle 10.2.0 on Linux. My database is performing slowly and users are complaining.
    I used ADDM to find the issue. This is my ADDM report for that period.
    FINDING 2: 69% impact (903 seconds)
    Individual database segments responsible for significant user I/O wait were
    found.
       RECOMMENDATION 1: Segment Tuning, 69% benefit (903 seconds)
          ACTION: Investigate application logic involving I/O on TABLE
             "MPE_SCHEMA3.SUBSCRIBERS" with object id 25149.
             RELEVANT OBJECT: database object with id 25149
          RATIONALE: The I/O usage statistics for the object are: 0 full object
             scans, 46058 physical reads, 1 physical writes and 0 direct reads.
          RATIONALE: The SQL statement with SQL_ID "4bqrym38a9ths" spent
             significant time waiting for User I/O on the hot object.
             RELEVANT OBJECT: SQL statement with SQL_ID 4bqrym38a9ths
             SELECT COUNT(*) FROM "SUBSCRIBERS" "A1" WHERE
             "A1"."LS_CHG_DTE_CUS">:B3 AND "A1"."LS_CHG_DTE_CUS"<=:B2 AND
             "A1"."CONNECT_DTE_SBB">=TRUNC(SYSDATE@!)-:B1 AND
             "A1"."CONNECT_DTE_SBB"<=TRUNC(SYSDATE@!)
          RATIONALE: The SQL statement with SQL_ID "51b5swtaaz42r" spent
             significant time waiting for User I/O on the hot object.
             RELEVANT OBJECT: SQL statement with SQL_ID 51b5swtaaz42r
             SELECT "CUST_ACCT_NO","SUB_ACCT_NO","PHONE_NO","LS_CHG_DTE_CUS" FROM
             "SUBSCRIBERS" "SC" WHERE "LS_CHG_DTE_CUS"<:1 AND :2="CUST_ACCT_NO"
             AND :3="SUB_ACCT_NO" AND :4="PHONE_NO" AND :5<"LS_CHG_DTE_CUS"
       SYMPTOMS THAT LED TO THE FINDING:
          SYMPTOM: Wait class "User I/O" was consuming significant database time.
                   (72% impact [943 seconds])
    FINDING 3: 67% impact (889 seconds)
    SQL statements consuming significant database time were found.
       RECOMMENDATION 1: SQL Tuning, 55% benefit (729 seconds)
          ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
             "4bqrym38a9ths".
             RELEVANT OBJECT: SQL statement with SQL_ID 4bqrym38a9ths and
             PLAN_HASH 2462067897
             SELECT COUNT(*) FROM "SUBSCRIBERS" "A1" WHERE
             "A1"."LS_CHG_DTE_CUS">:B3 AND "A1"."LS_CHG_DTE_CUS"<=:B2 AND
             "A1"."CONNECT_DTE_SBB">=TRUNC(SYSDATE@!)-:B1 AND
             "A1"."CONNECT_DTE_SBB"<=TRUNC(SYSDATE@!)
          RATIONALE: SQL statement with SQL_ID "4bqrym38a9ths" was executed 1
             times and had an average elapsed time of 725 seconds.
       RECOMMENDATION 2: SQL Tuning, 14% benefit (182 seconds)
          ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
             "51b5swtaaz42r".
             RELEVANT OBJECT: SQL statement with SQL_ID 51b5swtaaz42r and
             PLAN_HASH 2608341829
             SELECT "CUST_ACCT_NO","SUB_ACCT_NO","PHONE_NO","LS_CHG_DTE_CUS" FROM
             "SUBSCRIBERS" "SC" WHERE "LS_CHG_DTE_CUS"<:1 AND :2="CUST_ACCT_NO"
             AND :3="SUB_ACCT_NO" AND :4="PHONE_NO" AND :5<"LS_CHG_DTE_CUS"
          RATIONALE: SQL statement with SQL_ID "51b5swtaaz42r" was executed 19649
             times and had an average elapsed time of 0.0083 seconds.
    FINDING 4: 67% impact (882 seconds)
    Individual SQL statements responsible for significant user I/O wait were
    found.
       RECOMMENDATION 1: SQL Tuning, 55% benefit (729 seconds)
          ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
             "4bqrym38a9ths".
             RELEVANT OBJECT: SQL statement with SQL_ID 4bqrym38a9ths and
             PLAN_HASH 2462067897
             SELECT COUNT(*) FROM "SUBSCRIBERS" "A1" WHERE
             "A1"."LS_CHG_DTE_CUS">:B3 AND "A1"."LS_CHG_DTE_CUS"<=:B2 AND
             "A1"."CONNECT_DTE_SBB">=TRUNC(SYSDATE@!)-:B1 AND
             "A1"."CONNECT_DTE_SBB"<=TRUNC(SYSDATE@!)
          RATIONALE: SQL statement with SQL_ID "4bqrym38a9ths" was executed 1
             times and had an average elapsed time of 725 seconds.
          RATIONALE: Average time spent in User I/O wait events per execution was
             720 seconds.
       RECOMMENDATION 2: SQL Tuning, 14% benefit (182 seconds)
          ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
             "51b5swtaaz42r".
             RELEVANT OBJECT: SQL statement with SQL_ID 51b5swtaaz42r and
             PLAN_HASH 2608341829
             SELECT "CUST_ACCT_NO","SUB_ACCT_NO","PHONE_NO","LS_CHG_DTE_CUS" FROM
             "SUBSCRIBERS" "SC" WHERE "LS_CHG_DTE_CUS"<:1 AND :2="CUST_ACCT_NO"
             AND :3="SUB_ACCT_NO" AND :4="PHONE_NO" AND :5<"LS_CHG_DTE_CUS"
          RATIONALE: SQL statement with SQL_ID "51b5swtaaz42r" was executed 19649
             times and had an average elapsed time of 0.0083 seconds.
          RATIONALE: Average time spent in User I/O wait events per execution was
             0.0082 seconds.
       SYMPTOMS THAT LED TO THE FINDING:
          SYMPTOM: Wait class "User I/O" was consuming significant database time.
                    (72% impact [943 seconds])
    I know SUBSCRIBERS is the table that is creating problems. We have heavy insert, update, and delete activity on this table; it has 4,121,886 records and no partitions.
    From my observation, partitioning the table could help improve performance.
    I am looking for experts' suggestions: what is creating the problem, how can the issue be resolved, and what else can be done other than partitioning the table?
    Thanks
    umesh

    I often say here in the forums that a solution is only as good as the problem definition.
    The original analysis you posted shows mostly symptoms. For example:
    RATIONALE: SQL statement with SQL_ID "4bqrym38a9ths" was executed 1
    times and had an average elapsed time of 725 seconds.
    Of what benefit will it be, across a 24h processing period, to shave a few seconds off a 725-second SQL that is executed only once in that period?
    Is there I/O contention? Serialised I/O is by nature slow. A process doing 46,058 physical reads is unlikely to be considered fast, as there is an inherent latency per I/O call. This does not necessarily mean I/O contention: the I/O subsystem could be handling this as fast as possible, with sufficient spare capacity to serve an additional 50% load.
    There are numerous factors to consider when dealing with performance. I would not be comfortable using only the output from such a report to drive performance tuning. Software and hardware systems are not that simple; a holistic view is needed before isolating specific areas and focusing on those only as the problem areas that need to be addressed.
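    For what it is worth, one concrete thing to verify before reaching for partitioning is whether the hot predicates are supported by an index. A hypothetical sketch, using the column names from the ADDM output above (the index name and column order are assumptions to validate against your actual data distribution and existing access paths, not a recommendation to apply blindly):

```sql
-- Check what indexes already exist on the table first.
SELECT index_name, column_name, column_position
FROM   user_ind_columns
WHERE  table_name = 'SUBSCRIBERS'
ORDER  BY index_name, column_position;

-- Hypothetical composite index covering the range predicates of
-- both hot statements from the ADDM report.
CREATE INDEX subscribers_lschg_conn_ix
    ON subscribers (ls_chg_dte_cus, connect_dte_sbb);
```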

  • High-load sql statement using v$sql

    Hi,
    Can anyone please tell me how to find high-load SQL statements and their users from the V$SQL view?
    What is the query to find them?
    Thank you!

    Hello,
    You can run an ADDM report and check its findings; it will tell you things like the following:
    Impact (%)  Finding
    67          SQL statements consuming significant database time were found.
    40.7        Time spent on the CPU by the instance was responsible for a substantial part of database time.
    20.7        Individual SQL statements responsible for significant user I/O wait were found.
    13.7        Individual database segments responsible for significant user I/O wait were found.
    Kind regards
    Mohamed Elazab
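    To answer the question directly from V$SQL, a common approach is to sort by total elapsed time and include the parsing schema (all columns below are standard v$sql columns; the ordering criterion and the row limit of 10 are arbitrary choices you can change):

```sql
-- Top SQL by total elapsed time, with the user who parsed it.
-- ELAPSED_TIME is in microseconds, hence the division.
SELECT *
FROM  (SELECT sql_id,
              parsing_schema_name,
              executions,
              ROUND(elapsed_time / 1e6, 1) AS elapsed_secs,
              buffer_gets,
              disk_reads,
              SUBSTR(sql_text, 1, 80)      AS sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE  rownum <= 10;
```

    You could equally order by buffer_gets or disk_reads depending on whether "high load" means CPU/logical I/O or physical I/O in your context.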

  • How to find redo generated per each SQL Statement?

    Database version is 10.2.0.4. OS is AIX.
    We have excessive redo generation during a specific time of day. We would like to identify which statements are contributing to this excessive redo.
    ADDM shows this as its first finding:
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
    Is it possible to find out which sql statement is generating how much redo in bytes during a specific time?
    Thank You
    Sarayu

    user11181920 wrote:
    Is it possible to find out which sql statement is generating how much redo in bytes during a specific time? It will not help you.
    For example, you will see a bunch of UPDATE, DELETE and INSERT statements issued by the apps. So? What are you going to do with them? If you can modify the app to eliminate unnecessary DML, it would be great of course. But it is unlikely.
    From what I've seen, namely developers putting in far more commits than the business rules would justify, it is not so unlikely.
    >
    Also, when DML statements generate redo they do not cause too many waits on "log file sync", because the log writer flushes the buffer as soon as it is more than a third full, after some time, or on commit/rollback.
    This may or may not be true, as there can be several reasons for log file sync: insufficient I/O (which is the only one you are addressing); CPU starvation (because the CPU ultimately controls I/O requests, so even fixing some unrelated SQL that hogs the CPU could change this); LGWR priority (on some systems it makes sense to simply raise the priority, though whether that makes sense varies widely); and other things such as private redo, file option flags, and the IMU relationship with undo, which are too complicated to cover in a partial sentence.
    >
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.
    So the root cause can be too many commits, which you should evaluate and fix if possible. You can also use Tanel Poder's snapper script, including on lgwr, to see more closely what it is waiting on. See page 28: http://files.e2sn.com/slides/Tanel_Poder_log_file_sync.pdf
    >
    Right. It is because the log writer writes synchronously and reliably here: it waits until the HDD has written the data down and responded that it was written OK.
    And HDDs are not the fastest things in a computer. They are mechanical - that is why.
    I would recommend you:
    1. Place redo log files on fast HDDs, preferably dedicated to this purpose only. If other I/O frequently disturbs these drives, the heads will have to change tracks, which causes the longest HDD waits.
    2. Consider using faster technology such as SAN or SSD.
    Read this instant classic: http://www.pythian.com/news/28797/de-confusing-ssd-for-oracle-databases/
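    On the original question: Oracle tracks redo generation per session rather than per statement, so a practical approximation is to find the sessions generating the most redo and then look at what SQL those sessions are running. A sketch using standard v$ views (the statistic is cumulative since logon, so sample it twice over your problem window and diff the values):

```sql
-- Redo generated per session, highest first.
SELECT s.sid,
       s.username,
       s.program,
       st.value AS redo_bytes
FROM   v$sesstat  st
       JOIN v$statname sn ON sn.statistic# = st.statistic#
       JOIN v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
AND    st.value > 0
ORDER  BY st.value DESC;
```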
