Guide me to tune the query

Hi All,
I need to tune a query that is taking more than 1 hour to execute over 800,000 (8 lakh) records.
SQL> explain plan for
SELECT C.aci_cust_code      cust_code,
       C.aci_cust_name      cust_name,
       R.NAME               ruledefination,
       B.RULECODE           ALERTS,
       A.custom1            tran_id,
       TD_get_value('AMLTRANTYPE', RTRIM(A.custom17)) trantype,
       A.CUSTOM18           tran_nature,
       A.custom25           tran_date,
       A.messageno          messageno,
       TD_get_value('AMLTRANSTATUS', A.status)        msgstatus,
       D.acai_acct_type     acct_type,
       A.custom19           acct_number,
       A.CURRENCY           CURRENCY,
       A.priorityamount     amount,
       A.operator           USERNAME,
       A.msgdb_id           msgdb_id,
       A.msg_mode_in        msg_mode_in
FROM   MSGDB A,
       MSGALERTS B,
       AML_CUST_INFO C,
       AML_CUST_ACC_INFO D,
       RULETBL2 R,
       (SELECT tdkey FROM tabledetails WHERE tdidcode = 'AML-INCLUDEQ') amlqueues
WHERE  A.msgdb_id = B.msgdb_id
AND    A.queueid = amlqueues.tdkey
AND    A.MSG_MODE_IN = 'AML-TRANS'
AND    A.custom15 = C.aci_cust_code
AND    A.CUSTOM19 = D.ACAI_ACCT_NUMBER(+)
AND    TO_CHAR(A.custom25, 'YYYYMMDD') BETWEEN TO_CHAR(TO_DATE('2011/01/01', 'YYYY/MM/DD'), 'YYYYMMDD')
                                           AND TO_CHAR(TO_DATE('2011/01/31', 'YYYY/MM/DD'), 'YYYYMMDD')
AND    B.RULECODE = R.RULECODE
ORDER  BY A.custom25, msgdb_id, B.rulecode;
Explained.
PLAN_TABLE_OUTPUT
Plan hash value: 1081661146
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 173K| 30M| | 12697 (2)| 00:02:33 |
| 1 | SORT ORDER BY | | 173K| 30M| 66M| 12697 (2)| 00:02:33 |
|* 2 | HASH JOIN | | 173K| 30M| | 5580 (4)| 00:01:07 |
| 3 | VIEW | index$_join$_005 | 3395 | 81480 | | 42 (3)| 00:00:01 |
|* 4 | HASH JOIN | | | | | | |
| 5 | INDEX FAST FULL SCAN | IDX_RCODE | 3395 | 81480 | | 10 (0)| 00:00:01 |
| 6 | INDEX FAST FULL SCAN | SYS_C0040836 | 3395 | 81480 | | 31 (0)| 00:00:01 |
|* 7 | HASH JOIN | | 1737 | 276K| | 5534 (4)| 00:01:07 |
|* 8 | HASH JOIN | | 559 | 86645 | | 4575 (3)| 00:00:55 |
|* 9 | HASH JOIN OUTER | | 448 | 56000 | | 4463 (3)| 00:00:54 |
| 10 | NESTED LOOPS | | 448 | 47040 | | 4404 (3)| 00:00:53 |
|* 11 | TABLE ACCESS BY INDEX ROWID| MSGDB | 451 | 35178 | | 4403 (3)| 00:00:53 |
|* 12 | INDEX RANGE SCAN | I_MODEDATE | 2292 | | | 4323 (3)| 00:00:52 |
|* 13 | INDEX UNIQUE SCAN | PK_TABLEDETAIL | 1 | 27 | | 0 (0)| 00:00:01 |
| 14 | INDEX FAST FULL SCAN | ACC_NUMBER_TYPE | 58947 | 1151K| | 58 (4)| 00:00:01 |
| 15 | TABLE ACCESS FULL | AML_CUST_INFO | 18340 | 537K| | 111 (1)| 00:00:02 |
| 16 | TABLE ACCESS FULL | MSGALERTS | 868K| 6782K| | 944 (4)| 00:00:12 |
There is no index on RULECODE on the MSGALERTS and RULETBL2 tables.
Could you guys guide me on how to tune this query, with or without creating any new indexes?
Thanks,

To emphasise what hoek has said regarding dates: NEVER compare dates by converting them to strings (or numbers). By doing so, you remove vital information from the optimizer.
For example, what is the difference between "31st Dec 2010" and "1st Jan 2011"? Easy, they're dates, that's 1 day.
But what's the difference between "20101231" and "20110101"? Easy: 20110101 - 20101231 = 8870.
That makes the difference between the optimizer guessing 1 row or 8870 rows... a fairly big difference, I think you'll agree, which could well impact on the plan the optimizer chooses.
One other point - leaving the clause as dates gives:
AND    a.custom25 BETWEEN TO_DATE('2011/01/01', 'YYYY/MM/DD')
                      AND TO_DATE('2011/01/31', 'YYYY/MM/DD')
which excludes any times on 31st Jan 2011 except midnight, e.g. 10am on 31st Jan 2011 won't be returned by your query.
If you're after rows for a given month, then you could do:
AND    trunc(a.custom25, 'mm') = TO_DATE('01/01/2011', 'dd/mm/yyyy')
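If you do want the whole of 31st Jan included, one option is a half-open range (a sketch against the original query, assuming custom25 really is a DATE column):
AND    a.custom25 >= TO_DATE('2011/01/01', 'YYYY/MM/DD')
AND    a.custom25 <  TO_DATE('2011/02/01', 'YYYY/MM/DD')
This form also keeps custom25 bare on the left-hand side, so an existing index on it can still be considered and the optimizer can estimate the date range properly.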

Similar Messages

  • Undo_tablespace and undo_retention or tune the query: which one to increase

    hi all,
    In my log files I see:
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=165122 sec, SCN: 0x05ff.062f3363):
    Tue Feb 5 02:26:39 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Tue Feb 5 02:35:21 2008
    Thread 2 advanced to log sequence 42907
    Mon Feb 4 09:10:55 2008
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=104081 sec, SCN: 0x05ff.05ebc008):
    Mon Feb 4 09:10:55 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Mon Feb 4 09:14:08 2008
    ===============================================
    and my undo_retention
    Current usage:
    UNDO_01 96736 11596 12 88
    UNDO_02 96736 9357 10 90
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 36000
    Can anyone please guide me on which is best: increasing undo_retention, increasing the undo tablespace,
    or tuning the query?
    my database version is
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
    PL/SQL Release 10.2.0.2.0 - Production
    "CORE     10.2.0.2.0     Production"
    TNS for HPUX: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    thanks in advance

    IMO best is to
    1) tune the query to minimize time and resource use;
    2) set the undo_retention to size the undo tablespace for the required 'consistent read' rebuild requirements;
    3) set the retention guarantee appropriately
    4) size the undo tablespace based on the required size, probably dictated by 2)
    Why is this an 'either one or other' question? When driving a car and looking for best fuel efficiency, you tune up the car, drive properly AND use the right fuel. You don't just pick one and leave it at that.
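    As a rough starting point for 2) and 4), V$UNDOSTAT keeps a history in 10-minute buckets; here is a sketch of the kind of query that helps size retention and the tablespace (the arithmetic is only approximate):
    SELECT MAX(maxquerylen)    AS longest_query_sec,
           MAX(undoblks / 600) AS peak_undo_blocks_per_sec,
           SUM(ssolderrcnt)    AS ora_01555_count
    FROM   v$undostat;
    UNDO_RETENTION should generally cover longest_query_sec, and the undo tablespace roughly needs peak rate x retention x block size.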

  • How to tune the query...?

    Hi all,
    I have a table with millions of records and the query is taking hours.
    How can I tune the query apart from doing the following things?
    1. Creating or Deleting indexes.
    2. Using Bind variables.
    3. Using Hints.
    4. Updating the statistics regularly.
    Actually, I was asked this question in an interview: how do you tune a query?
    I told him the above 4 things. Then he said these are not working, so
    how would you tune the query then?
    Thanks in advance,
    Pal

    user546710 wrote:
    Actually, I was asked this question in an interview: how do you tune a query?
    I told him the above 4 things. Then he said these are not working, so
    how would you tune the query then?
    It actually depends on the scenario/problem given.
    You may want to read this first.
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
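    If the standard four answers are ruled out, one concrete next step (10g onwards) is comparing the optimizer's estimates against the actual row counts, for example (a sketch; the table name is just a stand-in for the slow query):
    SELECT /*+ gather_plan_statistics */ COUNT(*) FROM some_big_table;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Large gaps between E-Rows and A-Rows usually point at the step that needs attention (stale statistics, a bad join order, an unsuitable access path).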

  • How to tune the query for duplicate records while joining the two tables

    Hi, I am executing a query that retrieves from multiple tables, one of which has duplicate records. How do I get a single record back?

    Not enough info... the subject says "tune" the query, the message says "write" the query... and where is the actual query that you tried?
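    In general, when one of the joined tables has several rows per key and only one of them is wanted, an analytic row number is a common pattern (a sketch; all the names are invented):
    SELECT parent_id, parent_name, child_value
    FROM  (SELECT p.id         AS parent_id,
                  p.name       AS parent_name,
                  c.some_value AS child_value,
                  ROW_NUMBER() OVER (PARTITION BY p.id ORDER BY c.created_date DESC) AS rn
           FROM   parent_tab p
           JOIN   child_tab  c ON c.parent_id = p.id)
    WHERE  rn = 1;
    But as noted above, the actual query and the rule for deciding which duplicate to keep are needed before anything more specific can be said.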

  • How to tune the query and difference between CBO AND RBO.. Which is good

    Hello Friends,
    Here are some questions I have; please reply back with complete descriptions and URLs if any.
    1) How did you tune a query?
    2) What approach do you take to tune a query? Do you use hints?
    3) Where did you tune the query and what were the issues with the query?
    4) What is the difference between RBO and CBO? Where would you use RBO and CBO?
    5) Give some information about hash joins.
    6) Using an explain plan, how do you know where the bottleneck in a query is? How will you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
    kumar73 wrote:
    Hello Friends,
    Here are some questions I have; please reply back with complete descriptions and URLs if any.
    1) How did you tune a query?
    Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas.
    See the forum FAQ
    SQL and PL/SQL FAQ
    "3. How to improve the performance of my query?"
    2) What approach do you take to tune a query? Do you use hints?
    Hints can help.
    Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
    Table design can have a big impact on performance.
    Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
    3) Where did you tune the query and what were the issues with the query?
    Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
    4) What is the difference between RBO and CBO? Where would you use RBO and CBO?
    Basically, use RBO if you have Oracle 7 or earlier.
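    For question 6, the usual starting point is an explain plan formatted with DBMS_XPLAN, e.g. (a sketch; emp/dept are just stand-in tables):
    EXPLAIN PLAN FOR
    SELECT e.ename, d.dname
    FROM   emp e JOIN dept d ON d.deptno = e.deptno;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    The operations with the largest cost and cardinality (or, with rowsource statistics, the largest actual time) are where to look for the bottleneck.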

  • Help needed to tune the Query:Statistics added

    Can some DBA please help me to tune this query:
    SELECT DISTINCT K.ATTRIBUTE_VALUE AGENCY_ID,B.PROFILE_NM ,NVL(G.OFFICE_DESC,'--') OFFICE_DESC,f.OFFICE_ID,B.PROFILE_ID,'%' ROLE,'%' LAYOUT,
    CASE
    WHEN 'flagB' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING')
    WHEN 'flagO' = '%' THEN
    NVL(J.ISS_GRP_DESC,'ORDERING')
    WHEN 'flag' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING/ORDERING')
    ELSE
    NVL(J.ISS_GRP_DESC,' ')
    END ISS_GRP_DESC,
    DECODE(NVL(H.USERID,' ') ,' ','--','<a sbcuid_in=' || H.USERID || ' target=NEW >'||H.FIRSTNAME || ' ' || H.LASTNAME || '( ' || H.USERID || ' )</a>' ) USER_NAME
    FROM
    PROFILE_PORTAL B ,
    TBL_BDA_AGENCY_RESP_REP C ,
    TBL_BDA_AGENCY_OFFICE F,
    TBL_BDA_OFFICE G,
    USERS_PORTAL H,
    TBL_BDA_USR_ISS_GRP I ,
    TBL_BDA_ISS_GROUP J,
    ATTRIBUTE_VALUES_PORTAL K,
    PROFILE_TYPE_PORTAL L
    WHERE
    B.PROFILE_ID = F.AGENCY_ID (+)
    AND B.PROFILE_ID = C.AGENCY_ID (+)
    AND G.OFFICE_ID (+)= F.OFFICE_ID
    AND H.USERID (+)= C.RESP_USR_ID
    AND C.ISS_GRP_ID = I.ISS_GRP_ID (+)
    AND I.ISS_GRP_ID = J.ISS_GRP_ID(+)
    AND 'PROFILE.'||B.PROFILE_ID = K.ENTITY_ID(+)
    AND K.ATTRIBUTE_VALUE IS NOT NULL
    AND L.PROFILE_TYPE_ID = B.PROFILE_TYPE_ID
    AND L.APPLICATION_CD='BDA'
    AND NOT EXISTS (SELECT agency_id
    FROM TBL_BDA_AGENCY_RESP_REP t
    WHERE t.ISS_GRP_ID IN ('%')
    AND t.AGENCY_ID = C.AGENCY_ID)
    AND K.ATTRIBUTE_VALUE LIKE '%'
    AND UPPER(B.PROFILE_NM) LIKE UPPER('%')
    AND (to_char(NVL(B.PROFILE_ID,0)) LIKE '%' OR NVL(B.PROFILE_ID,0) IN ('a'))
    AND NVL(G.OFFICE_ID,0) IN ('%')
    AND (to_char(NVL(C.RESP_USR_ID,'0')) LIKE '%' OR NVL(C.RESP_USR_ID,'0') IN ('k'))
    ORDER BY PROFILE_NM
    The number of rows in these tables are as follows:
    PROFILE_PORTAL -- 2392
    TBL_BDA_AGENCY_RESP_REP 3508
    TBL_BDA_AGENCY_OFFICE 2151
    TBL_BDA_OFFICE 3
    USERS_PORTAL 270500
    TBL_BDA_USR_ISS_GRP 234
    TBL_BDA_ISS_GROUP 2
    ATTRIBUTE_VALUES_PORTAL 2790
    PROFILE_TYPE_PORTAL 3
    EXPLAIN PLAN has given this output to me:
    SQL> select * from table(dbms_xplan.display) dual;
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
    | 0 | SELECT STATEMENT | | 807 | 102K| | 2533 |
    | 1 | SORT UNIQUE | | 807 | 102K| 232K| 82 |
    |* 2 | FILTER | | | | | |
    |* 3 | HASH JOIN OUTER | | 807 | 102K| | 52 |
    |* 4 | HASH JOIN OUTER | | 807 | 95226 | | 40 |
    |* 5 | TABLE ACCESS BY INDEX ROWID | ATTRIBUTE_VALUES | 1 | 23 | | 2 |
    | 6 | NESTED LOOPS | | 7 | 805 | | 37 |
    | 7 | NESTED LOOPS OUTER | | 6 | 552 | | 25 |
    |* 8 | FILTER | | | | | |
    | 9 | NESTED LOOPS OUTER | | | | | |
    |* 10 | FILTER | | | | | |
    | 11 | NESTED LOOPS OUTER | | | | | |
    | 12 | NESTED LOOPS OUTER | | 3 | 141 | | 10 |
    |* 13 | HASH JOIN | | 3 | 120 | | 7 |
    |* 14 | TABLE ACCESS FULL | PROFILE | 6 | 198 | | 4 |
    |* 15 | TABLE ACCESS FULL | PROFILE_TYPE | 1 | 7 | | 2 |
    |* 16 | INDEX RANGE SCAN | SYS_C0019777 | 1 | 7 | | 1 |
    | 17 | TABLE ACCESS BY INDEX ROWID| TBL_BDA_OFFICE | 1 | 10 | | 1 |
    |* 18 | INDEX UNIQUE SCAN | SYS_C0019800 | 1 | | | |
    | 19 | TABLE ACCESS BY INDEX ROWID | TBL_BDA_AGENCY_RESP_REP | 2 | 26 | | 2 |
    |* 20 | INDEX RANGE SCAN | IDX_AGECYRESP_AGNCYID | 2 | | | 1 |
    | 21 | TABLE ACCESS BY INDEX ROWID | USER_ | 1 | 22 | | 1 |
    |* 22 | INDEX UNIQUE SCAN | USER_PK | 1 | | | |
    |* 23 | INDEX RANGE SCAN | IDX_ATTVAL_ENTATTID | 1 | | | 1 |
    | 24 | TABLE ACCESS FULL | TBL_BDA_USR_ISS_GRP | 234 | 702 | | 2 |
    | 25 | TABLE ACCESS FULL | TBL_BDA_ISS_GROUP | 2 | 24 | | 2 |
    |* 26 | TABLE ACCESS BY INDEX ROWID | TBL_BDA_AGENCY_RESP_REP | 1 | 7 | | 3 |
    |* 27 | INDEX RANGE SCAN | IDX_AGECYRESP_AGNCYID | 2 | | | 1 |
    Predicate Information (identified by operation id):
    2 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "TBL_BDA_AGENCY_RESP_REP" "T" WHERE "T"."AGENCY_ID"=:B1
    AND "T"."ISS_GRP_ID"=TO_NUMBER('%')))
    3 - access("I"."ISS_GRP_ID"="J"."ISS_GRP_ID"(+))
    4 - access("SYS_ALIAS_1"."ISS_GRP_ID"="I"."ISS_GRP_ID"(+))
    5 - filter("K"."ATTRIBUTE_VALUE" IS NOT NULL AND "K"."ATTRIBUTE_VALUE" LIKE '%')
    8 - filter(NVL("SYS_ALIAS_1"."RESP_USR_ID",'0') LIKE '%' OR NVL("SYS_ALIAS_1"."RESP_USR_ID",'0')='k')
    10 - filter(NVL("G"."OFFICE_ID",0)=TO_NUMBER('%'))
    13 - access("L"."PROFILE_TYPE_ID"="B"."PROFILE_TYPE_ID")
    14 - filter(UPPER("B"."PROFILE_NM") LIKE '%' AND (TO_CHAR(NVL("B"."PROFILE_ID",0)) LIKE '%' OR
    NVL("B"."PROFILE_ID",0)=TO_NUMBER('a')))
    15 - filter("L"."APPLICATION_CD"='BDA')
    16 - access("B"."PROFILE_ID"="F"."AGENCY_ID"(+))
    18 - access("G"."OFFICE_ID"(+)="F"."OFFICE_ID")
    20 - access("B"."PROFILE_ID"="SYS_ALIAS_1"."AGENCY_ID"(+))
    22 - access("H"."USERID"(+)="SYS_ALIAS_1"."RESP_USR_ID")
    23 - access("K"."ENTITY_ID"='PROFILE.'||TO_CHAR("B"."PROFILE_ID"))
    26 - filter("T"."ISS_GRP_ID"=TO_NUMBER('%'))
    27 - access("T"."AGENCY_ID"=:B1)
    Note: cpu costing is off
    57 rows selected.
    Elapsed: 00:00:01.08
    Please help me.
    Aashish S.

    Hello Eric,
    Here is the code:
    SELECT DISTINCT
    K.ATTRIBUTE_VALUE AGENCY_ID,
    B.PROFILE_NM ,
    NVL(G.OFFICE_DESC,'--') OFFICE_DESC,
    f.OFFICE_ID,
    B.PROFILE_ID,
    '%' ROLE,
    '%' LAYOUT,
    case
    WHEN 'flagB' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING')
    WHEN 'flagO' = '%' THEN
    NVL(J.ISS_GRP_DESC,'ORDERING')
    WHEN 'flag' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING/ORDERING')
    else
    NVL(J.ISS_GRP_DESC,' ')
    END ISS_GRP_DESC,
    DECODE(NVL(H.USERID,' ') ,' ','--','<a sbcuid_in=' || H.USERID || ' target=NEW >'||H.FIRSTNAME || ' ' || H.LASTNAME ||
    '( ' || H.USERID || ' )</a>' ) USER_NAME
    from
    PROFILE_PORTAL B ,
    TBL_BDA_AGENCY_RESP_REP C ,
    TBL_BDA_AGENCY_OFFICE F,
    TBL_BDA_OFFICE G,
    USERS_PORTAL H,
    TBL_BDA_USR_ISS_GRP I ,
    TBL_BDA_ISS_GROUP J,
    ATTRIBUTE_VALUES_PORTAL K,
    PROFILE_TYPE_PORTAL L
    WHERE
    B.PROFILE_ID = F.AGENCY_ID (+)
    AND B.PROFILE_ID = C.AGENCY_ID (+)
    AND G.OFFICE_ID (+)= F.OFFICE_ID
    AND H.USERID (+)= C.RESP_USR_ID
    AND C.ISS_GRP_ID = I.ISS_GRP_ID (+)
    AND I.ISS_GRP_ID = J.ISS_GRP_ID(+)
    AND 'PROFILE.'||B.PROFILE_ID = K.ENTITY_ID(+)
    AND K.ATTRIBUTE_VALUE IS NOT NULL
    AND L.PROFILE_TYPE_ID = B.PROFILE_TYPE_ID
    AND L.APPLICATION_CD='BDA'
    AND NOT EXISTS
    (SELECT agency_id
    FROM TBL_BDA_AGENCY_RESP_REP t
    WHERE t.ISS_GRP_ID IN (1)
    AND t.AGENCY_ID = C.AGENCY_ID)
    AND K.ATTRIBUTE_VALUE LIKE '%'
    AND UPPER(B.PROFILE_NM) LIKE UPPER('%')
    AND (to_char(NVL(B.PROFILE_ID,0))
    LIKE '%'
    OR NVL(B.PROFILE_ID,0) IN (1))
    AND NVL(G.OFFICE_ID,0) IN (1)
    AND (to_char(NVL(C.RESP_USR_ID,'0'))
    LIKE '%'
    OR NVL(C.RESP_USR_ID,'0') IN ('%'))
    ORDER BY PROFILE_NM
    This is the query, and it takes some minutes to run in the prod environment.
    From the query plan, I am not able to get any idea for optimization.
    Now, can you tell me which steps I need to follow to make it run faster, and which modifications should be made?
    Thanks.
    Aashish S.

  • Tune the query

    hi,
    can anyone tell me how to tune the query below? I am using Oracle 10.2.0.1.0 and this query takes 5 mins to fetch 10,000 records.
    select
    nvl(w.range_num,v.range_num),
    w.min_capacity,
    w.max_capacity,
    w.price,
    v.min_capacity,
    v.max_capacity,
    v.price
    from TABLE( GET_VECTOR(20,'W') ) w
    full outer join TABLE( GET_VECTOR(20,'V') ) v
    on v.range_num = w.range_num;

    CREATE OR REPLACE FUNCTION GET_VECTOR (arg_key NUMBER, arg_type VARCHAR2)
    RETURN VECTOR_TAB PIPELINED
    IS
        CURSOR cur2(in_key NUMBER) IS
            SELECT rownum, min_capacity, max_capacity, price
            FROM   vol_v
            WHERE  key = in_key
            ORDER  BY min_capacity;
        out_rec PRICE_VEC := PRICE_VEC(NULL, NULL, NULL, NULL);
    BEGIN
        -- only the 'V' branch pipes rows; any other arg_type returns an empty collection
        IF (arg_type = 'V') THEN
            OPEN cur2(arg_key);
            LOOP
                FETCH cur2 INTO
                    out_rec.range_num,
                    out_rec.min_capacity,
                    out_rec.max_capacity,
                    out_rec.price;
                EXIT WHEN cur2%NOTFOUND;
                PIPE ROW(out_rec);
            END LOOP;
            CLOSE cur2;
        END IF;
        RETURN;
    END GET_VECTOR;
    Edited by: user11272074 on Oct 28, 2009 10:22 PM
    Edited by: user11272074 on Oct 28, 2009 10:23 PM
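    A side note on that cursor: ROWNUM is assigned before the ORDER BY is applied, so range_num is not guaranteed to follow min_capacity. A sketch of a cursor that numbers the rows after ordering:
    CURSOR cur2(in_key NUMBER) IS
        SELECT ROW_NUMBER() OVER (ORDER BY min_capacity) AS range_num,
               min_capacity, max_capacity, price
        FROM   vol_v
        WHERE  key = in_key;
    If both the 'W' and 'V' sets ultimately come from views like vol_v, joining those views directly instead of through pipelined functions would also let the optimizer see the whole problem at once.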

  • Tuning the query

    Hi all,
    Please help me in tuning this query:
    select tab1.col1, tab1.col2, tab1.col3, tab2.col1, sum(tab1.col4), avg(tab1.col5), tab2.col2, tab2.col3, tab2.col4, tab2.col5
    from table1 tab1 , table2 tab2
    where tab2.col6 = tab1.col6 and
    to_char(tab2.col3, 'yyyy') = to_char(sysdate, 'yyyy')
    and tab2.col7=tab1.col2
    and tab2.col7 in ('11','12')
    group by tab1.col1, tab1.col2, tab1.col3, tab2.col1, tab2.col2, tab2.col3, tab2.col4, tab2.col5
    <br>
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE          1 K          11456                     
    SORT GROUP BY          1 K     80 K     11456                     
    HASH JOIN          1 K     80 K     11419                     
    TABLE ACCESS FULL     tab2     1 K     49 K     3562                     
    TABLE ACCESS FULL     tab1     401 K     11 M     7847      
    <br>

    Hi,
    If I use
    tab2.col3 >= TRUNC(SYSDATE) AND tab2.col3 < TRUNC(TRUNC(sysdate) + 366)
    Then the total query returns only 15 rows, whereas if I use
    to_char(tab2.col3, 'yyyy') = to_char(sysdate, 'yyyy')...
    then the query returns more than 70,000 rows.
    As you suggested I have created an index on
    tab2 (col3,col7,col6,col1,col2,col4,col5) and also on
    tab1(col2,col1,col3,col4,col5)
    This drastically reduced the cost. The explain plan is as follows:
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE          1 K          84                     
    SORT GROUP BY          1 K     80 K     84                     
    HASH JOIN          1 K     80 K     47                     
    INDEX FULL SCAN     tab2_id3     1 K     49 K     26                     
    INLIST ITERATOR                                        
    INDEX RANGE SCAN     tab1_id5     401 K     11 M     11
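    For what it's worth, the two predicates are not equivalent: TRUNC(SYSDATE) looks forward from today, while TO_CHAR(col3, 'yyyy') covers the whole calendar year, which is why the row counts differ so much. A sketch of an index-friendly predicate that matches the TO_CHAR version (assuming col3 is a DATE):
    AND tab2.col3 >= TRUNC(SYSDATE, 'YYYY')                  -- 1st Jan of the current year
    AND tab2.col3 <  ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), 12)  -- 1st Jan of next year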

  • Tune the query with join and not exists

    This is on 10g R2.
    I have a query similar to :
    Select A.*, C.*
    From A inner join B on A.id = B.id
    Left join C on A.kid = C.kid
    Where not exists
    (select * from D where A.fid = D.fid and A.stat = 2);
    I want to avoid using the NOT EXISTS in the last part of the query.
    I tried the autotrace explain of the above and compared it with other forms, and found no better execution plan. The explain plan showed a long "table access full" operation on B, due to its rather large number of rows, and a long "NESTED LOOPS OUTER" operation. I tried replacing the NOT EXISTS part with another LEFT JOIN in the FROM, but that was worse. So can anyone suggest a better way, or is this the most efficient query I can get?

    Here is the tkprof output
    from baandb.ttfacr200201 a
       inner join baandb.ttfgld106201 c on (a.t$ttyp = c.t$otyp and a.t$ninv = c.t$odoc) and c.t$leac like :"SYS_B_0"
       left join baandb.ttfgld910201 d on c.t$dim2 = d.t$dimx and d.t$dtyp = :"SYS_B_1"
       where not exists
        (select * from baandb.tcisli205201 b
         where a.t$ttyp = b.t$ityp and a.t$ninv = b.t$idoc)
         and (a.t$trec = :"SYS_B_2" or a.t$trec = :"SYS_B_3" and t$tdoc = :"SYS_B_4")
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.01          0          0          0           0
    Fetch        5      1.06      52.11      29925      45943          0          54
    total        7      1.07      52.12      29925      45943          0          54
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 31
    Rows     Row Source Operation
         54  HASH JOIN RIGHT ANTI (cr=45943 pr=29925 pw=0 time=2317005 us)
       9957   INDEX FAST FULL SCAN TCISLI205201$IDX1 (cr=39 pr=0 pw=0 time=54 us)(object id 16639)
      10067   NESTED LOOPS OUTER (cr=45904 pr=29925 pw=0 time=68531937 us)
      10067    HASH JOIN  (cr=35837 pr=29925 pw=0 time=68471521 us)
      10420     TABLE ACCESS FULL TTFACR200201 (cr=2424 pr=0 pw=0 time=20894 us)
      33156     TABLE ACCESS FULL TTFGLD106201 (cr=33413 pr=29925 pw=0 time=117767552 us)
         51    INDEX UNIQUE SCAN TTFGLD910201$IDX1 (cr=10067 pr=0 pw=0 time=53177 us)(object id 20402)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.02          0          0          0           0
    Fetch        6      1.06      52.11      29925      45943          0          55
    total       11      1.08      52.14      29925      45943          0          55

  • Needs to tune the query

    Hi ,
    I have a Query that needs to be tuned.
    The query joins two views with some filter condition.
    While running each individual view query with the filter condition, I can get the results quickly, within a second.
    But when joining the views, with the same filter criteria I used for the individual queries, it takes more than 30 minutes.
    I am struggling to tune this query, which was written using the views.
    Note:
    My problem is that, when checking the explain plan, the unique sort is taking most of the cost.
    Can I reduce the time by giving some optimizer hints to reduce the unique sort cost for the query using the views?
    Thanks & regards,
    Senthur pandi M

    Hi,
    BluShadow wrote:
    957595 wrote:
    Hi ,
    I have a Query that needs to be tuned.
    The query joins two views with some filter condition.
    While running each individual view query with the filter condition, I can get the results quickly, within a second.
    But when joining the views, with the same filter criteria I used for the individual queries, it takes more than 30 minutes.
    I am struggling to tune this query, which was written using the views.
    Note:
    My problem is that, when checking the explain plan, the unique sort is taking most of the cost.
    Cost is not necessarily a good comparison to use. The cost is a figure determined on a per-query basis.
    The problem with cost is that it's a prediction made by the optimizer, rather than the actual measure of query performance. The optimizer often makes mistakes about expected query performance. Ironically, people normally look at query cost when it needs tuning, i.e. when the chance that the optimizer made a mistake is especially high.
    In many internet forums one can see claims that cost estimates are meaningless across different queries. Such claims are unfounded. When calculated correctly, cost is quite meaningful, and in such cases there is nothing wrong with comparing cost not only for different queries, but also for different databases (if they have the same optimizer settings and system stats).
    Can I reduce the time by giving some optimizer hints to reduce the unique sort cost for the query using the views?
    Hints are not the way to improve performance.
    That's an overstatement. The sad truth is that in many cases there is no viable alternative to using hints. Rather than always avoiding hints no matter the cost, it's better to understand how hints affect optimizer behavior, and when it's safe to use them.
    They are great for identifying where the cause of a performance issue is, but shouldn't be used in production code, as it would be like saying that you know better than Oracle how to retrieve the data, not just now, but in the future as more data is added and as data is deleted and replaced with new data etc. By adding hints you are effectively forcing the optimizer to execute the query in a particular way, which may be fast now, but in the future may be worse than what the optimizer can determine itself.
    Hints that force the optimizer to use a specific access path or a specific join method are dangerous, because they only lock in one part of the plan, not the entire plan (e.g. the INDEX hint only ensures that an index is used if possible, but it cannot ensure an INDEX UNIQUE/RANGE SCAN, so you may end up in a situation where the optimizer does an expensive and meaningless INDEX FULL SCAN because of a hint that was intended to force a different, more selective, access method).
    Hints that don't do that, but rather prevent the optimizer from trying to be smart when it's better to keep things simple, are relatively safe.
    So, use the hints to identify where there are issues in the SQL or in the database design, and fix those issues, rather than leave hints in production code.
    As a general rule, sure. Here, however, the problem seems to be obvious: if the views are fast separately, and slow when joined, that suggests that the optimizer doesn't merge them correctly.
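    One cheap experiment along those lines is to ask the optimizer to merge the views and compare the plans (a sketch; the view names, columns and bind are placeholders):
    SELECT /*+ MERGE(v1) MERGE(v2) */ v1.col_a, v2.col_b
    FROM   view1 v1
    JOIN   view2 v2 ON v2.join_key = v1.join_key
    WHERE  v1.filter_col = :some_value;
    If the merged plan pushes the filter into both views and the runtime drops, the next step is to work out why merging didn't happen on its own (e.g. DISTINCT, GROUP BY or ROWNUM inside the views) rather than leaving the hint in place.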
    Best regards,
    Nikolay

  • How cluster will tune the query

    Hi All,
    Please help me understand the autotrace output regarding clusters.
    I created 2 tables
    tbl_master (col1 number, col2 number) with 800 rows AND unique col1.
    and tbl_detail(col1 number, col2 number) with 80000 with repeated col1.
    Now I created replicas of both tables in a cluster on col1, as
    tbl_master1 (col1 number, col2 number) with 800 rows AND unique col1. --selected from tbl_master
    and tbl_detail1(col1 number, col2 number) with 80000 with repeated col1.--selected from tbl_detail
    I traced the output of the non-clustered table join and the clustered table join.
    I set autotrace trace
    and set timing on
    I found that the elapsed time is almost the same even though the COST is lower. Could anybody please explain?
    select a.* from tbl_master a, tbl_detail b where a.col1 =b.col1
    80000 rows selected.
    Elapsed: 00:00:00.67
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=98 Card=80000 Bytes=1040000)
    1 0 HASH JOIN (Cost=98 Card=80000 Bytes=1040000)
    2 1 TABLE ACCESS (FULL) OF 'TBL_MASTER' (TABLE) (Cost=55 Card=800 Bytes=8000)
    3 1 TABLE ACCESS (FULL) OF 'TBL_DETAIL' (TABLE) (Cost=42 Card=80000 Bytes=240000)
    Statistics
    0 recursive calls
    0 db block gets
    5724 consistent gets
    0 physical reads
    0 redo size
    1066812 bytes sent via SQL*Net to client
    59175 bytes received via SQL*Net from client
    5335 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    80000 rows processed
    select a.* from tbl_master1 a, tbl_detail1 b where a.col1 =b.col1
    80000 rows selected.
    Elapsed: 00:00:00.65
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=980 Card=72000 Bytes=1656000)
    1 0 NESTED LOOPS (Cost=980 Card=72000 Bytes=1656000)
    2 1 TABLE ACCESS (FULL) OF 'TBL_MASTER1' (CLUSTER) (Cost=180 Card=800 Bytes=8000)
    3 1 TABLE ACCESS (CLUSTER) OF 'TBL_DETAIL1' (CLUSTER) (Cost=1 Card=90 Bytes=1170)
    Statistics
    0 recursive calls
    0 db block gets
    7749 consistent gets
    0 physical reads
    0 redo size
    1065475 bytes sent via SQL*Net to client
    59175 bytes received via SQL*Net from client
    5335 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    80000 rows processed

    There is a lot of information missing from your question.
    1. What version of Oracle?
    2. Cluster by what? Index or hash? The DDL would have helped.
    and if hash, was the number of hash buckets determined correctly?
    3. Did you use DBMS_STATS to gather relevant optimizer statistics?
    4. What you posted is not an explain plan. In the future use dbms_xplan.display
    http://www.psoug.org/reference/explain_plan.html
    as demonstrated at the above link
    All of that said ... explain plan says nothing about how long a query takes. The cost is a measure of disk i/o and, if done properly, CPU, based on Oracle's assumption of what it might do if it actually ran the query. It might do what EXPLAIN PLAN indicates and it might not.
    Run a real explain plan and it might help understand what is happening but, quite frankly, with such a small number of rows the timing difference might be invisible.
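    For reference, the kind of DDL the reply is asking about would look something like this for an index cluster (the names and SIZE are made up):
    CREATE CLUSTER col1_cluster (col1 NUMBER) SIZE 1024;
    CREATE INDEX col1_cluster_idx ON CLUSTER col1_cluster;
    CREATE TABLE tbl_master1 (col1 NUMBER, col2 NUMBER) CLUSTER col1_cluster (col1);
    CREATE TABLE tbl_detail1 (col1 NUMBER, col2 NUMBER) CLUSTER col1_cluster (col1);
    -- a hash cluster would instead be created with: CREATE CLUSTER col1_cluster (col1 NUMBER) SIZE 1024 HASHKEYS 1000;
    and gathering statistics with DBMS_STATS afterwards matters as much as the cluster type.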

  • Tuning the Query with Distinct Clause

    Hi All,
    I have the below query that returns 28113657 records
    select src_Wc_id, osp_id, src_osp_id
    from osp_complements o1
    where exists (select 1 from wc_crossref wc
                        where wc.src_wc_id = o1.SRC_WC_ID
                        and wc.state = 'CA')
    This query executes within a second...
    But when I include a DISTINCT clause in the select statement, it takes much more time (more than 20 mins).
    I am trying to get it tuned. Please advise me with your knowledge on how to get it done.
    Thanks for your time
    Kannan.

    Retrieving distinct rows requires a sort of all returned rows. 20 - 3 = ~17 mins for sorting 28 mln rows looks like too much. You need to tune your instance in order to speed up the sort operation. The amount of memory dedicated to sorts is controlled by the PGA_AGGREGATE_TARGET parameter. If it's set to 0 (not recommended) then SORT_AREA_SIZE is used. The process of PGA tuning is quite complex and is described in the PGA Memory Management chapter of the Performance Tuning Guide.
    There is a workaround which allows you to bypass the sort operation, but it requires a proper index and proper access by that index. The idea is that rows retrieved via an index are automatically ordered by the indexed columns. If those and only those columns (possibly in the same order as in the index, I don't know) are selected using DISTINCT, then the sort is not actually performed. The rows are already sorted due to access via the index.
    Hope this will help you.
    Regards,
    Dima
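    A sketch of the workaround Dima describes, assuming an index whose leading columns match the DISTINCT list is acceptable (the index name is invented):
    CREATE INDEX osp_comp_distinct_ix
        ON osp_complements (src_wc_id, osp_id, src_osp_id);
    Whether the sort is actually avoided still depends on the optimizer choosing that index for the access path, so the resulting plan needs to be checked.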

  • Help required regarding tunning the query mentioned

    Hi all,
    The query mentioned below takes around 1 hour to complete. It's being used by autoconfig; kindly help me in tuning it.
    Query:
    UPDATE WF_ITEM_ATTRIBUTE_VALUES WIAV SET WIAV.TEXT_VALUE = REPLACE(WIAV.TEXT_VALUE,:B1,:B2)
    WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT WIA.ITEM_TYPE, WIA.NAME
    FROM WF_ITEM_ATTRIBUTES WIA WHERE WIA.TYPE = 'URL'
    AND WIA.ITEM_TYPE = WIAV.ITEM_TYPE
    AND WIA.NAME = WIAV.NAME)
    AND WIAV.TEXT_VALUE IS NOT NULL
    AND INSTR(WIAV.TEXT_VALUE
    , :B1) > 0
    Plan:
    <pre>
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | UPDATE STATEMENT | | 453 | 14496 | 284K|
    | 1 | UPDATE | WF_ITEM_ATTRIBUTE_VALUES | | | |
    |* 2 | FILTER | | | | |
    |* 3 | TABLE ACCESS FULL | WF_ITEM_ATTRIBUTE_VALUES | 453 | 14496 | 282K|
    |* 4 | TABLE ACCESS BY INDEX ROWID| WF_ITEM_ATTRIBUTES | 1 | 33 | 2 |
    |* 5 | INDEX UNIQUE SCAN | WF_ITEM_ATTRIBUTES_PK | 1 | | 1 |
    Predicate Information (identified by operation id):
    2 - filter(("SYS_ALIAS_2"."ITEM_TYPE","SYS_ALIAS_2"."NAME")= (SELECT /*+ */
    "WIA"."ITEM_TYPE","WIA"."NAME" FROM "APPLSYS"."WF_ITEM_ATTRIBUTES" "WIA" WHERE
    "WIA"."NAME"=:B1 AND "WIA"."ITEM_TYPE"=:B2 AND "WIA"."TYPE"='URL'))
    3 - filter("SYS_ALIAS_2"."TEXT_VALUE" IS NOT NULL AND
    INSTR("SYS_ALIAS_2"."TEXT_VALUE",:Z)>0)
    4 - filter("WIA"."TYPE"='URL')
    5 - access("WIA"."ITEM_TYPE"=:B1 AND "WIA"."NAME"=:B2)
    </pre>
    Index:
    <pre>
    OWNER    INDEX_NAME                     COLUMN_POSITION  COLUMN_NAME
    APPLSYS WF_ITEM_ATTRIBUTE_VALUES_PK 1 ITEM_TYPE
    2 ITEM_KEY
    3 NAME
    </pre>
    regds
    Rahul
    Edited by: RahulG on Jan 2, 2009 10:47 PM
    Edited by: RahulG on Jan 2, 2009 10:48 PM

    RahulG wrote:
    Hi all,
    The query mentioned below takes around 1 hour to complete. It's being used by autoconfig; kindly help me in tuning it.
    A few notes:
    1. Your query is using bind variables. If you're already on 9i or later (probably 9iR2 according to plan output), this statement will be subject to bind variable peeking and therefore the output of EXPLAIN PLAN is only of limited use, since the actual execution plan might be different and/or might be based on different cardinality estimates based on the actual bind values peeked at hard parse time. You can use the V$SQL_PLAN view to get the actual execution plan(s) if the statement is still cached in the shared pool, from 10g on DBMS_XPLAN.DISPLAY_CURSOR is available for that purpose.
    2. The execution plan posted suggests that only 453 rows will correspond to the filter criteria (but, as mentioned in 1., this is based on an unknown bind variable value when using EXPLAIN PLAN), and probably therefore the optimizer didn't unnest the subquery but runs this as a recursive FILTER query, potentially once for each row passing the filter criteria on the driving table WF_ITEM_ATTRIBUTE_VALUES. Depending on the actual number of rows this might be inefficient, and unnesting the subquery and turning it into a join might be more appropriate. This might be accomplished e.g. by providing more representative statistics to the optimizer (are the statistics up-to-date?).
    Although you can't change the SQL you could try this manually by using the UNNEST hint to see if it makes any difference in the execution plan (and run time):
    WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT /*+ UNNEST */ WIA.ITEM_TYPE, WIA.NAME
    ...
    3. The composite index WF_ITEM_ATTRIBUTE_VALUES_PK can only be used on the first column ITEM_TYPE for effective index access; the NAME column would have to be used as a filter on all the index leaf blocks found using a range scan on ITEM_TYPE. This might be quite inefficient, and/or might lead to a lot of rows/blocks that need to be visited in the table using this index access path.
    4. You could try to trace the execution by enabling extended SQL trace, e.g. using the (undocumented) DBMS_SUPPORT package in 9i. Running the "tkprof" utility on the generated trace file tells you the actual row source cardinalities (which can then be compared to the estimates of the optimizer) and - if the "waits" have been enabled - what your statement has waited for most.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
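    For completeness, from 10g onwards the documented way to get the extended SQL trace mentioned in 4. is DBMS_MONITOR (a sketch; the sid/serial# values are whatever V$SESSION shows for the session running the statement):
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
    -- run the UPDATE, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
    The resulting trace file is then processed with tkprof as described above.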

  • Please tune the query

    Hi folks, please tune/rewrite my query:
    SELECT
    bu_code,bu_type,cust_no ,cur_code,sales_date,receipt_no,till_no,card_no,invoice_total,amount_of_goods,(invoice_total - amount_of_goods) AS amount_of_non_goods,
    pay_in_advance AS amounts_of_advance_pay,amount_of_discounts, Error_flag
    FROM
    (select
                distinct  'STO' AS BU_TYPE,
    workinv.tot_cust_no AS  cust_no,
    workinv.comp_code as comp_code,
    workinv.cash_no as till_no,
    workinv.receipt_no  as receipt_no,
    workinv.sales_date as sales_date,
    workinv.cur_code as cur_code,
    invhead.acct_usr_no as card_no,
    invhead.inv_no as inv_no,
    invhead.sto_no as bu_code, (SELECT MAX (DECODE (e.sum_code, 'TOTAL', e.amount_incl))
                                              FROM invoice_sums_t e
                                              WHERE e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS invoice_total,
                                             (SELECT MAX (DECODE (e.sum_code, 'PIA', e.amount_incl))
                                               FROM invoice_sums_t e
                                               WHERE e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS pay_in_advance,
                                             (SELECT SUM(e.amount_incl)
                                              FROM invoice_sums_t e
                                              WHERE E.SUM_CODE LIKE 'GOODS0%'
                                              AND e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS amount_of_goods,
                                             (SELECT SUM(e.amount_incl)
                                              FROM invoice_sums_t e
                                             WHERE E.SUM_CODE LIKE 'DISCOUNT0%'
                                              AND e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS amount_of_discounts ,
                                      CASE workinv.error_flag WHEN 'H' THEN 'Y' ELSE 'N' END
                                      AS Error_flag,
                                      WORKINV.ERROR_FLAG AS invoice_on_hold
    FROM  work_invoice_info_t workinv,
              invoice_header_t invhead,
              invoice_sums_t invsums,
              i_invoice_info_t_log invlog,
              o_pam_document_header_log_t opdhlt
    WHERE  invhead.comp_code= workinv.comp_code
    AND TRIM(workinv.Tot_cust_no) =TRIM(invlog.tot_cust_no)
    AND TRIM (workinv.sto_no) = invhead.sto_no
    AND TRIM (workinv.sales_date) =TO_CHAR (invhead.sales_date, 'YYMMDD')
    AND TRIM (workinv.cash_no) =TO_NUMBER (TRIM (invhead.cash_no))
    AND TRIM (workinv.receipt_no) =TO_NUMBER (TRIM (invhead.receipt_no))
    AND invhead.comp_code = invsums.comp_code
    AND invhead.inv_no = invsums.inv_no
    AND TRIM(workinv.sto_no) = invlog.sto_no
    AND TRIM(workinv.receipt_no) = invlog.receipt_no
    AND TRIM(workinv.cash_no) = invlog.cash_no
    AND TRIM(workinv.sales_date) = invlog.sales_date)
    Dear folks, I am debugging the code step by step.
    I have taken the inline query and am selecting 1 as the column, joining the same tables used in the above query:
    select
       1
    FROM  work_invoice_info_t workinv,
              invoice_header_t invhead,
              invoice_sums_t invsums,
              i_invoice_info_t_log invlog,
              o_pam_document_header_log_t opdhlt
    WHERE  invhead.comp_code= workinv.comp_code
    AND TRIM(workinv.Tot_cust_no) =TRIM(invlog.tot_cust_no) -- if I fire only this much, the output comes in 2 sec
    AND TRIM (workinv.sto_no) = invhead.sto_no -- if I add the below 2 columns then it takes a lot of time (half an hour)
    AND TRIM (workinv.sales_date) =TO_CHAR (invhead.sales_date, 'YYMMDD')
    Hence, should I create indexes on both the 'sto_no' and 'sales_date' columns?
    Please shed some light on this.

    newbie wrote:
    ..so how can i tune please share ideas
    Tuning is a vast area. Many people spend their entire working lives tuning other people's code. Those people make fine livings from their work. They couldn't do that if it was merely a matter of squinting at some shonky piece of SQL and saying, "Ah, that's the badger!" No, tuning requires a great deal of context and additional information: explain plans, statistics, metadata, right down to what version of the database you're using.
    Now, you have already been provided with links to helpful threads: these explain how you can proceed in collecting this information and investigating your problem. The sooner you start reading those links, the sooner you can start diagnosing the poor performance.
    If you still can't crack it, by all means post here again. But don't bother until you have gathered all the information you need to post so that we can understand your situation.
    Cheers, APC
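    For reference, the kind of indexes that last question is asking about would have to match the expressions used in the joins, i.e. function-based indexes (a sketch; the index names are invented and this assumes the TRIM/TO_CHAR calls stay in the query):
    CREATE INDEX invhead_sto_sdate_fbi
        ON invoice_header_t (sto_no, TO_CHAR(sales_date, 'YYMMDD'));
    CREATE INDEX workinv_sto_sdate_fbi
        ON work_invoice_info_t (TRIM(sto_no), TRIM(sales_date));
    A plain index cannot be used on a side of the join where the column is wrapped in TRIM or TO_CHAR, which is exactly the kind of detail the diagnostics APC mentions would confirm.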

  • How to tune the query..."sum" operation is taking too long

    How do I reduce the execution time for the query below? The "sum" operation is taking too long.
    SELECT
    B.DP_AVPS_STATUS,
    SUM(DP_AD_CURR_BAL*D.ISIN_CLOSE_PRICE) Qty,
    B.DP_NET_WORTH,
    B.CMN_DP_FLAG,
    B.RESTRAIN_TYPE ,
    B.LETT_GENR_DATE,
    E.MAX_NET_WORTH,
    E.MIN_NET_WORTH
    FROM DPMADV0 A, PABRNCHDTLP0 B, PABANKCCYP0 D,PADPNETWORTHDTLP0 E,
    (SELECT CID_NUMB FROM CFCUSTMASTD0
    WHERE SUB_TYPE NOT IN (1,2,5,6,7,28,29,30,31,40,41,48,49,50,57,83)) C
    WHERE A.DP_AD_BR_NBR = B.BRNCH_NUMB
    AND A.DP_AD_CCY_CDE = D.CCY_ALPHA_CODE
    AND A.DP_AD_ACCT_NBR = C.CID_NUMB
    AND E.DP_ACCOUNT_TYPE = B.DP_ACCOUNT_TYPE
    AND SUBSTR(B.BRNCH_NUMB,1,3) like SUBSTR(:hvBrnchNumb,1,3)
    AND ((B.DP_TYPE IN (2) AND B.DP_ACCOUNT_TYPE IN (10, 11)) or
    (B.DP_TYPE IN (3) and B.DP_ACCOUNT_TYPE=11 ) )
    AND B.DEL_FLAG = 'N'
    AND B.DP_STATUS = 'A'
    AND D.ISIN_STATUS = 'A'
    GROUP BY B.DP_NET_WORTH,B.RESTRAIN_TYPE,B.DP_AVPS_STATUS,E.MAX_NET_WORTH,
    E.MIN_NET_WORTH,B.CMN_DP_FLAG,B.LETT_GENR_DATE;

    Hi,
    please produce a plan with rowsource statistics (if not sure how, follow the instructions in http://savvinov.com/2012/09/24/a-sqlplus-script-for-diagnosing-poor-sql-plans/) and post it using code tags to preserve formatting.
    Best regards,
      Nikolay
    P.S. I also suggest that you work on your open threads:
    Handle:      946903 
    Status Level:      Newbie
    Registered:      Jul 16, 2012
    Total Posts:      11
    Total Questions:      5 (5 unresolved)
