Help required regarding tuning the query mentioned

Hi all,
The query below takes around 1 hour to complete. It is being used by AutoConfig; kindly help me in tuning it.
Query:
UPDATE WF_ITEM_ATTRIBUTE_VALUES WIAV SET WIAV.TEXT_VALUE = REPLACE(WIAV.TEXT_VALUE,:B1,:B2)
WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT WIA.ITEM_TYPE, WIA.NAME
FROM WF_ITEM_ATTRIBUTES WIA WHERE WIA.TYPE = 'URL'
AND WIA.ITEM_TYPE = WIAV.ITEM_TYPE
AND WIA.NAME = WIAV.NAME)
AND WIAV.TEXT_VALUE IS NOT NULL
AND INSTR(WIAV.TEXT_VALUE, :B1) > 0
Plan:
<pre>
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | UPDATE STATEMENT | | 453 | 14496 | 284K|
| 1 | UPDATE | WF_ITEM_ATTRIBUTE_VALUES | | | |
|* 2 | FILTER | | | | |
|* 3 | TABLE ACCESS FULL | WF_ITEM_ATTRIBUTE_VALUES | 453 | 14496 | 282K|
|* 4 | TABLE ACCESS BY INDEX ROWID| WF_ITEM_ATTRIBUTES | 1 | 33 | 2 |
|* 5 | INDEX UNIQUE SCAN | WF_ITEM_ATTRIBUTES_PK | 1 | | 1 |
Predicate Information (identified by operation id):
2 - filter(("SYS_ALIAS_2"."ITEM_TYPE","SYS_ALIAS_2"."NAME")= (SELECT /*+ */
"WIA"."ITEM_TYPE","WIA"."NAME" FROM "APPLSYS"."WF_ITEM_ATTRIBUTES" "WIA" WHERE
"WIA"."NAME"=:B1 AND "WIA"."ITEM_TYPE"=:B2 AND "WIA"."TYPE"='URL'))
3 - filter("SYS_ALIAS_2"."TEXT_VALUE" IS NOT NULL AND
INSTR("SYS_ALIAS_2"."TEXT_VALUE",:Z)>0)
4 - filter("WIA"."TYPE"='URL')
5 - access("WIA"."ITEM_TYPE"=:B1 AND "WIA"."NAME"=:B2)
</pre>
Index:
<pre>
OWNER    INDEX_NAME                    COLUMN_POSITION  COLUMN_NAME
APPLSYS  WF_ITEM_ATTRIBUTE_VALUES_PK   1                ITEM_TYPE
                                       2                ITEM_KEY
                                       3                NAME
</pre>
Regards,
Rahul
Edited by: RahulG on Jan 2, 2009 10:47 PM
Edited by: RahulG on Jan 2, 2009 10:48 PM

RahulG wrote:
Hi all,
The query below takes around 1 hour to complete. It is being used by AutoConfig; kindly help me in tuning it.
A few notes:
1. Your query is using bind variables. If you're already on 9i or later (probably 9iR2 according to plan output), this statement will be subject to bind variable peeking and therefore the output of EXPLAIN PLAN is only of limited use, since the actual execution plan might be different and/or might be based on different cardinality estimates based on the actual bind values peeked at hard parse time. You can use the V$SQL_PLAN view to get the actual execution plan(s) if the statement is still cached in the shared pool, from 10g on DBMS_XPLAN.DISPLAY_CURSOR is available for that purpose.
2. The execution plan posted suggests that only 453 rows will correspond to the filter criteria (but, as mentioned in 1., this is based on an unknown bind variable value when using EXPLAIN PLAN), and probably therefore the optimizer didn't unnest the subquery but runs this as a recursive FILTER query, potentially once for each row passing the filter criteria on the driving table WF_ITEM_ATTRIBUTE_VALUES. Depending on the actual number of rows this might be inefficient, and unnesting the subquery and turning it into a join might be more appropriate. This might be accomplished e.g. by providing more representative statistics to the optimizer (are the statistics up-to-date?).
Although you can't change the SQL you could try this manually by using the UNNEST hint to see if it makes any difference in the execution plan (and run time):
WHERE (WIAV.ITEM_TYPE, WIAV.NAME) = (SELECT /*+ UNNEST */ WIA.ITEM_TYPE, WIA.NAME
...
3. The composite index WF_ITEM_ATTRIBUTE_VALUES_PK can only be used on the first column ITEM_TYPE for effective index access; the NAME column would have to be used as a filter on all index leaf blocks that would be found using a range scan on ITEM_TYPE. This might be quite inefficient, and/or might lead to a lot of rows/blocks that need to be visited in the table using this index access path.
4. You could try to trace the execution by enabling extended SQL trace, e.g. using the (undocumented) DBMS_SUPPORT package in 9i. Running the "tkprof" utility on the generated trace file tells you the actual row source cardinalities (which can then be compared to the estimates of the optimizer) and - if the "waits" have been enabled - what your statement has waited for most.
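For reference, a minimal sketch of how the actual plan (point 1) and an extended SQL trace (point 4) could be obtained. The &sql_id, &sid and &serial values are placeholders you would look up in V$SQL and V$SESSION, and DBMS_SUPPORT is assumed to have been installed (it is not created by default):
-- Actual execution plan from the shared pool (10g onwards; on 9i query V$SQL_PLAN directly):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
-- Extended SQL trace (sid, serial#, waits, binds) for the session running the UPDATE:
EXEC DBMS_SUPPORT.START_TRACE_IN_SESSION(&sid, &serial, TRUE, TRUE)
-- ... let the statement run, then:
EXEC DBMS_SUPPORT.STOP_TRACE_IN_SESSION(&sid, &serial)
-- and process the resulting trace file with: tkprof <tracefile> <outputfile> sys=no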
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Similar Messages

  • Help needed to tune the Query:Statistics added

    Can some DBA please help me tune this query:
    SELECT DISTINCT K.ATTRIBUTE_VALUE AGENCY_ID,B.PROFILE_NM ,NVL(G.OFFICE_DESC,'--') OFFICE_DESC,f.OFFICE_ID,B.PROFILE_ID,'%' ROLE,'%' LAYOUT,
    CASE
    WHEN 'flagB' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING')
    WHEN 'flagO' = '%' THEN
    NVL(J.ISS_GRP_DESC,'ORDERING')
    WHEN 'flag' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING/ORDERING')
    ELSE
    NVL(J.ISS_GRP_DESC,' ')
    END ISS_GRP_DESC,
    DECODE(NVL(H.USERID,' ') ,' ','--','<a sbcuid_in=' || H.USERID || ' target=NEW >'||H.FIRSTNAME || ' ' || H.LASTNAME || '( ' || H.USERID || ' )</a>' ) USER_NAME
    FROM
    PROFILE_PORTAL B ,
    TBL_BDA_AGENCY_RESP_REP C ,
    TBL_BDA_AGENCY_OFFICE F,
    TBL_BDA_OFFICE G,
    USERS_PORTAL H,
    TBL_BDA_USR_ISS_GRP I ,
    TBL_BDA_ISS_GROUP J,
    ATTRIBUTE_VALUES_PORTAL K,
    PROFILE_TYPE_PORTAL L
    WHERE
    B.PROFILE_ID = F.AGENCY_ID (+)
    AND B.PROFILE_ID = C.AGENCY_ID (+)
    AND G.OFFICE_ID (+)= F.OFFICE_ID
    AND H.USERID (+)= C.RESP_USR_ID
    AND C.ISS_GRP_ID = I.ISS_GRP_ID (+)
    AND I.ISS_GRP_ID = J.ISS_GRP_ID(+)
    AND 'PROFILE.'||B.PROFILE_ID = K.ENTITY_ID(+)
    AND K.ATTRIBUTE_VALUE IS NOT NULL
    AND L.PROFILE_TYPE_ID = B.PROFILE_TYPE_ID
    AND L.APPLICATION_CD='BDA'
    AND NOT EXISTS (SELECT agency_id
    FROM TBL_BDA_AGENCY_RESP_REP t
    WHERE t.ISS_GRP_ID IN ('%')
    AND t.AGENCY_ID = C.AGENCY_ID)
    AND K.ATTRIBUTE_VALUE LIKE '%'
    AND UPPER(B.PROFILE_NM) LIKE UPPER('%')
    AND (to_char(NVL(B.PROFILE_ID,0)) LIKE '%' OR NVL(B.PROFILE_ID,0) IN ('a'))
    AND NVL(G.OFFICE_ID,0) IN ('%')
    AND (to_char(NVL(C.RESP_USR_ID,'0')) LIKE '%' OR NVL(C.RESP_USR_ID,'0') IN ('k'))
    ORDER BY PROFILE_NM
    The row counts of these tables are as follows:
    PROFILE_PORTAL -- 2392
    TBL_BDA_AGENCY_RESP_REP 3508
    TBL_BDA_AGENCY_OFFICE 2151
    TBL_BDA_OFFICE 3
    USERS_PORTAL 270500
    TBL_BDA_USR_ISS_GRP 234
    TBL_BDA_ISS_GROUP 2
    ATTRIBUTE_VALUES_PORTAL 2790
    PROFILE_TYPE_PORTAL 3
    EXPLAIN PLAN has given this output to me:
    SQL> select * from table(dbms_xplan.display) dual;
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
    | 0 | SELECT STATEMENT | | 807 | 102K| | 2533 |
    | 1 | SORT UNIQUE | | 807 | 102K| 232K| 82 |
    |* 2 | FILTER | | | | | |
    |* 3 | HASH JOIN OUTER | | 807 | 102K| | 52 |
    |* 4 | HASH JOIN OUTER | | 807 | 95226 | | 40 |
    |* 5 | TABLE ACCESS BY INDEX ROWID | ATTRIBUTE_VALUES | 1 | 23 | | 2 |
    | 6 | NESTED LOOPS | | 7 | 805 | | 37 |
    | 7 | NESTED LOOPS OUTER | | 6 | 552 | | 25 |
    |* 8 | FILTER | | | | | |
    | 9 | NESTED LOOPS OUTER | | | | | |
    |* 10 | FILTER | | | | | |
    | 11 | NESTED LOOPS OUTER | | | | | |
    | 12 | NESTED LOOPS OUTER | | 3 | 141 | | 10 |
    |* 13 | HASH JOIN | | 3 | 120 | | 7 |
    |* 14 | TABLE ACCESS FULL | PROFILE | 6 | 198 | | 4 |
    |* 15 | TABLE ACCESS FULL | PROFILE_TYPE | 1 | 7 | | 2 |
    |* 16 | INDEX RANGE SCAN | SYS_C0019777 | 1 | 7 | | 1 |
    | 17 | TABLE ACCESS BY INDEX ROWID| TBL_BDA_OFFICE | 1 | 10 | | 1 |
    |* 18 | INDEX UNIQUE SCAN | SYS_C0019800 | 1 | | | |
    | 19 | TABLE ACCESS BY INDEX ROWID | TBL_BDA_AGENCY_RESP_REP | 2 | 26 | | 2 |
    |* 20 | INDEX RANGE SCAN | IDX_AGECYRESP_AGNCYID | 2 | | | 1 |
    | 21 | TABLE ACCESS BY INDEX ROWID | USER_ | 1 | 22 | | 1 |
    |* 22 | INDEX UNIQUE SCAN | USER_PK | 1 | | | |
    |* 23 | INDEX RANGE SCAN | IDX_ATTVAL_ENTATTID | 1 | | | 1 |
    | 24 | TABLE ACCESS FULL | TBL_BDA_USR_ISS_GRP | 234 | 702 | | 2 |
    | 25 | TABLE ACCESS FULL | TBL_BDA_ISS_GROUP | 2 | 24 | | 2 |
    |* 26 | TABLE ACCESS BY INDEX ROWID | TBL_BDA_AGENCY_RESP_REP | 1 | 7 | | 3 |
    |* 27 | INDEX RANGE SCAN | IDX_AGECYRESP_AGNCYID | 2 | | | 1 |
    Predicate Information (identified by operation id):
    2 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "TBL_BDA_AGENCY_RESP_REP" "T" WHERE "T"."AGENCY_ID"=:B1
    AND "T"."ISS_GRP_ID"=TO_NUMBER('%')))
    3 - access("I"."ISS_GRP_ID"="J"."ISS_GRP_ID"(+))
    4 - access("SYS_ALIAS_1"."ISS_GRP_ID"="I"."ISS_GRP_ID"(+))
    5 - filter("K"."ATTRIBUTE_VALUE" IS NOT NULL AND "K"."ATTRIBUTE_VALUE" LIKE '%')
    8 - filter(NVL("SYS_ALIAS_1"."RESP_USR_ID",'0') LIKE '%' OR NVL("SYS_ALIAS_1"."RESP_USR_ID",'0')='k')
    10 - filter(NVL("G"."OFFICE_ID",0)=TO_NUMBER('%'))
    13 - access("L"."PROFILE_TYPE_ID"="B"."PROFILE_TYPE_ID")
    14 - filter(UPPER("B"."PROFILE_NM") LIKE '%' AND (TO_CHAR(NVL("B"."PROFILE_ID",0)) LIKE '%' OR
    NVL("B"."PROFILE_ID",0)=TO_NUMBER('a')))
    15 - filter("L"."APPLICATION_CD"='BDA')
    16 - access("B"."PROFILE_ID"="F"."AGENCY_ID"(+))
    18 - access("G"."OFFICE_ID"(+)="F"."OFFICE_ID")
    20 - access("B"."PROFILE_ID"="SYS_ALIAS_1"."AGENCY_ID"(+))
    22 - access("H"."USERID"(+)="SYS_ALIAS_1"."RESP_USR_ID")
    23 - access("K"."ENTITY_ID"='PROFILE.'||TO_CHAR("B"."PROFILE_ID"))
    26 - filter("T"."ISS_GRP_ID"=TO_NUMBER('%'))
    27 - access("T"."AGENCY_ID"=:B1)
    Note: cpu costing is off
    57 rows selected.
    Elapsed: 00:00:01.08
    Please help me.
    Aashish S.

    Hello Eric,
    Here is the code:
    SELECT DISTINCT
    K.ATTRIBUTE_VALUE AGENCY_ID,
    B.PROFILE_NM ,
    NVL(G.OFFICE_DESC,'--') OFFICE_DESC,
    f.OFFICE_ID,
    B.PROFILE_ID,
    '%' ROLE,
    '%' LAYOUT,
    case
    WHEN 'flagB' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING')
    WHEN 'flagO' = '%' THEN
    NVL(J.ISS_GRP_DESC,'ORDERING')
    WHEN 'flag' = '%' THEN
    NVL(J.ISS_GRP_DESC,'BILLING/ORDERING')
    else
    NVL(J.ISS_GRP_DESC,' ')
    END ISS_GRP_DESC,
    DECODE(NVL(H.USERID,' ') ,' ','--','<a sbcuid_in=' || H.USERID || ' target=NEW >'||H.FIRSTNAME || ' ' || H.LASTNAME ||
    '( ' || H.USERID || ' )</a>' ) USER_NAME
    from
    PROFILE_PORTAL B ,
    TBL_BDA_AGENCY_RESP_REP C ,
    TBL_BDA_AGENCY_OFFICE F,
    TBL_BDA_OFFICE G,
    USERS_PORTAL H,
    TBL_BDA_USR_ISS_GRP I ,
    TBL_BDA_ISS_GROUP J,
    ATTRIBUTE_VALUES_PORTAL K,
    PROFILE_TYPE_PORTAL L
    WHERE
    B.PROFILE_ID = F.AGENCY_ID (+)
    AND B.PROFILE_ID = C.AGENCY_ID (+)
    AND G.OFFICE_ID (+)= F.OFFICE_ID
    AND H.USERID (+)= C.RESP_USR_ID
    AND C.ISS_GRP_ID = I.ISS_GRP_ID (+)
    AND I.ISS_GRP_ID = J.ISS_GRP_ID(+)
    AND 'PROFILE.'||B.PROFILE_ID = K.ENTITY_ID(+)
    AND K.ATTRIBUTE_VALUE IS NOT NULL
    AND L.PROFILE_TYPE_ID = B.PROFILE_TYPE_ID
    AND L.APPLICATION_CD='BDA'
    AND NOT EXISTS
    (SELECT agency_id
    FROM TBL_BDA_AGENCY_RESP_REP t
    WHERE t.ISS_GRP_ID IN (1)
    AND t.AGENCY_ID = C.AGENCY_ID)
    AND K.ATTRIBUTE_VALUE LIKE '%'
    AND UPPER(B.PROFILE_NM) LIKE UPPER('%')
    AND (to_char(NVL(B.PROFILE_ID,0))
    LIKE '%'
    OR NVL(B.PROFILE_ID,0) IN (1))
    AND NVL(G.OFFICE_ID,0) IN (1)
    AND (to_char(NVL(C.RESP_USR_ID,'0'))
    LIKE '%'
    OR NVL(C.RESP_USR_ID,'0') IN ('%'))
    ORDER BY PROFILE_NM
    This is the query, and it takes a few minutes to run in the production environment.
    From the query plan, I am not able to get any idea for optimization.
    Can you tell me which steps I need to follow to make it run faster, and which modifications should be made?
    Thanks.
    Aashish S.

  • Help required in optimizing the query response time

    Hi,
    I am working on a application which uses a jdbc thin client. My requirement is to select all the table rows in one table and use the column values to select data in another table in another database.
    The first table can have maximum of 6 million rows but the second table rows will be around 9000.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate the result set and query the second table, each query takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range.
    for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
    My first query returns a result which then will be used to select using the following query
    select column1 from my_table where start_range < my_value and end_range> my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a preparedStatement for the second query loop.
    Can some one suggest me how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets from Java. There are thousands of samples available on the net.
    I have written sample DB code for the same interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    /
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    /
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                           (SELECT myval_list (ROWNUM + 1) myvalue
                              FROM TABLE (myval_list)) b
                     WHERE a.object_id < b.myvalue)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                   || x.myvalue);
          END LOOP;
       END;
    END mypkg1;
    /
    Testing the code above. Make sure DBMS_OUTPUT is ON (SET SERVEROUTPUT ON).
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    /
    Vishal V.
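    As a further thought on the range lookup itself: instead of issuing it once per driving row from Java, the two steps can sometimes be collapsed into a single set-based statement so the per-row round trip disappears. A rough sketch, reusing my_table, column1, start_range and end_range from the question; driving_table(my_value) and the index name are hypothetical:
    -- Hypothetical composite index to support the range probe:
    CREATE INDEX my_table_range_ix ON my_table (start_range, end_range, column1);
    -- One join instead of one query per driving row:
    SELECT d.my_value, t.column1
      FROM driving_table d
      JOIN my_table t
        ON t.start_range < d.my_value
       AND t.end_range   > d.my_value;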

  • Urgent help required in tuning this query

    I have a table ACCOUNT SPONSOR HOMESTORE (ASH) with more than 30 million rows.
    My daily batch needs to update or insert into this table from a temporary table TEMP_HSTRALCT. The data for the temporary table is populated by the query below, which selects from the tables TRANSACTION_POINTS and REDEMPTIONS. Both of these tables are partitioned on a date-time column; the query runs daily and is running for hours.
    Can anyone please help me tune this query?
    INSERT INTO temp_hstralct
    (tmp_n_collector_account_num, tmp_v_location_id,
    tmp_v_sponsor_id, tmp_v_source_file_name,
    tmp_n_psc_insert_ind, tmp_n_psc_update_ind,
    tmp_n_transaction_amount, tmp_n_transaction_points,
    tmp_n_acc_insert_ind, tmp_n_ash_insert_ind,
    tmp_n_col_insert_ind, tmp_n_check_digit,
    tmp_n_collector_issue_num, tmp_n_csl_insert_ind,
    tmp_v_offer_code, tmp_n_psa_insert_ind)
    SELECT DISTINCT trp_n_collector_account_num account_num,
    trp_v_location_id location_id,
    trp_v_sponsor_id sponsor_id,
    trp_c_creation_user batch_id, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0
    FROM transaction_points, ACCOUNT, locations_master,homestores
    WHERE hsr_v_accrual_allowed = 'Y'
    AND trp_n_collector_account_num = ACCOUNT.acc_n_account_num(+)
    AND ( ( ( ACCOUNT.acc_v_account_type = 'C'
    OR ACCOUNT.acc_v_account_type IS NULL
    AND hsr_v_b2c_accounts = 'Y'
    OR ( ACCOUNT.acc_v_account_type = 'B'
    AND hsr_v_nfb_accounts = 'Y'
    OR ( ACCOUNT.acc_v_account_type = 'H'
    AND hsr_v_hybrid_accounts = 'Y'
    AND trp_d_creation_date_time BETWEEN SYSDATE-3
    AND SYSDATE
    AND trp_v_sponsor_id = 'JSAINSBURY'
    AND trp_v_location_id =
    locations_master.lnm_v_location_id
    AND locations_master.lnm_v_partner_id = 'JSAINSBURY'
    AND ( ( ( (INSTR
    (hsr_v_store_status,
    locations_master.lnm_c_location_status
    ) > 0
    AND (INSTR
    (hsr_v_store_type,
    locations_master.lnm_c_location_type
    ) > 0
    AND hsr_v_homestore_assignment = 'ST'
    OR ( ( locations_master.lnm_c_homestore_ind =
    'Y'
    AND (INSTR
    (hsr_v_store_status,
    locations_master.lnm_c_location_status
    ) > 0
    AND hsr_v_homestore_assignment = 'HS'
    UNION ALL
    SELECT DISTINCT rdm_n_collector_account_num account_num,
    rdm_v_location_id location_id,
    rom_v_supplier_id sponsor_id,
    rdm_c_creation_user batch_id, 0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0
    FROM redemption_details,
    reward_offer_master,
    ACCOUNT,
    locations_master,
    HOMESTORES
    WHERE hsr_v_redemption_allowed = 'Y'
    AND rdm_n_collector_account_num = ACCOUNT.acc_n_account_num(+)
    AND ( ( ( ACCOUNT.acc_v_account_type = 'C'
    OR ACCOUNT.acc_v_account_type IS NULL
    AND hsr_v_b2c_accounts = 'Y'
    OR ( ACCOUNT.acc_v_account_type = 'B'
    AND hsr_v_nfb_accounts = 'Y'
    OR ( ACCOUNT.acc_v_account_type = 'H'
    AND hsr_v_hybrid_accounts = 'Y'
    AND rdm_d_creation_date_time BETWEEN SYSDATE-3
    AND SYSDATE
    AND rom_v_reward_offer_id = rdm_v_reward_id
    AND rom_v_supplier_id = 'JSAINSBURY'
    AND rdm_v_location_id =
    locations_master.lnm_v_location_id
    AND locations_master.lnm_v_partner_id ='JSAINSBURY'
    AND ( ( ( (INSTR
    (hsr_v_store_status,
    locations_master.lnm_c_location_status
    ) > 0
    AND (INSTR
    (hsr_v_store_type,
    locations_master.lnm_c_location_type
    ) > 0
    AND hsr_v_homestore_assignment = 'ST'
    OR ( ( locations_master.lnm_c_homestore_ind =
    'Y'
    AND (INSTR
    (hsr_v_store_status,
    locations_master.lnm_c_location_status
    ) > 0
    AND hsr_v_homestore_assignment = 'HS'
    );

    I have copied the explain plan as it is; please try pasting it into a text editor. Can you let me know whether a parallel hint on this will speed up the select queries?
              Plan
              INSERT STATEMENT CHOOSECost: 410,815 Bytes: 2,798,394 Cardinality: 15,395                                                        
                   32 UNION-ALL                                                   
                        15 SORT UNIQUE Cost: 177,626 Bytes: 2,105,592 Cardinality: 11,896                                              
                             14 FILTER                                         
                                  13 HASH JOIN Cost: 177,312 Bytes: 2,105,592 Cardinality: 11,896                                    
                                       2 TABLE ACCESS BY INDEX ROWID LMHOLTP.LOCATIONS_MASTER Cost: 37 Bytes: 23,184 Cardinality: 966                               
                                            1 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_LOCATIONS_MASTER_3 Cost: 3 Cardinality: 1                          
                                       12 FILTER                               
                                            11 HASH JOIN OUTER                          
                                                 8 MERGE JOIN CARTESIAN Cost: 155,948 Bytes: 702,656,660 Cardinality: 4,845,908                     
                                                      3 TABLE ACCESS FULL LMHOLTP.HOMESTORES Cost: 2 Bytes: 104 Cardinality: 1                
                                                      7 BUFFER SORT Cost: 155,946 Bytes: 198,682,228 Cardinality: 4,845,908                
                                                           6 PARTITION RANGE ITERATOR Partition #: 12           
                                                                5 TABLE ACCESS BY LOCAL INDEX ROWID LMHOLTP.TRANSACTION_POINTS Cost: 155,946 Bytes: 198,682,228 Cardinality: 4,845,908 Partition #: 12      
                                                                     4 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_TRANSACTION_POINTS_1 Cost: 24,880 Cardinality: 6,978,108 Partition #: 12
                                                 10 PARTITION RANGE ALL Partition #: 15 Partitions accessed #1 - #5                    
                                                      9 TABLE ACCESS FULL LMHOLTP.ACCOUNT Cost: 6,928 Bytes: 68,495,680 Cardinality: 8,561,960 Partition #: 15 Partitions accessed #1 - #5               
                        31 SORT UNIQUE Cost: 233,189 Bytes: 692,802 Cardinality: 3,499                                              
                             30 FILTER                                         
                                  29 FILTER                                    
                                       28 NESTED LOOPS OUTER                               
                                            24 HASH JOIN Cost: 226,088 Bytes: 664,810 Cardinality: 3,499                          
                                                 16 TABLE ACCESS FULL LMHOLTP.REWARD_OFFER_MASTER Cost: 8 Bytes: 2,280 Cardinality: 114                     
                                                 23 HASH JOIN Cost: 226,079 Bytes: 8,327,280 Cardinality: 48,984                     
                                                      20 TABLE ACCESS BY INDEX ROWID LMHOLTP.LOCATIONS_MASTER Cost: 37 Bytes: 432 Cardinality: 18                
                                                           19 NESTED LOOPS Cost: 39 Bytes: 2,304 Cardinality: 18           
                                                                17 TABLE ACCESS FULL LMHOLTP.HOMESTORES Cost: 2 Bytes: 104 Cardinality: 1      
                                                                18 INDEX RANGE SCAN NON-UNIQUE LMHOLTP.IX_LOCATIONS_MASTER_3 Cost: 3 Cardinality: 966      
                                                      22 PARTITION RANGE ITERATOR Partition #: 28                
                                                           21 TABLE ACCESS FULL LMHOLTP.REDEMPTION_DETAILS Cost: 226,019 Bytes: 261,636,270 Cardinality: 6,229,435 Partition #: 28           
                                            27 PARTITION RANGE ITERATOR Partition #: 30                          
                                                 26 TABLE ACCESS BY LOCAL INDEX ROWID LMHOLTP.ACCOUNT Cost: 2 Bytes: 8 Cardinality: 1 Partition #: 30                     
                                                      25 INDEX UNIQUE SCAN UNIQUE LMHOLTP.CO_PK_ACCOUNT Cost: 1 Cardinality: 1 Partition #: 30
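    For reference only, this is roughly how a parallel hint would be placed on such an INSERT ... SELECT; the statement below is cut down to two columns and one source table from your post, and degree 4 is an arbitrary example. Whether it helps is doubtful as long as the plan contains the MERGE JOIN CARTESIAN shown above, which points at a missing join condition rather than a lack of parallelism:
    -- Parallel DML has to be enabled explicitly for the INSERT part to run in parallel:
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(temp_hstralct 4) */ INTO temp_hstralct
           (tmp_n_collector_account_num, tmp_v_location_id)
    SELECT /*+ PARALLEL(transaction_points 4) */
           DISTINCT trp_n_collector_account_num, trp_v_location_id
      FROM transaction_points
     WHERE trp_d_creation_date_time BETWEEN SYSDATE - 3 AND SYSDATE;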

  • Help required in writing the query

    I have the following level accounts
    10000
    11000
    11100
    11101
    11200
    11201
    12000
    12100
    12101
    12102
    12103
    20000
    21000
    21100
    21101
    22000
    22101
    21200
    21201
    I have to take the amount of the child account and sum it to the parent account as below:
    11100 = 11101 + 11102 - 11103+.........
    11000 = 11100 + 11200 + 11300+.........
    10000 = 11000 + 12000 + 13000+.........
    21100 = 21101 + 21102 - 21103+.........
    21000 = 21100 + 21200 + 21300+.........
    20000 = 21000 + 22000 + 23000+.........
    Please help...
    Thanks in advance.

    SQL> with t as (
                   select 10000 ACCOUNT, NULL BEG_BAL, NULL END_BAL from dual union all
                   select 11000,NULL,NULL from dual union all
                   select 11100,NULL,NULL from dual union all
                   select 11101,1750.00,4150.00 from dual union all
                   select 11102,1550.00,3150.00 from dual union all
                   select 11103,1650.00,3200.00 from dual union all
                   select 11200,NULL,NULL from dual union all
                   select 11201,800.00,1250.00 from dual union all
                   select 11202,1550.00,3150.00 from dual union all
                   select 12000,NULL,NULL from dual union all
                   select 12100,NULL,NULL from dual union all
                   select 12101,1200.00,5000.00 from dual union all
                   select 12102,1500.00,3000.00 from dual union all
                   select 12103,1550.00,2750.00 from dual union all
                   select 20000,NULL,NULL from dual union all
                   select 21000,NULL,NULL from dual union all
                   select 21100,NULL,NULL from dual union all
                   select 21101,2000.00,6500.00 from dual union all
                   select 21102,1500.00,3500.00 from dual union all
                   select 21103,1750.00,3550.00 from dual union all
                   select 22000,NULL,NULL from dual union all
                   select 22100,NULL,NULL from dual union all
                   select 22101,1550.00,3550.00 from dual union all
                   select 22102,2550.00,5550.00 from dual union all
                   select 21200,NULL,NULL from dual union all
                   select 21201,2550.00,6500.00 from dual union all
                   select 21202,3550.00,7500.00 from dual
                  )
    select  account,
            beg_bal,
            end_bal,
            sum(beg_bal) over(order by account range between current row and grouping_window following) group_beg_bal,
            sum(end_bal) over(order by account range between current row and grouping_window following) group_end_bal
      from  (
             select  t.*,
                     case
                       when account / 10000 = trunc(account / 10000) then 9999
                       when account / 1000 = trunc(account / 1000) then 999
                       when account / 100 = trunc(account / 100) then 99
                       when account / 10 = trunc(account / 10) then 9
                       else 0
                     end grouping_window
               from  t
            )
    order by account
    /
       ACCOUNT    BEG_BAL    END_BAL GROUP_BEG_BAL GROUP_END_BAL
         10000                               11550         25650
         11000                                7300         14900
         11100                                4950         10500
         11101       1750       4150          1750          4150
         11102       1550       3150          1550          3150
         11103       1650       3200          1650          3200
         11200                                2350          4400
         11201        800       1250           800          1250
         11202       1550       3150          1550          3150
         12000                                4250         10750
         12100                                4250         10750
       ACCOUNT    BEG_BAL    END_BAL GROUP_BEG_BAL GROUP_END_BAL
         12101       1200       5000          1200          5000
         12102       1500       3000          1500          3000
         12103       1550       2750          1550          2750
         20000                               15450         36650
         21000                               11350         27550
         21100                                5250         13550
         21101       2000       6500          2000          6500
         21102       1500       3500          1500          3500
         21103       1750       3550          1750          3550
         21200                                6100         14000
         21201       2550       6500          2550          6500
       ACCOUNT    BEG_BAL    END_BAL GROUP_BEG_BAL GROUP_END_BAL
         21202       3550       7500          3550          7500
         22000                                4100          9100
         22100                                4100          9100
         22101       1550       3550          1550          3550
         22102       2550       5550          2550          5550
    27 rows selected.
    SQL> SY.

  • Help needed to optimize the query

    Help needed to optimize the query:
    The requirement is to select the record with max eff_date from HIST_TBL and that max eff_date should be > = '01-Jan-2007'.
    This has a high cost and takes around 15 minutes to execute.
    Can anyone help to fine-tune this?
       SELECT c.H_SEC,
                    c.S_PAID,
                    c.H_PAID,
                    table_c.EFF_DATE
       FROM    MTCH_TBL c
                    LEFT OUTER JOIN
                       (SELECT b.SEC_ALIAS,
                               b.EFF_DATE,
                               b.INSTANCE
                          FROM HIST_TBL b
                         WHERE b.EFF_DATE =
                                  (SELECT MAX (b2.EFF_DATE)
                                     FROM HIST_TBL b2
                                    WHERE b.SEC_ALIAS = b2.SEC_ALIAS
                                          AND b.INSTANCE =
                                                 b2.INSTANCE
                                          AND b2.EFF_DATE >= '01-Jan-2007')
                               OR b.EFF_DATE IS NULL) table_c
                    ON  table_c.SEC_ALIAS=c.H_SEC
                       AND table_c.INSTANCE = 100;

    To start with, I would avoid scanning HIST_TBL twice.
    Try this
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , eff_date
                   , instance
                from (
                        select sec_alias
                             , eff_date
                             , instance
                             , max(eff_date) over(partition by sec_alias, instance) max_eff_date
                          from hist_tbl b
                         where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
                             or eff_date is null
                      )
                where eff_date = max_eff_date
                  or eff_date is null
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;

  • Guide me to tune the query

    Hi All,
    I need to tune the following query, which takes more than 1 hour to execute over 8 lakh (800,000) records.
    SQL> explain plan for
      SELECT C.aci_cust_code cust_code,
             C.aci_cust_name cust_name,
             R.NAME ruledefination,
             B.RULECODE ALERTS,
             A.custom1 tran_id,
             TD_get_value('AMLTRANTYPE', RTRIM(A.custom17)) trantype,
             A.CUSTOM18 tran_nature,
             A.custom25 tran_date,
             A.messageno messageno,
             TD_get_value('AMLTRANSTATUS', A.status) msgstatus,
             D.acai_acct_type acct_type,
             A.custom19 acct_number,
             A.CURRENCY CURRENCY,
             A.priorityamount amount,
             A.operator USERNAME,
             A.msgdb_id msgdb_id,
             A.msg_mode_in msg_mode_in
        FROM MSGDB A,
             MSGALERTS B,
             AML_CUST_INFO C,
             AML_CUST_ACC_INFO D,
             RULETBL2 R,
             (SELECT tdkey FROM tabledetails WHERE tdidcode = 'AML-INCLUDEQ') amlqueues
       WHERE A.msgdb_id = B.msgdb_id
         AND A.queueid = amlqueues.tdkey
         AND A.MSG_MODE_IN = 'AML-TRANS'
         AND A.custom15 = C.aci_cust_code
         AND A.CUSTOM19 = D.ACAI_ACCT_NUMBER(+)
         AND TO_CHAR(A.custom25,'YYYYMMDD') BETWEEN TO_CHAR(TO_DATE('2011/01/01','YYYY/MM/DD'),'YYYYMMDD')
                                                AND TO_CHAR(TO_DATE('2011/01/31','YYYY/MM/DD'),'YYYYMMDD')
         AND B.RULECODE = R.RULECODE
       ORDER BY A.custom25, msgdb_id, B.rulecode;
    Explained.
    PLAN_TABLE_OUTPUT
    Plan hash value: 1081661146
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 173K| 30M| | 12697 (2)| 00:02:33 |
    | 1 | SORT ORDER BY | | 173K| 30M| 66M| 12697 (2)| 00:02:33 |
    |* 2 | HASH JOIN | | 173K| 30M| | 5580 (4)| 00:01:07 |
    | 3 | VIEW | index$_join$_005 | 3395 | 81480 | | 42 (3)| 00:00:01 |
    |* 4 | HASH JOIN | | | | | | |
    | 5 | INDEX FAST FULL SCAN | IDX_RCODE | 3395 | 81480 | | 10 (0)| 00:00:01 |
    | 6 | INDEX FAST FULL SCAN | SYS_C0040836 | 3395 | 81480 | | 31 (0)| 00:00:01 |
    |* 7 | HASH JOIN | | 1737 | 276K| | 5534 (4)| 00:01:07 |
    |* 8 | HASH JOIN | | 559 | 86645 | | 4575 (3)| 00:00:55 |
    |* 9 | HASH JOIN OUTER | | 448 | 56000 | | 4463 (3)| 00:00:54 |
    | 10 | NESTED LOOPS | | 448 | 47040 | | 4404 (3)| 00:00:53 |
    |* 11 | TABLE ACCESS BY INDEX ROWID| MSGDB | 451 | 35178 | | 4403 (3)| 00:00:53 |
    |* 12 | INDEX RANGE SCAN | I_MODEDATE | 2292 | | | 4323 (3)| 00:00:52 |
    |* 13 | INDEX UNIQUE SCAN | PK_TABLEDETAIL | 1 | 27 | | 0 (0)| 00:00:01 |
    | 14 | INDEX FAST FULL SCAN | ACC_NUMBER_TYPE | 58947 | 1151K| | 58 (4)| 00:00:01 |
    | 15 | TABLE ACCESS FULL | AML_CUST_INFO | 18340 | 537K| | 111 (1)| 00:00:02 |
    | 16 | TABLE ACCESS FULL | MSGALERTS | 868K| 6782K| | 944 (4)| 00:00:12 |
    There is no index on RULECODE of MSGALERTS and RULETBL2 table.
    Could you guys guide me on how to tune this query, with or without creating any new index?
    Thanks,

    To emphasise what hoek has said regarding dates, NEVER compare dates by converting them to strings (or numbers). By doing so, you remove vital information from the optimizer.
    For example, what is the difference between "31st Dec 2010" and "1st Jan 2011"? Easy, they're dates, that's 1 day.
    But what's the difference between "20101231" and "20110101"? Easy: 20110101 - 20101231 = 8870.
    That makes the difference between the optimizer guessing 1 row or 8870 rows... a fairly big difference, I think you'll agree, which could well impact on the plan the optimizer chooses.
    One other point - leaving the clause as dates gives:
    AND    a.custom25 BETWEEN TO_DATE('2011/01/01', 'YYYY/MM/DD')
                          AND TO_DATE('2011/01/31', 'YYYY/MM/DD')
    which excludes any dates on 31st Jan 2011 except midnight, eg. 10am on 31st Jan 2011 won't be returned by your query.
    If you're after rows for a given month, then you could do:
    AND    trunc(a.custom25, 'mm') = TO_DATE('01/01/2011', 'dd/mm/yyyy')
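    If the requirement is the whole of January 2011 including times during the 31st, another option that keeps custom25 untouched (and therefore still usable for an index range scan) is a half-open date range:
    AND    a.custom25 >= TO_DATE('2011/01/01', 'YYYY/MM/DD')
    AND    a.custom25 <  TO_DATE('2011/02/01', 'YYYY/MM/DD')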

  • Undo_tablespace and undo_retention, or tune the query: which one to increase?

    Hi all,
    In my log files:
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=165122 sec, SCN: 0x05ff.062f3363):
    Tue Feb 5 02:26:39 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Tue Feb 5 02:35:21 2008
    Thread 2 advanced to log sequence 42907
    Mon Feb 4 09:10:55 2008
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=104081 sec, SCN: 0x05ff.05ebc008):
    Mon Feb 4 09:10:55 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Mon Feb 4 09:14:08 2008
    ===============================================
    and my undo_retention
    Current usage:
    UNDO_01 96736 11596 12 88
    UNDO_02 96736 9357 10 90
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 36000
    Can anyone please guide me on which is best: increasing undo_retention, increasing the undo tablespace,
    or tuning the query?
    my database version is
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bi
    PL/SQL Release 10.2.0.2.0 - Production
    "CORE     10.2.0.2.0     Production"
    TNS for HPUX: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    thanks in advance

    IMO best is to
    1) tune the query to minimize time and resource use;
    2) set the undo_retention to size the undo tablespace for the required 'consistent read' rebuild requirements;
    3) set the retention guarantee appropriately
    4) size the undo tablespace based on the required size, probably dictated by 2)
    Why is this an 'either one or other' question? When driving a car and looking for best fuel efficiency, you tune up the car, drive properly AND use the right fuel. You don't just pick one and leave it at that.
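    For reference, the statements behind points 2) to 4) look roughly like this; the undo tablespace name UNDOTBS1 and the 12-hour retention are assumptions to be replaced with your own values:
    -- 2) retention target in seconds (here: 12 hours):
    ALTER SYSTEM SET undo_retention = 43200;
    -- 3) turn the retention target into a guarantee:
    ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
    -- 4) check the actual undo demand before resizing the tablespace:
    SELECT MAX(maxquerylen) AS longest_query_sec,
           MAX(undoblks)    AS max_undo_blocks_per_interval
      FROM v$undostat;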

  • How to tune the query and difference between CBO AND RBO.. Which is good

    Hello Friends,
    Here are some questions I have; please reply back with a complete description and URLs if any.
    1) How do you tune a query?
    2) What approach do you take to tune a query? Do you use hints?
    3) Where did you tune the query, and what were the issues with the query?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    5) Give some information about hash joins.
    6) Using an explain plan, how do you know where the bottleneck in a query is? How will you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
    kumar73 wrote:
    Hello Friends,
    Here are some questions I have; please reply back with a complete description and URLs if any.
    1) How do you tune a query?
    Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas (see the small example at the end of this reply).
    See the forum FAQ
    SQL and PL/SQL FAQ
    "3. How to improve the performance of my query?"
    2) What approach do you take to tune a query? Do you use hints?
    Hints can help.
    Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
    Table design can have a big impact on performace.
    Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
    3) Where did you tune the query, and what were the issues with the query?
    Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    Basically, use RBO if you have Oracle 7 or earlier.
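    As a small illustration of the EXPLAIN PLAN approach from 1), using a placeholder query against the demo EMP/DEPT tables:
    EXPLAIN PLAN FOR
    SELECT e.ename, d.dname
      FROM emp e
      JOIN dept d ON d.deptno = e.deptno;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);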

  • How to tune the query...?

    Hi all,
    I have a table with millions of records and the query is taking hours.
    How can I tune the query, apart from doing the following things?
    1. Creating or Deleting indexes.
    2. Using Bind variables.
    3. Using Hints.
    4. Updating the statistics regularly.
    Actually, I was asked this question in an interview: how do you tune the query?
    I told him the above 4 things. Then he said these are not working, so
    how will you tune this query?
    Thanks in advance,
    Pal

    user546710 wrote:
    Actually, I was asked this question in an interview: how do you tune the query?
    I told him the above 4 things. Then he said these are not working, so
    how will you tune this query?
    It actually depends on the scenario/problem given.
    You may want to read this first.
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting

  • How to tune the query for duplicate records while joining the two tables

    Hi, I am executing a query which retrieves from multiple tables, one of which has duplicate records. How do I get a single record?

    Not enough info... the subject says "tune" the query, the message says "write" the query... and where is the actual query that you tried?

  • Help required in generating the following query

    Hi all,
    I need a solution to the following problem.
    We have many groups; each group contains a number of names, and each name contains some items under it.
    I had written a query along the lines of: select item where grp = [0%] and name = [1%] from a table.
    It executes and displays a window with two fields. In the first field I enter the group, and in the second field, when I select from the existing values, it shows the items of all groups.
    But we need only the items that belong to the group entered in the first field.
    Can anyone tell me the query for this requirement?

    Hi,
    The alternative solution is to develop your own add-on, which gets the filtering window, connects the two parameters inside, then runs the query and displays the result in a matrix/grid.
    It does not take much time.
    Regards,
    J.

  • Help required in refining a query

    I have a sql query :
    select C1 AS No_Of_Completed_WOs, C2 as Wos_with_Soft_Fail, (C1-C2) as Completed_Wos,C3 as Failed_Wos,C4 as TimedOut_Wos from (select (CASE WHEN TBL_WRK_ORD.EXTSYS_ID IS NULL THEN '<NULL>'
    ELSE TBL_WRK_ORD.EXTSYS_ID END),
    count(case when WO_STAT=104 AND XACTION_TYPE like '%Completion%' then WO_STAT end ) as C1,
    count(case when WO_STAT=104 AND XACTION_TYPE like '%Soft Error%' then WO_STAT end ) AS C2,
    count(case when WO_STAT=253 then WO_STAT end ) AS C3,
    count(case when WO_STAT=251 then WO_STAT end ) AS C4
    from TBL_WRK_ORD,TBL_WO_EVENT_QUEUE WHERE TBL_WRK_ORD.WO_ID=TBL_WO_EVENT_QUEUE.WO_ID group by TBL_WRK_ORD.EXTSYS_ID)
    The query basically maintains a separate counter for each of the qualifying conditions.
    C1 : if the WO_STAT is 104 and XACTION_TYPE is '%Completion%'
    C2 : if the WO_STAT is 104 and XACTION_TYPE is '%Soft Error%'
    C3 : if the WO_STAT is 253
    C4 : if the WO_STAT is 251.
    Now, the requirement is that we need to fetch distinct records, on which the summation will then be carried out.
    I was not able to incorporate the "distinct" logic in the above query.
    I need help in this regard: how to fetch distinct records and use them in the summation logic.
    Quick help would be appreciated.
    Regards
    RAT.
    Edited by: user9546298 on Feb 24, 2010 6:36 AM
    Edited by: user9546298 on Feb 24, 2010 7:06 AM

    Here is the sample data:
    Table: TBL_WRK_ORD
    WO_ID                          SCHED_DTS            WO_STAT  EXTSYS_ID
    A00000001-QRY_ISUP_SERVICE_L   20091126 03:28:02    103      MRK
    A00000002-S3                   20091201 02:42:34    255      MRK
    A00000003-A4                   20091201 02:50:28    253
    A00000006-ADD1                 20091202 05:37:45    104
    A00000004-ADD1                 20091201 03:40:39    104
    A00000005-DNH1                 20091202 05:12:06    253
    6 rows selected.
    Table : TBL_WO_EVENT_QUEUE
    WO_ID XACTION_TYPE
    A00000001-QRY_ISUP_SERVICE_L WO Accept
    A00000001-QRY_ISUP_SERVICE_L WO Startup
    A00000001-QRY_ISUP_SERVICE_L WO Completion
    A00000002-S3 WO Accept
    A00000003-A4 WO Accept
    A00000003-A4 WO Startup
    A00000003-A4 WO Rollback
    A00000003-A4 WO Failure
    A00000005-DNH1 WO Rollback
    A00000005-DNH1 WO Failure
    A00000006-ADD1 WO Accept
    A00000006-ADD1 WO Startup
    A00000004-ADD1 WO Accept
    A00000004-ADD1 WO Startup
    A00000004-ADD1 WO Completion
    A00000005-DNH1 WO Accept
    A00000005-DNH1 WO Startup
    A00000006-ADD1 WO Completion
    Current output :
    NO_OF_COMPLETED_WOS WOS_WITH_SOFT_FAIL COMPLETED_WOS FAILED_WOS TIMEDOUT_WOS
    2 0 2 8 0
    0 0 0 0 0
    As you can see above, the result has duplicates, which caused the counters to show wrong results.
    The query should filter based on the given criteria and produce the count of the filtered rows,
    but in doing so it should consider only distinct rows for counting.
    Also, just placing DISTINCT in the counting logic [as you pasted] did not work. I tried it, but in vain.
    Regards
    RAT.
    Edited by: user9546298 on Feb 24, 2010 7:09 AM
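    One way to base the counts on distinct rows is to de-duplicate the joined rows in an inline view and aggregate afterwards. A sketch along those lines, using the tables and columns from the post (untested; the columns listed in the DISTINCT define what counts as a duplicate):
    SELECT C1 AS No_Of_Completed_WOs,
           C2 AS Wos_with_Soft_Fail,
           (C1 - C2) AS Completed_Wos,
           C3 AS Failed_Wos,
           C4 AS TimedOut_Wos
      FROM (SELECT NVL(EXTSYS_ID, '<NULL>') EXTSYS_ID,
                   COUNT(CASE WHEN WO_STAT = 104 AND XACTION_TYPE LIKE '%Completion%' THEN WO_STAT END) C1,
                   COUNT(CASE WHEN WO_STAT = 104 AND XACTION_TYPE LIKE '%Soft Error%' THEN WO_STAT END) C2,
                   COUNT(CASE WHEN WO_STAT = 253 THEN WO_STAT END) C3,
                   COUNT(CASE WHEN WO_STAT = 251 THEN WO_STAT END) C4
              FROM (SELECT DISTINCT w.WO_ID, w.EXTSYS_ID, w.WO_STAT, q.XACTION_TYPE
                      FROM TBL_WRK_ORD w, TBL_WO_EVENT_QUEUE q
                     WHERE w.WO_ID = q.WO_ID)
             GROUP BY NVL(EXTSYS_ID, '<NULL>'))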

  • SQL Developer extensions : Help required regarding reports

    Hi,
    We are developing an extension for SQL Developer which is similar to reports, so I need some information regarding the APIs related to report output.
    I need to access the result set of the query output of a user-defined report so that we can customize our extension for some specific requirements. Is there a way to access the result set after the query gets executed? Or are there any APIs I can look into to understand the background processing that happens after the query executes and before the output is printed in the output window?
    Any help or suggestions on this is highly appreciated.

    Please do not duplicate threads. No answer means nobody can or wants to help.
    At most you can bump the first thread in case someone missed it.
    Thanks,
    K.

  • Help Required - Regarding New Implementation

    Hello Experts,
    Implementing a new SAP ... and got a bit confused about the April 1 opening balances and the March closure.
    On 1st March 2014 we entered the following Opening Balances -
    1. Open POs
    2. Open Sales Orders
    3. Stock Opening Balances
    Then for the month of March we have been running SAP and the legacy system in parallel, making the following transactions:
    1. Sales Orders (invoiced in March only)
    2. Deliveries
    3. All AR Invoices
    4. All Purchase Orders
    5. All GRNs
    6. All AP Invoices
    7. No bank receipts or payments were created. At the end of the month both AR and AP invoices will be open for bank receipts and payments respectively.
    The main question is which transactions need to be created/closed to have the correct opening balances in SAP.
    (In other words ...
    What is required now to create the correct opening balances? Can we close the invoices, both sales and purchases, without creating receipts and payments, respectively?
    How do we need to enter the customer and supplier opening balances when we have already created all the invoices for the month of march?)
    Your help will be much appreciated...
    Thank you.
    Best Regards

    Hi, Venkat
    I think that your subject is not relevant to your problem.
    For the Search Help problem, please test the following code.
    PARAMETERS: ppernr LIKE pa0001-pernr.
    DATA: i_return TYPE ddshretval OCCURS 0 WITH HEADER LINE,
          c TYPE c VALUE 'S'.
    * Search Help for Pernr
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR ppernr.
      TYPES: BEGIN OF t_pernr,
        pernr LIKE pa0001-pernr,
        ename LIKE pa0001-ename,
      END OF t_pernr.
      DATA: it_pa0001 TYPE STANDARD TABLE OF t_pernr WITH HEADER LINE.
      SELECT pernr ename from pa0001
        INTO CORRESPONDING FIELDS OF TABLE it_pa0001
        where pa0001~endda = '99991231' .
    *    WHERE zsdo~kunnr IN sokunnr.
      DELETE it_pa0001 WHERE pernr = '00000000'.
      SORT it_pa0001 BY pernr.
      CALL FUNCTION 'F4IF_INT_TABLE_VALUE_REQUEST'
        EXPORTING
          retfield    = 'PERNR'
          dynpprog    = sy-repid
          dynpnr      = sy-dynnr
          dynprofield = 'PERNR'
          value_org   = c
        TABLES
          value_tab   = it_pa0001
          return_tab  = i_return.
    Please reply if there is any issue, and if the above is not a solution either, please explain your problem further.
    Kind Regards,
    Faisal
    Edited by: Faisal Altaf on Jan 17, 2009 2:46 PM
