Update SQL taking too much time - how to optimise?

Hi,
here is the query; it is taking more than 3 hours to execute:
UPDATE process_sub_step proc1
   SET process_indicator = '0',
       process_result    = 'SC'
 WHERE EXISTS (SELECT *
                 FROM link_trans link1
                WHERE link1.tr_leu_id = proc1.tr_leu_id
                  AND link1.TYPE = 'S')
   AND process_substep_id = 'TR_02_03'
   AND process_indicator  = '2';
Both tables have more than 10 lakh (1,000,000) rows, and the column used in the join condition has an index.
Execution plan:
UPDATE STATEMENT ALL_ROWS  Cost: 6
  5 UPDATE PROCESS_SUB_STEP
    4 NESTED LOOPS SEMI  Cost: 6  Bytes: 23  Cardinality: 1
      1 INDEX RANGE SCAN INDEX PROC_SSTL_COMB_IDX  Cost: 3  Bytes: 16  Cardinality: 1
      3 TABLE ACCESS BY INDEX ROWID TABLE EGR.LINK_TRANS_CONS  Cost: 2  Bytes: 378  Cardinality: 54
        2 INDEX RANGE SCAN INDEX LTC_TYPE_IDX  Cost: 1  Cardinality: 2
What approach should I take to improve the performance?
However, if I write the same query as a SELECT, as below, it returns results very quickly:
SELECT *
  FROM process_sub_step proc1
 WHERE EXISTS (SELECT *
                 FROM link_trans link1
                WHERE link1.tr_leu_id = proc1.tr_leu_id
                  AND link1.TYPE = 'S')
   AND process_substep_id = 'TR_02_03'
   AND process_indicator  = '2';
Thanks...

> here is the query; it is taking more than 3 hours to execute.
Obviously. For every single row in process_sub_step that matches the filter criteria, a sub-select has to be executed against the link_trans table.
So if those filter criteria identify a million rows, a million EXISTS SELECTs have to be executed. Even a unique index scan will accumulate into a large overhead; in your case it is an index range scan that is performed, which is slower than a unique scan.
By the way, there is no real joining done here - just a nested loop process: read a row at a time from process_sub_step in a loop, and in each loop iteration run a SELECT against link_trans.
Optimising this specific SQL means doing less I/O (processing fewer rows), by either:
a) selecting fewer rows from process_sub_step to update, or
b) making the filter on link_trans more restrictive.
Neither is likely practical, which means you need to approach this problem with another method and not an EXISTS sub-select.
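One possible rewrite, offered only as a hedged sketch (it assumes the predicates are exactly as posted; the DISTINCT keeps each tr_leu_id unique in the USING set so the MERGE never touches the same target row twice), is a single-pass MERGE:

MERGE INTO process_sub_step proc1
USING (SELECT DISTINCT lt.tr_leu_id       -- one pass over link_trans
         FROM link_trans lt
        WHERE lt.type = 'S') link1
   ON (proc1.tr_leu_id = link1.tr_leu_id)
 WHEN MATCHED THEN
   UPDATE SET proc1.process_indicator = '0',
              proc1.process_result    = 'SC'
        WHERE proc1.process_substep_id = 'TR_02_03'
          AND proc1.process_indicator  = '2';

This lets the optimizer consider a single hash join across both tables instead of probing link_trans once per candidate row; whether it is actually faster depends on how many rows the filters really identify, so test it before relying on it.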

Similar Messages

  • Update statement taking too much time

    Hi All,
    I defined a cursor that returns almost 94 lakh (9.4 million) records, and I want to update a few fields of a table from another table whose data comes over a dblink.
    My cursor is:
    cursor c_tpo is
          select t.loc_loc_id,
                 t.till_no,
                 t.transaction_date,
                 t.sales_audit_txn_no,
                 t.third_party_order_ref
            from dwt_om_email_addresses t
           where (t.merc_loaded_date_time is null or t.ws_ordered_date_time is null or
                 t.copos_ordered_date_time is null)
    -- And for loop is
    for i in c_tpo loop
        select count(*)
          into ln_count
          from web_sales@nrstp
         where dmw_order = i.third_party_order_ref;
        if ln_count = 1 then
          select date_loaded, order_date, order_date, pkt_ctrl_nbr
            into ld_merc_load_date,
                 ld_ws_order_date,
                 ld_cop_order_date,
                 lc_pkt_ctrl_nbr
            from web_sales@nrstp
           where dmw_order = i.third_party_order_ref;
          update dwt_om_email_addresses t
             set merc_loaded_date_time   = ld_merc_load_date,
                 ws_ordered_date_time    = ld_ws_order_date,
                 copos_ordered_date_time = ld_cop_order_date
           where t.loc_loc_id = i.loc_loc_id
             and t.till_no = i.till_no
             and t.transaction_date = i.transaction_date
             and t.sales_audit_txn_no = i.sales_audit_txn_no;
      end if;
        ln_count          := 0;
        ln_count_pkt      := 0;
        ld_merc_load_date := null;
        ld_ws_order_date  := null;
        ld_cop_order_date := null;
        ld_pkms_date_time := null;
        lc_pkt_ctrl_nbr   := null;
      end loop;
      The ln_count variable is used to avoid the NO_DATA_FOUND and TOO_MANY_ROWS exceptions. How do I speed up the process? It currently takes 1 hour to update 500 records.
    What would be the best approach to achieve this in less time?
    Thanks

    If you check your condition, somewhere you have mentioned
    dmw_order = t.third_party_order_ref
    If both of these are unique or primary keys then there is no problem, but if there can be multiple matching values, restrict the inner query with rownum <= 1. The whole loop can then be replaced with a single UPDATE:
    update dwt_om_email_addresses t
       set (merc_loaded_date_time, ws_ordered_date_time, copos_ordered_date_time) =
           (select date_loaded, order_date, order_date
              from (select date_loaded, order_date
                      from web_sales@nrstp
                     where dmw_order = t.third_party_order_ref)
             where rownum <= 1)
     where exists (select '1'
                     from web_sales@nrstp i
                    where i.dmw_order = t.third_party_order_ref)
       and (t.merc_loaded_date_time is null or
            t.ws_ordered_date_time is null or
            t.copos_ordered_date_time is null);
    The exists clause itself is not a problem.
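    Another option, offered only as a hedged sketch (it assumes dmw_order is unique in web_sales@nrstp and that, as in the original cursor, merc_loaded_date_time comes from date_loaded while both ordered-date columns come from order_date), is to avoid the row-by-row round trips over the dblink with a single MERGE:
    merge into dwt_om_email_addresses t
    using (select dmw_order, date_loaded, order_date
             from web_sales@nrstp) ws             -- one pull over the dblink
       on (t.third_party_order_ref = ws.dmw_order)
     when matched then
       update set t.merc_loaded_date_time   = ws.date_loaded,
                  t.ws_ordered_date_time    = ws.order_date,
                  t.copos_ordered_date_time = ws.order_date
            where t.merc_loaded_date_time is null
               or t.ws_ordered_date_time is null
               or t.copos_ordered_date_time is null;
    Pulling web_sales over the network once and letting the database join it locally is usually much cheaper than 94 lakh individual remote selects.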

  • Catbundle.sql taking too much time while running it for CPU-April2011

    Hi ,
    I am on oracle 11.1.0.7.0 and I am applying the CPU patch April 2011.
    OS is Solaris SPARC.
    As part of that , i am running the catbundle.sql.
    Currently, for the last 180 minutes, catbundle.sql has been in this state:
    SQL> PROMPT Processing Oracle Java Supplied Packages...
    Processing Oracle Java Supplied Packages...
    SQL> ALTER SESSION SET current_schema = sys;
    Session altered.
    SQL> @?/rdbms/admin/initcdc.sql
    SQL> Rem
    SQL> Rem $Header: initcdc.sql 15-mar-2006.08:20:07 mbrey Exp $
    SQL> Rem
    SQL> Rem initcdc.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2000, 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem    NAME
    SQL> Rem      initcdc.sql - script used to load CDC jar files into the database
    SQL> Rem
    SQL> Rem    DESCRIPTION
    SQL> Rem      <short description of component this file declares/defines>
    SQL> Rem
    SQL> Rem    NOTES
    SQL> Rem      script must be run as SYS
    SQL> Rem
    SQL> Rem    MODIFIED   (MM/DD/YY)
    SQL> Rem    mbrey       03/15/06 - bug 5092790 add datapump registration
    SQL> Rem    pabingha    02/25/03 - fix undoc interfaces
    SQL> Rem    wnorcott    03/14/02 - bug-2239726 disable triggers.
    SQL> Rem    wnorcott    01/31/02 - function 'active' return 0 or 1.
    SQL> Rem    wnorcott    01/30/02 - disable CDC triggers, CREATE_CHANGE_TABLE re-enables.
    SQL> Rem    wnorcott    06/26/01 - rid trailing slash.  As per Mark Jungermann
    SQL> Rem    gviswana    05/25/01 - CREATE OR REPLACE SYNONYM
    SQL> Rem    jgalanes    11/17/00 - for Import/Export grant execute on util to
    SQL> REM                           SELECT_CATLOG_ROLE
    SQL> Rem    wnorcott    09/07/00 - new loadjava syntax for performance.
    SQL> Rem    wnorcott    07/18/00 - rid LOGMNR_UID$.clientid
    SQL> Rem    wnorcott    06/28/00 - move logmnr_dict view here
    SQL> Rem    wnorcott    03/28/00 - fix trigger install
    SQL> Rem    wnorcott    03/27/00 - Install change table triggers
    SQL> Rem    mbrey       01/26/00 - script to load CDC jars
    SQL> Rem    mbrey       01/26/00 - Created
    SQL> Rem
    SQL> call sys.dbms_java.loadjava('-v -f -r -s -g public rdbms/jlib/CDC.jar');
    It has run up to the @?/rdbms/admin/initcdc.sql script and now it is not proceeding any further. I am not sure if it is stuck or still running; how do I find that out?
    What should I do?
    PS: I have read the note "Script Fails At Loadjava With ORA-03113 and ORA-03114 [ID 358232.1]", but that applies when it errors out; for me it is not even erroring out.
    Thanks
    Kk

    Just an update:
    I sat watching it for 3 hours, then I thought of stopping it, pressed CTRL-Z and boom, it just took me back to the SQL prompt showing that it had completed.
    So if someone else faces this issue: press CTRL-Z after some 5-10 minutes and it will show that it is actually running fine.
    Regards
    Kk

  • HT1430 My inbox update is taking too much time

    My inbox is not getting updated?  I see the sign "checking e-mail" displayed for more than an hour and still the inbox does not get updated.  Also when I send a message, I do not see it in the Sent box.

    Assuming that you are connected to the network okay, the best thing to do is to delete your email account and re-enter it. I know that it sounds strange, but this does sometimes help. Good luck and do report back.

  • Auto Invoice Program taking too much time : problem with update sql

    Hi ,
    Oracle db version 11.2.0.3
    Oracle EBS version : 12.1.3
    Though we have a Sev-1 SR open with Oracle, we have not had much success.
    We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting, we found one query taking most of the time and seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an UPDATE query. Please guide.
    Plan
    UPDATE STATEMENT  ALL_ROWSCost: 0  Bytes: 124  Cardinality: 1 
      50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
      27 FILTER 
      26 HASH JOIN  Cost: 8,937,633  Bytes: 4,261,258,760  Cardinality: 34,364,990 
      24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413  Bytes: 446,744,870  Cardinality: 34,364,990 
      23 SORT UNIQUE  Cost: 8,618,413  Bytes: 4,042,339,978  Cardinality: 34,364,990 
      22 UNION-ALL 
      9 FILTER 
      8 SORT GROUP BY  Cost: 5,643,052  Bytes: 3,164,892,625  Cardinality: 25,319,141 
      7 HASH JOIN  Cost: 1,640,602  Bytes: 32,460,436,875  Cardinality: 259,683,495 
      1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      6 HASH JOIN  Cost: 853,567  Bytes: 22,544,143,440  Cardinality: 214,706,128 
      4 HASH JOIN  Cost: 536,708  Bytes: 2,357,000,550  Cardinality: 29,835,450 
      2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008  Bytes: 1,163,582,550  Cardinality: 29,835,450 
      3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314  Bytes: 1,193,526,000  Cardinality: 29,838,150 
      5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951  Bytes: 3,123,197,116  Cardinality: 120,122,966 
      21 FILTER 
      20 SORT GROUP BY  Cost: 2,975,360  Bytes: 877,447,353  Cardinality: 9,045,849 
      19 HASH JOIN  Cost: 998,323  Bytes: 17,548,946,769  Cardinality: 180,916,977 
      13 VIEW VIEW AR.index$_join$_027 Cost: 108,438  Bytes: 867,771,256  Cardinality: 78,888,296 
      12 HASH JOIN 
      10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206  Bytes: 867,771,256  Cardinality: 78,888,296 
      11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322  Bytes: 867,771,256  Cardinality: 78,888,296 
      18 HASH JOIN  Cost: 748,497  Bytes: 3,281,713,302  Cardinality: 38,159,457 
      14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      17 HASH JOIN  Cost: 519,713  Bytes: 1,969,317,900  Cardinality: 29,838,150 
      15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822  Bytes: 716,115,600  Cardinality: 29,838,150 
      16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847  Bytes: 1,253,202,300  Cardinality: 29,838,150 
      25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552  Bytes: 5,158,998,615  Cardinality: 46,477,465 
      41 SORT GROUP BY  Bytes: 75  Cardinality: 1 
      40 FILTER 
      39 MERGE JOIN CARTESIAN  Cost: 11  Bytes: 75  Cardinality: 1 
      35 NESTED LOOPS  Cost: 8  Bytes: 50  Cardinality: 1 
      32 NESTED LOOPS  Cost: 5  Bytes: 30  Cardinality: 1 
      29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3  Bytes: 22  Cardinality: 1 
      28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2  Cardinality: 1 
      31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2  Bytes: 133,114,520  Cardinality: 16,639,315 
      30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1  Cardinality: 1 
      34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 20  Cardinality: 1 
      33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2  Cardinality: 1 
      38 BUFFER SORT  Cost: 9  Bytes: 25  Cardinality: 1 
      37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 25  Cardinality: 1 
      36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      49 SORT GROUP BY  Bytes: 48  Cardinality: 1 
      48 FILTER 
      47 NESTED LOOPS 
      45 NESTED LOOPS  Cost: 7  Bytes: 48  Cardinality: 1 
      43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4  Bytes: 20  Cardinality: 1 
      42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3  Cardinality: 1 
      44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 28  Cardinality: 1 
    Oracle suggested multiple patches, but those have not been helpful. Please suggest how I can tune this query; I don't have much of a clue about query tuning.
    Regards

    Hi Paul, My bad. I am sorry I missed it.
    The query is as below:
    UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
    I understand that this could be a seeded query but my attempt is to tune it.
    Regards

  • Hi, from the last two days my iPhone has been very slow to open apps and very slow when I check the notification window; it is taking too much time to open when I pull it down. Help me resolve the issue.

    Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I pull it down. Help me resolve the issue.

    The Basic Troubleshooting Steps are:
    Restart... Reset... Restore...
    iPhone Reset
    http://support.apple.com/kb/ht1430
    Try this First... You will Not Lose Any Data...
    Turn the Phone Off...
    Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
    Wait for the Apple logo to Appear and then Disappear...
    Usually takes about 15 - 20 Seconds... ( But can take Longer...)
    Release the Buttons...
    Turn the Phone On...
    If that does not help... See Here:
    Backing up, Updating and Restoring
    http://support.apple.com/kb/HT1414

  • Taking too much time and not connecting to ipad software server

    It is taking too much time, not connecting to the iPad software server, and not updating. Why?

    Because everyone is doing the same thing you are... be patient, and you will get the update... I had to grab the update over 3G (it won't let you download or install, but it got me in the door) and then switch back to wifi to download and install.

  • HT201250 my time capsule is taking too much time indexing backup and then taking longer time to back up ( 207 days ) or longer !!! what shall i do ?

    My Time Capsule is taking too much time indexing the backup and then showing a very long time to back up (207 days or longer)! What shall I do?

    Try 10.7.5 supplemental update.
    This update seems to have solved this problem for many.
    Best.

  • Load is taking too much time & going to short dump.

    Hi,
    We have around 1000 records to be loaded from R/3. The load is taking too much time and it is still running.
    When we tried to manually update the records, we got an error 'converting the rate into indirect quote: CAD/USD (ex. rate type EURX)'. Has anyone faced this type of error?
    Thanks,
    APR

    Hi,
    What are you trying to do? If you want to load exchange rates, this can be done via the context menu of the appropriate source system (Transfer Global Settings / Exchange Rates).

  • Procedure Taking too Much Time

    Hi Guys,
    I have a package which contains a number of functions and procedures linked to Oracle's standard report called Production Shortage Report. It is taking too much time; these days it does not even finish after three days.
    So I am looking for tuning steps which I can follow for all the queries one by one. I tried to use Explain Plan, but it shows not much difference after moving a column from one location to another etc., even though I have updated the existing indexes and also created new ones.
    Thanks
    Shishu Paul

    Shishu,
    Is this report part of an Oracle product such as the E-Business Suite (EBS)? If so, I recommend you create a Service Request (SR) with Oracle and have them tune the report. Doing your own tuning could cause you some support problems later on.
    Just my two cents...
    Craig

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!). When I process the result set with a while loop, it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con)
        throws SQLException
    {
        internalCustomer = false;
        String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
        // null/empty checks must come before startsWith() to avoid a NullPointerException
        if (customerNameOfLogger == null || customerNameOfLogger.equals("")
                || customerNameOfLogger.equals("Unavailable")
                || customerNameOfLogger.startsWith("DPI")
                || customerNameOfLogger.startsWith("DUPONT"))
        {
            internalCustomer = true;
        }
        // show all groups to internal customers and only their customer groups for external customers
        if (internalCustomer) {
            stmtLoad = con.prepareStatement(sqlLoad);
            ResultSet rs = stmtLoad.executeQuery();
            return new GroupHierEntity(rs);
        } else {
            stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
            stmtLoadExternal.setString(1, customerNameOfLogger);
            stmtLoadExternal.setString(2, customerNameOfLogger);
            ResultSet rs = stmtLoadExternal.executeQuery();
            return new GroupHierEntity(rs);
        }
    }

    // caller code that walks the result set
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
        lvl  = ge.getInt("lvl");
        oid  = ge.getLong("oid");
        name = ge.getString("name");
        if (internalCustomer && lvl == 2) {
            int i = getAlphaIndex(name);
            super.setAppendRoot(alphaIndex);
        }
        gn = new GroupListDataNode(lvl + 1, oid, name);
        gn.setSelectable(true);
        this.addNode(gn);
        count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything inside the while loop and ran it as below; it still takes the same time (30 secs):
    while (ge.next())
    { count++; }
    Why is the while condition (ge.next()) taking so much time? Is there a more efficient way of reading the result set?
    Thanks ,
    bala

    I tried all these things. The query is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println calls at various points to see which part takes how long.
    executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes too much time.
    I have similar queries that return some 800 rows, and those take only 1 sec.
    I suspect resultset.next(). Is there any other alternative?

  • ODS to CUBE load taking too much time..

    Hi all ,
    We are loading the data from our ZODS to ZCUBE, but the data load is taking too much time. We haven't created any indexes; we also tried making an InfoSource for the ODS but still have the same problem. It always shows 0 from 345674 records, i.e. the records are not getting extracted from the ODS.
    Can anybody help me in this regard? It is a bit urgent.
    Thanks in advance.

    Hi,
    There are a few things you can check. First, check in ST22 whether the job has ended in a dump.
    The next thing you can do, if the job doesn't end abnormally, is to reduce the number of records processed at the same time. Sometimes the system has trouble if the number of records it has to process in one packet is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value. Try to run the load again.
    If the job still doesn't finish, then you have to check whether or not there are any ABAP routines and/or formulas involved in the update rules; maybe they are running in a loop.
    regards,
    Raymond Baggen
    Uphantis bv

  • Client import taking too much time

    hi all,
    I am importing a client. It has completed copying tables (19,803 of 19,803), but for the last four hours its status has been "Processing".
    SCC3 shows:
    Target Client           650
    Copy Type               Client Import Post-Proc
    Profile                 SAP_CUST
    Status                  Processing...
    User                    SAP*
    Start on                24.05.2009 / 15:08:03
    Last Entry on           24.05.2009 / 15:36:25
    Current Action:         Post Processing
    -  Last Exit Program    RGBCFL01
    Transport Requests
    - Client-Specific       PRDKT00004
    - Texts                 PRDKX00004
    Statistics for this Run
    - No. of Tables             19803 of     19803
    - Deleted Lines                 7
    - Copied Lines                  0
    sm50
    1 DIA 542           Running Yes             SAPLTHFB 650 SAP*     
    7 BGD 4172   Running Yes 11479  RGTBGD23 650 SAP* Sequential Read     D010INC
    sm66
    Server  No. Type PID Status  Reason Sem Start Error CPU Time   User Report   Action          Table
    prdsap_PRD_00  7  BTC 4172 Running   Yes    11711 SAP* RGTBGD23 Sequential Read D010INC
    Please guide me on why it is taking so much time when it has already finished most of the work.
    best regard
    Khan

    The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a "MANDT" (= client) field which needs to be changed. Depending on the size of the client, this can take a huge amount of time.
    You can try to improve the speed by updating the table statistics for table D010INC.
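    If the underlying database is Oracle (an assumption; on other databases use the equivalent statistics tool), a minimal sketch would be:
    -- Hedged sketch: refresh optimizer statistics for D010INC. Assumes an Oracle
    -- database and that SAPR3 owns the table (a guess - adjust to your schema owner).
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3',
                                    tabname => 'D010INC',
                                    cascade => TRUE);  -- also refresh the index statistics
    END;
    /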
    Markus

  • Job is taking too much time

    Hi,
    I am running some SQL code that takes 1 hour, but when I schedule this code in a job using a package it takes 5 hours.
    Could anybody suggest why it is taking so much more time?
    Regards
    Gagan

    Use TRACE and TKPROF with wait events to see where time is being spent (or wasted).
    See these informative threads:
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    HOW TO: Post a SQL statement tuning request - template posting
    You can also use V$SESSION and/or V$SESSION_LONGOPS to see what code is currently executing, for example:
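    A minimal sketch of both ideas, assuming you can identify the job's session (the SID and serial# below are hypothetical placeholders):
    -- See what long-running operations are doing right now
    SELECT sid, serial#, opname, sofar, totalwork, time_remaining
      FROM v$session_longops
     WHERE sofar <> totalwork;

    -- Enable SQL trace (with waits and binds) for the job's session,
    -- then run TKPROF on the resulting trace file
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,   -- hypothetical SID
                                        serial_num => 456,   -- hypothetical serial#
                                        waits      => TRUE,
                                        binds      => TRUE);
    END;
    /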

  • Report is taking too much time when running from parameter form

    Dear All
    I have developed a report in Oracle Reports Builder 10g. While running it from Reports Builder, the data comes back very fast.
    But if it is run from the parameter form, it takes too much time to format the report as PDF.
    Please suggest any configuration or setting if anybody has an idea.
    Thanks

    Hi,
    The first thing to check is whether the query is running to completion in TOAD. By default, TOAD just selects the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
    Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL Inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, then compare that SQL and its explain plan to the SQL you ran in TOAD (a query for this is sketched below).
    Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and that, if any security packages or VPD policies are used in the SQL, these have been initialised the same way.
    If you still cannot determine the difference then trace both sessions.
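    If you prefer plain SQL to TOAD's session browser, a hedged sketch of that interrogation (the username filter is a made-up example; use whatever identifies the Discoverer session on your system, such as module or osuser):
    -- Find the SQL another session is currently executing (10g+ dictionary views)
    SELECT s.sid, s.serial#, s.module, q.sql_fulltext
      FROM v$session s
      JOIN v$sql q
        ON q.sql_id       = s.sql_id
       AND q.child_number = s.sql_child_number
     WHERE s.username = 'DISCO_USER';   -- hypothetical username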
    Rod West
