Performance Tuning of a merge statement

Hi,
The query below is occupying 120 GB of temp tablespace, and the explain plan does not show where that usage is coming from.
Can someone please help me with this?
explain plan for
MERGE INTO BKMAIN.BK_CUST_OD_PEAK_SUM TGT
USING (
  WITH OD_MAIN AS (
    SELECT MAX(CASE
                 WHEN CUST_BAL_MAX.BK_TRN_TS <= CUST_BAL_TEMP.BK_TRN_TS
                  AND CUST_BAL_MAX.BK_CUR_BAL_RPT_CCY_AM >= 0
                 THEN CUST_BAL_MAX.BK_TRN_TS
                 ELSE NULL
               END) T_TMP_TRN_TS,
           MIN(CASE
                 WHEN CUST_BAL_MAX.BK_TRN_TS >= CUST_BAL_TEMP.BK_TRN_TS
                  AND CUST_BAL_MAX.BK_CUR_BAL_RPT_CCY_AM >= 0
                 THEN CUST_BAL_MAX.BK_TRN_TS
                 ELSE NULL
               END) T_TMP_TRN_TS1,
           CUST_BAL_TEMP.BK_BUS_EFF_DT,
           CUST_BAL_TEMP.BK_CUR_BAL_RPT_CCY_AM,
           CUST_BAL_TEMP.BK_PDAY_CLS_BAL_RPT_CCY_AM,
           CUST_BAL_MAX.N_CUST_SKEY
    FROM   BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL_MAX,
           (SELECT TRN_SUM.N_CUST_SKEY,
                   TRN_SUM.BK_BUS_EFF_DT,
                   TRN_SUM.BK_TRN_TS,
                   TRN_SUM.BK_CUR_BAL_RPT_CCY_AM,
                   CUST_OD_RSLT.BK_PDAY_CLS_BAL_RPT_CCY_AM
            FROM   BKMAIN.BK_CUST_TRN_TM_BAL_SUM TRN_SUM,
                   BKMAIN.BK_CUST_OD_PEAK_SUM CUST_OD_RSLT
            WHERE (TRN_SUM.BK_BUS_EFF_DT = '02-APR-2013'
               AND TRN_SUM.N_CUST_SKEY = CUST_OD_RSLT.N_CUST_SKEY
               AND TRN_SUM.BK_BUS_EFF_DT = CUST_OD_RSLT.BK_BUS_EFF_DT
               AND TRN_SUM.BK_CUR_BAL_RPT_CCY_AM = (-1 * CUST_OD_RSLT.BK_MAX_OD_RPT_CCY_AM))
           ) CUST_BAL_TEMP
    WHERE  CUST_BAL_MAX.BK_BUS_EFF_DT = '02-APR-2013'
    AND    CUST_BAL_MAX.N_CUST_SKEY = CUST_BAL_TEMP.N_CUST_SKEY
    AND    CUST_BAL_MAX.BK_BUS_EFF_DT = CUST_BAL_TEMP.BK_BUS_EFF_DT
    GROUP BY CUST_BAL_MAX.N_CUST_SKEY,
             CUST_BAL_TEMP.BK_BUS_EFF_DT,
             CUST_BAL_TEMP.BK_CUR_BAL_RPT_CCY_AM,
             CUST_BAL_TEMP.BK_PDAY_CLS_BAL_RPT_CCY_AM
  )
  SELECT N_CUST_SKEY,
         BK_BUS_EFF_DT,
         CASE
           WHEN T_TMP_TRN_TS IS NOT NULL
           THEN (SELECT CUST_BAL.BK_CUR_BAL_END_TS
                 FROM   BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                 WHERE  CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                 AND    CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                 AND    CUST_BAL.BK_TRN_TS = OD_MAIN.T_TMP_TRN_TS)
           WHEN (T_TMP_TRN_TS IS NULL
                 AND OD_MAIN.BK_PDAY_CLS_BAL_RPT_CCY_AM < 0)
           THEN BK_FN_GET_STRT_EOD_BUS_TS(1, '02-APR-2013', 'S')
           WHEN (T_TMP_TRN_TS IS NULL
                 AND OD_MAIN.BK_PDAY_CLS_BAL_RPT_CCY_AM >= 0)
           THEN (SELECT MIN(CUST_BAL.BK_TRN_TS)
                 FROM   BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                 WHERE  CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                 AND    CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                 AND    CUST_BAL.BK_OD_FL = 'Y')
         END T_MAX_OD_STRT_TS,
         CASE
           WHEN T_TMP_TRN_TS1 IS NOT NULL
           THEN (SELECT CUST_BAL.BK_CUR_BAL_STRT_TS
                 FROM   BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                 WHERE  CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                 AND    CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                 AND    CUST_BAL.BK_TRN_TS = OD_MAIN.T_TMP_TRN_TS1)
           WHEN (T_TMP_TRN_TS1 IS NULL)
           THEN BK_FN_GET_STRT_EOD_BUS_TS(1, '02-APR-2013', 'E')
         END T_MAX_OD_END_TS
  FROM   OD_MAIN
) SRC
ON (TGT.N_CUST_SKEY = SRC.N_CUST_SKEY
    AND TGT.BK_BUS_EFF_DT = SRC.BK_BUS_EFF_DT
    AND TGT.BK_BUS_EFF_DT = '02-APR-2013')
WHEN MATCHED THEN
  UPDATE SET BK_MAX_OD_STRT_TS = T_MAX_OD_STRT_TS,
             BK_MAX_OD_END_TS  = T_MAX_OD_END_TS;
set linesize 2000;
select * from table( dbms_xplan.display );
PLAN_TABLE_OUTPUT
Plan hash value: 2341776056
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 1 | 54 | 2035 (1)| 00:00:29 |
| 1 | MERGE | BK_CUST_OD_PEAK_SUM | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 35 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 1 | | 3 (0)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 35 | 4 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 1 | | 3 (0)| 00:00:01 |
| 6 | SORT AGGREGATE | | 1 | 26 | | |
|* 7 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 26 | 9 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 5 | | 3 (0)| 00:00:01 |
| 9 | VIEW | | | | | |
| 10 | NESTED LOOPS | | | | | |
| 11 | NESTED LOOPS | | 1 | 173 | 2035 (1)| 00:00:29 |
| 12 | VIEW | | 1 | 61 | 2033 (1)| 00:00:29 |
| 13 | SORT GROUP BY | | 1 | 85 | 2033 (1)| 00:00:29 |
| 14 | NESTED LOOPS | | | | | |
| 15 | NESTED LOOPS | | 1 | 85 | 2032 (1)| 00:00:29 |
|* 16 | HASH JOIN | | 1 | 54 | 2024 (1)| 00:00:29 |
|* 17 | TABLE ACCESS STORAGE FULL| BK_CUST_OD_PEAK_SUM | 18254 | 410K| 118 (0)| 00:00:02 |
|* 18 | TABLE ACCESS STORAGE FULL| BK_CUST_TRN_TM_BAL_SUM | 370K| 10M| 1904 (1)| 00:00:27 |
|* 19 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 5 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID| BK_CUST_TRN_TM_BAL_SUM | 3 | 93 | 8 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | PK_BK_CUST_OD_PEAK_SUM | 1 | | 1 (0)| 00:00:01 |
| 22 | TABLE ACCESS BY INDEX ROWID | BK_CUST_OD_PEAK_SUM | 1 | 112 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
3 - access("CUST_BAL"."N_CUST_SKEY"=:B1 AND "CUST_BAL"."BK_TRN_TS"=:B2)
4 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
5 - access("CUST_BAL"."N_CUST_SKEY"=:B1 AND "CUST_BAL"."BK_TRN_TS"=:B2)
7 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"CUST_BAL"."BK_OD_FL"='Y')
8 - access("CUST_BAL"."N_CUST_SKEY"=:B1)
16 - access("TRN_SUM"."N_CUST_SKEY"="CUST_OD_RSLT"."N_CUST_SKEY" AND
"TRN_SUM"."BK_BUS_EFF_DT"="CUST_OD_RSLT"."BK_BUS_EFF_DT" AND
"TRN_SUM"."BK_CUR_BAL_RPT_CCY_AM"=(-1)*"CUST_OD_RSLT"."BK_MAX_OD_RPT_CCY_AM")
17 - storage("CUST_OD_RSLT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
filter("CUST_OD_RSLT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
18 - storage("TRN_SUM"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
filter("TRN_SUM"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
19 - access("CUST_BAL_MAX"."N_CUST_SKEY"="TRN_SUM"."N_CUST_SKEY")
20 - filter("CUST_BAL_MAX"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
AND "CUST_BAL_MAX"."BK_BUS_EFF_DT"="TRN_SUM"."BK_BUS_EFF_DT")
21 - access("TGT"."N_CUST_SKEY"="N_CUST_SKEY" AND "TGT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
filter("TGT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
53 rows selected.
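EXPLAIN PLAN only shows optimizer estimates, which is why it does not show the temp usage at all. A sketch of how the temp consumers can be watched while the MERGE is actually running (standard V$ views; :sql_id is whatever SQL_ID the running statement reports in V$SESSION/V$SQL):

-- which plan operations of the running statement are spilling to temp
SELECT operation_type,
       operation_id,                          -- matches the Id column of the plan
       ROUND(actual_mem_used/1024/1024) AS pga_mb,
       ROUND(tempseg_size/1024/1024)    AS temp_mb,
       number_passes
  FROM v$sql_workarea_active
 WHERE sql_id = :sql_id;

-- overall temp segment usage per session
SELECT username, session_num, segtype, blocks, tablespace
  FROM v$tempseg_usage;

If the statement can be re-run, adding the /*+ GATHER_PLAN_STATISTICS */ hint and then reading DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST') also reports the actual temp used per plan line.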

Hi
sb92075 wrote:
it appears that the STATISTICS do NOT reflect reality; or do you really have many tables with 1 row?
Not necessarily (and not even likely):
1) Explain plan shows the expected number of rows after filters are applied, so even if the stats are perfectly correct but the predicates are correlated, it's easy to get cardinality = 1, because the optimizer has no way of knowing about correlations between columns (unless you're on 11g and have collected extended stats on this column group).
2) In an explain plan, the cardinalities of driven operations are shown per single iteration. E.g.:
NESTED LOOP cardinality = 1,000,000
  TABLE ACCESS FULL A cardinality = 1,000,000
  TABLE ACCESS BY ROWID B cardinality = 1
    INDEX UNIQUE SCAN PK$B cardinality = 1
This doesn't mean that the optimizer expects to find 1 row in table B, with or without filters; it means that there will be 1 row per each of the 1,000,000 iterations.
In this specific case, the most suspicious operation in the plan is HASH JOIN 16: first, because it's highly unusual to have 18k rows in one table and 370k in another
and find only 1 match; second, because it's a 3-column join, which probably explains why the join cardinality is estimated so low.
Often, such problems are mitigated by multicolumn join sanity checks, so maybe the OP is either on an old version of Oracle that doesn't have these checks, or these checks are disabled for some reason.
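For completeness, a minimal sketch of the 11g column-group (extended) statistics mentioned above. The owner and table names are taken from the posted query, but whether this particular column group helps the join estimate here is for the OP to verify:

-- 11g+ sketch: describe the correlation between the join columns via a
-- column group, then re-gather statistics so it is used.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'BKMAIN',
         tabname   => 'BK_CUST_TRN_TM_BAL_SUM',
         extension => '(N_CUST_SKEY, BK_BUS_EFF_DT)')
  FROM dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'BKMAIN',
    tabname    => 'BK_CUST_TRN_TM_BAL_SUM',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    cascade    => TRUE);
END;
/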
Best regards,
Nikolay

Similar Messages

  • Performance of merge statement

    hi all,
    any advice or tips on how to optimize the performance of the merge statement?
    can indexes on the target/source tables help?
    thanks

    user2361373 wrote:
    you cannot improve the performance of merge
    A bit of a misleading answer, when the merge encompasses a query that can be improved, hence the merge performance can be improved.
    but the source query inside merge is to be optimized; based on rowid or primary key the update runs faster.
    There are many ways to improve a query and it all depends on the query itself. It doesn't necessarily have to do with rowid or the primary key. First the cause of the performance issue needs identifying; one way to start is sketched below.
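    A simple starting point (placeholder table and column names): explain the source query of the MERGE on its own, since the MERGE can only be as fast as its USING query.
    -- hypothetical tables; tune the source query in isolation first
    EXPLAIN PLAN FOR
      SELECT s.key_col, s.val_col
      FROM   source_table s
      JOIN   lookup_table l ON l.key_col = s.key_col;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);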

  • Performance tuning in t

    hi,
    I have to do performance tuning for one program; it is taking 67000 secs in background execution and 1000 secs for some variants. It is an ALV report.
    Please suggest how to proceed to change the code.

    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Too
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    Just refer to these links...
    performance
    Performance
    Performance Guide
    performance issues...
    Performance Tuning
    Performance issues
    performance tuning
    performance tuning
    You can go to transaction SE30 to get the runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.

  • Regarding performance tuning

    hi,
    I have developed a report program. It is taking too much time to fetch the records, so what steps do I have to consider to improve the performance? Urgent, please.

    Hi,
    Check this links
    Performance tuning for Data Selection Statement & Others
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    http://www.sapdevelopment.co.uk/perform/performhome.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
    1. Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    2. Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    3. SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    6. Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    7. Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    8. Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    http://sap.genieholdings.com/abap/performance.htm
    http://www.dbis.ethz.ch/research/publications/19.pdf
    Reward Points if it is Useful.
    Thanks,
    Manjunath MS

  • Performance tuning techniques

    I am looking to compile a list of the major performance tuning techniques that can be implemented in an ABAP program. 
    Appreciate any feedback
    J

    HI,
    Check these links:
    http://www.erpgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Performance tuning for Data Selection Statement 
    For all entries
    The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of 
    entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the 
    length of the WHERE clause. 
    The plus
    Large amount of data 
    Mixing processing and reading of data 
    Fast internal reprocessing of data 
    Fast 
    The Minus
    Difficult to program/understand 
    Memory could be critical (use FREE or PACKAGE size) 
    Some steps that might make FOR ALL ENTRIES more efficient:
    Removing duplicates from the driver table
    Sorting the driver table
    If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
    FOR ALL ENTRIES IN i_tab
      WHERE mykey >= i_tab-low and
            mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data 
    Mixing processing and reading of data 
    Easy to code - and understand 
    The minus:
    Large amount of data 
    when mixed processing isn’t needed 
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data 
    Similar to Nested selects - when the accesses are planned by the programmer 
    In some cases the fastest 
    Not so memory critical 
    The minus
    Very difficult to program/understand 
    Mixing processing and reading of data not possible 
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    The runtime analysis (SE30)
    SQL Trace (ST05)
    Tips and Tricks tool
    The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves the performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
    SELECT DISTINCT
    ORDER BY / GROUP BY / HAVING clause
    Any WHERE clause that contains a subquery or IS NULL expression
    JOINs
    A SELECT... FOR UPDATE
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT statement.
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause it could be better to avoid using SELECT DISTINCT if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
    Regards
    Anver
    If this helped, please mark points

  • Performance Tuning - Suggestions

    Hi,
    I have an ABAP (interactive list) program that times out in PRD very often. The ABAP run time is about 99%; the DB time is less than 1%. All the select statements have the table index in place. Actually it is processing all the Production Orders (released but not confirmed/closed). Please let me know if you have any suggestions.
    Appreciate Your Help.
    Thanks,
    Kannan.

    Hi
    1) Don't use nested select statements
    2) If possible use the FOR ALL ENTRIES addition
    3) In the WHERE clause make sure you give all the primary key fields
    4) Use an index for the selection criteria.
    5) You can also use inner joins
    6) You can try to put the data from the first select statement into an internal table and then, to select the data from the second table, use FOR ALL ENTRIES IN.
    7) Use the runtime analysis (SE30) and SQL Trace (ST05) to analyse the performance and to identify where the load is heavy, so that you can change the code accordingly
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d0db4c9-0e01-0010-b68f-9b1408d5f234
    ABAP performance depends upon various factors and is divided into three parts:
    1. Database
    2. ABAP
    3. System
    Run any program using SE30 (performance analysis); to improve performance refer to the tips and tricks section of SE30. Always remember that ABAP performance is improved when there is the least load on the database.
    You can get an interactive graph in SE30 regarding this, with a file.
    Also, if you want to find the runtime of parts of the code, use:
    Switch on RTA Dynamically within ABAP Code
    *To turn runtime analysis on within ABAP code insert the following code
    SET RUN TIME ANALYZER ON.
    *To turn runtime analysis off within ABAP code insert the following code
    SET RUN TIME ANALYZER OFF.
    Always check that the driver internal table is not empty when using FOR ALL ENTRIES
    Avoid FOR ALL ENTRIES in JOINs
    Try to avoid joins and use FOR ALL ENTRIES.
    Try to restrict joins to 1 level only, i.e. only 2 tables
    Avoid using SELECT *.
    Avoid having multiple SELECTs from the same table in the same object.
    Try to minimize the number of variables to save memory.
    The sequence of fields in the WHERE clause must be as per the primary/secondary index (if any)
    Avoid creation of indexes as far as possible
    Avoid operators like <>, >, < and LIKE '%' in WHERE clause conditions
    Avoid SELECT/SELECT SINGLE statements in loops.
    Try to use BINARY SEARCH in READ TABLE. Ensure the table is sorted before using BINARY SEARCH.
    Avoid using aggregate functions (SUM, MAX etc.) in SELECTs (GROUP BY, HAVING)
    Avoid using ORDER BY in SELECTs
    Avoid nested SELECTs
    Avoid nested loops over internal tables
    Try to use FIELD-SYMBOLS.
    Try to avoid INTO CORRESPONDING FIELDS OF
    Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead
    Check the following Links
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    check the below link
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    See the following link if it's any help:
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Check also http://service.sap.com/performance
    and
    books like
    http://www.sap-press.com/product.cfm?account=&product=H951
    http://www.sap-press.com/product.cfm?account=&product=H973
    http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Too
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    Just refer to these links...
    performance
    Performance
    Performance Guide
    performance issues...
    Performance Tuning
    Performance issues
    performance tuning
    performance tuning
    You can go to transaction SE30 to get the runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.
    Regards
    Anji

  • What are the steps for doing performance tuning for a particular program

    What are the steps for doing performance tuning for a particular program?

    Check this link:
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Check out these links:
    www.sapgenie.com/abap/performance.htm
    www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
    Message was edited by: Chandrasekhar Jagarlamudi

  • Performance Tuning Of VA01

    Hi,
    Please can anybody help me with the performance tuning of the VA01 transaction, since it is consuming a lot of time in production.
    This issue is very urgent.
    Pls help.

    Always check that the driver internal table is not empty when using FOR ALL ENTRIES
    Avoid FOR ALL ENTRIES in JOINs
    Try to avoid joins and use FOR ALL ENTRIES.
    Try to restrict joins to 1 level only, i.e. only 2 tables
    Avoid using SELECT *.
    Avoid having multiple SELECTs from the same table in the same object.
    Try to minimize the number of variables to save memory.
    The sequence of fields in the WHERE clause must be as per the primary/secondary index (if any)
    Avoid creation of indexes as far as possible
    Avoid operators like <>, >, < and LIKE '%' in WHERE clause conditions
    Avoid SELECT/SELECT SINGLE statements in loops.
    Try to use BINARY SEARCH in READ TABLE. Ensure the table is sorted before using BINARY SEARCH.
    Avoid using aggregate functions (SUM, MAX etc.) in SELECTs (GROUP BY, HAVING)
    Avoid using ORDER BY in SELECTs
    Avoid nested SELECTs
    Avoid nested loops over internal tables
    Try to use FIELD-SYMBOLS.
    Try to avoid INTO CORRESPONDING FIELDS OF
    Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead
    you can refer these links :
    http://www.sapgenie.com/abap/performance.htm
    chk this
    How to increase the performance of a program
    Check the following Links
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    check the below link
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    See the following link if it's any help:
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Check also http://service.sap.com/performance
    and
    books like
    http://www.sap-press.com/product.cfm?account=&product=H951
    http://www.sap-press.com/product.cfm?account=&product=H973
    http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
    cheers!
    sri

  • Merged Dimension Performance vs. Multiple SQL Statements via Contexts

    Hi there,
    If you have a Webi report and you select two measures, each from a different context, along with some dimensions, and it generates two separate SQL statements via a "Join", does that join happen outside of the scope of the Webi Processing Server?
    If it happens within the Webi Processing Server memory, how is the processing different from if you were to have two separate queries in your report and then merge the dimensions, with respect to performance?
    Thanks,
    Allan

    you can use the code as per your requirement
    but you need to do some performance tuning
    http://biemond.blogspot.com/2010/08/things-you-need-to-do-for-owsm-11g.html

  • Invalid statement in Performance Tuning Guide

    Oracle® Database Performance Tuning Guide
    10g Release 2 (10.2)
    Part Number B14211-01
    13 The Query Optimizer
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm#sthref1324
    excerpt:
    "You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes."
    Emphasis mine - Gints
    Here is counterexample:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> create table blah (sex varchar2(1) not null, data varchar2(4000));
    Table created.
    SQL> insert into blah select 'F', lpad('a', 4000, 'a') from user_objects where rownum<=10;
    10 rows created.
    SQL> insert into blah select 'M', lpad('a', 4000, 'a') from user_objects where rownum<=10;
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> create bitmap index sexidx on blah(sex);
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user, 'blah', cascade=>true)
    PL/SQL procedure successfully completed.
    SQL>
    SQL> set autot traceonly expl
    SQL> set lines 100
    SQL> select count(*) from blah where sex = 'F';
    SQL> /
    Execution Plan
    Plan hash value: 1028317341
    | Id  | Operation                     | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |        |     1 |     2 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE               |        |     1 |     2 |            |          |
    |   2 |   BITMAP CONVERSION COUNT     |        |    10 |    20 |     1   (0)| 00:00:01 |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| SEXIDX |       |       |            |          |
    Predicate Information (identified by operation id):
       3 - filter("SEX"='F')
    SQL> set autot off
    SQL> alter session set events '10046 trace name context forever, level 12';
    Session altered.
    SQL> select count(*) from blah where sex = 'F';
      COUNT(*)
            10
    SQL> disconn
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    And here is the relevant section from the tkprof'ed trace file, showing that a bitmap index fast full scan really was performed.
    select count(*)
    from
    blah where sex = 'F'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      1      0.00       0.03          0          0          0           0
    Fetch        2      0.00       0.00          0          3          0           1
    total        4      0.00       0.05          0          3          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=0 pw=0 time=74 us)
          1   BITMAP CONVERSION COUNT (cr=3 pr=0 pw=0 time=55 us)
          1    BITMAP INDEX FAST FULL SCAN SEXIDX (cr=3 pr=0 pw=0 time=43 us)(object id 98446)
    Gints Plivna
    http://www.gplivna.eu

    Hello Gints. I've reported this to the writer responsible for the Performance Tuning Guide. One of us will get back to you with the resolution.
    Regards,
    Diana

  • Performance tuning in merge command

    HI
    My merge statement takes 45 mins to merge source table to target
    Below is the execution plan.
    This is the number of rows merged:
    906703 rows merged.
    Later I need to merge the stage table into the target table every day -- 14701358 rows.
    I suspect this will take more than two hrs; how can I improve this?
    My merge syntax is as follows:
    MERGE /*+ INDEX_FFS(s idx_pis_test1a_branchcode idx_pis_test1a_invoiceno idx_pis_test1a_vouchertype idx_pis_test1a_shpmtno ) */ into scott.pis_test1_A s
    using
    (select * from scott.pis_test11) t
    on
    (s.BRANCH_CODE =t.BRANCH_CODE
    and nvl(s.shpmt_no,'X')=nvl(t.shpmt_no,'X')
    and s.invoice_no=t.invoice_no
    and s.VOUCHER_TYPE=t.VOUCHER_TYPE
    and s.suffix_invoice=t.suffix_invoice
    and TRIM(s.suffix_shpmt)=TRIM(t.suffix_shpmt))
    when not matched
    then
    INSERT
    values
    commit
    Execution Plan
    0      MERGE STATEMENT Optimizer=ALL_ROWS (Cost=32310 Card=1115245 Bytes=649072590)
    1 0      MERGE OF 'PIS_TEST1_A'
    2 1        VIEW
    3 2          HASH JOIN (RIGHT OUTER) (Cost=32310 Card=1115245 Bytes=630113425)
    4 3            TABLE ACCESS (BY INDEX ROWID) OF 'PIS_TEST1_A' (TABLE) (Cost=827 Card=658889 Bytes=188442254)
    5 4              INDEX (FULL SCAN) OF 'IDX_PIS_TEST1A_BRANCHCODE' (INDEX) (Cost=26 Card=658889)
    6 3            TABLE ACCESS (FULL) OF 'PIS_TEST1' (TABLE) (Cost=6746 Card=1115245 Bytes=311153355)

    Hi Justin
    my answers are below.
    1) Are your statistics accurate and up to date?
    No, I did not gather stats; I tried with stats on the target table, no improvement (a minimal gather is sketched below).
    2) Why do you have to hint the query? What happens if you remove the hint?
    If I take out the hint it takes a bit longer.
    3) Why are you merging 14 million rows every day? Are you sure that you have to update the existing
    rows and that you can't just load each day's records into a new partition with a new effective date?
    Yes, I need to load 14 million records every day and merge them with the target table every day.
    The reason I cannot take just one day's records is that the txt file comes with appended records (the whole year's data is in the text file).
    Currently in my merge there is no update, only insert.
    Justin
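    Since no stats were gathered at all, a minimal gather on both tables (names taken from the posted MERGE) would be the first thing to try:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'PIS_TEST1_A', cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'PIS_TEST11',  cascade => TRUE);
    END;
    /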
    Hi greg
    my answers are below.
    As Justin mentioned, this index hint seems unlikely to help matters
    (is this hint even syntactically correct anyway?). Depending on your responses to
    Justin's questions and the number of columns you are updating,
    you could also consider adding a condition(s) to the ON clause that specifically
    checks that the columns you want to update are actually different. The actual act of updating
    can be a lot slower than just checking for differences.
    AND s.col1 != t.col1
    -- I am not updating at all, only inserting
    If you are running this one time a day only, you could also consider a /*+ parallel */ hint (or hints) to speed things up.
    I cannot use the parallel & append hints because I only use insert in the merge (an insert-only alternative is sketched at the end of this thread).
    If I use the parallel & append hints, it corrupted the index because I use insert only in the merge.
    Rgds
    ak
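    Since this MERGE never updates, an anti-join insert is worth benchmarking against it. A sketch using the tables and join columns from the post; SELECT t.* assumes the column lists of the two tables line up, which only the poster can confirm:
    INSERT INTO scott.pis_test1_a
    SELECT t.*
      FROM scott.pis_test11 t
     WHERE NOT EXISTS (
             SELECT 1
               FROM scott.pis_test1_a s
              WHERE s.branch_code        = t.branch_code
                AND NVL(s.shpmt_no,'X')  = NVL(t.shpmt_no,'X')
                AND s.invoice_no         = t.invoice_no
                AND s.voucher_type       = t.voucher_type
                AND s.suffix_invoice     = t.suffix_invoice
                AND TRIM(s.suffix_shpmt) = TRIM(t.suffix_shpmt));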

  • Performance problem with MERGE statement

    Version : 11.1.0.7.0
    I have an insert statement like the following, which takes less than 2 secs to complete and inserts around 4000 rows:
    INSERT INTO sch.tab1
              (c1,c2,c3)
    SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
    MERGE INTO sch.tab1 t1
    USING (SELECT c1,c2,c3
             FROM sch1.tab1@dblink
            WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
    INSERT (t1.c1,t1.c2,t1.c3)
    VALUES (t2.c1,t2.c2,t2.c3);
    The MERGE statement is taking more than 2 mins (and I stopped the execution after that). I removed the WHERE clause subquery inside the subquery of the USING section and it executed in 1 sec.
    If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
    Is there any known issue with the MERGE statement in the above scenario?

    riedelme wrote:
    Are your join columns indexed?
    Yes, the join columns are indexed.
    You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
    Yes, I agree that remote queries will slow things down. But the same is not happening with select, insert and PL/SQL; it happens only when we are using MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view (a local staging sketch is at the end of this thread). Even if it works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we can know whether it is a genuine problem or some specific problem on my side.
    >
    BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
    Edited by: riedelme on Jul 28, 2009 12:12 PM
    :) I used the same to overcome this situation. I think MERGE still needs to be improved functionally on Oracle's side. I personally feel that it is one of the robust features to grace SQL and PL/SQL.
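    For what it's worth, a sketch of the "copy locally first" idea discussed above: stage the remote rows once, then MERGE from the local stage. The column datatypes are assumptions; table and column names are as in the post.
    CREATE GLOBAL TEMPORARY TABLE stage_tab1 (c1 NUMBER, c2 NUMBER, c3 NUMBER)
      ON COMMIT PRESERVE ROWS;
    INSERT INTO stage_tab1 (c1, c2, c3)
    SELECT c1, c2, c3
      FROM sch1.tab1@dblink
     WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    MERGE INTO sch.tab1 t1
    USING stage_tab1 t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
      INSERT (t1.c1, t1.c2, t1.c3)
      VALUES (t2.c1, t2.c2, t2.c3);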

  • Error executing a stored procedure from SSIS using the MERGE statement between databases

    Good morning,
    I'm trying to execute from SSIS a stored procedure that compares the content of two tables on different databases in the same server and updates one of them. To perform this action, I've created a stored procedure in the destination database and I'm
    comparing the data between tables with the MERGE statement. When I execute the procedure on the destination database the error that I obtain is:
    "Msg 916, Level 14, State 1, Procedure RefreshDestinationTable, Line 13
    The server principal "XXXX" is not able to access the database "XXXX" under the current security context."
    Some things to take in account:
    1. I've created a temporary table on the same destination database to check if the problem was on the MERGE statement and it works fine.
    2. I've created the procedure with the option "WITH EXECUTE AS DBO".
    I've read that it can be a permissions problem, but since I'm executing the procedure from SSIS I don't know which user/login I should give permissions to, and which permissions.
    Could you give me some tip to continue investigating how to solve the problem?
    Thank you,
    Virgilio

    Read Erland's article http://www.sommarskog.se/grantperm.html
    Best Regards, Uri Dimant (SQL Server MVP)
    http://sqlblog.com/blogs/uri_dimant/
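    As a first diagnostic (a sketch only, no names from your environment): since database-level EXECUTE AS impersonation is by default trusted only inside its own database, it can help to see exactly which principal the procedure is running as when it reaches the other database:
    -- temporarily return the execution context from inside the procedure
    SELECT SUSER_SNAME()    AS server_principal,
           USER_NAME()      AS database_principal,
           ORIGINAL_LOGIN() AS original_login;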

  • Problem in Merge statement

    Hi All,
    I am using a merge statement to update 30000 records in tables having 55 lakh (5.5 million) records.
    But it is taking so much time that I had to kill the session after 12 hours, as it was still going on.
    If I do the same update using cursors, it finishes in less than 3 hours.
    The merge I was using is:
    MERGE INTO Table1 a
    USING (SELECT MAX (TO_DATE (TO_CHAR (contact_date, 'dd/mm/yyyy')
                                || contact_time,
                                'dd/mm/yyyy HH24:Mi:SS')) m_condate,
                  appl_id
             FROM Table2 b,
                  (SELECT DISTINCT acc_no acc_no
                     FROM Table3, Table1
                    WHERE acc_no = appl_id AND delinquent_flag = 'Y'
                      AND financier_id = 'NEWACLOS') d
            WHERE d.acc_no = b.appl_id
              AND (contacted_by IS NOT NULL
                   OR followup_branch_code IS NOT NULL)
            GROUP BY appl_id) c
    ON (a.appl_id = c.appl_id AND a.delinquent_flag = 'Y')
    WHEN MATCHED THEN
    UPDATE
      SET last_contact_date = c.m_condate;
    In this query table1 has 30000 records, and table2 and table3 have 3670955 and 555674 records respectively.
    Please suggest what I am doing wrong in the merge, because as per my understanding a merge statement is much better than updates or updates using cursors.
    Required info is as follows:
    SQL> show parameter user_dump_dest
    NAME TYPE VALUE
    user_dump_dest string /opt/oracle/admin/FINCLUAT/udump
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.4
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 16
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL> select
    2 sname     ,
    3 pname     ,
    4 pval1     ,
    5 pval2
    6 from
    7 sys.aux_stats$;
    sys.aux_stats$
    ERROR at line 7:
    ORA-00942: table or view does not exist
    Elapsed: 00:00:00.05
    SQL> explain plan for
    2 -- put your statement here
    3 MERGE INTO cs_case_info a
    4      USING (SELECT MAX (TO_DATE ( TO_CHAR (contact_date, 'dd/mm/yyyy')
    5                          || contact_time,
    6                          'dd/mm/yyyy HH24:Mi:SS'
    7                          )
    8                ) m_condate,
    9                appl_id
    10           FROM CS_CASE_DETAILS_ACLOS b,
    11                (SELECT DISTINCT acc_no acc_no
    12                FROM NEWACLOS_RESEARCH_HIST_AYLA, cs_case_info
    13                WHERE acc_no=appl_id AND delinquent_flag= 'Y'
    14                AND financier_id='NEWACLOS') d
    15           WHERE d.acc_no = b.appl_id
    16           AND ( contacted_by IS NOT NULL
    17                OR followup_branch_code IS NOT NULL
    18                )
    19           GROUP BY appl_id) c
    20      ON (a.appl_id = c.appl_id AND a.delinquent_flag = 'Y')
    21      WHEN MATCHED THEN
    22      UPDATE
    23           SET last_contact_date = c.m_condate
    24      ;
    Explained.
    Elapsed: 00:00:00.08
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | MERGE STATEMENT | | 47156 | 874K| | 128K (1)|
    | 1 | MERGE | CS_CASE_INFO | | | | |
    | 2 | VIEW | | | | | |
    | 3 | HASH JOIN | | 47156 | 36M| 5672K| 128K (1)|
    | 4 | VIEW | | 47156 | 5111K| | 82339 (1)|
    | 5 | SORT GROUP BY | | 47156 | 4236K| 298M| 82339 (1)|
    | 6 | HASH JOIN | | 2820K| 247M| 10M| 60621 (1)|
    | 7 | HASH JOIN | | 216K| 7830K| | 6985 (1)|
    | 8 | VIEW | index$_join$_012 | 11033 | 258K| | 1583 (1)|
    | 9 | HASH JOIN | | | | | |
    | 10 | INDEX RANGE SCAN | IDX_CCI_DEL | 11033 | 258K| | 768 (1)|
    | 11 | INDEX RANGE SCAN | CS_CASE_INFO_UK | 11033 | 258K| | 821 (1)|
    | 12 | INDEX FAST FULL SCAN| IDX_NACL_RSH_ACC_NO | 5539K| 68M| | 5382 (1)|
    | 13 | TABLE ACCESS FULL | CS_CASE_DETAILS_ACLOS | 3670K| 192M| | 41477 (1)|
    | 14 | TABLE ACCESS FULL | CS_CASE_INFO | 304K| 205M| | 35975 (1)|
    Note
    - 'PLAN_TABLE' is old version
    24 rows selected.
    Elapsed: 00:00:01.04
    SQL> rollback;
    Rollback complete.
    Elapsed: 00:00:00.03
    SQL> set autotrace traceonly arraysize 100
    SQL> alter session set events '10046 trace name context forever, level 8';
    ERROR:
    ORA-01031: insufficient privileges
    Elapsed: 00:00:00.04
    SQL>      disconnect
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL>      spool off
    Edited by: user4528984 on May 5, 2009 10:37 PM

    For one thing, alias your tables and use that in the column specifications (table1.column1 = table2.column3 for example)...
          SELECT
             DISTINCT
                acc_no acc_no
          FROM Table3, Table1
          WHERE acc_no            = appl_id
          AND   delinquent_flag   = 'Y'
          AND   financier_id      = 'NEWACLOS'
    We don't know what your tables look like or which columns come from where (an illustrative aliased rewrite is below). Try and help us help you; assume we know NOTHING about YOUR system, because more likely than not, that's going to be the case.
    In addition to that, please read through this, which will give you a better idea of how to post a tuning-related question; you've not provided nearly enough information for us to help you intelligently.
    HOW TO: Post a SQL statement tuning request - template posting
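    For illustration only, that subquery with explicit aliases might look like the following; which table actually owns acc_no, appl_id, delinquent_flag and financier_id is a guess that only the original poster can confirm:
          SELECT DISTINCT t3.acc_no
            FROM Table3 t3,
                 Table1 t1
           WHERE t3.acc_no          = t1.appl_id
             AND t1.delinquent_flag = 'Y'
             AND t3.financier_id    = 'NEWACLOS';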

  • Using hints in MERGE statement

    I have a merge statement.
    In that statement, for the select clause, I am using index hints.
    Can I use them?
    Will that increase the performance, or will the reverse happen?
    Any comments?

    Hints should always be your last option. First try to tune the SQL without using any hints; in most cases you will be OK. Over time, when the table statistics (e.g. row counts) change considerably, hints may have a negative impact. A sketch of where a hint can go in a MERGE follows.
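    For reference, a sketch of where such a hint usually goes (hypothetical table, column and index names); the only way to know whether it helps is to compare the plans and run times with and without it:
    MERGE INTO target_tab tgt
    USING (SELECT /*+ INDEX(s source_tab_idx1) */ s.id, s.val
             FROM source_tab s
            WHERE s.load_date = TRUNC(SYSDATE)) src
    ON (tgt.id = src.id)
    WHEN MATCHED THEN UPDATE SET tgt.val = src.val;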
