Performance of merge statement

Hi all,
Any advice or tips on how to optimize the performance of a MERGE statement?
Can indexes on the target/source tables help?
Thanks

user2361373 wrote:
you cannot improve the performance of merge
A bit of a misleading answer: when the merge encompasses a query that can be improved, the merge performance can be improved.
user2361373 wrote:
but the source query inside merge is to be optimized based on rowid or primary key update runs faster.
There are many ways to improve a query, and it all depends on the query itself. It doesn't necessarily have to do with ROWID or the primary key. First, the cause of the performance issue needs identifying.
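To illustrate, a minimal sketch (all table and column names here are made up): the part you tune is almost always the source/join query inside USING, plus the access path for the join in the ON clause.

MERGE INTO target_tab t
USING (SELECT key_col, val_col           -- this subquery is what you tune:
       FROM   source_tab                 -- predicates, indexes, statistics
       WHERE  load_date >= :start_dt) s
ON (t.key_col = s.key_col)               -- an index on target_tab(key_col) can help the join
WHEN MATCHED THEN
  UPDATE SET t.val_col = s.val_col
WHEN NOT MATCHED THEN
  INSERT (key_col, val_col) VALUES (s.key_col, s.val_col);

Running the statement with the gather_plan_statistics hint and comparing estimated versus actual rows in DBMS_XPLAN.DISPLAY_CURSOR output, as the threads below do, is a good way to find where a plan goes wrong.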

Similar Messages

  • Automatic Parallelism causes Merge statement to take longer.

    We have a problem in a new project: as part of the ETL load into the Oracle data warehouse, we perform a merge statement to update rows in a global temporary table, then load the results into a permanent table. When testing with automatic parallel execution enabled, the plan changes and the merge never finishes, consuming vast amounts of resources.
    The environment:
    Database version: 11.2.0.3
    OS: Red Hat 64-bit
    Three-node RAC, 20 cores per node
    When executing serially, the query response is typically similar to the following:
    MERGE /*+ gather_plan_statistics no_parallel */ INTO T_GTTCHARGEVALUES USING
      (SELECT
      CASTACCOUNTID,
      CHARGESCHEME,
      MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
      MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT
    FROM
      V_CACHARGESALL
    WHERE
      CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
      AND CHARGEDATE < TO_DATE(:B1,'YYYY-MM-DD')
    GROUP BY
       CASTACCOUNTID,
       CHARGESCHEME
    HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
    ON
      (T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
      T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME)
    WHEN MATCHED
    THEN UPDATE SET
      CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
      CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT;
    1448340 rows merged.
    select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST')); 
    | Id  | Operation                       | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |   0 | MERGE STATEMENT                 |                   |      1 |        |      0 |00:03:08.43 |    2095K|    186K|       |       |          |
    |   1 |  MERGE                          | T_GTTCHARGEVALUES |      1 |        |      0 |00:03:08.43 |    2095K|    186K|       |       |          |
    |   2 |   VIEW                          |                   |      1 |        |   1448K|00:02:53.14 |     619K|    177K|       |       |          |
    |*  3 |    HASH JOIN                    |                   |      1 |      1 |   1448K|00:02:52.70 |     619K|    177K|   812K|   812K| 1218K (0)|
    |   4 |     VIEW                        |                   |      1 |      1 |    203 |00:02:51.26 |     608K|    177K|       |       |          |
    |*  5 |      FILTER                     |                   |      1 |        |    203 |00:02:51.26 |     608K|    177K|       |       |          |
    |   6 |       SORT GROUP BY             |                   |      1 |      1 |    480 |00:02:51.26 |     608K|    177K| 73728 | 73728 |          |
    |*  7 |        FILTER                   |                   |      1 |        |     21M|00:02:56.04 |     608K|    177K|       |       |          |
    |   8 |         PARTITION RANGE ITERATOR|                   |      1 |    392K|     21M|00:02:51.32 |     608K|    177K|       |       |          |
    |*  9 |          TABLE ACCESS FULL      | T_CACHARGES       |     24 |    392K|     21M|00:02:47.48 |     608K|    177K|       |       |          |
    |  10 |     TABLE ACCESS FULL           | T_GTTCHARGEVALUES |      1 |   1451K|   1451K|00:00:00.48 |   10980 |      0 |       |       |          |
    Predicate Information (identified by operation id):
       3 - access("T_GTTCHARGEVALUES"."CASTACCOUNTID"="MTOTAL"."CASTACCOUNTID" AND "T_GTTCHARGEVALUES"."CHARGESCHEME"="MTOTAL"."CHARGESCHEME")
       5 - filter(MAX("CUMULATIVECOUNT") IS NOT NULL)
       7 - filter(TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm')<TO_DATE(:B1,'YYYY-MM-DD'))
       9 - filter(("LOGICALLYDELETED"=0 AND "CHARGEDATE">=TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm') AND "CHARGEDATE"<TO_DATE(:B1,'YYYY-MM-DD')))
    Removing the no_parallel hint results in the following (pulled from the SQL monitoring report and edited to remove the lines relating to individual parallel servers):
    I understand that the query is considered for parallel execution due to the estimated length of time it will run, and although the degree of parallelism seems excessive, it is the default maximum for the server configuration. What we are trying to understand is which statistics could be inaccurate or missing and could cause this kind of problem.
    In this case we can add the no_parallel hint in the ETL package as a workaround, but we would really like to identify the root cause to avoid similar problems elsewhere.
    SQL Monitoring Report
    SQL Text
    MERGE INTO T_GTTCHARGEVALUES USING (SELECT CASTACCOUNTID, CHARGESCHEME, MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
    MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT FROM V_CACHARGESALL WHERE CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
    AND CHARGEDATE < to_date(:B1,'YYYY-MM-DD')
    GROUP BY CASTACCOUNTID, CHARGESCHEME HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
    ON (T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
    T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME) WHEN MATCHED THEN UPDATE SET
    CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
    CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT
    Error: ORA-1013
    ORA-01013: user requested cancel of current operation
    Global Information
    Status              :  DONE (ERROR)
    Instance ID         :  1
    Session             :  XXXX(2815:12369)
    SQL ID              :  70kzttjbyyspt
    SQL Execution ID    :  16777216
    Execution Started   :  04/27/2012 09:43:27
    First Refresh Time  :  04/27/2012 09:43:27
    Last Refresh Time   :  04/27/2012 09:48:43
    Duration            :  316s
    Module/Action       :  SQL*Plus/-
    Service             :  SYS$USERS
    Program             :  sqlplus@XXXX (TNS V1-V3)
    Binds
    ========================================================================================================================
    | Name | Position |     Type     |                                        Value                                        |
    ========================================================================================================================
    | :B1  |        1 | VARCHAR2(32) | 2012-04-25                                                                          |
    ========================================================================================================================
    Global Stats
    ====================================================================================================================
    | Elapsed | Queuing |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Buffer | Read | Read  |
    | Time(s) | Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes |
    ====================================================================================================================
    |    7555 |    0.00 |    4290 |     2812 |        0.08 |          27 |      183 |      243 |     3M | 294K |   7GB |
    ====================================================================================================================
    SQL Plan Monitoring Details (Plan Hash Value=323941584)
    ==========================================================================================================================================================================================================
    | Id |             Operation             |       Name        |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |                Activity Detail                |
    |    |                                   |                   | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |                  (# samples)                  |
    ==========================================================================================================================================================================================================
    |  0 | MERGE STATEMENT                   |                   |         |       |           |        |     1 |          |      |       |       |          |                                               |
    |  1 |   MERGE                           | T_GTTCHARGEVALUES |         |       |           |        |     1 |          |      |       |       |          |                                               |
    |  2 |    PX COORDINATOR                 |                   |         |       |        57 |     +1 |   481 |        0 |  317 |   5MB |       |     4.05 | latch: shared pool (40)                       |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | os thread startup (17)                        |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (7)                                       |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | DFS lock handle (36)                          |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | SGA: allocation forcing component growth (14) |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | latch: parallel query alloc buffer (200)      |
    |  3 |     PX SEND QC (RANDOM)           | :TQ10003          |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
    |  4 |      VIEW                         |                   |         |       |           |        |       |          |      |       |       |          |                                               |
    |  5 |       FILTER                      |                   |         |       |           |        |       |          |      |       |       |          |                                               |
    |  6 |        SORT GROUP BY              |                   |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
    |  7 |         PX RECEIVE                |                   |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
    |  8 |          PX SEND HASH             | :TQ10002          |       1 | 19054 |           |        |   240 |          |      |       |       |          |                                               |
    |  9 |           SORT GROUP BY           |                   |       1 | 19054 |       246 |    +70 |   240 |        0 |      |       |  228M |    49.32 | Cpu (3821)                                    |
    | 10 |            FILTER                 |                   |         |       |       245 |    +71 |   240 |       3G |      |       |       |     0.08 | Cpu (6)                                       |
    | 11 |             HASH JOIN             |                   |       1 | 19054 |       259 |    +57 |   240 |       3G |      |       |  276M |     4.31 | Cpu (334)                                     |
    | 12 |              PX RECEIVE           |                   |      1M |     5 |       259 |    +57 |   240 |       1M |      |       |       |     0.04 | Cpu (3)                                       |
    | 13 |               PX SEND HASH        | :TQ10000          |      1M |     5 |         6 |    +56 |   240 |       1M |      |       |       |     0.01 | Cpu (1)                                       |
    | 14 |                PX BLOCK ITERATOR  |                   |      1M |     5 |         6 |    +56 |   240 |       1M |      |       |       |     0.03 | Cpu (1)                                       |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | PX Deq: reap credit (1)                       |
    | 15 |                 TABLE ACCESS FULL | T_GTTCHARGEVALUES |      1M |     5 |         7 |    +55 |  5486 |       1M | 5487 |  86MB |       |     2.31 | gc cr grant 2-way (3)                         |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block lost (7)                     |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (7)                                       |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file sequential read (162)                 |
    | 16 |              PX RECEIVE           |                   |     78M | 19047 |       255 |    +61 |   240 |     801K |      |       |       |     0.03 | IPC send completion sync (2)                  |
    | 17 |               PX SEND HASH        | :TQ10001          |     78M | 19047 |       250 |    +66 |   240 |       3M |      |       |       |     0.06 | Cpu (5)                                       |
    | 18 |                PX BLOCK ITERATOR  |                   |     78M | 19047 |       250 |    +66 |   240 |       4M |      |       |       |          |                                               |
    | 19 |                 TABLE ACCESS FULL | T_CACHARGES       |     78M | 19047 |       254 |    +62 |  1016 |       4M | 288K |   6GB |       |    37.69 | gc buffer busy acquire (104)                  |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr block 2-way (1)                         |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr block lost (9)                          |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr grant 2-way (14)                        |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr multi block request (1)                 |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block 2-way (3)                    |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block 3-way (2)                    |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block busy (1)                     |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current grant busy (2)                     |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (58)                                      |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | latch: gc element (1)                         |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file parallel read (26)                    |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file scattered read (207)                  |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file sequential read (2433)                |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | direct path read (1)                          |
    |    |                                   |                   |         |       |           |        |       |          |      |       |       |          | read by other session (57)                    |
    ==========================================================================================================================================================================================================
    Parallel Execution Details (DOP=240 , Servers Allocated=480)
    Instances  : 3
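    A session-level alternative to hinting every statement would be to switch Auto DOP off for the ETL session; a sketch (not verified on the system above):

    -- disable automatic degree-of-parallelism decisions for this session only
    ALTER SESSION SET parallel_degree_policy = MANUAL;
    -- the statement-level equivalent is the /*+ no_parallel */ hint already shown above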

    chris_c wrote:
    | Id  | Operation                       | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |*  9 |          TABLE ACCESS FULL      | T_CACHARGES       |     24 |    392K|     21M|00:02:47.48 |     608K|    177K|       |       |          |
    Based on the discrepancy between the estimated number of rows and the actual, and the bind value of 2012-04-25 posted below, I'd first be checking whether the statistics on T_CACHARGES are up to date.
    As a reference
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4399338600346902127
    So that would be my first avenue of exploration.
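    For example, a quick check along those lines (a sketch; only the table name comes from the thread):

    -- when were the stats last gathered, and does Oracle consider them stale?
    SELECT owner, table_name, num_rows, last_analyzed, stale_stats
    FROM   all_tab_statistics
    WHERE  table_name = 'T_CACHARGES';

    -- regather if needed (schema and options are assumptions)
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'T_CACHARGES')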
    Cheers,

  • Performance problem with MERGE statement

    Version : 11.1.0.7.0
    I have an insert statement like the following, which takes less than 2 seconds to complete and inserts around 4,000 rows:
    INSERT INTO sch.tab1
              (c1,c2,c3)
    SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
    MERGE INTO sch.tab1 t1
    USING (SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
    INSERT (t1.c1,t1.c2,t1.c3)
    VALUES (t2.c1,t2.c2,t2.c3);
    The MERGE statement is taking more than 2 minutes (and I stopped the execution after that). I removed the WHERE clause subquery inside the subquery of the USING section and it executed in 1 second.
    If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
    Is there any known issue with MERGE statement while implementing using above scenario?

    riedelme wrote:
    Are your join columns indexed?
    Yes, the join columns are indexed.
    You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
    Yes, I agree that remote queries will slow things down. But the same is not happening with SELECT, INSERT and PL/SQL; it happens only when we are using MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if it works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we could know whether it is a genuine problem or something specific on my side.
    >
    BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
    Edited by: riedelme on Jul 28, 2009 12:12 PM
    :) I used the same to overcome this situation. I think MERGE still needs to be improved functionally on Oracle's side. I personally feel that it is one of the most robust features to grace SQL and PL/SQL.
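    For reference, a minimal sketch of the local-staging idea discussed above (object names from the thread; the refresh strategy is an assumption):

    -- stage the remote rows locally once...
    CREATE MATERIALIZED VIEW tab1_mv
      REFRESH COMPLETE ON DEMAND
      AS SELECT c1, c2, c3
         FROM   sch1.tab1@dblink
         WHERE  c1 IN (SELECT c1 FROM sch1.tab2@dblink);

    -- ...then the MERGE joins two local row sources
    MERGE INTO sch.tab1 t1
    USING tab1_mv t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
      INSERT (t1.c1, t1.c2, t1.c3)
      VALUES (t2.c1, t2.c2, t2.c3);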

  • Performance Tuning of a merge statement

    Hi,
    The query below is occupying 120 GB of temp tablespace, and the explain plan does not show it.
    Can someone please help me with this.
    explain plan for
    MERGE INTO BKMAIN.BK_CUST_OD_PEAK_SUM TGT
    USING (
      WITH OD_MAIN AS (
        SELECT MAX(CASE
                     WHEN CUST_BAL_MAX.BK_TRN_TS <= CUST_BAL_TEMP.BK_TRN_TS
                      AND CUST_BAL_MAX.BK_CUR_BAL_RPT_CCY_AM >= 0
                     THEN CUST_BAL_MAX.BK_TRN_TS
                     ELSE NULL
                   END) T_TMP_TRN_TS,
               MIN(CASE
                     WHEN CUST_BAL_MAX.BK_TRN_TS >= CUST_BAL_TEMP.BK_TRN_TS
                      AND CUST_BAL_MAX.BK_CUR_BAL_RPT_CCY_AM >= 0
                     THEN CUST_BAL_MAX.BK_TRN_TS
                     ELSE NULL
                   END) T_TMP_TRN_TS1,
               CUST_BAL_TEMP.BK_BUS_EFF_DT,
               CUST_BAL_TEMP.BK_CUR_BAL_RPT_CCY_AM,
               CUST_BAL_TEMP.BK_PDAY_CLS_BAL_RPT_CCY_AM,
               CUST_BAL_MAX.N_CUST_SKEY
        FROM BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL_MAX,
             (SELECT TRN_SUM.N_CUST_SKEY,
                     TRN_SUM.BK_BUS_EFF_DT,
                     TRN_SUM.BK_TRN_TS,
                     TRN_SUM.BK_CUR_BAL_RPT_CCY_AM,
                     CUST_OD_RSLT.BK_PDAY_CLS_BAL_RPT_CCY_AM
              FROM BKMAIN.BK_CUST_TRN_TM_BAL_SUM TRN_SUM,
                   BKMAIN.BK_CUST_OD_PEAK_SUM CUST_OD_RSLT
              WHERE (TRN_SUM.BK_BUS_EFF_DT = '02-APR-2013'
                 AND TRN_SUM.N_CUST_SKEY = CUST_OD_RSLT.N_CUST_SKEY
                 AND TRN_SUM.BK_BUS_EFF_DT = CUST_OD_RSLT.BK_BUS_EFF_DT
                 AND TRN_SUM.BK_CUR_BAL_RPT_CCY_AM = (-1 * CUST_OD_RSLT.BK_MAX_OD_RPT_CCY_AM))
             ) CUST_BAL_TEMP
        WHERE CUST_BAL_MAX.BK_BUS_EFF_DT = '02-APR-2013'
          AND CUST_BAL_MAX.N_CUST_SKEY = CUST_BAL_TEMP.N_CUST_SKEY
          AND CUST_BAL_MAX.BK_BUS_EFF_DT = CUST_BAL_TEMP.BK_BUS_EFF_DT
        GROUP BY CUST_BAL_MAX.N_CUST_SKEY,
                 CUST_BAL_TEMP.BK_BUS_EFF_DT,
                 CUST_BAL_TEMP.BK_CUR_BAL_RPT_CCY_AM,
                 CUST_BAL_TEMP.BK_PDAY_CLS_BAL_RPT_CCY_AM
      )
      SELECT N_CUST_SKEY,
             BK_BUS_EFF_DT,
             CASE
               WHEN T_TMP_TRN_TS IS NOT NULL
               THEN (SELECT CUST_BAL.BK_CUR_BAL_END_TS
                     FROM BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                     WHERE CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                       AND CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                       AND CUST_BAL.BK_TRN_TS = OD_MAIN.T_TMP_TRN_TS)
               WHEN (T_TMP_TRN_TS IS NULL AND OD_MAIN.BK_PDAY_CLS_BAL_RPT_CCY_AM < 0)
               THEN BK_FN_GET_STRT_EOD_BUS_TS(1, '02-APR-2013', 'S')
               WHEN (T_TMP_TRN_TS IS NULL AND OD_MAIN.BK_PDAY_CLS_BAL_RPT_CCY_AM >= 0)
               THEN (SELECT MIN(CUST_BAL.BK_TRN_TS)
                     FROM BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                     WHERE CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                       AND CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                       AND CUST_BAL.BK_OD_FL = 'Y')
             END T_MAX_OD_STRT_TS,
             CASE
               WHEN T_TMP_TRN_TS1 IS NOT NULL
               THEN (SELECT CUST_BAL.BK_CUR_BAL_STRT_TS
                     FROM BKMAIN.BK_CUST_TRN_TM_BAL_SUM CUST_BAL
                     WHERE CUST_BAL.BK_BUS_EFF_DT = '02-APR-2013'
                       AND CUST_BAL.N_CUST_SKEY = OD_MAIN.N_CUST_SKEY
                       AND CUST_BAL.BK_TRN_TS = OD_MAIN.T_TMP_TRN_TS1)
               WHEN (T_TMP_TRN_TS1 IS NULL)
               THEN BK_FN_GET_STRT_EOD_BUS_TS(1, '02-APR-2013', 'E')
             END T_MAX_OD_END_TS
      FROM OD_MAIN
    ) SRC
    ON (TGT.N_CUST_SKEY = SRC.N_CUST_SKEY
        AND TGT.BK_BUS_EFF_DT = SRC.BK_BUS_EFF_DT
        AND TGT.BK_BUS_EFF_DT = '02-APR-2013')
    WHEN MATCHED THEN
    UPDATE SET BK_MAX_OD_STRT_TS = T_MAX_OD_STRT_TS,
               BK_MAX_OD_END_TS = T_MAX_OD_END_TS;
    set linesize 2000;
    select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    Plan hash value: 2341776056
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | MERGE STATEMENT | | 1 | 54 | 2035 (1)| 00:00:29 |
    | 1 | MERGE | BK_CUST_OD_PEAK_SUM | | | | |
    |* 2 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 35 | 4 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 1 | | 3 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 35 | 4 (0)| 00:00:01 |
    |* 5 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 1 | | 3 (0)| 00:00:01 |
    | 6 | SORT AGGREGATE | | 1 | 26 | | |
    |* 7 | TABLE ACCESS BY INDEX ROWID | BK_CUST_TRN_TM_BAL_SUM | 1 | 26 | 9 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 5 | | 3 (0)| 00:00:01 |
    | 9 | VIEW | | | | | |
    | 10 | NESTED LOOPS | | | | | |
    | 11 | NESTED LOOPS | | 1 | 173 | 2035 (1)| 00:00:29 |
    | 12 | VIEW | | 1 | 61 | 2033 (1)| 00:00:29 |
    | 13 | SORT GROUP BY | | 1 | 85 | 2033 (1)| 00:00:29 |
    | 14 | NESTED LOOPS | | | | | |
    | 15 | NESTED LOOPS | | 1 | 85 | 2032 (1)| 00:00:29 |
    |* 16 | HASH JOIN | | 1 | 54 | 2024 (1)| 00:00:29 |
    |* 17 | TABLE ACCESS STORAGE FULL| BK_CUST_OD_PEAK_SUM | 18254 | 410K| 118 (0)| 00:00:02 |
    |* 18 | TABLE ACCESS STORAGE FULL| BK_CUST_TRN_TM_BAL_SUM | 370K| 10M| 1904 (1)| 00:00:27 |
    |* 19 | INDEX RANGE SCAN | PK_BK_CUST_TRN_TM_BAL_SUM | 5 | | 2 (0)| 00:00:01 |
    |* 20 | TABLE ACCESS BY INDEX ROWID| BK_CUST_TRN_TM_BAL_SUM | 3 | 93 | 8 (0)| 00:00:01 |
    |* 21 | INDEX RANGE SCAN | PK_BK_CUST_OD_PEAK_SUM | 1 | | 1 (0)| 00:00:01 |
    | 22 | TABLE ACCESS BY INDEX ROWID | BK_CUST_OD_PEAK_SUM | 1 | 112 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    3 - access("CUST_BAL"."N_CUST_SKEY"=:B1 AND "CUST_BAL"."BK_TRN_TS"=:B2)
    4 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    5 - access("CUST_BAL"."N_CUST_SKEY"=:B1 AND "CUST_BAL"."BK_TRN_TS"=:B2)
    7 - filter("CUST_BAL"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
    "CUST_BAL"."BK_OD_FL"='Y')
    8 - access("CUST_BAL"."N_CUST_SKEY"=:B1)
    16 - access("TRN_SUM"."N_CUST_SKEY"="CUST_OD_RSLT"."N_CUST_SKEY" AND
    "TRN_SUM"."BK_BUS_EFF_DT"="CUST_OD_RSLT"."BK_BUS_EFF_DT" AND
    "TRN_SUM"."BK_CUR_BAL_RPT_CCY_AM"=(-1)*"CUST_OD_RSLT"."BK_MAX_OD_RPT_CCY_AM")
    17 - storage("CUST_OD_RSLT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    filter("CUST_OD_RSLT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    18 - storage("TRN_SUM"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    filter("TRN_SUM"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    19 - access("CUST_BAL_MAX"."N_CUST_SKEY"="TRN_SUM"."N_CUST_SKEY")
    20 - filter("CUST_BAL_MAX"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
    AND "CUST_BAL_MAX"."BK_BUS_EFF_DT"="TRN_SUM"."BK_BUS_EFF_DT")
    21 - access("TGT"."N_CUST_SKEY"="N_CUST_SKEY" AND "TGT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02
    00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    filter("TGT"."BK_BUS_EFF_DT"=TO_DATE(' 2013-04-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    53 rows selected.

    Hi
    sb92075 wrote:
    it appears that the STATISTICS do NOT reflect reality; or do you really have many tables with 1 row?
    Not necessarily (and not even likely):
    1) explain plan shows the expected number of rows after filters are applied, so even if stats are perfectly correct but predicates are correlated, it's easy to get cardinality = 1, because the optimizer has no way of knowing correlations between columns (unless you're on 11g and have collected extended stats on this column group)
    2) in explain plan, cardinalities of driven operations are shown per iteration. E.g.:
    NESTED LOOP cardinality = 1,000,000
      TABLE ACCESS FULL A cardinality = 1,000,000
      TABLE ACCESS BY ROWID B cardinality = 1
        INDEX UNIQUE SCAN PK$B cardinality = 1
    This doesn't mean that the optimizer expects to find 1 row in table B, with or without filters; it means that there will be 1 row per each of the 1,000,000 iterations.
    In this specific case, the most suspicious operation in the plan is HASH JOIN 16: first, because it's highly unusual to have 18k rows in one table and 370k in another
    and find only 1 match; second, because it's a 3-column join, which probably explains why the join cardinality is estimated so low.
    Often, such problems are mitigated by multicolumn join sanity checks, so maybe the OP is either on an old version of Oracle that doesn't have these checks, or these checks are disabled for some reason.
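    For completeness, the extended statistics mentioned in point 1 look like this (a sketch; the column group here is a guess based on the join in operation 16):

    -- 11g+: tell the optimizer these columns are correlated
    SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
             ownname   => 'BKMAIN',
             tabname   => 'BK_CUST_TRN_TM_BAL_SUM',
             extension => '(N_CUST_SKEY, BK_BUS_EFF_DT)')
    FROM dual;

    -- then regather stats so the column group gets statistics
    EXEC DBMS_STATS.GATHER_TABLE_STATS('BKMAIN', 'BK_CUST_TRN_TM_BAL_SUM')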
    Best regards,
    Nikolay

  • Error executing a stored procedure from SSIS using the MERGE statement between databases

    Good morning,
    I'm trying to execute from SSIS a stored procedure that compares the content of two tables on different databases in the same server and updates one of them. To perform this action, I've created a stored procedure in the destination database and I'm
    comparing the data between tables with the MERGE statement. When I execute the procedure on the destination database the error that I obtain is:
    "Msg 916, Level 14, State 1, Procedure RefreshDestinationTable, Line 13
    The server principal "XXXX" is not able to access the database "XXXX" under the current security context."
    Some things to take in account:
    1. I've created a temporary table on the same destination database to check if the problem was on the MERGE statement and it works fine.
    2. I've created the procedure with the option "WITH EXECUTE AS DBO".
    I've read that it can be a problem of permissions, but I don't know, when executing the procedure from SSIS, which user/login I should grant permissions to, and which permissions.
    Could you give me some tip to continue investigating how to solve the problem?
    Thank you,
    Virgilio

    Read Erland's article: http://www.sommarskog.se/grantperm.html
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Using hints in MERGE statement

    I have a merge statement, and in the SELECT clause of that statement I am using index hints.
    Can I use them? Will that increase performance, or will the reverse happen?
    Any comments?

    Hints should always be your last option. First try to tune the SQL without using any hints; in most cases you will be OK. Over time, when the table statistics (e.g. row counts) change considerably, hints may have a negative impact.
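    For reference, if you do test a hint, one targeting the source query's access path belongs inside the USING subquery (all names below are hypothetical):

    MERGE INTO t
    USING (SELECT /*+ INDEX(s idx_s_c1) */ c1, c2
           FROM   s
           WHERE  c1 > :x) src
    ON (t.c1 = src.c1)
    WHEN MATCHED THEN UPDATE SET t.c2 = src.c2;

    Always verify with the execution plan that the hint actually changed anything, and retest whenever the statistics change.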

  • Oracle 9.2i - Log Errors in a Merge Statement

    Hi all,
    I want to log errors in a MERGE statement in a way that allows the statement to finish without a rollback. I see that in Oracle 10gR2 this is possible with the "LOG ERRORS INTO err$_dest ('MERGE') REJECT LIMIT UNLIMITED;" clause but, apparently, it's not possible in Oracle 9.2.
    Is there another way to solve this problem?

    Depending on what type of errors you expect, you may be helped by deferring your constraints: unique, foreign key and check constraints can be deferred; that means they are only enforced when you commit.
    You could defer all constraints, perform the bulk insert and then instead of committing you first try to set all constraints to immediate. If this fails, there are errors. If it does not, you can commit.
    To find the exact errors, you can try to switch all deferred constraints back to immediate one by one. The ones that succeed are not violated by your transaction; only the ones that fail to switch to immediate are violated by your transaction.
    For the violated constraints, you can find the offending records by simply selecting them. For example if the check constraint states Col X + Col Y < 10000 you will find the offending records by selecting all records where not (Col X + Col Y < 10000 ). Unfortunately we have no better mechanism than this for finding the records that are in violation of the rules.
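    A sketch of that workflow (hypothetical table and constraint names; the constraint must be created as deferrable):

    ALTER TABLE dest ADD CONSTRAINT dest_uk UNIQUE (id)
      DEFERRABLE INITIALLY IMMEDIATE;

    SET CONSTRAINTS ALL DEFERRED;
    -- ... run the MERGE / bulk insert here ...
    SET CONSTRAINT dest_uk IMMEDIATE;  -- fails here if the transaction violates it
    COMMIT;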
    best regards
    Lucas

  • Will insert (ignore duplicates) have a better performance than merge?

    Will insert (ignore duplicates) have a better performance than merge (insert if not duplicate)?

    OK. Here is exactly what is happening:
    We had a table with no unique index on it. We used an 'INSERT ALL' statement to insert records.
    But later, when we found duplicates in there, we started removing them manually.
    Now, to resolve the issue, we added a unique index and added exception handling to ignore the DUP_VAL_ON_INDEX exception.
    But with this, all records being inserted by the 'INSERT ALL' statement get ignored even if only one record is a duplicate.
    Hence we have finally replaced 'INSERT ALL' with a MERGE statement, which inserts only if a corresponding record is not found (matching on the unique index column) in the table.
    But I am wondering how much performance will be impacted.
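    For comparison, the insert-only MERGE pattern described above looks like this (hypothetical names):

    MERGE INTO target t
    USING (SELECT key_col, payload FROM staging) s
    ON (t.key_col = s.key_col)
    WHEN NOT MATCHED THEN
      INSERT (key_col, payload)
      VALUES (s.key_col, s.payload);

    On 11g there is also the IGNORE_ROW_ON_DUPKEY_INDEX hint for a plain INSERT ... SELECT, which silently skips duplicate keys, though that hint adds per-row overhead of its own.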

  • What the hell is going on with my MERGE statement

    Hi Guys,
    I maybe posted this to the wrong forum - appreciate it if you would take a look and let me know what you think.
    What the hell is going on with my MERGE statement
    Regards,
    Mark.

    The problem exists in the statement in bold. The inline view of the merge
    returns 11808 rows and should only perform 11808 lookups on the index
    on TBL_INSTRUMENT to determine whether or not to insert a row. It is
    doing 1393981363!!
    Do you have accurate statistics on these tables? Have you got histograms on skewed columns?
    Also, given that you are only acting on WHEN NOT MATCHED, are you sure MERGE is the way to go? A straightforward insert driving off a NOT EXISTS sub-query or an anti-join might be a much better approach.
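    A sketch of that anti-join alternative (only TBL_INSTRUMENT comes from the thread; the column and staging-table names are assumptions):

    INSERT INTO tbl_instrument
      (instrument_id, instrument_name)
    SELECT s.instrument_id, s.instrument_name
    FROM   staging_instruments s
    WHERE  NOT EXISTS (SELECT NULL
                       FROM   tbl_instrument x
                       WHERE  x.instrument_id = s.instrument_id);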
    Cheers, APC
    Blog : http://radiofreetooting.blogspot.com/

  • MERGE Statement - unable to get a stable set of rows in the source tables

    OWB Client: 10.1.0.2.0
    OWB Repository: 10.1.0.1.0
    I am trying to create a MERGE in OWB.
    I get the following error:
    ORA-12801: error signaled in parallel query server P004 ORA-30926: unable to get a stable set of rows in the source tables
    I have read the other posts regarding this and can't seem to get a fix.
    The target table has a unique index on the field that I am matching on.
    The "incoming" data doesn't have a unique index, but I have checked and confirmed that it is unique on the appropriate key.
    The "incoming" data is created by a join and filter in the mapping and I'd rather avoid having to load this data into a new table and add a unique index on this.
    Any help would be great.
    Thanks
    Laura

    Hello Laura,
    The MERGE statement does not require any constraints on its target table or source table. The only requirement is that two input rows cannot update the same target row, meaning that each existing target row can be matched by at most one input row (otherwise the MERGE would be nondeterministic, since you wouldn't know which of the input rows you would end up with in the target).
    If a table takes ages to load (and is not really big) I suspect that your mapping is not running in set mode and that it performs a full table scan on source data for each target row it produces.
    If you ARE running in set mode you should run explain plan to get a hint on what is wrong.
    Regarding your original mapping, try to set the target operator property:
    Match by constraint=no constraints
    and then check the Loading properties on each target column.
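    At the plain SQL level, a common way to guarantee the "at most one input row per target row" property described above is to deduplicate inside USING; a sketch with hypothetical names:

    MERGE INTO target t
    USING (SELECT *
           FROM (SELECT s.*,
                        ROW_NUMBER() OVER (PARTITION BY match_key
                                           ORDER BY load_ts DESC) rn
                 FROM   source_data s)
           WHERE rn = 1) src
    ON (t.match_key = src.match_key)
    WHEN MATCHED THEN UPDATE SET t.val = src.val;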
    Regards, Hans Henrik

  • MERGE Statement Problem for Storing Old Data

    Hi,
    I am using a MERGE statement to update as well as insert rows on a table.
    I have data in a table 'TABLEA' like 10 20 30 ABCD.
    I want to update the table with 10 20 30 DEFG, but I want the old data, i.e. 10 20 30 ABCD,
    to be stored in a history table, i.e. TABLEA_H.
    Is there any way to store the data?
    Any help will be needful for me
    Any help will be needful for me

    Hi,
    Trigger usage may affect performance, as we are handling a production environment.
    Is there any way to implement the scenario without using triggers?
    Any help will be needful for me
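    One trigger-free pattern (a sketch using the thread's TABLEA/TABLEA_H names; the columns and source table are assumptions): snapshot the rows the MERGE is about to change into the history table first, in the same transaction.

    INSERT INTO tablea_h (c1, c2, c3, c4, archived_at)
    SELECT a.c1, a.c2, a.c3, a.c4, SYSDATE
    FROM   tablea a
    WHERE  EXISTS (SELECT NULL
                   FROM   source_data s
                   WHERE  s.c1 = a.c1 AND s.c2 = a.c2 AND s.c3 = a.c3);

    MERGE INTO tablea a
    USING source_data s
    ON (a.c1 = s.c1 AND a.c2 = s.c2 AND a.c3 = s.c3)
    WHEN MATCHED THEN UPDATE SET a.c4 = s.c4
    WHEN NOT MATCHED THEN
      INSERT (c1, c2, c3, c4) VALUES (s.c1, s.c2, s.c3, s.c4);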

  • Merge statement

    I would like to know if it is possible to identify the row that is causing the problem when you use a MERGE statement in PL/SQL. I know that if you create a cursor and then loop through the data you can identify the column, but what if I have only a MERGE that will either insert or update? Is it possible to identify which row of data caused the problem? Thanks

    You can use an Error Logging Table.
    Nicolas.
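    For reference, a minimal sketch of that approach (10g+; the table names are hypothetical):

    -- create the error log table once
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_TAB')

    MERGE INTO target_tab t
    USING source_tab s
    ON (t.id = s.id)
    WHEN MATCHED THEN UPDATE SET t.val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    LOG ERRORS INTO err$_target_tab ('merge run 1') REJECT LIMIT UNLIMITED;

    -- each rejected row lands here together with the ORA- error text
    SELECT ora_err_number$, ora_err_mesg$ FROM err$_target_tab;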

  • Merge Statement in PL/SQL

    Hi
    I am using a merge statement where I am updating and inserting records in table2 from table1.
    I want to log the number of rows updated and the number of new rows inserted in the log table.
    If I am not wrong, we can use SQL%ROWCOUNT, but I need help with how to use it.
    Please suggest a solution.
    Thanks

    user11018028 wrote:
    Will sql%rowcount give the no. of updated rows OR the no. of newly inserted rows OR the sum of both in the case of a merge statement?
    The total number of rows that changed (the sum of both).
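    A short sketch of how that is typically captured (the table names are hypothetical):

    DECLARE
      ln_merged PLS_INTEGER;
    BEGIN
      MERGE INTO table2 t2
      USING table1 t1
      ON (t2.id = t1.id)
      WHEN MATCHED THEN UPDATE SET t2.val = t1.val
      WHEN NOT MATCHED THEN INSERT (id, val) VALUES (t1.id, t1.val);

      ln_merged := SQL%ROWCOUNT;  -- rows inserted + rows updated, combined

      INSERT INTO merge_log (run_date, rows_merged)
      VALUES (SYSDATE, ln_merged);
      COMMIT;
    END;
    /

    There is no built-in way to split the count into inserts versus updates; if you need that, count the target rows matching the source keys before the MERGE.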

  • Question on passing string values to Partition clause in a merge statement

    Hello All,
    I am using the below code to update specific sub-partition data using Oracle MERGE statements.
    I am getting the sub-partition name and passing this as a string to the SUBPARTITION clause.
    The MERGE statement is failing, stating that the specified sub-partition does not exist, but the sub-partition does exist for the table.
    We are using an Oracle 11gR2 database.
    Below is the code which I am using to populate the data.
    DECLARE
       ln_min_batchkey        PLS_INTEGER;
       ln_max_batchkey        PLS_INTEGER;
       lv_partition_name      VARCHAR2 (32767);
       lv_subpartition_name   VARCHAR2 (32767);
    BEGIN
       FOR m1 IN (SELECT (year_val + 1) AS year_val, year_val AS orig_year_val
                    FROM (SELECT DISTINCT TO_CHAR (batch_create_dt, 'YYYY') year_val
                            FROM stores_comm_mob_sub_temp
                           ORDER BY 1)
                   ORDER BY year_val)
       LOOP
          lv_partition_name :=
             scmsa_handset_mobility_data_build.fn_get_partition_name (
                p_table_name    => 'STORES_COMM_MOB_SUB_INFO',
                p_search_string => m1.year_val);

          FOR m2 IN (SELECT DISTINCT 'M' || TO_CHAR (batch_create_dt, 'MM') AS month_val
                       FROM stores_comm_mob_sub_temp
                      WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val)
          LOOP
             lv_subpartition_name :=
                scmsa_handset_mobility_data_build.fn_get_subpartition_name (
                   p_table_name     => 'STORES_COMM_MOB_SUB_INFO',
                   p_partition_name => lv_partition_name,
                   p_search_string  => m2.month_val);

             DBMS_OUTPUT.PUT_LINE ('The lv_subpartition_name => ' || lv_subpartition_name
                                   || ' and lv_partition_name => ' || lv_partition_name);

             IF lv_subpartition_name IS NULL
             THEN
                DBMS_OUTPUT.PUT_LINE ('INSIDE IF => ' || m2.month_val);

                INSERT INTO stores_comm_mob_sub_info t1 (t1.ntlogin,
                                                         t1.first_name,
                                                         t1.last_name,
                                                         t1.job_title,
                                                         t1.store_id,
                                                         t1.batch_create_dt)
                   SELECT t2.ntlogin, t2.first_name, t2.last_name,
                          t2.job_title, t2.store_id, t2.batch_create_dt
                     FROM stores_comm_mob_sub_temp t2
                    WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val
                      AND 'M' || TO_CHAR (batch_create_dt, 'MM') = m2.month_val;
             ELSIF lv_subpartition_name IS NOT NULL
             THEN
                DBMS_OUTPUT.PUT_LINE ('INSIDE ELSIF => ' || m2.month_val);

                MERGE INTO (SELECT *
                              FROM stores_comm_mob_sub_info
                                   SUBPARTITION (lv_subpartition_name)) t1
                     USING (SELECT *
                              FROM stores_comm_mob_sub_temp
                             WHERE TO_CHAR (batch_create_dt, 'YYYY') = m1.orig_year_val
                               AND 'M' || TO_CHAR (batch_create_dt, 'MM') = m2.month_val) t2
                        ON (t1.store_id = t2.store_id AND t1.ntlogin = t2.ntlogin)
                WHEN MATCHED
                THEN
                   UPDATE SET
                      t1.postpaid_totalqty =
                         (NVL (t1.postpaid_totalqty, 0) + NVL (t2.postpaid_totalqty, 0)),
                      t1.sales_transaction_dt =
                         GREATEST (NVL (t1.sales_transaction_dt, t2.sales_transaction_dt),
                                   NVL (t2.sales_transaction_dt, t1.sales_transaction_dt)),
                      t1.batch_create_dt =
                         GREATEST (NVL (t1.batch_create_dt, t2.batch_create_dt),
                                   NVL (t2.batch_create_dt, t1.batch_create_dt))
                WHEN NOT MATCHED
                THEN
                   INSERT (t1.ntlogin, t1.first_name, t1.last_name,
                           t1.job_title, t1.store_id, t1.batch_create_dt)
                   VALUES (t2.ntlogin, t2.first_name, t2.last_name,
                           t2.job_title, t2.store_id, t2.batch_create_dt);
             END IF;
          END LOOP;
       END LOOP;

       COMMIT;
    END;
    Much appreciate your inputs here.
    Thanks,
    MK.

    I've not used partitioning, but I do not see MERGE supporting a variable as a partition name in
    MERGE INTO (SELECT *
                FROM stores_comm_mob_sub_info
                SUBPARTITION (lv_subpartition_name)) T1
    USING ...
    I suspect it is looking for a subpartition literally called LV_SUBPARTITION_NAME.
    I also don't see why you need that subpartition name at all - the ON clause should be able to identify the subpartition's criteria.
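    If the subpartition name really must be used, one workaround (a sketch, untested; simplified column list, and lv_sql is assumed to be a VARCHAR2 variable alongside lv_subpartition_name) is to build the statement dynamically so the name appears as a literal:

    lv_sql :=
         'MERGE INTO (SELECT * FROM stores_comm_mob_sub_info'
      || ' SUBPARTITION (' || lv_subpartition_name || ')) t1'
      || ' USING stores_comm_mob_sub_temp t2'
      || ' ON (t1.store_id = t2.store_id AND t1.ntlogin = t2.ntlogin)'
      || ' WHEN MATCHED THEN UPDATE SET t1.batch_create_dt ='
      || '   GREATEST(t1.batch_create_dt, t2.batch_create_dt)';
    EXECUTE IMMEDIATE lv_sql;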

  • Issue while using SUBPARTITION clause in the MERGE statement in PLSQL Code

    Hello All,
    This is the same question and code as the previous thread, "Question on passing string values to Partition clause in a merge statement": the MERGE fails stating that the specified sub-partition does not exist, even though it does.
    Much appreciate your inputs here.
    Thanks,
    MK.
    (SORRY TO POST THE SAME QUESTION TWICE).
    Edited by: Maddy on May 23, 2013 10:20 PM

    Duplicate question
