Automatic Parallelism causes Merge statement to take longer.

We have a problem on a new project: as part of the ETL load into the Oracle data warehouse we perform a MERGE statement to update rows in a global temporary table and then load the results into a permanent table. When testing with automatic parallel execution enabled, the plan changes, the merge never finishes, and it consumes vast amounts of resources.
The environment is:
Database version: 11.2.0.3
OS: Red Hat 64-bit
Three-node RAC, 20 cores per node
When executing serially the query response is typically similar to the following:
MERGE /*+ gather_plan_statistics no_parallel */ INTO T_GTTCHARGEVALUES USING
  (SELECT
  CASTACCOUNTID,
  CHARGESCHEME,
  MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
  MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT
FROM
  V_CACHARGESALL
WHERE
  CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
  AND CHARGEDATE < TO_DATE(:B1,'YYYY-MM-DD')
GROUP BY
   CASTACCOUNTID,
   CHARGESCHEME
HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
ON
  (T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
  T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME)
WHEN MATCHED
THEN UPDATE SET
  CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
  CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT;
1448340 rows merged.
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST')); 
| Id  | Operation                       | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   0 | MERGE STATEMENT                 |                   |      1 |        |      0 |00:03:08.43 |    2095K|    186K|       |       |          |
|   1 |  MERGE                          | T_GTTCHARGEVALUES |      1 |        |      0 |00:03:08.43 |    2095K|    186K|       |       |          |
|   2 |   VIEW                          |                   |      1 |        |   1448K|00:02:53.14 |     619K|    177K|       |       |          |
|*  3 |    HASH JOIN                    |                   |      1 |      1 |   1448K|00:02:52.70 |     619K|    177K|   812K|   812K| 1218K (0)|
|   4 |     VIEW                        |                   |      1 |      1 |    203 |00:02:51.26 |     608K|    177K|       |       |          |
|*  5 |      FILTER                     |                   |      1 |        |    203 |00:02:51.26 |     608K|    177K|       |       |          |
|   6 |       SORT GROUP BY             |                   |      1 |      1 |    480 |00:02:51.26 |     608K|    177K| 73728 | 73728 |          |
|*  7 |        FILTER                   |                   |      1 |        |     21M|00:02:56.04 |     608K|    177K|       |       |          |
|   8 |         PARTITION RANGE ITERATOR|                   |      1 |    392K|     21M|00:02:51.32 |     608K|    177K|       |       |          |
|*  9 |          TABLE ACCESS FULL      | T_CACHARGES       |     24 |    392K|     21M|00:02:47.48 |     608K|    177K|       |       |          |
|  10 |     TABLE ACCESS FULL           | T_GTTCHARGEVALUES |      1 |   1451K|   1451K|00:00:00.48 |   10980 |      0 |       |       |          |
Predicate Information (identified by operation id):
   3 - access("T_GTTCHARGEVALUES"."CASTACCOUNTID"="MTOTAL"."CASTACCOUNTID" AND "T_GTTCHARGEVALUES"."CHARGESCHEME"="MTOTAL"."CHARGESCHEME")
   5 - filter(MAX("CUMULATIVECOUNT") IS NOT NULL)
   7 - filter(TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm')<TO_DATE(:B1,'YYYY-MM-DD'))
   9 - filter(("LOGICALLYDELETED"=0 AND "CHARGEDATE">=TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'fmmm') AND "CHARGEDATE"<TO_DATE(:B1,'YYYY-MM-DD')))removing the no_parallel hint results in the following, (this is pulled from the sql monitoring report and editied to remove the lines relating to individual parallel servers)
I understand that the query is considered for parallel execution due to the estimated length of time it will run for, and although the degree of parallelism seems excessive it is the default maximum for the server configuration. What we are trying to understand is which statistics could be inaccurate or missing and cause this kind of problem.
In this case we can add the no_parallel hint in the ETL package as a workaround, but we would really like to identify the root cause to avoid similar problems elsewhere.
SQL Monitoring Report
SQL Text
MERGE INTO T_GTTCHARGEVALUES USING (SELECT CASTACCOUNTID, CHARGESCHEME, MAX(CUMULATIVEVALUE) AS MAXMONTHVALUE,
MAX(CUMULATIVECOUNT) AS MAXMONTHCOUNT FROM V_CACHARGESALL WHERE CHARGEDATE >= TRUNC(TO_DATE(:B1,'YYYY-MM-DD'),'MM')
AND CHARGEDATE < to_date(:B1,'YYYY-MM-DD')
GROUP BY CASTACCOUNTID, CHARGESCHEME HAVING MAX(CUMULATIVECOUNT) IS NOT NULL ) MTOTAL
ON (T_GTTCHARGEVALUES.CASTACCOUNTID=MTOTAL.CASTACCOUNTID AND
T_GTTCHARGEVALUES.CHARGESCHEME=MTOTAL.CHARGESCHEME) WHEN MATCHED THEN UPDATE SET
CUMULATIVEVALUE=CUMULATIVEVALUE+MTOTAL.MAXMONTHVALUE ,
CUMULATIVECOUNT=CUMULATIVECOUNT+MTOTAL.MAXMONTHCOUNT
Error: ORA-1013
ORA-01013: user requested cancel of current operation
Global Information
Status              :  DONE (ERROR)
Instance ID         :  1
Session             :  XXXX(2815:12369)
SQL ID              :  70kzttjbyyspt
SQL Execution ID    :  16777216
Execution Started   :  04/27/2012 09:43:27
First Refresh Time  :  04/27/2012 09:43:27
Last Refresh Time   :  04/27/2012 09:48:43
Duration            :  316s
Module/Action       :  SQL*Plus/-
Service             :  SYS$USERS
Program             :  sqlplus@XXXX (TNS V1-V3)
Binds
========================================================================================================================
| Name | Position |     Type     |                                        Value                                        |
========================================================================================================================
| :B1  |        1 | VARCHAR2(32) | 2012-04-25                                                                          |
========================================================================================================================
Global Stats
====================================================================================================================
| Elapsed | Queuing |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes |
====================================================================================================================
|    7555 |    0.00 |    4290 |     2812 |        0.08 |          27 |      183 |      243 |     3M | 294K |   7GB |
====================================================================================================================
SQL Plan Monitoring Details (Plan Hash Value=323941584)
==========================================================================================================================================================================================================
| Id |             Operation             |       Name        |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity |                Activity Detail                |
|    |                                   |                   | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |                  (# samples)                  |
==========================================================================================================================================================================================================
|  0 | MERGE STATEMENT                   |                   |         |       |           |        |     1 |          |      |       |       |          |                                               |
|  1 |   MERGE                           | T_GTTCHARGEVALUES |         |       |           |        |     1 |          |      |       |       |          |                                               |
|  2 |    PX COORDINATOR                 |                   |         |       |        57 |     +1 |   481 |        0 |  317 |   5MB |       |     4.05 | latch: shared pool (40)                       |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | os thread startup (17)                        |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (7)                                       |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | DFS lock handle (36)                          |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | SGA: allocation forcing component growth (14) |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | latch: parallel query alloc buffer (200)      |
|  3 |     PX SEND QC (RANDOM)           | :TQ10003          |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
|  4 |      VIEW                         |                   |         |       |           |        |       |          |      |       |       |          |                                               |
|  5 |       FILTER                      |                   |         |       |           |        |       |          |      |       |       |          |                                               |
|  6 |        SORT GROUP BY              |                   |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
|  7 |         PX RECEIVE                |                   |       1 | 19054 |           |        |       |          |      |       |       |          |                                               |
|  8 |          PX SEND HASH             | :TQ10002          |       1 | 19054 |           |        |   240 |          |      |       |       |          |                                               |
|  9 |           SORT GROUP BY           |                   |       1 | 19054 |       246 |    +70 |   240 |        0 |      |       |  228M |    49.32 | Cpu (3821)                                    |
| 10 |            FILTER                 |                   |         |       |       245 |    +71 |   240 |       3G |      |       |       |     0.08 | Cpu (6)                                       |
| 11 |             HASH JOIN             |                   |       1 | 19054 |       259 |    +57 |   240 |       3G |      |       |  276M |     4.31 | Cpu (334)                                     |
| 12 |              PX RECEIVE           |                   |      1M |     5 |       259 |    +57 |   240 |       1M |      |       |       |     0.04 | Cpu (3)                                       |
| 13 |               PX SEND HASH        | :TQ10000          |      1M |     5 |         6 |    +56 |   240 |       1M |      |       |       |     0.01 | Cpu (1)                                       |
| 14 |                PX BLOCK ITERATOR  |                   |      1M |     5 |         6 |    +56 |   240 |       1M |      |       |       |     0.03 | Cpu (1)                                       |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | PX Deq: reap credit (1)                       |
| 15 |                 TABLE ACCESS FULL | T_GTTCHARGEVALUES |      1M |     5 |         7 |    +55 |  5486 |       1M | 5487 |  86MB |       |     2.31 | gc cr grant 2-way (3)                         |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block lost (7)                     |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (7)                                       |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file sequential read (162)                 |
| 16 |              PX RECEIVE           |                   |     78M | 19047 |       255 |    +61 |   240 |     801K |      |       |       |     0.03 | IPC send completion sync (2)                  |
| 17 |               PX SEND HASH        | :TQ10001          |     78M | 19047 |       250 |    +66 |   240 |       3M |      |       |       |     0.06 | Cpu (5)                                       |
| 18 |                PX BLOCK ITERATOR  |                   |     78M | 19047 |       250 |    +66 |   240 |       4M |      |       |       |          |                                               |
| 19 |                 TABLE ACCESS FULL | T_CACHARGES       |     78M | 19047 |       254 |    +62 |  1016 |       4M | 288K |   6GB |       |    37.69 | gc buffer busy acquire (104)                  |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr block 2-way (1)                         |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr block lost (9)                          |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr grant 2-way (14)                        |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc cr multi block request (1)                 |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block 2-way (3)                    |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block 3-way (2)                    |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current block busy (1)                     |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | gc current grant busy (2)                     |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | Cpu (58)                                      |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | latch: gc element (1)                         |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file parallel read (26)                    |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file scattered read (207)                  |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | db file sequential read (2433)                |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | direct path read (1)                          |
|    |                                   |                   |         |       |           |        |       |          |      |       |       |          | read by other session (57)                    |
==========================================================================================================================================================================================================
Parallel Execution Details (DOP=240 , Servers Allocated=480)
Instances  : 3

chris_c wrote:
| Id  | Operation                       | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|*  9 |          TABLE ACCESS FULL      | T_CACHARGES       |     24 |    392K|     21M|00:02:47.48 |     608K|    177K|       |       |          |
Based on the discrepancy between the estimated number of rows and the actual rows, and the bind value of 2012-04-25 posted below, I'd first be checking whether the statistics on T_CACHARGES are up to date.
As a reference
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4399338600346902127
So that would be my first avenue of exploration.
Cheers,
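A minimal sketch of what that check might look like (the table name is from the post; the DBMS_STATS parameters and the parameter query are illustrative assumptions, not the original poster's actual settings):

-- When were optimizer statistics last gathered on T_CACHARGES, and does Oracle consider them stale?
SELECT table_name, last_analyzed, stale_stats
FROM   all_tab_statistics
WHERE  table_name = 'T_CACHARGES';

-- Re-gather if stale or missing; AUTO_SAMPLE_SIZE and cascade are sensible 11.2 defaults.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,            -- replace with the owning schema
    tabname          => 'T_CACHARGES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/

-- The DOP of 240 is bounded by the auto DOP settings; these are worth checking too.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('parallel_degree_policy', 'parallel_degree_limit');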

Similar Messages

  • Performance problem with MERGE statement

    Version : 11.1.0.7.0
    I have an insert statement like following which is taking less than 2 secs to complete and inserts around 4000 rows:
    INSERT INTO sch.tab1
              (c1,c2,c3)
    SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
    I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
    MERGE INTO sch.tab1 t1
    USING (SELECT c1,c2,c3
       FROM sch1.tab1@dblink
      WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
    INSERT (t1.c1,t1.c2,t1.c3)
    VALUES (t2.c1,t2.c2,t2.c3);
    The MERGE statement is taking more than 2 mins (and I stopped the execution after that). I removed the WHERE clause subquery inside the subquery of the USING section and it executed in 1 sec.
    If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
    Is there any known issue with MERGE statement while implementing using above scenario?

    riedelme wrote:
    Are your join columns indexed?
    Yes, the join columns are indexed.
    You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
    Yes, I agree that remote queries will slow things down. But the same is not happening with select, insert and PL/SQL; it happens only when we are using MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if it works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we can know whether it is a genuine problem or some problem specific to my side.
    >
    BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
    Edited by: riedelme on Jul 28, 2009 12:12 PM
    :) I used the same to overcome this situation. I think MERGE still needs to be improved functionally from Oracle's side. I personally feel that it is one of the robust features to grace SQL or PL/SQL.
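    One workaround often suggested for this pattern (not something the posters above confirmed, and the staging table and its column datatypes below are assumptions) is to pull the remote rows into a local staging table first, so the MERGE joins two local row sources instead of running a distributed query:

    -- Hypothetical staging step: copy only the remote rows of interest locally
    CREATE GLOBAL TEMPORARY TABLE stg_tab1 (c1 NUMBER, c2 NUMBER, c3 NUMBER)
    ON COMMIT PRESERVE ROWS;

    INSERT INTO stg_tab1 (c1, c2, c3)
    SELECT c1, c2, c3
    FROM   sch1.tab1@dblink
    WHERE  c1 IN (SELECT c1 FROM sch1.tab2@dblink);

    -- The MERGE now runs entirely against local tables
    MERGE INTO sch.tab1 t1
    USING stg_tab1 t2
    ON (t1.c1 = t2.c1)
    WHEN NOT MATCHED THEN
      INSERT (t1.c1, t1.c2, t1.c3)
      VALUES (t2.c1, t2.c2, t2.c3);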

  • Replication update statement takes long time

    Hi Replication experts,
    I have a issue and please suggest if my understanding and solution is correct.
    We have a transactional replication setup for a data warehouse. From this morning replication had huge latency, and when I looked into it I saw that "sp_MSupd_< tablename >" had been running for a very long time (9 hours by that point) and still no data had been updated, so latency went very high. What we feel is that index maintenance was not done on the subscriber, and it was not replicated either, so due to high fragmentation the update statement could be taking very long.
    As there was no error message or blocking found, all we see is the update taking very long. So to avoid this we are planning to remove the index; as this is just a data warehouse, index maintenance is not required, and by removing it we can gain some space too. Is this a good idea to implement?
    When we ran profiler nothing was found, no error or alert logged.
    Please let me know your suggestions
    Thanks
    Best Regards Moug

    Hi All,
    The issue we found is that the day before, a new column was added to the table and 9000 rows were updated. This caused the distributor to slow down the update operation, because the index on the subscriber (15 million rows) was updated every time a row was updated. So we suspect that adding a new column and updating it at the same time is the culprit.
    But I am still not convinced, because as per my understanding the distributor operates by comparing the primary key of the changed publisher row with the primary key on the subscriber and applying the change. If that is the case, why should it care about the new column added or the number of updates?
    Also, one strange thing is that another table was also lagging behind in its operations, though that table was updated with just 18 rows. So in a nutshell the update operations were totally held up by the distributor's work on that day.
    Can someone please shed some light on this? Replication is my favorite topic, but now I am afraid of it after failing to work out what the issue could be...
    Best Regards Moug

  • SELECT statement takes long time

    Hi All,
    In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
    We already have an index on EQUNR in the OBJK table.
    Only for blank entries does it take much time. Can anybody tell me why it behaves this way for blank entries?
    if not T_QMIH[] IS INITIAL.
            SORT T_QMIH BY EQUNR.
            REFRESH T_OBJK.
            SELECT EQUNR OBKNR
              FROM OBJK INTO TABLE T_OBJK
              FOR ALL ENTRIES IN T_QMIH
              WHERE OBJK~TASER = 'SER01' AND
             OBJK~EQUNR = T_QMIH-EQUNR.
    Thanks
    Ajay

    Hi
    You can use the field QMIH-QMNUM with OBJK-IHNUM.
    In the QMIH table, EQUNR is not a primary key, so it will have multiple entries.
    To improve performance, use a dummy internal table for QMIH, sort it on EQUNR,
    delete adjacent duplicates from d_qmih, and use that table in FOR ALL ENTRIES;
    this will improve the performance.
    Also list the fields in the sequence of the index, and include the primary key fields in the SELECT.
    if not T_QMIH[] IS INITIAL.
    SORT T_QMIH BY EQUNR.
    REFRESH T_OBJK.
    SELECT EQUNR OBKNR
    FROM OBJK INTO TABLE T_OBJK
    FOR ALL ENTRIES IN T_QMIH
    WHERE  IHNUM =  T_QMIH-QMNUM AND
    OBJK~TASER = 'SER01' AND
    OBJK~EQUNR = T_QMIH-EQUNR.
    try this and let me know
    regards
    Shiva

  • Two parallel hints in a merge statement

    I am not sure whether using the parallel hint in the way shown below would help parallelize the query. Can a parallel hint be used for two different tables in a MERGE statement like the one below?
    MERGE INTO /*+ parallel (TABLE_A,8) */ TABLE_A  A
    USING( SELECT /*+ parallel (TABLE_B,8) */
                            col1,
                            col2
                  FROM TABLE_B
             )  B
    ON A.col1 = B.col1
    WHEN MATCHED THEN
    UPDATE ....
    WHEN NOT MATCHED THEN
    INSERT ...
    I am using Oracle 10g.
    Thanks

    This does make sense (allowing for "someone else"'s observation about aliases); however MERGE is DML, so you couldn't get the merge phase working in parallel if you didn't also execute
    alter session enable parallel dml;
    Regards
    Jonathan Lewis
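    A hedged sketch of that advice put together (parallel DML enabled for the merge phase, and the hints referencing the query-block aliases; the SET and INSERT column lists are illustrative since the original post elided them):

    ALTER SESSION ENABLE PARALLEL DML;

    MERGE /*+ parallel(a 8) */ INTO table_a a
    USING (SELECT /*+ parallel(table_b 8) */ col1, col2
           FROM   table_b) b
    ON (a.col1 = b.col1)
    WHEN MATCHED THEN
      UPDATE SET a.col2 = b.col2
    WHEN NOT MATCHED THEN
      INSERT (a.col1, a.col2)
      VALUES (b.col1, b.col2);

    COMMIT;  -- a table modified with parallel DML cannot be queried again in the same session until commit/rollback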

  • What could be the reason for Crawl process to take long time or get in to a hung state.

    Hi All,
    What could be the reason for the crawl process to take a long time or get into a hung state? Is it also related to a DB server resource crunch? Does this lead to index file corruption?
    What process should be followed when the index file is corrupted? How do we come to know about that?
    Thanks in Advance.

    "The crawl time depends on what you are crawling -- the number of items, the size of the items, the location of the items. If you have a lot of content that needs to be crawled, it will take much time".
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/f4cad578-f3bc-4822-b660-47ad27ce094a/sharepoint-2007-crawl-taking-long-time-to-complete?forum=sharepointgeneralprevious
    "The only clean and recommended way to recover from an index corruption is to completely rebuild the index on all the servers in the farm."
    http://blogs.technet.com/b/victorbutuza/archive/2008/11/11/event-id-4138-an-index-corruption-was-detected-in-component-shadowmerge-in-catalog-portal-content.aspx
    Whenever search index file got corrupted it will got the details to Event logs
    http://technet.microsoft.com/en-us/library/ff468695%28v=office.14%29.aspx
    My Blog- http://www.sharepoint-journey.com|

  • BPM Process chain takes long time to process

    We have BI7, Netweaver 2004s on Oracle and SUN Solaris
    There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs. This chain should ideally complete before / around 0830 hrs. Now the problem is that every alternate day this chain behaves normally and gets completed well before 0830 hrs, but every alternate day this chain fails… there are almost 40 chains running daily. Some are event triggered (dependent on each other) and some run in parallel. In this (BPM) process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues, with very few records transferred. The first full upload is from 0034 hrs to approximately 0130 hrs and the 2nd upload is from 0130 hrs to 0230 hrs. Now if the 1st upload gets delayed, the people who initiate these chains stop the 2nd full upload and continue it after all the process chains are completed. Now this entire BPM process chain sometimes takes 17-18 hrs to complete!!!!!
    No other loads in CRM or BW when these process chains are running
    CRM has background jobs to push IDOCS to BW which run every 2 minutes which runs successfully
    Yesterday this chain got completed successfully (well within stipulated time) with over 33,00,000 records transferred but sometimes it has failed to transfer even 12,00,000 records!!
    Attaching a zip file, please refer the “21 to 26 Analysis screen shot.doc” from the zip file
    Within the zip file, attaching “Normal timings of daily process chains.xls” – the name explains it….
    Also within the zip file, refer to “BPM Infoprovider and data source screen shot.doc”: the infopackage (page 2) which was used in the process chain is not displayed later on page 6, BUT THE CHAIN COMPLETED SUCCESSFULLY
    We have analyzed:--
    1)     The PSA data for BPM process chain for past few days
    2)     The info providers for BPM process chain for past few days
    3)     The ODS entries for BPM process chain for past few days
    4)     The point of failure of BPM process chain for past few days
    5)     The overall performance of all the process chains for past few days
    6)     The number of requests in BW for this process chain
    7)     The load on CRM system for past few days when this process chain ran on BW system
    As per our analysis, there are couple of things which can be fixed in the BW system:--
    1)     The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 mentions, for both message types RSSEND and RSINFO, collect IDocs and pack size = 1. Since pack size = 1 will generate 1 tRFC call per IDoc, it should be changed to 10 so that fewer tRFCs are generated, meaning less overhead for the BW server and better performance
    2)     In the definition of destination for the concerned RFC in BW (SM59), the “Technical Setting” tab says the “Load balancing” option = “No”. We are planning to make it “Yes”
    But we believe that though these changes will bring some increase in performance, this is not the root cause of the abnormal behavior of this chain as this chain runs successfully on every alternate day with approximately the same amount of load in it.
    I was not able to attach the many screen shots or the info which I had gathered during my analysis. Please advice how do I attach these files
    Best Regards,

    Hi,
    Normally index creation or deletion can take a long time if your database statistics are not updated properly, so check the statistics after your data loading is completed and index generation is done, and re-create the database statistics.
    Then try to recheck ...
    Regards,
    Satya

  • Bulk collect forall vs single merge statement

    I understand that a single DML statement is better than using BULK COLLECT / FORALL with intermediate commits. My only concern is that if I'm loading a large amount of data, say 100 million records into an 800 million record table with foreign keys and indexes, and the session gets killed, the rollback might take a long time, which is not acceptable. Using BULK COLLECT / FORALL with interval commits is slower than a single straight MERGE statement, but in the case of a dead session the rollback time won't be as bad, and reloading the not-yet-committed data will not be as bad either. To design a recoverable data load that is not affected as badly, is BULK COLLECT + FORALL the right approach?

    1. specifics about the actual data available
    2. the location/source of the data
    3. whether NOLOGGING is appropriate
    4. whether PARALLEL is an option
    1. I need to transform the data before, so I can build the staging tables to match to be the same structure as the tables I'm loading to.
    2. It's in the same database (11.2)
    3. Cannot use NOLOGGING or APPEND because I need to allow DML in the target table and I can't use NOLOGGING because I cannot afford to lose the data in case of failure.
    4. PARALLEL is an option. I've done some research on DBMS_PARALLEL_EXECUTE and it sounds very cool. Can this be used to load to two tables? I have a parent child tables. I can chunk the data and load these two tables separately, but the only requirement would be that I need to commit together. I cannot load a chunk into the parent table and commit before I load the corresponding chunk into its child table. Can this be done using DBMS_PARALLEL_EXECUTE? If so, I think this would be the perfect solution since it looks like it's exactly what I'm looking for. However, if this doesn't work, is bulk collect + for all the best option I am left with?
    What is the underlying technology of DBMS_PARALLEL_EXECUTE?
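    For context, DBMS_PARALLEL_EXECUTE (available in 11.2) splits a table into chunks and runs your DML once per chunk under DBMS_SCHEDULER jobs, committing per chunk. A minimal sketch of the ROWID-chunked pattern, with hypothetical table, task, and package names (the chunk procedure would load the parent and child rows for its ROWID range in one transaction, which is what the poster wants):

    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'load_parent_child');

      -- Split the (hypothetical) staging table into ROWID ranges of roughly 10,000 rows each.
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'load_parent_child',
        table_owner => USER,
        table_name  => 'STG_SOURCE',
        by_row      => TRUE,
        chunk_size  => 10000);

      -- Each chunk runs this block in its own transaction, so the parent and
      -- child inserts for that chunk commit (or roll back) together.
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'load_parent_child',
        sql_stmt       => 'BEGIN load_pkg.load_chunk(:start_id, :end_id); END;',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 8);
    END;
    /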

  • After updating to ios 7.1 my iPhone 5s battery *****. It takes longer to charge and shorter to finish.

    3 months back I upgraded from an iPhone 4s to a 5s and realised the 4s was a much better phone. It gave me a longer battery life of around 8.30 hrs of usage on 2G and 7 hrs on 3G, but the 5s was giving me only 7 hours of usage on 2G and 5 hours on 3G, and since I've updated it to iOS 7.1 my battery ***** even more. It takes longer to charge and less time to drain. Now I get only 4 hours of usage on 2G and roughly 3 hrs on 3G, and it takes around 3 hours to charge whereas it used to take only 100 minutes. I don't know what's wrong with Apple. I was a huge Apple fan when I had my 4s but now I'm disappointed. But the worst thing came last night: I usually lose around 10% overnight due to my network's pop-up messages, but last night I lost 37%. When I got up and checked usage it showed me 6 hrs 30 minutes of usage, but I hadn't used my phone since I was sleeping. Hope a new update releases soon to patch this, but till then some advice on saving battery would be appreciated, thank you.

    Please be more specific. What have you tried already? Please follow the below steps as described also in another topic by Nathan.
    1. Hard reset hold both buttons until Apple logo appears.
    2. Restore from an iTunes sync and backup.  This made a difference for a lot of people; battery life got much better.
    3. Limit push notifications (apps and e-mail accounts - set e.g. e-mail to fetch once an hour) to those you really need.
    4. Limit location services to those you really need, and watch to see if the indicator comes up.
    5. Get the app System Activity Monitor to look for a runaway process that is using a lot of CPU.
    6. Suggestion by others to reset all settings.
    7. Suggestion by others that iCloud may be involved, and to try to turn off some iCloud settings.
    8. Limit or turn off 'background app refresh'.
    9. Add apps back in carefully to identify an offending app that is causing CPU or network activity that kills battery life.
    10 Also make sure that e-mail is fetched (e.g. every hour) or set to manual instead of pushed.
    11 Do you have a good carrier reception? Have you tried to manual set your carrier within your iPhone instead of the 'automatic'?
    12 Have you checked if the timezone is set to manual instead of automatic and see if this makes a difference in battery use?
    In my case it was a bug in an app that caused the battery drain. I also noticed that the usage and stand-by time were exactly the same, which is of course not correct. This was all caused by an app which was using GPS tracking all the time. Unfortunately an app such as System Activity Monitor did not indicate that in my case, but someone on a Dutch forum posted that the previous version of the app 'Scoupy' had a bug that caused this battery drain.
    If you haven't tried every single one of these, please do not post that battery life is terrible.

  • Word File Takes Longer and Longer to Save

    I’ve been working daily on the same 100-page document for several months. I had no problems with the document under Word 2003. However, under Word 2010 the file grows in size over time, and saving the file takes longer and longer to the point where there
    are significant timeouts (Word is “not responding”) whenever an automatic save occurs. 
    I don't want to disable automatic saves because Word does crash for me on rare occasions, generally if I do something "too fast".
    I’ve found a workaround for the problem, and every three weeks or so after the automatic saves have become painfully long I copy the document to the clipboard (except for the last paragraph mark) and then I open a new document based on the relevant template
    and I paste the clipboard contents into the new document. I rename the old version of the document and the new version becomes the working version. This reduces the file size (currently around 1.4 MB) by about 150 KB and the problem goes away for another three
    weeks.
    Certain aspects of my situation are unusual, and these may or may not be relevant to the problem:
    At the end of each day I use (via a macro) the Review, Compare feature of Word to compare the document with the previous day’s version to allow me to reread any changes I made to it.
    I use various other macros for intelligent page-turning, resizing windows, smart Find, etc.
    I maintain the document as a DOC file (Word 97-2003 Compatibility Mode) because I need to share the document with an organization that requires this format.
    The document flips back and forth a few times between being a one-column and two-column document.
    The document has a table of contents on the last page.
    The headings in the document have embedded section and subsection numbers.
    The document has numerous embedded SEQ and cross-reference fields.
    The document has embedded EMF pictures that were generated by a non-Microsoft application.
    The long times to save the file and the temporary solution I’ve found to the problem suggest that some "junk" is accumulating “in” the last paragraph mark. This junk doesn't cause any operational errors, but it slows things down to the point where
    the auto-save times out and I temporarily get the distracting "not responding" message. It would be nice if Word could automatically eliminate the junk in the last paragraph mark so that I wouldn’t have to do it manually.
    Do you have any suggestions for how I might eliminate the problem?
    I'd be pleased to send a copy of the slow-saving file to a Microsoft Word programmer for diagnosis of the problem.
    I have up-to-date Windows 7 professional (64 bit) and Word 2010 14.0.6129.5000 (32 bit).
    Thanks for your help,
    Don Macnaughton

    I am experiencing exactly the same save issue, although I cannot use the suggestion of copying to a new document as I have a lot of references within the same document and I'm scared that I'll lose them (or mess them up).
    It is nearly a year later, did you have any luck?
    Francois,
    I'm still experiencing the problem. However, I've now converted the document from a DOC to a DOCX, but that made no difference.  So every 18 or so days I copy all of the document into a new document except for the last paragraph mark
    and the problem goes away for another 18 or so days.  For my document this solution is fully reliable although it's less convenient because it's a little complicated and I worry I may make a mistake or some text may be lost in the transition.
    So I'm still looking for a solution to the problem. Is there anything unique about your document or your handling of the document that might be the cause of the problem?  Are you using macros, Compare Versions, switching back and forth between
    one and two columns, or anything else that is common to the features that I list in my first post in this thread?
    You might want to try my copying solution as a test while keeping your original document as the official version that you continue to work with.  You could then check the test document very carefully to see if my solution works with your
    document.  You might find that you can trust my solution (or you might not). 
    By the way, I make sure that the copy worked properly by doing a Compare Versions of the old and new documents.  (Surprisingly, sometimes the compare finds very minor differences between the two documents, but usually not.)
    If the problem really bothers you, you can hire Microsoft Support, although that will cost you some money.  If you do that, please let us know the outcome.
    Don Macnaughton

  • SQL Update statement taking too long..

    Hi All,
    I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
    Oracle Version: 11.2.0.1 64bit
    OS: Windows 2008 64bit
    desc temp_person;
    Name                                                                                Null?    Type
    PERSON_ID                                                                           NOT NULL NUMBER(10)
    DISTRICT_ID                                                                     NOT NULL NUMBER(10)
    FIRST_NAME                                                                                   VARCHAR2(60)
    MIDDLE_NAME                                                                                  VARCHAR2(60)
    LAST_NAME                                                                                    VARCHAR2(60)
    BIRTH_DATE                                                                                   DATE
    SIN                                                                                          VARCHAR2(11)
    PARTY_ID                                                                                     NUMBER(10)
    ACTIVE_STATUS                                                                       NOT NULL VARCHAR2(1)
    TAXABLE_FLAG                                                                                 VARCHAR2(1)
    CPP_EXEMPT                                                                                   VARCHAR2(1)
    EVENT_ID                                                                            NOT NULL NUMBER(10)
    USER_INFO_ID                                                                                 NUMBER(10)
    TIMESTAMP                                                                           NOT NULL DATE
    CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
    Index created.
    ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
    Index analyzed.
    explain plan for update temp_person
      2  set first_name = (select trim(f_name)
      3                    from ext_names_csv
      4                               where temp_person.PERSON_ID=ext_names_csv.p_id
      5                               and   temp_person.DISTRICT_ID=ext_names_csv.ed_id);
    Explained.
    @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3786226716
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT            |                | 82095 |  4649K|  2052K  (4)| 06:50:31 |
    |   1 |  UPDATE                     | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL         | TEMP_PERSON    | 82095 |  4649K|   191   (1)| 00:00:03 |
    |*  3 |   EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV  |     1 |   178 |    24   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    19 rows selected.
    By the looks of it the update is going to take 6 hrs!!!
    ext_names_csv is an external table that has the same number of rows as the PERSON table.
    ROHO@rohof> desc ext_names_csv
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    F_NAME                                                                                       VARCHAR2(300)
    L_NAME                                                                                       VARCHAR2(300)
    Can anyone help diagnose this please?
    Thanks
    Edited by: rsar001 on Feb 11, 2011 9:10 PM

    Thank you all for the great ideas, you have been extremely helpful. Here is what we did and were able to resolve the query.
    We started with Etbin's idea to create a table from the external table so that we could index and reference it more easily than an external table, so we did the following:
    SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
    Table created.
    SQL> desc ext_person
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    FST_NAME                                                                                     VARCHAR2(300)
    LST_NAME                                                                                     VARCHAR2(300)
    SQL> select count(*) from ext_person;
      COUNT(*)
         93383
    SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
    Index created.
    SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
    PL/SQL procedure successfully completed.
    We had a look at the plan with the original SQL query that we had:
    SQL> explain plan for update temp_person
      2  set first_name = (select fst_name
      3                    from ext_person
      4                               where temp_person.PERSON_ID=ext_person.p_id
      5                               and   temp_person.DISTRICT_ID=ext_person.ed_id);
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1236196514
    | Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT             |                | 93383 |  1550K|   186K (50)| 00:37:24 |
    |   1 |  UPDATE                      | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEMP_PERSON    | 93383 |  1550K|   191   (1)| 00:00:03 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| EXTT_PERSON    |     9 |  1602 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN          | EXT_PERSON_ED  |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    20 rows selected.
    As you can see the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
    SQL> explain plan for MERGE INTO temp_person t
      2  USING (SELECT fst_name ,p_id,ed_id
      3  FROM  ext_person) ext
      4  ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
      5  WHEN MATCHED THEN
      6  UPDATE set t.first_name=ext.fst_name;
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2192307910
    | Id  | Operation            | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | MERGE STATEMENT      |              | 92307 |    14M|       |  1417   (1)| 00:00:17 |
    |   1 |  MERGE               | TEMP_PERSON  |       |       |       |            |          |
    |   2 |   VIEW               |              |       |       |       |            |          |
    |*  3 |    HASH JOIN         |              | 92307 |    20M|  6384K|  1417   (1)| 00:00:17 |
    |   4 |     TABLE ACCESS FULL| TEMP_PERSON  | 93383 |  5289K|       |   192   (2)| 00:00:03 |
    |   5 |     TABLE ACCESS FULL| EXT_PERSON   | 92307 |    15M|       |    85   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
    Note
       - dynamic sampling used for this statement (level=2)
    21 rows selected.
    As you can see, the update now takes 00:00:17 to run (need to say more?) :)
    Thank you all for your ideas that helped us get to the solution.
    Much appreciated.
    Thanks

  • MERGE Statement - unable to get a stable set of rows in the source tables

    OWB Client: 10.1.0.2.0
    OWB Repository: 10.1.0.1.0
    I am trying to create a MERGE in OWB.
    I get the following error:
    ORA-12801: error signaled in parallel query server P004 ORA-30926: unable to get a stable set of rows in the source tables
    I have read the other posts regarding this and can't seem to get a fix.
    The target table has a unique index on the field that I am matching on.
    The "incoming" data doesn't have a unique index, but I have checked and confirmed that it is unique on the appropriate key.
    The "incoming" data is created by a join and filter in the mapping and I'd rather avoid having to load this data into a new table and add a unique index on this.
    Any help would be great.
    Thanks
    Laura

    Hello Laura,
    The MERGE statement does not require any constraints on its target table or source table. The only requirement is that two input rows cannot update the same target row, meaning that each existing target row can be matched by at most one input row (otherwise the MERGE would be non-deterministic, since you wouldn't know which of the input rows you would end up with in the target).
    If a table takes ages to load (and is not really big) I suspect that your mapping is not running in set mode and that it performs a full table scan on source data for each target row it produces.
    If you ARE running in set mode you should run explain plan to get a hint on what is wrong.
    Regarding your original mapping, try to set the target operator property:
    Match by constraint=no constraints
    and then check the Loading properties on each target column.
    Regards, Hans Henrik
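    As a concrete illustration of that requirement (a sketch with made-up table and column names, not the OWB mapping above), de-duplicating the source so that each key survives only once is the usual way to make ORA-30926 go away:

    MERGE INTO target_t t
    USING (SELECT src_key, col1, col2
           FROM  (SELECT s.*,
                         ROW_NUMBER() OVER (PARTITION BY src_key
                                            ORDER BY updated_at DESC) AS rn
                  FROM   source_t s)
           WHERE  rn = 1) s   -- keep exactly one input row per key
    ON (t.src_key = s.src_key)
    WHEN MATCHED THEN UPDATE SET t.col1 = s.col1, t.col2 = s.col2
    WHEN NOT MATCHED THEN INSERT (t.src_key, t.col1, t.col2)
                          VALUES (s.src_key, s.col1, s.col2);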

  • Any Substitution of Merge statement ?

    Hi All,
    I would just like to know whether there is any way to tune a MERGE update-or-insert statement based on a match. I have one proc where I am using a MERGE statement; based on the match condition it checks whether the record is present, and if yes it updates, else it inserts. There are almost 100k rows and this proc takes a long time (around 6-7 hrs) to complete,
    so I just want to know whether there is any way I can tune my query, or any other substitute I can use instead of MERGE.
    Many thanks in advance..
    Anwy

    There should be an applicable index on the destination table. The index should have good selectivity on the columns compared in the match condition; best is to use a unique key or primary key index.
    Additionally, the statistics of the destination table should be up to date (dbms_stats).
    Edited by: hm on 11.11.2010 02:25
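    A minimal sketch of that advice (the table, column, and index names are made up for illustration):

    -- A unique index (or primary key) on the columns used in the MERGE ON clause
    CREATE UNIQUE INDEX dest_match_ux ON dest_table (match_col1, match_col2);

    -- Keep optimizer statistics on the destination table current
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                    tabname => 'DEST_TABLE',
                                    cascade => TRUE);
    END;
    /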

  • CPU goes to 100% state svchost takes all resources on Qosmio F10

    CPU goes to 100% state svchost takes all resources. Norton does not find any illegal software. Have tried to list the processes (run... etc.). I get a long list, don't know what to do about it. Does anybody out there have some help?

    Hello,
    if you have had your notebook for a year or more and have not yet taken it in for an internal clean of the heatsink and essential parts:
    one possible problem here is a heatsink blocked with dust (CPU goes to 100%, svchost takes all resources).
    90% of high temperature problems, unexpected shutdowns, 100% CPU all the time without background processes running, a very hot keyboard area, and freezing programs are caused by a heatsink blocked with dust.
    The normal temperature for the processor is between 55C and 79C; for the motherboard it is around 55C~60C. When the heatsink is blocked those temperatures can reach 99C (processor) and around 80C (motherboard). All measured in degrees Celsius.
    You can check it out how "hot" your Qosmio is.
    Download the Everest ultimate 2006 edition V 3.50.761, look for it with Google search.
    Run it and see the temperatures for CPU, Processor and motherboard on your Qosmio:
    See:
    http://www.rabayjr.com/vista/qosmioeverest.jpg
    Run the System Stability Test (find it under the Tools menu) for approximately 10 minutes.
    Watch the temperatures in the taskbar. If the processor reaches more than 92C in stress mode, it's time to clean the heatsink and all essential parts of your Qosmio. Pay attention to the processor fan RPM too; if the fans run fast and noisy, you really must perform a clean.
    But it depends on the ambient environment: on a 30C summer day the processor temperature almost reaches 90C; this is normal in stress mode, and the fans run without noise in slow RPM mode.
    All measured in degrees Celsius; for Fahrenheit please convert.
    Francisco

  • Merge statement

    I would like to know if it is possible to identify the row that is causing the problem when you use a MERGE statement in PL/SQL. I know that if you create a cursor and then loop through the data you can identify the column, but what about if I have only a MERGE that will either insert or update? Is it possible to identify which row of data caused the problem? Thanks

    You can use an Error Logging Table.
    Nicolas.
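    A minimal sketch of that approach with MERGE (the table names here are made up; DBMS_ERRLOG creates the ERR$_ shadow table that captures each rejected row together with the Oracle error):

    -- One-time setup: create the error logging table for the target
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_T');
    END;
    /

    MERGE INTO target_t t
    USING source_t s
    ON (t.id = s.id)
    WHEN MATCHED THEN UPDATE SET t.val = s.val
    WHEN NOT MATCHED THEN INSERT (t.id, t.val) VALUES (s.id, s.val)
    LOG ERRORS INTO err$_target_t ('nightly load') REJECT LIMIT UNLIMITED;

    -- Each failing row is recorded here with the error message and the offending column values
    SELECT ora_err_mesg$, id, val FROM err$_target_t;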
