Performance Issue: Update Statement

Hi Team,
My current environment is Oracle 11g RAC.
My application team executes an update statement (of course, it is their daily activity) ...
It updates about 1 lakh (100,000) rows and normally takes about 3-4 minutes to run.
But today it is taking much longer, i.e., more than 8 hours.
So I generated the explain plan for the update statement and found that it is doing a full table scan.
Kindly assist me in fixing the issue by letting me know where and how to look for the problem.
Note: statistics are gathered and up to date.
Thanks in advance.
Regards

As you can see, there is no index supporting the update statement below -
UPDATE REMEDY_JOURNALS_FACT SET JNL_CREATED_BY_IDENTITY_KEY = ?, JNL_CREATED_BY_HR_KEY = ?, JNL_CREATED_BY_NTWRK_KEY = ?, JNL_MODIFIED_BY_IDENTITY_KEY = ?, JNL_MODIFIED_BY_HR_KEY = ?, JNL_MODIFIED_BY_NTWRK_KEY = ?, JNL_ASSGN_TO_IDENTITY_KEY = ?, JNL_ASSGN_TO_HR_KEY = ?, JNL_ASSGN_TO_NTWRK_KEY = ?, JNL_REMEDY_STATUS_KEY = ?, JOURNALID = ?, JNL_DATE_CREATED = ?, JNL_DATE_MODIFIED = ?, ENTRYTYPE = ?, TMPTEMPDATETIME1 = ?, RELATEDFORMNAME = ?, RELATED_RECORDID = ?, RELATEDFORMKEYWORD = ?, TMPRELATEDRECORDID = ?, ACCESS_X = ?, JOURNAL_TEXT = ?, DATE_X = ?, SHORTDESCRIPTION = ?, TMPCREATEDBY = ?, TMPCREATE_DATE = ?, TMPLASTMODIFIEDBY = ?, TMPMODIFIEDDATE = ?, TMPJOURNALID = ?, JNL_JOURNALTYPE = ?, COPIEDTOWORKLOG = ?, PRIVATE = ?, RELATEDKEYSTONEID = ?, URLLOCATION = ?, ASSIGNEEGROUP = ?, LAST_UPDATE_DT = ? WHERE REMEDY_JOURNALS_KEY = ?
Explain Plan -
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | UPDATE STATEMENT | | | | 1055 (100)| | | | | | |
| 1 | UPDATE | REMEDY_JOURNALS_FACT | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 784 | 1055 (1)| 00:00:05 | | | Q1,00 | P->S | QC (RAND) |
| 4 | PX BLOCK ITERATOR | | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWC | |
|* 5 | TABLE ACCESS STORAGE FULL| REMEDY_JOURNALS_FACT | 1 | 784 | 1055 (1)| 00:00:05 | 1 | 10 | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
5 - storage(:Z>=:Z AND :Z<=:Z AND "REMEDY_JOURNALS_KEY"=:36) filter("REMEDY_JOURNALS_KEY"=:36)
Note
- automatic DOP: skipped because of IO calibrate statistics are missing
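Since the WHERE clause filters on the single column REMEDY_JOURNALS_KEY and the plan shows a parallel full scan of the table, the usual fix is simply to index that column. A minimal sketch, assuming REMEDY_JOURNALS_KEY identifies rows selectively (the index name here is made up):
CREATE INDEX remedy_journals_fact_ix
    ON remedy_journals_fact (remedy_journals_key);

-- Re-check the plan; it should now show an index access
-- instead of TABLE ACCESS STORAGE FULL:
EXPLAIN PLAN FOR
UPDATE remedy_journals_fact
   SET last_update_dt = SYSDATE
 WHERE remedy_journals_key = :1;

SELECT * FROM TABLE(dbms_xplan.display);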

Similar Messages

  • Performance issue with statement

    This is the same as my other thread but with everything formatted.
    I'm having a lot of trouble trying to tune this statement. I have added some new indexes and even moved existing indexes to a 32k tablespace. The execution plan has improved, but when I execute the statement the data never returns. I can see where my bottleneck is, but I'm lost on what else I can do to improve performance.
    STATEMENT:
    SELECT DISTINCT c.oprclass, a.business_unit, i.descr, a.zsc_load,
                    b.ship_to_cust_id, b.zsc_load_status, f.ld_cnt,
                    b.zsc_mill_release, b.address_seq_num, d.name1,
                    e.address1 || ' - ' || e.city || ', ' || e.state || '  '
                    || e.postal
               FROM ps_zsc_ld a,
                    ps_zsc_ld_seq b,
                    ps_sec_bu_cls c,
                    ps_customer d,
                    ps_set_cntrl_group g,
                    ps_rec_group_rec r,
                    ps_bus_unit_tbl_fs i,
                    (SELECT   business_unit, zsc_load, COUNT (*) AS ld_cnt
                         FROM ps_zsc_ld_seq
                     GROUP BY business_unit, zsc_load) f,
                    (SELECT *
                       FROM ps_cust_address ca
                      WHERE effdt =
                               (SELECT MAX (effdt)
                                  FROM ps_cust_address ca1
                                 WHERE ca.setid = ca1.setid
                                   AND ca.cust_id = ca1.cust_id
                                   AND ca.address_seq_num = ca1.address_seq_num
                                   AND ca1.effdt <= SYSDATE)) e
              WHERE a.business_unit = b.business_unit
                AND a.zsc_load = b.zsc_load
                AND r.recname = 'CUSTOMER'
                AND g.rec_group_id = r.rec_group_id
                AND g.setcntrlvalue = a.business_unit
                AND d.setid = g.setid
                AND b.ship_to_cust_id = d.cust_id
                AND e.setid = g.setid
                AND b.ship_to_cust_id = e.cust_id
                AND b.address_seq_num = e.address_seq_num
                AND a.business_unit = f.business_unit
                AND a.zsc_load = f.zsc_load
                AND a.business_unit = c.business_unit
                AND a.business_unit = i.business_unit;
    EXECUTION PLAN:
    Plan
    SELECT STATEMENT  CHOOSECost: 1,052  Bytes: 291  Cardinality: 1                                                              
         25 SORT UNIQUE  Cost: 1,052  Bytes: 291  Cardinality: 1                                                         
              24 SORT GROUP BY  Cost: 1,052  Bytes: 291  Cardinality: 1                                                    
                   23 FILTER                                               
                        19 NESTED LOOPS  Cost: 1,027  Bytes: 291  Cardinality: 1                                          
                             17 NESTED LOOPS  Cost: 1,026  Bytes: 279  Cardinality: 1                                     
                                  15 NESTED LOOPS  Cost: 1,025  Bytes: 263  Cardinality: 1                                
                                       12 NESTED LOOPS  Cost: 1,024  Bytes: 227  Cardinality: 1                           
                                            10 NESTED LOOPS  Cost: 1,023  Bytes: 28,542  Cardinality: 134                      
                                                 7 HASH JOIN  Cost: 60  Bytes: 134,101  Cardinality: 803                 
                                                      5 NESTED LOOPS  Cost: 49  Bytes: 5,175  Cardinality: 45            
                                                           3 NESTED LOOPS  Cost: 48  Bytes: 1,230,725  Cardinality: 12,955       
                                                                1 TABLE ACCESS FULL SYSADM.PS_CUST_ADDRESS Cost: 20  Bytes: 3,465  Cardinality: 45 
                                                                2 INDEX RANGE SCAN UNIQUE SYSADM.TEST3 Cost: 1  Bytes: 5,130  Cardinality: 285 
                                                           4 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_REC_GROUP_REC Bytes: 20  Cardinality: 1       
                                                      6 INDEX FAST FULL SCAN NON-UNIQUE SYSADM.PS0CUSTOMER Cost: 10  Bytes: 252,460  Cardinality: 4,855            
                                                 9 TABLE ACCESS BY INDEX ROWID SYSADM.PS_ZSC_LD_SEQ Cost: 2  Bytes: 46  Cardinality: 1                 
                                                      8 INDEX RANGE SCAN UNIQUE SYSADM.TEST7 Cost: 1  Cardinality: 1            
                                            11 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_ZSC_LD Bytes: 14  Cardinality: 1                      
                                       14 TABLE ACCESS BY INDEX ROWID SYSADM.PS_BUS_UNIT_TBL_FS Cost: 2  Bytes: 36  Cardinality: 1                           
                                            13 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_BUS_UNIT_TBL_FS Cardinality: 1                      
                                  16 INDEX FULL SCAN UNIQUE SYSADM.PS_SEC_BU_CLS Cost: 2  Bytes: 96  Cardinality: 6                                
                             18 INDEX RANGE SCAN UNIQUE SYSADM.PS_ZSC_LD_SEQ Cost: 1  Bytes: 12  Cardinality: 1                                     
                        22 SORT AGGREGATE  Bytes: 31  Cardinality: 1                                          
                             21 FIRST ROW  Cost: 2  Bytes: 31  Cardinality: 1                                     
                                   20 INDEX RANGE SCAN (MIN/MAX) UNIQUE SYSADM.PS_CUST_ADDRESS Cost: 2  Cardinality: 5,364
    TRACE INFO:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.22       0.24          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1   1208.24    1179.86         92  221319711          0           0
    total        3   1208.46    1180.11         92  221319711          0           0
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 81 
    Rows     Row Source Operation
          0  SORT UNIQUE (cr=0 r=0 w=0 time=0 us)
          0   SORT GROUP BY (cr=0 r=0 w=0 time=0 us)
          0    FILTER  (cr=0 r=0 w=0 time=0 us)
          0     NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0      NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0       NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0        NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0         NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0          HASH JOIN  (cr=0 r=0 w=0 time=0 us)
    2717099           NESTED LOOPS  (cr=221319646 r=92 w=0 time=48747178172 us)
    220447566            NESTED LOOPS  (cr=872143 r=92 w=0 time=10965565169 us)
       4590             TABLE ACCESS FULL OBJ#(15335) (cr=99 r=92 w=0 time=58365 us)
    220447566             INDEX RANGE SCAN OBJ#(2684506) (cr=872044 r=0 w=0 time=2533034831 us)(object id 2684506)
    2717099            INDEX UNIQUE SCAN OBJ#(583764) (cr=220447568 r=0 w=0 time=23792811449 us)(object id 583764)
          0           INDEX FAST FULL SCAN OBJ#(15319) (cr=0 r=0 w=0 time=0 us)(object id 15319)
          0          TABLE ACCESS BY INDEX ROWID OBJ#(735431) (cr=0 r=0 w=0 time=0 us)
          0           INDEX RANGE SCAN OBJ#(2684517) (cr=0 r=0 w=0 time=0 us)(object id 2684517)
          0         INDEX UNIQUE SCAN OBJ#(550855) (cr=0 r=0 w=0 time=0 us)(object id 550855)
          0        TABLE ACCESS BY INDEX ROWID OBJ#(11041) (cr=0 r=0 w=0 time=0 us)
          0         INDEX UNIQUE SCAN OBJ#(582984) (cr=0 r=0 w=0 time=0 us)(object id 582984)
          0       INDEX FULL SCAN OBJ#(583859) (cr=0 r=0 w=0 time=0 us)(object id 583859)
          0      INDEX RANGE SCAN OBJ#(2684186) (cr=0 r=0 w=0 time=0 us)(object id 2684186)
          0     SORT AGGREGATE (cr=0 r=0 w=0 time=0 us)
          0      FIRST ROW  (cr=0 r=0 w=0 time=0 us)
          0       INDEX RANGE SCAN (MIN/MAX) OBJ#(15336) (cr=0 r=0 w=0 time=0 us)(object id 15336)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      db file scattered read                         14        0.00          0.00
      direct path write                            3392        0.00          0.06
      db file sequential read                         8        0.00          0.00

    I had an index on that table, but it was not where my bottleneck was showing, so I removed it. I have since added the index back, and it has clearly helped the execution plan.
    PLAN_TABLE_OUTPUT                                                                                                                           
    | Id  | Operation                           |  Name               | Rows  | Bytes | Cost (%CPU)|                                            
    |   0 | SELECT STATEMENT                    |                     |     1 |   291 |  1035   (1)|                                            
    |   1 |  SORT UNIQUE                        |                     |     1 |   291 |  1035   (1)|                                            
    |   2 |   SORT GROUP BY                     |                     |     1 |   291 |  1035   (1)|                                            
    |   3 |    FILTER                           |                     |       |       |            |                                            
    |   4 |     NESTED LOOPS                    |                     |     1 |   291 |  1010   (1)|                                            
    |   5 |      NESTED LOOPS                   |                     |     1 |   279 |  1009   (1)|                                            
    |   6 |       NESTED LOOPS                  |                     |     1 |   243 |  1008   (1)|                                            
    |   7 |        NESTED LOOPS                 |                     |     1 |   227 |  1006   (0)|                                            
    |   8 |         NESTED LOOPS                |                     |   135 | 28755 |  1005   (0)|                                            
    |   9 |          HASH JOIN                  |                     |   805 |   131K|    39   (0)|                                            
    |  10 |           HASH JOIN                 |                     |    45 |  5175 |    28   (0)|                                            
    |  11 |            TABLE ACCESS FULL        | PS_CUST_ADDRESS     |    45 |  3465 |    20   (0)|                                            
    |  12 |            NESTED LOOPS             |                     |  3398 |   126K|     7   (0)|                                            
    |  13 |             INDEX FAST FULL SCAN    | PS_REC_GROUP_REC    |     1 |    20 |     5   (0)|                                            
    |  14 |             INDEX RANGE SCAN        | TEST11              |  3398 | 61164 |     3   (0)|                                            
    |  15 |           INDEX FAST FULL SCAN      | PS0CUSTOMER         |  4855 |   246K|    10   (0)|                                            
    |  16 |          TABLE ACCESS BY INDEX ROWID| PS_ZSC_LD_SEQ       |     1 |    46 |     2   (0)|                                            
    |  17 |           INDEX RANGE SCAN          | PS0ZSC_LD_SEQ       |     1 |       |     1   (0)|                                            
    |  18 |         INDEX UNIQUE SCAN           | PS_ZSC_LD           |     1 |    14 |            |                                            
    |  19 |        INDEX FULL SCAN              | PS_SEC_BU_CLS       |     3 |    48 |     2   (0)|                                            
    |  20 |       TABLE ACCESS BY INDEX ROWID   | PS_BUS_UNIT_TBL_FS  |     1 |    36 |     2  (50)|                                            
    |  21 |        INDEX UNIQUE SCAN            | PS_BUS_UNIT_TBL_FS  |     1 |       |            |                                            
    |  22 |      INDEX RANGE SCAN               | PS_ZSC_LD_SEQ       |     1 |    12 |     1   (0)|                                            
    |  23 |     SORT AGGREGATE                  |                     |     1 |    31 |            |                                            
    |  24 |      FIRST ROW                      |                     |     1 |    31 |     2   (0)|                                            
    |  25 |       INDEX RANGE SCAN (MIN/MAX)    | PS_CUST_ADDRESS     |  5364 |       |     2   (0)|                                            
    ------------------------------------------------------------------------------------------------
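    One rewrite worth testing here is to replace the correlated MAX(effdt) subquery with an analytic row-numbering pass over PS_CUST_ADDRESS, so the latest effective-dated address is found in a single scan. A sketch only; whether it actually helps depends on the optimizer and the data:
    -- Drop-in replacement for the inline view "e" in the statement above
    (SELECT *
       FROM (SELECT ca.*,
                    ROW_NUMBER() OVER (PARTITION BY ca.setid, ca.cust_id, ca.address_seq_num
                                       ORDER BY ca.effdt DESC) rn
               FROM ps_cust_address ca
              WHERE ca.effdt <= SYSDATE)
      WHERE rn = 1) e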

  • Performance issue - Select statement

    Hi, I have 10 lakh (1,000,000) records in the KONP table. If I try to display all the records in SE11, it gives the short dump TSV_TNEW_PAGE_ALLOC_FAILED. I know this is because of insufficient memory on the server. Is there any other way to get the data? How can I optimise the SELECT statement below when the table holds that much data?
    i_condn_data has 8 lakh (800,000) records.
    SELECT knumh kznep valtg valdt zterm
            FROM konp
            INTO TABLE i_condn_data_b
            FOR ALL ENTRIES IN i_condn_data
            WHERE knumh = i_condn_data-knumh
            AND kschl = p_kschl.
    Please suggest.

    Hi,
    try using "UP TO n ROWS" to limit the quantity of data selected in each loop step.
    Something like this:
    CLEAR new_value_for_selection.
    flag = 'X'.
    WHILE flag = 'X'.
      SELECT knumh kznep valtg valdt zterm
             FROM konp
             INTO TABLE i_condn_data_b UP TO one_million ROWS
             WHERE knumh > new_value_for_selection
               AND kschl = p_kschl
             ORDER BY knumh.
*     Remember the highest KNUMH of this block as the lower
*     bound for the next pass.
      DESCRIBE TABLE i_condn_data_b LINES i.
      READ TABLE i_condn_data_b INDEX i.
      new_value_for_selection = i_condn_data_b-knumh.
*     ... your logic for table i_condn_data_b ...
      IF one_million > i.
        CLEAR flag.
      ENDIF.
    ENDWHILE.
    Regards

  • Performance Issue - Update response time

    Hi,
    I am trying to get the equipment data, and I have to link it with the asset. The only way I could find is through the M_EQUIA view: I select the equipment number from M_EQUIA, passing the asset number as the key, and from there I go to EQUI and get the other data.
    But when I select from M_EQUIA, the database time is high. Can someone suggest a better option than M_EQUIA for getting the equipment details with the asset as the key? I also have the cost center details with me.
    Thanks,

    Hi,
    Please find below the select on m_equia and the further select based on it.
    * Get asset-related data from the view
    IF NOT i_asset[] IS INITIAL.
       SELECT anlnr
              equnr
         FROM m_equia
         INTO TABLE i_asst_equi
         FOR ALL ENTRIES IN i_asset
         WHERE anlnr = i_asset-anln1 AND
               anlun = i_asset-anln2 AND
               bukrs = i_asset-bukrs.
       IF sy-subrc = 0.
         SORT i_asst_equi BY equnr.
       ENDIF.
    ENDIF.
    * Get equipment-related data
    IF NOT i_asst_equi[] IS INITIAL.
       SELECT equi~equnr
              herst
              typbz
              eqart
              mapar
              serge
              iwerk
         FROM equi
         INNER JOIN equz
     ON equi~equnr = equz~equnr
         INTO TABLE i_equipment
         FOR ALL ENTRIES IN i_asst_equi
         WHERE equi~equnr = i_asst_equi-equnr.
       SORT i_equipment BY equnr.
    ENDIF.
    Thanks.

  • Performance degradation in Update statement

    Hi
    I have a table t1 with 3 of its columns being c1, c2, c3.
    There is a non-unique index on (c2, c3).
    I am using an update statement such as: update t1 set c1 = 'value' where c2 = 'some_value' and c3 = 'some_value2';
    This works fine when the number of rows for a given (c2, c3) is small (~100), but it takes a long time when there are ~30,000.
    And it has to be run many times in my application (a timeout is occurring now).
    Can anybody suggest a solution ?
    Thanks

    First off, setting a table to NOLOGGING does not affect the rate of redo generation unless you are also doing direct-path inserts (which is obviously not the case here and which come with their own set of issues). Note that the amount of redo being generated in these two examples is quite similar.
    SCOTT @ jcave102 Local> create table all_obj_cpy as select * from all_objects;
    Table created.
    Elapsed: 00:00:10.78
    SCOTT @ jcave102 Local> set autotrace traceonly;
    SCOTT @ jcave102 Local> alter table all_obj_cpy logging;
    Table altered.
    Elapsed: 00:00:00.01
    SCOTT @ jcave102 Local> update all_obj_cpy
      2  set last_ddl_time = sysdate - 1;
    42511 rows updated.
    Elapsed: 00:00:01.45
    Execution Plan
    Plan hash value: 1645016300
    | Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |             | 43776 |   384K|   137   (3)| 00:00:02 |
    |   1 |  UPDATE            | ALL_OBJ_CPY |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| ALL_OBJ_CPY | 43776 |   384K|   137   (3)| 00:00:02 |
    Note
       - dynamic sampling used for this statement
    Statistics
            556  recursive calls
          46075  db block gets
           1558  consistent gets
              0  physical reads
       13575764  redo size
            924  bytes sent via SQL*Net to client
            965  bytes received via SQL*Net from client
              6  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
          42511  rows processed
    SCOTT @ jcave102 Local> alter table all_obj_cpy nologging;
    Table altered.
    Elapsed: 00:00:00.01
    SCOTT @ jcave102 Local> update all_obj_cpy
      2  set last_ddl_time = sysdate - 2;
    42511 rows updated.
    Elapsed: 00:00:01.32
    Execution Plan
    Plan hash value: 1645016300
    | Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT   |             | 43776 |   384K|   137   (3)| 00:00:02 |
    |   1 |  UPDATE            | ALL_OBJ_CPY |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| ALL_OBJ_CPY | 43776 |   384K|   137   (3)| 00:00:02 |
    Note
       - dynamic sampling used for this statement
    Statistics
            561  recursive calls
          44949  db block gets
           1496  consistent gets
              0  physical reads
       12799600  redo size
            924  bytes sent via SQL*Net to client
            965  bytes received via SQL*Net from client
              6  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
           42511  rows processed
    Second, if you did manage to do an unlogged operation, make absolutely certain you understand the recovery implications. You must do a complete backup of the database after an unlogged operation or the table will not be recovered in the event of a database failure. If you have a standby database for disaster recovery, unlogged operations would cause the standby to be useless (hence the option to force logging at the tablespace and/or database level).
    While it's certainly possible that this is an Oracle server configuration problem, it would be relatively tough to configure a system so that a 30,000 row update would force excessive log switches. If it were a configuration problem, you'd expect that any update of 30,000 rows would be slow and that multiple sessions running smaller updates would also be slow, but I don't believe that describes the symptoms the original poster is concerned about.
    Justin
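    For reference, the force-logging switches mentioned above look like this (a sketch; the tablespace name is illustrative):
    -- Prevent unlogged operations in one tablespace:
    ALTER TABLESPACE users FORCE LOGGING;
    -- Or database-wide, which standby/Data Guard configurations normally use:
    ALTER DATABASE FORCE LOGGING;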

  • Not Updating Customized Table when System having Performance Issue

    Hi,
    This is actually the same topic as "Not Updating Customized Table when System having Performance Issue", which was posted last December by Leonard Tan regarding the user exit EXIT_SAPLMBMB_001.
    Recently we changed the function module z_mm_save_hide_qty to run in an update task. However, this caused even more data not to be updated. Hence we put back the old version (without the update task). But now it is not working as it used to (e.g. version 1 - 10 records not updated; version 2, with update task - 20 records not updated; back to version 1 - 20 records not updated).
    I tried debugging the program; however, whenever I debug, nothing goes wrong and the data is updated correctly.
    Please advise if anyone has any idea why this is happening. Many thanks.
    Regards,
    Janet

    Hi Janet,
    you are right. It is a basic rule not to do any COMMIT or RFC calls in a user exit.
    Have a look at SAP note 92550. It says that exit EXIT_SAPLMBMB_001 is called in the update routine MB_POST_DOCUMENT, and that this routine is itself called IN UPDATE TASK from function 'MB_UPDATE_TASKS'.
    SAP also tells us not to do any updates on SAP system tables like MBEW, MARD, MSEG.
    Before the exit is called, 'MB_DOCUMENT_BADI' is now called with methods MB_DOCUMENT_BEFORE_UPDATE and MB_DOCUMENT_UPDATE. You may have more success implementing the BAdI.
    I don't know your situation and goal so this is all I can tell you now.
    Good luck!
    Regards,
    Clemens

  • Performance issue in update new records from Ekko to my ztable

    I'm making changes to an existing program.
    In this program I need to copy any new purchase orders created in EKKO-EBELN to my ztable-ebeln.
    I need to update my ztable with the new records created on that particular date.
    This is a daily running job.
    This is the code I wrote; about 150,000 records come into this loop, and I'm getting a performance issue. Can anyone suggest how to avoid it?
    loop at tb_ekko.
        at new ebeln.
          read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
          if sy-subrc <> 0.
            tb_ztable-ebeln = tb_ekko-ebeln.
            tb_ztable-zlimit = ' '.
            insert ztable from tb_ztable.
          endif.
        endat.
      endloop.
    Thanks
    Hema.

    Modify your code as follows:
    loop at tb_ekko.
      at new ebeln.
        read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
        if sy-subrc <> 0.
          tb_ztable_new-ebeln = tb_ekko-ebeln.
          tb_ztable_new-zlimit = ' '.
          append tb_ztable_new.
          clear tb_ztable_new.
        endif.
      endat.
    endloop.
*   One array insert after the loop instead of 150,000 single-row inserts.
    insert ztable from table tb_ztable_new.
    Regards,
    ravi

  • How to update this query and avoid performance issue?

    Hi, guys:
    I wonder how to update the following query to make it aware of weekend days. My boss wants the query to consider business days only. Below is just a portion of the query:
    select count(distinct cmv.invoicekey ) total ,'3' as type, 'VALID CALL DATE' as Category
    FROM cbwp_mv2 cmv
    where cmv.colresponse=1
    And Trunc(cmv.Invdate)  Between (Trunc(Sysdate)-1)-39 And (Trunc(Sysdate)-1)-37
    And Trunc(cmv.Whendate) Between cmv.Invdate+37 And cmv.Invdate+39
    CBWP_MV2 is a materialized view used to tune the query. This query is written for a data warehouse application, and CBWP_MV2 is refreshed every evening. My boss wants the conditions in the query to consider only business days: for example, if (Trunc(Sysdate)-1)-39 falls on a weekend, I need to move the start of the range to the next business day, and if (Trunc(Sysdate)-1)-37 falls on a weekend, I need to move the end of the range to the next business day. But I should always keep the range within 3 business days. If there is an overlap with a weekend, always push to the later business days.
    Question: how can I implement this and avoid a performance issue? I am afraid that if I use a function, it will greatly reduce performance. This view already contains more than 100K rows.
    thank you in advance!
    Sam

    You are already using a function, since you're using TRUNC on invdate and whendate.
    If you have indexes on those columns, then they will not be used because of the TRUNC.
    Consider omitting the TRUNC or testing with Function Based Indexes.
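    A sketch of both options (the index name is made up; verify against your own plans). The range form keeps a plain index on invdate usable and is logically equivalent to the original TRUNC predicate:
    -- Instead of:
    --   TRUNC(cmv.invdate) BETWEEN (TRUNC(SYSDATE)-1)-39 AND (TRUNC(SYSDATE)-1)-37
    -- use an open-ended range on the raw column:
    WHERE cmv.invdate >= TRUNC(SYSDATE) - 40
      AND cmv.invdate <  TRUNC(SYSDATE) - 37
    -- Or, if the TRUNC must stay, back it with a function-based index:
    CREATE INDEX cbwp_mv2_trunc_invdate_ix ON cbwp_mv2 (TRUNC(invdate));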
    Regarding business days:
    If you search this forum, you'll find lots of examples.
    Here's another 'golden oldie': http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:185012348071
    Regarding performance:
    Steps to take are explained in the links you find here: {message:id=9360003}
    Read them; they are more than worth it for this and future questions.

  • Performance issue with the ABAP statements

    Hello,
    Can someone please help me with the statements below, where I am having a performance problem?
    SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
    SORT CHDATE by DOC_NUMBER.
    SORT SOURCE_PACKAGE by DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
       READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
       MOVE WA_CHDATE-CREATEDON  to WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE to CIDATE.
    ENDLOOP.
    I wrote the above code for the following requirement:
    1. I get the data from 2 tables.
    2. Both tables have the common field CREATEDON (a date), and both tables hold values for it.
    3. While reading the 2 tables and copying to a third table, I have to modify that field.
    I am getting performance issues with the above statements.
    Thanks

    Hello,
    try a select like the following one instead of your code:
    SELECT t1~field t2~field2 ...
      INTO TABLE it_table
      FROM table1 AS t1 INNER JOIN table2 AS t2
        ON t1~doc_number = t2~doc_number.

  • After update performance issue with certain PCs

    Over the past 2 months there was an update to the Forefront client that caused a strange issue on some of our machines. Basically, when we update device drivers, MsMpEng.exe goes nuts and uses anywhere from 50% to 98% of the CPU for the entire duration of the driver install/update.
    The immediate result is that some of the larger device driver installs (like video drivers) actually time out because of this. Using procmon I traced the cause of the delay: it looks like MsMpEng is scanning the setupapi.dev.log file over and over and over.. hundreds.. maybe thousands of times. We noticed that if we turn off logging for device driver update/install and app install logging, the issue disappears. We've had this level of logging for years, and only in the past 2 months or so has this issue started to pop up. The setupapi.dev.log file is rather large, so I can see why it causes a delay if the file has to be scanned over and over.
    We aren't sure what changed. We used an older image that had an older version of the Forefront client installed and made sure it didn't update to a newer client version. That version of Forefront had no issue with device drivers timing out. After updating the client, we now have the issue. A change was made recently, and we believe the Forefront client is behaving differently.
    Any ideas about what could cause this?
    Thanks.

    Hi,
    Thanks for your question.
    How did you perform the update of Forefront Client Security, from Windows Update or manually? In addition, what are the editions of Forefront Client Security now and before you updated?
    Best regards,
    Susie

  • Performance Issues DPS - Single Edition - PDF Rendering, Multi-State Text

    Hello,
    Please advise on the above issue...
    On a Retina display iPad, I'm having issues with multi-state objects that contain text. The column on the left is crisp text, a non-multi-state object. However, when I place the text in a multi-state object (right), it is not rendered in high resolution; it is a bit pixelated.
    Single Issue - Creative Cloud License
    iPad Gen III Retina Original Version
    My application is compiled using app version: v24
    Also, I experience performance issues using PDF mode... when you change a page, everything is super grainy, and then ~2 seconds later it renders crisp. It used to be pre-rendered, or much faster. Perhaps I was using PNG? I switched to PDF mode as recommended in countless articles. Is the slow render just a side effect of PDF? Anything I can do to optimize? Recommendations?
    Thanks so much!
    Richie

    Thank You, Bob!
    Vector fixed it! I only have to change it on 29872498329 pages
    One thing I just realized is that I have the same issue with buttons. I have a button with Normal and Click states. It has the same rendering issue (grainy), and I couldn't find a place to make the button render as 'vector'.

  • Long running update statement needs a performance improve.

    Hi,
    I have the following update statement, which runs for over 3 hours and updates 215 million rows; is there any way I can rewrite it so it performs better?
    UPDATE TABLE1 v
            SET closs = (SELECT MIN(slloss)
                               FROM TABLE2 l
            WHERE polnum = slpoln
            AND      polren = slrenn
            AND      polseq = slseqn
            AND      vehnum = slvehn
    AND      linecvg = sllcvg);
    Here is the execution plan:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)|
    |   0 | UPDATE STATEMENT              |             |   214M|  4291M|  2344K  (2)|
    |   1 |  UPDATE                       | TABLE1      |       |       |            |
    |   2 |   TABLE ACCESS FULL           | TABLE1      |   214M|  4291M|  2344K  (2)|
    |   3 |   SORT AGGREGATE              |             |     1 |    21 |            |
    |   4 |    TABLE ACCESS BY INDEX ROWID| TABLE2      |     1 |    21 |     5   (0)|
    |   5 |     INDEX RANGE SCAN          | TABLE2_N2   |     2 |       |     3   (0)|
    ----------------------------------------------------------------------------------
    Here are the create table statements for TABLE1 (215 million rows) and TABLE2 (1 million rows):
    CREATE TABLE  TABLE2 (SLCLMN VARCHAR2(11 byte),
        SLFEAT NUMBER(2), SLPOLN NUMBER(9), SLRENN NUMBER(2),
        SLSEQN NUMBER(2), SLVEHN NUMBER(2), SLDRVN NUMBER(2),
        SLCVCD VARCHAR2(6 byte), SLLCVG NUMBER(4), SLSABB
        VARCHAR2(2 byte), SLPRCD VARCHAR2(3 byte), SLRRDT
        NUMBER(8), SLAYCD NUMBER(7), SLCITY VARCHAR2(28 byte),
        SLZIP5 NUMBER(5), SLCEDING VARCHAR2(1 byte), SLCEDELOSS
        VARCHAR2(1 byte), SLRISKTYPE VARCHAR2(1 byte), SLVEHDESIG
        VARCHAR2(1 byte)) 
        TABLESPACE S_DATA PCTFREE 10 PCTUSED 0 INITRANS 1
        MAXTRANS 255
        STORAGE ( INITIAL 106496K NEXT 0K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0)
        NOLOGGING
        MONITORING;
    CREATE TABLE  TABLE1 (POLNUM NUMBER(9) NOT NULL,
        POLREN NUMBER(2) NOT NULL, POLSEQ NUMBER(2) NOT NULL,
        VEHNUM NUMBER(2) NOT NULL, CVGCODE VARCHAR2(8 byte) NOT
        NULL, LINECVG NUMBER(4), MAINVEH CHAR(1 byte), MAINCVG
        CHAR(1 byte), CVGLIMIT VARCHAR2(13 byte), CVGDED
        VARCHAR2(10 byte), FULLCVG CHAR(1 byte), CVGGRP CHAR(4
        byte), CYCVG CHAR(1 byte), POLTYPE CHAR(1 byte),
        CHANNEL CHAR(2 byte), UWTIER VARCHAR2(6 byte), SUBTIER
        VARCHAR2(6 byte), THITIER VARCHAR2(3 byte), COMPGRP
        VARCHAR2(8 byte), PRODGRP VARCHAR2(6 byte), UWSYS
        VARCHAR2(6 byte), BRAND VARCHAR2(8 byte), COMP NUMBER(2),
        STATE CHAR(2 byte), PROD CHAR(3 byte), RRDATE DATE,
        STATENUM NUMBER(2), EFT_BP CHAR(1 byte), AGYCODE
        NUMBER(7), AGYSUB CHAR(3 byte), AGYCLASS CHAR(1 byte),
        CLMAGYCODE NUMBER(7), AGYALTCODE VARCHAR2(25 byte),
        AGYRELATION VARCHAR2(10 byte), RATECITY VARCHAR2(28 byte),
        RATEZIP NUMBER(5), RATETERR NUMBER, CURTERR NUMBER,
        CURRRPROD CHAR(6 byte), CURRRDATE DATE, RATESYMB NUMBER,
        SYMBTYPE CHAR(1 byte), CVGTERR NUMBER(3), CVGSYMB
        NUMBER(3), VEHTERR NUMBER, VEHYEAR NUMBER, VEHMAKE
        VARCHAR2(6 byte), VEHMODEL VARCHAR2(10 byte), VEHSUBMOD
        VARCHAR2(10 byte), VEHBODY VARCHAR2(6 byte), VEHVIN
        VARCHAR2(10 byte), VEHAGE NUMBER(3), VEHSYMB NUMBER,
        DRVNUM NUMBER, DUMMYDRV CHAR(1 byte), DRVAGE NUMBER(3),
        DRVSEX VARCHAR2(1 byte), DRVMS VARCHAR2(1 byte), DRVPTS
        NUMBER(3), DRVPTSDD NUMBER(3), DRVGRP CHAR(7 byte),
        DRVSR22 VARCHAR2(1 byte), DRVVTIER CHAR(2 byte),
        BUSUSESUR CHAR(1 byte), EXCLDRVSUR CHAR(1 byte),
        CSCODED NUMBER(5), CSACTUAL NUMBER(5), CSOVERRD
        NUMBER(5), ANNMILES NUMBER(6), DLORIGDATE DATE,
        DLLASTDATE DATE, DLMONTHS NUMBER(6), MATUREDSC CHAR(1
        byte), PERSISTDSC CHAR(1 byte), ANNUALMILES_RANGE
        VARCHAR2(25 byte), CEDEDLOSS VARCHAR2(1 byte), CEDEDPOL
        VARCHAR2(1 byte), CEDEDCVG VARCHAR2(1 byte),
        CONSTRAINT TABLE1_PK PRIMARY KEY(POLNUM, POLREN,
        POLSEQ, VEHNUM, CVGCODE)
        USING INDEX 
        TABLESPACE V_INDEX
        STORAGE ( INITIAL 3874816K NEXT 0K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0) PCTFREE 10 INITRANS 2 MAXTRANS 255)
        TABLESPACE U_DATA PCTFREE 10 PCTUSED 0 INITRANS 1
        MAXTRANS 255
        STORAGE ( INITIAL 4194304K NEXT 0K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0)
        NOLOGGING
    MONITORING;
    Thank you very much!

    user6053424 wrote:
    Hi,
    I have the following update statement, which runs for over 3 hours and updates 215 million rows; is there any way I can rewrite it so it performs better?
    UPDATE TABLE1 v
    SET closs = (SELECT MIN(slloss)
    FROM TABLE2 l
    WHERE polnum = slpoln
    AND      polren = slrenn
    AND      polseq = slseqn
    AND      vehnum = slvehn
    AND      linecvg = sllcvg);
    Are you trying to perform a correlated update? If so, you can do something similar to the following.
    Sample data:
    create table t1 as (
       select 1 id, 10 val from dual union all
       select 1 id, 10 val from dual union all
       select 2 id, 10 val from dual union all
       select 2 id, 10 val from dual union all
       select 2 id, 10 val from dual);
    Table created
    create table t2 as (
       select 1 id, 100 val from dual union all
       select 1 id, 200 val from dual union all
       select 2 id, 500 val from dual union all
       select 2 id, 600 val from dual);
    Table created
    The MERGE will update each row based on the maximum value for each ID:
    merge into t1
    using (select id, max(val) max_val
           from t2
           group by id) subq
    on (t1.id = subq.id)
    when matched then update
        set t1.val = subq.max_val;
    Done
    select * from t1;
            ID        VAL
             1        200
             1        200
             2        600
             2        600
             2        600
    If you want all rows updated to the same value, then remove the ID grouping from the subquery and from the ON clause.
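    Applied to the tables in the question, the same pattern might look like the sketch below. Two caveats: the posted DDL omits the closs and slloss columns, so those names are taken from the UPDATE statement itself; and unlike the original statement, this MERGE leaves closs untouched for TABLE1 rows that have no match in TABLE2, whereas the correlated UPDATE sets them to NULL:
    MERGE INTO table1 v
    USING (SELECT slpoln, slrenn, slseqn, slvehn, sllcvg,
                  MIN(slloss) AS min_loss
             FROM table2
            GROUP BY slpoln, slrenn, slseqn, slvehn, sllcvg) l
       ON (    v.polnum  = l.slpoln
           AND v.polren  = l.slrenn
           AND v.polseq  = l.slseqn
           AND v.vehnum  = l.slvehn
           AND v.linecvg = l.sllcvg)
     WHEN MATCHED THEN UPDATE
      SET v.closs = l.min_loss;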

  • Update Performance Issue

    Hi everyone. I need to improve the update performance on the table called dcag_document_item_lines.
    STATEMENT
    update /*+ NO_CPU_COSTING */ dcag_document_item_lines a
    set A.ALIEN_ITEM = 'N'
    where EXISTS (SELECT b.ip_part_code
    from pending_dcag_pt6_mra B
    where A.CODE = B.ip_part_code)
    AND A.item_line_type=('PA')
    AND EXISTS (SELECT c.document_id
    from dcag_documents c
    where c.document_id = a.document_id
    and c.document_date >= '01-jan-2007');
    NUMBER OF RECORDS
    dcag_document_item_lines = 200 million records
    pending_dcag_pt6_mra = 1.7 million
    dcag_documents = 10 million
    INDEXED COLUMNS
    dcag_document_item_lines.DOCUMENT_ID -----index1
    dcag_document_item_lines.LINE_ITEM_NO -----index2
    dcag_documents.CLIENT_ID -----D_CLIENT_DEP_DOC_DATE_IND
    DEPARTMENT_TYPE
    DOCUMENT_DATE
    MODEL_ID
    POSTCODE_TYPE
    AGE_AT_EVENT
    VEHICLE_ID
    dcag_documents.DOCUMENT_ID ----DCAG_DOCUMENTS_PK
    PENDING_DCAG_PT6_MRA.IP_PART_CODE -----PENDING_DCAG_PT6_MRA_IDX
    PENDING_DCAG_PT6_MRA.SYS_NC00010$------PT6_PRINT_PART_CODE_IDX
    PENDING_DCAG_PT6_MRA.SYS_NC00011$------PT6_SORTCODE_IDX
    ORACLE VER: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    EXECUTION PLAN
    Statement Id=1638424 Type=
    Cost=2.0960068036193E-317 TimeStamp=23-11-09::11::33:52
    (1) UPDATE STATEMENT CHOOSE
    Est. Rows: 18,609,815 Cost: 580,793
    UPDATE DCAG.DCAG_DOCUMENT_ITEM_LINES
    (6) HASH JOIN RIGHT SEMI
    Est. Rows: 18,609,815 Cost: 580,793
    (2) TABLE TABLE ACCESS FULL DCAG.DCAG_DOCUMENTS [Analyzed]
    (2) Blocks: 633,918 Est. Rows: 12,113,924 of 24,916,950 Cost: 96,214
    Tablespace: DCAG_T
    (5) HASH JOIN RIGHT SEMI
    Est. Rows: 25,321,324 Cost: 460,759
    (3) INDEX INDEX FAST FULL SCAN DCAG.PENDING_DCAG_PT6_MRA_IDX [Analyzed]
    Est. Rows: 1,700,302 Cost: 1,694
    (4) TABLE TABLE ACCESS FULL DCAG.DCAG_DOCUMENT_ITEM_LINES [Analyzed]
    (4) Blocks: 2,778,518 Est. Rows: 73,066,252 of 199,390,240 Cost: 421,708
    Tablespace: DCAG_T
    Any help will be appreciated.
    Thanks guys.

    A few questions:
    1. Are the statistics up to date?
    2. Why are you using a hint in the update statement?
    3. How many rows are you expecting to be updated? Is the estimate of ~18M correct in the execution plan?
    This may be some useful reading for you:
    {message:id=1812597}
    {thread:id=863295}
    Also, when you post things like queries and execution plans, please place them in {code} tags so the formatting is preserved. It's really hard to read (especially an execution plan) in the normal forum text.
    Thanks!
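    On question 1, a typical way to refresh the statistics (object names taken from the plan above; adjust the estimate options to taste):
    BEGIN
       dbms_stats.gather_table_stats(
          ownname => 'DCAG',
          tabname => 'DCAG_DOCUMENT_ITEM_LINES',
          cascade => TRUE);
    END;
    /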

  • How to track a column is not updated in a update statement issued.

    Hello All,
    Is there a way to write a trigger that fires when a particular column is not specified in an update statement?
    For example, consider a table with 20 columns (say column1...column20). The trigger has to fire only when column15 (say) is not specified.
    I know
    CREATE OR REPLACE TRIGGER test_trigger
    BEFORE UPDATE
    OF COLUMN1, COLUMN2, ..., COLUMN20 -- except COLUMN15
    ON TESTTABLE
    FOR EACH ROW
    BEGIN
       ...
    END;
    /
    The above trigger would solve my problem, but I don't want to list all the columns in it; that would cause maintenance problems afterwards.
    Is there any way to mention something like NOT OF COLUMN in the trigger?
    Regards,
    Abhijit.

    That trigger would get fired for every column except column 15.
    What do you mean by "when a particular column is not specified in update statement"? Do you mean that the column is not mentioned at all in the update statement, or that the value of that column is not being changed even if it is mentioned in the statement?
    If you mean the former, the UPDATING('COLUMN15') conditional predicate can test exactly that (see the sketch after this reply). However, if you want to do something only if the value in column15 is unchanged, then something along the lines of:
    CREATE OR REPLACE TRIGGER test_trigger
       BEFORE UPDATE ON testtable
       FOR EACH ROW
    BEGIN
       IF (:new.column15 IS NULL AND :old.column15 IS NULL) OR
          (:new.column15 IS NOT NULL AND :old.column15 IS NOT NULL AND
           :new.column15 = :old.column15) THEN
          < do whatever for no changes >
       ELSE
          < do nothing or something else for changes >
       END IF;
    END;
    John
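    A sketch of the UPDATING predicate mentioned above, which runs the logic only when COLUMN15 is absent from the SET clause of the triggering statement:
    CREATE OR REPLACE TRIGGER test_trigger
       BEFORE UPDATE ON testtable
       FOR EACH ROW
    BEGIN
       IF NOT UPDATING('COLUMN15') THEN
          NULL;  -- column15 was not specified; do whatever is needed here
       END IF;
    END;
    /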

  • Update statement issue

    Whenever I join two tables in an update statement, the following error occurs, even though all the columns exist:
    update bio1
    set appl_uid = demdata.appl_uid
    where bio1.cnic = demdata.cnic
    ORA-00904: demdata.cnic: invalid identifier

    No dear, I am at the SQL*Plus prompt and I have access to both of these tables. demdata is the table and appl_uid is the column. I am just trying to update the first table's appl_uid column by joining to the second table's appl_uid column on the common field CNIC shared by the two tables. The simplest of updates is not working, and I am getting late for lunch; please help.
    Regards,
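    For reference, Oracle's UPDATE has no join clause of that form; the standard pattern here is a correlated subquery. A sketch, assuming cnic is unique in demdata (if it is not, the subquery will raise ORA-01427):
    UPDATE bio1 b
       SET b.appl_uid = (SELECT d.appl_uid
                           FROM demdata d
                          WHERE d.cnic = b.cnic)
     WHERE EXISTS (SELECT 1
                     FROM demdata d
                    WHERE d.cnic = b.cnic);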
