Update query performance - Challenging one

Hi All,
Below is the procedure I have created to update six columns of one table. I have to update almost 9,673,533 records.
With this procedure, updating just 500 records took around 1.5 hours.
What would be the best strategy to improve the procedure's performance? Please share your ideas for making it run in less time.
create or replace procedure test is
  cursor cur_web_prd is
    select rp_id
      from dw_retl_prds t;
  lc_short_brief       varchar2(300);
  lc_long_brief        varchar2(2000);
  ld_date_first_online date;
  ld_date_sent         date;
  lc_rp_id             varchar2(20);
  lc_copos_ssl2        varchar2(40);
  lc_qoslistid         varchar2(40);
  ln_web_prd_cnt       number;
  ln_copos_cnt         number;
  ln_qolist_cnt        number;
begin
  for i in cur_web_prd loop
    select count(*)
      into ln_web_prd_cnt
      from dw_retl_prds t, web_prds@nrstp s
     where t.rp_id = s.rp_rp_id
       and t.rp_id = i.rp_id;
    if ln_web_prd_cnt = 1 then
      select t.rp_id,
             s.short_brief,
             s.long_brief,
             s.date_first_online,
             s.date_sent
        into lc_rp_id,
             lc_short_brief,
             lc_long_brief,
             ld_date_first_online,
             ld_date_sent
        from dw_retl_prds t, web_prds@nrstp s
       where t.rp_id = s.rp_rp_id
         and t.rp_id = i.rp_id;
      select count(*)
        into ln_copos_cnt
        from dw_retl_prd_cat_val_us@dwenq t, dw_retl_prd_cat_vals@dwenq s
       where t.rp_rp_id = i.rp_id
         and trunc(sysdate) between t.dw_eftv_from and
             nvl(t.dw_eftv_to, trunc(sysdate))
         and s.pc_pc_id = 'COPOS_SSL2'
         and trunc(sysdate) between s.dw_eftv_from and
             nvl(s.dw_eftv_to, trunc(sysdate))
         and t.pc_pc_id = s.pc_pc_id
         and t.pcv_pcv_id = s.pcv_id;
      if ln_copos_cnt = 1 then
        select s.descr
          into lc_copos_ssl2
          from dw_retl_prd_cat_val_us@dwenq t, dw_retl_prd_cat_vals@dwenq s
         where t.rp_rp_id = i.rp_id
           and trunc(sysdate) between t.dw_eftv_from and
               nvl(t.dw_eftv_to, trunc(sysdate))
           and s.pc_pc_id = 'COPOS_SSL2'
           and trunc(sysdate) between s.dw_eftv_from and
               nvl(s.dw_eftv_to, trunc(sysdate))
           and t.pc_pc_id = s.pc_pc_id
           and t.pcv_pcv_id = s.pcv_id;
      else
        lc_copos_ssl2 := null;
      end if;
      select count(*)
        into ln_qolist_cnt
        from dw_retl_prd_cat_val_us@dwenq t, dw_retl_prd_cat_vals@dwenq s
       where t.rp_rp_id = i.rp_id
         and trunc(sysdate) between t.dw_eftv_from and
             nvl(t.dw_eftv_to, trunc(sysdate))
         and s.pc_pc_id = 'QOSLISTID'
         and trunc(sysdate) between s.dw_eftv_from and
             nvl(s.dw_eftv_to, trunc(sysdate))
         and t.pc_pc_id = s.pc_pc_id
         and t.pcv_pcv_id = s.pcv_id;
      if ln_qolist_cnt = 1 then
        select s.descr
          into lc_qoslistid
          from dw_retl_prd_cat_val_us@dwenq t, dw_retl_prd_cat_vals@dwenq s
         where t.rp_rp_id = i.rp_id
           and trunc(sysdate) between t.dw_eftv_from and
               nvl(t.dw_eftv_to, trunc(sysdate))
           and s.pc_pc_id = 'QOSLISTID'
           and trunc(sysdate) between s.dw_eftv_from and
               nvl(s.dw_eftv_to, trunc(sysdate))
           and t.pc_pc_id = s.pc_pc_id
           and t.pcv_pcv_id = s.pcv_id;
      else
        lc_qoslistid := null;
      end if;
      update dw_retl_prds t
         set t.web_short_brief   = lc_short_brief,
             t.web_long_brief    = lc_long_brief,
             t.date_first_online = ld_date_first_online,
             t.date_sent         = ld_date_sent,
             t.copos_ssl#2       = lc_copos_ssl2,
             t.qoslistid         = lc_qoslistid
       where t.rp_id = i.rp_id;
    end if;
    ln_web_prd_cnt       := 0;
    lc_short_brief       := null;
    lc_long_brief        := null;
    ld_date_first_online := null;
    ld_date_sent         := null;
    lc_copos_ssl2        := null;
    lc_qoslistid         := null;
  end loop;
  commit;
end;
I have taken count(*) to check for, and avoid, the TOO_MANY_ROWS and NO_DATA_FOUND exceptions, as there is no unique key on the table web_prds@nrstp.
For dw_retl_prds the unique key is rp_id.

Hello
First things first: you need to trace it to find out where it is spending its time. Find out which statements are taking the longest and target them.
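For example, you could switch on SQL trace for the session with DBMS_MONITOR and run the resulting trace file through tkprof; the statements with the highest elapsed time are your targets. A minimal sketch (the SID and serial# values are placeholders you would look up in v$session):
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 123,   -- placeholder SID from v$session
    serial_num => 456,   -- placeholder serial# from v$session
    waits      => TRUE,
    binds      => FALSE);
END;
/
-- ...run the procedure, then stop tracing:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
END;
/
-- Format the trace file and sort by elapsed time, e.g.:
--   tkprof ora_12345.trc out.prf sort=exeela,fchela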
That said, however, you are using the slowest possible method for processing this data by using a cursor FOR loop. You are then doing separate lookups over a database link, and you are running each of those queries twice. You could reduce the time of the lookups by 50% just by combining each count/select pair into one query with exception handling... for example:
CREATE OR REPLACE PROCEDURE test
IS
   CURSOR cur_web_prd
   IS
      SELECT rp_id
        FROM dw_retl_prds t;
   lc_short_brief         VARCHAR2 (300);
   lc_long_brief          VARCHAR2 (2000);
   ld_date_first_online   DATE;
   ld_date_sent           DATE;
   lc_rp_id               VARCHAR2 (20);
   lc_copos_ssl2          VARCHAR2 (40);
   lc_qoslistid           VARCHAR2 (40);
   ln_web_prd_cnt         NUMBER;
   ln_copos_cnt           NUMBER;
   ln_qolist_cnt          NUMBER;
BEGIN
   FOR i IN cur_web_prd
   LOOP
      BEGIN
               SELECT t.rp_id,
                s.short_brief,
                s.long_brief,
                s.date_first_online,
                s.date_sent
           INTO lc_rp_id,
                lc_short_brief,
                lc_long_brief,
                ld_date_first_online,
                ld_date_sent
           FROM dw_retl_prds t, web_prds@nrstp s
          WHERE t.rp_id = s.rp_rp_id AND t.rp_id = i.rp_id;
          BEGIN
              SELECT s.descr
                  INTO lc_copos_ssl2
                  FROM dw_retl_prd_cat_val_us@dwenq t,
                       dw_retl_prd_cat_vals@dwenq s
                 WHERE t.rp_rp_id = i.rp_id
                       AND TRUNC (SYSDATE) BETWEEN t.dw_eftv_from
                                               AND NVL (t.dw_eftv_to,
                                                        TRUNC (SYSDATE))
                       AND s.pc_pc_id = 'COPOS_SSL2'
                       AND TRUNC (SYSDATE) BETWEEN s.dw_eftv_from
                                               AND NVL (s.dw_eftv_to,
                                                        TRUNC (SYSDATE))
                       AND t.pc_pc_id = s.pc_pc_id
                       AND t.pcv_pcv_id = s.pcv_id;  
          EXCEPTION
                WHEN NO_DATA_FOUND THEN
                     lc_copos_ssl2 := NULL;
          END;
          BEGIN
              SELECT s.descr
                  INTO lc_qoslistid
                  FROM dw_retl_prd_cat_val_us@dwenq t,
                       dw_retl_prd_cat_vals@dwenq s
                 WHERE t.rp_rp_id = i.rp_id
                       AND TRUNC (SYSDATE) BETWEEN t.dw_eftv_from
                                               AND NVL (t.dw_eftv_to,
                                                        TRUNC (SYSDATE))
                       AND s.pc_pc_id = 'QOSLISTID'
                       AND TRUNC (SYSDATE) BETWEEN s.dw_eftv_from
                                               AND NVL (s.dw_eftv_to,
                                                        TRUNC (SYSDATE))
                       AND t.pc_pc_id = s.pc_pc_id
                       AND t.pcv_pcv_id = s.pcv_id;
            EXCEPTION
                WHEN NO_DATA_FOUND THEN
                    lc_qoslistid := NULL;
            END;
            UPDATE dw_retl_prds t
            SET t.web_short_brief = lc_short_brief,
                t.web_long_brief = lc_long_brief,
                t.date_first_online = ld_date_first_online,
                t.date_sent = ld_date_sent,
                t.copos_ssl#2 = lc_copos_ssl2,
                t.qoslistid = lc_qoslistid
          WHERE t.rp_id = i.rp_id;
      EXCEPTION
            WHEN NO_DATA_FOUND THEN
                NULL;
      END;
      ln_web_prd_cnt := 0;
      lc_short_brief := NULL;
      lc_long_brief := NULL;
      ld_date_first_online := NULL;
      ld_date_sent := NULL;
      lc_copos_ssl2 := NULL;
      lc_qoslistid := NULL;
   END LOOP;
   COMMIT;
END;
But this is still not an efficient implementation. You should try to put all of this into one or more UPDATE statements without the loop. Something like...
            UPDATE dw_retl_prds t
            SET (   t.web_short_brief,
                    t.web_long_brief,
                    t.date_first_online,
                    t.date_sent
                ) =
                (   SELECT
                        s.short_brief,
                        s.long_brief,
                        s.date_first_online,
                        s.date_sent
                    FROM
                        web_prds@nrstp s
                    WHERE
                        t.rp_id = s.rp_rp_id
                )
            WHERE
                EXISTS( SELECT
                            NULL
                        FROM
                            web_prds@nrstp s
                        WHERE
                            t.rp_id = s.rp_rp_id
                      );
That way you would have one, two or three UPDATE statements, each of which will run in a fraction of the time it takes for your current process to run. You may need to do more work after this, such as materializing the results locally from the remote database rather than joining over the database link, but that's a bridge you can cross when you come to it.
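If the database link itself turns out to be the bottleneck, one further option (not shown above; the staging table name is made up) is to copy the remote rows into a local staging table once and drive the updates from that, so every lookup is purely local:
-- Pull the remote rows across the link once into a hypothetical staging table.
CREATE TABLE web_prds_stage AS
SELECT s.rp_rp_id,
       s.short_brief,
       s.long_brief,
       s.date_first_online,
       s.date_sent
  FROM web_prds@nrstp s;
-- Index the join key so the correlated subqueries above can probe it cheaply.
CREATE INDEX web_prds_stage_i ON web_prds_stage (rp_rp_id);
The UPDATE statements would then reference web_prds_stage instead of web_prds@nrstp.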
HTH
David

Similar Messages

  • Unique Query Performance Challenge

    Experts,
    Please, I need help in this area. I have a query written on a MultiProvider. The query gets 98% of its data from one base cube. Currently it takes about 4 minutes to run, and I want to bring it down to 1 minute.
    This query is run from a web template and it is not static. The users can drill down in any direction as required, and the performance problem comes mostly from the drilldown.
    This query is a cost report with a lot of calculated and restricted key figures, and also a lot of excludes and includes, all within the key figures.
    The query has 13 restricted key figures and 5 calculated key figures built on the restricted ones, so 18 in all. Each restricted key figure resembles this example:
    •     Cost Element (hierarchy restriction or singles values or ranges restriction)
    •     Sender/Receiver
    •     Version
    •     Value Type
    •     Amount
    I believe the complex restrictions are slowing this report down. At the moment I am trying to speed up this report and it has proved a big challenge.
    Has anybody experienced a similar challenge before?
    Please do not point me to OSS notes or the standard performance documents. I have tried all that. Is there something else beyond those that can help here? Maybe a trick someone has tried?
    Help!!

    Thank you all for replying.
    This problem is still NOT solved, but I have more information.
    The query contains a hierarchy (the main characteristic in the rows) and a second object that also contains a hierarchy, selected via authorisation in the user profile.
    Actually, both hierarchies are selected via authorisation from the user profile, but once the user is in the report, the user can drill down in the displayed hierarchy depending on their authorisation.
    Most users are at the highest level in both hierarchies, so they drill down on both hierarchies, and this is done via a separate selection section in the web template.
    I am trying to build the exact picture of my scenario... please, any help.
    With this new information, can I still do the following:
    buffer hierarchy or cache by hierarchy level?

  • Performance of update query for single column vs multiple column

    Hi All,
    I could not find an answer for this: does it ever matter, in terms of performance, whether a single UPDATE statement updates one column or multiple columns?
    For example, for a table of 15 columns, what would be the performance difference between updating one column in a single query and updating all 15 columns in another update query?
    Please keep in mind that my actual table could have around 150+ columns.
    Thanks for any information provided.

    If the updated columns aren't in the WHERE clause, then the only impact of updating 15 columns is increased redo generation and possibly row chaining.
    And since redo is one of the things with a large impact, the answer is yes:
    the performance will be slower.
    Regards
    Helio Dias.
    http://heliodias.com
    OCE SQL, OCP 9i
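    A quick way to verify the redo claim on your own table (a sketch; t, col1 and col15 are placeholders) is to snapshot the session's redo size around each variant and compare the deltas:
    -- Redo generated so far by this session.
    SELECT s.value AS redo_size
      FROM v$mystat s
      JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name = 'redo size';
    -- Run the one-column variant, re-check redo size, roll back, then run the
    -- multi-column variant and compare the two deltas.
    UPDATE t SET col1 = col1;
    UPDATE t SET col1 = col1, /* ... */ col15 = col15;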

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically in our application, we need to run select query multiple times and today we have one huge table. Do you guys think breaking them into
    multiple small tables will help ?
    For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Would that be because all tables map to a single key/value database in the backend, so a lookup (SELECT query) performs the same whether it runs on a small table or a big table?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Update Query is Performing Full Table Scan of 1 Million Records

    Hello everybody, I have one update query:
    UPDATE tablea SET
              task_status = 12
              WHERE tablea.link_id >0
              AND tablea.task_status <> 0
              AND tablea.event_class='eventexception'
              AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
              AND ltask.task_status = 0)
    When I do an explain plan, it shows the following result...
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
    Now, tablea may have more than 10 million records. This takes a huge amount of time even if it only has to
    update 2 records... please suggest some optimal solutions.
    Regards
    Mahesh

    I see your point, but my question (or logic) is: I have an index on all the columns used in the WHERE clause, so I find no reason for Oracle to do a full table scan.
    UPDATE tablea SET
    task_status = 12
    WHERE tablea.link_id >0
    AND tablea.task_status <> 0
    AND tablea.event_class='eventexception'
    AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
    AND ltask.task_status = 0)
    I am clearly stating where task_status <> 0 and event_class = something and tablea.link_id > 0,
    so the ideal case for the optimizer should be:
    Step 1) Select all the rowids matching these conditions.
    Step 2) For each of those rowids, get all the rows where task_status = 0 and task_id = the link_id of the row selected above.
    Step 3) While looping over each rowid, if the matching ltask row from step 2 satisfies the condition, update that record.
    I want this kind of plan. Does anyone know how to make Oracle produce it?
    My point is that the FULL TABLE SCAN is harmful, or at least no better than an index scan.
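    One way to test whether Oracle will take an index-driven plan here (a sketch, not from the thread; the index name is made up, and with a skewed task_status the optimizer may still rightly prefer the full scan) is to add a composite index covering the literal predicates and re-check the plan:
    CREATE INDEX tablea_cls_stat_link_i
        ON tablea (event_class, task_status, link_id);
    EXPLAIN PLAN FOR
    UPDATE tablea
       SET task_status = 12
     WHERE tablea.link_id > 0
       AND tablea.task_status <> 0
       AND tablea.event_class = 'eventexception'
       AND EXISTS (SELECT 1
                     FROM tablea ltask
                    WHERE ltask.task_id = tablea.link_id
                      AND ltask.task_status = 0);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the <> 0 predicate cannot be used as an index access condition, so the index helps mainly through the event_class leading column.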

  • How can update two rows in one query?

    How can update two rows in one query?
    Edited by: OracleM on May 4, 2009 12:16 AM

    What do you mean by "two rows"? Maybe two columns?
    If the WHERE clause of the UPDATE query matches two rows, then it will update two rows. If you want to update two columns, then add the column names to your query:
    UPDATE your_table SET col1=value1, col2=value2 WHERE your_where_clause
    - - - - - - - - - - - - - - - - - - - - -
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Update query takes much time to execute

    Hi Experts,
    I need help regarding performance of the query.
    update TEST_TAB
    set fail=1, msg='HARD'
    where id in (
    select src.id from TEST_TAB src
    inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
    inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
    inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
    where src.database_id=1111111);
    This query runs far too long: it takes more than 1 hour and updates 26,000 records.
    But if we run the inner SELECT query on its own,
    select src.id from TEST_TAB src
    inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
    inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
    inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
    where src.database_id=1111111
    it takes less than 1 minute to execute.
    Please give me suggestions for the update query so that I can improve its performance.

    SELECT src.id FROM lead src
            inner join lead l_1 ON src.email=l_1.email AND
    l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
            inner join lead l_2 ON src.email=l_2.email AND
    l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
            inner join lead l_3 ON src.email=l_3.email AND
    l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
        WHERE src.database_id=264170;
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS          1             10453                              
      TABLE ACCESS BY INDEX ROWID     LEAD     1       32       27                              
        NESTED LOOPS          1       130       10453                              
          HASH JOIN          1       98       10426                              
            HASH JOIN          199       12 K     6950                              
              TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
              TABLE ACCESS BY INDEX ROWID     LEAD     94 K     3 M     3473                              
                INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
            TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
              INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
          INDEX RANGE SCAN     LEAD_IDX_4     24             3
     Update for one row:
         UPDATE lead SET failed=1, bounce_msg_t='HARD'
    WHERE id IN (SELECT src.id FROM lead src
    inner join lead l_1 ON src.email=l_1.email AND
    l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
    inner join lead l_2 ON src.email=l_2.email AND
    l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
    inner join lead l_3 ON src.email=l_3.email AND
    l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
    WHERE src.database_id=264170
         AND ROWNUM=1)
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=ALL_ROWS          1             10456                              
      UPDATE     LEAD                                               
        NESTED LOOPS          1       32       10456                              
          VIEW     VW_NSO_1     1       13       10453                              
            SORT UNIQUE          1       130                                    
              COUNT STOPKEY                                                    
                TABLE ACCESS BY INDEX ROWID     LEAD     1       32       27                              
                  NESTED LOOPS          1       130       10453                              
                    HASH JOIN          1       98       10426                              
                      HASH JOIN          199       12 K     6950                              
                        TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                          INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                        TABLE ACCESS BY INDEX ROWID     LEAD     94 K     3 M     3473                              
                          INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                      TABLE ACCESS BY INDEX ROWID     LEAD     202       6 K     3476                              
                        INDEX RANGE SCAN     LEAD_DATABASE_FK_I     94 K           259                              
                    INDEX RANGE SCAN     LEAD_IDX_4     24             3                              
          TABLE ACCESS BY INDEX ROWID     LEAD     1       19       2                              
            INDEX UNIQUE SCAN     LEADS_PK     1             1 
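    A rewrite worth testing (not suggested in the thread; the DISTINCT guards against ORA-30926 if the self-joins return duplicate ids) is to compute the row set once and drive the update through MERGE:
    MERGE INTO lead t
    USING (SELECT DISTINCT src.id
             FROM lead src
             INNER JOIN lead l_1 ON src.email = l_1.email
                   AND l_1.database_id = 335090 AND l_1.bounce_msg_t = 'HARD' AND l_1.failed = 1
             INNER JOIN lead l_2 ON src.email = l_2.email
                   AND l_2.database_id = 338310 AND l_2.bounce_msg_t = 'HARD' AND l_2.failed = 1
             INNER JOIN lead l_3 ON src.email = l_3.email
                   AND l_3.database_id = 338470 AND l_3.bounce_msg_t = 'HARD' AND l_3.failed = 1
            WHERE src.database_id = 264170) s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.failed = 1, t.bounce_msg_t = 'HARD';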

  • Update query issue to update middle (n records) of the rows in a table

    Hi
    I have the below requirement, and the approach I went through to fulfil it.
    Requirement:
    I need to pull 3 crore (30 million) records through ODI. The source table does not have a primary key; it has 200 columns and 3 crore records, with 25 of the columns forming a composite key.
    I need to populate those 3 crore records into the target Oracle DB, but when I tried to load all of them at once I got an error, so I took an incremental-load approach. For that I need to mark each 1 crore (10 million) records with flg1, flg2 and flg3 (flag values), so I added one more column to the source table; using those flag values I can populate 1 crore records at a time into the target machine.
    Approach:
    For the above requirement I wrote the query below to set flg1 for the first crore of records: update tbl_name set rownum_id='flg1' where rownum<=10000000; and it completed successfully without any error. For the second crore, I wrote the update below; it ran for 20 minutes and reported *0 rows updated*. Please help if this query is wrong.
    Query: update tbl_name set rownum_id='flg2' where rownum<=20000000 and rownum_id is NULL;
    Regards,
    Phanikanth

    A couple of thoughts.
    1. This forum is in English and accessed by people in more than 100 countries. Use metric counts, not crore. Translate to millions, billions, trillions, etc.
    2. No database version ... makes helping you very hard.
    3. 200 columns with 25 in a composite key is a nightmare by almost any definition ... fix your design before going any further.
    4. What error message? Post the complete message not just your impression of it.
    5. Oracle tables are heap tables .. there is no such concept as the top, the bottom, or the middle.
    6. If you are trying to perform sampling ... use the SAMPLE clause (http://www.morganslibrary.org/reference/select.html#sssc).
    7. What is ODI? Do not expect your international audience to know what the acronym means.
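    For reference, the SAMPLE clause mentioned in point 6 looks like this (tbl_name as in the question):
    -- Row sampling: read roughly 1% of the rows.
    SELECT COUNT(*) FROM tbl_name SAMPLE (1);
    -- Block sampling: cheaper, samples whole blocks rather than individual rows.
    SELECT COUNT(*) FROM tbl_name SAMPLE BLOCK (1);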

  • Effect of Restricted Keyfigure & calculated keyfigure in query performance

    Hi,
    What is the effect of restricted key figures and calculated key figures on query performance?
    Regards
    Anil

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
    Other than performance, there might be other considerations that determine which of the options should be used.
    If the RKFs are query-specific and not used in the majority of other queries, I would go for structure selections. And from my personal experience, sometimes developers end up with so many RKFs and CKFs that you easily get lost in the web of them, not to mention the duplication.
    If the same structure is needed widely across most of the queries, it might be a good idea to have a global structure available across the provider, which can considerably cut down development time.

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query which will return a couple of key figures and some calculations as well, across different GL accounts, and I wanted to know which will be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option will be better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.

  • Advanced update query

    I am trying to perform an update query on one field that uses multiple foreign key parameters from one table to update the other table. The data in all three foreign key parameter fields is not constant. Accuracy is absolutely critical. I tried this as well as various other scenarios with no success. Can anyone help?
    Update A_table a
    set a.rate = (
    select b.rate
    from B_table b
    where b.id = a.id
    and b.transdate = a.transdate
    and b.orgnum = a.orgnum
    and b.transdate = to_date('31/12/2007', 'dd/mm/yyyy')
    )
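    One detail worth checking, since accuracy is critical: as written, every A_table row with no matching B_table row gets its rate set to NULL. If that is not intended, a guarded variant would look like this (a sketch reusing the question's tables; it assumes id, transdate and orgnum identify at most one B_table row):
    UPDATE a_table a
       SET a.rate = (SELECT b.rate
                       FROM b_table b
                      WHERE b.id = a.id
                        AND b.transdate = a.transdate
                        AND b.orgnum = a.orgnum)
     WHERE a.transdate = TO_DATE('31/12/2007', 'dd/mm/yyyy')
       AND EXISTS (SELECT NULL
                     FROM b_table b
                    WHERE b.id = a.id
                      AND b.transdate = a.transdate
                      AND b.orgnum = a.orgnum);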

    I think most of the new guys post product-related questions here because they might have heard that there is something called forums; they go to Google, type 'oracle forums', and when the forums home page opens they post directly in the first link available, which is unfortunately this forum. Probably moving the Database forum up and moving the Technology Network Community forums below would solve this problem, given the number of people who don't even read the forum name before posting.

  • Query performed well in dev but hanging on prod

    Hi DBAs,
    Urgent help is required on this issue.
    One of my SQL queries performed well on the dev server, but for the past week it has been hanging on the prod server, so:
    * How can I proceed to troubleshoot the problem?
    * Which areas do I need to look at, and for what?
    Regards
    Asif

    Are both databases the same version?
    How recent is your dev data (if possible, refresh your dev from prod)?
    If a prod-to-dev copy is not possible, and the stats for both databases are up to date, and there is a big difference in database size between dev and prod, then you can export prod's optimizer stats and import them into dev, run your query on dev to localise the issue, and start tuning the query there.
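    Copying optimizer stats from prod to dev is done with DBMS_STATS (a sketch; the schema name and staging-table name are placeholders):
    -- On prod: stage the schema's stats in a transportable table.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');
      DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');
    END;
    /
    -- Move STATS_STAGE to dev (export/import or a database link), then on dev:
    BEGIN
      DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');
    END;
    /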

  • Query performance - bench marking

    The performance of a query at any given time is determined by many variables, and I would like to know how to detect performance issues against a benchmark. One thought was to store some data in a separate cube (and not load any more data into it), run a specific query daily (a few times), and have the run-time info updated in the BW statistics cube. Then, when users complain about performance issues for different queries, we can run this specific query to see if there is really a true performance issue in the system.
    The objective is to establish benchmarks for tracking query performance. Any thoughts or suggestions would be appreciated.
    Thanks,
    Sree

    hi Sree,
    that's a good idea, but I think it only considers the data volume factor; other factors like query design, aggregates, frontend, network, etc. also play a part. Take a look at the following for query performance:
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'
    some docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://websmp206.sap-ag.de/~sapidb/011000358700001394912002
    hope this helps.

  • Update multiple records with one submit?

    I can't for the life of me figure out how to do this. I have checkboxes with primary key IDs listed as the value in a repeat region. I want to perform an update on 2 columns for all IDs checked.
    PSEUDOCODE
    <form>
    <cfoutput query ....
    other textfields...
               <input name="checkbox" type="checkbox" id="checkbox" value="#table.id#" />
    </cfoutput>
    SUBMIT BUTTON
    </form>
    My update statement:
    UPDATE table
    SET column1 = 1, column2 ='whatever'
    WHERE id IN (<cfqueryparam cfsqltype="cf_sql_integer" value="#TRIM(FORM.checkbox)#">)
    </cfquery>
    It only updates if I check one box, and does nothing when more than one is selected.
    I also tried looping over the update statement, but nothing seems to work.

    You were missing the list attribute on cfqueryparam. (You can also replace the cfif with a null attribute on the cfqueryparam, but that has the disadvantage of sending the query to the db: null="#Not Len(Trim(form.checkbox))#".)
    <cfif Len(Trim(form.checkbox))>
    <cfquery ...>
    UPDATE table
    SET column1 = 1, column2 = 'whatever'
    WHERE id IN (<cfqueryparam cfsqltype="cf_sql_integer" value="#form.checkbox#" list="true">)
    </cfquery>
    </cfif>
    Mack
