Performance Issue - 2.5 hrs for 37000 Records...

Hello Experts,
Good Day to all...
Following is the anonymous block I am using to execute the procedure logic.
The execution time is very high: around 2.5 hrs for 37,000 records in the departments table.
Moreover, the procedure *"validations.move_process_history"* writes a lot of DBMS_OUTPUT messages, and I thought of storing them in a temporary table instead (a sketch of what I mean follows the procedure code below).
Will this increase performance?
Please suggest.
Thanks for your suggestions.
DECLARE
   CURSOR cur_get_rec
   IS
   select
           di.location_type
          ,di.location_no
          ,di.dept_id
    from departments di
   where di.location_no    = 600
     and di.location_type = 'CH'
     and di.dept_id     > 0;
   l_cnt      NUMBER := 0;
   l_error_msg    varchar2(32767);
BEGIN
FOR i IN cur_get_rec
LOOP
     validations.move_process_history (
        p_loc_typ   => i.location_type,       --> Passing IN to procedure
        p_loc_nbr   => i.location_no,         --> Passing IN to procedure
        p_dept_id   => i.dept_id,             --> Passing IN to procedure
        p_ts_start  => (SYSTIMESTAMP - 200),  --> Passing IN to procedure
        p_ts_end    => SYSTIMESTAMP,          --> Passing IN to procedure
        p_mode      => 0,                     --> Passing IN to procedure
        p_result    => l_cnt,                 --> Passing OUT; 1 - Success or 0 - Failure
        p_error_msg => l_error_msg            --> Passing OUT from procedure, i.e. error messages
     );
END LOOP;
END;
validations.move_process_history Procedure
PROCEDURE move_process_history (
     p_loc_typ    departments.loc_type%TYPE,
     p_loc_nbr    departments.loc_nbr%TYPE,
     p_dept_id    departments.dept_id%TYPE,
     p_ts_start   TIMESTAMP,
     p_ts_end     TIMESTAMP,
     p_mode       NUMBER,
     p_result     OUT NUMBER,
     p_error_msg  OUT VARCHAR2
  ) IS
CURSOR c_hist IS
       SELECT
        FROM history
       WHERE ...
   --TYPE g_hist_table IS TABLE OF history%ROWTYPE;
   -- Global Collection of PL/SQL Table for Holding History Records
    l_hist_table g_hist_table ;
  BEGIN
  dbms_output.put_line('opening cursor');
  p_result := 0;
  OPEN c_hist;
  LOOP
      dbms_output.put_line('fetching ..');
      -- Fetch up to 10,000 history records per batch for processing
       FETCH c_hist BULK COLLECT
         INTO  l_hist_table LIMIT 10000;
       FOR i IN 1..l_hist_table.COUNT
       LOOP
               IF ... THEN
               ELSE
                    p_error_msg := 'Dept# '||l_hist_table(i).loc_nbr ||' Effective Timestamp '||l_hist_table(i).eff_ts ||' Sequence #'
                                   ||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ;
               END IF;
               IF ... THEN
               ELSE
                    p_error_msg := 'Dept# '||l_hist_table(i).loc_nbr ||' Effective Timestamp '||l_hist_table(i).eff_ts ||' Sequence #'
                                   ||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ;
               END IF;
       END LOOP;
       -- Flush the processed batch out of memory
       l_hist_table.DELETE;
       EXIT WHEN c_hist%NOTFOUND;
  END LOOP;
  -- Cleanup on the normal path
  CLOSE c_hist;
  EXCEPTION WHEN OTHERS THEN
  ROLLBACK;
  -- Cleanup
  CLOSE c_hist;
  l_hist_table.DELETE;
  RAISE;
  END move_process_history;
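
What I have in mind for replacing DBMS_OUTPUT is roughly the following (a sketch only; the log table name, columns and logger procedure are placeholders, not existing objects):
-- Placeholder log table plus an autonomous-transaction logger
CREATE TABLE tmp_process_log (
   log_ts  TIMESTAMP DEFAULT SYSTIMESTAMP,
   log_msg VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg (p_msg IN VARCHAR2) IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- Commits only the log row, independently of the caller's transaction
   INSERT INTO tmp_process_log (log_msg) VALUES (SUBSTR(p_msg, 1, 4000));
   COMMIT;
END log_msg;
/
The dbms_output.put_line calls in the procedure would then become calls to log_msg.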

Thanks Billy for your prompt reply.
I am glad to see your reply.
If you could share a sample example of parallel processing, it would definitely help me in this case (a generic sketch is included at the end of this post for reference).
Moreover, to be more specific: the departments and history tables have 37,000 records. --> Reply to Mustafa KALAYCI
I will explain the complete logic of the query below.
-- The code below is called in order to update the information related to location_no 600 in the dept_history table.
-- cur_get_rec i.e. the departments table contains 37,000 records
DECLARE
   CURSOR cur_get_rec
   IS
   select
           di.location_type
          ,di.location_no
          ,di.dept_id
    from departments di
   where di.location_no    = 600
     and di.location_type = 'CH'
     and di.dept_id     > 0;
   l_cnt      NUMBER := 0;
   l_error_msg    varchar2(32767);
BEGIN
FOR i IN cur_get_rec
LOOP
     validations.move_process_history (
        p_loc_typ   => i.location_type,       --> Passing IN to procedure
        p_loc_nbr   => i.location_no,         --> Passing IN to procedure
        p_dept_id   => i.dept_id,             --> Passing IN to procedure
        p_ts_start  => (SYSTIMESTAMP - 200),  --> Passing IN to procedure
        p_ts_end    => SYSTIMESTAMP,          --> Passing IN to procedure
        p_mode      => 0,                     --> Passing IN to procedure
        p_result    => l_cnt,                 --> Passing OUT; 1 - Success or 0 - Failure
        p_error_msg => l_error_msg            --> Passing OUT from procedure, i.e. error messages
     );
END LOOP;
END;
VALIDATIONS.MOVE_PROCESS_HISTORY PROCEDURE
-- Pass all 37000 records for processing.
PROCEDURE move_process_history (
     p_loc_typ    departments.loc_type%TYPE,
     p_loc_nbr    departments.loc_nbr%TYPE,
     p_dept_id    departments.dept_id%TYPE,
     p_ts_start   TIMESTAMP,
     p_ts_end     TIMESTAMP,
     p_mode       NUMBER,
     p_result     OUT NUMBER,
     p_error_msg  OUT VARCHAR2
  ) IS
CURSOR c_hist IS
       SELECT
        FROM history
       WHERE loc_typ = p_loc_typ
              AND loc_nbr = p_loc_nbr
              AND dept_id = p_dept_id
              AND eff_ts BETWEEN p_ts_start AND p_ts_end
         ORDER BY eff_ts ASC;
   --TYPE g_hist_table IS TABLE OF history%ROWTYPE;
   -- Global Collection of PL/SQL Table for Holding History Records
    l_hist_table g_hist_table ;
   l_abs_row_number NUMBER := 0;
   BEGIN
      DBMS_OUTPUT.put_line ('opening cursor');
      p_result := 0;
       OPEN c_hist;
      LOOP
         -- Flush out the Record Collection if it is not empty
         DBMS_OUTPUT.put_line ('fetching ..');
          FETCH c_hist
         BULK COLLECT INTO l_hist_table LIMIT 10000;
         FOR i IN 1 .. l_hist_table.COUNT
         LOOP
            l_abs_row_number := l_abs_row_number + 1;
            IF (l_abs_row_number = 1)
            THEN
               -- Initialize the Values to local variable
            ELSE
                p_error_msg := 'Dept# '||l_hist_table(i).loc_nbr||' Effective Timestamp '||l_hist_table(i).eff_ts||' Sequence #'
                               ||l_hist_table(i).sequence||' Mode Id '||l_hist_table(i).mode_id;
             END IF;
             IF (p_error_msg IS NULL)
             THEN
                p_error_msg := 'Dept# '||l_hist_table(i).loc_nbr||' Effective Timestamp '||l_hist_table(i).eff_ts||' Sequence #'
                               ||l_hist_table(i).sequence||' Mode Id '||l_hist_table(i).mode_id;
             END IF;
         END LOOP;
          -- Flush out the data from memory and to the database
         l_hist_table.DELETE;
          IF (   p_mode = validation_correction_rollback
              OR p_mode = validation_correction_commit)
          THEN
            -- calling procedure which Updates History table
            update_history;
               -- Calling procedure which updates the Departments table
            update_departments;
         END IF;
         -- Delete the Temp Collection for Update
         -- Reset Counters
         l_ctr := 0;
          EXIT WHEN c_hist%NOTFOUND;
      END LOOP;
      -- Cleanup
       CLOSE c_hist;
      l_hist_table.DELETE;
      IF (p_mode = validation_correction_commit)
      THEN
          DBMS_OUTPUT.put_line ('Committing');
         COMMIT;
      ELSE
         DBMS_OUTPUT.put_line ('Rolling Back');
         ROLLBACK;
      END IF;
      p_result := l_abs_row_number;
   EXCEPTION
      WHEN OTHERS
      THEN
         ROLLBACK;
         -- Cleanup
          CLOSE c_hist;
         l_hist_table.DELETE;
         RAISE;
   END move_process_history;
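
For reference, one common way to run this kind of per-department processing in parallel is DBMS_PARALLEL_EXECUTE, which splits the driving table into chunks and processes them in concurrent jobs. The sketch below is illustrative only; the task name, chunk size, parallel level and the chunk-level wrapper procedure (move_process_history_chunk, taking a rowid range) are assumptions, not objects from this thread.
BEGIN
   -- Create a task and split DEPARTMENTS into rowid chunks of ~1000 rows each
   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'MOVE_HIST_TASK');
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
      task_name   => 'MOVE_HIST_TASK',
      table_owner => USER,
      table_name  => 'DEPARTMENTS',
      by_row      => TRUE,
      chunk_size  => 1000);

   -- Each chunk runs the existing logic for its slice of rowids, in parallel jobs
   DBMS_PARALLEL_EXECUTE.RUN_TASK(
      task_name      => 'MOVE_HIST_TASK',
      sql_stmt       => 'BEGIN validations.move_process_history_chunk(:start_id, :end_id); END;',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 4);

   DBMS_PARALLEL_EXECUTE.DROP_TASK('MOVE_HIST_TASK');
END;
/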

Similar Messages

  • Web.ProcessBatchData performance issue - optimised way to insert 20000 records.

    Hi all,
    I am trying the following code to insert 20,000 items into a SharePoint list.
    I have used Thread.Sleep(999999999) to let ProcessBatchData finish inserting the items.
    But after some time I am getting the error: "Thread was being aborted".
    I want to optimize the performance, or is there any other way to use the thread with the process batch?
    Can anyone please help me out here with how to do this?
    Code:
    string BatchToInsertPRProducts = string.Empty;
    BatchToInsertPRProducts = BuildBatch(row, spPRProductList.ID);
    string BatchPRProductReturn = web.ProcessBatchData(BatchToInsertPRProducts);
    Thread.Sleep(999999999);
    Thanks,
    Harish Patil

    Hi Harish Patil:
    In a basic text editor such as Notepad, open the web.config file, for example in the %SYSTEMDRIVE%\Inetpub\wwwroot folder or the %SYSTEMDRIVE%\Inetpub\wwwroot\wss\VirtualDirectories\80 folder.
    Press CTRL + F to open the Find dialog box.
    Find the following tag:
    <httpRuntime maxRequestLength="51200" />
    Replace it with this tag:
    <httpRuntime executionTimeout="6000" maxRequestLength="51200" />

  • Performance issue while updating the  custom  infotype records

    Hi All
    I have an infotype with 80,000 records, and my requirement is to change the infotype wage type value to certain hard-coded values.
    After populating the final internal table, in a loop I pass the records and update the infotype using the HR_INFOTYPE_OPERATION function module.
    I have done the coding like below:
    loop at lt_infotype assigning <x>.
      at new pernr.
        bapi_employee_enqueue
      endat.
      hr_infotype_operation
      at end of pernr.
        bapi_employee_dequeue.
      endat.
    endloop.
    But it is taking nearly 15 hours to update all the records. Please suggest a better solution for reducing the execution time.
    Thanks & Regards ,
    pramodh.m

    The delay problem can be solved by using HR_INFOTYPE_OPERATION together with another FM, HR_PSBUFFER_INITIALIZE; the delay happens because of a buffering problem. Use HR_PSBUFFER_INITIALIZE inside the loop just before you call HR_INFOTYPE_OPERATION and you won't see that much delay.

  • Performance issue in update new records from Ekko to my ztable

    I'm making changes to an existing program.
    In this program I need to update any new purchase orders created in EKKO-EBELN to my ztable-ebeln.
    I need to update my ztable with the new records created on that particular date.
    This is a daily running job.
    This is the code I wrote. I'm getting 150,000 records into this loop and I'm running into a performance issue; can anyone suggest how to avoid it?
    loop at tb_ekko.
        at new ebeln.
          read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
          if sy-subrc <> 0.
            tb_ztable-ebeln = tb_ekko-ebeln.
            tb_ztable-zlimit = ' '.
            insert ztable from tb_ztable.
          endif.
        endat.
      endloop.
    Thanks
    Hema.

    Modify  your code as follows:
    loop at tb_ekko.
    at new ebeln.
    read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
    if sy-subrc <> 0.
    tb_ztable_new-ebeln = tb_ekko-ebeln.
    tb_ztable_new-zlimit = ' '.
    append tb_ztable_new.
    clear tb_ztable_new.
    endif.
    endat.
    endloop.
    insert ztable from table tb_ztable_new.
    Regards,
    ravi

  • Performance Issue: Retrieving records from Oracle Database

    While retrieving data from an Oracle database we are facing performance issues.
    The query returns 890 records, and the JSP page takes almost 18 minutes to display them.
    I have observed that cpu usage is 100% while processing the request.
    Could anyone advise what methods at the DB end or the Java end we could use to avoid such issues?
    Thanks
    R.

    passion_for_java wrote:
    Will it make any difference if I select columns instead of ls.*
    Possibly, especially if there's a lot of data being returned.
    Less data over the wire means a faster response.
    You may also want to look at your database, is that outer join really needed? Does it perform? Are your indexes good?
    A bad index (or a missing one) can kill query performance (we've seen performance of queries drop from seconds to hours when indexes got corrupted).
    A missing index can cause full table scans, which of course kill performance if the table is large.
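
    For example (illustrative table and column names only; the original query was not posted), list just the columns the page renders instead of ls.*:
    -- Instead of: select ls.* from some_large_table ls ...
    SELECT ls.id, ls.name, ls.status      -- only what the JSP actually displays
      FROM some_large_table ls
     WHERE ls.status = 'OPEN';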

  • Performance issues while query data from a table having large records

    Hi all,
    I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards
    Edited by: mhosur on Dec 10, 2012 2:41 AM
    Edited by: mhosur on Dec 10, 2012 2:59 AM
    Edited by: mhosur on Dec 11, 2012 10:32 PM

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                 )
    /

  • Performance issue in procedure

    Hi All
    I have a performance issue with the procedure below; it is taking 10-15 hrs. The custom table has 2 lakh (200,000) records.
    PROCEDURE update_summary_dollar_amounts( p_errbuf OUT VARCHAR2
    ,p_retcode OUT NUMBER) IS
    v_customer_id NUMBER := NULL;
    pymt_count NUMBER := 0;
    rec_count NUMBER := 0;
    v_number_late NUMBER;
    v_number_on_time NUMBER;
    v_days_late NUMBER;
    v_avg_elapsed NUMBER;
    v_avg_elapsed_US NUMBER;
    v_percent_prompt NUMBER;
    v_percent_late NUMBER;
    v_number_open NUMBER;
    v_last_payment_amount NUMBER;
    v_last_payment_date DATE;
    v_prev_payment_amount NUMBER;
    v_prev_payment_date DATE;
    v_last_sale_amount NUMBER;
    v_last_sale_date DATE;
    v_mtd_sales NUMBER;
    v_ytd_sales NUMBER;
    v_prev_year_sales NUMBER;
    v_prev_receipt_num VARCHAR2(30);
    v_last_sale VARCHAR2(50);
    c_current_year VARCHAR2(4);
    c_previous_year VARCHAR2(4);
    c_current_month VARCHAR2(8);
    /* ====================================================================== */
    /* CURSOR Customer Cursor (Main Customer) LOOP */
    /* ====================================================================== */
    CURSOR customer_cursor IS
    SELECT cst.customer_id customer_id
    ,cst.customer_number customer_number
    ,cst.org_id org_id
    FROM zz_ar_customer_summary_all cst
    ORDER by cst.customer_id;
    /* ====================================================================== */
    /* CURSOR Payments Cursor LOOP */
    /* Note: This logic is taken from the Customer Credit Snapshot */
    /* Report - ARXCCS */
    /* ====================================================================== */
    CURSOR payments_cursor IS
    SELECT cr.receipt_number receipt_num
    ,NVL(cr.amount,0) amount
    ,crh.gl_date gl_date
    FROM ar_lookups
    ,ar_cash_receipts_all cr
    ,ar_cash_receipt_history_all crh
    ,ar_receivable_applications_all ra
    ,ra_customer_trx_all ct
    WHERE NVL(cr.type,'CASH') = ar_lookups.lookup_code
    AND ar_lookups.lookup_type = 'PAYMENT_CATEGORY_TYPE'
    AND cr.pay_from_customer = v_customer_id
    AND cr.cash_receipt_id = ra.cash_receipt_id
    AND cr.cash_receipt_id = crh.cash_receipt_id
    AND crh.first_posted_record_flag = 'Y'
    AND ra.applied_customer_trx_id = ct.customer_trx_id(+)
    ORDER BY cr.creation_date DESC
    ,cr.cash_receipt_id DESC
    ,ra.creation_date DESC;
    customer_record customer_cursor%rowtype;
    payments_record payments_cursor%rowtype;
    BEGIN
    p_errbuf := NULL;
    p_retcode := 0;
    c_current_year := TO_CHAR(SYSDATE,'YYYY');
    c_current_month := TO_CHAR(SYSDATE,'YYYYMM');
    c_previous_year := TO_CHAR(TO_NUMBER(c_current_year) - 1);
    FOR customer_record IN customer_cursor LOOP
    /* Get Days Late and Average Elapsed Days */
    /* Note: This logic is taken from the Customer Credit Snapshot */
    /* Report - ARXCCS */
    BEGIN
    v_customer_id := customer_record.customer_id;
    BEGIN
    SELECT DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.trx_date) / COUNT(cr.deposit_date))) avgdays
    ,DECODE(COUNT(cr.deposit_date), 0, 0, ROUND(SUM(cr.deposit_date - ps.due_date) / COUNT(cr.deposit_date))) avgdayslate
    ,NVL(SUM(DECODE(SIGN(cr.deposit_date - ps.due_date),1, 1, 0)), 0) newlate
    ,NVL(SUM( DECODE(SIGN(cr.deposit_date - ps.due_date),1, 0, 1)), 0) newontime
    INTO v_avg_elapsed
    ,v_days_late
    ,v_number_late
    ,v_number_on_time
    FROM ar_receivable_applications_all ra
    ,ar_cash_receipts_all cr
    ,ar_payment_schedules_all ps
    WHERE ra.cash_receipt_id = cr.cash_receipt_id
    AND ra.applied_payment_schedule_id = ps.payment_schedule_id
    AND ps.customer_id = v_customer_id
    AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -12) AND SYSDATE
    AND ra.status = 'APP'
    AND ra.display = 'Y'
    AND NVL(ps.receipt_confirmed_flag,'Y') = 'Y';
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    v_days_late := NULL;
    v_number_late := NULL;
    v_avg_elapsed := NULL;
    v_number_on_time := NULL;
    END;
    IF (v_number_on_time + v_number_late) > 0
    THEN
    v_percent_prompt := ROUND(v_number_on_time/(v_number_on_time + v_number_late),2) * 100;
    v_percent_late := ROUND(v_number_late/(v_number_on_time + v_number_late),2) * 100;
    ELSE
    v_percent_prompt := 0;
    v_percent_late := 0;
    END IF;
    /* C2# 49827 */
    /* Get new average elapsed days for US use only */
    v_avg_elapsed_us := NULL;
    IF NVL(customer_record.org_id,-999) = 114
    THEN
    v_avg_elapsed_us := 0;
    BEGIN
    SELECT ROUND(SUM(NVL(ra.amount_applied,0) * (cr.deposit_date - ps.trx_date)) / DECODE(SUM(NVL(ra.amount_applied,0)),0,1,SUM(NVL(ra.amount_applied,0)))) avg_elapsed_us
    INTO v_avg_elapsed_us
    FROM ar_receivable_applications_all ra
    ,ar_cash_receipts_all cr
    ,ar_payment_schedules_all ps
    WHERE ra.cash_receipt_id = cr.cash_receipt_id
    AND ra.applied_payment_schedule_id = ps.payment_schedule_id
    AND ps.customer_id = v_customer_id
    AND ra.apply_date BETWEEN ADD_MONTHS(SYSDATE, -06) AND SYSDATE
    AND ps.status = 'CL'
    AND ra.status = 'APP'
    AND ra.display = 'Y'
    AND nvl(ps.receipt_confirmed_flag,'Y') = 'Y'
    AND ra.amount_applied <> 0;
    v_avg_elapsed_us := NVL(v_avg_elapsed_us,0);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    v_avg_elapsed_us := NULL;
    END;
    END IF;
    END;
    /* Get MTD, YTD, Prev Year Sales */
    /* Note: This logic is taken from the Customer Credit Snapshot */
    /* Report - ARXCCS */
    BEGIN
    SELECT NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYYMM'),c_current_month,amount_due_original,0)),0) mtd_sales
    ,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),c_current_year,amount_due_original,0)),0) ytd_sales
    ,NVL(SUM(DECODE(TO_CHAR(ps.trx_date,'YYYY'),c_previous_year,amount_due_original,0)),0) prev_sales
    ,SUM(DECODE(ps.status,'OP',(DECODE(SIGN(amount_due_original),1,1,0)),0)) number_open
    INTO v_mtd_sales
    ,v_ytd_sales
    ,v_prev_year_sales
    ,v_number_open
    FROM ar_payment_schedules_all ps
    WHERE ps.customer_id = v_customer_id
    AND ps.class != 'PMT';
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    v_mtd_sales := NULL;
    v_ytd_sales := NULL;
    v_prev_year_sales := NULL;
    END;
    /* Get Last and Previous Payments */
    pymt_count := 0;
    v_last_payment_date := NULL;
    v_prev_payment_date := NULL;
    v_last_payment_amount := NULL;
    v_prev_payment_amount := NULL;
    v_prev_receipt_num := NULL;
    FOR payments_record IN payments_cursor LOOP
    BEGIN
    IF payments_record.receipt_num = v_prev_receipt_num
    THEN
    NULL;
    ELSIF pymt_count = 0
    THEN
    v_last_payment_date := payments_record.gl_date;
    v_last_payment_amount := payments_record.amount;
    pymt_count := pymt_count +1;
    v_prev_receipt_num := payments_record.receipt_num;
    ELSIF pymt_count = 1
    THEN
    v_prev_payment_date := payments_record.gl_date;
    v_prev_payment_amount := payments_record.amount;
    EXIT;
    ELSE
    EXIT;
    END IF;
    END;
    END LOOP;
    /* Get Last Sale Date and Amount */
    /* Note: This logic is taken from the Customer Credit Snapshot */
    /* Report - ARXCCS */
    BEGIN
    SELECT MAX(TO_CHAR(ct.trx_date,'YYYYDDD')||ps.amount_due_original)
    INTO v_last_sale
    FROM ra_cust_trx_types_all ctt
    ,ar_payment_schedules_all ps
    ,ra_customer_trx_all ct
    WHERE ps.customer_trx_id = ct.customer_trx_id
    AND ct.cust_trx_type_id = ctt.cust_trx_type_id
    AND ct.bill_to_customer_id = v_customer_id
    AND ps.class || '' = 'INV'
    ORDER BY ct.trx_date DESC
    ,ct.customer_trx_id DESC;
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    v_last_sale := NULL;
    END;
    IF v_last_sale IS NOT NULL
    THEN
    v_last_sale_date := TO_DATE(SUBSTR(v_last_sale,1,7),'YYYYDDD');
    v_last_sale_amount := SUBSTR(v_last_sale,8,15);
    ELSE
    v_last_sale_date := NULL;
    v_last_sale_amount := NULL;
    END IF;
    /* Update Values into ZZ_AR_CUSTOMER_SUMMARY_ALL */
    BEGIN
    UPDATE zz_ar_customer_summary_all
    SET sales_last_year = v_prev_year_sales
    ,sales_ytd = v_ytd_sales
    ,sales_mtd = v_mtd_sales
    ,last_sale_date = v_last_sale_date
    ,last_sale_amount = v_last_sale_amount
    ,last_payment_date = v_last_payment_date
    ,last_payment_amount = v_last_payment_amount
    ,previous_payment_date = v_prev_payment_date
    ,previous_payment_amount = v_prev_payment_amount
    ,prompt = v_percent_prompt
    ,late = v_percent_late
    ,avg_elapsed_days = v_avg_elapsed
    ,avg_elapsed_days_us = v_avg_elapsed_us -- C2# 49827
    ,days_late = v_days_late
    ,number_open = v_number_open
    WHERE customer_id = customer_record.customer_id;
    EXCEPTION
    WHEN PROGRAM_ERROR THEN NULL;
    WHEN DUP_VAL_ON_INDEX THEN NULL;
    WHEN STORAGE_ERROR THEN NULL;
    WHEN OTHERS THEN NULL;
    END;
    rec_count := rec_count + 1;
    IF rec_count = 10000
    THEN
    COMMIT;
    rec_count := 0;
    fnd_file.put_line(fnd_file.output,'Commit at customer_id = ' || TO_CHAR(customer_record.customer_id) || ' ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
    fnd_file.new_line(fnd_file.output,1);
    END IF;
    END LOOP;
    COMMIT;
    EXCEPTION
    WHEN others THEN
    ROLLBACK;
    p_retcode := 2;
    p_errbuf := SQLERRM;
    END update_summary_dollar_amounts;
    Thanks,
    Anu

    Based on my initial assessment of the code, it looks like you are utilizing the "slow by slow" method. It is often termed "slow by slow" because it is one of the most INefficient ways of doing data processing. The "slow by slow" method uses CURSOR FOR LOOPs to loop through entire record sets and process them one at a time. In your case it looks like you are using NESTED FOR LOOPs which could exacerbate the problem.
    I recommend you re-think your approach and try and do everything in a single, or a few SQL statements if possible and avoid the procedural logic.
    If you can post your business requirements, and sample data we may be able to help you achieve your goal.
    HTH!
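
    For illustration, the MTD/YTD/previous-year sales portion of the posted procedure could be collapsed into one set-based statement along these lines (a sketch only, built from the tables and columns in the posted code; the remaining columns would be handled the same way and the logic must be verified against the report):
    -- One pass over ar_payment_schedules_all instead of one query per customer
    MERGE INTO zz_ar_customer_summary_all cst
    USING (
        SELECT ps.customer_id,
               NVL(SUM(CASE WHEN TO_CHAR(ps.trx_date,'YYYYMM') = TO_CHAR(SYSDATE,'YYYYMM')
                            THEN ps.amount_due_original END),0) AS mtd_sales,
               NVL(SUM(CASE WHEN TO_CHAR(ps.trx_date,'YYYY') = TO_CHAR(SYSDATE,'YYYY')
                            THEN ps.amount_due_original END),0) AS ytd_sales,
               NVL(SUM(CASE WHEN TO_CHAR(ps.trx_date,'YYYY') = TO_CHAR(ADD_MONTHS(SYSDATE,-12),'YYYY')
                            THEN ps.amount_due_original END),0) AS prev_year_sales
          FROM ar_payment_schedules_all ps
         WHERE ps.class != 'PMT'
         GROUP BY ps.customer_id
    ) src
    ON (cst.customer_id = src.customer_id)
    WHEN MATCHED THEN UPDATE
       SET cst.sales_mtd       = src.mtd_sales,
           cst.sales_ytd       = src.ytd_sales,
           cst.sales_last_year = src.prev_year_sales;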

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with an insert into a table which is partitioned: without the table being partitioned
    it ran in less time, but after partitioning it took more than double.
    1) The table was created initially without any partitions and the below insert took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Now I re-created the table with partitioning (range, yearly - below) and the same insert took 59 minutes.
    Is there any way I can achieve better performance during inserts on this partitioned table?
    [ Similarly, I have another table with 50 million records and the insert took 10 hrs without partitioning.
    With the table partitioned, it took 18 hours... ]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
    Open C1;
    Loop
       Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
       Forall I In 1..C_Rectype.Count
          Insert Into test
                 (col1, col2, col3)
          Values
                 (C_Rectype(I).col1, C_Rectype(I).col2, C_Rectype(I).col3);  -- values from the fetched collection
       V_Rec := V_Rec + Nvl(C_Rectype.Count, 0);
       Commit;
       Exit When C_Rectype.Count = 0;
       C_Rectype.delete;
    End Loop;
    End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
          ;
    Edited by: riedelme on Jul 16, 2010 7:42 AM

  • Loading performance issues

    Hi gurus,
    Please can you help with loading issues? I am extracting data from a standard extractor in purchasing; for 3 lakh (300,000) records it is taking 18 hrs. Can you please suggest how to address the loading performance?
    -KP

    Hi,
    Loading Performance:
    a) Always load and activate the master data before you load the transaction data so that the SIDs don't have to be created at the loading of the transaction data.
    b) Have the optimum packet size. If you have too small a packet size, the system writes messages to the monitor and those entries keep increasing over time and cause slow down. Try different packet sizes to arrive at the optimum number.
    c) Fine tune your data model. Make use of the line item dimension where possible.
    d) Make use of the load parallelization as much as you can.
    e) Check your CMOD code. If you have direct reads, change them to read all the data into internal table first and then do a binary search.
    f) Check code in your start routine, transfer rules and update rules. If you have BW Statistics cubes turned on, you can find out where most of the CPU time is spent and concentrate on that area first.
    g) Work with the basis folks and make sure the database parameters are optimized. If you search on OSS based on the database you are using, it provides recommendations.
    h) Set up your loads processes appropriately. Don't load all the data all the time unless you have to. (e.g) If your functionals say the historical fiscal years are not changed, then load only current FY and onwards.
    i) Set up your jobs to run when there is not much activity. If the system resources are already strained, your processes will have to wait for resources.
    j) For the initial loads only, always buffer the number ranges for SIDs and DIM ids
    Hareesh

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using Transactional database (Online real time object).
    The purpose of the report is to list all contact-related activities and all activities NOT related to a contact, by activity owner (user id).
    In order to fulfill that requirement I've created 2 reports:
    1) All Activities related to Contact -- Report A
    pull in Acitivity ID , Activity Type, Status, Contact ID
    2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered a performance issue due to the advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully this helps.
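
    A rough sketch of that column formula (the column references are placeholders; adjust them to the actual presentation columns):
    CASE WHEN "Contact"."Contact ID" IS NULL OR "Contact"."Contact ID" = 'Unspecified'
         THEN "Activity"."Owner Name"
         ELSE "Contact"."Contact Name"
    END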

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports: a report runs for more than 10 mins. We took the query from the session log and ran it in the database, where it took no more than 2 mins. We have verified there are proper indexes on the where-clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations that you do in Answers.
    The next thing to monitor is how many rows of data you are trying to retrieve from the query. If the volume is huge, it takes time to do the formatting in Answers because you are dumping a large volume of data. A database (like Teradata) initially returns something like 1-2000 records; if you hit "show all records", then even the database is going to take a fair amount of time if you are returning many records.
    Hope it helps.
    thanks
    Prash

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system I found that the program buffer swaps keep increasing every day, hurting the buffer hit ratio, so I asked to change the parameter abap/buffersize from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50 and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio   % Alloc. KB  Freesp. KB   % Free Sp.   Dir. Size  FreeDirEnt   % Free Dir    Swaps    DB Accs
    Nametab (NTAB)                                                                                0
       Table definition     99,60     6.798                                                   20.000                                            29.532    153.221
       Field definition     99,82      31.562        784                 2,61           20.000      6.222          31,11          17.246     41.248
       Short NTAB           99,94     3.625      2.446                81,53          5.000        2.801          56,02             0            2.254
       Initial records      73,95        6.625        998                 16,63          5.000        690             13,80             40.069     49.528
                                                                                    0
    Program                97,66     300.000     1.074                 0,38           75.000     67.177        89,57           219.665    725.703
    CUA                    99,75         3.000        875                   36,29          1.500      1.401          93,40            55.277      2.497
    Screen                 99,80         4.297      1.365                 33,35          2.000      1.811          90,55              119         3.214
    Calendar              100,00       488            361                  75,52            200         42              21,00               0            158
    OTR                   100,00         4.096      3.313                  100,00        2.000      2.000          100,00              0
                                                                                    0
    Tables                                                                                0
       Generic Key          99,17    29.297      1.450                  5,23           5.000        350             7,00             2.219      3.085.633
       Single record        99,43    10.000      1.907                  19,41           500         344            68,80              39          467.978
                                                                                    0
    Export/import          82,75     4.096         43                      1,30            2.000        662          33,10            137.208
    Exp./ Imp. SHM         89,83     4.096        438                    13,22         2.000      1.482          74,10               0    
    SAP Memory      Curr.Use %    CurUse[KB]    MaxUse[KB]    In Mem[KB]    OnDisk[KB]    SAPCurCach      HitRatio %
    Roll area               2,22                5.832               22.856             131.072     131.072                   IDs           96,61
    Page area              1,08              2.832                24.144               65.536    196.608              Statement     79,00
    Extended memory     22,90       958.464           1.929.216          4.186.112          0                                         0,00
    Heap memory                                    0                  0                    1.473.767          0                                         0,00
    Call Stati             HitRatio %     ABAP/4 Req      ABAP Fails     DBTotCalls         AvTime[ms]      DBRowsAff.
      Select single     88,59               63.073.369        5.817.659      4.322.263             0                         57.255.710
      Select               72,68               284.080.387          0               13.718.442             0                        32.199.124
      Insert                 0,00                  151.955             5.458             166.159               0                           323.725
      Update               0,00                    378.161           97.884           395.814               0                            486.880
      Delete                 0,00                    389.398          332.619          415.562              0                             244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • Returning multiple values from a called tabular form(performance issue)

    I hope someone can help with this.
    I have a form that calls another form to display a multiple column tabular list of values(needs to allow for user sorting so could not use a LOV).
    The user selects one or more records from the list by using check boxes. In order to detect the records selected I loop through the block looking for boxes checked off and return those records to the calling form via a PL/SQL table.
    The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the list and return to the calling form, it takes a while (about 3-4 minutes) to get back to the calling form with the selected values.
    I guess it is going through the block (all 5000 records) looking for boxes checked off, and that is what is causing the noticeable pause.
    Is this normal given the data volumes I have, or are there other, perhaps better, techniques or tricks I could use to improve performance? I am using Forms 6i.
    Sorry for being so long-winded and thanks in advance for any help.

    Try writing to your PL/SQL table when the user selects (or removing when they deselect) by using a WHEN-CHECKBOX-CHANGED trigger. This will eliminate the need for you to loop through a block with 5000 records and should improve your performance.
    I am not aware of any performance issues with PL/SQL tables in Forms, but if you still have slow performance try using a shared record group instead. I have used these in the past for exactly the same thing and had no performance problems.
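
    For illustration, such a trigger could look roughly like this (block, item and package names are placeholders, not from the original post):
    -- WHEN-CHECKBOX-CHANGED trigger on the checkbox item
    BEGIN
       IF :list_block.selected_flag = 'Y' THEN
          -- Remember this record's key in a package-level PL/SQL table
          selected_rows_pkg.add_key(:list_block.record_id);
       ELSE
          selected_rows_pkg.remove_key(:list_block.record_id);
       END IF;
    END;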
    Hope this helps,
    Candace Stover
    Forms Product Management

  • Performance issues with dynamic action (PL/SQL)

    Hi!
    I'm having performance issues with a dynamic action that is triggered on a button click.
    I have 5 drop down lists to select columns which the users want to filter, 5 drop down lists to select an operation and 5 boxes to input values.
    After that, there is a filter button that just submits the page based on the selected filters.
    This part works fine, the data is filtered almost instantaneously.
    After this, I have 3 column selectors and 3 boxes where users put values they wish to update the filtered rows to,
    There is an update button that calls the dynamic action (procedure that is written below).
    It should be straightforward; the only performance concern could be the decode section, because I need to cover cases where the user wants to set a value to null (@) and where he doesn't want to update all 3 columns, but fewer (he leaves '').
    Hence P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||')
    However when I finally click the update button, my browser freezes and nothing happens on the table.
    Can anyone help me solve this and improve the speed of the update?
    Regards,
    Ivan
    P.S. The code for the procedure is below:
    create or replace
    PROCEDURE DWP.PROC_UPD
    (P99_X_UC1 in VARCHAR2,
    P99_X_UV1 in VARCHAR2,
    P99_X_UC2 in VARCHAR2,
    P99_X_UV2 in VARCHAR2,
    P99_X_UC3 in VARCHAR2,
    P99_X_UV3 in VARCHAR2,
    P99_X_COL in VARCHAR2,
    P99_X_O in VARCHAR2,
    P99_X_V in VARCHAR2,
    P99_X_COL2 in VARCHAR2,
    P99_X_O2 in VARCHAR2,
    P99_X_V2 in VARCHAR2,
    P99_X_COL3 in VARCHAR2,
    P99_X_O3 in VARCHAR2,
    P99_X_V3 in VARCHAR2,
    P99_X_COL4 in VARCHAR2,
    P99_X_O4 in VARCHAR2,
    P99_X_V4 in VARCHAR2,
    P99_X_COL5 in VARCHAR2,
    P99_X_O5 in VARCHAR2,
    P99_X_V5 in VARCHAR2,
    P99_X_CD in VARCHAR2,
    P99_X_VD in VARCHAR2
    ) IS
    l_sql_stmt varchar2(32600);
    p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET'; 
    BEGIN
    l_sql_stmt := 'update ' || p_table_name || ' set '
    || P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||'),'
    || P99_X_UC2 || ' = decode('  || P99_X_UV2 ||','''','|| P99_X_UC2  ||',''@'',null,'|| P99_X_UV2  ||'),'
    || P99_X_UC3 || ' = decode('  || P99_X_UV3 ||','''','|| P99_X_UC3  ||',''@'',null,'|| P99_X_UV3  ||') where '||
    P99_X_COL  ||' '|| P99_X_O  ||' ' || P99_X_V  || ' and ' ||
    P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
    P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
    P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
    P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
    P99_X_CD   ||       ' = '         || P99_X_VD ;
    --dbms_output.put_line(l_sql_stmt); 
    EXECUTE IMMEDIATE l_sql_stmt;
    END;

    Hi Ivan,
    I do not think that the decode is performance relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows or the where clause is not selective enough and needs to update a huge amount of records.
    Besides that - and I might be wrong, because I only know some part of your app - the code here looks like it has a huge SQL injection vulnerability. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing the P99_X_On parameters (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
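
    For illustration, the kind of checks meant here could look roughly like this (a sketch only; the operator whitelist is just an example, and quoting every value as a string literal may need adjusting for numeric or date columns):
    -- Whitelist the operator parameter(s)
    IF P99_X_O NOT IN ('=', '<', '>', '<=', '>=', '<>', 'LIKE') THEN
       raise_application_error(-20001, 'operator not allowed: ' || P99_X_O);
    END IF;
    -- Validate identifiers and quote literal values before concatenating
    l_sql_stmt := 'update ' || sys.dbms_assert.qualified_sql_name(p_table_name)
               || ' set '   || sys.dbms_assert.simple_sql_name(P99_X_UC1)
               || ' = '     || sys.dbms_assert.enquote_literal(REPLACE(P99_X_UV1, '''', ''''''))
               || ' where ' || sys.dbms_assert.simple_sql_name(P99_X_CD)
               || ' = '     || sys.dbms_assert.enquote_literal(REPLACE(P99_X_VD, '''', ''''''));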
    Regards,
    Christian

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                      SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM TABLE (processor (base_query (argA, argB),100)) t
             GROUP BY colC
             ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                      SUM (CASE WHEN colD > colE AND colE  != '0' THEN colD / ColE END)de,
                      SUM (CASE WHEN colE > colD AND colD  != '0' THEN colE / ColD END)ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
     ALTER PACKAGE pipeline_example COMPILE;
     Edited by: Earthlink on Nov 14, 2010 9:47 AM
    Edited by: Earthlink on Nov 14, 2010 11:31 AM
    Edited by: Earthlink on Nov 14, 2010 11:32 AM
    Edited by: Earthlink on Nov 20, 2010 12:04 PM
    Edited by: Earthlink on Nov 20, 2010 12:54 PM

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
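
    For example, a 10046 trace with wait events for your own session can be captured like this and then formatted with tkprof:
    -- Enable tracing with wait events (and binds) for the current session
    EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the with_pipeline and no_pipeline tests here ...
    EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;
    -- Then run tkprof on the raw trace file and compare the two runs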
