Central confirmation is taking a huge amount of time for a particular user in SRM

Hi Gurus.
I am facing an issue in the Production system. For some users, Central confirmation is taking a very long time;
around 10 users have reported the issue so far, and it is taking about 10 times longer than usual. Any suggestions would be a great help. Is anyone else facing this issue?

Hi Prabhakar,
As Konstantin rightly mentioned, kindly check those BAdI implementations, especially BBP_WF_LIST. In addition to that, please check whether you are getting the following short dump:
TSV_TNEW_PAGE_ALLOC_FAILED
Best Regards,
Bharathi

Similar Messages

  • Taking a huge amount of time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table is taking a very long time and throwing a TIME_OUT dump.
    I believe this table holds more than a million entries. Is there any alternative way to find the number of entries?
    We are selecting the data from CDHDR with the following conditions:
    Objclass = 'classify'
    Udate    = 'X' date
    Utime    = 'X' (even selecting a 1-minute window)
    We also tried to create an index on the UDATE field, but that also takes a huge amount of time (more than 6 hours and still incomplete).
    Can you suggest an alternative way to find the entries?
    Regards,
    VS

    Hello,
    In the SE16 initial screen, enter your selection criteria, then run the count in the background and create a spool request:
    SE16 > enter the selection criteria, then Program > Execute in Background.
    Best regards,
    Peter

  • Query taking a long time (more than 24 hours) to extract the data

    Hi ,
    The query is taking a long time to extract the data (more than 24 hours). Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please advise.
    SQL> explain plan for
    select a.account_id, round(a.account_balance,2) account_balance,
    nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
    to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
    to_char(nvl(i.payment_due_date,
    to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY')
    due_date, ah.current_balance - ah.previous_balance amount,
    decode(ah.invoice_id, null, 'A', 'I') transaction_type
    from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
    and a.account_type_id = 1000002
    and round(a.account_balance,2) > 0
    and (ah.invoice_id is not null or ah.adjustment_id is not null)
    and ah.CURRENT_BALANCE > ah.previous_balance
    and ah.invoice_id = i.invoice_id(+)
    AND a.account_balance > 0
    order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
    where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;
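    If the optimizer still picks full scans after the composite indexes are in place, the intended join order can also be suggested with hints. This is only a sketch: the hint set is an assumption to be verified against the actual plan, and the index names are the ones proposed above:
    select /*+ leading(a ah i) use_nl(ah)
               index(a account_composite_i) index(ah acct_history_comp_i) */
           a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
     where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.current_balance > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       and a.account_balance >= 0.005
     order by a.account_id, ah.effective_start_date desc;
    Compare the hinted and unhinted plans before keeping the hints; if the new indexes alone give the nested-loops plan, drop them.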

  • Procedure is taking more time for execution

    hi,
    When I try to execute the procedure below, it takes a long time to execute.
    Can you please suggest possible ways to tune the query?
    PROCEDURE sp_sel_cntr_ri_fact (
    po_cntr_ri_fact_cursor OUT t_cursor
    )
    IS
    BEGIN
    OPEN po_cntr_ri_fact_cursor FOR
    SELECT c_RI_fAt_id, c_RI_fAt_code, c_RI_fAt_nme,
         case when exists (SELECT 'x' FROM A_CRF_PARAM_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
         then 'Yes'
              when exists (SELECT 'x' FROM A_EMPI_ERV_CALIB_DETAIL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM A_IC_CNTRY_IC_CRF_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM A_IC_CRF_CNTRYIDX_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.x_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.y_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM A_PAR_MARO_GAMMA_PRIME_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM D_ANALYSIS_FAT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM D_CALIB_CNTRY_RI_FATOR t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_BUSI_PORT_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_CNTRY_LOSS_DIST_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_CNTRY_LOSS_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_CRF_BUS_PORTFOL_CRITERIA t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_CRF_CORR_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
              when exists (SELECT 'x' FROM E_HYPO_PORTF_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
              then 'Yes'
         else
              'No'
         end used_analysis_ind,
         creation_date, datetime_stamp, user_id
         FROM A_IC_CNTR_RI_FAT
    ORDER BY c_RI_fAt_id_nme DESC;
    END sp_sel_cntr_ri_fact;

    [When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597]
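    A long chain of CASE WHEN EXISTS probes like the one above can often be collapsed into one semi-join against a UNION ALL of the referencing tables, which gives the optimizer a single probe to optimize instead of fifteen. A rough sketch, reusing the original table and column names; it is assumed, not verified, to be behaviourally equivalent:
    SELECT f.c_RI_fAt_id, f.c_RI_fAt_code, f.c_RI_fAt_nme,
           CASE WHEN EXISTS (
                  SELECT NULL FROM (
                    SELECT c_RI_fAt_id FROM A_CRF_PARAM_CALIB
                    UNION ALL
                    SELECT c_RI_fAt_id FROM A_EMPI_ERV_CALIB_DETAIL
                    UNION ALL
                    SELECT x_axis_c_RI_fAt_id FROM A_IC_CRF_RESI_COR
                    UNION ALL
                    SELECT y_axis_c_RI_fAt_id FROM A_IC_CRF_RESI_COR
                    -- ... one SELECT per remaining referencing table ...
                  ) u
                  WHERE u.c_RI_fAt_id = f.c_RI_fAt_id)
                THEN 'Yes' ELSE 'No'
           END used_analysis_ind,
           f.creation_date, f.datetime_stamp, f.user_id
      FROM A_IC_CNTR_RI_FAT f
     ORDER BY c_RI_fAt_id_nme DESC;
    For this to pay off, each probed column in the UNION ALL branches should be indexed, so every EXISTS probe is an index-only lookup.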

  • ADF application taking more time the first time and less from the second time

    Hi Experts,
    We are using ADF 11.1.1.2.
    Our application contains 5 jsp pages, 10 - 12 taskflows, and 50 jsff pages.
    The first time in the day that we use the application, it takes more than 60 seconds on some actions.
    From the next time onwards it takes 5 to 6 seconds.
    The same thing happens daily.
    Can anyone tell me why this application takes more time the first time and less time from the second time onwards?
    Regards
    Gayaz

    Hi,
    If you don't restart your WLS every day, then you should read about Tuning Application Module Pools and Connection Pools:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0301
    And pay attention to the parameters Maximum Available Size and Minimum Available Size:
    http://docs.oracle.com/cd/E15523_01/web.1111/b31974/bcampool.htm#sm0314
    And adjust them to suit your needs.

  • Self Service Password Registration Page taking more time for loading in FIM 2010 R2

    Hi,
    I have successfully installed FIM 2010 R2 SSPR and it is working fine,
    but my problem is that the Self Service Password Registration page takes a long time to load when I provide the Windows credential; it takes approximately 50 to 60 seconds to load the page in FIM 2010 R2.
    This is a very urgent requirement.
    Regards
    Anil Kumar

    Double check that the objectSid, accountname and domain are populated for the users in the FIM portal, and that each user is connected to their AD counterpart.
    Check here for more info:
    http://social.technet.microsoft.com/wiki/contents/articles/20213.troubleshooting-fim-sspr-error-3003-the-current-user-account-is-not-recognized-by-forefront-identity-manager-please-contact-your-help-desk-or-system-administrator.aspx

  • Impdp taking a long time for only a few MB of data...

    Hi All,
    I have one query related to impdp. I have an expdp file of size 47 MB. When I restore this dump using impdp, it takes a long time. The table data initially finishes loading very fast, but later the ALTER FUNCTION/PROCEDURE/VIEW steps take a lot of time, almost 4 to 5 hours.
    I have no idea why it's taking so long. Earlier I could see that one DB link had failed with the error "TNS name could not be resolved", so I created the DB link before running impdp, but got the same result. Can anyone suggest what could cause it to take so long for only 47 MB of data?
    Note - both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dump file into 11.2.0.1.0, it is done in a few minutes.
    Thanks...

    Also Read
    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
    DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1]

  • Program SAPLSBAL_DB taking long time for BALHDR table entries

    Hi Guys,
    I am running a Z program in both the Quality and Production systems, which uploads data from the desktop.
    In the Quality system the Z program uploads the data successfully, but in the Production system it takes a very long time and sometimes even times out.
    As per the trace analysis, program SAPLSBAL_DB is taking a long time on BALHDR table entries.
    Can anybody provide me any suggestion.
    Regards,
    Shyamal.

    These are QA screenshots where there is no issue, but we are getting very long times in CRP.
    Regards,
    Shyamal

  • OBIEE 10g taking long time for login

    Hello Experts,
    I am using Oracle BI Applications 7.9.6.2 with OBIEE 10.1.3.4.2.
    When any user tries to log in, the process takes a long time (50-60 sec), but after login the query response is good.
    Can you suggest how I can reduce this login interval?
    Thank You

    Please disable unwanted VARIABLES in the repository and check whether any init block initialization is taking more time.

  • We are running a report; it is taking a long time to execute. What steps?

    We are running a report and it is taking a long time to execute. What steps can we take to reduce the execution time?

    Hi ,
    Performance can be improved in many ways:
    First, try to select based on the key fields if it is a very large table.
    If not, then create a secondary index for the selection.
    Don't perform SELECTs inside a loop; instead use FOR ALL ENTRIES IN.
    Try to perform many operations in one loop rather than calling different loops on the same internal table.
    All these and many more steps can be implemented to improve performance.
    We would need to look at your code to see how it can be improved in your case.
    Regards,
    Vivek Shah

  • You are running a report. It is taking a long time to execute

    You are running a report. It is taking a long time to execute. What steps will you take to reduce the execution time?
    Please explain clearly.

    Avoid loops inside loops.
    Avoid SELECTs inside loops.
    Select only the data that is required instead of using SELECT *.
    Select the fields in the sequence in which they are present in the database, and also specify the fields in the WHERE clause in the same sequence.
    When you are using FOR ALL ENTRIES in a SELECT statement, check that the internal table you are referring to is not initial.
    Replace SELECT ... ENDSELECT with SELECT ... INTO TABLE.
    Avoid SELECT SINGLE inside a loop; instead select all the data before the loop and read that internal table inside the loop using a binary search.
    Sort the internal tables wherever necessary.

  • Tell me the parameter to set the maximum GUI auto-logout time for limited users

    hi gurus...
    I want to know the parameter to set the maximum GUI auto-logout time for a limited set of users.
    At present the auto-logout time is 30 minutes, but I need to set the value to 10 minutes for some group of users.
    If anyone knows such a parameter, please let me know.
    thanks in advance,
    chaitanya...

    Hi Chaitanya,
    I don't think there's a specific parameter to achieve this, but you can set the value of rdisp/gui_auto_logout to 600 (the value is in seconds, i.e. 10 minutes) in one of the instances and create a new logon group for these users.
    Hope this help!
    Juan
    Please reward with points if helpful

  • Blocking Movement type 309 for a particular user ID

    Hello,
    We want to block Movement Type '309' for a particular user ID.
    How can this be achieved?

    Hi,
    You can do it through authorization control, for the objects:
    M_MSEG_BMB     Material Documents: Movement Type
    M_MSEG_BWA     Goods Movements: Movement Type
    M_MSEG_LGO     Goods Movements: Storage Location
    Remove movement type 309 for the particular user.
    Regards
    Manish

  • UPDATE proc taking HUGE TIME

    Hi
    An Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
    -- v_c, err_num and err_msg were not declared in the posted code; declared here so the block compiles.
    -- wk_comm and wk_end are assumed to be package-level DATE variables.
    v_c NUMBER := 0;
    err_num NUMBER;
    err_msg VARCHAR2(100);
    CURSOR C1 IS
    select tim.c_col, mp.t_n
    from Materialized_VW tim, MP_Table mp
    where tim.R_id = mp.R_id
    and tim.P_id = mp.P_id
    and tim.t_id = mp.t_id
    and mp.t_date between wk_comm and wk_end;
    BEGIN
    FOR I IN C1
    LOOP
    IF v_c = 100000 THEN
    v_c := 0;
    COMMIT;
    END IF;
    v_c := v_c + 1;
    UPDATE MP_Table mp
    SET c_col = i.c_col
    WHERE mp.t_n = i.t_n;
    END LOOP;
    COMMIT;
    EXCEPTION
    WHEN OTHERS THEN
    ROLLBACK;
    err_num := SQLCODE;
    err_msg := SUBSTR(SQLERRM, 1, 100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
    MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
    The Explain Plan shows (NOTE : Whenever NUMBER OF RECORDS is More)
    SELECT STATEMENT ALL_ROWS
    Cost: 17,542 Bytes: 67 Cardinality: 1
    3 HASH JOIN
    Cost: 17,542 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE
    Cost: 14 Bytes: 111,645 Cardinality: 4,135
    2 TABLE ACCESS FULL MATERIALIZED_VW
    Cost: 16,957 Bytes: 178,668,800 Cardinality: 4,466,720
    The Explain Plan shows (NOTE : Whenever NUMBER OF RECORDS is Less)
    SELECT STATEMENT ALL_ROWS
    Cost: 2,228 Bytes: 67 Cardinality: 1
    6 NESTED LOOPS Cost: 2,228 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE Cost: 3 Bytes: 12,015 Cardinality: 445
    5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW Cost: 2,228 Bytes: 40 Cardinality: 1
    4 AND-EQUAL
    2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
    3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This intermittent behaviour of the explain plan is causing it to take a huge amount of time whenever the number of records is larger.
    This strange behaviour is causing problems, as 10 hours is too much for any UPDATE (especially when the number of records is only a 6-digit number).
    But we cannot use a direct UPDATE either, as that would result in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above, as soon as possible.
    Also, is there any way to establish consistent behaviour that takes less time?
    Thanks
    Arnab
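
    A set-based alternative to the row-by-row loop above could be sketched as a single MERGE. This is a sketch, not the poster's code: it presumes t_n identifies rows in MP_Table and that wk_comm and wk_end are bound as dates; the MAX() in the source query is there only to keep the source deterministic if the join returns duplicate t_n values:
    MERGE INTO MP_Table mp
    USING (
      SELECT mp2.t_n, MAX(tim.c_col) AS c_col
      FROM   Materialized_VW tim, MP_Table mp2
      WHERE  tim.R_id = mp2.R_id
      AND    tim.P_id = mp2.P_id
      AND    tim.t_id = mp2.t_id
      AND    mp2.t_date BETWEEN :wk_comm AND :wk_end
      GROUP BY mp2.t_n
    ) src
    ON (mp.t_n = src.t_n)
    WHEN MATCHED THEN UPDATE SET mp.c_col = src.c_col;
    COMMIT;
    A single MERGE avoids the per-row context switches and the intermediate commits, though it needs enough undo space for the whole update in one transaction.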

    Hi BluShadow,
    I followed up your example extending it to the bulk processing.
    I have tested insert and update operations.
    Here are the insert result:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
      v_sysdate DATE;
      v_insert NUMBER;
      TYPE t_nt_x IS TABLE OF NUMBER;
      TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      v_nt_x t_nt_x;
      v_nt_z t_nt_z;
      CURSOR c1 IS SELECT rownum as x, 'test1' as z FROM DUAL CONNECT BY ROWNUM <= 1000000;
    BEGIN
      -- Single insert
      v_insert := 0;
      EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
      v_sysdate := SYSDATE;
      INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
      v_insert := SQL%ROWCOUNT;
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

      -- Multi insert
      v_insert := 0;
      EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
      v_sysdate := SYSDATE;
      FOR i IN 1..1000000
      LOOP
        INSERT INTO mytable (x,z) VALUES (i,'test1');
        v_insert := v_insert+SQL%ROWCOUNT;
      END LOOP;
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

      -- Multi insert using bulk
      v_insert := 0;
      EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
      v_sysdate := SYSDATE;
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
        EXIT WHEN c1%NOTFOUND;
        FORALL i IN 1..v_nt_x.count
          INSERT INTO mytable (x,z) VALUES (v_nt_x(i),v_nt_z(i));
        v_insert := v_insert+SQL%ROWCOUNT;
      END LOOP;
      CLOSE c1;  -- added: the original left the cursor open
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    END;
    /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.

    And here is the update result:
    SQL> DECLARE
      v_sysdate DATE;
      v_update NUMBER;
      TYPE t_nt_x IS TABLE OF ROWID;
      TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      v_nt_x t_nt_x;
      v_nt_z t_nt_z;
      CURSOR c1 IS SELECT rowid as ri, 'test4' as z FROM mytable;
    BEGIN
      -- Single update
      v_update := 0;
      v_sysdate := SYSDATE;
      UPDATE mytable SET z='test2';
      v_update := SQL%ROWCOUNT;
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

      -- Multi update
      v_update := 0;
      v_sysdate := SYSDATE;
      FOR rec IN (SELECT ROWID AS ri FROM mytable)
      LOOP
        UPDATE mytable SET z='test3' WHERE ROWID=rec.ri;
        v_update := v_update+SQL%ROWCOUNT;
      END LOOP;
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));

      -- Multi update using bulk
      v_update := 0;
      v_sysdate := SYSDATE;
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
        EXIT WHEN c1%NOTFOUND;
        FORALL i IN 1..v_nt_x.count
          UPDATE mytable SET z=v_nt_z(i) WHERE ROWID=v_nt_x(i);
        v_update := v_update+SQL%ROWCOUNT;
      END LOOP;
      CLOSE c1;  -- added: the original left the cursor open
      COMMIT;
      DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    END;
    /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.

    The single statement has still got the better performance, but with bulk processing the cursor performance has improved dramatically
    (in the update case the bulk processing is even slightly better than the single statement).
    I guess that with bulk processing there is much less switching between the SQL and PL/SQL engines.
    It would be interesting to test it with more rows, i might do it tomorrow.
    Just thought it would have been interesting sharing the result with you guys.
    Cheers,
    Davide

  • Taking more time to retrieve data from a nested table

    Hi
    We have two databases, db1 and db2; database db2 has a number of nested tables.
    There is a link between the two databases, and whenever we fire any query in db1 it internally accesses the nested tables in db2.
    Fetching records takes much more time even though there are few records in the table. What could be the reason?
    Please help; we are facing this problem daily.

    Please avoid duplicate threads:
    quaries taking more time
    Nicolas.
    < mod. action: thread locked >
