Loading in Info Cube takes huge time

Hi,
We are loading transactional data into the InfoCube ZFI_C01 from the DSO ZFI_O05.
We loaded around 1.8 million records (1,776,444) through one request into the DSO ZFI_O05, which took around 13 minutes including activation of the request. (Note that the flag "SIDs Generation upon Activation" was checked, so SIDs were generated for all characteristics in the DSO during activation.)
When we loaded the same request into the cube ZFI_C01, it took around 3 hours to finish.
I ran RSRV checks on the InfoCube ZFI_C01 to find the ratio between the fact table and the dimension tables
(RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> enter the cube name and check the log; it gives the ratio between the fact table and the dimension tables).
I got the following results for two of the dimensions involved in the cube:
Table /BIC/DZFI_C013 has 1564356 entries; size corresponds to 86% of the InfoCube
Table /BIC/DZFI_C012 has 1649990 entries; size corresponds to 91% of the InfoCube
When I checked the properties of these two dimensions, the checkboxes "High Cardinality" and "Line Item Dimension" were both unchecked.
I cannot flag them as line item dimensions, because both dimensions contain more than one characteristic.
Shall I check the "High Cardinality" checkbox and retry the load, given that the ratio of dimension table to fact table size is more than 20%?
I am a bit unclear, though, about what impact that will have on reporting.
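A quick way to cross-check the RSRV figures is to count the entries yourself. A minimal sketch, assuming the standard naming convention for the F fact table (/BIC/FZFI_C01 is my assumption here; for a compressed cube the E table /BIC/EZFI_C01 would have to be counted as well):

REPORT zdim_fact_ratio.
" Compare the size of one dimension table with the fact table.
" A ratio well above 10-20% indicates a degenerated dimension.
DATA: lv_dim  TYPE i,
      lv_fact TYPE i,
      lv_pct  TYPE p LENGTH 8 DECIMALS 1.

SELECT COUNT( * ) FROM /bic/dzfi_c012 INTO lv_dim.
SELECT COUNT( * ) FROM /bic/fzfi_c01  INTO lv_fact.

IF lv_fact > 0.
  lv_pct = lv_dim * 100 / lv_fact.
ENDIF.
WRITE: / 'Entries in /BIC/DZFI_C012:', lv_dim,
       / 'Entries in /BIC/FZFI_C01:', lv_fact,
       / 'Dimension/fact ratio (%):', lv_pct.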

Hi there,
Check whether you have any routines (start routine, end routine, etc.) whose code could hurt load performance.
Also check in SM50, while the load to the InfoCube is running, whether there are many reads on table NRIV and/or on some specific dimension tables. If so, find out which number range objects belong to those dimension tables, cancel the running load, go to transaction SNRO and set a buffer size of 500 for those number range objects, then repeat the load.
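To confirm the NRIV suspicion before changing anything in SNRO, you can list the number range intervals behind the dimensions. A minimal sketch, assuming the usual BID* naming for BW dimension number range objects (SID number ranges are typically named BIM*):

REPORT zcheck_nriv_buffer.
" List the BW dimension number range objects and their current levels.
" If SM50 shows many direct reads on NRIV during the load, buffering
" these objects in SNRO (main memory buffering, e.g. 500 numbers)
" usually removes the bottleneck.
DATA lt_nriv TYPE STANDARD TABLE OF nriv.
FIELD-SYMBOLS <ls_nriv> TYPE nriv.

SELECT * FROM nriv INTO TABLE lt_nriv
  WHERE object LIKE 'BID%'.

LOOP AT lt_nriv ASSIGNING <ls_nriv>.
  WRITE: / <ls_nriv>-object,
           <ls_nriv>-nrrangenr,
           <ls_nriv>-nrlevel.
ENDLOOP.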
See if it helps,
Diogo.

Similar Messages

  • I need format for data in excel file load into info cube to planning area.

    Hi gurus,
    I need the format for the data in an Excel file to load into an InfoCube and on to a planning area.
    Can you tell me what I should maintain in the header?
    My current understanding is something like:
    plant,location,customer,product,history qty,calendar
    100,delhi,suresh,nokia,250,2011211
    Can you explain whether this is right or wrong, and send me details about the Excel file format?
    babu

    Hi Babu,
    The file format should match what you want to upload, and the column sequence of the file must be the same as the communication structure.
    Like:
    initial columns with the characteristics (e.g. plant, location, customer, product),
    then the date column (check the date format) (e.g. calendar),
    and the last columns with the key figures (e.g. history qty).
    Hope this helps.
    Regards,
    Nawanit

  • ODS takes huge time

    Hi guys,
    The data lookup in the ODS is taking a huge amount of time. It was working fine a week back; the ODS has around 200 million records.
    Thanks,
    Your help will be greatly appreciated

    For two records, it certainly should not take that much time...
    Also check the overall system performance: sometimes Basis may be running backup activities, which can also degrade system performance.
    So check whether this is happening to everyone in your project, and read the content directly from the ODS (active data) instead of via LISTCUBE, for comparison.
    Also check how heavily your system is loaded.
    Thanks..
    Hope this helps

  • Load to Info cube - Error

    I am trying to load from a DSO to an InfoCube. The records don't have errors, but the process always terminates with the following message:
    Error while updating to target ZPU_C01 (type INFOCUBE)
    Processing Terminated
    It's not able to create an error DTP either, since there are no errors.
    Any suggestions?

    Hi,
    Have you debugged the request? In the DTP monitor, click on Debugging at the top.
    How is the system performance? Check for short dumps in ST22 and the system log in SM21.
    Regards,
    Debjani

  • Data is not loaded into info cube

    Hi All,
    I have a custom-defined InfoCube with a full load running every month; it normally takes barely an hour to finish.
    But this month the data came into the PSA successfully, and from there to the data target the process stopped at the update rules: none of the records are getting added to the InfoCube. We haven't made any changes to the update rules either.
    Can anybody let me know what might be the reason behind this? Thanks.
    Regards,
    Ashok

    Hi Ashok,
    You can do the following:
    1. In the Monitor Status tab, turn the request Red.
    2. In the Details tab right click this data package and choose Manual Update.
    3. After the processing is done, the data package will be green, but the overall request will still be red.
    4. In the Monitor Status tab, turn the request back to original status (this will make it green)
    Hope this helps...

  • Flat File: no data load into Info Cube

    Hi there,
    I am trying to load a flat file. When I simulate the upload it works well, but no data is loaded into my InfoCube. When I then try to define a query, nothing is available.
    Can someone provide me with a solution for this problem?
    With rgds
    Oktay Demir

    Hi Oktay,
    in addition to A.H.P.'s remarks, check whether:
    - data is posted not only into the PSA but also into the data target,
    - the update rules are active,
    - the monitor status in the cube administration is green,
    - the request is available for reporting within the cube administration.
    Cheers
    Sven

  • First WES7 start takes huge time on some machines

    Hi,
    we have a WES7 image that resembles full-blown W7. We don't use EWF/HORM or other embedded features. Everything is OK, except that on some of the recent HP desktop units the first boot takes ages (literally, 4-5 hours to complete!).
    Windows boots normally and then sits at the 'Setup is installing devices 69%' screen for a very long time. It doesn't hang, as the numbers keep counting (till 75% or so), and afterwards it works perfectly. The disk is not busy during the wait, and probably not the CPU either; at least the cooler is not running fast.
    Of the 10 machines in one batch, it happens on approximately half. It was not observed before (on older HP machines or on any other customers' machines); it started only with the new HP EliteDesk 800 G1 SFF machines.
    It's quite a weird issue: in house we can simply wait 5 hours per machine, but in the field that is not acceptable at all.
    Does anybody have any idea what to check or what to do to resolve the issue?
    Thanks in advance,
    WBR, Nikolai

    Hello Sean,
    sorry for the long delay, I was busy with the release etc. Last Friday I managed to reproduce the case. As I said before, OOBE went smoothly till ~60% of the driver installation and then slowed down drastically; it finally finished after 3 hours. I did the re-installation around 10 times in 2 days, and it hung twice.
    I checked the panther folder and first found this in setupact.log
    (here is the link to the full log http://www.webcctv.com/files/%7B70F7C68D-EF19-45F7-85EB-E3F571EFB7EA%7D/setupact.zip, 21 kB zipped):
    2015-02-06 17:45:34, Info                  SYSPRP Re-checking active status...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Starting...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 0, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:35, Info                  SYSPRP Status: Active
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 1, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP SPPNP: Reinstalling devices on the system...
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 2, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 3, Complete = 0, Progress = 0% (0%)
    That was OK; then, as you can see, the speed of the driver installation decreased:
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 172, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:46:55, Info                  SYSPRP Status: Active
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 173, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 105, Progress = 60% (60%)
    2015-02-06 17:56:55, Info                  SYSPRP Status: Active
    2015-02-06 17:56:55, Info                  SYSPRP Status: Total = 173, Complete = 106, Progress = 61% (61%)
    2015-02-06 18:01:55, Info                  SYSPRP Status: Active
    2015-02-06 18:01:55, Info                  SYSPRP Status: Total = 173, Complete = 107, Progress = 61% (61%)
    2015-02-06 18:06:55, Info                  SYSPRP Status: Active
    2015-02-06 18:06:55, Info                  SYSPRP Status: Total = 173, Complete = 108, Progress = 62% (62%)
    2015-02-06 18:11:56, Info                  SYSPRP Status: Active
    2015-02-06 18:11:56, Info                  SYSPRP Status: Total = 173, Complete = 109, Progress = 63% (63%)
    2015-02-06 18:16:56, Info                  SYSPRP Status: Active
    2015-02-06 18:16:56, Info                  SYSPRP Status: Total = 173, Complete = 110, Progress = 63% (63%)
    and finished ~3 hours later:
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 171, Progress = 98% (98%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 172, Progress = 99% (99%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: End
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = -1, Complete = -1, Progress = 100% (100%)
    2015-02-06 21:37:08, Info                  SYSPRP Callback canceled operation, Err = 1223!
    2015-02-06 21:37:08, Info                  SYSPRP DeviceInstallStatusThreadProc failed, Err = 1223
    2015-02-06 21:37:09, Info                  SYSPRP SPPNP: Devices reinstallation completed.
    2015-02-06 21:37:09, Info                  SYSPRP Waiting for thread to exit
    2015-02-06 21:37:09, Info                  SYSPRP Thread exited
    I checked the other files in that folder but don't see anything specific.
    Any ideas?
    WBR, Nikolai

  • Selective deletion on info cube taking long time.

    We have a cube with a line item dimension, partitioned on fiscal year. When we run selective deletion on the cube on 0FISCPER (removing old, redundant data), it takes quite a long time (approx. 10 hours), as there is a huge amount of data in the E fact table: requests older than 4 days are compressed in the cube, and the table holds millions of records.
    Is there any way we can improve the performance of this selective deletion? Any suggestions for improvement are welcome; we are on an SAP BW 3.5 system.

    Hi,
    The issue is very specific.
    I recommend checking whether there are matching indexes that fit the WHERE clauses of the delete statements; that makes the access to the records most efficient.
    All other indexes are best dropped for the time being and re-created after the cleanup is completed; this saves the time spent maintaining them during the deletes.
    The deletes should also be performed in bulk, i.e. many (e.g. 25, 100 or more) at once: if the delete SQLs are sent from ABAP to the database one by one, there is a huge overhead in network round-trip time (a sketch of the idea follows below).
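    A minimal sketch of the set-based idea in ABAP, assuming a hypothetical table ZFACT_OLD keyed on FISCPER (a real E fact table is keyed on dimension IDs, so a productive WHERE clause would have to go through the time dimension):
    TYPES: BEGIN OF ty_old,
             fiscper TYPE n LENGTH 7,
           END OF ty_old.
    DATA lt_old TYPE STANDARD TABLE OF ty_old.
    FIELD-SYMBOLS <ls_old> TYPE ty_old.

    " Slow: one database round trip per key.
    LOOP AT lt_old ASSIGNING <ls_old>.
      DELETE FROM zfact_old WHERE fiscper = <ls_old>-fiscper.
    ENDLOOP.

    " Faster: one array delete sends all keys in a single operation ...
    DELETE zfact_old FROM TABLE lt_old.

    " ... or a single ranged delete covers the whole period at once.
    DELETE FROM zfact_old WHERE fiscper < '2004001'.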
    Best Regards,
    Sylvia

  • Data load in the Production takes long time

    Hi All,
    We have a load extracting data from the 0HR_PA_0 DataSource. It loads around 230,000 records daily, but takes a long time, around 3 hours. In the job log of the source system the job stays active for a long time, and I can see there are no enhancements on the DataSource. Can anybody throw some light on this? The processing type is "PSA and data targets in serial". If we change it to "PSA and data targets in parallel", will the load improve?
    Thanks

    If you do have an enhancement in the DataSource, make sure the ABAP meets the optimizations below.
    Things to keep in mind:
    1) Avoid SELECT statements inside loops.
    2) Use LOOP AT <itab> ASSIGNING <fs> instead of LOOP AT <itab> INTO wa.
    3) Don't pull whole tables into memory; use a proper WHERE clause in the SELECT to minimize the memory load.
    4) When reading an internal table, use BINARY SEARCH as much as possible (the table must be sorted).
    The same guidelines apply to start routines; a short sketch follows below.
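    A minimal sketch of guidelines 2) and 4) combined; all names here (ty_lookup, result_package, bilno, etc.) are placeholders, not objects from this thread:
    TYPES: BEGIN OF ty_lookup,
             docnum TYPE c LENGTH 10,
             debcre TYPE p LENGTH 8 DECIMALS 2,
           END OF ty_lookup,
           BEGIN OF ty_result,
             bilno TYPE c LENGTH 10,
             deb   TYPE p LENGTH 8 DECIMALS 2,
           END OF ty_result.
    DATA: it_lookup      TYPE STANDARD TABLE OF ty_lookup,
          wa_lookup      TYPE ty_lookup,
          result_package TYPE STANDARD TABLE OF ty_result.
    FIELD-SYMBOLS <result> TYPE ty_result.

    SORT it_lookup BY docnum.                  " prerequisite for BINARY SEARCH
    LOOP AT result_package ASSIGNING <result>. " no copy into a work area
      READ TABLE it_lookup INTO wa_lookup
           WITH KEY docnum = <result>-bilno
           BINARY SEARCH.                      " O(log n) instead of linear scan
      IF sy-subrc = 0.
        <result>-deb = wa_lookup-debcre.
      ENDIF.
    ENDLOOP.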
    Cheers,
    DK
    (Giving points is another way to say thanks:))

  • Data loading from source system takes long time.

    Hi,
         I am loading data from R/3 to BW. I am getting following message in the monitor.
    Request still running
    Diagnosis
    No errors could be found. The current process has probably not finished yet.
    System response
    The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
    and/or
    the maximum wait time for this request has not yet run out
    and/or
    the batch job in the source system has not yet ended.
    Current status
    in the source system
    Is there anything wrong with the partner profile maintenance in the source system?
    Cheers
    Senthil

    Hi,
    I will suggest you to check a few places where you can see the status
    1) SM37 job log (in the source system if the load is from R/3, or in BW if it's a datamart load): give the request name, and it should give you the details about the request. If it's active, make sure the job log is getting updated at frequent intervals.
    Also see whether there is any 'sysfail' for any data packet in SM37.
    2) SM66: get the job details (server name, PID etc. from SM37) and see whether the job is running or not (in the source system if the load is from R/3, or in BW if it's a datamart load). See if it's accessing/updating some tables or not doing anything at all.
    3) RSMO: see what is available in the Details tab. It may be stuck in the update rules.
    4) ST22: check whether any short dump has occurred (in the source system if the load is from R/3, or in BW if it's a datamart load).
    5) SM58 and BD87 for pending tRFCs and IDOCS.
    Once you identify you can rectify the error.
    If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
    If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
    If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
    SM21 - System log can also be helpful.
    Thanks,
    JituK

  • Delta loading in info cube

    Hi Gurus,
    I am trying to load the cube 0FIAR_C03 using the delta mechanism. In the InfoPackage I chose "Initialize Delta Process", and when I tried delta in extraction mode, it says the delta mechanism is not possible and asks me to use a full update. Is a delta load possible for this cube, or should I follow some other procedure for the delta load?
    Regards,
    Raghavendra.

    First, sanity-check that the DataSource you are using is delta-capable itself.
    After that, create an InfoPackage as INIT to load the data from ECC to the PSA,
    then create a DELTA DTP to load from the PSA to the InfoCube (no need to create an INIT DTP).
    http://help.sap.com/saphelp_nw70/helpdata/en/79/0d163c240a9440e10000000a114084/frameset.htm
    Hope it Helps
    Chetan
    @CP..

  • Shut down to take huge time

    Hi All,
    I am using 11.1.1.3 on Windows Server 2003. When I start the Essbase, Planning, HFR and Web Analysis related services and then, after completing my work, try to shut down, it takes 1-2 hours.
    Please can anyone help me resolve this?
    Thanks,

    Are you stopping the services before shutting down the machine? I know there was an issue with OpenLDAP: if you didn't stop the service, it would take a long time for the machine to shut down.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Data load to DSO takes long time to finish

    Dear All,
    We have a data load from a DataSource to a standard DSO. The load takes 5 hours to complete 6,000 records in a single data package, which is a long time.
    The process monitor shows yellow status for a long time at one step, "No message: Transformation End", and after approx. 5 hours it completes successfully.
    Please find the snapshot of the process monitor (attached file: Process monitor.png).
    There is an end routine, and the transformation has direct mapping except for one target object, exchange rate, which is a master data lookup on a DSO (attached file: Transformation rule.png).
    The lookup DSO /BI0/AFIGL_DS00 in the code below has DOCNUM as a primary key, but not POSKY. Since one of the fields is not a primary key, a secondary index was created on the lookup DSO. Still, the last step mentioned in the snapshot takes a huge amount of time.
    The setting for parallel processes is 1.
    DTP -> Update tab -> Error handling is set to "No update, no reporting". There is also an error DTP, which I believe is unused when this option is chosen.
    Can you please suggest the reason for such a long runtime, and how to find the exact place where the time is consumed?
    End routine logic:
    IF NOT RESULT_PACKAGE IS INITIAL.
      REFRESH IT_FIG.
      SELECT DOCNUM POSKY DEBCRE LOCC
        FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE DOCNUM = RESULT_PACKAGE-BILNO
          AND POSKY  = '02'.
      LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
        READ TABLE IT_FIG INTO WA_FIG
             WITH KEY DOCNUM = <RESULT_FIELDS>-BILNO.
        IF SY-SUBRC EQ 0.
          <RESULT_FIELDS>-DEB        = WA_FIG-DEBCRE.
          <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
        ENDIF.
      ENDLOOP.
    ENDIF.
    Thanks in advance
    Regards
    Pradeep

    Hi,
    Check the code below and try the load again. Sorting the lookup table once and reading it with BINARY SEARCH should speed up the end routine considerably:
    IF RESULT_PACKAGE IS NOT INITIAL.
      SELECT DOCNUM POSKY DEBCRE LOCC
        FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE DOCNUM = RESULT_PACKAGE-BILNO
          AND POSKY  = '02'.
      SORT IT_FIG BY DOCNUM.
      LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
        READ TABLE IT_FIG INTO WA_FIG
             WITH KEY DOCNUM = <RESULT_FIELDS>-BILNO
             BINARY SEARCH.
        IF SY-SUBRC EQ 0.
          <RESULT_FIELDS>-DEB        = WA_FIG-DEBCRE.
          <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
        ENDIF.
      ENDLOOP.
    ENDIF.
    If you are still getting any error, please let us know. Also:
    1. Decrease the data package size in the DTP, e.g. to 10,000 or 20,000.
    2. Increase the number of parallel processes at DTP level.
    Thanks,
    Phani.

  • Delta records are not loading from DSO to info cube

    My question is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck:
    Selected "Change log" and "Get one request only" and ran the DTP, but 0 records were updated in the InfoCube.
    Selected "Change log" and "Get all new data request by request", but again 0 records were updated.
    Selected "Change log" and "Only get the delta once"; in that case all delta records were loaded to the InfoCube as they were in the DSO, but it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, the data does load from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data is loading in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. Select change log and get only one request, and run the init: it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • UPDATE proc taking HUGE TIME

    Hi
    An Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
      -- Declarations added for completeness; wk_comm and wk_end are
      -- assumed to be DATE variables visible in the enclosing scope.
      v_c     PLS_INTEGER := 0;
      err_num NUMBER;
      err_msg VARCHAR2(100);
      CURSOR C1 IS
        SELECT tim.c_col, mp.t_n
          FROM Materialized_VW tim, MP_Table mp
         WHERE tim.R_id = mp.R_id
           AND tim.P_id = mp.P_id
           AND tim.t_id = mp.t_id
           AND mp.t_date BETWEEN wk_comm AND wk_end;
    BEGIN
      FOR i IN C1 LOOP
        -- Commit every 100,000 rows to keep undo usage bounded.
        IF v_c = 100000 THEN
          v_c := 0;
          COMMIT;
        END IF;
        v_c := v_c + 1;
        UPDATE MP_Table mp
           SET c_col = i.c_col
         WHERE mp.t_n = i.t_n;
      END LOOP;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        err_num := SQLCODE;
        err_msg := SUBSTR(SQLERRM, 1, 100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
    MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
    The explain plan shows the following when the number of records is large:
    SELECT STATEMENT ALL_ROWS
    Cost: 17,542 Bytes: 67 Cardinality: 1
    3 HASH JOIN
    Cost: 17,542 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE
    Cost: 14 Bytes: 111,645 Cardinality: 4,135
    2 TABLE ACCESS FULL MATERIALIZED_VW
    Cost: 16,957 Bytes: 178,668,800 Cardinality: 4,466,720
    The explain plan shows the following when the number of records is small:
    SELECT STATEMENT ALL_ROWS
    Cost: 2,228 Bytes: 67 Cardinality: 1
    6 NESTED LOOPS Cost: 2,228 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE Cost: 3 Bytes: 12,015 Cardinality: 445
    5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW Cost: 2,228 Bytes: 40 Cardinality: 1
    4 AND-EQUAL
    2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
    3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This intermittent behaviour of the explain plan causes the procedure to take a huge amount of time whenever the number of records is large.
    This strange behaviour is causing problems, as 10 hours is too much for any UPDATE, especially when the number of records is only a six-digit figure.
    But we cannot use a direct single-statement UPDATE either, as that would result in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above.
    Also, is there any way to force a consistently fast plan?
    Thanks
    Arnab

    Hi BluShadow,
    I followed up your example, extending it to bulk processing.
    I tested insert and update operations.
    Here are the insert results:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_insert NUMBER;
      4    TYPE t_nt_x IS TABLE OF NUMBER;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rownum as x, 'test1' as z FROM DUAL CONNECT BY ROWNUM <= 1000000;
      9  BEGIN
    10 
    11    -- Single insert
    12    v_insert := 0;
    13    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    14    v_sysdate := SYSDATE;
    15    INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
    16    v_insert := SQL%ROWCOUNT;
    17    COMMIT;
    18    DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    19 
    20    -- Multi insert
    21    v_insert := 0;
    22    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    23    v_sysdate := SYSDATE;
    24    FOR i IN 1..1000000
    25    LOOP
    26      INSERT INTO mytable (x,z) VALUES (i,'test1');
    27      v_insert := v_insert+SQL%ROWCOUNT;
    28    END LOOP;
    29    COMMIT;
    30    DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    31 
    32    -- Multi insert using bulk
    33    v_insert := 0;
    34    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    35    v_sysdate := SYSDATE;
    36    OPEN c1;
    37    LOOP
    38      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    39      EXIT WHEN C1%NOTFOUND;
    40      FORALL i IN 1..v_nt_x.count
    41        INSERT INTO mytable (x,z) VALUES (v_nt_x(i),v_nt_z(i));
    42        v_insert := v_insert+SQL%ROWCOUNT;
    43    END LOOP;
    44    COMMIT;
    45    DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    46 
    47  END;
    48  /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.
    And here are the update results:
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_update NUMBER;
      4    TYPE t_nt_x IS TABLE OF ROWID;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rowid as ri, 'test4' as z FROM mytable;
      9  BEGIN
    10 
    11    -- Single update
    12    v_update := 0;
    13    v_sysdate := SYSDATE;
    14    UPDATE mytable SET z='test2';
    15    v_update := SQL%ROWCOUNT;
    16    COMMIT;
    17    DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    18 
    19    -- Multi update
    20    v_update := 0;
    21    v_sysdate := SYSDATE;
    22    FOR rec IN (SELECT ROWID AS ri FROM mytable)
    23    LOOP
    24      UPDATE mytable SET z='test3' WHERE ROWID=rec.ri;
    25      v_update := v_update+SQL%ROWCOUNT;
    26    END LOOP;
    27    COMMIT;
    28    DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    29 
    30    -- Multi update using bulk
    31    v_update := 0;
    32    v_sysdate := SYSDATE;
    33    OPEN c1;
    34    LOOP
    35      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    36      EXIT WHEN C1%NOTFOUND;
    37      FORALL i IN 1..v_nt_x.count
    38        UPDATE mytable SET z=v_nt_z(i) WHERE ROWID=v_nt_x(i);
    39        v_update := v_update+SQL%ROWCOUNT;
    40    END LOOP;
    41    COMMIT;
    42    DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    43 
    44  END;
    45  /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.
    The single statement still has the better performance, but with bulk processing the cursor performance improved dramatically
    (in the update case, bulk processing is even slightly better than the single statement).
    I guess that with bulk processing there is much less switching between the SQL and PL/SQL engines.
    It would be interesting to test it with more rows; I might do that tomorrow.
    Just thought it would be interesting to share the results with you guys.
    Cheers,
    Davide
