Bulk collect question Urgent please

hi all,
I have a cursor that returns 10 rows,
and I define a table of records based on that cursor's record type.
I want to fetch the cursor's data 3 times into the table of records, i.e.:
the cursor data goes into indexes 1 to 10,
the same cursor data goes into indexes 11 to 20,
the same cursor data goes into indexes 21 to 30.
I do FETCH cur BULK COLLECT INTO table_rec,
and now table_rec holds the 10 rows from the cursor.
I want to append the same 10 rows of the cursor into table_rec in one step.
When I try BULK COLLECT a second time, it empties the table of records (it overwrites the existing data).
thanks

Hello,
That is the way it works: each BULK COLLECT fetch replaces the contents of the collection. If you want to add more records to the collection, don't use the BULK COLLECT feature; use a standard cursor instead.
Francois
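For illustration, a minimal sketch of that standard-cursor approach (the emp table and the names cur and table_rec are assumed here, not taken from the original post); each pass appends to the collection instead of replacing it:

declare
  cursor cur is select * from emp;
  type rec_tab_type is table of cur%rowtype index by pls_integer;
  table_rec rec_tab_type;
  r cur%rowtype;
begin
  for pass in 1 .. 3 loop
    open cur;
    loop
      fetch cur into r;
      exit when cur%notfound;
      table_rec(table_rec.count + 1) := r;   -- append after the existing entries
    end loop;
    close cur;
  end loop;
  dbms_output.put_line('rows loaded: ' || table_rec.count);   -- expect 30 for a 10-row cursor
end;

If nested tables (rather than associative arrays) are used, an alternative is to BULK COLLECT into a scratch collection and append it with MULTISET UNION ALL, but the plain cursor loop keeps it simple.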

Similar Messages

  • Benefits Strong Interview Questions - Urgent Please

    Dear Friends,
    I need to know more in-depth Benefits interview questions and answers.
    I appreciate your time.
    Thanks,
    Sammer.

    http://www.sap-tests.com/tests/benefits-administration/1.html
    I hope this helps!

  • Webcache Team Question - Urgent please

    Hi,
    I have a pair of 9iAS R1 Servers running forms listener servlet 6i, and reports over cgi - there is a dozen or so reasons as to why I'm compelled to use this configuration.
    Fronting these I have a webcache 9.0.2 box that is correctly configured for SSL. The webcache SSL port 443 site is mapped to the forms 9iAS boxes on http port 80.
    I want my users to connect only to the webcache box, and not to the 9iAS servers. If possible, I would like to configure these 9iAS servers to receive IP traffic from the webcache box only. I can do this with IPSec or using a host based firewall such as Zonealarm.
    When my users connect to the SSL webcache port, the browser shows the status bar lock icon and a connection is established to one of the 9iAS servers over http. The browser continues to show the icon, but the forms application appears to 'hand off' the connection to the app server. This shows up when I enforce a rule in ZoneAlarm (on the app server) to accept traffic from the webcache IP address only: I get a ZoneAlarm message box indicating that a request from the user's own IP address has been blocked.
    My question: is it possible to route all connections through webcache, and channel all user connections through that box to the app servers?
    I would be very grateful if someone could answer this question quickly.
    Many thanks
    Pat

    The phone requires an initial activation to function. If you don't have an AT&T SIM and account, you can't activate via iTunes and you can't even use it as a music player.
    You can activate it on someone else's account if they have a valid SIM, then give them their SIM back.
    BUT, if you're switching from VZW to AT&T, why not buy it from AT&T directly, port your number and get the nice cheap subsidized price? Or if you buy it off Craigslist, it should be used and already activated, just sold without a SIM - in that case it will work fine as an iPod. BUT if you update the software, it will require activation or you will have a paperweight.

  • Disadvantage of limit in bulk collection

    hi,
    Is there any disadvantage to using LIMIT in BULK COLLECT?
    Please also clarify the advantage of LIMIT.
    I am quite confused: if I don't write LIMIT in the bulk collect it still works fine, so what is the need for LIMIT?

    Actually, I was getting the error below; that is why I want to know the advantages/disadvantages of LIMIT.
    I have posted it as well.
    Please tell me where I should correct my code.
    With the second LIMIT option, if I decrease the limit to 10,000 or 5,000 (less than 50,000) then it works fine.
    My cursors c1 and cmain contain around 500,000 (5 lakh) records each.
    Error:
    ORA-22165: given index [32768] must be in the range of [1] to [32767].
    I am using BULK COLLECT and FORALL, as in the scenario below.
    declare
    cursor cmain is ...;
    type <main_collection_type> is table of tablename%rowtype;
    <main_collection_variable> <main_collection_type>;
    begin
    declare
    cursor c1 is ...;
    type <c1_collection_type> is table of tablename%rowtype;
    <c1_collection_variable> <c1_collection_type>;
    begin
    open c1;
    loop
    fetch c1 bulk collect into <c1_collection_variable> limit 50000;
    forall i in 1 .. <c1_collection_variable>.count
    stmnt....
    EXIT WHEN C1%NOTFOUND;
    end loop;
    close c1;
    COMMIT;
    end;
    open cmain;
    loop
    fetch cmain bulk collect into <main_collection_variable> limit 50000;
    forall i in 1 .. <main_collection_variable>.count
    stmnt....
    EXIT WHEN CMAIN%NOTFOUND;
    end loop;
    close cmain;
    COMMIT;
    end;
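    For context, the main advantage of LIMIT is that it caps how many rows are held in the collection (and therefore in PGA memory) per fetch; without it, a bulk collect pulls the entire result set into memory at once. A minimal sketch of the usual pattern follows - the table names are made up for illustration, and the exit test on the collection count sits right after the fetch so the last partial batch is still processed:
    declare
    cursor c is select * from some_big_table;
    type row_tab is table of some_big_table%rowtype;
    l_rows row_tab;
    begin
    open c;
    loop
    fetch c bulk collect into l_rows limit 1000;   -- bounded memory per batch
    exit when l_rows.count = 0;                    -- safe exit test
    forall i in 1 .. l_rows.count
    insert into some_target_table values l_rows(i);
    end loop;
    close c;
    commit;
    end;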

  • Bulk Collect taking more time. Please suggest .

    I am working on oracle 11g
    I have one normal insert proc
    CREATE OR REPLACE PROCEDURE test2
    AS
         BEGIN
         INSERT INTO first_table
         (citiversion, financialcollectionid,
         dataitemid, dataitemvalue,
         unittypeid, financialinstanceid,
         VERSION, providerid, user_comment,
         userid, insert_timestamp,
         latestflag, finalflag, filename,
         final_ytdltm_flag, nmflag, partition_key)
         SELECT citiversion, financialcollectionid,
         dataitemid, dataitemvalue, unittypeid,
         new_fi, VERSION, providerid,
         user_comment, userid,
         insert_timestamp, latestflag,
         finalflag, filename, '', nmflag,1
         FROM secon_table
         WHERE financialinstanceid = 1
    AND changeflag = 'A'
    AND processed_flg = 'N';
         END test2;
    To improve performance I converted the normal insert into a bulk collect version:
    CREATE OR REPLACE PROCEDURE test
    AS
    BEGIN
         DECLARE
         CURSOR get_cat_fin_collection_data(n_instanceid NUMBER) IS
         SELECT citiversion,
         financialcollectionid,
         dataitemid,
         dataitemvalue,
         unittypeid,
         new_fi,
         VERSION,
         providerid,
         user_comment,
         userid,
         insert_timestamp,
         latestflag,
         finalflag,
         filename,
         nmflag,
                   1
         FROM secon_table
         WHERE financialinstanceid = n_instanceid
    AND changeflag = 'A'
    AND processed_flg = 'N';
         TYPE data_array IS TABLE OF get_cat_fin_collection_data%ROWTYPE;
         l_data data_array;
         BEGIN
         OPEN get_cat_fin_collection_data(1);
         LOOP
         FETCH get_cat_fin_collection_data BULK COLLECT
         INTO l_data limit 100;
         FORALL i IN 1 .. l_data.COUNT
         INSERT INTO first_table VALUES l_data (i);
    EXIT WHEN l_data.count =0;
         END LOOP;
         CLOSE get_cat_fin_collection_data;
         END;
         END test;
    But bulk collect is taking more time.
    Below are the timings:
    SQL> set timing on
    SQL> exec test
    PL/SQL procedure successfully completed
    Executed in 16.703 seconds
    SQL> exec test2
    PL/SQL procedure successfully completed
    Executed in 9.406 seconds
    SQL> rollback;
    Rollback complete
    Executed in 2.75 seconds
    SQL> exec test
    PL/SQL procedure successfully completed
    Executed in 16.266 seconds
    SQL> rollback;
    Rollback complete
    Executed in 2.812 seconds
    Normal insert :- 9.4 second
    Bulk insert:- 16.266 seconds
    I am processing 1 lakh rows.
    Can you please tell me the reason why bulk collect is taking more time? According to my knowledge it should take less time.
    Please suggest, do I need to check any parameter?
    Please help.
    Edited by: 976747 on Feb 4, 2013 1:12 AM

    > Can you please tell me the reason why bulk collect is taking more time? According to my knowledge it should take less time.
    > Please suggest, do I need to check any parameter?
    In that case, your knowledge is flawed.
    Pure SQL is almost always faster than PL/SQL.
    If your INSERT INTO ... SELECT is executing slowly, then it is probably because the SELECT statement is taking long to execute. How many rows are being selected and inserted by your query?
    You might also consider tuning the SELECT statement. For more information on posting a tuning request, read {message:id=3292438} and post the relevant information.
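    One way to confirm where the time goes is to time and explain the SELECT on its own, e.g. (column list abbreviated; the poster's full statement would be used in practice):
    set timing on
    explain plan for
    select citiversion, financialcollectionid /* ... remaining columns ... */
    from secon_table
    where financialinstanceid = 1
    and changeflag = 'A'
    and processed_flg = 'N';
    select * from table(dbms_xplan.display);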

  • Please help with an embedded query (INSERT RETURNING BULK COLLECT INTO)

    I am trying to write a query inside the C# code where I would insert values into a table in bulk using bind variables. But I would also like to receive a bulk collection of the generated sequence-number IDs for REQUEST_ID. I am trying to use a RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs clause, where :REQUEST_IDs is another bind variable.
    Here is the full query that I use in the C# code
    string sql = "INSERT INTO REQUESTS_TBL(REQUEST_ID, CID, PROVIDER_ID, PROVIDER_NAME, REQUEST_TYPE_ID, REQUEST_METHOD_ID, " +
    "SERVICE_START_DT, SERVICE_END_DT, SERVICE_LOCATION_CITY, SERVICE_LOCATION_STATE, " +
    "BENEFICIARY_FIRST_NAME, BENEFICIARY_LAST_NAME, BENEFICIARY_DOB, HICNUM, CCN, " +
    "CLAIM_RECEIPT_DT, ADMISSION_DT, BILL_TYPE, LANGUAGE_ID, CONTRACTOR_ID, PRIORITY_ID, " +
    "UNIVERSE_DT, REQUEST_DT, BENEFICIARY_M_INITIAL, ATTENDING_PROVIDER_NUMBER, " +
    "BILLING_NPI, BENE_ZIP_CODE, DRG, FINAL_ALLOWED_AMT, STUDY_ID, REFERRING_NPI) " +
    "VALUES " +
    "(SQ_CDCDATA.NEXTVAL, :CIDs, :PROVIDER_IDs, :PROVIDER_NAMEs, :REQUEST_TYPE_IDs, :REQUEST_METHOD_IDs, " +
    ":SERVICE_START_DTs, :SERVICE_END_DTs, :SERVICE_LOCATION_CITYs, :SERVICE_LOCATION_STATEs, " +
    ":BENEFICIARY_FIRST_NAMEs, :BENEFICIARY_LAST_NAMEs, :BENEFICIARY_DOBs, :HICNUMs, :CCNs, " +
    ":CLAIM_RECEIPT_DTs, :ADMISSION_DTs, :BILL_TYPEs, :LANGUAGE_IDs, :CONTRACTOR_IDs, :PRIORITY_IDs, " +
    ":UNIVERSE_DTs, :REQUEST_DTs, :BENEFICIARY_M_INITIALs, :ATTENDING_PROVIDER_NUMBERs, " +
    ":BILLING_NPIs, :BENE_ZIP_CODEs, :DRGs, :FINAL_ALLOWED_AMTs, :STUDY_IDs, :REFERRING_NPIs) " +
    " RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs";
    int[] REQUEST_IDs = new int[range];
    cmd.Parameters.Add(":REQUEST_IDs", OracleDbType.Int32, REQUEST_IDs, System.Data.ParameterDirection.Output);
    However, when I run this query, it gives me a strange error ORA-00925: missing INTO keyword. I am not sure what that error means since I am not missing any INTOs
    Please help me resolve this error or I would appreciate a different solution
    Thank you

    It seems you are not doing a bulk insert but rather an array bind.
    (You will also find that an INSERT with a BULK COLLECT returning clause is problematic, while it works just fine for updates/deletes:
    http://www.oracle-developer.net/display.php?id=413)
    But since you are using an array bind, you simply need to use
    ... RETURNING REQUEST_ID INTO :REQUEST_ID
    and that will return you a REQUEST_ID[] array.
    See below for a working example (I used a procedure, but the result is the same).
    //Create Table Zzztab(Deptno Number, Deptname Varchar2(50) , Loc Varchar2(50) , State Varchar2(2) , Idno Number(10)) ;
    //create sequence zzzseq ;
    //CREATE OR REPLACE PROCEDURE ZZZ( P_DEPTNO   IN ZZZTAB.DEPTNO%TYPE,
    //                      P_DEPTNAME IN ZZZTAB.DEPTNAME%TYPE,
    //                      P_LOC      IN ZZZTAB.LOC%TYPE,
    //                      P_State    In Zzztab.State%Type ,
    //                      p_idno     out zzztab.idno%type )
    //         IS
    //Begin
    //      Insert Into Zzztab (Deptno,   Deptname,   Loc,   State , Idno)
    //                  Values (P_Deptno, P_Deptname, P_Loc, P_State, Zzzseq.Nextval)
    //                  returning idno into p_idno;
    //END ZZZ;
    //Drop Procedure Zzz ;
    //Drop Sequence Zzzseq ;
    //drop Table Zzztab;
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;

    class ArrayBind
    {
      static void Main(string[] args)
      {
        // Connect (GetConnectionString and Setup come from the original sample)
        string connectStr = GetConnectionString();
        // Setup the tables for the sample
        Setup(connectStr);
        // Initialize the arrays of data
        int[]    myArrayDeptNo   = new int[3]{1, 2, 3};
        String[] myArrayDeptName = {"Dev", "QA", "Facility"};
        String[] myArrayDeptLoc  = {"New York", "Chicago", "Texas"};
        String[] state           = {"NY", "IL", "TX"};
        OracleConnection connection = new OracleConnection(connectStr);
        OracleCommand    command    = new OracleCommand("zzz", connection);
        command.CommandType = CommandType.StoredProcedure;
        // Set the array size to 3. This applies to all the parameters
        // associated with this command.
        command.ArrayBindCount = 3;
        command.BindByName = true;
        // deptno parameter
        OracleParameter deptNoParam = new OracleParameter("p_deptno", OracleDbType.Int32);
        deptNoParam.Direction       = ParameterDirection.Input;
        deptNoParam.Value           = myArrayDeptNo;
        command.Parameters.Add(deptNoParam);
        // deptname parameter
        OracleParameter deptNameParam = new OracleParameter("p_deptname", OracleDbType.Varchar2);
        deptNameParam.Direction       = ParameterDirection.Input;
        deptNameParam.Value           = myArrayDeptName;
        command.Parameters.Add(deptNameParam);
        // loc parameter
        OracleParameter deptLocParam = new OracleParameter("p_loc", OracleDbType.Varchar2);
        deptLocParam.Direction       = ParameterDirection.Input;
        deptLocParam.Value           = myArrayDeptLoc;
        command.Parameters.Add(deptLocParam);
        // p_state parameter -- array
        OracleParameter stateParam = new OracleParameter("P_STATE", OracleDbType.Varchar2);
        stateParam.Direction = ParameterDirection.Input;
        stateParam.Value = state;
        command.Parameters.Add(stateParam);
        // p_idno output parameter -- array
        OracleParameter idParam = new OracleParameter("p_idno", OracleDbType.Int64);
        idParam.Direction = ParameterDirection.Output;
        idParam.OracleDbTypeEx = OracleDbType.Int64;
        command.Parameters.Add(idParam);
        try
        {
          connection.Open();
          command.ExecuteNonQuery();
          Console.WriteLine("{0} Rows Inserted", command.ArrayBindCount);
          // now cycle through the output parameter array
          foreach (Int64 i in (Int64[])idParam.Value)
            Console.WriteLine(i);
        }
        catch (Exception e)
        {
          Console.WriteLine("Execution Failed:" + e.Message);
        }
        finally
        {
          // connection and command use server-side resources; dispose of them
          // as soon as possible to conserve resources
          connection.Close();
          command.Dispose();
          connection.Dispose();
        }
        Console.WriteLine("Press Enter to finish");
        Console.ReadKey();
      }
    }

  • Please help with the query (INSERT RETURNING BULK COLLECT INTO)

    I am trying to write a query inside the C# code where I would insert values into a table in bulk using bind variables. But I would also like to receive a bulk collection of the generated sequence-number IDs for REQUEST_ID. I am trying to use a RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs clause, where :REQUEST_IDs is another bind variable.
    Here is the full query that I use in the C# code
    INSERT INTO REQUESTS_TBL(REQUEST_ID, CID, PROVIDER_ID, PROVIDER_NAME, REQUEST_TYPE_ID, REQUEST_METHOD_ID, SERVICE_START_DT, SERVICE_END_DT, SERVICE_LOCATION_CITY, SERVICE_LOCATION_STATE, BENEFICIARY_FIRST_NAME,
    BENEFICIARY_LAST_NAME, BENEFICIARY_DOB, HICNUM, CCN, CLAIM_RECEIPT_DT, ADMISSION_DT, BILL_TYPE,
    LANGUAGE_ID, CONTRACTOR_ID, PRIORITY_ID, UNIVERSE_DT, REQUEST_DT, BENEFICIARY_M_INITIAL,
    ATTENDING_PROVIDER_NUMBER, BILLING_NPI, BENE_ZIP_CODE, DRG, FINAL_ALLOWED_AMT, STUDY_ID, REFERRING_NPI)
    VALUES
    (SQ_CDCDATA.NEXTVAL, :CIDs, :PROVIDER_IDs, :PROVIDER_NAMEs, :REQUEST_TYPE_IDs,
    :REQUEST_METHOD_IDs, :SERVICE_START_DTs, :SERVICE_END_DTs, :SERVICE_LOCATION_CITYs,
    :SERVICE_LOCATION_STATEs, :BENEFICIARY_FIRST_NAMEs, :BENEFICIARY_LAST_NAMEs, :BENEFICIARY_DOBs,
    :HICNUMs, :CCNs, :CLAIM_RECEIPT_DTs, :ADMISSION_DTs, :BILL_TYPEs, :LANGUAGE_IDs,
    :CONTRACTOR_IDs, :PRIORITY_IDs, :UNIVERSE_DTs, :REQUEST_DTs, :BENEFICIARY_M_INITIALs,
    :ATTENDING_PROVIDER_NUMBERs, :BILLING_NPIs, :BENE_ZIP_CODEs, :DRGs, :FINAL_ALLOWED_AMTs,
    :STUDY_IDs, :REFERRING_NPIs) RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs
    However, when I run this query, it gives me a strange error ORA-00925: missing INTO keyword. I am not sure what that error means since I am not missing any INTOs
    Please help me resolve this error or I would appreciate a different solution
    Thank you

    You cannot use (and do not want to in this case) the BULK COLLECT.
    create table for_testing
    (
       the_id      number not null primary key,
       some_data   number
    );
    declare
       l_return_value for_testing.the_id%type;
    begin
       insert into for_testing
       (
          the_id,
          some_data
       )
       values
       (
          1,
          5
       )
       returning the_id into l_return_value;
       dbms_output.put_line('the return values is ' || l_return_value);
    end;
    /
    the return values is 1
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.02
    That is a simple example. In the future, please use code tags to preserve formatting on your code like I have, so it remains readable.

  • Please help me on "Bulk Collect"

    Hi All,
    I've a query like :
    select deptno,
    min(sal)
    from emp;
    The above query takes about 1.2 minutes to return the expected result of 91k rows, with a cost of 3666.
    But when I rewrote the query like:
    select deptno,
    min(sal)
    from emp where deptno = :dno;
    it gives the correct result and takes much less time, but I need to get the result for all departments without passing the deptno dynamically.
    As far as I know, I could create a function using "Bulk Collect" to return all the deptno values, and then use that result in the where condition.
    But when I try to create the function I am not able to do it. So can anyone help me solve the above problem?
    How do I pass the entire column of values into the where condition using a function?
    Do I need to create a function, or is there any other way of doing it?
    Please help me.
    Thanks,
    gra

    Hi,
    I don't follow you when you say you want to get all departments and then pass them back to your query; that makes no sense to me.
    If you want the minimum salary for every department, just:
    select deptno, min(sal) min_sal
      from emp
    group by deptno;
    No function, no bulk collect, no nothing - just plain SQL.
    Regards
    Peter

  • A question on bulk collect

    My Version
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Please see the code below:
    declare
    type empno_arr_type is table of int;
    empno_arr empno_arr_type;
    begin
    select empno  into empno_arr
    from emp;
    end;
    Output
    PLS-00642: local collection types not allowed in SQL statements
    Now the same code with bulk collect executes fine.
    declare
    type empno_arr_type is table of int;
    empno_arr empno_arr_type;
    begin
    select empno bulk collect into empno_arr
    from emp;
    end;
    Just wanted to understand what exactly the difference is, other than the reduced context switching with bulk binding.
    What could have made the second code work since both are local collection types only?
    Thanks,
    CJ

    910555 wrote:
    Just wanted to understand what exactly the difference is, other than the reduced context switching with bulk binding.
    There is basically a single context switch with an implicit SQL cursor in PL/SQL. So whether or not bulk fetching/binding is used is immaterial - it does not reduce context switching, as there is already the minimal possible context switching for executing that implicit cursor.
    E.g.
    declare
      salaryAmount  emp.salary%type;
      type salary_tab is table of emp.salary%type;
      salaryAmounts salary_tab;
    begin
       --// single context switch (1 row fetched)
      select salary into salaryAmount from emp where empno = 1234;
       --// single context switch (20 rows fetched)
      select salary bulk collect into salaryAmounts from emp where deptno = 1234;
    end;
    Context switching happens each time you use a SQL cursor interface. E.g.
    declare
      cursor c is ...
    begin
      --// context switch
      open c;
      loop
        --// context switch
        fetch c into ...;
      end loop;
      --// context switch
      close c;
    end;
    Bulk processing and binding reduce context switching when using cursor processing loops like the one in the above example. If you can fetch more rows per loop iteration, then there will be less context switching.
    For example, say the cursor outputs 1000 rows. The above sample code, using single-row fetches in the loop, will have 1000 loop iterations and therefore 1000 context switches. A bulk fetch with a limit of 100 will fetch 100 rows per loop iteration, for a total cost of 10 iterations and 10 context switches to consume the 1000 rows output by that cursor.
    What could have made the second code work since both are local collection types only?
    You are confusing the receiving structure (in PL/SQL) with context switching.
    When you fetch from a cursor, you need to supply a PL/SQL structure to receive the output (called a SQL projection) from the cursor.
    If the fetch is a single row fetch, a scalar (single value) structure is needed.
    If the fetch is a multi-row (bulk) fetch, the structure needs to be an array (aka collection or nested table).

  • Need to increase performance-bulk collect in cursor with limit and in the for loop inserting into the trigger table

    Hi all,
    I have a performance issue in the code below, where I am trying to insert data from table_stg into the target_tab and parent_tab tables, and then into child tables, via a cursor with bulk collect. The target_tab and parent_tab tables are huge and have a row-wise trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. My requirement is to reduce it to 5 to 10 minutes.
    Can someone please guide me here? It's a bit urgent. Awaiting your response.
    declare
    vmax_Value NUMBER(5);
      vcnt number(10);
      id_val number(20);
      pc_id number(15);
      vtable_nm VARCHAR2(100);
      vstep_no  VARCHAR2(10);
      vsql_code VARCHAR2(10);
      vsql_errm varchar2(200);
      vtarget_starttime timestamp;
      limit_in number :=10000;
      idx           number(10);
              cursor stg_cursor is
             select
                   DESCRIPTION,
                   SORT_CODE,
                   ACCOUNT_NUMBER,
                     to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
                     to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
                     to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
                   PARTY_ID,
                   GUID,
                   PAPERLESS_REF_IND,
                   PRODUCT_TYPE,
                   PRODUCT_BRAND,
                   PRODUCT_HELD_ID,
                   NOTIFICATION_PREF,
                   UNREAD_CORRES_PERIOD,
                   EMAIL_ID,
                   MOBILE_NUMBER,
                   TITLE,
                   SURNAME,
                   POSTCODE,
                   EVENT_TYPE,
                   PRIORITY_IND,
                   SUBJECT,
                   EXT_PRD_ID_TX,
                   EXT_PRD_HLD_ID_TX,
                   EXT_SYS_ID,
                   EXT_PTY_ID_TX,
                   ACCOUNT_TYPE_CD,
                   COM_PFR_TYP_TX,
                   COM_PFR_OPT_TX,
                   COM_PFR_RSN_CD
             from  table_stg;
    type rec_type is table of stg_rec_type index by pls_integer;
    v_rt_all_cols rec_type;
    BEGIN
      vstep_no   := '0';
      vmax_value := 0;
      vtarget_starttime := systimestamp;
      id_val    := 0;
      pc_id     := 0;
      success_flag := 0;
              vstep_no  := '1';
              vtable_nm := 'before cursor';
        OPEN stg_cursor;
              vstep_no  := '2';
              vtable_nm := 'After cursor';
       LOOP
              vstep_no  := '3';
              vtable_nm := 'before fetch';
    --loop
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    FOR i IN 1 .. v_rt_all_cols.COUNT
      LOOP
       dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
        if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
                  vstep_no  := '4.1';
                  vtable_nm := 'before seq sel';
              select PC_SEQ.nextval into pc_id from dual;
                  vstep_no  := '4.2';
                  vtable_nm := 'before insert corres';
              INSERT INTO target1_tab
                           (ID,
                            PARTY_ID,
                            PRODUCT_BRAND,
                            SORT_CODE,
                            ACCOUNT_NUMBER,
                            EXT_PRD_ID_TX,         
                            EXT_PRD_HLD_ID_TX,
                            EXT_SYS_ID,
                            EXT_PTY_ID_TX,
                            ACCOUNT_TYPE_CD,
                            COM_PFR_TYP_TX,
                            COM_PFR_OPT_TX,
                            COM_PFR_RSN_CD,
                            status)
             VALUES
                            (pc_id,
                             v_rt_all_cols(i).party_id,
                             decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                             v_rt_all_cols(i).sort_code,
                             'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
                             v_rt_all_cols(i).EXT_PRD_ID_TX,
                             v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
                             v_rt_all_cols(i).EXT_SYS_ID,
                             v_rt_all_cols(i).EXT_PTY_ID_TX,
                             v_rt_all_cols(i).ACCOUNT_TYPE_CD,
                             v_rt_all_cols(i).COM_PFR_TYP_TX,
                             v_rt_all_cols(i).COM_PFR_OPT_TX,
                             v_rt_all_cols(i).COM_PFR_RSN_CD,
                             NULL);
                  vstep_no  := '4.3';
                  vtable_nm := 'after insert corres';
        else
              select COM_SEQ.nextval into id_val from dual;
                  vstep_no  := '6';
                  vtable_nm := 'before insertcomm';
          if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
                vstep_no  := '6.01';
                  vtable_nm := 'after if insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'IADAREMINDER',
                  systimestamp,
                  systimestamp,
                  0);  
         else
                vstep_no  := '6.02';
                  vtable_nm := 'after else insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'CORRESPONDENCE',
                  systimestamp,
                  systimestamp,
                  0); 
            END if; 
                  vstep_no  := '6.11';
                  vtable_nm := 'before chop';
             if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then 
                      v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 'IB.Correspondence.AccountNumberMasked',
                 v_rt_all_cols(i).ACCOUNT_NUMBER);
             end if;
                  vstep_no  := '6.1';
                  vtable_nm := 'before stateday';
             if (v_rt_all_cols(i).crr_day is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Day',
                 'IB.Crsp.Date.Day',
                 v_rt_all_cols(i).crr_day);
             end if;
                  vstep_no  := '6.2';
                  vtable_nm := 'before statemth';
             if (v_rt_all_cols(i).crr_month is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Month',
                 'IB.Crsp.Date.Month',
                 v_rt_all_cols(i).crr_month);
             end if;
                  vstep_no  := '6.3';
                  vtable_nm := 'before stateyear';
             if (v_rt_all_cols(i).crr_year is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Year',
                 'IB.Crsp.Date.Year',
                 v_rt_all_cols(i).crr_year);
             end if;
                  vstep_no  := '7';
                  vtable_nm := 'before type';
               if (v_rt_all_cols(i).product_type is not null) then
                  insert into child_tab
                     (COM_ID,                                            
                     KEY,                                                                                                                                        
                     VALUE)
                  values
                    (id_val,
                     'IB.Product.ProductName',
                   v_rt_all_cols(i).product_type);
                end if;
                  vstep_no  := '9';
                  vtable_nm := 'before title';         
              if (trim(v_rt_all_cols(i).title) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE )
              values
                (id_val,
                 'IB.Customer.Title',
                 trim(v_rt_all_cols(i).title));
              end if;
                  vstep_no  := '10';
                  vtable_nm := 'before surname';
              if (v_rt_all_cols(i).surname is not null) then
                insert into child_tab
                   (COM_ID,                                            
                   KEY,                                                                                                                                          
                   VALUE)
                values
                  (id_val,
                  'IB.Customer.LastName',
                  v_rt_all_cols(i).surname);
              end if;
                            vstep_no  := '12';
                            vtable_nm := 'before postcd';
              if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Customer.Addr.PostCodeMasked',
                  substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
              end if;
                            vstep_no  := '13';
                            vtable_nm := 'before subject';
              if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Subject',
                  v_rt_all_cols(i).subject);
              end if;
                            vstep_no  := '14';
                            vtable_nm := 'before inactivity';
              if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Inactivity',
                  v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
              end if;
                          vstep_no  := '14.1';
                          vtable_nm := 'after notfound';
        end if;
                          vstep_no  := '15';
                          vtable_nm := 'after notfound';
        END LOOP;
        end loop;
                          vstep_no  := '16';
                          vtable_nm := 'before closecur';
        CLOSE stg_cursor;
                          vstep_no  := '17';
                          vtable_nm := 'before commit';
        DELETE FROM table_stg;
      COMMIT;
                          vstep_no  := '18';
                          vtable_nm := 'after commit';
    EXCEPTION
    WHEN OTHERS THEN
      ROLLBACK;
      success_flag := 1;
      vsql_code := SQLCODE;
      vsql_errm := SUBSTR(sqlerrm,1,200);
      error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
      RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
    end;
    Thanks

    It's a bit urgent.
    NO - it is NOT urgent. Not to us.
    If you have an urgent problem you need to hire a consultant.
    I have a performance issue in the code below,
    Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.
    where I am trying to insert data from table_stg into the target_tab and parent_tab tables, and then into child tables, via a cursor with bulk collect. The target_tab and parent_tab tables are huge and have a row-wise trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. My requirement is to reduce it to 5 to 10 minutes.
    Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
    Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
    This code has the EXIT statement at the beginning of the loop instead of at the end
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.
    You can use a COUNT test at the start of the loop but ironically you have commented it out and have now done it wrong. Either move the NOTFOUND test to the end of the loop or remove it and uncomment the COUNT test.
    WHEN OTHERS THEN
      ROLLBACK;
    That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
    Your code also has NO exception handling for any of the individual steps or blocks of code.
    The code you posted also begs the question of why you are using NAME=VALUE pairs for child data rows? Why aren't you using a standard relational table for this data?
    As others have noted you are using slow-by-slow (row by row processing). Let's assume that PL/SQL, the bulk collect and row-by-row is actually necessary.
    Then you should be constructing the parent and child records into collections and then inserting them in BULK using FORALL.
    1. Create a collection for the new parent rows
    2. Create a collection for the new child rows
    3. For each set of LIMIT source row data
      a. empty the parent and child collections
      b. populate those collections with new parent/child data
      c. bulk insert the parent collection into the parent table
      d. bulk insert the child collection into the child table
    And unless you really want to either load EVERYTHING or abandon everything, you should use bulk exception handling (SAVE EXCEPTIONS) so that the clean data gets processed and only the dirty data gets rejected. A rough sketch of this approach follows.
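    A minimal sketch of that pattern, with hypothetical collection types and placeholder column assignments (the real record layouts come from the poster's tables):
    declare
      cursor stg_cursor is select * from table_stg;
      type stg_tab_t    is table of table_stg%rowtype;
      type parent_tab_t is table of parent_tab%rowtype;
      type child_tab_t  is table of child_tab%rowtype;
      l_stg    stg_tab_t;
      l_parent parent_tab_t := parent_tab_t();
      l_child  child_tab_t  := child_tab_t();
    begin
      open stg_cursor;
      loop
        fetch stg_cursor bulk collect into l_stg limit 10000;
        exit when l_stg.count = 0;                    -- count test right after the fetch
        l_parent.delete;                              -- a. empty the collections
        l_child.delete;
        for i in 1 .. l_stg.count loop                -- b. build parent and child rows in memory
          l_parent.extend;
          l_parent(l_parent.last).id := com_seq.nextval;   -- 11g+; use SELECT ... INTO on older versions
          -- ... populate the remaining parent columns from l_stg(i) ...
          l_child.extend;
          l_child(l_child.last).com_id := l_parent(l_parent.last).id;
          -- ... populate the child key/value columns from l_stg(i) ...
        end loop;
        forall i in 1 .. l_parent.count               -- c. bulk insert the parent rows
          insert into parent_tab values l_parent(i);
        forall i in 1 .. l_child.count                -- d. bulk insert the child rows
          insert into child_tab values l_child(i);
      end loop;
      close stg_cursor;
      commit;
    end;
    Add SAVE EXCEPTIONS to the FORALL statements if partial failures must be tolerated rather than rolled back.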

  • How to view errors if bulk collect has thrown errors

    Hi,
    I have few questions.
    1.How to view error whether bulk collect is successful or not
    2.What is identified & unidentified relationships in ERWIN
    3.How to see the errors whether the sql loder is successful or not
    and how to open the log file.Is there any specific command in UNIX
    which tells loader is successful or thrown error
    4.When executing the pl/sql procedure from UNIX.how to check for errors.
    Please provide the answers for this
    Thanks

    Use SAVE EXCEPTIONS clause in your FORALL loop.
    Is this for homework/test?
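    For reference, a minimal sketch of FORALL with SAVE EXCEPTIONS (the table and collection here are made up for illustration); failed rows are reported through SQL%BULK_EXCEPTIONS while the clean rows are still inserted:
    declare
      type num_tab is table of number;
      l_ids num_tab := num_tab(1, 2, null, 4);        -- the NULL entry is meant to fail a NOT NULL constraint
      bulk_errors exception;
      pragma exception_init(bulk_errors, -24381);     -- ORA-24381: error(s) in array DML
    begin
      forall i in 1 .. l_ids.count save exceptions
        insert into target_table (id) values (l_ids(i));
    exception
      when bulk_errors then
        for j in 1 .. sql%bulk_exceptions.count loop
          dbms_output.put_line('row ' || sql%bulk_exceptions(j).error_index ||
                               ' failed with ORA-' || sql%bulk_exceptions(j).error_code);
        end loop;
    end;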

  • Needed help in bulk collect using collections

    Hi,
    I have created a schema-level collection like "CREATE OR REPLACE TYPE T_EMP_NO IS TABLE OF NUMBER;".
    Will I be able to use this in a where clause which involves bulk collect?
    Please share your thoughts.
    My oracle version is 10g

    user13710379 wrote:
    Will I be able to do a bulk collect into a table using this collection of my SQL type?
    A bulk fetch collects into an array-like structure - not into a SQL table-like structure. So calling a collection variable in PL/SQL a "PL/SQL table" does not make much sense, as this array structure is nothing like a table. For the same reason, one needs to question running SQL select statements against PL/SQL arrays.
    As for the SQL type you defined: it is a collection (array) of numbers, so it can be used to bulk fetch a numeric column.
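    A small sketch of both halves of the question, using the poster's T_EMP_NO type (the emp table is assumed): bulk fetch into the schema-level collection, then reference it in a later query's WHERE clause through the TABLE() operator, which works precisely because the type is defined at schema level rather than locally:
    declare
      l_emp_nos t_emp_no;
    begin
      select empno bulk collect into l_emp_nos
      from emp
      where deptno = 10;
      for r in (select ename
                from emp
                where empno in (select column_value from table(l_emp_nos)))
      loop
        dbms_output.put_line(r.ename);
      end loop;
    end;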

  • Using bulk collect and for all to solve a problem

    Hi All
    I have a following problem.
    Please forgive me if its a stupid question :-) im learning.
    1: Data in a staging table xx_staging_table
    2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
    Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
    The two target tables use different set of columns from the staging table
    When I had a thousand records there was no problem with a direct insert but it seems we will now have half a million records.
    This has slowed down the process considerably.
    My question is
    Can I use the bulk collect and for all functionality to get specific columns from a staging table, then validate the row using those columns
    and then use a bulk insert to load the data into a specific table.?
    So code would be like
    get_staging_data cursor will have all the columns i need from the staging table
    cursor get_staging_data
    is select * from xx_staging_table (about 500000) records
    Use bulk collect to load about 10000 or so records into a plsql table
    and then do a bulk insert like this
    CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
    CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
    IS
    TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
    l_data ARRAY;
    CURSOR c IS SELECT * FROM all_objects;
    BEGIN
    OPEN c;
    LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1..l_data.COUNT
    INSERT INTO t1 VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
    END test_proc;
    In the above example t1 and the cursor have the same number of columns
    In my case the columns in the cursor loop are a small subset of the columns of table t1
    so can i use a forall to load that subset into the table t1? How does that work?
    Thanks
    J

    user7348303 wrote:
    checking if the value is valid and theres also some conditional processing rules ( such as if the value is a certain value no inserts are needed)
    which are a little more complex than I can put in a simpleWell, if the processing is too complex (and conditional) to be done in SQL, then doing that in PL/SQL is justified... but will be slower as you are now introducing an additional layer. Data now needs to travel between the SQL layer and PL/SQL layer. This is slower.
    PL/SQL is inherently serialised - and this also effects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
    To put in in simple terms. You create PL/SQL procedure Foo that processes SQL cursor and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It perhaps can parallelise that SQL cursor that Foo uses - but not Foo itself.
    However, if Foo is called by the SQL engine it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipeline table function (written in PL/SQL), and you design and code it as a thread-safe/parallel enabled function, it can be callled and used and executed in parallel, by the SQL engine.
    So moving your PL/SQL code into a parallel enabled pipeline function written in PL/SQL, and using that function via parallel SQL, can increase performance over running that same basic PL/SQL processing as a serialised process.
    This is of course assuming that the processing that needs to be done using PL/SQL code, can be designed and coded for parallel processing in this fashion.
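    A bare-bones sketch of what such a function can look like - the row type, validation rule, and table names below are invented for illustration; the PARALLEL_ENABLE clause with a partitioned cursor argument is what lets the SQL engine run multiple copies of it:
    create or replace type xx_stg_row as object (id number, val varchar2(100));
    /
    create or replace type xx_stg_tab as table of xx_stg_row;
    /
    create or replace function validate_stg(p_cur sys_refcursor)
      return xx_stg_tab
      pipelined
      parallel_enable (partition p_cur by any)
    as
      l_id  number;
      l_val varchar2(100);
    begin
      loop
        fetch p_cur into l_id, l_val;
        exit when p_cur%notfound;
        if l_val is not null then                    -- stand-in for the real validation rules
          pipe row (xx_stg_row(l_id, l_val));
        end if;
      end loop;
      return;
    end;
    /
    It would then be consumed from parallel SQL, along the lines of:
    insert /*+ append */ into t1
    select * from table(validate_stg(cursor(select id, val from xx_staging_table)));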

  • Procedure for Insert to BULK COLLECT

    hi,
    I have 2 questions-
    1) Say I have the code below. I want to call an insert procedure instead of the INSERT INTO. If I do, would it cause any performance issue?
    CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
    IS
    TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
    l_data ARRAY;
    CURSOR c IS SELECT * FROM all_objects;
    BEGIN
    OPEN c;
    LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1..l_data.COUNT
    INSERT INTO t1 VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
    END test_proc;
    CREATE OR REPLACE PROCEDURE insert_proc ( col1 table.col1%TYPE,
    col2 table.col2%TYPE,
    col20 table.col20%TYPE)
    IS
    BEGIN
    INSERT INTO HistoryTable (col1, col2, ...col20)
    VALUES (val1, val2, ...val20);
    END;
    2) Is there any clean method to create an insert procedure with 20 columns which I can call in another proc to do a bulk insert?

    It is good that you explained your requirements, but you did not give us any data to work with.
    If you could, help us with below details, it might be possible to help you:
    1. Create table statements for your Tables (eg. Checking, Savings and history)
    2. Insert Into statements for Sample data for your Tables.
    3. validations that you need to perform
    4. Expected output based on the Sample data provided in step 2.
    Please do not forget to post your version number
    select * from v$version;
    Also, use {noformat} tags before and after SQL statements and expected output, to preserve spaces and make the post more readable.

  • Calling Stored procedure which uses Bulk Collect

    Hi All, I have an Oracle stored procedure which uses Bulk Collect and returns a table type parameter as output. Can anyone please help me with how I can call this kind of stored procedure, which returns table type output, using VB and Oracle's driver? (I am successfully able to call it using the MS ODBC driver, but I want to use the OraOLEDB driver.)

    861412 wrote:
    how Can I call this kind of stored procedures which returns table type output using VB and Oracle's Driver.
    This forum deals with the server-side languages SQL and PL/SQL.
    Your question deals with the client side and Visual Basic language.
