PL/SQL bulk collect

Dear forum,
What are the criteria for using bulk collect in our PL/SQL programs,
and what are the Oracle parameters relevant to bulk collect performance?
Thanks

Use bulk collect anywhere that you cannot use a simple SQL statement and, most often, within the context of a loop where, in the old days, a cursor loop would have been used.
Any parameter that affects memory or the optimizer can potentially affect bulk collect.
Homework research should be done at http://tahiti.oracle.com
Demos of this functionality can be found at http://www.psoug.org/library.html
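A minimal sketch of that pattern (my_table and its id column are hypothetical), fetching in batches and processing each batch:
DECLARE
    CURSOR c IS SELECT id FROM my_table;            -- hypothetical source table
    TYPE t_ids IS TABLE OF my_table.id%TYPE;
    l_ids t_ids;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO l_ids LIMIT 100;  -- batch size caps PGA use
        EXIT WHEN l_ids.COUNT = 0;                  -- safe exit test
        FOR i IN 1 .. l_ids.COUNT LOOP
            NULL;                                   -- process l_ids(i) here
        END LOOP;
    END LOOP;
    CLOSE c;
END;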

Similar Messages

  • Can I use Bulk Collect results as input parameter for another cursor

    MUSIC            ==> remote MUSIC_DB database, MUSIC table has 60 million rows
    PRICE_DATA ==> remote PRICING_DB database, PRICE_DATA table has 1 billion rows
    These two tables once existed in the same database, but the size of the database exceeded the available hardware and hardware budget, so the PRICE_DATA table was moved to another Oracle database.  I need to create a single report that combines data from both of these tables, and a distributed join with the DRIVING_SITE hint will not work because both tables are too large to push to one DRIVING_SITE location, so I wrote this PL/SQL block to process in small batches.
    QUESTION: how can I use bulk collect from one cursor and pass that bulk-collected information as input to a second cursor without specifically listing each cell of the PL/SQL collection?  See the sample pseudo-code below; I am trying to find a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
    NOTE: below is truly pseudo-code; I had to change the names of everything to adhere to an NDA, but the code below works and is fast enough for my purposes. However, if I want to change from 100 input parameters to 200, I have to add more hard-coded values. There has got to be a better way.
    DECLARE
         -- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
         CURSOR C_CURRENT_MUSIC
         IS
        select distinct SONG_ID
        from MUSIC@MUSIC_DB
        where PRODUCTION_RELEASE=1
         /*  define a parameterized cursor that accepts 100 SONG_IDs and retrieves
              required pricing information */
         CURSOR C_get_music_price_data (
                   P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
                   P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
                   P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
                   P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
                   P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
                   P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
                   P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
                   P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
                   P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
                   P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER )
         IS
         select vpc.SONG_ID, vpc.STOREFRONT_ID
         from PRICE_DATA@PRICING_DB vpc
         where COUNTRY = 'USA'
         and START_DATE <= sysdate
         and END_DATE > sysdate
         and vpc.SONG_ID IN (
                   P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
                   P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
                   P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
                   P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
                   P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
                   P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
                   P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
                   P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
                   P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
                   P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100 )
         group by
               vpc.SONG_ID
              ,vpc.STOREFRONT_ID;
         TYPE SONG_ID_TYPE IS TABLE OF MUSIC.SONG_ID@MUSIC_DB%TYPE INDEX BY BINARY_INTEGER;
         V_SONG_ID_ARRAY                         SONG_ID_TYPE                     ;
         v_commit_counter           NUMBER := 0;
    BEGIN
         /* open the cursor you intend to bulk collect from */
         OPEN C_CURRENT_MUSIC;
         LOOP
              /* in batches of 100, bulk collect SONG_IDs into a PL/SQL table */
              FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
                   EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
                   /* to avoid a NO DATA FOUND error when passing 100 parameters to the OPEN cursor, if the array
                      is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
                   IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
                        FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
                             V_SONG_ID_ARRAY(j) := null;
                        END LOOP;
                   END IF;
              /* pass a batch of 100 to cursor that get price information per SONG_ID and STOREFRONT_ID */
              FOR j IN C_get_music_price_data (
                        V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
                        V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
                        V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
                        V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
                        V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
                        V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
                        V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
                        V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
                        V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
                        V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100) )
              LOOP
                   /* do stuff with data from Song and Pricing Database coming from the two
                        separate cursors, then continue processing more rows... */
              END LOOP;
              /* commit after each batch of 100 SONG_IDs is processed */        
              COMMIT;
              EXIT WHEN C_CURRENT_MUSIC%NOTFOUND;  -- exit when there are no more rows to fetch from cursor
         END LOOP; -- bulk fetching loop
         CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
         /* commit rows */
         COMMIT; -- commit any remaining uncommitted data.
    END;

    I've got a problem when passing a VARRAY of numbers as a parameter to a remote cursor: it takes a super long time to run, sometimes not finishing even after an hour has passed.
    Continuing with my example in the original entry, I replaced the bulk collect into a PL/SQL table collection with a VARRAY, and I bulk collect into the VARRAY. This is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly.  However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
    The code example below strips the problem down to its raw details: I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long, over 30 minutes, sometimes longer, when I am expecting seconds.
    IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's WHERE ... IN clause, the SQL runs in under 5 seconds and returns a result.  Also, if I pass the 100 SONG_IDs as individual cells of a PL/SQL table collection, then it also runs fast.
    I thought that since the VARRAY is used via a select subquery it is queried locally while the cursor is remote, and that I had a distributed-query problem on my hands, so I put in the DRIVING_SITE hint to attempt to force the query against the VARRAY to go to the remote server so the rest of the query would run there before returning the result, but that didn't work either; I still got a slow response.
    Is something wrong with my code, or am I running into an Oracle problem that may require support to resolve?
    DECLARE
         /*  define a parameterized cursor that accepts XXX number of SONG_IDs and
             retrieves required pricing information */
         CURSOR C_get_music_price_data
              ( p_array_song_ids SYS.ODCInumberList )
         IS
         select  /*+ DRIVING_SITE(pd) */
              count(distinct pd.EVE_ID)
         from PRICE_DATA@PRICING_DB pd
         where pd.COUNTRY = 'USA'
         and pd.START_DATE <= sysdate
         and pd.END_DATE > sysdate
         and pd.SONG_ID IN
              ( select column_value from table(p_array_song_ids) )
         group by
               pd.SONG_ID
              ,pd.STOREFRONT_ID;
      V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();    
    BEGIN
    V_ARRAY_SONG_IDS.EXTEND(100);
    V_ARRAY_SONG_IDS(  1 ) := 31135  ;
    V_ARRAY_SONG_IDS(  2 ) := 31140   ;
    V_ARRAY_SONG_IDS(  3 ) := 31142   ;
    V_ARRAY_SONG_IDS(  4 ) := 31144   ;
    V_ARRAY_SONG_IDS(  5 ) := 31146   ;
    V_ARRAY_SONG_IDS(  6 ) := 31148   ;
    V_ARRAY_SONG_IDS(  7 ) := 31150   ;
    V_ARRAY_SONG_IDS(  8 ) := 31152   ;
    V_ARRAY_SONG_IDS(  9 ) := 31154   ;
    V_ARRAY_SONG_IDS( 10 ) := 31156   ;
    V_ARRAY_SONG_IDS( 11 ) := 31158   ;
    V_ARRAY_SONG_IDS( 12 ) := 31160   ;
    V_ARRAY_SONG_IDS( 13 ) := 33598   ;
    V_ARRAY_SONG_IDS( 14 ) := 33603   ;
    V_ARRAY_SONG_IDS( 15 ) := 33605   ;
    V_ARRAY_SONG_IDS( 16 ) := 33607   ;
    V_ARRAY_SONG_IDS( 17 ) := 33609   ;
    V_ARRAY_SONG_IDS( 18 ) := 33611   ;
    V_ARRAY_SONG_IDS( 19 ) := 33613   ;
    V_ARRAY_SONG_IDS( 20 ) := 33615   ;
    V_ARRAY_SONG_IDS( 21 ) := 33617   ;
    V_ARRAY_SONG_IDS( 22 ) := 33630   ;
    V_ARRAY_SONG_IDS( 23 ) := 33632   ;
    V_ARRAY_SONG_IDS( 24 ) := 33636   ;
    V_ARRAY_SONG_IDS( 25 ) := 33638   ;
    V_ARRAY_SONG_IDS( 26 ) := 33640   ;
    V_ARRAY_SONG_IDS( 27 ) := 33642   ;
    V_ARRAY_SONG_IDS( 28 ) := 33644   ;
    V_ARRAY_SONG_IDS( 29 ) := 33646   ;
    V_ARRAY_SONG_IDS( 30 ) := 33648   ;
    V_ARRAY_SONG_IDS( 31 ) := 33662   ;
    V_ARRAY_SONG_IDS( 32 ) := 33667   ;
    V_ARRAY_SONG_IDS( 33 ) := 33669   ;
    V_ARRAY_SONG_IDS( 34 ) := 33671   ;
    V_ARRAY_SONG_IDS( 35 ) := 33673   ;
    V_ARRAY_SONG_IDS( 36 ) := 33675   ;
    V_ARRAY_SONG_IDS( 37 ) := 33677   ;
    V_ARRAY_SONG_IDS( 38 ) := 33679   ;
    V_ARRAY_SONG_IDS( 39 ) := 33681   ;
    V_ARRAY_SONG_IDS( 40 ) := 33683   ;
    V_ARRAY_SONG_IDS( 41 ) := 33685   ;
    V_ARRAY_SONG_IDS( 42 ) := 33700   ;
    V_ARRAY_SONG_IDS( 43 ) := 33702   ;
    V_ARRAY_SONG_IDS( 44 ) := 33704   ;
    V_ARRAY_SONG_IDS( 45 ) := 33706   ;
    V_ARRAY_SONG_IDS( 46 ) := 33708   ;
    V_ARRAY_SONG_IDS( 47 ) := 33710   ;
    V_ARRAY_SONG_IDS( 48 ) := 33712   ;
    V_ARRAY_SONG_IDS( 49 ) := 33723   ;
    V_ARRAY_SONG_IDS( 50 ) := 33725   ;
    V_ARRAY_SONG_IDS( 51 ) := 33727   ;
    V_ARRAY_SONG_IDS( 52 ) := 33729   ;
    V_ARRAY_SONG_IDS( 53 ) := 33731   ;
    V_ARRAY_SONG_IDS( 54 ) := 33733   ;
    V_ARRAY_SONG_IDS( 55 ) := 33735   ;
    V_ARRAY_SONG_IDS( 56 ) := 33737   ;
    V_ARRAY_SONG_IDS( 57 ) := 33749   ;
    V_ARRAY_SONG_IDS( 58 ) := 33751   ;
    V_ARRAY_SONG_IDS( 59 ) := 33753   ;
    V_ARRAY_SONG_IDS( 60 ) := 33755   ;
    V_ARRAY_SONG_IDS( 61 ) := 33757   ;
    V_ARRAY_SONG_IDS( 62 ) := 33759   ;
    V_ARRAY_SONG_IDS( 63 ) := 33761   ;
    V_ARRAY_SONG_IDS( 64 ) := 33763   ;
    V_ARRAY_SONG_IDS( 65 ) := 33775   ;
    V_ARRAY_SONG_IDS( 66 ) := 33777   ;
    V_ARRAY_SONG_IDS( 67 ) := 33779   ;
    V_ARRAY_SONG_IDS( 68 ) := 33781   ;
    V_ARRAY_SONG_IDS( 69 ) := 33783   ;
    V_ARRAY_SONG_IDS( 70 ) := 33785   ;
    V_ARRAY_SONG_IDS( 71 ) := 33787   ;
    V_ARRAY_SONG_IDS( 72 ) := 33789   ;
    V_ARRAY_SONG_IDS( 73 ) := 33791   ;
    V_ARRAY_SONG_IDS( 74 ) := 33793   ;
    V_ARRAY_SONG_IDS( 75 ) := 33807   ;
    V_ARRAY_SONG_IDS( 76 ) := 33809   ;
    V_ARRAY_SONG_IDS( 77 ) := 33811   ;
    V_ARRAY_SONG_IDS( 78 ) := 33813   ;
    V_ARRAY_SONG_IDS( 79 ) := 33815   ;
    V_ARRAY_SONG_IDS( 80 ) := 33817   ;
    V_ARRAY_SONG_IDS( 81 ) := 33819   ;
    V_ARRAY_SONG_IDS( 82 ) := 33821   ;
    V_ARRAY_SONG_IDS( 83 ) := 33823   ;
    V_ARRAY_SONG_IDS( 84 ) := 33825   ;
    V_ARRAY_SONG_IDS( 85 ) := 33839   ;
    V_ARRAY_SONG_IDS( 86 ) := 33844   ;
    V_ARRAY_SONG_IDS( 87 ) := 33846   ;
    V_ARRAY_SONG_IDS( 88 ) := 33848   ;
    V_ARRAY_SONG_IDS( 89 ) := 33850   ;
    V_ARRAY_SONG_IDS( 90 ) := 33852   ;
    V_ARRAY_SONG_IDS( 91 ) := 33854   ;
    V_ARRAY_SONG_IDS( 92 ) := 33856   ;
    V_ARRAY_SONG_IDS( 93 ) := 33858   ;
    V_ARRAY_SONG_IDS( 94 ) := 33860   ;
    V_ARRAY_SONG_IDS( 95 ) := 33874   ;
    V_ARRAY_SONG_IDS( 96 ) := 33879   ;
    V_ARRAY_SONG_IDS( 97 ) := 33881   ;
    V_ARRAY_SONG_IDS( 98 ) := 33883   ;
    V_ARRAY_SONG_IDS( 99 ) := 33885   ;
    V_ARRAY_SONG_IDS(100 ) := 33889  ;
         /* do stuff with data from Song and Pricing Database coming from the two
            separate cursors, then continue processing more rows... */
         FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
              /* this is the loop where I pass in v_array_song_ids
                 populated with only 100 cells and it runs forever */
              NULL;
         END LOOP;
     END;

  • How to improve performance using bulk collects with plsql tables or arrays

    Hi All,
    my procedure is like this
    declare
    cursor c1 is select ----------------------
    begin
    assigning to variables
    validations on that variables
    --50 validations are here --
    insert into a table
    end;
    we have created indexes on primary keys,
    i want to use
    DECLARE
    CURSOR a_cur IS
    SELECT program_id
    FROM airplanes;
    TYPE myarray IS TABLE OF a_cur%ROWTYPE;
    cur_array myarray;
    BEGIN
    OPEN a_cur;
    LOOP
    FETCH a_cur BULK COLLECT INTO cur_array LIMIT 100;
    -- can I assign cursor data to the PL/SQL table variables or array,
    -- and validate on the PL/SQL variables, as below?
    insert into a table
    EXIT WHEN a_cur%NOTFOUND;
    END LOOP;
    CLOSE a_cur;
    END;

    Fastest way often is this:
    insert /*+append */
    into aTable
    select * from airplanes;
    commit;
    The select and insert part can even be done in parallel if needed.
    However, if the operation is complex, or the dataset is very large, or the programmer is decent but not excellent, then the bulk approach should be considered. It is often a pretty stable and linearly scaling approach.
    The solution depends a little on the database version.
    LOOP
      FETCH a_cur BULK COLLECT INTO cur_array LIMIT 100;
      EXIT WHEN a_cur.count = 0;
      forall i in cur_array.first .. cur_array.last
      insert into aTable (id)
      values (cur_array(i));
    END LOOP;
    ...
    If you have more than one column then you might need a single collection for each column. Other possibilities depend on the db version.
    Also: do not exit using a_cur%NOTFOUND. This is wrong! You might lose records from the end of the data set.
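    For the multi-column case, a record-based variant is possible on 10g and later; a sketch, assuming aTable's structure matches the cursor projection:
    DECLARE
      CURSOR a_cur IS SELECT program_id FROM airplanes;
      TYPE myarray IS TABLE OF a_cur%ROWTYPE;
      cur_array myarray;
    BEGIN
      OPEN a_cur;
      LOOP
        FETCH a_cur BULK COLLECT INTO cur_array LIMIT 100;
        EXIT WHEN cur_array.COUNT = 0;              -- not a_cur%NOTFOUND
        FORALL i IN cur_array.FIRST .. cur_array.LAST
          INSERT INTO aTable VALUES cur_array(i);   -- whole record at once
      END LOOP;
      CLOSE a_cur;
    END;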

  • Using bulk collect and for all to solve a problem

    Hi All
    I have a following problem.
    Please forgive me if it's a stupid question :-) I'm learning.
    1: Data in a staging table xx_staging_table
    2: Two target tables t1, t2 into which some columns from xx_staging_table are inserted
    Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
    The two target tables use different set of columns from the staging table
    When I had a thousand records there was no problem with a direct insert, but it seems we will now have half a million records.
    This has slowed down the process considerably.
    My question is
    Can I use the bulk collect and forall functionality to get specific columns from a staging table, then validate the rows using those columns,
    and then use a bulk insert to load the data into a specific table?
    So code would be like
    The get_staging_data cursor will have all the columns I need from the staging table:
    cursor get_staging_data
    is select * from xx_staging_table  -- about 500,000 records
    Use bulk collect to load about 10000 or so records into a PL/SQL table,
    and then do a bulk insert like this:
    CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
    CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
    IS
    TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
    l_data ARRAY;
    CURSOR c IS SELECT * FROM all_objects;
    BEGIN
    OPEN c;
    LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1..l_data.COUNT
    INSERT INTO t1 VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
    END test_proc;
    In the above example t1 and the cursor have the same number of columns
    In my case the columns in the cursor loop are a small subset of the columns of table t1
    so can I use a forall to load that subset into the table t1? How does that work? (A sketch follows below.)
    Thanks
    J
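    A minimal sketch of the column-subset FORALL being asked about, with hypothetical staging columns col_a and col_b (one scalar collection per column, which works across versions; the INSERT simply names only the target columns of t1):
    DECLARE
      CURSOR get_staging_data IS SELECT col_a, col_b FROM xx_staging_table;
      TYPE t_col_a IS TABLE OF xx_staging_table.col_a%TYPE;
      TYPE t_col_b IS TABLE OF xx_staging_table.col_b%TYPE;
      l_a t_col_a;
      l_b t_col_b;
    BEGIN
      OPEN get_staging_data;
      LOOP
        FETCH get_staging_data BULK COLLECT INTO l_a, l_b LIMIT 10000;
        EXIT WHEN l_a.COUNT = 0;
        -- validate the batch here before the bulk insert
        FORALL i IN 1 .. l_a.COUNT
          INSERT INTO t1 (col_a, col_b)             -- a subset of t1's columns
          VALUES (l_a(i), l_b(i));
      END LOOP;
      CLOSE get_staging_data;
    END;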

    user7348303 wrote:
    checking if the value is valid and theres also some conditional processing rules ( such as if the value is a certain value no inserts are needed)
    which are a little more complex than I can put in a simple
    Well, if the processing is too complex (and conditional) to be done in SQL, then doing that in PL/SQL is justified... but it will be slower, as you are now introducing an additional layer. Data now needs to travel between the SQL layer and the PL/SQL layer. This is slower.
    PL/SQL is inherently serialised - and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
    To put it in simple terms: you create PL/SQL procedure Foo that processes a SQL cursor, and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It can perhaps parallelise the SQL cursor that Foo uses - but not Foo itself.
    However, if Foo is called by the SQL engine, it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe/parallel-enabled function, it can be called and executed in parallel by the SQL engine.
    So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running that same basic PL/SQL processing as a serialised process.
    This is of course assuming that the processing that needs to be done using PL/SQL code, can be designed and coded for parallel processing in this fashion.
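    A minimal sketch of such a parallel-enabled pipelined function (the object/collection types and the id/val columns are hypothetical; PARTITION BY ANY lets the SQL engine spread the cursor rows across parallel slaves):
    create or replace type stg_row_t as object (id number, val varchar2(30));
    create or replace type stg_tab_t as table of stg_row_t;
    create or replace function process_stg (p_cur in sys_refcursor)
      return stg_tab_t
      pipelined
      parallel_enable (partition p_cur by any)
    as
      l_id  number;
      l_val varchar2(30);
    begin
      loop
        fetch p_cur into l_id, l_val;
        exit when p_cur%notfound;
        -- complex conditional validation/transformation goes here
        pipe row (stg_row_t(l_id, l_val));
      end loop;
      return;
    end;
    It would then be driven from parallel SQL, e.g.
    insert /*+ append */ into t1
    select * from table(process_stg(cursor(select /*+ parallel(s 4) */ id, val from xx_staging_table s)));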

  • BULK COLLECT in select query inside a function

    Hi All,
    My query is :
    SELECT col1,col2,col3 FROM table_a; --( consider this is a long running query with lot of joins)
    Need to know how can i get the output of the above query from a function using BULK COLLECT.
    and i tried this:
    CREATE OR REPLACE TYPE tab_a_row
    AS OBJECT (
    col1 number(20),
    col2 number(20),
    col3 number(20)) ;
    create or replace type tab_a_nt as table of tab_a_row;
    create or replace function get_table_a
    return sys_refcursor
    is
    tab_a_recs tab_a_nt;
    rv sys_refcursor;
    begin
    SELECT tab_a_row(col1,col2,col3) BULK COLLECT INTO tab_a_recs FROM table_a;
    open rv for select * from table(tab_a_recs);
    return rv;
    end;
    The function was created successfully, and I executed it from SQL*Plus using
    SQL> var rc refcursor;
    SQL> exec :rc := get_table_a;
    BEGIN :rc := get_table_a; END;
    ERROR at line 1:
    ORA-22905: cannot access rows from a non-nested table item
    ORA-06512: at "GET_TABLE_A", line 12
    ORA-06512: at line 1
    Kindly share your ideas on how to use bulk collect and get a set of outputs from a function.

    >
    If i use refcursor , then the JAVA code needs to be changed accordingly to get the refcursor output.
    >
    Well, of course. Java has to know what the sql projection is. How else will it know how many columns there are and their datatypes?
    But that is true no matter what method you use.
    >
    But if i use a PLSQL COLLECTION TYPE (nested tables ) , then i can get output as ,
    select * from table(function_name(input1,input2));
    >
    No - using the 'table' function means you are calling a PIPELINED function.
    This is a sample of a PIPELINED function.
    -- type to match emp record
    create or replace type emp_scalar_type as object
      (EMPNO NUMBER(4) ,
       ENAME VARCHAR2(10),
       JOB VARCHAR2(9),
       MGR NUMBER(4),
       HIREDATE DATE,
       SAL NUMBER(7, 2),
       COMM NUMBER(7, 2),
       DEPTNO NUMBER(2) );
    -- table of emp records
    create or replace type emp_table_type as table of emp_scalar_type
    -- pipelined function
    create or replace function get_emp( p_deptno in number )
      return emp_table_type
      PIPELINED
      as
       TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
        emp_cv EmpCurTyp;
        l_rec  emp%rowtype;
      begin
        open emp_cv for select * from emp where deptno = p_deptno;
        loop
          fetch emp_cv into l_rec;
          exit when (emp_cv%notfound);
          pipe row( emp_scalar_type( l_rec.empno, LOWER(l_rec.ename),
              l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal, l_rec.comm, l_rec.deptno ) );
        end loop;
        return;
      end;
    select * from table(get_emp(20))
    Or your function could return a collection, like the example from this thread this morning:
    Example of Collection as datatype of a function’s return value
    CREATE OR REPLACE TYPE enamelist as VARRAY(20) of VARCHAR2(20)
    CREATE OR REPLACE FUNCTION ename_fn
    RETURN enamelist
    AS
    v_cursor_main enamelist := enamelist ();
    BEGIN
    SELECT ename
    BULK COLLECT
    INTO v_cursor_main
    FROM emp;
    RETURN v_cursor_main;
    EXCEPTION
    WHEN OTHERS
    THEN
    RETURN v_cursor_main;
    END;
    select * from table(ename_fn());
    COLUMN_VALUE
    SMITH
    ALLEN
    WARD
    JONES
    MARTIN
    BLAKE
    CLARK
    SCOTT
    KING
    TURNER
    ADAMS
    JAMES
    FORD
    MILLER

  • Problem with BULK COLLECT with million rows - Oracle 9.0.1.4

    We have a requirement where we are supposed to load 58 million rows into a FACT table in our DATA WAREHOUSE. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table, and in a loop processes the rows and inserts the records into a TARGET table. The logic works fine, but it took 20 hrs to complete the load.
    We then tried to leverage the BULK COLLECT and FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
    1. We declared PL/SQL BINARY_INTEGER-indexed tables to store the data in memory.
    2. We used BULK COLLECT to FETCH the data.
    3. We used FORALL statement while inserting the data.
    We did not introduce any of our transformation logic yet.
    We tried with 600,000 records first and it completed in 1 min and 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
    call ,pmucalm coll)
    ORA-06512: at "VVA.BULKLOAD", line 66
    ORA-06512: at line 1
    We got the same error even with 1 million rows.
    We do have the following configuration:
    SGA - 8.2 GB
    PGA
    - Aggregate Target - 3GB
    - Current Allocated - 439444KB (439 MB)
    - Maximum allocated - 2695753 KB (2.6 GB)
    Temp Table Space - 60.9 GB (Total)
    - 20 GB (Available approximately)
    I think we do have more than enough memory to process the 1 million rows!!
    Also, some times the same program results in the following error:
    SQL> exec bulkload
    BEGIN bulkload; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
    Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
    Thanks,
    Haranadh
    Following is the code:
    set echo off
    set timing on
    create or replace procedure bulkload as
    -- SOURCE --
    TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
    TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
    TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
    TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
    TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
    TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
    TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
    TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
    TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
    TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
    TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
    TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
    TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
    TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
    TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
    TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
    src_cpd_dt_array src_cpd_dt;
    src_acqr_ctry_cd_array      src_acqr_ctry_cd;
    src_acqr_pcr_ctry_cd_array     src_acqr_pcr_ctry_cd;
    src_issr_bin_array      src_issr_bin;
    src_mrch_locn_ref_id_array     src_mrch_locn_ref_id;
    src_ntwrk_id_array      src_ntwrk_id;
    src_stip_advc_cd_array      src_stip_advc_cd;
    src_authn_resp_cd_array      src_authn_resp_cd;
    src_authn_actvy_cd_array      src_authn_actvy_cd;
    src_resp_tm_id_array      src_resp_tm_id;
    src_mrch_ref_id_array      src_mrch_ref_id;
    src_issr_pcr_array      src_issr_pcr;
    src_issr_ctry_cd_array      src_issr_ctry_cd;
    src_acct_num_array      src_acct_num;
    src_tran_cnt_array      src_tran_cnt;
    src_usd_tran_amt_array      src_usd_tran_amt;
    j number := 1;
    CURSOR c1 IS
    SELECT
    cpd_dt,
    acqr_ctry_cd ,
    acqr_pcr_ctry_cd,
    issr_bin,
    mrch_locn_ref_id,
    ntwrk_id,
    stip_advc_cd,
    authn_resp_cd,
    authn_actvy_cd,
    resp_tm_id,
    mrch_ref_id,
    issr_pcr,
    issr_ctry_cd,
    acct_num,
    tran_cnt,
    usd_tran_amt
    FROM ima_ama_acct ima_ama_acct
    ORDER BY issr_bin;
    BEGIN
    OPEN c1;
    FETCH c1 bulk collect into
    src_cpd_dt_array ,
    src_acqr_ctry_cd_array ,
    src_acqr_pcr_ctry_cd_array,
    src_issr_bin_array ,
    src_mrch_locn_ref_id_array,
    src_ntwrk_id_array ,
    src_stip_advc_cd_array ,
    src_authn_resp_cd_array ,
    src_authn_actvy_cd_array ,
    src_resp_tm_id_array ,
    src_mrch_ref_id_array ,
    src_issr_pcr_array ,
    src_issr_ctry_cd_array ,
    src_acct_num_array ,
    src_tran_cnt_array ,
    src_usd_tran_amt_array ;
    CLOSE C1;
    FORALL j in 1 .. src_cpd_dt_array.count
    INSERT INTO ima_dly_acct (
         CPD_DT,
         ACQR_CTRY_CD,
         ACQR_TIER_CD,
         ACQR_PCR_CTRY_CD,
         ACQR_PCR_TIER_CD,
         ISSR_BIN,
         OWNR_BUS_ID,
         USER_BUS_ID,
         MRCH_LOCN_REF_ID,
         NTWRK_ID,
         STIP_ADVC_CD,
         AUTHN_RESP_CD,
         AUTHN_ACTVY_CD,
         RESP_TM_ID,
         PROD_REF_ID,
         MRCH_REF_ID,
         ISSR_PCR,
         ISSR_CTRY_CD,
         ACCT_NUM,
         TRAN_CNT,
         USD_TRAN_AMT)
         VALUES (
         src_cpd_dt_array(j),
         src_acqr_ctry_cd_array(j),
         null,
         src_acqr_pcr_ctry_cd_array(j),
              null,
              src_issr_bin_array(j),
              null,
              null,
              src_mrch_locn_ref_id_array(j),
              src_ntwrk_id_array(j),
              src_stip_advc_cd_array(j),
              src_authn_resp_cd_array(j),
              src_authn_actvy_cd_array(j),
              src_resp_tm_id_array(j),
              null,
              src_mrch_ref_id_array(j),
              src_issr_pcr_array(j),
              src_issr_ctry_cd_array(j),
              src_acct_num_array(j),
              src_tran_cnt_array(j),
              src_usd_tran_amt_array(j));
    COMMIT;
    END bulkload;
    SHOW ERRORS
    -----------------------------------------------------------------------------

    do you have a unique key available in the rows you are fetching?
    It seems a cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
    What if you used a cursor to select a unique key only, and then during the cursor loop fetch each record, transform it, and insert it into the target?
    It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
    I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
    My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
    For example:
    source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
    select a cursor of tx_seq_id only (even at 20 million rows this is not much)
    you could then either use a for loop or even bulk collect into a plsql collection or table
    then process individually like this:
    procedure process_a_tx(p_tx_seq_id in number)
    is
    rTX my_transactions%rowtype;
    begin
    select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
    --modify values as needed
    insert into my_target(a, b, c) values (rtx.fact_1, rtx.fact2, rtx.fact3);
    commit;
    exception
    when others then
    rollback;
    --write to a log or raise an exception
    end process_a_tx;
    procedure collect_tx
    is
    cursor tx is
    select tx_seq_id from my_transactions;
    begin
    for rTx in tx loop
    process_a_tx(rtx.tx_seq_id);
    end loop;
    end collect_tx;
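    For what it is worth, the ORA-04030 in the original post comes from fetching all the rows into the collections in one go; the same FORALL approach survives large volumes if the fetch is capped with LIMIT. A sketch with just two of the sixteen columns (the full version would list all the collections the same way):
    declare
      type t_cpd_dt  is table of ima_ama_acct.cpd_dt%type;
      type t_ctry_cd is table of ima_ama_acct.acqr_ctry_cd%type;
      l_cpd_dt  t_cpd_dt;
      l_ctry_cd t_ctry_cd;
      cursor c1 is select cpd_dt, acqr_ctry_cd from ima_ama_acct;
    begin
      open c1;
      loop
        fetch c1 bulk collect into l_cpd_dt, l_ctry_cd limit 1000;  -- caps PGA per batch
        exit when l_cpd_dt.count = 0;
        forall j in 1 .. l_cpd_dt.count
          insert into ima_dly_acct (cpd_dt, acqr_ctry_cd)
          values (l_cpd_dt(j), l_ctry_cd(j));
        commit;  -- one commit per batch
      end loop;
      close c1;
    end;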

  • Good bulk collect limit

    I wrote a PL/SQL procedure using bulk collect / forall.
    The procedure uses bulk collect to fetch from a normal cursor; then I am using forall to insert into the
    target table. The number of rows is 234,965,470.
    Question:
    What should ideally be the limit for my bulk collect?
    According to the link below, it should be in the hundreds:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1583402705463
    I put a bulk collect limit of 50000 - it took close to 2 hours.
    Then I tried 10000 - just 3 mins shorter than the above time.
    But if you commit every 500 rows, then is there not another theory that frequent commits are not good?

    user650888 wrote:
    What should ideally be the limit for my bulk collect?
    The answer to that is, What Does a Bulk Collect Do?
    And no, it is a fallacy that it makes SQL faster. It does not. Never did. Never will.
    A bulk process reduces the number of calls that need to be made to the SQL engine. In PL/SQL that is called context switching as the PL and SQL engines are tightly coupled.
    If, for example, the SQL cursor outputs 1000 rows and single-row fetches are used, 1000 calls or context switches are required for transferring the row data from the one engine to the other. If bulk processing of 100 rows at a time is used, then only 10 context switches are needed. That is a significant reduction in context switches.
    If you do a 1000-row bulk collect, only 1 context switch is needed. But there is a barely noticeable difference in time between 1 and 10 context switches. So using a bulk limit of 1000 will not improve performance at all versus a 100-row limit.
    There is a price for this - bulk processing needs to use very expensive private process memory on the server. Oracle calls this the PGA. Consider the difference in memory between a 100 limit and a 1000 limit. 10x more PGA is needed for a 1000 limit - and no real performance gains result as there is a negligible reduction in context switches.
    A 100 limit is often bandied around as the bulk collect limit that is the best. That is not really true. If the rows are very small, impact on the PGA is less severe - a higher limit can make sense.
    Likewise, if the rows are very large (100+ large columns fetched), then a 100 limit can make an unreasonable demand on PGA... which will quickly become a bad performance situation when a bunch of clients all execute this code at the same time.
    So the sweet spot for a bulk limit typically varies between 10 and a 1000.
    I put a bulk collect limit of 50000 - took close to 2 hours
    then i tried 10000 - just 3 mins shorter than the above time
    This is just plain wrong. As you've seen, you are not improving performance at all. In fact, your code can cause severe performance problems on the server due to the high demand it makes on private process memory, and the increase in work for the swap daemons that need to keep up with this demand.
    Bulk processing DOES NOT INCREASE SQL performance. This is important to understand. The ONLY THING that it does is reduce the number of calls between the SQL and PL/SQL engines.
    But if you commit every 500 rows, then is there not another theory that frequent commits are not good?
    That is not just plain wrong, but an idiotic approach. A commit is work. Why do you want to add more work to the process and expect that to increase performance?
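    A quick fetch-only harness (against the standard ALL_OBJECTS view) to test the limit sweet spot on your own system:
    declare
      type t_ids is table of all_objects.object_id%type;
      l_ids t_ids;
      cursor c is select object_id from all_objects;
      procedure run_with (p_limit in pls_integer) is
        l_start pls_integer := dbms_utility.get_time;  -- centiseconds
      begin
        open c;
        loop
          fetch c bulk collect into l_ids limit p_limit;
          exit when l_ids.count = 0;
        end loop;
        close c;
        dbms_output.put_line('limit '||p_limit||': '||
                             (dbms_utility.get_time - l_start)||' cs');
      end;
    begin
      run_with(100);
      run_with(1000);
    end;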

  • Bulk collect seems to retain previous row value when current returns null

    Dear all,
    I am currently writing package in plsql.
    The main logic of the program is as follows. The program works fine... but when no data is found for the current V_1, V_2, V_3, the insertion is done with the previous cursor row's values of V_1, V_2, V_3, which is not good.
    I tried to change the last nested cursor to use first..last instead of 1..count, but the result is the same.
    Any idea?
    open c_trt;
       loop  
       fetch c_trt bulk collect into bk_trig limit v_limit;
         open c_bkeve;
              fetch c_bkeve bulk collect into bk_eve limit v_limit;
                   if bk_eve.count > 0 then
                          for k in 1..bk_eve.count loop
                             case
                                  when a =1 then    
                                       open c_bkieve(bk_eve(k).age,bk_eve(k).ope, bk_eve(k).eve);
                                            fetch c_bkieve bulk collect into bk_ieve limit v_limit;
                                                 if bk_ieve.count > 0 then
                                                       for j in 1..bk_ieve.count loop
                                                                     case bk_ieve(j).a
                                                                         when 'ABC' then
                                                                            V_1 := nvl(trim(bk_ieve(j).b),null);
                                                                         when 'XYZ' then
                                                                            V_2 := nvl(trim(substr(bk_ieve(j).b,1,4)),null);
                                                                            V_3 := nvl(trim(substr(bk_ieve(j).b,6,22)),null);
                                                                          else
                                                                               null;
                                                                         end case;
                                                                     end loop;
                                                                else
                                                                     V_1 := null;
                                                                     V_2 := null;
                                                                     V_3 := null;
                                                                end if;
                                            close c_bkieve;
                        insert into xxx values(V_1,V_2,V_3);
    etc, etc
    Thanks for your help
    Jerome

    Something like this
       select a.dco
            , a.agsa
            , a.agem
            , a.agde
            , a.ope
            , a.eve
            , a.tpr
            , a.nat
            , a.age
            , a.dev
            , a.ncp
            , a.suf
            , a.sen
            , a.dva
            , a.mon
            , a.lib
            , c.cli
         from bmvtg_mi a
         join bcom c
           on a.age = c.age
          and a.dev = c.dev
          and a.ncp = c.ncp
          and a.suf = c.suf 
         join (
                   select x.*
                        , y.cur_char
                     from bkeve_mi x
                     left join wb_currency y
                       on x.csp4 = y.cur_num
              ) b
           on b.age = decode(v_var1, 'age', a.age, 'agem', a.agem, 'agsa', a.agsa, 'agde', a.agde, a.age)
        where exists
                  ( select *
                      from wb_client sc
                     where c.cli = sc.customer_number
                       and ready = 1 )
          and exists
                  ( select *
                      from wb_pdt sp
                     where c.cpro = sp.c_pro )

  • Generic bulk collect function

    I've run into a scenario in Forms (10g) where some of the embedded PL/SQL could benefit from the use of bulk collects. Now, as I've just found out, Forms doesn't like bulk operations in its embedded code; it returns a 'feature not supported in client...' error.
    I attempted to write a function in the db (10g), to be called from the Form, that accepts a ref cursor parameter, fetches the cursor contents using bulk collect into an array, then returns the filled array to the Form. (It may not be performant either, but I'll never know, because I didn't get it working.)
    However, fatally, I can't create a record array in the function without knowing the structure of the cursor. I've googled, and not found any way to do it.
    So, before I give up on it and go for a different solution, I just thought I'd check here to see if anyone else has attempted this. Or am I right in thinking it's just not doable?
    thanks in advance.

    Not a good idea to bulk collect in PL/SQL on behalf of an external client like Forms.
    A bulk collect does a single thing only - it reduces the context switching between the PL/SQL and SQL engines. If you fetch a 1000 rows one at a time, then that fetch statement will cause a 1000 context switches as PL/SQL needs to step into the SQL engine in order to copy that row's data into PL/SQL.
    If you do a bulk fetch of a 100 rows/fetch, then you reduce the number of context switches to 10. A significant reduction.
    Okay, so now you hook Forms (or Java/Delphi/C#/etc) into this chain and push that bulk collection from a PL/SQL variable to this client. What do you achieve? Not much as you're now offsetting that reduction in context switches with more memory that needs to be used (that PL/SQL collection buffer needs PGA memory), and you add the overheads of first pulling the data from the db buffer cache into PL/SQL and then to the client.
    PL/SQL makes a very poor data cache for a client - the db buffer cache was designed exactly for that purpose, and it is far superior to anything we can code in PL/SQL for the same role.
    It is much simpler, more robust, to rather fetch the data directly from the db buffer cache - no intermediate PL/SQL caching and buffering when fetching data. This will also scale better and perform better.
    The ideal is to use PL/SQL to implement a business logic layer, security, pre-processing, validation and other good stuff for the client - and then return a ref cursor to the client. Allowing the client to use that prepared cursor to fetch data directly from the db buffer cache.
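    A minimal sketch of that layering, using the standard EMP demo table:
    create or replace function get_emps (p_deptno in emp.deptno%type)
      return sys_refcursor
    as
      rc sys_refcursor;
    begin
      -- business rules, security checks, validation go here
      open rc for
        select empno, ename, sal
          from emp
         where deptno = p_deptno;
      return rc;  -- the client fetches directly from the db buffer cache
    end;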

  • Bulk Collects in Cursors

    I'm enhancing a piece of code I have written and would like to use a bulk collect to enter values into my collection, rather than multiple switches.
    Can someone advise me on the correct use of bulk collects with cursors? Is it possible, or should I simply enter the SQL directly into the PLSQL?
    Thanks

    It's not only possible but preferable to use BULK COLLECT to fetch cursor data into collections. If you search the Oracle documentation at http://tahiti.oracle.com for BULK COLLECT, you'll get plenty of hits that walk you through the details. Basically, though, you want something like this (lifted from the PL/SQL User's Guide):
    DECLARE
       TYPE NumTab IS TABLE OF emp.empno%TYPE;
       TYPE NameTab IS TABLE OF emp.ename%TYPE;
       enums NumTab;  -- no need to initialize
       names NameTab;
    BEGIN
       SELECT empno, ename  
         BULK COLLECT INTO enums, names
         FROM emp;
    END;
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC

  • Any way to use cursor values inside other cursor by bulk collect?

    hi,
    Is there any way to use the cursor get_tables values inside the loop of get_columns if I am using bulk collect in both cursors?
    I tried a lot but I am not able to do it. Kindly help...
    create or replace procedure MULTIPLE_CURSORS_PROC is
    v_owner varchar2(40);
    v_table_name varchar2(40);
    v_column_name varchar2(100);
    cursor get_tables is
    select distinct tbl.owner, tbl.table_name
    from all_tables tbl
    where tbl.owner = 'SYSTEM';
    cursor get_columns is
    select distinct col.column_name
    from all_tab_columns col
    where col.owner = v_owner
    and col.table_name = v_table_name;
    begin
    open get_tables;
    loop
    fetch get_tables into v_owner, v_table_name;
    exit when get_tables%notfound;
    open get_columns;
    loop
    fetch get_columns into v_column_name;
    exit when get_columns%notfound;
    end loop;
    close get_columns;
    end loop;
    close get_tables;
    end ;

    hi there
    Refer this
    CREATE OR REPLACE PROCEDURE MULTIPLE_CURSORS_PROC
    IS
       TYPE scol IS VARRAY (10000) OF VARCHAR2 (32767);
       v_table_name    scol;
       v_column_name   scol;
       TYPE curtyp IS REF CURSOR;
       get_columns     curtyp;
       CURSOR get_tables
       IS
          SELECT   DISTINCT tbl.table_name
            FROM   all_tables tbl
           WHERE   tbl.owner = 'SYSTEM';
    BEGIN
       OPEN get_tables;
       LOOP
          FETCH get_tables BULK COLLECT INTO   v_table_name;
          FOR indx IN v_table_name.FIRST .. v_table_name.LAST
          LOOP
             SELECT   DISTINCT col.column_name
               BULK   COLLECT
               INTO   v_column_name
               FROM   all_tab_columns col
              WHERE   col.table_name = v_table_name (indx);
             FOR ind IN v_column_name.FIRST .. v_column_name.LAST
             LOOP
                DBMS_OUTPUT.put_line (v_column_name (ind));
             END LOOP;
          END LOOP;
          EXIT WHEN get_tables%NOTFOUND;
       END LOOP;
       CLOSE get_tables;
     END MULTIPLE_CURSORS_PROC;
     regards
    Hitesh

  • Need to increase performance-bulk collect in cursor with limit and in the for loop inserting into the trigger table

    Hi all,
    I have a performance issue in the code below, where I am trying to insert the data from table_stg into the target_tab and parent_tab tables, and then into the child tables, via a cursor with bulk collect. The target_tab and parent_tab tables are huge and have a row-wise trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. Now my requirement is to reduce it to 5 to 10 minutes.
    Can someone please guide me here? It's a bit urgent. Awaiting your response.
    declare
    vmax_Value NUMBER(5);
      vcnt number(10);
      id_val number(20);
      pc_id number(15);
      vtable_nm VARCHAR2(100);
      vstep_no  VARCHAR2(10);
      vsql_code VARCHAR2(10);
      vsql_errm varchar2(200);
      vtarget_starttime timestamp;
      limit_in number :=10000;
      idx           number(10);
      success_flag  number(1);
              cursor stg_cursor is
             select
                   DESCRIPTION,
                   SORT_CODE,
                   ACCOUNT_NUMBER,
                     to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
                     to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
                     to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
                   PARTY_ID,
                   GUID,
                   PAPERLESS_REF_IND,
                   PRODUCT_TYPE,
                   PRODUCT_BRAND,
                   PRODUCT_HELD_ID,
                   NOTIFICATION_PREF,
                   UNREAD_CORRES_PERIOD,
                   EMAIL_ID,
                   MOBILE_NUMBER,
                   TITLE,
                   SURNAME,
                   POSTCODE,
                   EVENT_TYPE,
                   PRIORITY_IND,
                   SUBJECT,
                   EXT_PRD_ID_TX,
                   EXT_PRD_HLD_ID_TX,
                   EXT_SYS_ID,
                   EXT_PTY_ID_TX,
                   ACCOUNT_TYPE_CD,
                   COM_PFR_TYP_TX,
                   COM_PFR_OPT_TX,
                   COM_PFR_RSN_CD
             from  table_stg;
    type rec_type is table of stg_cursor%ROWTYPE index by pls_integer;
    v_rt_all_cols rec_type;
    BEGIN
      vstep_no   := '0';
      vmax_value := 0;
      vtarget_starttime := systimestamp;
      id_val    := 0;
      pc_id     := 0;
      success_flag := 0;
              vstep_no  := '1';
              vtable_nm := 'before cursor';
        OPEN stg_cursor;
              vstep_no  := '2';
              vtable_nm := 'After cursor';
       LOOP
              vstep_no  := '3';
              vtable_nm := 'before fetch';
    --loop
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    FOR i IN 1 .. v_rt_all_cols.COUNT
      LOOP
       dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
        if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
                  vstep_no  := '4.1';
                  vtable_nm := 'before seq sel';
              select PC_SEQ.nextval into pc_id from dual;
                  vstep_no  := '4.2';
                  vtable_nm := 'before insert corres';
              INSERT INTO target1_tab
                           (ID,
                            PARTY_ID,
                            PRODUCT_BRAND,
                            SORT_CODE,
                            ACCOUNT_NUMBER,
                            EXT_PRD_ID_TX,         
                            EXT_PRD_HLD_ID_TX,
                            EXT_SYS_ID,
                            EXT_PTY_ID_TX,
                            ACCOUNT_TYPE_CD,
                            COM_PFR_TYP_TX,
                            COM_PFR_OPT_TX,
                            COM_PFR_RSN_CD,
                            status)
             VALUES
                            (pc_id,
                             v_rt_all_cols(i).party_id,
                             decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                             v_rt_all_cols(i).sort_code,
                             'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
                             v_rt_all_cols(i).EXT_PRD_ID_TX,
                             v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
                             v_rt_all_cols(i).EXT_SYS_ID,
                             v_rt_all_cols(i).EXT_PTY_ID_TX,
                             v_rt_all_cols(i).ACCOUNT_TYPE_CD,
                             v_rt_all_cols(i).COM_PFR_TYP_TX,
                             v_rt_all_cols(i).COM_PFR_OPT_TX,
                             v_rt_all_cols(i).COM_PFR_RSN_CD,
                             NULL);
                  vstep_no  := '4.3';
                  vtable_nm := 'after insert corres';
        else
              select COM_SEQ.nextval into id_val from dual;
                  vstep_no  := '6';
                  vtable_nm := 'before insertcomm';
          if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
                vstep_no  := '6.01';
                  vtable_nm := 'after if insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'IADAREMINDER',
                  systimestamp,
                  systimestamp,
                  0);  
         else
                vstep_no  := '6.02';
                  vtable_nm := 'after else insertcomm';
              insert into parent_tab
                 (ID ,
                 CTEM_CODE,
                 CHA_CODE,            
                 CT_CODE,                           
                 CONTACT_POINT_ID,             
                 SOURCE,
                 RECEIVED_DATE,                             
                 SEND_DATE,
                 RETRY_COUNT)
              values
                 (id_val,
                  lower(v_rt_all_cols(i).event_type), 
                  decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
                  'Email',
                  v_rt_all_cols(i).email_id,
                  'CORRESPONDENCE',
                  systimestamp,
                  systimestamp,
                  0); 
            END if; 
                  vstep_no  := '6.11';
                  vtable_nm := 'before chop';
             if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then 
                      v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 'IB.Correspondence.AccountNumberMasked',
                 v_rt_all_cols(i).ACCOUNT_NUMBER);
             end if;
                  vstep_no  := '6.1';
                  vtable_nm := 'before stateday';
             if (v_rt_all_cols(i).crr_day is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Day',
                 'IB.Crsp.Date.Day',
                 v_rt_all_cols(i).crr_day);
             end if;
                  vstep_no  := '6.2';
                  vtable_nm := 'before statemth';
             if (v_rt_all_cols(i).crr_month is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Month',
                 'IB.Crsp.Date.Month',
                 v_rt_all_cols(i).crr_month);
             end if;
                  vstep_no  := '6.3';
                  vtable_nm := 'before stateyear';
             if (v_rt_all_cols(i).crr_year is not null) then 
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)
              values
                (id_val,
                 --'IB.Correspondence.Date.Year',
                 'IB.Crsp.Date.Year',
                 v_rt_all_cols(i).crr_year);
             end if;
                  vstep_no  := '7';
                  vtable_nm := 'before type';
               if (v_rt_all_cols(i).product_type is not null) then
                  insert into child_tab
                     (COM_ID,                                            
                     KEY,                                                                                                                                        
                     VALUE)
                  values
                    (id_val,
                     'IB.Product.ProductName',
                   v_rt_all_cols(i).product_type);
                end if;
                  vstep_no  := '9';
                  vtable_nm := 'before title';         
              if (trim(v_rt_all_cols(i).title) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE )
              values
                (id_val,
                 'IB.Customer.Title',
                 trim(v_rt_all_cols(i).title));
              end if;
                  vstep_no  := '10';
                  vtable_nm := 'before surname';
              if (v_rt_all_cols(i).surname is not null) then
                insert into child_tab
                   (COM_ID,                                            
                   KEY,                                                                                                                                          
                   VALUE)
                values
                  (id_val,
                  'IB.Customer.LastName',
                  v_rt_all_cols(i).surname);
              end if;
                            vstep_no  := '12';
                            vtable_nm := 'before postcd';
              if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Customer.Addr.PostCodeMasked',
                  substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
              end if;
                            vstep_no  := '13';
                            vtable_nm := 'before subject';
              if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Subject',
                  v_rt_all_cols(i).subject);
              end if;
                            vstep_no  := '14';
                            vtable_nm := 'before inactivity';
              if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
                  trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
              insert into child_tab
                 (COM_ID,                                            
                 KEY,                                                                                                                                            
                 VALUE)                              
               values
                (id_val,
                 'IB.Correspondence.Inactivity',
                  v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
              end if;
                          vstep_no  := '14.1';
                          vtable_nm := 'after notfound';
        end if;
                          vstep_no  := '15';
                          vtable_nm := 'after notfound';
        END LOOP;
        end loop;
                          vstep_no  := '16';
                          vtable_nm := 'before closecur';
        CLOSE stg_cursor;
                          vstep_no  := '17';
                          vtable_nm := 'before commit';
        DELETE FROM table_stg;
      COMMIT;
                          vstep_no  := '18';
                          vtable_nm := 'after commit';
    EXCEPTION
    WHEN OTHERS THEN
      ROLLBACK;
      success_flag := 1;
      vsql_code := SQLCODE;
      vsql_errm := SUBSTR(sqlerrm,1,200);
      error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
      RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
    end;
    Thanks

    It's a bit urgent.
    NO - it is NOT urgent. Not to us.
    If you have an urgent problem you need to hire a consultant.
    I have a performance issue in the below code,
    Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.
    where I am trying to insert the data from table_stg into the target_tab and parent_tab tables, and then into the child tables, via a cursor with bulk collect. The target_tab and parent_tab tables are huge and have a mandatory row-level trigger enabled on them. The time taken for this block to execute is 5000 seconds. My requirement is to reduce it to 5 to 10 minutes.
    Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
    Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
    This code has the EXIT statement at the beginning of the loop instead of at the end
        FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
                  vstep_no  := '4';
                  vtable_nm := 'after fetch';
    --EXIT WHEN v_rt_all_cols.COUNT = 0;
        EXIT WHEN stg_cursor%NOTFOUND;
    The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.
    You can use a COUNT test at the start of the loop, but ironically you have commented it out and done the NOTFOUND test wrong instead. Either move the NOTFOUND test to the end of the loop, or remove it and uncomment the COUNT test.
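    A minimal sketch of the corrected shape (cursor and collection names borrowed from the post; user_objects stands in for the real staging table, and the LIMIT value is an assumption):
    declare
        cursor stg_cursor is select object_name from user_objects;
        type t_cols is table of user_objects.object_name%type;
        v_rt_all_cols t_cols;
        limit_in pls_integer := 1000;
    begin
        open stg_cursor;
        loop
            fetch stg_cursor bulk collect into v_rt_all_cols limit limit_in;
            exit when v_rt_all_cols.count = 0;  -- safe: tested before any processing
            for i in 1 .. v_rt_all_cols.count loop
                null;                           -- process row i here
            end loop;
            exit when stg_cursor%notfound;      -- optional: saves one final empty fetch
        end loop;
        close stg_cursor;
    end;
    /
    A LIMIT somewhere between 100 and 1000 is usually a reasonable trade-off between PGA memory use and the number of fetch round trips.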
    WHEN OTHERS THEN
      ROLLBACK;
    That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
    Your code also has NO exception handling for any of the individual steps or blocks of code.
    The code you posted also raises the question of why you are using NAME=VALUE pairs for the child data rows. Why aren't you using a standard relational table for this data?
    As others have noted, you are using slow-by-slow (row-by-row) processing. Let's assume that PL/SQL, the bulk collect and row-by-row processing are actually necessary.
    Then you should be constructing the parent and child records into collections and then inserting them in BULK using FORALL.
    1. Create a collection for the new parent rows
    2. Create a collection for the new child rows
    3. For each batch of LIMIT source rows:
      a. empty the parent and child collections
      b. populate those collections with new parent/child data
      c. bulk insert the parent collection into the parent table
      d. bulk insert the child collection into the child table
    And unless you really want to either load EVERYTHING or abandon everything you should use bulk exception handling so that the clean data gets processed and only the dirty data gets rejected.
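    A sketch of that shape, using the table names from the post but a shortened column list (the staging columns, the LIMIT and the use of COM_SEQ are assumptions; here the staging collection itself drives both FORALL statements, so steps a and b collapse into the fetch):
    declare
        cursor stg_cursor is
            select email_id, event_type, subject from table_stg;
        type t_stg is table of stg_cursor%rowtype;
        type t_ids is table of parent_tab.id%type;
        v_rows t_stg;
        v_ids  t_ids;
    begin
        open stg_cursor;
        loop
            fetch stg_cursor bulk collect into v_rows limit 1000;
            exit when v_rows.count = 0;
            forall i in 1 .. v_rows.count       -- one bulk insert for all parents
                insert into parent_tab (id, ctem_code, contact_point_id)
                values (com_seq.nextval, lower(v_rows(i).event_type), v_rows(i).email_id)
                returning id bulk collect into v_ids;
            forall i in 1 .. v_rows.count       -- one bulk insert for all children
                insert into child_tab (com_id, key, value)
                values (v_ids(i), 'IB.Correspondence.Subject', v_rows(i).subject);
        end loop;
        close stg_cursor;
        commit;
    end;
    /
    Adding SAVE EXCEPTIONS to each FORALL (see the error-handling question further down) is what lets the clean rows through while only the dirty ones get rejected.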

  • Opening two cursors using open cursor with bulk collect on collections

    Is it possible to implement bulk collect with collections using two open cursors?
    first c1
    second c2
    open c1
    loop
    open c2
    loop
    end loop
    close c2
    end loop;
    close c1
    What I found is that for every iteration of the outer cursor c1, cursor c2 is opened and closed once per record.
    Will this improve the performance?
    EXAMPLE:
    NOTE: the relation between FINC and MINC is one-to-many; FINC is the parent and MINC is the child
    function chk_notnull_blank ( colname IN number ) return number is
    BEGIN
    if ( colname is NOT NULL and colname not in ( -8E14, -7E14, -6E14, -5E14, -4E14, -3E14, -2E14, -1E14, -1E9 )) then
    RETURN colname ;
    else
    RETURN 0;
    end if;
    END chk_notnull_blank;
    procedure Proc_AnnualFmlyTotIncSummary is
    CURSOR c_cur_finc IS SELECT FAMID FROM FINC ;
    CURSOR c_cur_minc IS SELECT FAMID, MEMBNO , ANFEDTX, ANGOVRTX, ANPRVPNX, ANRRDEDX, ANSLTX, SALARYX, SALARYBX, NONFARMX, NONFRMBX , FARMINCX, FRMINCBX, RRRETIRX, RRRETRBX, SOCRRX, INDRETX, JSSDEDX, SSIX, SSIBX from MINC minc WHERE FAMID IN ( SELECT FAMID FROM FINC finc WHERE minc.FAMID = finc.FAMID );
    v_tot_fsalaryx number := 0;
    v_tot_fnonfrmx number := 0;
    v_tot_ffrmincx number := 0;
    v_tot_frretirx number := 0;
    v_tot_findretx number := 0;
    v_tot_fjssdedx number := 0;
    v_tot_fssix number := 0;
    v_temp_sum_fsalaryx number := 0;
    v_temp_sum_fnonfrmx number := 0;
    v_temp_sum_ffrmincx number := 0;
    v_temp_sum_frretirx number := 0;
    v_temp_sum_findretx number := 0;
    v_temp_sum_fjssdedx number := 0;
    v_temp_sum_fssix number := 0;
    TYPE minc_rec IS RECORD (FAMID MINC.FAMID%TYPE, MEMBNO MINC.MEMBNO%TYPE , ANFEDTX MINC.ANFEDTX%TYPE, ANGOVRTX MINC.ANGOVRTX%TYPE , ANPRVPNX MINC.ANPRVPNX%TYPE , ANRRDEDX MINC.ANRRDEDX%TYPE , ANSLTX MINC.ANSLTX%TYPE, SALARYX MINC.SALARYX%TYPE , SALARYBX MINC.SALARYBX%TYPE , NONFARMX MINC.NONFARMX%TYPE , NONFRMBX MINC.NONFRMBX%TYPE, FARMINCX MINC.FARMINCX%TYPE , FRMINCBX MINC.FRMINCBX%TYPE , RRRETIRX MINC.RRRETIRX%TYPE , RRRETRBX MINC.RRRETRBX%TYPE, SOCRRX MINC.SOCRRX%TYPE , INDRETX MINC.INDRETX%TYPE , JSSDEDX MINC.JSSDEDX%TYPE , SSIX MINC.SSIX%TYPE , SSIBX MINC.SSIBX%TYPE );
    v_flag_boolean boolean := false;
    v_famid number ;
    v_stmt varchar2(3200) ;
    v_limit number := 50;
    v_temp_FAMTFEDX number := 0 ;
    v_temp_FGOVRETX number := 0 ;
    v_temp_FPRIVPENX number := 0 ;
    v_temp_FRRDEDX number := 0 ;
    v_temp_FSLTAXX number := 0 ;
    v_temp_FSALARYX number := 0 ;
    v_temp_FNONFRMX number := 0 ;
    v_temp_FFRMINCX number := 0 ;
    v_temp_FRRETIRX number := 0 ;
    v_temp_FINDRETX number := 0 ;
    v_temp_FJSSDEDX number := 0 ;
    v_temp_FSSIX number := 0 ;
    BEGIN
    OPEN c_cur_finc ;
    LOOP
         FETCH c_cur_finc BULK COLLECT INTO famid_type_tbl LIMIT v_limit;
         EXIT WHEN famid_type_tbl.COUNT = 0;
         FOR i in famid_type_tbl.FIRST..famid_type_tbl.LAST
         LOOP
         OPEN c_cur_minc ;
         LOOP
         FETCH c_cur_minc BULK COLLECT INTO minc_rec_type_tbl LIMIT v_limit;
         EXIT WHEN minc_rec_type_tbl.COUNT = 0;
              FOR j IN minc_rec_type_tbl.FIRST..minc_rec_type_tbl.LAST
              LOOP
              if ( famid_type_tbl(i) = minc_rec_type_tbl(j).FAMID ) THEN
              v_temp_FAMTFEDX := v_temp_FAMTFEDX + chk_notnull_blank(minc_rec_type_tbl(j).ANFEDTX );
              v_temp_FGOVRETX := v_temp_FGOVRETX + chk_notnull_blank(minc_rec_type_tbl(j).ANGOVRTX);
              v_temp_FPRIPENX := v_temp_FPRIPENX + chk_notnull_blank(minc_rec_type_tbl(j).ANPRVPNX);
              v_temp_FRRDEDX := v_temp_FRRDEDX + chk_notnull_blank(minc_rec_type_tbl(j).ANRRDEDX);
              v_temp_FSLTAXX := v_temp_FSLTAXX + chk_notnull_blank(minc_rec_type_tbl(j).ANSLTX );
              v_temp_FSALARYX := v_temp_FSALARYX + chk_notnull_blank(minc_rec_type_tbl(j).SALARYX ) + chk_notnull_blank(minc_rec_type_tbl(j).SALARYBX);
              v_temp_FNONFRMX := v_temp_FNONFRMX + chk_notnull_blank(minc_rec_type_tbl(j).NONFARMX) + chk_notnull_blank(minc_rec_type_tbl(j).NONFRMBX);
              v_temp_FFRMINCX := v_temp_FFRMINCX + chk_notnull_blank(minc_rec_type_tbl(j).FARMINCX) + chk_notnull_blank(minc_rec_type_tbl(j).FRMINCBX );
              v_temp_FRRETIRX := v_temp_FRRETIRX + chk_notnull_blank(minc_rec_type_tbl(j).RRRETIRX) + chk_notnull_blank(minc_rec_type_tbl(j).RRRETRBX ) + chk_notnull_blank(minc_rec_type_tbl(j).SOCRRX);
              v_temp_FINDREXT := v_temp_FINDRETX + chk_notnull_blank(minc_rec_type_tbl(j).INDRETX);
              v_temp_FJSSDEDX := v_temp_FJSSDEDX + chk_notnull_blank(minc_rec_type_tbl(j).JSSDEDX);
              v_temp_FSSIX := v_temp_FSSIX + chk_notnull_blank(minc_rec_type_tbl(j).SSIX ) + chk_notnull_blank(minc_rec_type_tbl(j).SSIBX);
              END IF;
              END LOOP;
         END LOOP ;
         CLOSE c_cur_minc;
         UPDATE FINC SET FAMTFEDX = v_temp_FAMTFEDX WHERE FAMID = famid_type_tbl(i);
         END LOOP;
    END LOOP;
    CLOSE c_cur_finc;
    END;
    EXCEPTION
    WHEN OTHERS THEN
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    v_err_code := SQLCODE;
    v_err_msg := substr(SQLERRM, 1, 200);
    INSERT INTO audit_table (error_number, error_message) VALUES (v_err_code, v_err_msg);
    error_logging(p_error_code => substr(sqlerrm,1,9), p_error_message => substr(sqlerrm,12), p_package =>'PKG_FCI_APP',p_procedure => 'Proc_Annual_Deductions_FromPay ' , p_location => v_location);
    end Proc_AnnualFmlyTotIncSummary ;
    Is the program efficient and free from compilation errors?
    thanks/kumar
    Edited by: kumar73 on Sep 22, 2010 12:48 PM

    function chk_notnull_blank ( colname IN number ) return number is
    Maybe this function should have its own forum:
    how to use case in this program
    Re: how to declare a formal parameter in a function of type record and access ?
    Re: how to define a function with table type parameter
    Re: creation of db trigger with error ..
    Re: How to write a trigger for the below scenario
    how to improve the code using advanced methods
    yours advice in improving the coding ..
    How to use bulk in multiple cursors !!
    ;-)
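    Leaving the snark aside: opening c2 inside the outer loop will not improve performance; it is the same slow-by-slow pattern criticized earlier in this thread. A single set-based UPDATE can usually replace both cursors. A sketch against the FINC/MINC tables from the post, showing only the FAMTFEDX total (the other columns follow the same pattern; the CASE expression reimplements chk_notnull_blank in SQL):
    UPDATE finc f
    SET f.famtfedx =
        ( SELECT NVL(SUM(CASE
                            WHEN m.anfedtx IS NOT NULL
                             AND m.anfedtx NOT IN (-8E14, -7E14, -6E14, -5E14,
                                                   -4E14, -3E14, -2E14, -1E14, -1E9)
                            THEN m.anfedtx
                            ELSE 0
                        END), 0)
          FROM minc m
          WHERE m.famid = f.famid )
    WHERE EXISTS (SELECT 1 FROM minc m WHERE m.famid = f.famid);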

  • How to view errors if bulk collect has thrown errors

    Hi,
    I have few questions.
    1. How to check whether a bulk collect succeeded or threw errors?
    2. What are identified and unidentified relationships in ERWIN?
    3. How to check whether SQL*Loader succeeded or failed, and how to open
    the log file? Is there a specific UNIX command that reports whether the
    loader succeeded or threw an error?
    4. When executing a PL/SQL procedure from UNIX, how do I check for errors?
    Please provide answers for these.
    Thanks

    Use SAVE EXCEPTIONS clause in your FORALL loop.
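    A minimal sketch of the pattern (target_tab and the collection are invented names; assume target_tab.val is NOT NULL so the third element fails):
    declare
        bulk_errors exception;
        pragma exception_init(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
        type t_vals is table of number;
        v_vals t_vals := t_vals(1, 2, null, 4);
    begin
        forall i in 1 .. v_vals.count save exceptions
            insert into target_tab (val) values (v_vals(i));
    exception
        when bulk_errors then
            for j in 1 .. sql%bulk_exceptions.count loop
                dbms_output.put_line('row '
                    || sql%bulk_exceptions(j).error_index || ' failed: '
                    || sqlerrm(-sql%bulk_exceptions(j).error_code));
            end loop;
    end;
    /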
    Is this for homework/test?

  • Use of FOR Cursor and BULK COLLECT INTO

    Dear all,
    in which cases do we prefer a FOR cursor loop, and in which a cursor with BULK COLLECT INTO? The following contains two blocks that query identically: one uses a FOR cursor loop, the other BULK COLLECT INTO. Which one performs better for the given task? How do we measure the performance between these two?
    I'm using the sample HR schema:
    declare
    l_start number;
    BEGIN
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    FOR employee IN (SELECT e.last_name, j.job_title FROM employees e,jobs j
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name)
    LOOP
      DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    END;
    declare
    l_start number;
    type rec_type is table of varchar2(20);
    name_rec rec_type;
    job_rec rec_type;
    begin
    l_start:= DBMS_UTILITY.get_time;
    dbms_lock.sleep(1);
    SELECT e.last_name, j.job_title bulk collect into name_rec,job_rec FROM employees e,jobs j
    where e.job_id=j.job_id and  e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name;
    for j in name_rec.first..name_rec.last loop
      DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
    END LOOP;
    DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    end;
    /
    In this code, I put a timestamp in each block, but they are useless since both blocks run virtually instantaneously...
    Best regards,
    Val
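    One way to get measurable numbers (a sketch, not from the thread): drop the dbms_lock.sleep call, which adds a fixed 100 hsecs to both timings; keep DBMS_OUTPUT out of the timed region, since at this row count the output call dominates; and repeat each block enough times to register. For the FOR-loop version, for example:
    declare
        l_start number;
        l_name  employees.last_name%type;
        l_title jobs.job_title%type;
    begin
        l_start := dbms_utility.get_time;
        for r in 1 .. 1000 loop                 -- repeat to get a measurable duration
            for employee in (select e.last_name, j.job_title
                               from employees e, jobs j
                              where e.job_id = j.job_id
                                and e.job_id like '%CLERK%'
                                and e.manager_id > 120
                              order by e.last_name)
            loop
                l_name  := employee.last_name;  -- consume the row; no output call
                l_title := employee.job_title;
            end loop;
        end loop;
        dbms_output.put_line('FOR loop x1000: '
            || to_char(dbms_utility.get_time - l_start) || ' hsecs');
    end;
    /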

    If you want to get 100% benefit of bulk collect then it must be implemented as below:
    declare
         Cursor cur_emp
         is
         SELECT     e.last_name, j.job_title
         FROM     employees e,jobs j
         where     e.job_id=j.job_id
                   and  e.job_id LIKE '%CLERK%'
                   AND e.manager_id > 120
         ORDER BY e.last_name;
         l_start number;
         type rec_type is table of varchar2(20);
         name_rec rec_type;
         job_rec rec_type;
    begin
         l_start:= DBMS_UTILITY.get_time;
         dbms_lock.sleep(1);
     OPEN cur_emp;
     LOOP
          -- fetch BOTH collections; the cursor selects two columns
          FETCH cur_emp BULK COLLECT INTO name_rec, job_rec LIMIT 100;
              EXIT WHEN name_rec.COUNT=0;
              FOR j in 1..name_rec.COUNT
              LOOP
                   DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));          
              END LOOP;
              EXIT WHEN cur_emp%NOTFOUND;
         END LOOP;
            CLOSE cur_emp;
         DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
    end;
    /
