Having a problem using BULK COLLECT - FORALL

Hi,
I'm facing a problem while setting a table type's values before inserting them into a table using FORALL.
My concern is that I'm unable to generate the values in the FOR loop; using dbms_output.put_line, I observed that after the first 100 rows the process exits with the error
ORA-22160: element at index [1] does not exist
ORA-06512: at "XYZ", line 568
ORA-06512: at line 2
I need to use the values stored in the FOR loop, in the same order, for insertion into table TEMP using FORALL.
I'm guessing that I'm using the wrong technique for storing values in the FOR loop.
Below is my stored procedure's structure.
Any suggestion would be helpful.
Thanks!!
create or replace procedure XYZ is
  cursor cur is
    select A, B, C, D from ABCD; --- expecting a 40,000-row fetch
  type t_A is table of ABCD.A%type index by pls_integer;
  type t_B is table of ABCD.B%type index by pls_integer;
  type t_C is table of ABCD.C%type index by pls_integer;
  type t_D is table of ABCD.D%type index by pls_integer;
  v_A t_A;
  v_B t_B;
  v_C t_C;
  v_D t_D;
  type t_E is table of VARCHAR2(100);
  type t_F is table of VARCHAR2(100);
  v_E t_E := t_E();
  v_F t_F := t_F();
begin
  open cur;
  loop
    fetch cur BULK COLLECT INTO v_A, v_B, v_C, v_D limit 100;
    for i in 1 .. v_A.count loop
      v_E.extend(i);
      select 1111 into v_E(i) from dual;
      v_F.extend(i);
      v_F(i) := 'Hi !!';
      ---- calculating v_E(i) and v_F(i) here ----
    end loop;
    forall i in 1 .. v_A.count
      insert into TEMP values (v_E(i), v_F(i));
    exit when cur%NOTFOUND;
  end loop;
  close cur;
end;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE     10.2.0.4.0     Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
-------

"The problem is that inside the IF ELSIF blocks I need to query various tables."
As I thought. But why did you concentrate on BULK COLLECT - FORALL?
"The cursor, whereas, does take more time to execute."
More time than what?
"We have a join of two tables with 18,00,000 (normal table) and 17,92,2067 (MView) records, with an index on one of the join columns. After joining these two and adding the filter conditions I end up with around 40,000 rows."
So you have a cursor query. And then inside the loop you have a query which returns 40,000 rows? What do you do with that data?
Is the query you show running INSIDE the loop?
I guess you are still talking about the LOOP query and you are unhappy that it is not taking an index?
1. The loop is NOT the problem. It's the "... I need to query various tables".
2. Oracle is OK when it's NOT taking the index. That is faster!!
3. If you add code and execution plans, please add tags. Otherwise it's unreadable.
Try to merge your LOOP query with the "various tables" and make ONE query out of 40,000 * various ;-)
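
For what it is worth, here is a minimal sketch of the batch pattern with the two pitfalls in the posted structure removed (the per-row EXTEND bookkeeping and the early cur%NOTFOUND exit); the two VARCHAR2 columns of TEMP are assumed from the post:

create or replace procedure XYZ is
  cursor cur is
    select A, B, C, D from ABCD;
  type t_A is table of ABCD.A%type index by pls_integer;
  type t_B is table of ABCD.B%type index by pls_integer;
  type t_C is table of ABCD.C%type index by pls_integer;
  type t_D is table of ABCD.D%type index by pls_integer;
  v_A t_A;
  v_B t_B;
  v_C t_C;
  v_D t_D;
  -- index-by tables: assigning v_E(i) creates the element, no EXTEND needed
  type t_V is table of VARCHAR2(100) index by pls_integer;
  v_E t_V;
  v_F t_V;
begin
  open cur;
  loop
    fetch cur bulk collect into v_A, v_B, v_C, v_D limit 100;
    -- test the batch count, not cur%NOTFOUND: %NOTFOUND is already TRUE
    -- on the last, partially filled batch and would drop those rows
    exit when v_A.count = 0;
    for i in 1 .. v_A.count loop
      v_E(i) := '1111';  -- plain assignment, no SELECT ... FROM dual round trip
      v_F(i) := 'Hi !!';
      ---- calculate v_E(i) and v_F(i) here ----
    end loop;
    forall i in 1 .. v_A.count
      insert into TEMP values (v_E(i), v_F(i));
  end loop;
  close cur;
end;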

Similar Messages

  • How to use BULK COLLECT, FORALL and TREAT

    There is a need to read, match, and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
    All the examples I have seen do an insert into a table. How do I go about reading all values of a given field, fetching other relevant data, and then updating the custom table with the data fetched?
    Defined an object with specifics like this
    CREATE OR REPLACE TYPE imei_ot AS OBJECT (
    recid NUMBER,
    imei VARCHAR2(30),
    STORE VARCHAR2(100),
    status VARCHAR2(1),
    TIMESTAMP DATE,
    order_number VARCHAR2(30),
    order_type VARCHAR2(30),
    sku VARCHAR2(30),
    order_date DATE,
    attribute1 VARCHAR2(240),
    market VARCHAR2(240),
    processed_flag VARCHAR2(1),
    last_update_date DATE
    );
    Now within a package procedure I have defined the following:
    type imei_ott is table of imei_ot;
    imei_ntt imei_ott;
    begin
    SELECT imei_ot (recid,
    imei,
    STORE,
    status,
    TIMESTAMP,
    order_number,
    order_type,
    sku,
    order_date,
    attribute1,
    market,
    processed_flag,
    last_update_date)
    BULK COLLECT INTO imei_ntt
    FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
    co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
    msi.segment1 || '.' || msi.segment3,
    TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
    stg.processed_flag, SYSDATE
    FROM custom_orders co,
    custom_order_lines col,
    custom_stg stg,
    mtl_system_items_b msi
    WHERE co.header_id = col.header_id
    AND msi.inventory_item_id = col.inventory_item_id
    AND msi.organization_id =
    (SELECT organization_id
    FROM hr_all_organization_units_tl
    WHERE NAME = 'Item Master'
    AND source_lang = USERENV ('LANG'))
    AND stg.imei = col.serial_number
    AND stg.processed_flag = 'U');
    /* Update staging table in one go for COR order data */
    FORALL indx IN 1 .. imei_ntt.COUNT
    UPDATE custom_stg
    SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
    status = TREAT (imei_ntt (indx) AS imei_ot).status,
    TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
    order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
    order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
    sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
    order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
    attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
    market = TREAT (imei_ntt (indx) AS imei_ot).market,
    processed_flag =
    TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
    last_update_date =
    TREAT (imei_ntt (indx) AS imei_ot).last_update_date
    WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
    AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
    DBMS_OUTPUT.put_line (TO_CHAR (SQL%ROWCOUNT)
    || ' rows updated using Bulk Collect / For All.');
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
    WHEN OTHERS
    THEN
    DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
    END;
    Now for the unfortunate part. When I compile the package, I get this error:
    PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
    I am not sure where I am wrong. The object type has the last_update_date field, and the custom table has the same field.
    Could someone please throw some light on this and offer a suggestion?
    Thanks
    uds

    I suspect your error comes from the »bulk collect into« and not from the »forall loop«.
    At first glance, you need to alias SYSDATE as last_update_date, and some of the other select items need to be aliased as well:
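    For instance, a sketch of the aliased version (the cip alias has no matching table in the posted FROM list, so it is carried over as-is; tstamp is used as an alias because TIMESTAMP is an Oracle keyword):
    select imei_ot (recid,
                    imei,
                    store,
                    status,
                    tstamp,
                    order_number,
                    order_type,
                    sku,
                    order_date,
                    attribute1,
                    market,
                    processed_flag,
                    last_update_date)
      bulk collect into imei_ntt
      from (select stg.recid                           as recid,
                   stg.imei                            as imei,
                   cip.store_location                  as store,
                   'S'                                 as status,
                   co.rtl_txn_timestamp                as tstamp,
                   co.rtl_order_number                 as order_number,
                   'CUST'                              as order_type,
                   msi.segment1 || '.' || msi.segment3 as sku,
                   trunc (co.txn_timestamp)            as order_date,
                   col.part_number                     as attribute1,
                   'ZZ'                                as market,
                   stg.processed_flag                  as processed_flag,
                   sysdate                             as last_update_date
              from custom_orders co,
                   custom_order_lines col,
                   custom_stg stg,
                   mtl_system_items_b msi
             where co.header_id = col.header_id
               and msi.inventory_item_id = col.inventory_item_id
               and msi.organization_id =
                      (select organization_id
                         from hr_all_organization_units_tl
                        where name = 'Item Master'
                          and source_lang = userenv ('LANG'))
               and stg.imei = col.serial_number
               and stg.processed_flag = 'U');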
    But a simplified version would be
    select imei_ot (stg.recid,
                    stg.imei,
                    cip.store_location,
                    'S',
                    co.rtl_txn_timestamp,
                    co.rtl_order_number,
                    'CUST',
                    msi.segment1 || '.' || msi.segment3,
                    trunc (co.txn_timestamp),
                    col.part_number,
                    'ZZ',
                    stg.processed_flag,
                    sysdate)
    bulk collect into imei_ntt
      from custom_orders co,
           custom_order_lines col,
           custom_stg stg,
           mtl_system_items_b msi
    where co.header_id = col.header_id
       and msi.inventory_item_id = col.inventory_item_id
       and msi.organization_id =
                  (select organization_id
                     from hr_all_organization_units_tl
                    where name = 'Item Master' and source_lang = userenv ('LANG'))
       and stg.imei = col.serial_number
       and stg.processed_flag = 'U';
    ...

  • Exceptional handling using Bulk Collect FORALL

    Hi Members,
    Following are by DB details
    SELECT * FROM v$version;                 
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    PL/SQL Release 9.2.0.7.0 - Production
    CORE     9.2.0.7.0     Production
    TNS for HPUX: Version 9.2.0.7.0 - Production
    NLSRTL Version 9.2.0.7.0 - Production
    I need to handle exceptions during the Bulk Collect FORALL operation and update the table column STATUS_FLAG to 'E' for the given item name and extension id. Below is the code snippet:
    declare
      k  NUMBER;
    cursor c_list_price
    IS
    SELECT  /*+ leading(a)  */    item_name,
                             'PUBLISH_TO_PRICE_CATALOG' ATTRIBUTE_NAME,
                             MAX(DECODE (attribute_name,
                                         'PUBLISH_TO_PRICE_CATALOG',
                                         attribute_value))
                                ATTRIBUTE_VALUE,
                             NULL GEOGRAPHY_LEVEL,
                             NULL GEOGRAPHY_VALUE,
                             NULL INCLUDE_GEOGRAPHY,
                             NULL EFFECTIVE_START_DATE,
                             NULL EFFECTIVE_TO_DATE,
                             EXTENSION_ID,
                             NULL PRICING_UNIT,
                             NULL ATTRIBUTE_FROM,
                             NULL ATTRIBUTE_TO,
                             NULL FIXED_BASE_PRICE,
                             NULL PARENT_ATTRIBUTE,
                             NULL DURATION_QUANTITY,
                             NULL ATTRIBUTE2,
                             NULL ATTRIBUTE3,
                             NULL ATTRIBUTE4,
                             NULL ATTRIBUTE5,
                             NULL ATTRIBUTE6,
                             NULL ATTRIBUTE7,
                             NULL ATTRIBUTE8,
                             NULL ATTRIBUTE9,
                             NULL ATTRIBUTE10,
                             NULL ATTRIBUTE11,
                             NULL ATTRIBUTE12,
                             NULL ATTRIBUTE13,
                             NULL ATTRIBUTE14,
                             NULL ATTRIBUTE15,
                             --ORG_CODE,
                             ITEM_CATALOG_CATEGORY ITEM_CATEGORY,
                             a.INVENTORY_ITEM_ID INVENTORY_ITEM_ID
                      FROM   XXCPD_ITM_SEC_UDA_DETAILS_TB a
                     WHERE   DECODE(attribute_group_name,'LIST_PRICE',DECODE(STATUS_FLAG,'E',1,'P',2,3)
                                    ,'XXITM_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',4,'P',5,6)
                                    ,'XXITM_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',4,'P',5,6)
                                    ,'XXITM_ADV_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',7,'P',8,9)
                                    ,'XXITM_ADV_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',7,'P',8,9)
                                    ,'XXITM_ADV_FORM_PRICING_HWSW',DECODE(STATUS_FLAG,'E',10,'P',11,12)
                                    ,'XXITM_ADV_FORM_PRICING_SWSUB',DECODE(STATUS_FLAG,'E',10,'P',11,12)
                                    ,'XXITM_ROUTE_TO_MARKET',DECODE(STATUS_FLAG,'E',13,'P',14,15)) in (1,2,3)
                             AND exists(select /*+ no_merge */  '1' from mtl_system_items_b b where a.inventory_item_id=b.inventory_item_id and b.organization_id=1)
                  GROUP BY   item_name,
                             extension_id,
                             ITEM_CATALOG_CATEGORY,
                             INVENTORY_ITEM_ID;
    TYPE myarray IS TABLE OF c_list_price%ROWTYPE;
    c_lp myarray;
    BEGIN
      OPEN c_list_price;
      LOOP
      FETCH  c_list_price BULK COLLECT INTO  c_lp
                                      LIMIT 50000;
      IF c_lp.count = 0 THEN
        EXIT;
      END IF;
      Begin
      FORALL i IN c_lp.FIRST..c_lp.LAST SAVE EXCEPTIONS
        INSERT /*+append*/ INTO XXCPD_ITEM_PRICING_ATTR_BKP2(
                                                                   line_id,
                                                                   ITEM_NAME,
                                                                   ATTRIBUTE_NAME,
                                                                   ATTRIBUTE_VALUE,
                                                                   GEOGRAPHY_LEVEL,
                                                                   GEOGRAPHY_VALUE,
                                                                   INCLUDE_GEOGRAPHY,
                                                                   EFFECTIVE_START_DATE,
                                                                   EFFECTIVE_TO_DATE,
                                                                   EXTENSION_ID,
                                                                   PRICING_UNIT,
                                                                   ATTRIBUTE_FROM,
                                                                   ATTRIBUTE_TO,
                                                                   FIXED_BASE_PRICE,
                                                                   PARENT_ATTRIBUTE,
                                                                   DURATION_QUANTITY,
                                                                   ATTRIBUTE2,
                                                                   ATTRIBUTE3,
                                                                   ATTRIBUTE4,
                                                                   ATTRIBUTE5,
                                                                   ATTRIBUTE6,
                                                                   ATTRIBUTE7,
                                                                   ATTRIBUTE8,
                                                                   ATTRIBUTE9,
                                                                   ATTRIBUTE10,
                                                                   ATTRIBUTE11,
                                                                   ATTRIBUTE12,
                                                                   ATTRIBUTE13,
                                                                   ATTRIBUTE14,
                                                                   ATTRIBUTE15,
                                                                   ITEM_CATEGORY,
                                                                   INVENTORY_ITEM_ID)
                                                            VALUES (
                                                                    xxcpd.xxcpd_if_prc_line_id_s.NEXTVAL
                                                                  ,c_lp(i).ITEM_NAME
                                                                  ,c_lp(i).ATTRIBUTE_NAME     
                                                                  ,c_lp(i).ATTRIBUTE_VALUE    
                                                                  ,c_lp(i).GEOGRAPHY_LEVEL    
                                                                  ,c_lp(i).GEOGRAPHY_VALUE    
                                                                  ,c_lp(i).INCLUDE_GEOGRAPHY  
                                                                  ,c_lp(i).EFFECTIVE_START_DATE
                                                                  ,c_lp(i).EFFECTIVE_TO_DATE  
                                                                  ,c_lp(i).EXTENSION_ID       
                                                                  ,c_lp(i).PRICING_UNIT       
                                                                  ,c_lp(i).ATTRIBUTE_FROM     
                                                                  ,c_lp(i).ATTRIBUTE_TO       
                                                                  ,c_lp(i).FIXED_BASE_PRICE  
                                                                  ,c_lp(i).PARENT_ATTRIBUTE   
                                                                  ,c_lp(i).DURATION_QUANTITY  
                                                                  ,c_lp(i).ATTRIBUTE2         
                                                                  ,c_lp(i).ATTRIBUTE3         
                                                                  ,c_lp(i).ATTRIBUTE4         
                                                                  ,c_lp(i).ATTRIBUTE5         
                                                                  ,c_lp(i).ATTRIBUTE6         
                                                                  ,c_lp(i).ATTRIBUTE7         
                                                                  ,c_lp(i).ATTRIBUTE8         
                                                                  ,c_lp(i).ATTRIBUTE9         
                                                                  ,c_lp(i).ATTRIBUTE10        
                                                                  ,c_lp(i).ATTRIBUTE11        
                                                                  ,c_lp(i).ATTRIBUTE12        
                                                                  ,c_lp(i).ATTRIBUTE13        
                                                                  ,c_lp(i).ATTRIBUTE14        
                                                                  ,c_lp(i).ATTRIBUTE15        
                                                                  ,c_lp(i).ITEM_CATEGORY      
                                                                  ,c_lp(i).INVENTORY_ITEM_ID);
        EXCEPTION
        WHEN OTHERS THEN
        FOR j IN 1..SQL%bulk_exceptions.Count
            LOOP
                        UPDATE XXCPD_ITM_SEC_UDA_DETAILS_TB
                        SET status_flag = 'E',
                            last_updated_by = 1,
                            last_update_date = SYSDATE
                        WHERE item_name = c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).item_name
                        AND  extension_id=c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).extension_id;
                        COMMIT;
                 IF c_list_price%ISOPEN THEN
                      CLOSE c_list_price;
                 END IF;
                FND_FILE.put_line(FND_FILE.output,'********************Exception Occured*************:-- '||SYSTIMESTAMP);
            END LOOP;
         END;
      END LOOP;
      CLOSE  c_list_price;
        COMMIT;
    end;
    and I am getting the following error:
    ORA-06550: line 156, column 47:
    PL/SQL: ORA-00911: invalid character
    ORA-06550: line 152, column 21:
    PL/SQL: SQL Statement ignored
    pointing to the following lines in the exception block's UPDATE clause:
    WHERE item_name = c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).item_name
                        AND  extension_id=c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).extension_id; can some one please help me out with the issue.
    Thanks in Advance

    I have re-written the code in the following manner:
    declare
      lv_ITEM_NAME            DBMS_SQL.VARCHAR2S;
      lv_ITEM_CATEGORY        DBMS_SQL.VARCHAR2S;
      lv_INVENTORY_ITEM_ID    DBMS_SQL.NUMBER_TABLE;
      lv_ATTRIBUTE_NAME       DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE_VALUE      DBMS_SQL.VARCHAR2S;
      lv_GEOGRAPHY_LEVEL      DBMS_SQL.VARCHAR2S;
      lv_GEOGRAPHY_VALUE      DBMS_SQL.VARCHAR2S;
      lv_INCLUDE_GEOGRAPHY    DBMS_SQL.VARCHAR2S;
      lv_EFFECTIVE_START_DATE DBMS_SQL.date_table;
      lv_EFFECTIVE_TO_DATE    DBMS_SQL.date_table;
      lv_EXTENSION_ID         DBMS_SQL.NUMBER_TABLE;
      lv_PRICING_UNIT         DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE_FROM       DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE_TO         DBMS_SQL.VARCHAR2S;
      lv_FIXED_BASE_PRICE     DBMS_SQL.NUMBER_TABLE;
      lv_PARENT_ATTRIBUTE     DBMS_SQL.VARCHAR2S;
      lv_DURATION_QUANTITY    DBMS_SQL.NUMBER_TABLE;
      lv_ATTRIBUTE2           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE3           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE4           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE5           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE6           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE7           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE8           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE9           DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE10          DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE11          DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE12          DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE13          DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE14          DBMS_SQL.VARCHAR2S;
      lv_ATTRIBUTE15          DBMS_SQL.VARCHAR2S;
        l_item_name XXCPD_ITM_SEC_UDA_DETAILS_TB.item_name%TYPE;
        l_extension_id XXCPD_ITM_SEC_UDA_DETAILS_TB.extension_id%TYPE;
    cursor c_list_price
    IS
    SELECT  /*+ leading(a)  */    item_name,
                             'PUBLISH_TO_PRICE_CATALOG' ATTRIBUTE_NAME,
                             MAX(DECODE (attribute_name,
                                         'PUBLISH_TO_PRICE_CATALOG',
                                         attribute_value))
                                ATTRIBUTE_VALUE,
                             NULL GEOGRAPHY_LEVEL,
                             NULL GEOGRAPHY_VALUE,
                             NULL INCLUDE_GEOGRAPHY,
                             NULL EFFECTIVE_START_DATE,
                             NULL EFFECTIVE_TO_DATE,
                             EXTENSION_ID,
                             NULL PRICING_UNIT,
                             NULL ATTRIBUTE_FROM,
                             NULL ATTRIBUTE_TO,
                             NULL FIXED_BASE_PRICE,
                             NULL PARENT_ATTRIBUTE,
                             NULL DURATION_QUANTITY,
                             NULL ATTRIBUTE2,
                             NULL ATTRIBUTE3,
                             NULL ATTRIBUTE4,
                             NULL ATTRIBUTE5,
                             NULL ATTRIBUTE6,
                             NULL ATTRIBUTE7,
                             NULL ATTRIBUTE8,
                             NULL ATTRIBUTE9,
                             NULL ATTRIBUTE10,
                             NULL ATTRIBUTE11,
                             NULL ATTRIBUTE12,
                             NULL ATTRIBUTE13,
                             NULL ATTRIBUTE14,
                             NULL ATTRIBUTE15,
                             --ORG_CODE,
                             ITEM_CATALOG_CATEGORY ITEM_CATEGORY,
                             a.INVENTORY_ITEM_ID INVENTORY_ITEM_ID
                      FROM   XXCPD_ITM_SEC_UDA_DETAILS_TB a
                     WHERE   DECODE(attribute_group_name,'LIST_PRICE',DECODE(STATUS_FLAG,'E',1,'P',2,3)
                                                       ,'XXITM_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',4,'P',5,6)
                                                       ,'XXITM_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',4,'P',5,6)
                                                       ,'XXITM_ADV_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',7,'P',8,9)
                                                       ,'XXITM_ADV_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',7,'P',8,9)
                                                       ,'XXITM_ADV_FORM_PRICING_HWSW',DECODE(STATUS_FLAG,'E',10,'P',11,12)
                                                       ,'XXITM_ADV_FORM_PRICING_SWSUB',DECODE(STATUS_FLAG,'E',10,'P',11,12)
                                                       ,'XXITM_ROUTE_TO_MARKET',DECODE(STATUS_FLAG,'E',13,'P',14,15)) in (1,2,3)
                             AND exists(select /*+ no_merge */  '1' from mtl_system_items_b b where a.inventory_item_id=b.inventory_item_id and b.organization_id=1)
                  GROUP BY   item_name,
                             extension_id,
                             ITEM_CATALOG_CATEGORY,
                             INVENTORY_ITEM_ID;
    BEGIN
      OPEN c_list_price;
      LOOP
      lv_ITEM_NAME.delete;          
      lv_ITEM_CATEGORY.delete;      
      lv_INVENTORY_ITEM_ID.delete;  
      lv_ATTRIBUTE_NAME.delete;     
      lv_ATTRIBUTE_VALUE.delete;    
      lv_GEOGRAPHY_LEVEL.delete;    
      lv_GEOGRAPHY_VALUE.delete;    
      lv_INCLUDE_GEOGRAPHY.delete;  
      lv_EFFECTIVE_START_DATE.delete;
      lv_EFFECTIVE_TO_DATE.delete;  
      lv_EXTENSION_ID.delete;       
      lv_PRICING_UNIT.delete;       
      lv_ATTRIBUTE_FROM.delete;     
      lv_ATTRIBUTE_TO.delete;       
      lv_FIXED_BASE_PRICE.delete;   
      lv_PARENT_ATTRIBUTE.delete;   
      lv_DURATION_QUANTITY.delete;  
      lv_ATTRIBUTE2.delete;         
      lv_ATTRIBUTE3.delete;         
      lv_ATTRIBUTE4.delete;         
      lv_ATTRIBUTE5.delete;         
      lv_ATTRIBUTE6.delete;         
      lv_ATTRIBUTE7.delete;         
      lv_ATTRIBUTE8.delete;         
      lv_ATTRIBUTE9.delete;         
      lv_ATTRIBUTE10.delete;        
      lv_ATTRIBUTE11.delete;        
      lv_ATTRIBUTE12.delete;        
      lv_ATTRIBUTE13.delete;        
      lv_ATTRIBUTE14.delete;        
      lv_ATTRIBUTE15.delete;        
      FETCH  c_list_price BULK COLLECT INTO   lv_ITEM_NAME
                                             ,lv_ATTRIBUTE_NAME     
                                             ,lv_ATTRIBUTE_VALUE    
                                             ,lv_GEOGRAPHY_LEVEL    
                                             ,lv_GEOGRAPHY_VALUE    
                                             ,lv_INCLUDE_GEOGRAPHY  
                                             ,lv_EFFECTIVE_START_DATE
                                             ,lv_EFFECTIVE_TO_DATE  
                                             ,lv_EXTENSION_ID       
                                             ,lv_PRICING_UNIT       
                                             ,lv_ATTRIBUTE_FROM     
                                             ,lv_ATTRIBUTE_TO       
                                             ,lv_FIXED_BASE_PRICE   
                                             ,lv_PARENT_ATTRIBUTE   
                                             ,lv_DURATION_QUANTITY  
                                             ,lv_ATTRIBUTE2         
                                             ,lv_ATTRIBUTE3         
                                             ,lv_ATTRIBUTE4         
                                             ,lv_ATTRIBUTE5         
                                             ,lv_ATTRIBUTE6         
                                             ,lv_ATTRIBUTE7         
                                             ,lv_ATTRIBUTE8         
                                             ,lv_ATTRIBUTE9         
                                             ,lv_ATTRIBUTE10        
                                             ,lv_ATTRIBUTE11        
                                             ,lv_ATTRIBUTE12        
                                             ,lv_ATTRIBUTE13        
                                             ,lv_ATTRIBUTE14        
                                             ,lv_ATTRIBUTE15        
                                             ,lv_ITEM_CATEGORY      
                                             ,lv_INVENTORY_ITEM_ID
                                      LIMIT 50000;
      IF lv_INVENTORY_ITEM_ID.count = 0 THEN
        EXIT;
      END IF;
      Begin
      FORALL I IN 1..lv_INVENTORY_ITEM_ID.count SAVE EXCEPTIONS
        INSERT /*+append*/ INTO XXCPD_ITEM_PRICING_ATTR_BKP2(
                                                                   line_id,
                                                                   ITEM_NAME,
                                                                   ATTRIBUTE_NAME,
                                                                   ATTRIBUTE_VALUE,
                                                                   GEOGRAPHY_LEVEL,
                                                                   GEOGRAPHY_VALUE,
                                                                   INCLUDE_GEOGRAPHY,
                                                                   EFFECTIVE_START_DATE,
                                                                   EFFECTIVE_TO_DATE,
                                                                   EXTENSION_ID,
                                                                   PRICING_UNIT,
                                                                   ATTRIBUTE_FROM,
                                                                   ATTRIBUTE_TO,
                                                                   FIXED_BASE_PRICE,
                                                                   PARENT_ATTRIBUTE,
                                                                   DURATION_QUANTITY,
                                                                   ATTRIBUTE2,
                                                                   ATTRIBUTE3,
                                                                   ATTRIBUTE4,
                                                                   ATTRIBUTE5,
                                                                   ATTRIBUTE6,
                                                                   ATTRIBUTE7,
                                                                   ATTRIBUTE8,
                                                                   ATTRIBUTE9,
                                                                   ATTRIBUTE10,
                                                                   ATTRIBUTE11,
                                                                   ATTRIBUTE12,
                                                                   ATTRIBUTE13,
                                                                   ATTRIBUTE14,
                                                                   ATTRIBUTE15,
                                                                   ITEM_CATEGORY,
                                                                   INVENTORY_ITEM_ID)
                                                            VALUES (
                                                                    xxcpd.xxcpd_if_prc_line_id_s.NEXTVAL
                                                                  ,lv_ITEM_NAME(i)
                                                                  ,lv_ATTRIBUTE_NAME(i)     
                                                                  ,lv_ATTRIBUTE_VALUE(i)    
                                                                  ,lv_GEOGRAPHY_LEVEL(i)    
                                                                  ,lv_GEOGRAPHY_VALUE(i)    
                                                                  ,lv_INCLUDE_GEOGRAPHY(i)  
                                                                  ,lv_EFFECTIVE_START_DATE(i)
                                                                  ,lv_EFFECTIVE_TO_DATE(i)  
                                                                  ,lv_EXTENSION_ID(i)       
                                                                  ,lv_PRICING_UNIT(i)       
                                                                  ,lv_ATTRIBUTE_FROM(i)     
                                                                  ,lv_ATTRIBUTE_TO(i)       
                                                                  ,lv_FIXED_BASE_PRICE(i)  
                                                                  ,lv_PARENT_ATTRIBUTE(i)   
                                                                  ,lv_DURATION_QUANTITY(i)  
                                                                  ,lv_ATTRIBUTE2(i)         
                                                                  ,lv_ATTRIBUTE3(i)         
                                                                  ,lv_ATTRIBUTE4(i)         
                                                                  ,lv_ATTRIBUTE5(i)         
                                                                  ,lv_ATTRIBUTE6(i)         
                                                                  ,lv_ATTRIBUTE7(i)         
                                                                  ,lv_ATTRIBUTE8(i)         
                                                                  ,lv_ATTRIBUTE9(i)         
                                                                  ,lv_ATTRIBUTE10(i)        
                                                                  ,lv_ATTRIBUTE11(i)        
                                                                  ,lv_ATTRIBUTE12(i)        
                                                                  ,lv_ATTRIBUTE13(i)        
                                                                  ,lv_ATTRIBUTE14(i)        
                                                                  ,lv_ATTRIBUTE15(i)        
                                                                  ,lv_ITEM_CATEGORY(i)      
                                                                  ,lv_INVENTORY_ITEM_ID(i));
        EXCEPTION
        WHEN OTHERS THEN
        ROLLBACK;
        FOR j IN 1..SQL%bulk_exceptions.Count
            LOOP
            l_item_name:=lv_ITEM_NAME(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
            l_extension_id:=lv_EXTENSION_ID(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
                        UPDATE XXCPD_ITM_SEC_UDA_DETAILS_TB
                        SET status_flag = 'E',
                            last_updated_by = 1,
                            last_update_date = SYSDATE
                        WHERE item_name = l_item_name
                        AND  extension_id=l_extension_id;
                        COMMIT;
                FND_FILE.put_line(FND_FILE.output,'********************Exception Occured*************:-- '||SYSTIMESTAMP);
            END LOOP;
         END;
      END LOOP;
      CLOSE  c_list_price;
        COMMIT;
    end;
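
    A side note on the handler above: WHEN OTHERS also swallows errors that have nothing to do with the FORALL. SAVE EXCEPTIONS signals its collected errors with ORA-24381 specifically, so a tighter pattern traps just that. A sketch only (the INSERT is abbreviated to the statement shown above):
    declare
      bulk_errors exception;
      pragma exception_init(bulk_errors, -24381);
    begin
      FORALL i IN 1..lv_INVENTORY_ITEM_ID.count SAVE EXCEPTIONS
        INSERT INTO XXCPD_ITEM_PRICING_ATTR_BKP2 (...); -- same insert as above
    exception
      when bulk_errors then
        FOR j IN 1..SQL%BULK_EXCEPTIONS.COUNT LOOP
          -- ERROR_CODE is stored as a positive number; negate it for SQLERRM
          FND_FILE.put_line(FND_FILE.output,
              'Failed row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX
              || ': ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    end;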

  • Bulk collect forall vs single merge statement

    I understand that a single DML statement is better than BULK COLLECT / FORALL with intermediate commits. My only concern is that if I'm loading a large amount of data, like 100 million records, into an 800-million-record table with foreign keys and indexes, and the session gets killed, the rollback might take a long time, which is not acceptable. BULK COLLECT / FORALL with interval commits is slower than a single straight MERGE statement, but in the case of a dead session the rollback time won't be as bad, and a reload of the not-yet-committed data will not be as bad either. To design a data load that can recover without that penalty, is BULK COLLECT + FORALL the right approach?

    1. specifics about the actual data available
    2. the location/source of the data
    3. whether NOLOGGING is appropriate
    4. whether PARALLEL is an option
    1. I need to transform the data first, so I can build the staging tables to have the same structure as the tables I'm loading into.
    2. It's in the same database (11.2)
    3. I cannot use NOLOGGING or APPEND because I need to allow DML in the target table, and I cannot afford to lose the data in case of failure.
    4. PARALLEL is an option. I've done some research on DBMS_PARALLEL_EXECUTE and it sounds very cool. Can this be used to load two tables? I have parent and child tables. I can chunk the data and load these two tables separately, but the one requirement is that I need to commit them together: I cannot load a chunk into the parent table and commit before I load the corresponding chunk into its child table. Can this be done using DBMS_PARALLEL_EXECUTE? If so, I think this would be the perfect solution, since it looks like exactly what I'm looking for. However, if this doesn't work, is BULK COLLECT + FORALL the best option I am left with?
    What is the underlying technology of DBMS_PARALLEL_EXECUTE?
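
    Regarding the underlying technology: DBMS_PARALLEL_EXECUTE splits the driving table into chunks (by rowid, by a number column, or by a user query) and runs your statement once per chunk as DBMS_SCHEDULER jobs, with each chunk committed on its own. Because the per-chunk statement can be an anonymous block, one chunk can insert its parent rows and the matching child rows before that chunk's single commit. A minimal sketch (the task name, staging table, and load_chunk procedure are assumptions, not your objects):
    BEGIN
      DBMS_PARALLEL_EXECUTE.create_task('load_task');
      DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
          task_name   => 'load_task',
          table_owner => USER,
          table_name  => 'STAGING_TABLE',   -- assumed driving table
          by_row      => TRUE,
          chunk_size  => 10000);
      -- load_chunk (assumed) loads parent and child rows for one rowid range;
      -- the framework commits each chunk after its block succeeds
      DBMS_PARALLEL_EXECUTE.run_task(
          task_name      => 'load_task',
          sql_stmt       => 'BEGIN load_chunk(:start_id, :end_id); END;',
          language_flag  => DBMS_SQL.native,
          parallel_level => 8);
      DBMS_PARALLEL_EXECUTE.drop_task('load_task');
    END;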

  • Bulk collect / forall which collection type?

    Hi, I am trying to speed up the query below using bulk collect / forall:
    SELECT h.cust_order_no AS custord, l.shipment_set AS sset
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '12384'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL
    GROUP BY h.cust_order_no, l.shipment_set
    I would like to extract the 2 fields selected above into a new table as quickly as possible, but I'm fairly new to Oracle and I'm finding it difficult to work out the best way to do it. The query below does not work (no doubt there are numerous issues), but hopefully it is sufficiently developed to show the sort of thing I am trying to achieve:
    DECLARE
    TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
    TYPE xsset IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
    TYPE xarray IS TABLE OF tp_a1_tab%rowtype INDEX BY BINARY_INTEGER;
    v_xarray xarray;
    v_xcustord xcustord;
    v_xsset xsset;
    CURSOR cur IS
    SELECT h.cust_order_no AS custord, l.shipment_set AS sset
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '1111'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL;
    BEGIN
    OPEN cur;
    LOOP
    FETCH cur
    BULK COLLECT INTO v_xarray LIMIT 10000;
    EXIT WHEN v_xarray.COUNT = 0;
    FORALL i IN 1 .. v_xarray.COUNT
    INSERT INTO TP_A1_TAB (cust_order_no, shipment_set)
    VALUES (v_xarray(i).cust_order_no,v_xarray(i).shipment_set);
    commit;
    END LOOP;
    CLOSE cur;
    END;
    I am running on Oracle 9i release 2.
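
    (For an extract of just these two grouped columns, a single INSERT ... SELECT, which works on 9iR2 as well, may be all that is needed; a sketch reusing the posted query and the column names from the posted INSERT:)
    INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
    SELECT h.cust_order_no, l.shipment_set
      FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
     WHERE h.message_id = l.message_id
       AND h.contract = '12384'
       AND l.shipment_set IS NOT NULL
       AND h.cust_order_no IS NOT NULL
     GROUP BY h.cust_order_no, l.shipment_set;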

    Well, I suppose I can share the whole query as it stands, for context and information (see below).
    This is a very ugly piece of code that I am trying to improve. The advantage it currently has is
    that it works; the disadvantage is that it is very slow. My thoughts were that bulk collect
    and forall might be useful tools here as part of the re-write, hence my original question, but perhaps not.
    So, on a more general note, any advice on how best to speed up this code would be welcome:
    CREATE OR REPLACE VIEW DLP_TEST AS
    WITH aa AS
    (SELECT h.cust_order_no AS c_ref,l.shipment_set AS shipset,
    l.line_id, h.message_id, l.message_line_no,
    l.vendor_part_no AS part, l.rqst_quantity AS rqst_qty, l.quantity AS qty,
    l.status AS status, h.rowversion AS allocation_date
    FROM info.tlp_in_messaging_hdr h
    LEFT JOIN  info.tlp_in_messaging_lin l
    ON h.message_id = l.message_id
    WHERE h.contract = '12384'
    AND h.cust_order_no IS NOT NULL
    UNION ALL
    SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset,
    lo.line_id, ho.message_id, lo.message_line_no,
    lo.vendor_part_no AS part,lo.rqst_quantity AS rqst_qty, lo.quantity AS qty,
    lo.status AS status, ho.rowversion AS allocation_date
    FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
    WHERE ho.message_id = lo.message_id
    AND ho.contract = '12384'
    AND ho.cust_order_no IS NOT NULL),
    a1 AS
    (SELECT h.cust_order_no AS custord, l.shipment_set AS sset
    FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
    WHERE h.message_id = l.message_id
    AND h.contract = '12384'
    AND l.shipment_set IS NOT NULL
    AND h.cust_order_no IS NOT NULL
    GROUP BY h.cust_order_no, l.shipment_set),
    a2 AS
    (SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset
    FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
    WHERE ho.message_id = lo.message_id
    AND ho.contract = '12384'
    AND ho.message_type = '3B13'
    AND lo.shipment_set IS NOT NULL
    GROUP BY ho.cust_order_no, lo.shipment_set),
    a3 AS
    (SELECT a1.custord, a1.sset, CONCAT('SHIPSET',a1.sset) AS ssset
    FROM a1
    LEFT OUTER JOIN a2
    ON a1.custord = a2.c_ref AND a1.sset = a2.shipset
    WHERE a2.c_ref IS NULL),
    bb AS
    (SELECT so.contract, so.order_no, sr.service_request_no AS sr_no, sr.reference_no,
    substr(sr.part_no,8) AS shipset,
    substr(note_text,1,instr(note_text,'.',1,1)-1) AS Major_line,
    note_text AS CISCO_line,ma.part_no,
    (Select TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')
    FROM ifsapp.document_text d WHERE so.note_id = d.note_id AND so.contract = '12384') AS Print_Date,
    ma.note_text
    FROM (ifsapp.service_request sr
    LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
    ON sr.service_request_no = ws.wo_no
    LEFT OUTER JOIN ifsapp.shop_ord so
    ON ws.order_no = so.order_no)
    LEFT OUTER JOIN ifsapp.customer_order co
    ON sr.reference_no = co.cust_ref
    LEFT OUTER JOIN ifsapp.shop_material_alloc ma
    ON so.order_no = ma.order_no
    JOIN a3
    ON a3.custord = sr.reference_no AND a3.sset = substr(sr.part_no,8)
    WHERE sr.part_contract = '12384'
    AND so.contract = '12384'
    AND co.contract = '12384'
    AND sr.reference_no IS NOT NULL
    AND ma.part_no NOT LIKE 'SHIPSET%'),
    cc AS
    (SELECT 
    bb.reference_no,
    bb.shipset,
    bb.order_no,
    bb.cisco_line,
    aa.message_id,
    aa.allocation_date,
    row_number() over(PARTITION BY bb.reference_no, bb.shipset, aa.allocation_date
    ORDER BY bb.reference_no, bb.shipset, aa.allocation_date, aa.message_id DESC) AS selector
    FROM bb
    LEFT OUTER JOIN aa
    ON  bb.reference_no = aa.c_ref AND bb.shipset = aa.shipset
    WHERE aa.allocation_date <= bb.print_date
    OR aa.allocation_date IS NULL
    OR bb.print_date IS NULL),
    dd AS
    (SELECT
    MAX(reference_no) AS reference_no,
    MAX(shipset) AS shipset,
    order_no,
    MAX(allocation_date) AS allocation_date
    FROM cc
    WHERE selector = 1                                   
    GROUP BY order_no, selector),
    ee AS
    (SELECT
    smx.order_no,
    SUM(smx.qty_assigned) AS total_allocated,
    SUM(smx.qty_issued) AS total_issued,
    SUM(smx.qty_required) AS total_required
    FROM ifsapp.shop_material_alloc smx
    WHERE smx.contract = '12384'
    AND smx.part_no NOT LIKE 'SHIPSET%'
    GROUP BY smx.order_no),
    ff AS
    (SELECT
    dd.reference_no,
    dd.shipset,
    dd.order_no,
    MAX(allocation_date) AS last_allocation,
    MAX(ee.total_allocated) AS total_allocated,
    MAX(ee.total_issued) AS total_issued,
    MAX(ee.total_required) AS total_required
    FROM dd
    LEFT OUTER JOIN ee
    ON dd.order_no = ee.order_no
    GROUP BY dd.reference_no, dd.shipset, dd.order_no),
    base AS
    (SELECT x.order_no, x.part_no, z.rel_no, MIN(x.dated) AS dated, MIN(y.cust_ref) AS cust_ref, MIN(z.line_no) AS line_no,
    MIN(y.state) AS state, MIN(y.contract) AS contract, MIN(z.demand_order_ref1) AS demand_order_ref1
    FROM ifsapp.inventory_transaction_hist x, ifsapp.customer_order y, ifsapp.customer_order_line z
    WHERE x.contract = '12384'
    AND x.order_no = y.order_no
    AND y.order_no = z.order_no
    AND x.transaction = 'REP-OESHIP'
    AND x.part_no = z.part_no
    AND TRUNC(x.dated) >= SYSDATE - 8
    GROUP BY x.order_no, x.part_no, z.rel_no)
    SELECT
    DISTINCT
    bb.contract,
    bb.order_no,
    bb.sr_no,
    CAST('-' AS varchar2(40)) AS Usr,
    CAST('-' AS varchar2(40)) AS Name,
    CAST('01' AS number) AS Operation,
    CAST('Last Reservation' AS varchar2(40)) AS Action,
    CAST('-' AS varchar2(40)) AS Workcenter,
    CAST('-' AS varchar2(40)) AS Next_Workcenter_no,
    CAST('Print SO' AS varchar2(40)) AS Next_WC_Description,
    ff.total_allocated,
    ff.total_issued,
    ff.total_required,
    ff.shipset,
    ff.last_allocation AS Action_date,
    ff.reference_no
    FROM ff
    LEFT OUTER JOIN bb
    ON bb.order_no = ff.order_no
    WHERE bb.order_no IS NOT NULL
    UNION ALL
    SELECT
    c.contract AS Site, c.order_no AS Shop_Order, b.wo_no AS SR_No,
    CAST('-' AS varchar2(40)) AS UserID,
    CAST('-' AS varchar2(40)) AS User_Name,
    CAST('02' AS number) AS Operation,
    CAST('SO Printed' AS varchar2(40)) AS Action,
    CAST('SOPRINT' AS varchar2(40)) AS Workcenter,
    CAST('PKRPT' AS varchar2(40)) AS Next_Workcenter_no,
    CAST('Pickreport' AS varchar2(40)) AS Next_WC_Description,
    CAST('0' AS number) AS Total_Allocated,
    CAST('0' AS number) AS Total_Issued,
    CAST('0' AS number) AS Total_Required,
    e.part_no AS Ship_Set,
    TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')AS Action_Date,
    f.cust_ref AS cust_ref
    FROM ifsapp.shop_ord c
    LEFT OUTER JOIN ifsapp.work_order_shop_ord b
    ON b.order_no = c.order_no
    LEFT OUTER JOIN ifsapp.document_text d
    ON d.note_id = c.note_id
    LEFT OUTER JOIN ifsapp.customer_order_line e
    ON e.demand_order_ref1 = TRIM(to_char(b.wo_no))
    LEFT OUTER JOIN ifsapp.customer_order f
    ON f.order_no = e.order_no
    JOIN a3
    ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
    WHERE c.contract = '12384'
    AND e.contract = '12384'
    AND d.objversion IS NOT NULL
    UNION ALL
    SELECT
    a.site AS Site, a.order_no AS Shop_Order, b.wo_no AS SR_No, a.userid AS UserID,
    a.user_name AS Name, a.operation_no AS Operation, a.action AS Action,
    a.workcenter_no AS Workcenter, a.next_work_center_no AS Next_Workcenter_no,
    (SELECT d.description  FROM ifsapp.work_center d WHERE a.next_work_center_no = d.work_center_no AND a.site = d.contract)
    AS Next_WC_Description,
    CAST('0' AS number) AS Total_Allocated,
    CAST('0' AS number) AS Total_Issued,
    CAST('0' AS number) AS Total_Required,
    e.part_no AS Ship_set,
    a.action_date AS Action_Date, f.cust_ref AS cust_ref
    FROM ifsapp.shop_ord c
    LEFT OUTER JOIN ifsapp.work_order_shop_ord b
    ON b.order_no = c.order_no
    LEFT OUTER JOIN ifsapp.customer_order_line e
    ON e.demand_order_ref1 = to_char(b.wo_no)
    LEFT OUTER JOIN ifsapp.customer_order f
    ON f.order_no = e.order_no
    LEFT OUTER JOIN info.tp_hvt_so_op_hist a
    ON a.order_no = c.order_no
    JOIN a3
    ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
    WHERE a.site = '12384'
    AND c.contract = '12384'
    AND e.contract = '12384'
    AND f.contract = '12384'
    UNION ALL
    SELECT so.contract AS Site, so.order_no AS Shop_Order_No, sr.service_request_no AS SR_No,
    CAST('-' AS varchar2(40)) AS "User",
    CAST('-' AS varchar2(40)) AS "Name",
    CAST('999' AS number) AS "Operation",
    CAST('Shipped' AS varchar2(40)) AS "Action",
    CAST('SHIP' AS varchar2(40)) AS "Workcenter",
    CAST('-' AS varchar2(40)) AS "Next_Workcenter_no",
    CAST('-' AS varchar2(40)) AS "Next_WC_Description",
    CAST('0' AS number) AS Total_Allocated,
    CAST('0' AS number) AS Total_Issued,
    CAST('0' AS number) AS Total_Required,
    so.part_no AS ship_set, base.dated AS Action_Date,
    sr.reference_no AS CUST_REF
    FROM base
    LEFT OUTER JOIN ifsapp.service_request sr
    ON base.demand_order_ref1 = sr.service_request_no
    LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
    ON sr.service_request_no = ws.wo_no
    LEFT OUTER JOIN ifsapp.shop_ord so
    ON ws.order_no = so.order_no
    WHERE base.contract = '12384';

  • Using bulk collect and for all to solve a problem

    Hi All
    I have a following problem.
    Please forgive me if it's a stupid question :-) I'm learning.
    1: Data in a staging table xx_staging_table
    2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
    Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
    The two target tables use different set of columns from the staging table
    When I had a thousand records there was no problem with a direct insert, but it seems we will now have half a million records.
    This has slowed down the process considerably.
    My question is:
    can I use the BULK COLLECT and FORALL functionality to get specific columns from a staging table, then validate the row using those columns,
    and then use a bulk insert to load the data into a specific table?
    So code would be like
    The get_staging_data cursor will have all the columns I need from the staging table:
    cursor get_staging_data
    is select * from xx_staging_table; -- about 500,000 records
    Use bulk collect to load about 10,000 or so records at a time into a PL/SQL table
    and then do a bulk insert like this:
    CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;
    CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
    IS
    TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
    l_data ARRAY;
    CURSOR c IS SELECT * FROM all_objects;
    BEGIN
    OPEN c;
    LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1..l_data.COUNT
    INSERT INTO t1 VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
    END LOOP;
    CLOSE c;
    END test_proc;
    In the above example t1 and the cursor have the same number of columns.
    In my case the columns in the cursor are a small subset of the columns of table t1,
    so can I use a FORALL to load that subset into table t1? How does that work?
    Thanks
    J
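
    (On the subset question: before 11g, individual record fields cannot be referenced inside FORALL DML (error PLS-00436), so one workable sketch fetches the subset into separate scalar collections and names the target columns explicitly; assuming only OWNER and OBJECT_NAME are wanted from the ALL_OBJECTS example above:)
    DECLARE
      TYPE owner_tab IS TABLE OF all_objects.owner%TYPE;
      TYPE name_tab  IS TABLE OF all_objects.object_name%TYPE;
      l_owner owner_tab;
      l_name  name_tab;
      CURSOR c IS SELECT owner, object_name FROM all_objects;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_owner, l_name LIMIT 100;
        EXIT WHEN l_owner.COUNT = 0;
        FORALL i IN 1 .. l_owner.COUNT
          INSERT INTO t1 (owner, object_name)  -- name only the subset of t1's columns
          VALUES (l_owner(i), l_name(i));
      END LOOP;
      CLOSE c;
    END;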

    user7348303 wrote:
    checking if the value is valid and there are also some conditional processing rules (such as: if the value is a certain value, no inserts are needed) which are a little more complex than I can put in a simple
    Well, if the processing is too complex (and conditional) to be done in SQL, then doing it in PL/SQL is justified... but it will be slower, as you are now introducing an additional layer. Data needs to travel between the SQL layer and the PL/SQL layer. This is slower.
    PL/SQL is inherently serialised - and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion. SQL processes can.
    To put it in simple terms: you create PL/SQL procedure Foo that processes a SQL cursor, and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It can perhaps parallelise the SQL cursor that Foo uses - but not Foo itself.
    However, if Foo is called by the SQL engine, it can run in parallel - as the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe, parallel-enabled function, it can be called and executed in parallel by the SQL engine.
    So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running the same basic PL/SQL processing as a serialised process.
    This is of course assuming that the processing that needs to be done using PL/SQL code, can be designed and coded for parallel processing in this fashion.
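
    A minimal sketch of such a parallel-enabled pipelined function (every name, the row type, and the validation rule here are illustrative assumptions):
    CREATE TYPE stg_row_t AS OBJECT (id NUMBER, val VARCHAR2(100));
    /
    CREATE TYPE stg_tab_t AS TABLE OF stg_row_t;
    /
    CREATE OR REPLACE FUNCTION transform_stg (p_cur SYS_REFCURSOR)
      RETURN stg_tab_t PIPELINED
      PARALLEL_ENABLE (PARTITION p_cur BY ANY)
    IS
      l_id  NUMBER;
      l_val VARCHAR2(100);
    BEGIN
      LOOP
        FETCH p_cur INTO l_id, l_val;
        EXIT WHEN p_cur%NOTFOUND;
        IF l_val IS NOT NULL THEN            -- conditional validation goes here
          PIPE ROW (stg_row_t(l_id, UPPER(l_val)));
        END IF;
      END LOOP;
      CLOSE p_cur;
      RETURN;
    END;
    /
    -- driven by parallel SQL, so Oracle can run N copies of the function:
    INSERT /*+ APPEND */ INTO target_table (id, val)
    SELECT id, val
      FROM TABLE(transform_stg(CURSOR(
             SELECT /*+ PARALLEL(s 4) */ id, val FROM xx_staging_table s)));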

  • How to use Bulk Collect and Forall

    Hi all,
    We are on Oracle 10g. I have a requirement to read from table A and then, for each record in table A, find matching rows in table B and write the identified information from table B to the target table (table C). In the past, I used two 'cursor for loops' to achieve that. To make the new procedure more efficient, I would like to learn to use 'bulk collect' and 'forall'.
    Here is what I have so far:
    DECLARE
    TYPE employee_array IS TABLE OF EMPLOYEES%ROWTYPE;
    employee_data  employee_array;
    TYPE job_history_array IS TABLE OF JOB_HISTORY%ROWTYPE;
    Job_history_data   job_history_array;
    BatchSize CONSTANT POSITIVE := 5;
    -- Read from File A
    CURSOR c_get_employees IS
             SELECT  Employee_id,
                       first_name,
                       last_name,
                       hire_date,
                       job_id
              FROM EMPLOYEES;
    -- Read from File B based on employee ID in File A
    CURSOR c_get_job_history (p_employee_id number) IS
             select start_date,
                      end_date,
                      job_id,
                      department_id
             FROM JOB_HISTORY
             WHERE employee_id = p_employee_id;
    BEGIN
        OPEN c_get_employees;
        LOOP
            FETCH c_get_employees BULK COLLECT INTO employee_data.employee_id.LAST,
                                                                              employee_data.first_name.LAST,
                                                                              employee_data.last_name.LAST,
                                                                              employee_data.hire_date.LAST,
                                                                              employee_data.job_id.LAST
             LIMIT BatchSize;
            FORALL i in 1.. employee_data.COUNT
                    Open c_get_job_history (employee_data(i).employee_id);
                    FETCH c_get_job_history BULKCOLLECT INTO job_history_array LIMIT BatchSize;
                             FORALL k in 1.. Job_history_data.COUNT LOOP
                                            -- insert into FILE C
                                              INSERT INTO MY_TEST(employee_id, first_name, last_name, hire_date, job_id)
                                                                values (job_history_array(k).employee_id, job_history_array(k).first_name,
                                                                          job_history_array(k).last_name, job_history_array(k).hire_date,
                                                                          job_history_array(k).job_id);
                                             EXIT WHEN job_history_data.count < BatchSize
                             END LOOP;                          
                             CLOSE c_get_job_history;                          
                     EXIT WHEN employee_data.COUNT < BatchSize;
           END LOOP;
            COMMIT;
            CLOSE c_get_employees;
    END;
                     When I run this script, I get
    [Error] Execution (47: 17): ORA-06550: line 47, column 17:
    PLS-00103: Encountered the symbol "OPEN" when expecting one of the following:
       . ( * @ % & - + / at mod remainder rem select update with
       <an exponent (**)> delete insert || execute multiset save
       merge
    ORA-06550: line 48, column 17:
    PLS-00103: Encountered the symbol "FETCH" when expecting one of the following:
       begin function package pragma procedure subtype type use
       <an identifier> <a double-quoted delimited-identifier> form
   current cursor
What is the best way to code this? Once I learn how to do this, I will apply the knowledge to the real application, in which file A would have around 200 rows and file B hundreds of thousands of rows.
    Thank you for your guidance,
    Seyed
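For reference, here is one way the same two-cursor logic could be written so it compiles on 10g; this is a minimal sketch, not the poster's code. The MY_TEST column list is an assumption for illustration, and the scalar collections sidestep the 10g restriction on referencing record fields inside FORALL. Note that FORALL can drive exactly one DML statement, so the inner cursor work happens in an ordinary FOR loop:

    DECLARE
        -- Collection types per fetched column; MY_TEST columns below are assumed.
        TYPE emp_id_tab IS TABLE OF EMPLOYEES.employee_id%TYPE;
        TYPE job_id_tab IS TABLE OF JOB_HISTORY.job_id%TYPE;
        TYPE date_tab   IS TABLE OF DATE;
        TYPE dept_tab   IS TABLE OF JOB_HISTORY.department_id%TYPE;
        l_emp_ids  emp_id_tab;
        l_job_ids  job_id_tab;
        l_starts   date_tab;
        l_ends     date_tab;
        l_depts    dept_tab;
        l_emp_id   EMPLOYEES.employee_id%TYPE;
        BatchSize CONSTANT POSITIVE := 5;
        CURSOR c_get_employees IS
            SELECT employee_id FROM EMPLOYEES;
        CURSOR c_get_job_history (p_employee_id NUMBER) IS
            SELECT job_id, start_date, end_date, department_id
              FROM JOB_HISTORY
             WHERE employee_id = p_employee_id;
    BEGIN
        OPEN c_get_employees;
        LOOP
            FETCH c_get_employees BULK COLLECT INTO l_emp_ids LIMIT BatchSize;
            EXIT WHEN l_emp_ids.COUNT = 0;  -- test COUNT, not %NOTFOUND, so a short last batch is processed
            -- Ordinary FOR loop here: FORALL cannot contain OPEN/FETCH statements.
            FOR i IN 1 .. l_emp_ids.COUNT LOOP
                OPEN c_get_job_history(l_emp_ids(i));
                FETCH c_get_job_history BULK COLLECT INTO l_job_ids, l_starts, l_ends, l_depts;
                CLOSE c_get_job_history;
                -- Scalar copy: only the FORALL index should subscript collections below.
                l_emp_id := l_emp_ids(i);
                FORALL k IN 1 .. l_job_ids.COUNT  -- FORALL wraps exactly one DML statement
                    INSERT INTO MY_TEST (employee_id, job_id, start_date, end_date, department_id)
                    VALUES (l_emp_id, l_job_ids(k), l_starts(k), l_ends(k), l_depts(k));
            END LOOP;
        END LOOP;
        CLOSE c_get_employees;
        COMMIT;
    END;
    /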

    Hello BlueShadow,
    Following your advice, I modified a stored procedure that initially was using two cursor for loops to read from tables A and B to write to table C to use instead something like your suggestion listed below:
    INSERT INTO tableC
    SELECT …
FROM tableA JOIN tableB on (join condition).
I tried this change on a procedure writing to tableC with keys disabled. I will try this against the real table that has the primary key and indexes and report the result later.
    Thank you very much,
    Seyed
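For readers landing here, a concrete shape of that rewrite against the same sample tables (the MY_TEST column list is assumed, as in the sketch above); a single set-based statement replaces both cursor loops:

    INSERT INTO MY_TEST (employee_id, job_id, start_date, end_date, department_id)
    SELECT e.employee_id,
           jh.job_id,
           jh.start_date,
           jh.end_date,
           jh.department_id
      FROM EMPLOYEES e
      JOIN JOB_HISTORY jh ON jh.employee_id = e.employee_id;
    COMMIT;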

  • PL/SQL using BULK COLLECT and MERGE

What I am trying to do is use BULK COLLECT to create an array of row data, then loop through the array and either insert or update a table; hence, MERGE:
    FORALL i in ID.first .. ID.last SAVE EXCEPTIONS
    MERGE INTO table1 t USING (
    select ID(i) ID, {other array fields...} from dual) s
ON (t.ID = s.ID)
    WHEN MATCHED THEN UPDATE...
    WHEN NOT MATCHED THEN INSERT ...
The problem is that Oracle always does an INSERT. Has anyone had the same problem? Any workaround?
    Thanks.

    in package header:
         TYPE ID_TYPE IS TABLE OF table1.ID%TYPE;
    in the proc, I declared
         ID ID_TYPE;
    then a bulk collect:
    select * from .. BULK COLLECT INTO ID...
In addition, I truncated the destination table and ran the proc: all records were inserts. Then I ran the same proc again, expecting all records to be updated, but inserts occurred again, causing exceptions due to violation of unique keys.
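For comparison, here is a minimal FORALL/MERGE sketch that compiles; the val column and source_table are made up for illustration, standing in for the "other array fields" above. Two syntax points: the ON condition must be parenthesised, and each array element should come through the USING subquery so it has a SQL alias:

    DECLARE
        TYPE id_tt  IS TABLE OF table1.ID%TYPE;
        TYPE val_tt IS TABLE OF VARCHAR2(100);  -- val stands in for the other array fields
        ids  id_tt;
        vals val_tt;
    BEGIN
        SELECT id, val BULK COLLECT INTO ids, vals
          FROM source_table;                    -- source_table/val are hypothetical
        FORALL i IN 1 .. ids.COUNT SAVE EXCEPTIONS
            MERGE INTO table1 t
            USING (SELECT ids(i) AS id, vals(i) AS val FROM dual) s
            ON (t.id = s.id)                    -- the ON condition must be parenthesised
            WHEN MATCHED THEN UPDATE SET t.val = s.val
            WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
        COMMIT;
    END;
    /

If the always-INSERT behaviour persists even with the corrected ON clause, a plain FOR loop around the same MERGE is a slower but reliable fallback for isolating whether the FORALL binding is the culprit.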

  • Need help with Bulk Collect ForAll Update

    Hi - I'm trying to do a Bulk Collect/ForAll Update but am having issues.
    My declarations look like this:
         CURSOR cur_hhlds_for_update is
            SELECT hsh.household_id, hsh.special_handling_type_id
              FROM compas.household_special_handling hsh
                 , scr_id_lookup s
             WHERE hsh.household_id = s.id
               AND s.scr = v_scr
               AND s.run_date = TRUNC (SYSDATE)
               AND effective_date IS NULL
               AND special_handling_type_id = 1
               AND created_by != v_user;
         TYPE rec_hhlds_for_update IS RECORD (
              household_id  HOUSEHOLD_SPECIAL_HANDLING.household_id%type,
          spec_handl_type_id HOUSEHOLD_SPECIAL_HANDLING.SPECIAL_HANDLING_TYPE_ID%type
     );
     TYPE spec_handling_update_array IS TABLE OF rec_hhlds_for_update;
     l_spec_handling_update_array  spec_handling_update_array;
And then the Bulk Collect/ForAll looks like this:
           OPEN cur_hhlds_for_update;
           LOOP
            FETCH cur_hhlds_for_update BULK COLLECT INTO l_spec_handling_update_array LIMIT 1000;
            EXIT WHEN l_spec_handling_update_array.count = 0;
            FORALL i IN 1..l_spec_handling_update_array.COUNT
            UPDATE compas.household_special_handling
               SET effective_date =  TRUNC(SYSDATE)
                 , last_modified_by = v_user
                 , last_modified_date = SYSDATE
             WHERE household_id = l_spec_handling_update_array(i).household_id
               AND special_handling_type_id = l_spec_handling_update_array(i).spec_handl_type_id;
              l_special_handling_update_cnt := l_special_handling_update_cnt + SQL%ROWCOUNT;         
      END LOOP;
And this is the error I'm receiving:
    ORA-06550: line 262, column 31:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
    ORA-06550: line 262, column 31:
    PLS-00382: expression is of wrong type
    ORA-06550: line 263, column 43:
    PL/SQL: ORA-22806: not an object or REF
    ORA-06550: line 258, column 9:
PL/SQL: SQL Statement ignored
My problem is that the table being updated has a composite primary key, so I have two conditions in my WHERE clause. This is the first time I'm even attempting the Bulk Collect/ForAll update, and it seems like it would be straightforward if I were only dealing with a single-column primary key. Can anyone please advise me on what I'm missing here or how I can accomplish this?
    Thanks!
    Christine

You cannot reference a field inside a record when doing a FORALL; you have to reference each collection as a whole. So you will need two collections.
Try like this,
    DECLARE
       CURSOR cur_hhlds_for_update
       IS
          SELECT hsh.household_id, hsh.special_handling_type_id
            FROM compas.household_special_handling hsh, scr_id_lookup s
           WHERE hsh.household_id = s.ID
             AND s.scr = v_scr
             AND s.run_date = TRUNC (SYSDATE)
             AND effective_date IS NULL
             AND special_handling_type_id = 1
             AND created_by != v_user;
       TYPE arr_household_id IS TABLE OF HOUSEHOLD_SPECIAL_HANDLING.household_id%TYPE
                                   INDEX BY BINARY_INTEGER;
       TYPE arr_spec_handl_type_id IS TABLE OF HOUSEHOLD_SPECIAL_HANDLING.SPECIAL_HANDLING_TYPE_ID%TYPE
                                         INDEX BY BINARY_INTEGER;
       l_household_id_col         arr_household_id;
       l_spec_handl_type_id_col   arr_spec_handl_type_id;
    BEGIN
       OPEN cur_hhlds_for_update;
       LOOP
          FETCH cur_hhlds_for_update
            BULK COLLECT INTO l_household_id_col, l_spec_handl_type_id_col
          LIMIT 1000;
      EXIT WHEN l_household_id_col.COUNT = 0; -- not %NOTFOUND, or the final partial batch is skipped
          FORALL i IN l_household_id_col.FIRST .. l_household_id_col.LAST
             UPDATE compas.household_special_handling
                SET effective_date = TRUNC (SYSDATE),
                    last_modified_by = v_user,
                    last_modified_date = SYSDATE
              WHERE household_id = l_household_id_col(i)
                AND special_handling_type_id = l_spec_handl_type_id_col(i);
       --l_special_handling_update_cnt := l_special_handling_update_cnt + SQL%ROWCOUNT; -- Not sure what this does.
   END LOOP;
   CLOSE cur_hhlds_for_update;
END;
G.
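(On the commented-out counter: after a FORALL completes, SQL%ROWCOUNT holds the total number of rows affected across all iterations, so re-enabling that line right after the UPDATE would keep a running count of updated rows.)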

  • Bulk collect- forall- Lock

    Hi,
I have been using BULK COLLECT with FORALL to insert into a partitioned table which has a unique constraint on 2 columns, but the partitioned table acquires a lock every time I run the procedure, and the lock isn't released even after the session is killed. The number of records I am trying to insert, with duplicate values, is about 2,000.
Can anyone suggest what the problem could be?
    Thanks in advance

    Hi,
    The Code is
    BEGIN
    strQuery := ' SELECT C1, C2, C3, C4, C5, C6, C7, C8, C9, C10 ';
              strQuery := strQuery ||' FROM TEMP';
              strQuery := strQuery ||' WHERE C3 = ''R'' AND C4 = ''' || Bid;
              strQuery := strQuery || ''' AND TO_CHAR(C8,''DDMMYYYY'') = ''' || CallDate||'''';
          EXECUTE IMMEDIATE strQuery BULK COLLECT INTO V_C1, V_C2, V_C3, V_C4, V_C5, V_C6, V_C7, V_C8, V_C9, V_C10;
    Count1 := SQL%rowcount;
    strQuery := 'INSERT INTO '|| PartitionTabName ||' (T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) VALUES (';
    strQuery := strQuery ||':a1, :a2 , :a3 , :a4 , :a5 , :a6 , :a7 , :a8 , :a9 , :a10 )';
    FORALL j IN 1.. Count1 save EXCEPTIONS
    EXECUTE IMMEDIATE strQuery USING V_C1(j) , V_C2(j) , V_C3(j) , V_C4(j) , V_C5(j) , V_C6(j) , V_C7(j) , V_C8(j) , V_C9(j) , V_C10(j);
    COMMIT;
    EXCEPTION
    WHEN OTHERS THEN
    Count1 := SQL%BULK_EXCEPTIONS.COUNT;
    DBMS_OUTPUT.PUT_LINE('ERROR '||Count1);
    COMMIT;
    END;
The composite unique key constraint on the PartitionTabName table covers T1, T2, T3, T4 and T5.
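One detail worth checking in code shaped like this: FORALL ... SAVE EXCEPTIONS reports failures by raising ORA-24381, not the underlying ORA-00001, so trapping it with a bare WHEN OTHERS and committing can mask what actually happened and leave the transaction's row locks in place longer than expected. A minimal sketch of the usual pattern, with illustrative table and column names (part_tab stands in for PartitionTabName):

    DECLARE
        TYPE num_tab IS TABLE OF NUMBER;
        v_c1 num_tab := num_tab(1, 2, 2);           -- the duplicate will violate a unique key
        bulk_errors EXCEPTION;
        PRAGMA EXCEPTION_INIT(bulk_errors, -24381); -- raised by FORALL ... SAVE EXCEPTIONS
    BEGIN
        FORALL j IN 1 .. v_c1.COUNT SAVE EXCEPTIONS
            INSERT INTO part_tab (t1) VALUES (v_c1(j));  -- part_tab is a stand-in name
        COMMIT;
    EXCEPTION
        WHEN bulk_errors THEN
            FOR k IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
                DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(k).ERROR_INDEX ||
                                     ' failed with ORA-' || SQL%BULK_EXCEPTIONS(k).ERROR_CODE);
            END LOOP;
            ROLLBACK;  -- or COMMIT the rows that did succeed, depending on the requirement
    END;
    /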

  • Bulk Collect Forall with CLOB

I have a 10.2.0.4 database that contains a PL/SQL procedure that copies data from a single remote 10.2.0.4 database table. The procedure will return anywhere from 50,000 to 500,000 rows of data. In testing I have made this a pretty speedy process using BULK COLLECT and using FORALL to load them in 2000-row batches. It has now been requested that I include an additional column from my source table which happens to be a CLOB datatype. However, when I try to perform this with the extra column I get the standard "cannot select remote lob locators" error. Does anyone know of a way to perform this using BULK COLLECT? I've seen countless examples of doing it using "INSERT INTO TABLEX SELECT COL1, COL2, COL3, etc FROM TABLE Y@DBLINK". I don't want to do it this way for performance reasons. Any suggestions would be greatly appreciated.

    sjm133 wrote:
I've seen countless examples of doing it using "INSERT INTO TABLEX SELECT COL1, COL2, COL3, etc FROM TABLE Y@DBLINK". I don't want to do it this way for performance reasons.
What performance reasons are those then?
    Best thing to do would be to give it a go and see, and then if you find problems with it look for alternatives. Don't dismiss solutions without trying them. ;)
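For reference, the set-based form being discussed is, as far as I know, also the usual way around the ORA-22992 "cannot use LOB locators selected from remote tables" restriction, since the CLOB is never fetched into a local PL/SQL variable. A sketch with stand-in names:

    -- local_copy, its columns, and remote_db are all illustrative names.
    INSERT /*+ APPEND */ INTO local_copy (id, name, big_text)
    SELECT id, name, big_text          -- big_text is the CLOB column
      FROM source_table@remote_db;
    COMMIT;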

  • Problem with BULK COLLECT with million rows - Oracle 9.0.1.4

We have a requirement where we are supposed to load 58 million rows into a FACT table in our data warehouse. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table and, in a loop, processes the rows and inserts the records into a TARGET table. The logic works fine but it took 20 hours to complete the load.
We then tried to leverage the BULK COLLECT, FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
1. We declared PL/SQL tables indexed by BINARY_INTEGER to store the data in memory.
2. We used BULK COLLECT to FETCH the data.
3. We used a FORALL statement while inserting the data.
We did not introduce any of our transformation logic yet.
We tried with 600,000 records first and it completed in 1 min and 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
    call ,pmucalm coll)
    ORA-06512: at "VVA.BULKLOAD", line 66
    ORA-06512: at line 1
    We got the same error even with 1 million rows.
    We do have the following configuration:
    SGA - 8.2 GB
    PGA
    - Aggregate Target - 3GB
    - Current Allocated - 439444KB (439 MB)
    - Maximum allocated - 2695753 KB (2.6 GB)
    Temp Table Space - 60.9 GB (Total)
    - 20 GB (Available approximately)
    I think we do have more than enough memory to process the 1 million rows!!
    Also, some times the same program results in the following error:
    SQL> exec bulkload
    BEGIN bulkload; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
    Thanks,
    Haranadh
    Following is the code:
    set echo off
    set timing on
    create or replace procedure bulkload as
    -- SOURCE --
    TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
    TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
    TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
    TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
    TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
    TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
    TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
    TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
    TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
    TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
    TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
    TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
    TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
    TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
    TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
    TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
    src_cpd_dt_array src_cpd_dt;
    src_acqr_ctry_cd_array      src_acqr_ctry_cd;
    src_acqr_pcr_ctry_cd_array     src_acqr_pcr_ctry_cd;
    src_issr_bin_array      src_issr_bin;
    src_mrch_locn_ref_id_array     src_mrch_locn_ref_id;
    src_ntwrk_id_array      src_ntwrk_id;
    src_stip_advc_cd_array      src_stip_advc_cd;
    src_authn_resp_cd_array      src_authn_resp_cd;
    src_authn_actvy_cd_array      src_authn_actvy_cd;
    src_resp_tm_id_array      src_resp_tm_id;
    src_mrch_ref_id_array      src_mrch_ref_id;
    src_issr_pcr_array      src_issr_pcr;
    src_issr_ctry_cd_array      src_issr_ctry_cd;
    src_acct_num_array      src_acct_num;
    src_tran_cnt_array      src_tran_cnt;
    src_usd_tran_amt_array      src_usd_tran_amt;
    j number := 1;
    CURSOR c1 IS
    SELECT
    cpd_dt,
    acqr_ctry_cd ,
    acqr_pcr_ctry_cd,
    issr_bin,
    mrch_locn_ref_id,
    ntwrk_id,
    stip_advc_cd,
    authn_resp_cd,
    authn_actvy_cd,
    resp_tm_id,
    mrch_ref_id,
    issr_pcr,
    issr_ctry_cd,
    acct_num,
    tran_cnt,
    usd_tran_amt
    FROM ima_ama_acct ima_ama_acct
    ORDER BY issr_bin;
    BEGIN
    OPEN c1;
    FETCH c1 bulk collect into
    src_cpd_dt_array ,
    src_acqr_ctry_cd_array ,
    src_acqr_pcr_ctry_cd_array,
    src_issr_bin_array ,
    src_mrch_locn_ref_id_array,
    src_ntwrk_id_array ,
    src_stip_advc_cd_array ,
    src_authn_resp_cd_array ,
    src_authn_actvy_cd_array ,
    src_resp_tm_id_array ,
    src_mrch_ref_id_array ,
    src_issr_pcr_array ,
    src_issr_ctry_cd_array ,
    src_acct_num_array ,
    src_tran_cnt_array ,
    src_usd_tran_amt_array ;
    CLOSE C1;
    FORALL j in 1 .. src_cpd_dt_array.count
    INSERT INTO ima_dly_acct (
         CPD_DT,
         ACQR_CTRY_CD,
         ACQR_TIER_CD,
         ACQR_PCR_CTRY_CD,
         ACQR_PCR_TIER_CD,
         ISSR_BIN,
         OWNR_BUS_ID,
         USER_BUS_ID,
         MRCH_LOCN_REF_ID,
         NTWRK_ID,
         STIP_ADVC_CD,
         AUTHN_RESP_CD,
         AUTHN_ACTVY_CD,
         RESP_TM_ID,
         PROD_REF_ID,
         MRCH_REF_ID,
         ISSR_PCR,
         ISSR_CTRY_CD,
         ACCT_NUM,
         TRAN_CNT,
         USD_TRAN_AMT)
         VALUES (
         src_cpd_dt_array(j),
         src_acqr_ctry_cd_array(j),
         null,
         src_acqr_pcr_ctry_cd_array(j),
              null,
              src_issr_bin_array(j),
              null,
              null,
              src_mrch_locn_ref_id_array(j),
              src_ntwrk_id_array(j),
              src_stip_advc_cd_array(j),
              src_authn_resp_cd_array(j),
              src_authn_actvy_cd_array(j),
              src_resp_tm_id_array(j),
              null,
              src_mrch_ref_id_array(j),
              src_issr_pcr_array(j),
              src_issr_ctry_cd_array(j),
              src_acct_num_array(j),
              src_tran_cnt_array(j),
              src_usd_tran_amt_array(j));
    COMMIT;
    END bulkload;
    SHOW ERRORS
    -----------------------------------------------------------------------------

    do you have a unique key available in the rows you are fetching?
It seems a cursor with 58 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
    What if you used a cursor to select a unique key only, and then during the cursor loop fetch each record, transform it, and insert it into the target?
It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
    My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
    For example:
    source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
    select a cursor of tx_seq_id only (even at 20 million rows this is not much)
    you could then either use a for loop or even bulk collect into a plsql collection or table
    then process individually like this:
procedure process_a_tx(p_tx_seq_id in number)
is
  rTX my_transactions%rowtype;
begin
  select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
  -- modify values as needed
  insert into my_target(a, b, c) values (rTX.tx_fact1, rTX.tx_fact2, rTX.tx_fact3);
  commit;
exception
  when others then
    rollback;
    -- write to a log, or raise an exception
end process_a_tx;
procedure collect_tx
is
  cursor cTx is
    select tx_seq_id from my_transactions;
begin
  for rTx in cTx loop
    process_a_tx(rTx.tx_seq_id);
  end loop;
end collect_tx;
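For completeness, the other standard fix for the ORA-04030 here is to keep the BULK COLLECT but cap each fetch with LIMIT, so only one batch of rows is ever in PGA at a time. A sketch of the same procedure's shape, shortened to two of the sixteen columns (the real list would carry them all):

    create or replace procedure bulkload as
        TYPE cpd_dt_tab   IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
        TYPE issr_bin_tab IS TABLE OF ima_ama_acct.issr_bin%TYPE;
        src_cpd_dt_array   cpd_dt_tab;
        src_issr_bin_array issr_bin_tab;
        c_limit CONSTANT PLS_INTEGER := 10000;  -- batch size: tune against available PGA
        CURSOR c1 IS
            SELECT cpd_dt, issr_bin
              FROM ima_ama_acct;  -- ORDER BY dropped here; sorting 58M rows costs temp space too
    BEGIN
        OPEN c1;
        LOOP
            FETCH c1 BULK COLLECT INTO src_cpd_dt_array, src_issr_bin_array LIMIT c_limit;
            EXIT WHEN src_cpd_dt_array.COUNT = 0;
            FORALL j IN 1 .. src_cpd_dt_array.COUNT
                INSERT INTO ima_dly_acct (cpd_dt, issr_bin)
                VALUES (src_cpd_dt_array(j), src_issr_bin_array(j));
            COMMIT;  -- per-batch commit; move after the loop if the load must be atomic
        END LOOP;
        CLOSE c1;
    END bulkload;
    /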

  • Can I use Bulk Collect results as input parameter for another cursor

    MUSIC            ==> remote MUSIC_DB database, MUSIC table has 60 million rows
    PRICE_DATA ==> remote PRICING_DB database, PRICE_DATE table has 1 billion rows
These two tables once existed in the same database, but the size of the database exceeded the available hardware size and hardware budget, so the PRICE_DATA table was moved to another Oracle database.  I need to create a single report that combines data from both of these tables, and a distributed join with a DRIVING_SITE hint will not work because both tables are too large to push to one DRIVING_SITE location, so I wrote this PL/SQL block to process in small batches.
QUESTION: how can I use BULK COLLECT from one cursor and pass that bulk-collected information as input to a second cursor without specifically listing each cell of the PL/SQL collection?  See the sample pseudo-code below; I am trying to find a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
NOTE: below is truly pseudo-code. I had to change the names of everything to adhere to an NDA, but below works and is fast enough for my purposes; however, if I want to change from 100 input parameters to 200, I have to add more hard-coded values.  There has got to be a better way.
    DECLARE
         -- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
         CURSOR C_CURRENT_MUSIC
         IS
        select distinct SONG_ID
        from MUSIC@MUSIC_DB
    where PRODUCTION_RELEASE=1;
     /*  define a parameterized cursor that accepts 100 SONG_IDs and retrieves
          required pricing information  */
     CURSOR C_get_music_price_data (
                   P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
                   P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
                   P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
                   P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
                   P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
                   P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
                   P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
                   P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
                   P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
               P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER )
         IS
         select
     from PRICE_DATA@PRICING_DB vpc
         where COUNTRY = 'USA'
         and START_DATE <= sysdate
         and END_DATE > sysdate
     and vpc.SONG_ID IN (
                   P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
                   P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
                   P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
                   P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
                   P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
                   P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
                   P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
                   P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
                   P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
               P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100 )
         group by
               vpc.SONG_ID
          ,vpc.STOREFRONT_ID;
     TYPE SONG_ID_TYPE IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;  -- SONG_ID is numeric
         V_SONG_ID_ARRAY                         SONG_ID_TYPE                     ;
         v_commit_counter           NUMBER := 0;
    BEGIN
     /* open the cursor you intend to bulk collect from */
         OPEN C_CURRENT_MUSIC;
         LOOP
          /* in batches of 100, bulk collect SONG_IDs into a PL/SQL table */
              FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
                   EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
                   /* to avoid NO DATA FOUND error when pass 100 parameters to OPEN cursor, if the arrary
                      is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
                   IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
                        FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
                             V_SONG_ID_ARRAY(j) := null;
                        END LOOP;
                   END IF;
              /* pass a batch of 100 to cursor that get price information per SONG_ID and STOREFRONT_ID */
          FOR j IN C_get_music_price_data (
                        V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
                        V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
                        V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
                        V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
                        V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
                        V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
                        V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
                        V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
                        V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
               V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100) )
              LOOP
               /* do stuff with data from Song and Pricing Database coming from the two
                    separate cursors, then continue processing more rows... */
              END LOOP;
              /* commit after each batch of 100 SONG_IDs is processed */        
              COMMIT;
              EXIT WHEN C_CURRENT_MUSIC%NOTFOUND;  -- exit when there are no more rows to fetch from cursor
         END LOOP; -- bulk fetching loop
         CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
         /* commit rows */
         COMMIT; -- commit any remaining uncommitted data.
    END;

I've got a problem when passing a VARRAY of numbers as a parameter to the remote cursor: it takes a super long time to run, sometimes not finishing even after an hour has passed.
Continuing with my example in the original entry, I replaced the BULK COLLECT into a PL/SQL table collection with a VARRAY, and I bulk collect into the VARRAY. This part is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly.  However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
The code example below strips the problem down to its raw details. I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long: over 30 minutes, sometimes longer, when I am expecting seconds.
IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's WHERE ... IN clause, the SQL runs in under 5 seconds and returns the result.  Also, if I pass the 100 SONG_IDs as individual cells of a PL/SQL table collection, then it also runs fast.
I thought that since the VARRAY is queried via a SELECT subquery it runs locally while the cursor is remote, and that I had a distributed-query problem on my hands, so I put in the DRIVING_SITE hint to attempt to force the query against the VARRAY to run at the remote server, with the rest of the query running there before returning the result, but that didn't work either; I still got slow response.
Is something wrong with my code, or am I running into an Oracle problem that may require Support to resolve?
    DECLARE
     /*  define a parameterized cursor that accepts XXX number of SONG_IDs and
      retrieves required pricing information  */
         CURSOR C_get_music_price_data
      p_array_song_ids SYS.ODCInumberList              
         IS
         select  /*+DRIVING_SITE(pd) */
      count(distinct s.EVE_ID)
         from PRICE_DATA@PRICING_DB pd
         where pd.COUNTRY = 'USA'
         and pd.START_DATE <= sysdate
         and pd.END_DATE > sysdate
     and pd.SONG_ID IN
          ( select column_value from table(p_array_song_ids) )
         group by
               pd.SONG_ID
          ,pd.STOREFRONT_ID;
      V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();    
    BEGIN
    V_ARRAY_SONG_IDS.EXTEND(100);
    V_ARRAY_SONG_IDS(  1 ) := 31135  ;
    V_ARRAY_SONG_IDS(  2 ) := 31140   ;
    V_ARRAY_SONG_IDS(  3 ) := 31142   ;
    V_ARRAY_SONG_IDS(  4 ) := 31144   ;
    V_ARRAY_SONG_IDS(  5 ) := 31146   ;
    V_ARRAY_SONG_IDS(  6 ) := 31148   ;
    V_ARRAY_SONG_IDS(  7 ) := 31150   ;
    V_ARRAY_SONG_IDS(  8 ) := 31152   ;
    V_ARRAY_SONG_IDS(  9 ) := 31154   ;
    V_ARRAY_SONG_IDS( 10 ) := 31156   ;
    V_ARRAY_SONG_IDS( 11 ) := 31158   ;
    V_ARRAY_SONG_IDS( 12 ) := 31160   ;
    V_ARRAY_SONG_IDS( 13 ) := 33598   ;
    V_ARRAY_SONG_IDS( 14 ) := 33603   ;
    V_ARRAY_SONG_IDS( 15 ) := 33605   ;
    V_ARRAY_SONG_IDS( 16 ) := 33607   ;
    V_ARRAY_SONG_IDS( 17 ) := 33609   ;
    V_ARRAY_SONG_IDS( 18 ) := 33611   ;
    V_ARRAY_SONG_IDS( 19 ) := 33613   ;
    V_ARRAY_SONG_IDS( 20 ) := 33615   ;
    V_ARRAY_SONG_IDS( 21 ) := 33617   ;
    V_ARRAY_SONG_IDS( 22 ) := 33630   ;
    V_ARRAY_SONG_IDS( 23 ) := 33632   ;
    V_ARRAY_SONG_IDS( 24 ) := 33636   ;
    V_ARRAY_SONG_IDS( 25 ) := 33638   ;
    V_ARRAY_SONG_IDS( 26 ) := 33640   ;
    V_ARRAY_SONG_IDS( 27 ) := 33642   ;
    V_ARRAY_SONG_IDS( 28 ) := 33644   ;
    V_ARRAY_SONG_IDS( 29 ) := 33646   ;
    V_ARRAY_SONG_IDS( 30 ) := 33648   ;
    V_ARRAY_SONG_IDS( 31 ) := 33662   ;
    V_ARRAY_SONG_IDS( 32 ) := 33667   ;
    V_ARRAY_SONG_IDS( 33 ) := 33669   ;
    V_ARRAY_SONG_IDS( 34 ) := 33671   ;
    V_ARRAY_SONG_IDS( 35 ) := 33673   ;
    V_ARRAY_SONG_IDS( 36 ) := 33675   ;
    V_ARRAY_SONG_IDS( 37 ) := 33677   ;
    V_ARRAY_SONG_IDS( 38 ) := 33679   ;
    V_ARRAY_SONG_IDS( 39 ) := 33681   ;
    V_ARRAY_SONG_IDS( 40 ) := 33683   ;
    V_ARRAY_SONG_IDS( 41 ) := 33685   ;
    V_ARRAY_SONG_IDS( 42 ) := 33700   ;
    V_ARRAY_SONG_IDS( 43 ) := 33702   ;
    V_ARRAY_SONG_IDS( 44 ) := 33704   ;
    V_ARRAY_SONG_IDS( 45 ) := 33706   ;
    V_ARRAY_SONG_IDS( 46 ) := 33708   ;
    V_ARRAY_SONG_IDS( 47 ) := 33710   ;
    V_ARRAY_SONG_IDS( 48 ) := 33712   ;
    V_ARRAY_SONG_IDS( 49 ) := 33723   ;
    V_ARRAY_SONG_IDS( 50 ) := 33725   ;
    V_ARRAY_SONG_IDS( 51 ) := 33727   ;
    V_ARRAY_SONG_IDS( 52 ) := 33729   ;
    V_ARRAY_SONG_IDS( 53 ) := 33731   ;
    V_ARRAY_SONG_IDS( 54 ) := 33733   ;
    V_ARRAY_SONG_IDS( 55 ) := 33735   ;
    V_ARRAY_SONG_IDS( 56 ) := 33737   ;
    V_ARRAY_SONG_IDS( 57 ) := 33749   ;
    V_ARRAY_SONG_IDS( 58 ) := 33751   ;
    V_ARRAY_SONG_IDS( 59 ) := 33753   ;
    V_ARRAY_SONG_IDS( 60 ) := 33755   ;
    V_ARRAY_SONG_IDS( 61 ) := 33757   ;
    V_ARRAY_SONG_IDS( 62 ) := 33759   ;
    V_ARRAY_SONG_IDS( 63 ) := 33761   ;
    V_ARRAY_SONG_IDS( 64 ) := 33763   ;
    V_ARRAY_SONG_IDS( 65 ) := 33775   ;
    V_ARRAY_SONG_IDS( 66 ) := 33777   ;
    V_ARRAY_SONG_IDS( 67 ) := 33779   ;
    V_ARRAY_SONG_IDS( 68 ) := 33781   ;
    V_ARRAY_SONG_IDS( 69 ) := 33783   ;
    V_ARRAY_SONG_IDS( 70 ) := 33785   ;
    V_ARRAY_SONG_IDS( 71 ) := 33787   ;
    V_ARRAY_SONG_IDS( 72 ) := 33789   ;
    V_ARRAY_SONG_IDS( 73 ) := 33791   ;
    V_ARRAY_SONG_IDS( 74 ) := 33793   ;
    V_ARRAY_SONG_IDS( 75 ) := 33807   ;
    V_ARRAY_SONG_IDS( 76 ) := 33809   ;
    V_ARRAY_SONG_IDS( 77 ) := 33811   ;
    V_ARRAY_SONG_IDS( 78 ) := 33813   ;
    V_ARRAY_SONG_IDS( 79 ) := 33815   ;
    V_ARRAY_SONG_IDS( 80 ) := 33817   ;
    V_ARRAY_SONG_IDS( 81 ) := 33819   ;
    V_ARRAY_SONG_IDS( 82 ) := 33821   ;
    V_ARRAY_SONG_IDS( 83 ) := 33823   ;
    V_ARRAY_SONG_IDS( 84 ) := 33825   ;
    V_ARRAY_SONG_IDS( 85 ) := 33839   ;
    V_ARRAY_SONG_IDS( 86 ) := 33844   ;
    V_ARRAY_SONG_IDS( 87 ) := 33846   ;
    V_ARRAY_SONG_IDS( 88 ) := 33848   ;
    V_ARRAY_SONG_IDS( 89 ) := 33850   ;
    V_ARRAY_SONG_IDS( 90 ) := 33852   ;
    V_ARRAY_SONG_IDS( 91 ) := 33854   ;
    V_ARRAY_SONG_IDS( 92 ) := 33856   ;
    V_ARRAY_SONG_IDS( 93 ) := 33858   ;
    V_ARRAY_SONG_IDS( 94 ) := 33860   ;
    V_ARRAY_SONG_IDS( 95 ) := 33874   ;
    V_ARRAY_SONG_IDS( 96 ) := 33879   ;
    V_ARRAY_SONG_IDS( 97 ) := 33881   ;
    V_ARRAY_SONG_IDS( 98 ) := 33883   ;
    V_ARRAY_SONG_IDS( 99 ) := 33885   ;
    V_ARRAY_SONG_IDS(100 ) := 33889  ;
    /* do stuff with data from Song and Pricing Database coming from the two
       separate cursors, then continue processing more rows... */
    FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
        NULL;  -- (this is the loop where I pass in v_array_song_ids
               --  populated with only 100 cells and it runs forever)
    END LOOP;
    END;
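One workaround to consider (named here explicitly; it is not from the thread itself): stage each batch of SONG_IDs in a global temporary table and use an ordinary join, so the distributed optimizer sees a real table with real statistics instead of a TABLE() expression over a PL/SQL collection. A sketch with assumed object names; whether the DRIVING_SITE hint helps still depends on the plan:

    -- One-time DDL (assumed name):
    -- CREATE GLOBAL TEMPORARY TABLE song_id_batch (song_id NUMBER) ON COMMIT DELETE ROWS;
    DECLARE
        v_ids SYS.ODCInumberList := SYS.ODCInumberList(31135, 31140, 31142);  -- one batch of IDs
    BEGIN
        FORALL i IN 1 .. v_ids.COUNT
            INSERT INTO song_id_batch (song_id) VALUES (v_ids(i));
        FOR r IN (SELECT /*+ DRIVING_SITE(pd) */
                         pd.SONG_ID, pd.STOREFRONT_ID, COUNT(DISTINCT pd.EVE_ID) AS cnt
                    FROM PRICE_DATA@PRICING_DB pd
                    JOIN song_id_batch b ON b.song_id = pd.SONG_ID
                   WHERE pd.COUNTRY = 'USA'
                     AND pd.START_DATE <= sysdate
                     AND pd.END_DATE > sysdate
                   GROUP BY pd.SONG_ID, pd.STOREFRONT_ID) LOOP
            NULL;  -- process each priced SONG_ID/STOREFRONT_ID here
        END LOOP;
        COMMIT;  -- ends the transaction and clears the temporary table
    END;
    /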

  • Problem in Bulk collect

    Hi ,
I have declared a cursor with BULK COLLECT. I have to insert only some of the columns' values from TABLEA into TABLEB; the number of columns in TABLEA and TABLEB is not equal.
    DECLARE
    CURSOR c1                                    
    IS
    SELECT COLUMN1,COLUMN2,COLUMN3,COLUMN5 FROM TABLEA WHERE COLUMNS1='XXXXXXXX';
    TYPE Cust_tab IS TABLE OF C1%ROWTYPE;
    Custs Cust_tab;
    BEGIN
         OPEN c1;
    LOOP
         FETCH c1 BULK COLLECT INTO Custs LIMIT 100;
    EXIT WHEN c1%NOTFOUND;               
         END LOOP;
         FORALL i IN 1 .. Custs.COUNT
    SAVE EXCEPTIONS
    INSERT into TABLEB(COL1,COL2,COL3,COL5) VALUES (Custs(i).COLUMN1,Custs(i).COLUMN2,Custs(i).COLUMN3,Custs(i).COLUMN5);
    END ;
I am getting an error:
    ERROR at line 16:
    ORA-06550: line 16, column 66:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
    table of records
    ORA-06550: line 16, column 66:
    PLS-00382: expression is of wrong type
    ORA-06550: line 16, column 88:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
    table of records
    ORA-06550: line 16, column 88:
    PLS-00382: expression is of wrong type
    ORA-06550: line 16, column 107:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
    table of records
    ORA-06550: line 16, column 107:
    PLS-00382: expression is of wrong type
    ORA-06550: line 16, column 124:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND
    table of records
    ORA-06550: line 16, column 124:
    PLS-00382: expression is of wrong type
    ORA-06550: line 16, column 66:
    PL/SQL: ORA-22806: not an object or REF
    ORA-06550: line 16, column 14:
    PL/SQL: SQL Statement ignored.
Please let me know how to resolve the error.

    INSERT into TABLEB(COL1,COL2,COL3,COL5)
    SELECT COLUMN1,COLUMN2,COLUMN3,COLUMN5 FROM TABLEA WHERE COLUMNS1='XXXXXXXX';
    works like a charm. Moreover: it SCALES!!
    There is no reason in this case to use BULK COLLECT.
    You should only use PL/SQL if you can't do it using SQL.
    You can do it using SQL: you should use SQL.
Other than that: just read the error message and realize you can't directly reference fields of a record collection in a FORALL; set up extra (scalar) collections and assign from the old collection to the new ones.
The code is wrong anyway, as it doesn't allow for more than one fetch.
    Sybrand Bakker
    Senior Oracle DBA
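If the PL/SQL route really is required, the workaround alluded to above looks roughly like this on 10g: copy the record fields into scalar collections, which FORALL can bind. This is a minimal sketch following the post's names, shortened to two columns (the rest are handled the same way); it also moves the FORALL inside the loop so every fetched batch is inserted:

    DECLARE
        CURSOR c1 IS
            SELECT column1, column2 FROM tablea WHERE columns1 = 'XXXXXXXX';
        TYPE cust_tab IS TABLE OF c1%ROWTYPE;
        TYPE col1_tab IS TABLE OF tablea.column1%TYPE;
        TYPE col2_tab IS TABLE OF tablea.column2%TYPE;
        custs cust_tab;
        c1s   col1_tab;
        c2s   col2_tab;
    BEGIN
        OPEN c1;
        LOOP
            FETCH c1 BULK COLLECT INTO custs LIMIT 100;
            EXIT WHEN custs.COUNT = 0;
            c1s := col1_tab();  c1s.EXTEND(custs.COUNT);
            c2s := col2_tab();  c2s.EXTEND(custs.COUNT);
            FOR i IN 1 .. custs.COUNT LOOP   -- peel the fields out of the records
                c1s(i) := custs(i).column1;
                c2s(i) := custs(i).column2;
            END LOOP;
            FORALL i IN 1 .. custs.COUNT     -- scalar collections bind fine
                INSERT INTO tableb (col1, col2) VALUES (c1s(i), c2s(i));
        END LOOP;
        CLOSE c1;
        COMMIT;
    END;
    /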

  • How to use BULK COLLECT in ref cursor?

    hi,
Can we use BULK COLLECT with a ref cursor? If yes, then please give a small example.
    thanks

    Try this:
    create or replace type person_ot as object (name varchar2(10)) not final;
    create or replace type student_ot under person_ot (s_num number) not final;
    create type person_tt as table of person_ot;
    create table persons of person_ot;
    declare
    lv_person_list person_tt;
    lv_sql varchar2(1000);
    ref_cur sys_refcursor;
    begin
    lv_sql:= 'select new student_ot(''fred'', 100) from dual
    union all
    select new student_ot(''sally'', 200) from dual';
    open ref_cur for lv_sql;
    fetch ref_cur bulk collect into lv_person_list;
    close ref_cur;
    for i in lv_person_list.first..lv_person_list.last loop
    dbms_output.put_line(lv_person_list(i).name );
    end loop;
    forall i in lv_person_list.first..lv_person_list.last
insert into persons values (lv_person_list(i));
    end;
    /
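(For what it's worth, the fetch here works because student_ot is substitutable for person_ot: the ref cursor returns subtype instances, and they can be bulk collected into a collection of the supertype.)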
