How to use BULK COLLECT, FORALL and TREAT
There is a need to read, match, and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
All the examples I have seen do an insert into a table. How do I go about reading all values of a given field, fetching the other relevant data, and then updating the custom table with the data fetched?
I defined an object type like this:
CREATE OR REPLACE TYPE imei_ot AS OBJECT (
recid NUMBER,
imei VARCHAR2(30),
STORE VARCHAR2(100),
status VARCHAR2(1),
TIMESTAMP DATE,
order_number VARCHAR2(30),
order_type VARCHAR2(30),
sku VARCHAR2(30),
order_date DATE,
attribute1 VARCHAR2(240),
market VARCHAR2(240),
processed_flag VARCHAR2(1),
last_update_date DATE
);
Now within a package procedure I have defined like this.
type imei_ott is table of imei_ot;
imei_ntt imei_ott;
begin
SELECT imei_ot (recid,
imei,
STORE,
status,
TIMESTAMP,
order_number,
order_type,
sku,
order_date,
attribute1,
market,
processed_flag,
last_update_date)
BULK COLLECT INTO imei_ntt
FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
msi.segment1 || '.' || msi.segment3,
TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
stg.processed_flag, SYSDATE
FROM custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
WHERE co.header_id = col.header_id
AND msi.inventory_item_id = col.inventory_item_id
AND msi.organization_id =
(SELECT organization_id
FROM hr_all_organization_units_tl
WHERE NAME = 'Item Master'
AND source_lang = USERENV ('LANG'))
AND stg.imei = col.serial_number
AND stg.processed_flag = 'U');
/* Update staging table in one go for COR order data */
FORALL indx IN 1 .. imei_ntt.COUNT
UPDATE custom_stg
SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
status = TREAT (imei_ntt (indx) AS imei_ot).status,
TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
market = TREAT (imei_ntt (indx) AS imei_ot).market,
processed_flag =
TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
last_update_date =
TREAT (imei_ntt (indx) AS imei_ot).last_update_date
WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
DBMS_OUTPUT.put_line ( TO_CHAR (SQL%ROWCOUNT)
|| ' rows updated using Bulk Collect / For All.');
EXCEPTION
WHEN NO_DATA_FOUND
THEN
DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
END;
Now for the unfortunate part. When I compile the package, I get this error:
PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
I am not sure where I am wrong. The object type has the last_update_date field, and the custom table also has the same field.
Could someone please throw some light on this and make a suggestion?
Thanks
uds
I suspect your error comes from the "bulk collect into" and not from the "forall" loop.
At first glance, you need to alias SYSDATE as last_update_date, and some of the other select items need to be aliased as well.
But a simplified version would be
select imei_ot (stg.recid,
stg.imei,
cip.store_location,
'S',
co.rtl_txn_timestamp,
co.rtl_order_number,
'CUST',
msi.segment1 || '.' || msi.segment3,
trunc (co.txn_timestamp),
col.part_number,
'ZZ',
stg.processed_flag,
sysdate)
bulk collect into imei_ntt
from custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
where co.header_id = col.header_id
and msi.inventory_item_id = col.inventory_item_id
and msi.organization_id =
(select organization_id
from hr_all_organization_units_tl
where name = 'Item Master' and source_lang = userenv ('LANG'))
and stg.imei = col.serial_number
and stg.processed_flag = 'U';
...
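One further simplification worth noting: because imei_ntt is declared as a table of the SQL object type imei_ot itself (not of a supertype), the TREAT casts in the FORALL should not be necessary on recent releases; object attributes can be referenced directly. A hedged sketch of the same update, reusing the names from the post:

```sql
FORALL indx IN 1 .. imei_ntt.COUNT
    UPDATE custom_stg
       SET store            = imei_ntt (indx).store,
           status           = imei_ntt (indx).status,
           order_number     = imei_ntt (indx).order_number,
           last_update_date = imei_ntt (indx).last_update_date
           -- remaining columns follow the same pattern
     WHERE recid = imei_ntt (indx).recid
       AND imei  = imei_ntt (indx).imei;
```

TREAT is only needed when the collection's element type is a supertype and you must downcast to reach subtype attributes.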
Similar Messages
-
How to use Bulk Collect and Forall
Hi all,
We are on Oracle 10g. I have a requirement to read from table A and then, for each record in table A, find matching rows in table B and then write the identified information in table B to the target table (table C). In the past, I had used two 'cursor for loops' to achieve that. To make the new procedure more efficient, I would like to learn to use 'bulk collect' and 'forall'.
Here is what I have so far:
DECLARE
TYPE employee_array IS TABLE OF EMPLOYEES%ROWTYPE;
employee_data employee_array;
TYPE job_history_array IS TABLE OF JOB_HISTORY%ROWTYPE;
Job_history_data job_history_array;
BatchSize CONSTANT POSITIVE := 5;
-- Read from File A
CURSOR c_get_employees IS
SELECT Employee_id,
first_name,
last_name,
hire_date,
job_id
FROM EMPLOYEES;
-- Read from File B based on employee ID in File A
CURSOR c_get_job_history (p_employee_id number) IS
select start_date,
end_date,
job_id,
department_id
FROM JOB_HISTORY
WHERE employee_id = p_employee_id;
BEGIN
OPEN c_get_employees;
LOOP
FETCH c_get_employees BULK COLLECT INTO employee_data.employee_id.LAST,
employee_data.first_name.LAST,
employee_data.last_name.LAST,
employee_data.hire_date.LAST,
employee_data.job_id.LAST
LIMIT BatchSize;
FORALL i in 1.. employee_data.COUNT
Open c_get_job_history (employee_data(i).employee_id);
FETCH c_get_job_history BULKCOLLECT INTO job_history_array LIMIT BatchSize;
FORALL k in 1.. Job_history_data.COUNT LOOP
-- insert into FILE C
INSERT INTO MY_TEST(employee_id, first_name, last_name, hire_date, job_id)
values (job_history_array(k).employee_id, job_history_array(k).first_name,
job_history_array(k).last_name, job_history_array(k).hire_date,
job_history_array(k).job_id);
EXIT WHEN job_ history_data.count < BatchSize
END LOOP;
CLOSE c_get_job_history;
EXIT WHEN employee_data.COUNT < BatchSize;
END LOOP;
COMMIT;
CLOSE c_get_employees;
END;
When I run this script, I get
[Error] Execution (47: 17): ORA-06550: line 47, column 17:
PLS-00103: Encountered the symbol "OPEN" when expecting one of the following:
. ( * @ % & - + / at mod remainder rem select update with
<an exponent (**)> delete insert || execute multiset save
merge
ORA-06550: line 48, column 17:
PLS-00103: Encountered the symbol "FETCH" when expecting one of the following:
begin function package pragma procedure subtype type use
<an identifier> <a double-quoted delimited-identifier> form
current cursor
What is the best way to code this? Once I learn how to do this, I will apply the knowledge to the real application, in which file A would have around 200 rows and file B would have hundreds of thousands of rows.
Thank you for your guidance,
Seyed
Hello BlueShadow,
Following your advice, I modified a stored procedure that initially was using two cursor for loops to read from tables A and B to write to table C to use instead something like your suggestion listed below:
INSERT INTO tableC
SELECT …
FROM tableA JOIN tableB on (join condition).
I tried this change on a procedure writing to tableC with keys disabled. I will try this against the real table that has a primary key and indexes and report the result later.
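For reference, a single set-based statement along the lines BlueShadow suggested, sketched against the tables from the original post (the exact column list is an assumption):

```sql
INSERT INTO my_test (employee_id, first_name, last_name, hire_date, job_id)
SELECT e.employee_id, e.first_name, e.last_name, e.hire_date, jh.job_id
  FROM employees   e
  JOIN job_history jh ON jh.employee_id = e.employee_id;
```

When the whole transformation can be expressed in SQL like this, it generally beats any BULK COLLECT / FORALL loop, because the rows never leave the SQL engine at all.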
Thank you very much,
Seyed -
How to use BULK COLLECT in oracle forms
hi gurus,
I am using oracle forms
Forms [32 Bit] Version 10.1.2.0.2 (Production)
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
I want to use bulk collect from a database table, let's say <employees>.
While working at the database level with collections and records it works very well for me, but when I try to use that technique in Oracle Forms it hits me with this error:
error 591: this feature is not supported in client side programming
I know I can use cursors to loop through the records of Oracle tables,
but I'm convenient while using collections and arrays
for example
Set Serveroutput On
Declare
Type Rec_T Is Record (
Empid Number ,
Empname Varchar2(100)
);
Type V_R Is Table Of Rec_T Index By Binary_Integer;
V_Array V_R;
Begin
Select Employee_Id , First_Name
Bulk Collect
Into V_Array
From Employees;
For Indx In V_Array.First..V_Array.Last Loop
Dbms_Output.Put_Line('employees id '||V_Array(Indx).Empid ||'and the name is '||V_Array(Indx).Empname);
End Loop;
End;
I want to use this same approach in Oracle Forms, for certain purposes. Please guide me on how I can use it...
thanks...
For information, you can use and populate a collection within the Forms application without using BULK COLLECT.
Francois
Actually I want to work with arrays, index tables,
like
record_type (variable , variable2);
type type_name <record_type> index by binary_integer
type_variable type_name;
and in main body of program
select something
bulk collect into type_variable
from any_table;
loop
type_variable(indx).variable , type_variable(indx).variable2;
end loop;
this is very useful for my logic on which I am working
like
type_variable(indx).variable || type_variable(indx-1);
if it's possible with cursors, then how can I use a cursor that can fulfill this logic?
@Francois
if it's possible, then how can I populate without using bulk collect?
thanks
and for others replies: if I can use stored procedures please give me any example..
thanks -
Having problem using BULK COLLECT - FORALL
Hi,
I'm facing a problem while setting a table type value before inserting into a table using FORALL.
My concern is that I'm unable to generate the values in the FOR LOOP; using dbms_output.put_line, I observed that after 100 rows of execution the process exits with the error:
ORA-22160: element at index [1] does not exist
ORA-06512: at "XYZ", line 568
ORA-06512: at line 2
I need to use the values stored in the FOR LOOP in the same order for insertion into table TEMP using FORALL.
I'm guessing that I'm using the wrong technique for storing values in the FOR LOOP.
Below given is my SP structure.
Any suggestion would be helpful.
Thanks!!
create or replace procedure XYZ is
cursor cur is
select A,B,C,D from ABCD; ---expecting 40,000 row fetch
type t_A is table of ABCD.A%type index by pls_integer;
type t_B is table of ABCD.B%type index by pls_integer;
type t_C is table of ABCD.C%type index by pls_integer;
type t_D is table of ABCD.D%type index by pls_integer;
v_A t_A;
v_B t_B;
v_C t_C;
v_D t_D;
type t_E is table of VARCHAR2(100);
type t_F is table of VARCHAR2(100);
v_E t_E := t_E();
v_F t_F := t_F();
begin
open cur;
loop
fetch cur BULK COLLECT INTO v_A,v_B,v_C,v_D limit 100;
for i in 1 .. v_A.count loop
v_E.extend(i);
select 1111 into v_E(i) from dual;
v_F.extend(i);
v_F(i) := 'Hi !!';
----calculating v_E(i) and v_F(i) here----
end loop;
forall i in 1 .. v_A.count
insert into TEMP values (v_E(i), v_F(i));
exit when cur%NOTFOUND;
end loop;
close cur;
end;
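The ORA-22160 is consistent with how v_E and v_F are built up: extend(i) adds i elements on every iteration, and neither collection is reset between fetches, so their indexes drift out of step with v_A. A hedged sketch of a corrected loop body, keeping the same variable names (the actual value calculations are assumed):

```sql
loop
    fetch cur bulk collect into v_A, v_B, v_C, v_D limit 100;
    exit when v_A.count = 0;

    -- reset and size the output collections once per batch
    v_E := t_E();
    v_F := t_F();
    v_E.extend(v_A.count);
    v_F.extend(v_A.count);

    for i in 1 .. v_A.count loop
        v_E(i) := 1111;     -- plain assignment; no SELECT ... FROM dual needed
        v_F(i) := 'Hi !!';
    end loop;

    forall i in 1 .. v_A.count
        insert into temp values (v_E(i), v_F(i));
end loop;
```

This keeps indexes 1..v_A.count dense in all collections on every pass, which is what FORALL expects.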
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
------- "The problem is that inside the IF ELSIF blocks I need to query various tables." As I thought. But why did you concentrate on BULK COLLECT - FORALL?
"The cursor whereas does take more time to execute." More time than what?
"We have a join of two tables with 18,00,000 (normal table) and 17,92,2067 (MView) records, having an index on one of the join columns. After joining these two and adding the filter conditions I'm having around 40,000 rows." You have a cursor row. And then inside the loop you have a query which returns 40,000 rows? What do you do with that data?
Is the query you show running INSIDE the loop?
I guess you are still talking about the LOOP query, and you are unhappy that it is not using an index?
1. The loop is NOT the problem. It's the "... i need to query various tables"
2. ORACLE is ok when it's NOT taking the index. That is faster!!
3. If you add code and execution plans, please add tags. Otherwise it's unreadable.
Try to merge your LOOP query with the "various tables" and make ONE query out of 40000*various ;-) -
How to use BULK COLLECT in Oracle Forms 11g
Forms is showing the error "Feature is not supported in Client Side Program" when I am trying to implement BULK COLLECT in Forms 11g.
I need to load the full data from the DB into my form because using a cursor is very slow....
Is there any method/workaround to achieve this?
declare
type arr is table of emp%rowtype ;
lv_arr arr;
begin
select * bulk collect into lv_arr from emp;
/*written code here to process the data and write in to file*/
end;
Unless you are inserting/updating the data you are holding in the array into a database table, I don't think there is much performance gain in using BULK COLLECT in conjunction with writing a file. Bulk processing increases performance by minimizing context switches between the SQL and PL/SQL engines, nothing more, nothing less.
In any case bulk processing is not available in forms, if you really need to make use of it you need to do it in a stored procedure.
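A hedged sketch of that workaround: put the bulk logic in a stored procedure compiled in the database (all names here are hypothetical) and call it from a Forms trigger, which only sees an ordinary procedure call:

```sql
-- compiled in the database, not in the form;
-- p_dir must name an Oracle DIRECTORY object the session can write to
CREATE OR REPLACE PROCEDURE dump_emp_to_file (p_dir  IN VARCHAR2,
                                              p_file IN VARCHAR2) IS
    TYPE emp_tab_t IS TABLE OF emp%ROWTYPE;
    l_emps emp_tab_t;
    l_out  UTL_FILE.file_type;
BEGIN
    SELECT * BULK COLLECT INTO l_emps FROM emp;   -- bulk fetch, server side

    l_out := UTL_FILE.fopen (p_dir, p_file, 'w');
    FOR i IN 1 .. l_emps.COUNT LOOP
        UTL_FILE.put_line (l_out, l_emps (i).empno || ',' || l_emps (i).ename);
    END LOOP;
    UTL_FILE.fclose (l_out);
END;
```

In the form, a trigger would then just run `dump_emp_to_file('MY_DIR', 'emp.csv');`. Note the file is written on the database server, not on the client.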
cheers -
How to use BULK COLLECT in ref cursor?
hi,
can we use bulk collect with a ref cursor? If yes, then please give a small example.
thanks
Try this:
create or replace type person_ot as object (name varchar2(10)) not final;
create or replace type student_ot under person_ot (s_num number) not final;
create type person_tt as table of person_ot;
create table persons of person_ot;
declare
lv_person_list person_tt;
lv_sql varchar2(1000);
ref_cur sys_refcursor;
begin
lv_sql:= 'select new student_ot(''fred'', 100) from dual
union all
select new student_ot(''sally'', 200) from dual';
open ref_cur for lv_sql;
fetch ref_cur bulk collect into lv_person_list;
close ref_cur;
for i in lv_person_list.first..lv_person_list.last loop
dbms_output.put_line(lv_person_list(i).name );
end loop;
forall i in lv_person_list.first..lv_person_list.last
insert into persons values lv_person_list(i);
end;
/ -
ORA-06502 during a procedure which uses Bulk collect feature and nested table
Hello Friends,
I have created one procedure which uses BULK COLLECT and a nested table to hold the bulk data. This procedure was using one cursor and a nested table of the same type as the cursor to hold data fetched from the cursor. The bulk collection technique was used to collect data from the cursor into the nested table. But it is giving an ORA-06502 error.
I reduced the procedure code to the following to trace the error point, but the error still occurs. Please help us find the cause and solve it.
Script which is giving error:
declare
v_Errorflag BINARY_INTEGER;
v_flag number := 1;
CURSOR cur_terminal_info Is
SELECT distinct
'a' SettlementType
FROM
dual;
TYPE typ_cur_terminal IS TABLE OF cur_terminal_info%ROWTYPE;
Tab_Terminal_info typ_cur_Terminal;
BEGIN
v_Errorflag := 2;
OPEN cur_terminal_info;
LOOP
v_Errorflag := 4;
FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
EXIT WHEN cur_terminal_info%rowcount <= 0;
v_Errorflag := 5;
FOR Y IN Tab_Terminal_Info.FIRST..tab_terminal_info.LAST
LOOP
dbms_output.put_line(v_flag);
v_flag := v_flag + 1;
end loop;
END LOOP;
v_Errorflag := 13;
COMMIT;
END;
I have updated the script as follows to change the nested table's datatype to varchar2, but the same error still comes up.
declare
v_Errorflag BINARY_INTEGER;
v_flag number := 1;
CURSOR cur_terminal_info Is
SELECT distinct
'a' SettlementType
FROM
dual;
TYPE typ_cur_terminal IS TABLE OF varchar2(50);
Tab_Terminal_info typ_cur_Terminal;
BEGIN
v_Errorflag := 2;
OPEN cur_terminal_info;
LOOP
v_Errorflag := 4;
FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
EXIT WHEN cur_terminal_info%rowcount <= 0;
v_Errorflag := 5;
FOR Y IN Tab_Terminal_Info.FIRST..tab_terminal_info.LAST
LOOP
dbms_output.put_line(v_flag);
v_flag := v_flag + 1;
end loop;
END LOOP;
v_Errorflag := 13;
COMMIT;
END;
I could not find the exact reason for the error.
Please help us to solve this error.
Thanks and Regards..
Dipali..
Hello Friends,
I got the solution.. :)
I made one mistake in the procedure, where the loop should end.
I used the statement: EXIT WHEN cur_terminal_info%rowcount <= 0;
But it should be: EXIT WHEN Tab_Terminal_Info.COUNT <= 0;
Now my script is working fine.. :)
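For anyone hitting the same issue: in a LIMIT fetch loop, test the collection rather than the cursor, and do it immediately after the fetch. A hedged sketch of the general shape, using the names from this thread:

```sql
OPEN cur_terminal_info;
LOOP
    FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
    EXIT WHEN tab_terminal_info.COUNT = 0;   -- not cur_terminal_info%ROWCOUNT

    FOR y IN 1 .. tab_terminal_info.COUNT LOOP
        dbms_output.put_line (v_flag);
        v_flag := v_flag + 1;
    END LOOP;
END LOOP;
CLOSE cur_terminal_info;
```

Exiting on the collection count still processes the final partial batch, because the exit test runs after the fetch but before the batch is used; %NOTFOUND-style tests placed there can silently skip that last batch.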
Thanks and Regards,
Dipali.. -
Exceptional handling using Bulk Collect FORALL
Hi Members,
Following are by DB details
SELECT * FROM v$version;
Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
PL/SQL Release 9.2.0.7.0 - Production
CORE 9.2.0.7.0 Production
TNS for HPUX: Version 9.2.0.7.0 - Production
NLSRTL Version 9.2.0.7.0 - Production
I need to handle exceptions during a BULK COLLECT FORALL operation and update the table column with status_flag 'E' for the given item name and extension id. Below is the code snippet for the same.
declare
k NUMBER;
cursor c_list_price
IS
SELECT /*+ leading(a) */ item_name,
'PUBLISH_TO_PRICE_CATALOG' ATTRIBUTE_NAME,
MAX(DECODE (attribute_name,
'PUBLISH_TO_PRICE_CATALOG',
attribute_value))
ATTRIBUTE_VALUE,
NULL GEOGRAPHY_LEVEL,
NULL GEOGRAPHY_VALUE,
NULL INCLUDE_GEOGRAPHY,
NULL EFFECTIVE_START_DATE,
NULL EFFECTIVE_TO_DATE,
EXTENSION_ID,
NULL PRICING_UNIT,
NULL ATTRIBUTE_FROM,
NULL ATTRIBUTE_TO,
NULL FIXED_BASE_PRICE,
NULL PARENT_ATTRIBUTE,
NULL DURATION_QUANTITY,
NULL ATTRIBUTE2,
NULL ATTRIBUTE3,
NULL ATTRIBUTE4,
NULL ATTRIBUTE5,
NULL ATTRIBUTE6,
NULL ATTRIBUTE7,
NULL ATTRIBUTE8,
NULL ATTRIBUTE9,
NULL ATTRIBUTE10,
NULL ATTRIBUTE11,
NULL ATTRIBUTE12,
NULL ATTRIBUTE13,
NULL ATTRIBUTE14,
NULL ATTRIBUTE15,
--ORG_CODE,
ITEM_CATALOG_CATEGORY ITEM_CATEGORY,
a.INVENTORY_ITEM_ID INVENTORY_ITEM_ID
FROM XXCPD_ITM_SEC_UDA_DETAILS_TB a
WHERE DECODE(attribute_group_name,'LIST_PRICE',DECODE(STATUS_FLAG,'E',1,'P',2,3)
,'XXITM_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',4,'P',5,6)
,'XXITM_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',4,'P',5,6)
,'XXITM_ADV_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',7,'P',8,9)
,'XXITM_ADV_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',7,'P',8,9)
,'XXITM_ADV_FORM_PRICING_HWSW',DECODE(STATUS_FLAG,'E',10,'P',11,12)
,'XXITM_ADV_FORM_PRICING_SWSUB',DECODE(STATUS_FLAG,'E',10,'P',11,12)
,'XXITM_ROUTE_TO_MARKET',DECODE(STATUS_FLAG,'E',13,'P',14,15)) in (1,2,3)
AND exists(select /*+ no_merge */ '1' from mtl_system_items_b b where a.inventory_item_id=b.inventory_item_id and b.organization_id=1)
GROUP BY item_name,
extension_id,
ITEM_CATALOG_CATEGORY,
INVENTORY_ITEM_ID;
TYPE myarray IS TABLE OF c_list_price%ROWTYPE;
c_lp myarray;
BEGIN
OPEN c_list_price;
LOOP
FETCH c_list_price BULK COLLECT INTO c_lp
LIMIT 50000;
IF c_lp.count = 0 THEN
EXIT;
END IF;
Begin
FORALL i IN c_lp.FIRST..c_lp.LAST SAVE EXCEPTIONS
INSERT /*+append*/ INTO XXCPD_ITEM_PRICING_ATTR_BKP2(
line_id,
ITEM_NAME,
ATTRIBUTE_NAME,
ATTRIBUTE_VALUE,
GEOGRAPHY_LEVEL,
GEOGRAPHY_VALUE,
INCLUDE_GEOGRAPHY,
EFFECTIVE_START_DATE,
EFFECTIVE_TO_DATE,
EXTENSION_ID,
PRICING_UNIT,
ATTRIBUTE_FROM,
ATTRIBUTE_TO,
FIXED_BASE_PRICE,
PARENT_ATTRIBUTE,
DURATION_QUANTITY,
ATTRIBUTE2,
ATTRIBUTE3,
ATTRIBUTE4,
ATTRIBUTE5,
ATTRIBUTE6,
ATTRIBUTE7,
ATTRIBUTE8,
ATTRIBUTE9,
ATTRIBUTE10,
ATTRIBUTE11,
ATTRIBUTE12,
ATTRIBUTE13,
ATTRIBUTE14,
ATTRIBUTE15,
ITEM_CATEGORY,
INVENTORY_ITEM_ID)
VALUES (
xxcpd.xxcpd_if_prc_line_id_s.NEXTVAL
,c_lp(i).ITEM_NAME
,c_lp(i).ATTRIBUTE_NAME
,c_lp(i).ATTRIBUTE_VALUE
,c_lp(i).GEOGRAPHY_LEVEL
,c_lp(i).GEOGRAPHY_VALUE
,c_lp(i).INCLUDE_GEOGRAPHY
,c_lp(i).EFFECTIVE_START_DATE
,c_lp(i).EFFECTIVE_TO_DATE
,c_lp(i).EXTENSION_ID
,c_lp(i).PRICING_UNIT
,c_lp(i).ATTRIBUTE_FROM
,c_lp(i).ATTRIBUTE_TO
,c_lp(i).FIXED_BASE_PRICE
,c_lp(i).PARENT_ATTRIBUTE
,c_lp(i).DURATION_QUANTITY
,c_lp(i).ATTRIBUTE2
,c_lp(i).ATTRIBUTE3
,c_lp(i).ATTRIBUTE4
,c_lp(i).ATTRIBUTE5
,c_lp(i).ATTRIBUTE6
,c_lp(i).ATTRIBUTE7
,c_lp(i).ATTRIBUTE8
,c_lp(i).ATTRIBUTE9
,c_lp(i).ATTRIBUTE10
,c_lp(i).ATTRIBUTE11
,c_lp(i).ATTRIBUTE12
,c_lp(i).ATTRIBUTE13
,c_lp(i).ATTRIBUTE14
,c_lp(i).ATTRIBUTE15
,c_lp(i).ITEM_CATEGORY
,c_lp(i).INVENTORY_ITEM_ID);
EXCEPTION
WHEN OTHERS THEN
FOR j IN 1..SQL%bulk_exceptions.Count
LOOP
UPDATE XXCPD_ITM_SEC_UDA_DETAILS_TB
SET status_flag = 'E',
last_updated_by = 1,
last_update_date = SYSDATE
WHERE item_name = c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).item_name
AND extension_id=c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).extension_id;
COMMIT;
IF c_list_price%ISOPEN THEN
CLOSE c_list_price;
END IF;
FND_FILE.put_line(FND_FILE.output,'********************Exception Occured*************:-- '||SYSTIMESTAMP);
END LOOP;
END;
END LOOP;
CLOSE c_list_price;
COMMIT;
end;
and I am getting the following error:
ORA-06550: line 156, column 47:
PL/SQL: ORA-00911: invalid character
ORA-06550: line 152, column 21:
PL/SQL: SQL Statement ignored
pointing to the following lines in the exception block for the update clause:
WHERE item_name = c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).item_name
AND extension_id=c_lp(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).extension_id;
Can someone please help me out with the issue?
Thanks in Advance
I have re-written the code in the following manner:
declare
lv_ITEM_NAME DBMS_SQL.VARCHAR2S;
lv_ITEM_CATEGORY DBMS_SQL.VARCHAR2S;
lv_INVENTORY_ITEM_ID DBMS_SQL.NUMBER_TABLE;
lv_ATTRIBUTE_NAME DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE_VALUE DBMS_SQL.VARCHAR2S;
lv_GEOGRAPHY_LEVEL DBMS_SQL.VARCHAR2S;
lv_GEOGRAPHY_VALUE DBMS_SQL.VARCHAR2S;
lv_INCLUDE_GEOGRAPHY DBMS_SQL.VARCHAR2S;
lv_EFFECTIVE_START_DATE DBMS_SQL.date_table;
lv_EFFECTIVE_TO_DATE DBMS_SQL.date_table;
lv_EXTENSION_ID DBMS_SQL.NUMBER_TABLE;
lv_PRICING_UNIT DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE_FROM DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE_TO DBMS_SQL.VARCHAR2S;
lv_FIXED_BASE_PRICE DBMS_SQL.NUMBER_TABLE;
lv_PARENT_ATTRIBUTE DBMS_SQL.VARCHAR2S;
lv_DURATION_QUANTITY DBMS_SQL.NUMBER_TABLE;
lv_ATTRIBUTE2 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE3 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE4 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE5 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE6 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE7 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE8 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE9 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE10 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE11 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE12 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE13 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE14 DBMS_SQL.VARCHAR2S;
lv_ATTRIBUTE15 DBMS_SQL.VARCHAR2S;
l_item_name XXCPD_ITM_SEC_UDA_DETAILS_TB.item_name%TYPE;
l_extension_id XXCPD_ITM_SEC_UDA_DETAILS_TB.extension_id%TYPE;
cursor c_list_price
IS
SELECT /*+ leading(a) */ item_name,
'PUBLISH_TO_PRICE_CATALOG' ATTRIBUTE_NAME,
MAX(DECODE (attribute_name,
'PUBLISH_TO_PRICE_CATALOG',
attribute_value))
ATTRIBUTE_VALUE,
NULL GEOGRAPHY_LEVEL,
NULL GEOGRAPHY_VALUE,
NULL INCLUDE_GEOGRAPHY,
NULL EFFECTIVE_START_DATE,
NULL EFFECTIVE_TO_DATE,
EXTENSION_ID,
NULL PRICING_UNIT,
NULL ATTRIBUTE_FROM,
NULL ATTRIBUTE_TO,
NULL FIXED_BASE_PRICE,
NULL PARENT_ATTRIBUTE,
NULL DURATION_QUANTITY,
NULL ATTRIBUTE2,
NULL ATTRIBUTE3,
NULL ATTRIBUTE4,
NULL ATTRIBUTE5,
NULL ATTRIBUTE6,
NULL ATTRIBUTE7,
NULL ATTRIBUTE8,
NULL ATTRIBUTE9,
NULL ATTRIBUTE10,
NULL ATTRIBUTE11,
NULL ATTRIBUTE12,
NULL ATTRIBUTE13,
NULL ATTRIBUTE14,
NULL ATTRIBUTE15,
--ORG_CODE,
ITEM_CATALOG_CATEGORY ITEM_CATEGORY,
a.INVENTORY_ITEM_ID INVENTORY_ITEM_ID
FROM XXCPD_ITM_SEC_UDA_DETAILS_TB a
WHERE DECODE(attribute_group_name,'LIST_PRICE',DECODE(STATUS_FLAG,'E',1,'P',2,3)
,'XXITM_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',4,'P',5,6)
,'XXITM_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',4,'P',5,6)
,'XXITM_ADV_FORM_PRICING_HS',DECODE(STATUS_FLAG,'E',7,'P',8,9)
,'XXITM_ADV_FORM_PRICING_SB',DECODE(STATUS_FLAG,'E',7,'P',8,9)
,'XXITM_ADV_FORM_PRICING_HWSW',DECODE(STATUS_FLAG,'E',10,'P',11,12)
,'XXITM_ADV_FORM_PRICING_SWSUB',DECODE(STATUS_FLAG,'E',10,'P',11,12)
,'XXITM_ROUTE_TO_MARKET',DECODE(STATUS_FLAG,'E',13,'P',14,15)) in (1,2,3)
AND exists(select /*+ no_merge */ '1' from mtl_system_items_b b where a.inventory_item_id=b.inventory_item_id and b.organization_id=1)
GROUP BY item_name,
extension_id,
ITEM_CATALOG_CATEGORY,
INVENTORY_ITEM_ID;
BEGIN
OPEN c_list_price;
LOOP
lv_ITEM_NAME.delete;
lv_ITEM_CATEGORY.delete;
lv_INVENTORY_ITEM_ID.delete;
lv_ATTRIBUTE_NAME.delete;
lv_ATTRIBUTE_VALUE.delete;
lv_GEOGRAPHY_LEVEL.delete;
lv_GEOGRAPHY_VALUE.delete;
lv_INCLUDE_GEOGRAPHY.delete;
lv_EFFECTIVE_START_DATE.delete;
lv_EFFECTIVE_TO_DATE.delete;
lv_EXTENSION_ID.delete;
lv_PRICING_UNIT.delete;
lv_ATTRIBUTE_FROM.delete;
lv_ATTRIBUTE_TO.delete;
lv_FIXED_BASE_PRICE.delete;
lv_PARENT_ATTRIBUTE.delete;
lv_DURATION_QUANTITY.delete;
lv_ATTRIBUTE2.delete;
lv_ATTRIBUTE3.delete;
lv_ATTRIBUTE4.delete;
lv_ATTRIBUTE5.delete;
lv_ATTRIBUTE6.delete;
lv_ATTRIBUTE7.delete;
lv_ATTRIBUTE8.delete;
lv_ATTRIBUTE9.delete;
lv_ATTRIBUTE10.delete;
lv_ATTRIBUTE11.delete;
lv_ATTRIBUTE12.delete;
lv_ATTRIBUTE13.delete;
lv_ATTRIBUTE14.delete;
lv_ATTRIBUTE15.delete;
FETCH c_list_price BULK COLLECT INTO lv_ITEM_NAME
,lv_ATTRIBUTE_NAME
,lv_ATTRIBUTE_VALUE
,lv_GEOGRAPHY_LEVEL
,lv_GEOGRAPHY_VALUE
,lv_INCLUDE_GEOGRAPHY
,lv_EFFECTIVE_START_DATE
,lv_EFFECTIVE_TO_DATE
,lv_EXTENSION_ID
,lv_PRICING_UNIT
,lv_ATTRIBUTE_FROM
,lv_ATTRIBUTE_TO
,lv_FIXED_BASE_PRICE
,lv_PARENT_ATTRIBUTE
,lv_DURATION_QUANTITY
,lv_ATTRIBUTE2
,lv_ATTRIBUTE3
,lv_ATTRIBUTE4
,lv_ATTRIBUTE5
,lv_ATTRIBUTE6
,lv_ATTRIBUTE7
,lv_ATTRIBUTE8
,lv_ATTRIBUTE9
,lv_ATTRIBUTE10
,lv_ATTRIBUTE11
,lv_ATTRIBUTE12
,lv_ATTRIBUTE13
,lv_ATTRIBUTE14
,lv_ATTRIBUTE15
,lv_ITEM_CATEGORY
,lv_INVENTORY_ITEM_ID
LIMIT 50000;
IF lv_INVENTORY_ITEM_ID.count = 0 THEN
EXIT;
END IF;
Begin
FORALL I IN 1..lv_INVENTORY_ITEM_ID.count SAVE EXCEPTIONS
INSERT /*+append*/ INTO XXCPD_ITEM_PRICING_ATTR_BKP2(
line_id,
ITEM_NAME,
ATTRIBUTE_NAME,
ATTRIBUTE_VALUE,
GEOGRAPHY_LEVEL,
GEOGRAPHY_VALUE,
INCLUDE_GEOGRAPHY,
EFFECTIVE_START_DATE,
EFFECTIVE_TO_DATE,
EXTENSION_ID,
PRICING_UNIT,
ATTRIBUTE_FROM,
ATTRIBUTE_TO,
FIXED_BASE_PRICE,
PARENT_ATTRIBUTE,
DURATION_QUANTITY,
ATTRIBUTE2,
ATTRIBUTE3,
ATTRIBUTE4,
ATTRIBUTE5,
ATTRIBUTE6,
ATTRIBUTE7,
ATTRIBUTE8,
ATTRIBUTE9,
ATTRIBUTE10,
ATTRIBUTE11,
ATTRIBUTE12,
ATTRIBUTE13,
ATTRIBUTE14,
ATTRIBUTE15,
ITEM_CATEGORY,
INVENTORY_ITEM_ID)
VALUES (
xxcpd.xxcpd_if_prc_line_id_s.NEXTVAL
,lv_ITEM_NAME(i)
,lv_ATTRIBUTE_NAME(i)
,lv_ATTRIBUTE_VALUE(i)
,lv_GEOGRAPHY_LEVEL(i)
,lv_GEOGRAPHY_VALUE(i)
,lv_INCLUDE_GEOGRAPHY(i)
,lv_EFFECTIVE_START_DATE(i)
,lv_EFFECTIVE_TO_DATE(i)
,lv_EXTENSION_ID(i)
,lv_PRICING_UNIT(i)
,lv_ATTRIBUTE_FROM(i)
,lv_ATTRIBUTE_TO(i)
,lv_FIXED_BASE_PRICE(i)
,lv_PARENT_ATTRIBUTE(i)
,lv_DURATION_QUANTITY(i)
,lv_ATTRIBUTE2(i)
,lv_ATTRIBUTE3(i)
,lv_ATTRIBUTE4(i)
,lv_ATTRIBUTE5(i)
,lv_ATTRIBUTE6(i)
,lv_ATTRIBUTE7(i)
,lv_ATTRIBUTE8(i)
,lv_ATTRIBUTE9(i)
,lv_ATTRIBUTE10(i)
,lv_ATTRIBUTE11(i)
,lv_ATTRIBUTE12(i)
,lv_ATTRIBUTE13(i)
,lv_ATTRIBUTE14(i)
,lv_ATTRIBUTE15(i)
,lv_ITEM_CATEGORY(i)
,lv_INVENTORY_ITEM_ID(i));
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
FOR j IN 1..SQL%bulk_exceptions.Count
LOOP
l_item_name:=lv_ITEM_NAME(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
l_extension_id:=lv_EXTENSION_ID(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX);
UPDATE XXCPD_ITM_SEC_UDA_DETAILS_TB
SET status_flag = 'E',
last_updated_by = 1,
last_update_date = SYSDATE
WHERE item_name = l_item_name
AND extension_id=l_extension_id;
COMMIT;
FND_FILE.put_line(FND_FILE.output,'********************Exception Occured*************:-- '||SYSTIMESTAMP);
END LOOP;
END;
END LOOP;
CLOSE c_list_price;
COMMIT;
end; -
How to use bulk collect into clause
hi all,
I have a requirement like this: I want to change a query by passing different table names, in an Oracle Database 10gR2.
For example, first I pass the scott.emp table name: select * from scott.emp;
then I want to pass the scott.dept table name: select * from scott.dept;
using the select * option in the select list.
How can I execute it?
Please give me a solution.
please reply....
Hi,
I recently also ran into some serious trouble making dynamic SQL work.
It was about a parallel pipelined function with a strongly typed cursor (for "partition ... by hash(...)"),
but additionally requiring dynamic SQL for the input to this cursor.
I couldn't make it work with EXECUTE IMMEDIATE or anything else.
So I used another way - I translated the dynamic SQL into dynamically created static SQL:
1. create a base SQL data object type with abstract interface for code (e.g. some Execute() member function).
2. dynamically create a derived SQL data object type with concrete implementation of Execute() holding your "dynamic SQL" "in a static way"
3. delegate to Execute().
Let me quote my example from comp.databases.oracle.server, thread "dynamically created cursor doesn't work for parallel pipelined functions"
- pls. see below (it's an old one - with likely some odd stuff inside).
It might sound somewhat strange to a DB programmer.
(I come from C++, though this is not an excuse :))
Maybe I just missed another possible option to handle the problem.
And it's definitely verbose.
But I think it's at least a (last) option.
Actually i would be interested to hear, what others think about it.
best regards,
Frank
--------------- snip -------------------------
drop table parallel_test;
drop type MyDoit;
drop type BaseDoit;
drop type ton;
create type ton as table of number;
CREATE TABLE parallel_test (
id NUMBER(10),
description VARCHAR2(50)
);
BEGIN
FOR i IN 1 .. 100000 LOOP
INSERT INTO parallel_test (id, description)
VALUES (i, 'Description or ' || i);
END LOOP;
COMMIT;
END;
create or replace type BaseDoit as object (
id number,
static function make(p_id in number)
return BaseDoit,
member procedure doit(
p_sids in out nocopy ton,
p_counts in out nocopy ton)
) not final;
create or replace type body BaseDoit as
static function make(p_id in number)
return BaseDoit
is
begin
return new BaseDoit(p_id);
end;
member procedure doit(
p_sids in out nocopy ton,
p_counts in out nocopy ton)
is
begin
dbms_output.put_line('BaseDoit.doit(' || id || ') invoked...');
end;
end;
-- Define a strongly typed REF CURSOR type.
CREATE OR REPLACE PACKAGE parallel_ptf_api AS
TYPE t_parallel_test_row IS RECORD (
id1 NUMBER(10),
desc1 VARCHAR2(50),
id2 NUMBER(10),
desc2 VARCHAR2(50),
sid NUMBER
);
TYPE t_parallel_test_tab IS TABLE OF t_parallel_test_row;
TYPE t_parallel_test_ref_cursor IS REF CURSOR RETURN
t_parallel_test_row;
FUNCTION test_ptf (p_cursor IN t_parallel_test_ref_cursor)
RETURN t_parallel_test_tab PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any);
END parallel_ptf_api;
SHOW ERRORS
CREATE OR REPLACE PACKAGE BODY parallel_ptf_api AS
FUNCTION test_ptf (p_cursor IN t_parallel_test_ref_cursor)
RETURN t_parallel_test_tab PIPELINED
PARALLEL_ENABLE(PARTITION p_cursor BY any)
IS
l_row t_parallel_test_row;
BEGIN
LOOP
FETCH p_cursor
INTO l_row;
EXIT WHEN p_cursor%NOTFOUND;
select userenv('SID') into l_row.sid from dual;
PIPE ROW (l_row);
END LOOP;
RETURN;
END test_ptf;
END parallel_ptf_api;
SHOW ERRORS
PROMPT
PROMPT Serial Execution
PROMPT ================
SELECT sid, count(*)
FROM TABLE(parallel_ptf_api.test_ptf(CURSOR(SELECT t1.id,
t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id
))) t2
GROUP BY sid;
PROMPT
PROMPT Parallel Execution
PROMPT ==================
SELECT sid, count(*)
FROM TABLE(parallel_ptf_api.test_ptf(CURSOR(SELECT /*+
parallel(t1,5) */ t1.id, t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id
))) t2
GROUP BY sid;
PROMPT
PROMPT Parallel Execution 2
PROMPT ==================
set serveroutput on;
declare
v_sids ton := ton();
v_counts ton := ton();
-- v_cur parallel_ptf_api.t_parallel_test_ref_cursor;
v_cur sys_refcursor;
procedure OpenCursor(p_refCursor out sys_refcursor)
is
begin
open p_refCursor for 'SELECT /*+ parallel(t1,5) */ t1.id,
t1.description, t2.id, t2.description, null
FROM parallel_test t1,
parallel_test t2
where t1.id = t2.id';
end;
begin
OpenCursor(v_cur);
SELECT sid, count(*) bulk collect into v_sids, v_counts
FROM TABLE(parallel_ptf_api.test_ptf(v_cur)) t2
GROUP BY sid;
for i in v_sids.FIRST.. v_sids.LAST loop
dbms_output.put_line (v_sids(i) || ', ' || v_counts(i));
end loop;
end;
PROMPT
PROMPT Parallel Execution 3
PROMPT ==================
set serveroutput on;
declare
instance BaseDoit;
v_sids ton := ton();
v_counts ton := ton();
procedure CreateMyDoit
is
cmd varchar2(4096 char);
begin
cmd := 'create or replace type MyDoit under BaseDoit ( ' ||
' static function make(p_id in number) ' ||
' return MyDoit, ' ||
' overriding member procedure doit( ' ||
' p_sids in out nocopy ton, ' ||
' p_counts in out nocopy ton) ' ||
' )';
execute immediate cmd;
cmd := 'create or replace type body MyDoit as ' ||
' static function make(p_id in number) ' ||
' return MyDoit ' ||
' is ' ||
' begin ' ||
' return new MyDoit(p_id); ' ||
' end; ' ||
' ' ||
' overriding member procedure doit( ' ||
' p_sids in out nocopy ton, ' ||
' p_counts in out nocopy ton) ' ||
' is ' ||
' begin ' ||
' dbms_output.put_line(''MyDoit.doit('' || id || '') invoked...''); ' ||
' SELECT sid, count(*) bulk collect into p_sids, p_counts ' ||
' FROM TABLE(parallel_ptf_api.test_ptf(CURSOR( ' ||
' SELECT /*+ parallel(t1,5) */ t1.id, t1.description, t2.id, t2.description, null ' ||
' FROM parallel_test t1, parallel_test t2 ' ||
' where t1.id = t2.id ' ||
' ))) ' ||
' GROUP BY sid; ' ||
' end; ' ||
' end; ';
execute immediate cmd;
end;
begin
CreateMyDoit;
execute immediate 'select MyDoit.Make(11) from dual' into instance;
instance.doit(v_sids, v_counts);
if v_sids.COUNT > 0 then
for i in v_sids.FIRST.. v_sids.LAST loop
dbms_output.put_line (v_sids(i) || ', ' || v_counts(i));
end loop;
end if;
end;
/
-
Object Type and Bulk Collect/Forall
How can I bulk collect into (and read from) a collection which is a table of an object type such as
CREATE TABLE base_table (
attr1 NUMBER,
attr2 NUMBER,
attr3 NUMBER,
attr4 NUMBER);
CREATE OR REPLACE TYPE rec_t IS OBJECT (
attr1 NUMBER,
attr2 NUMBER,
attr3 NUMBER,
attr4 NUMBER);
CREATE OR REPLACE TYPE col_t IS TABLE OF rec_t;
In my PL/SQL code I instantiate the collection type and want to populate it with a BULK COLLECT statement:
PROCEDURE test IS
v_col col_t;
BEGIN
SELECT rec_T(attr1, attr2, attr3, attr4)
BULK COLLECT INTO v_col
FROM base_table;
FORALL i IN v_col.FIRST..v_col.LAST
INSERT INTO base_table VALUES (rec_t(v_col(i)));
END;
If I do it this way I get the following exception on the FORALL insert:
PL/SQL: ORA-00947: not enough values
Edited by: user12149927 on 22.01.2010 00:48
Try like this:
CREATE TABLE BASE_TABLE (
ATTR1 NUMBER,
ATTR2 NUMBER,
ATTR3 NUMBER,
ATTR4 NUMBER);
CREATE OR REPLACE TYPE rec_t IS OBJECT (attr1 number, attr2 number, attr3 number, attr4 number);
CREATE OR REPLACE TYPE col_t IS TABLE OF rec_t;
CREATE OR REPLACE PROCEDURE test
IS
v_col col_t;
BEGIN
SELECT rec_t (attr1, attr2, attr3, attr4)
BULK COLLECT INTO v_col
FROM base_table;
INSERT INTO base_table
SELECT *
FROM table (CAST (v_col AS col_t));
END;
Regards,
Mahesh Kaila
Edited by: Mahesh Kaila on Jan 22, 2010 12:56 AM -
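If the FORALL route from the original question is preferred over the INSERT ... SELECT above, the ORA-00947 can also be avoided by listing the object's attributes explicitly instead of wrapping the element in another rec_t() constructor. A hedged sketch (assumes 11g or later, where element attributes can be referenced inside FORALL DML):

```sql
CREATE OR REPLACE PROCEDURE test
IS
   v_col col_t;
BEGIN
   SELECT rec_t (attr1, attr2, attr3, attr4)
     BULK COLLECT INTO v_col
     FROM base_table;

   -- name the target columns and reference each attribute of the
   -- collection element, rather than passing a rec_t() constructor
   FORALL i IN 1 .. v_col.COUNT
      INSERT INTO base_table (attr1, attr2, attr3, attr4)
      VALUES (v_col(i).attr1, v_col(i).attr2,
              v_col(i).attr3, v_col(i).attr4);
END;
/
```

The original error arises because the VALUES clause supplied one object where the table expects four scalar columns; expanding the attributes makes the counts match.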
Bulk collect / forall which collection type?
Hi I am trying to speed up the query below using bulk collect / forall:
SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '12384'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL
GROUP BY h.cust_order_no, l.shipment_set
I would like to extract the 2 fields selected above into a new table as quickly as possible, but I'm fairly new to Oracle and I'm finding it difficult to work out the best way to do it. The query below does not work (no doubt there are numerous issues) but hopefully it is sufficiently developed to show the sort of thing I am trying to achieve:
DECLARE
TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
TYPE xsset IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
TYPE xarray IS TABLE OF tp_a1_tab%rowtype INDEX BY BINARY_INTEGER;
v_xarray xarray;
v_xcustord xcustord;
v_xsset xsset;
CURSOR cur IS
SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '1111'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL;
BEGIN
OPEN cur;
LOOP
FETCH cur
BULK COLLECT INTO v_xarray LIMIT 10000;
EXIT WHEN v_xarray.COUNT = 0;
FORALL i IN 1 .. v_xarray.COUNT
INSERT INTO TP_A1_TAB (cust_order_no, shipment_set)
VALUES (v_xarray(i).cust_order_no,v_xarray(i).shipment_set);
commit;
END LOOP;
CLOSE cur;
END;
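For what it's worth, a hedged sketch of that loop in working shape, using the two scalar collections already declared (it assumes tp_a1_tab has columns cust_order_no and shipment_set; parallel arrays are used because 9i's FORALL cannot reference record fields):

```sql
DECLARE
   TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
   TYPE xsset    IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
   v_xcustord xcustord;
   v_xsset    xsset;
   CURSOR cur IS
      SELECT h.cust_order_no, l.shipment_set
        FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
       WHERE h.message_id = l.message_id
         AND h.contract = '12384'
         AND l.shipment_set IS NOT NULL
         AND h.cust_order_no IS NOT NULL;
BEGIN
   OPEN cur;
   LOOP
      -- fetch into the scalar collections, not an unrelated array
      FETCH cur BULK COLLECT INTO v_xcustord, v_xsset LIMIT 10000;
      EXIT WHEN v_xcustord.COUNT = 0;  -- test the collection actually fetched into
      FORALL i IN 1 .. v_xcustord.COUNT
         INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
         VALUES (v_xcustord(i), v_xsset(i));
      COMMIT;
   END LOOP;
   CLOSE cur;
END;
/
```

If intermediate commits are not actually required, a single INSERT ... SELECT of the grouped query would likely be simpler and faster still.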
I am running on Oracle 9i release 2.
Well I suppose I can share the whole query as it stands for context and information (see below).
This is a very ugly piece of code that I am trying to improve. The advantage it has currently is
that it works, the disadvantage it has is that it's very slow. My thoughts were that bulk collect
and forall might be useful tools here as part of the re-write hence my original question, but perhaps not.
So on a more general note any advice on how best speed up this code would be welcome:
CREATE OR REPLACE VIEW DLP_TEST AS
WITH aa AS
(SELECT h.cust_order_no AS c_ref,l.shipment_set AS shipset,
l.line_id, h.message_id, l.message_line_no,
l.vendor_part_no AS part, l.rqst_quantity AS rqst_qty, l.quantity AS qty,
l.status AS status, h.rowversion AS allocation_date
FROM info.tlp_in_messaging_hdr h
LEFT JOIN info.tlp_in_messaging_lin l
ON h.message_id = l.message_id
WHERE h.contract = '12384'
AND h.cust_order_no IS NOT NULL
UNION ALL
SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset,
lo.line_id, ho.message_id, lo.message_line_no,
lo.vendor_part_no AS part,lo.rqst_quantity AS rqst_qty, lo.quantity AS qty,
lo.status AS status, ho.rowversion AS allocation_date
FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
WHERE ho.message_id = lo.message_id
AND ho.contract = '12384'
AND ho.cust_order_no IS NOT NULL),
a1 AS
(SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '12384'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL
GROUP BY h.cust_order_no, l.shipment_set),
a2 AS
(SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset
FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
WHERE ho.message_id = lo.message_id
AND ho.contract = '12384'
AND ho.message_type = '3B13'
AND lo.shipment_set IS NOT NULL
GROUP BY ho.cust_order_no, lo.shipment_set),
a3 AS
(SELECT a1.custord, a1.sset, CONCAT('SHIPSET',a1.sset) AS ssset
FROM a1
LEFT OUTER JOIN a2
ON a1.custord = a2.c_ref AND a1.sset = a2.shipset
WHERE a2.c_ref IS NULL),
bb AS
(SELECT so.contract, so.order_no, sr.service_request_no AS sr_no, sr.reference_no,
substr(sr.part_no,8) AS shipset,
substr(note_text,1,instr(note_text,'.',1,1)-1) AS Major_line,
note_text AS CISCO_line,ma.part_no,
(Select TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')
FROM ifsapp.document_text d WHERE so.note_id = d.note_id AND so.contract = '12384') AS Print_Date,
ma.note_text
FROM (ifsapp.service_request sr
LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
ON sr.service_request_no = ws.wo_no
LEFT OUTER JOIN ifsapp.shop_ord so
ON ws.order_no = so.order_no)
LEFT OUTER JOIN ifsapp.customer_order co
ON sr.reference_no = co.cust_ref
LEFT OUTER JOIN ifsapp.shop_material_alloc ma
ON so.order_no = ma.order_no
JOIN a3
ON a3.custord = sr.reference_no AND a3.sset = substr(sr.part_no,8)
WHERE sr.part_contract = '12384'
AND so.contract = '12384'
AND co.contract = '12384'
AND sr.reference_no IS NOT NULL
AND ma.part_no NOT LIKE 'SHIPSET%'),
cc AS
(SELECT
bb.reference_no,
bb.shipset,
bb.order_no,
bb.cisco_line,
aa.message_id,
aa.allocation_date,
row_number() over(PARTITION BY bb.reference_no, bb.shipset, aa.allocation_date
ORDER BY bb.reference_no, bb.shipset, aa.allocation_date, aa.message_id DESC) AS selector
FROM bb
LEFT OUTER JOIN aa
ON bb.reference_no = aa.c_ref AND bb.shipset = aa.shipset
WHERE aa.allocation_date <= bb.print_date
OR aa.allocation_date IS NULL
OR bb.print_date IS NULL),
dd AS
(SELECT
MAX(reference_no) AS reference_no,
MAX(shipset) AS shipset,
order_no,
MAX(allocation_date) AS allocation_date
FROM cc
WHERE selector = 1
GROUP BY order_no, selector),
ee AS
(SELECT
smx.order_no,
SUM(smx.qty_assigned) AS total_allocated,
SUM(smx.qty_issued) AS total_issued,
SUM(smx.qty_required) AS total_required
FROM ifsapp.shop_material_alloc smx
WHERE smx.contract = '12384'
AND smx.part_no NOT LIKE 'SHIPSET%'
GROUP BY smx.order_no),
ff AS
(SELECT
dd.reference_no,
dd.shipset,
dd.order_no,
MAX(allocation_date) AS last_allocation,
MAX(ee.total_allocated) AS total_allocated,
MAX(ee.total_issued) AS total_issued,
MAX(ee.total_required) AS total_required
FROM dd
LEFT OUTER JOIN ee
ON dd.order_no = ee.order_no
GROUP BY dd.reference_no, dd.shipset, dd.order_no),
base AS
(SELECT x.order_no, x.part_no, z.rel_no, MIN(x.dated) AS dated, MIN(y.cust_ref) AS cust_ref, MIN(z.line_no) AS line_no,
MIN(y.state) AS state, MIN(y.contract) AS contract, MIN(z.demand_order_ref1) AS demand_order_ref1
FROM ifsapp.inventory_transaction_hist x, ifsapp.customer_order y, ifsapp.customer_order_line z
WHERE x.contract = '12384'
AND x.order_no = y.order_no
AND y.order_no = z.order_no
AND x.transaction = 'REP-OESHIP'
AND x.part_no = z.part_no
AND TRUNC(x.dated) >= SYSDATE - 8
GROUP BY x.order_no, x.part_no, z.rel_no)
SELECT
DISTINCT
bb.contract,
bb.order_no,
bb.sr_no,
CAST('-' AS varchar2(40)) AS Usr,
CAST('-' AS varchar2(40)) AS Name,
CAST('01' AS number) AS Operation,
CAST('Last Reservation' AS varchar2(40)) AS Action,
CAST('-' AS varchar2(40)) AS Workcenter,
CAST('-' AS varchar2(40)) AS Next_Workcenter_no,
CAST('Print SO' AS varchar2(40)) AS Next_WC_Description,
ff.total_allocated,
ff.total_issued,
ff.total_required,
ff.shipset,
ff.last_allocation AS Action_date,
ff.reference_no
FROM ff
LEFT OUTER JOIN bb
ON bb.order_no = ff.order_no
WHERE bb.order_no IS NOT NULL
UNION ALL
SELECT
c.contract AS Site, c.order_no AS Shop_Order, b.wo_no AS SR_No,
CAST('-' AS varchar2(40)) AS UserID,
CAST('-' AS varchar2(40)) AS User_Name,
CAST('02' AS number) AS Operation,
CAST('SO Printed' AS varchar2(40)) AS Action,
CAST('SOPRINT' AS varchar2(40)) AS Workcenter,
CAST('PKRPT' AS varchar2(40)) AS Next_Workcenter_no,
CAST('Pickreport' AS varchar2(40)) AS Next_WC_Description,
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
e.part_no AS Ship_Set,
TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')AS Action_Date,
f.cust_ref AS cust_ref
FROM ifsapp.shop_ord c
LEFT OUTER JOIN ifsapp.work_order_shop_ord b
ON b.order_no = c.order_no
LEFT OUTER JOIN ifsapp.document_text d
ON d.note_id = c.note_id
LEFT OUTER JOIN ifsapp.customer_order_line e
ON e.demand_order_ref1 = TRIM(to_char(b.wo_no))
LEFT OUTER JOIN ifsapp.customer_order f
ON f.order_no = e.order_no
JOIN a3
ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
WHERE c.contract = '12384'
AND e.contract = '12384'
AND d.objversion IS NOT NULL
UNION ALL
SELECT
a.site AS Site, a.order_no AS Shop_Order, b.wo_no AS SR_No, a.userid AS UserID,
a.user_name AS Name, a.operation_no AS Operation, a.action AS Action,
a.workcenter_no AS Workcenter, a.next_work_center_no AS Next_Workcenter_no,
(SELECT d.description FROM ifsapp.work_center d WHERE a.next_work_center_no = d.work_center_no AND a.site = d.contract)
AS Next_WC_Description,
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
e.part_no AS Ship_set,
a.action_date AS Action_Date, f.cust_ref AS cust_ref
FROM ifsapp.shop_ord c
LEFT OUTER JOIN ifsapp.work_order_shop_ord b
ON b.order_no = c.order_no
LEFT OUTER JOIN ifsapp.customer_order_line e
ON e.demand_order_ref1 = to_char(b.wo_no)
LEFT OUTER JOIN ifsapp.customer_order f
ON f.order_no = e.order_no
LEFT OUTER JOIN info.tp_hvt_so_op_hist a
ON a.order_no = c.order_no
JOIN a3
ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
WHERE a.site = '12384'
AND c.contract = '12384'
AND e.contract = '12384'
AND f.contract = '12384'
UNION ALL
SELECT so.contract AS Site, so.order_no AS Shop_Order_No, sr.service_request_no AS SR_No,
CAST('-' AS varchar2(40)) AS "User",
CAST('-' AS varchar2(40)) AS "Name",
CAST('999' AS number) AS "Operation",
CAST('Shipped' AS varchar2(40)) AS "Action",
CAST('SHIP' AS varchar2(40)) AS "Workcenter",
CAST('-' AS varchar2(40)) AS "Next_Workcenter_no",
CAST('-' AS varchar2(40)) AS "Next_WC_Description",
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
so.part_no AS ship_set, base.dated AS Action_Date,
sr.reference_no AS CUST_REF
FROM base
LEFT OUTER JOIN ifsapp.service_request sr
ON base.demand_order_ref1 = sr.service_request_no
LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
ON sr.service_request_no = ws.wo_no
LEFT OUTER JOIN ifsapp.shop_ord so
ON ws.order_no = so.order_no
WHERE base.contract = '12384'; -
Can I use Bulk Collect results as input parameter for another cursor
MUSIC ==> remote MUSIC_DB database, MUSIC table has 60 million rows
PRICE_DATA ==> remote PRICING_DB database, PRICE_DATE table has 1 billion rows
These two table once existed in same database, but size of database exceeded available hardware size and hardware budget, so the PRICE_DATA table was moved to another Oracle database. I need to create a single report that combines data from both of these tables, and a distributed join with DRIVING_SITE hint will not work because the size of both table is too large to push to one DRIVING_SITE location, so I wrote this PLSQL block to process in small blocks.
QUESTION: how can I use bulk collect from one cursor and pass that bulk collected information as input to a second cursor without specifically listing each cell of the PL/SQL bulk collection? See the sample pseudo-code below; I am trying to determine a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
NOTE: below is truly pseudo-code, I had to change the names of everything to adhere to NDA, but below works and is fast enough for my purposes, but if I want to change from 100 input parameters to 200, I have to add more hard-coded values. There has got to be a better way.
DECLARE
-- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
CURSOR C_CURRENT_MUSIC
IS
select distinct SONG_ID
from MUSIC@MUSIC_DB
where PRODUCTION_RELEASE = 1;
/* define a parameterized cursor that accepts 100 SONG_IDs and retrieves
required pricing information */
CURSOR C_get_music_price_data (
P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER)
IS
select
from PRICE_DATA@PRICING_DB vpc
where COUNTRY = 'USA'
and START_DATE <= sysdate
and END_DATE > sysdate
and vpc.SONG_ID IN (
P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100)
group by
vpc.SONG_ID
,vpc.STOREFRONT_ID;
TYPE SONG_ID_TYPE IS TABLE OF MUSIC@MUSIC_DB%TYPE INDEX BY BINARY_INTEGER;
V_SONG_ID_ARRAY SONG_ID_TYPE ;
v_commit_counter NUMBER := 0;
BEGIN
/* open cursor you intent to bulk collect from */
OPEN C_CURRENT_MUSIC;
LOOP
/* in batches of 100, bulk collect ADAM_ID mapped TMS_IDENTIFIER into PLSQL table or records */
FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
/* to avoid NO DATA FOUND error when pass 100 parameters to OPEN cursor, if the arrary
is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
V_SONG_ID_ARRAY(j) := null;
END LOOP;
END IF;
/* pass a batch of 100 to cursor that get price information per SONG_ID and STOREFRONT_ID */
FOR j IN C_get_music_price_data (
V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100))
LOOP
/* do stuff with data from Song and Pricing Database coming from the two
separate cursors, then continue processing more rows... */
END LOOP;
/* commit after each batch of 100 SONG_IDs is processed */
COMMIT;
EXIT WHEN C_CURRENT_MUSIC%NOTFOUND; -- exit when there are no more rows to fetch from cursor
END LOOP; -- bulk fetching loop
CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
/* commit rows */
COMMIT; -- commit any remaining uncommitted data.
END;
I've got a problem when passing a VARRAY of numbers as a parameter to a remote cursor: it takes a super long time to run, sometimes doesn't finish even after an hour has passed.
Continuing with my example in the original entry, I replaced the bulk collect into a PL/SQL table collection with a VARRAY, and I bulk collect into the VARRAY. This is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly. However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
The code example below strips the problem down to its raw details: I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then try to pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long, over 30 minutes, sometimes longer, when I am expecting seconds.
IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's where IN clause, the SQL runs in under 5 seconds and returns result. Also, if I pass the 100 SONG_IDs as individual cells of a PLSQL table collection, then it also runs fast.
I thought that since the VARRAY is used via a select subquery it is queried locally while the cursor is remote, and that I had a distributed-query problem on my hands, so I put in the DRIVING_SITE hint to attempt to force the result of the query against the VARRAY to go to the remote server so the rest of the query would run there before returning the result, but that didn't work either; I still got a slow response.
Is something wrong with my code, or am I running into an Oracle problem that may require support to resolve?
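One workaround worth trying, as a hedged sketch not taken from the thread: materialize each batch of IDs into a global temporary table and join to it across the link, so the query sees an ordinary table instead of a local collection (table name song_id_batch is illustrative):

```sql
-- illustrative staging table for the current batch of SONG_IDs
CREATE GLOBAL TEMPORARY TABLE song_id_batch (
   song_id NUMBER
) ON COMMIT DELETE ROWS;

-- per batch: load the IDs, then run the remote query as a plain join
INSERT INTO song_id_batch (song_id)
   SELECT column_value FROM TABLE(SYS.ODCInumberList(31135, 31140, 31142));

SELECT /*+ DRIVING_SITE(pd) */ pd.SONG_ID, pd.STOREFRONT_ID, COUNT(*)
  FROM PRICE_DATA@PRICING_DB pd, song_id_batch b
 WHERE pd.SONG_ID = b.song_id
   AND pd.COUNTRY = 'USA'
   AND pd.START_DATE <= SYSDATE
   AND pd.END_DATE > SYSDATE
 GROUP BY pd.SONG_ID, pd.STOREFRONT_ID;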
DECLARE
/* define a parameterized cursor that accepts XXX number of SONG_IDs and
retrieves required pricing information */
CURSOR C_get_music_price_data (
p_array_song_ids SYS.ODCInumberList)
IS
select /*+DRIVING_SITE(pd) */
count(distinct pd.EVE_ID)
from PRICE_DATA@PRICING_DB pd
where pd.COUNTRY = 'USA'
and pd.START_DATE <= sysdate
and pd.END_DATE > sysdate
and pd.SONG_ID IN
(select column_value from table(p_array_song_ids))
group by
pd.SONG_ID
,pd.STOREFRONT_ID;
V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();
BEGIN
V_ARRAY_SONG_IDS.EXTEND(100);
V_ARRAY_SONG_IDS( 1 ) := 31135 ;
V_ARRAY_SONG_IDS( 2 ) := 31140 ;
V_ARRAY_SONG_IDS( 3 ) := 31142 ;
V_ARRAY_SONG_IDS( 4 ) := 31144 ;
V_ARRAY_SONG_IDS( 5 ) := 31146 ;
V_ARRAY_SONG_IDS( 6 ) := 31148 ;
V_ARRAY_SONG_IDS( 7 ) := 31150 ;
V_ARRAY_SONG_IDS( 8 ) := 31152 ;
V_ARRAY_SONG_IDS( 9 ) := 31154 ;
V_ARRAY_SONG_IDS( 10 ) := 31156 ;
V_ARRAY_SONG_IDS( 11 ) := 31158 ;
V_ARRAY_SONG_IDS( 12 ) := 31160 ;
V_ARRAY_SONG_IDS( 13 ) := 33598 ;
V_ARRAY_SONG_IDS( 14 ) := 33603 ;
V_ARRAY_SONG_IDS( 15 ) := 33605 ;
V_ARRAY_SONG_IDS( 16 ) := 33607 ;
V_ARRAY_SONG_IDS( 17 ) := 33609 ;
V_ARRAY_SONG_IDS( 18 ) := 33611 ;
V_ARRAY_SONG_IDS( 19 ) := 33613 ;
V_ARRAY_SONG_IDS( 20 ) := 33615 ;
V_ARRAY_SONG_IDS( 21 ) := 33617 ;
V_ARRAY_SONG_IDS( 22 ) := 33630 ;
V_ARRAY_SONG_IDS( 23 ) := 33632 ;
V_ARRAY_SONG_IDS( 24 ) := 33636 ;
V_ARRAY_SONG_IDS( 25 ) := 33638 ;
V_ARRAY_SONG_IDS( 26 ) := 33640 ;
V_ARRAY_SONG_IDS( 27 ) := 33642 ;
V_ARRAY_SONG_IDS( 28 ) := 33644 ;
V_ARRAY_SONG_IDS( 29 ) := 33646 ;
V_ARRAY_SONG_IDS( 30 ) := 33648 ;
V_ARRAY_SONG_IDS( 31 ) := 33662 ;
V_ARRAY_SONG_IDS( 32 ) := 33667 ;
V_ARRAY_SONG_IDS( 33 ) := 33669 ;
V_ARRAY_SONG_IDS( 34 ) := 33671 ;
V_ARRAY_SONG_IDS( 35 ) := 33673 ;
V_ARRAY_SONG_IDS( 36 ) := 33675 ;
V_ARRAY_SONG_IDS( 37 ) := 33677 ;
V_ARRAY_SONG_IDS( 38 ) := 33679 ;
V_ARRAY_SONG_IDS( 39 ) := 33681 ;
V_ARRAY_SONG_IDS( 40 ) := 33683 ;
V_ARRAY_SONG_IDS( 41 ) := 33685 ;
V_ARRAY_SONG_IDS( 42 ) := 33700 ;
V_ARRAY_SONG_IDS( 43 ) := 33702 ;
V_ARRAY_SONG_IDS( 44 ) := 33704 ;
V_ARRAY_SONG_IDS( 45 ) := 33706 ;
V_ARRAY_SONG_IDS( 46 ) := 33708 ;
V_ARRAY_SONG_IDS( 47 ) := 33710 ;
V_ARRAY_SONG_IDS( 48 ) := 33712 ;
V_ARRAY_SONG_IDS( 49 ) := 33723 ;
V_ARRAY_SONG_IDS( 50 ) := 33725 ;
V_ARRAY_SONG_IDS( 51 ) := 33727 ;
V_ARRAY_SONG_IDS( 52 ) := 33729 ;
V_ARRAY_SONG_IDS( 53 ) := 33731 ;
V_ARRAY_SONG_IDS( 54 ) := 33733 ;
V_ARRAY_SONG_IDS( 55 ) := 33735 ;
V_ARRAY_SONG_IDS( 56 ) := 33737 ;
V_ARRAY_SONG_IDS( 57 ) := 33749 ;
V_ARRAY_SONG_IDS( 58 ) := 33751 ;
V_ARRAY_SONG_IDS( 59 ) := 33753 ;
V_ARRAY_SONG_IDS( 60 ) := 33755 ;
V_ARRAY_SONG_IDS( 61 ) := 33757 ;
V_ARRAY_SONG_IDS( 62 ) := 33759 ;
V_ARRAY_SONG_IDS( 63 ) := 33761 ;
V_ARRAY_SONG_IDS( 64 ) := 33763 ;
V_ARRAY_SONG_IDS( 65 ) := 33775 ;
V_ARRAY_SONG_IDS( 66 ) := 33777 ;
V_ARRAY_SONG_IDS( 67 ) := 33779 ;
V_ARRAY_SONG_IDS( 68 ) := 33781 ;
V_ARRAY_SONG_IDS( 69 ) := 33783 ;
V_ARRAY_SONG_IDS( 70 ) := 33785 ;
V_ARRAY_SONG_IDS( 71 ) := 33787 ;
V_ARRAY_SONG_IDS( 72 ) := 33789 ;
V_ARRAY_SONG_IDS( 73 ) := 33791 ;
V_ARRAY_SONG_IDS( 74 ) := 33793 ;
V_ARRAY_SONG_IDS( 75 ) := 33807 ;
V_ARRAY_SONG_IDS( 76 ) := 33809 ;
V_ARRAY_SONG_IDS( 77 ) := 33811 ;
V_ARRAY_SONG_IDS( 78 ) := 33813 ;
V_ARRAY_SONG_IDS( 79 ) := 33815 ;
V_ARRAY_SONG_IDS( 80 ) := 33817 ;
V_ARRAY_SONG_IDS( 81 ) := 33819 ;
V_ARRAY_SONG_IDS( 82 ) := 33821 ;
V_ARRAY_SONG_IDS( 83 ) := 33823 ;
V_ARRAY_SONG_IDS( 84 ) := 33825 ;
V_ARRAY_SONG_IDS( 85 ) := 33839 ;
V_ARRAY_SONG_IDS( 86 ) := 33844 ;
V_ARRAY_SONG_IDS( 87 ) := 33846 ;
V_ARRAY_SONG_IDS( 88 ) := 33848 ;
V_ARRAY_SONG_IDS( 89 ) := 33850 ;
V_ARRAY_SONG_IDS( 90 ) := 33852 ;
V_ARRAY_SONG_IDS( 91 ) := 33854 ;
V_ARRAY_SONG_IDS( 92 ) := 33856 ;
V_ARRAY_SONG_IDS( 93 ) := 33858 ;
V_ARRAY_SONG_IDS( 94 ) := 33860 ;
V_ARRAY_SONG_IDS( 95 ) := 33874 ;
V_ARRAY_SONG_IDS( 96 ) := 33879 ;
V_ARRAY_SONG_IDS( 97 ) := 33881 ;
V_ARRAY_SONG_IDS( 98 ) := 33883 ;
V_ARRAY_SONG_IDS( 99 ) := 33885 ;
V_ARRAY_SONG_IDS(100 ) := 33889 ;
/* do stuff with data from Song and Pricing Database coming from the two
separate cursors, then continue processing more rows... */
FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
. (this is the loop where I pass in v_array_song_ids
. populated with only 100 cells and it runs forever)
END LOOP;
END; -
Bulk collect forall vs single merge statement
I understand that a single DML statement is better than using BULK COLLECT + FORALL with intermediate commits. My only concern is that if I'm loading a large amount of data, like 100 million records into an 800 million record table with foreign keys and indexes, and the session gets killed, the rollback might take a long time, which is not acceptable. Using BULK COLLECT + FORALL with interval commits is slower than a single straight MERGE statement, but in the case of a dead session the rollback won't take as long, and reloading the not-yet-committed data will not be as bad either. To design a recoverable data load, is BULK COLLECT + FORALL the right approach?
1. specifics about the actual data available
2. the location/source of the data
3. whether NOLOGGING is appropriate
4. whether PARALLEL is an option
1. I need to transform the data first, so I can build the staging tables to have the same structure as the tables I'm loading into.
2. It's in the same database (11.2)
3. Cannot use NOLOGGING or APPEND because I need to allow DML in the target table and I can't use NOLOGGING because I cannot afford to lose the data in case of failure.
4. PARALLEL is an option. I've done some research on DBMS_PARALLEL_EXECUTE and it sounds very cool. Can this be used to load to two tables? I have a parent child tables. I can chunk the data and load these two tables separately, but the only requirement would be that I need to commit together. I cannot load a chunk into the parent table and commit before I load the corresponding chunk into its child table. Can this be done using DBMS_PARALLEL_EXECUTE? If so, I think this would be the perfect solution since it looks like it's exactly what I'm looking for. However, if this doesn't work, is bulk collect + for all the best option I am left with?
What is the underlying technology of DBMS_PARALLEL_EXECUTE? -
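A minimal sketch of the DBMS_PARALLEL_EXECUTE chunk-by-ROWID pattern asked about above (hedged; the table name, task name, and my_pkg.load_chunk procedure are all illustrative, with the parent and child inserts plus the commit happening inside the one block run per chunk):

```sql
BEGIN
   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'load_task');

   -- split the source table into ROWID ranges of roughly 10000 rows each
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
      task_name   => 'load_task',
      table_owner => USER,
      table_name  => 'SRC_TABLE',
      by_row      => TRUE,
      chunk_size  => 10000);

   -- each chunk runs this block with its own :start_id/:end_id ROWID pair;
   -- the chunk's parent and child rows commit together inside load_chunk
   DBMS_PARALLEL_EXECUTE.RUN_TASK(
      task_name      => 'load_task',
      sql_stmt       => 'BEGIN my_pkg.load_chunk(:start_id, :end_id); END;',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 4);

   DBMS_PARALLEL_EXECUTE.DROP_TASK('load_task');
END;
/
```

As to the underlying technology: to my understanding, DBMS_PARALLEL_EXECUTE runs its chunks as scheduler jobs, so each chunk is an independent session with its own transaction.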
Approach of using Bulk Collect
Hi Experts,
How can I use BULK COLLECT for an uncertain number of columns in a SELECT statement?
Master table structure:
Create table tabmst
(id number,
cls_input varchar2(2000),
price number);
insert into tabmst values (1, 'select product, price from product', 500);
insert into tabmst values (2, 'select product, price, purchase_dt from product', 100);
insert into tabmst values (3, 'select * from product', 1000);
Currently I want to store Select statement of cls_input column in a local variable like
dyn_qry:= cls_input; by using a cursor.
Now my question is how to use BULK COLLECT via "Execute Immediate" into a bulk collect variable, as there is no certainty about the number of columns the SELECT statement will return. Please suggest.
Sample code:
I created a TYPE variable, blk_var, to support the BULK COLLECT.
Declare
dyn_qry varchar2(3000);
cursor c1 is select * from tabmst;
begin
for i in c1 loop
dyn_qry := i.cls_input;
Execute immediate dyn_qry into blk_var;
End Loop;
End;
Now I want to store the values of the columns of each SELECT statement executed by the dynamic SQL, but it is uncertain how many columns the dynamic SQL will return.
Please suggest an approach. Thanks in advance.
I don't think you can use bulk collect with EXECUTE IMMEDIATE. They do two different things. EXECUTE IMMEDIATE allows the execution of dynamic SQL. BULK COLLECT provides optimization of SELECT statements when loading the contents into collections. I am not aware of any support for BULK COLLECT with EXECUTE IMMEDIATE.
You may be able to do this a different way. If you must use dynamic SQL (I suggest you don't unless it is absolutely necessary. Dynamic SQL is hard to write, hard to debug, hard to maintain, and hard to tune) use a reference cursor instead. You can use the BULK COLLECT with the standard fetch. -
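When the column list truly is not known until runtime, the DBMS_SQL package can describe and fetch the result set generically. A hedged sketch that prints every column of every row as text (the statement string is illustrative):

```sql
DECLARE
   c        INTEGER := DBMS_SQL.OPEN_CURSOR;
   col_cnt  INTEGER;
   desc_tab DBMS_SQL.DESC_TAB;
   val      VARCHAR2(4000);
   status   INTEGER;
BEGIN
   DBMS_SQL.PARSE(c, 'select * from product', DBMS_SQL.NATIVE);
   DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, desc_tab);

   -- define every column as VARCHAR2 so the fetch stays generic
   FOR i IN 1 .. col_cnt LOOP
      DBMS_SQL.DEFINE_COLUMN(c, i, val, 4000);
   END LOOP;

   status := DBMS_SQL.EXECUTE(c);
   WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
      FOR i IN 1 .. col_cnt LOOP
         DBMS_SQL.COLUMN_VALUE(c, i, val);
         DBMS_OUTPUT.PUT_LINE(desc_tab(i).col_name || ' = ' || val);
      END LOOP;
   END LOOP;
   DBMS_SQL.CLOSE_CURSOR(c);
END;
/
```

DBMS_SQL trades away BULK COLLECT's array fetch, but it is the standard tool when the shape of the result set is only known at runtime.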
Using bulk collect and for all to solve a problem
Hi All
I have the following problem.
Please forgive me if it's a stupid question :-) I'm learning.
1: Data in a staging table xx_staging_table
2: two Target table t1, t2 where some columns from xx_staging_table are inserted into
Some of the columns from the staging table data are checked for valid entries and then some columns from that row will be loaded into the two target tables.
The two target tables use different set of columns from the staging table
When I had a thousand records there was no problem with a direct insert but it seems we will now have half a million records.
This has slowed down the process considerably.
My question is
Can I use the BULK COLLECT and FORALL functionality to get specific columns from a staging table, validate the row using those columns,
and then use a bulk insert to load the data into a specific table?
So code would be like
get_staging_data cursor will have all the columns i need from the staging table
cursor get_staging_data
is select * from xx_staging_table (about 500000) records
Use bulk collect to load about 10000 or so records into a plsql table
and then do a bulk insert like this
CREATE TABLE t1 AS SELECT * FROM all_objects WHERE 1 = 2;

CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
  TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
  l_data ARRAY;
  CURSOR c IS SELECT * FROM all_objects;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1 .. l_data.COUNT
      INSERT INTO t1 VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END test_proc;
In the above example t1 and the cursor have the same number of columns.
In my case the columns in the cursor are a small subset of the columns of table t1,
so can I use a FORALL to load that subset into table t1? How does that work?
Thanks
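The subset case in the question can be handled by collecting only the needed columns and naming the target columns explicitly in the insert; t1's remaining columns stay NULL or take their defaults. A hedged sketch using separate scalar collections (the staging column names order_id and cust_name are made up for illustration; INDICES OF requires Oracle 10g or later):

```sql
DECLARE
  TYPE id_tab_t   IS TABLE OF xx_staging_table.order_id%TYPE;
  TYPE name_tab_t IS TABLE OF xx_staging_table.cust_name%TYPE;
  l_ids   id_tab_t;
  l_names name_tab_t;

  -- Select only the columns t1 actually needs
  CURSOR c IS SELECT order_id, cust_name FROM xx_staging_table;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids, l_names LIMIT 10000;

    -- Validate in PL/SQL; drop rows that fail the business checks
    FOR i IN 1 .. l_ids.COUNT LOOP
      IF l_ids(i) IS NULL THEN
        l_ids.DELETE(i);
        l_names.DELETE(i);
      END IF;
    END LOOP;

    -- INDICES OF skips the deleted (sparse) entries;
    -- the explicit column list covers only the subset being loaded
    FORALL i IN INDICES OF l_ids
      INSERT INTO t1 (order_id, cust_name)
      VALUES (l_ids(i), l_names(i));

    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;
```

Parallel scalar collections avoid the older restriction (PLS-00436) on referencing individual record fields inside a FORALL statement.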
Juser7348303 wrote:
checking if the value is valid and there's also some conditional processing rules (such as if the value is a certain value no inserts are needed) which are a little more complex than I can put in a simple...
Well, if the processing is too complex (and conditional) to be done in SQL, then doing that in PL/SQL is justified... but it will be slower, as you are now introducing an additional layer. Data now needs to travel between the SQL layer and the PL/SQL layer, and that round trip costs time.
PL/SQL is inherently serialised, and this also affects performance and scalability. PL/SQL cannot be parallelised by Oracle in an automated fashion; SQL processes can be.
To put it in simple terms: you create a PL/SQL procedure Foo that processes a SQL cursor, and you execute that proc. Oracle cannot run multiple parallel copies of Foo. It can perhaps parallelise the SQL cursor that Foo uses, but not Foo itself.
However, if Foo is called by the SQL engine, it can run in parallel, because the SQL process calling Foo is running in parallel. So if you make Foo a pipelined table function (written in PL/SQL), and you design and code it as a thread-safe, parallel-enabled function, it can be called and executed in parallel by the SQL engine.
So moving your PL/SQL code into a parallel-enabled pipelined function written in PL/SQL, and using that function via parallel SQL, can increase performance over running the same basic PL/SQL processing as a serialised process.
This of course assumes that the processing that needs to be done in PL/SQL can be designed and coded for parallel processing in this fashion.
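The shape described above can be sketched roughly as follows. All names here are illustrative, and the single IS NOT NULL check stands in for the real conditional logic; PARTITION BY ANY lets Oracle distribute the cursor's rows across parallel slaves in any order:

```sql
-- SQL object types the pipelined function streams out
CREATE OR REPLACE TYPE xx_out_row AS OBJECT (order_id NUMBER, amount NUMBER);
/
CREATE OR REPLACE TYPE xx_out_tab AS TABLE OF xx_out_row;
/
CREATE OR REPLACE FUNCTION xx_transform (p_cur SYS_REFCURSOR)
  RETURN xx_out_tab
  PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)
IS
  l_id     NUMBER;
  l_amount NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO l_id, l_amount;
    EXIT WHEN p_cur%NOTFOUND;
    IF l_amount IS NOT NULL THEN    -- the complex validation goes here
      PIPE ROW (xx_out_row(l_id, l_amount));
    END IF;
  END LOOP;
  CLOSE p_cur;
  RETURN;
END;
/
-- Driven by parallel SQL, so multiple copies of the function run at once
INSERT /*+ APPEND PARALLEL(t1 4) */ INTO t1 (order_id, amount)
SELECT order_id, amount
  FROM TABLE(xx_transform(CURSOR(
         SELECT /*+ PARALLEL(s 4) */ order_id, amount
           FROM xx_staging_table s)));
```

The degree of parallelism (4 here) is an arbitrary choice; Oracle decides the actual degree based on the instance's parallel settings.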