BULK COLLECT Issues in 10g
I have a procedure that runs a function which uses BULK COLLECT INTO on a 9i database. When I try to run the same procedure with the same function on 10g, it fails. I can't figure out how to find the exact error message but have narrowed it down to the following:
In the main procedure, I have this:
Select Count(ApprId) into v_cnt
From TABLE(udf_OverallPerms(v_camid))
Where ApprId in
(Select M_ApprId
From PM_MetricDefApproved Where M_AreaValueId=v_areaValueId and M_ParentId = 0
UNION
Select MDAC.M_ApprId
From PM_MetricDefApproved MDAC,
PM_MetricDefApproved MDAP
Where MDAC.M_ParentId=MDAP.M_ApprId
and MDAP.M_AreaValueId=v_areaValueId and MDAC.M_ParentId > 0);
The code for the function is as follows:
CREATE OR REPLACE FUNCTION udf_OverallPerms(v_UserCAMID IN VARCHAR2)
RETURN OVERALL_TBL
AS
v_data OVERALL_TBL := OVERALL_TBL();
BEGIN
Select OVERALL_REC(ApprId, MAX(bRead), MAX(bWrite),
MAX(bApprove), MAX(bDelete),
MAX(bAdmin), MAX(bView), MAX(bViewWrite))
BULK COLLECT INTO v_data
From vwObjectRoles
Where (ObjectId IN
(Select GroupId as ObjectId from vwUsersInGroups
Where UserCAMID=v_UserCAMID)
AND ObjectTypeId=(Select ObjectTypeId From PM_ObjectTypes Where ObjectTypeCode='GRP'))
OR
(ObjectId =
(Select UserId as ObjectId FROM PM_UserLookup
Where UserCAMID=v_UserCAMID)
AND ObjectTypeId=(Select ObjectTypeId From PM_ObjectTypes Where ObjectTypeCode='USER'))
Group By ApprId;
return v_data;
END;
Does anyone know of any issues in 10g with this? I have been banging my head on this for a day now.
Thanks,
Greg
OK. I managed to get the exception. Not sure whether it helps or not.
ORA-02055: distributed update operation failed; rollback required
ORA-22905: cannot access rows from a non-nested table item
ORA-06512: at "C83_WEBDEV_PERFORM.USP_C8IMPORT_POLICY", line 179
ORA-06512: at line 17
Version is 10.2.0.1.
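For what it's worth, ORA-22905 usually means the SQL engine could not work out the collection type behind the TABLE() expression (and the ORA-02055 above suggests the statement runs across a database link, where type resolution is stricter). A commonly suggested workaround, assuming OVERALL_TBL is a SQL-level type, is an explicit CAST:

```sql
-- Sketch only: the same query as above, with the collection type stated explicitly.
Select Count(ApprId) into v_cnt
From TABLE(CAST(udf_OverallPerms(v_camid) AS OVERALL_TBL))
Where ApprId in
 (Select M_ApprId
  From PM_MetricDefApproved Where M_AreaValueId=v_areaValueId and M_ParentId = 0
  UNION
  Select MDAC.M_ApprId
  From PM_MetricDefApproved MDAC, PM_MetricDefApproved MDAP
  Where MDAC.M_ParentId=MDAP.M_ApprId
  and MDAP.M_AreaValueId=v_areaValueId and MDAC.M_ParentId > 0);
```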
Similar Messages
-
Issue in using Cursor+Dynamic SQL+ Bulk collect +FORALL
Hi,
I have a dynamic query which I need to use as a cursor to fetch records that in turn need to be inserted into a staging table.
The issue I am facing is I am not sure how to declare the variable to fetch the records into. Since I am using a dynamic cursor how do I declare it?
My code looks something like this -
TYPE c_details_tbl_type IS REF CURSOR;
c_details c_details_tbl_type;
TYPE c_det_tbl_type IS TABLE OF c_details%ROWTYPE INDEX BY PLS_INTEGER;
c_det_tbl c_det_tbl_type; -- ???
BEGIN
v_string1 := 'SELECT....'
v_string2 := ' UNION ALL SELECT....'
v_string3 := 'AND ....'
v_string := v_string1||v_string2||v_string3;
OPEN c_details FOR v_string;
LOOP
FETCH c_details BULK COLLECT
INTO c_det_tbl LIMIT 1000;
IF (c_det_tbl.COUNT > 0) THEN
DELETE FROM STG;
FORALL i IN 1..c_det_tbl.COUNT
INSERT INTO STG
VALUES (c_det_tbl(i));
END IF;
EXIT WHEN c_details%NOTFOUND;
END LOOP;
CLOSE c_details;
END
Thanks
Why the bulk collect? All that this does is slow down the read process (SELECT) and write process (INSERT).
Data selected needs (as a collection) to be pushed into the PGA memory of the PL/SQL engine. And then that very same data needs to be pushed again by the PL/SQL engine back to the database to be inserted. Why?
It is a lot faster, needs a lot less resources, with fewer moving parts, to simply instruct the SQL engine to do both these steps using a single INSERT..SELECT statement. And this can support parallel DML too for scalability when data volumes get large.
It is also pretty easy to make a single SQL statement like this dynamic and even support bind variables.
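A minimal sketch of that single-statement approach for the dynamic query above (STG and v_string as in the original post):

```sql
BEGIN
  DELETE FROM stg;
  -- The SQL engine reads and writes in one pass; no collection round trip
  -- through PL/SQL PGA memory, and no FORALL needed.
  EXECUTE IMMEDIATE 'INSERT INTO stg ' || v_string;
  COMMIT;
END;
```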
Simplicity is the ultimate form of elegance. Pushing data needlessly around is not simple and thus not a very elegant way to address the problem. -
Issue with Analytical Functions, Ref Cursor and Bulk Collect
Hi All
pls go through the following code
declare
type salt is table of emp.sal%type index by binary_integer;
st salt;
type refc is ref cursor;
rc refc;
begin
open rc for 'select max(sal) over (Partition by deptno) as Sal from emp';
fetch rc bulk collect into st;
close rc;
for i in st.first..st.last loop
dbms_output.put_LINE(st(i));
end loop;
end;
When I execute the above code, the following error comes up:
declare
ERROR at line 1:
ORA-01001: invalid cursor
ORA-06512: at line 8
Since analytical functions are not supported in PL/SQL, I used the ref cursor, but these records are not allowed to be collected into a PL/SQL table.
Can anyone send a workaround
to insert records into a PL/SQL table from an analytical function?
Thanks for Reading the Request
Raj Ganga
mail : [email protected]
Just ran it exactly as it is. It works.
I am on 9i. Which version are you using?
SQL> declare
2 type salt is table of emp.sal%type index by binary_integer;
3 st salt;
4 type refc is ref cursor;
5 rc refc;
6 begin
7 open rc for 'select max(sal) over (Partition by deptno) as Sal from emp';
8 fetch rc bulk collect into st;
9 close rc;
10 for i in st.first..st.last loop
11 dbms_output.put_LINE(st(i));
12 end loop;
13 end;
14 /
PL/SQL procedure successfully completed.
SQL> set serveroutput on
SQL> /
5000
5000
5000
3000
3000
3000
3000
3000
2850
2850
2850
2850
2850
2850
PL/SQL procedure successfully completed.
-
Bulk collect into compound array
Hi guys,
just having a bit of an issue here. here's what I've got so far (simplified):
declare
type t_rec is record(foo_rec foo%rowtype,
id number);
type t_tab is table of t_rec index by pls_integer;
w_footab t_tab;
begin
select foo.*,
3
bulk collect into w_footab
from foo;
end;
but I get the error PLS-00597: expression 'W_FOOTAB' in the INTO list is of wrong type.
oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
Hi WhiteHat,
You can't select into a layered PL/SQL structure like this.
SQL> create table foo (col1,col2) as select 1,2 from dual
2 /
Table created.
SQL> declare
2 type t_rec is record
3 ( foo_rec foo%rowtype
4 , id number
5 );
6 type t_tab is table of t_rec index by pls_integer;
7 w_footab t_tab;
8 begin
9 select foo.*
10 , 3
11 bulk collect into w_footab
12 from foo
13 ;
14 end;
15 /
bulk collect into w_footab
ERROR at line 11:
ORA-06550: line 11, column 28:
PLS-00597: expression 'W_FOOTAB' in the INTO list is of wrong type
ORA-06550: line 12, column 5:
PL/SQL: ORA-00904: : invalid identifier
ORA-06550: line 9, column 3:
PL/SQL: SQL Statement ignored
When you flatten it out, it works:
SQL> declare
2 type t_rec is record
3 ( col1 foo.col1%type
4 , col2 foo.col2%type
5 , id number
6 );
7 type t_tab is table of t_rec index by pls_integer;
8 w_footab t_tab;
9 begin
10 select foo.*
11 , 3
12 bulk collect into w_footab
13 from foo
14 ;
15 end;
16 /
PL/SQL procedure successfully completed.
Or use SQL types, and you can use any structure you want:
SQL> create type foorec is object
2 ( foo_col1 number
3 , foo_col2 number
4 );
5 /
Type created.
SQL> declare
2 type t_rec is record
3 ( foo_rec foorec
4 , id number
5 );
6 type t_tab is table of t_rec index by pls_integer;
7 w_footab t_tab;
8 begin
9 select foorec(foo.col1,foo.col2)
10 , 3
11 bulk collect into w_footab
12 from foo
13 ;
14 end;
15 /
PL/SQL procedure successfully completed.
Regards,
Rob.
-
How to use table type in bulk collect
Hi experts,
How do I use a table type in BULK COLLECT? See the procedure used (Oracle 10g).
The error is:
PLS-00597: expression 'REQ_REC' in the INTO list is of wrong type
CREATE OR REPLACE PROCEDURE SAMPLE_SP IS
TYPE TYP_A AS OBJECT
( COLMN1 TABLE1.COLM1%TYPE,
COLMN2 TABLE1.COLM2%TYPE,
COLMN3 TABLE1.COLM3%TYPE
TYPE REC_A IS TABLE OF TYP_A;
REQ_REC A_REC;
CURSOR REQ_CUR IS SELECT COLM1,COLM2,COLM3 FROM TABLE1 WHERE <CONDITION>;
BEGIN
OPEN REQ_REC;
LOOP
EXIT WHEN REQ_REC%NOTFOUND;
FETCH REQ_REC BULK COLLECT INTO REQ_REC LIMIT 1000;
FOR I IN 1..REQ_REC.COUNT
LOOP
<insert statement>
END LOOP;
COMMIT;
END LOOP;
END SAMPLE_SP;
Many thanks,
Kalinga
ok but that is not an issue..
Hi experts,
How do I use a table type in BULK COLLECT? See the procedure used (Oracle 10g).
The error is:
PLS-00597: expression 'REQ_REC' in the INTO list is of wrong type
CREATE OR REPLACE PROCEDURE SAMPLE_SP IS
TYPE TYP_A AS OBJECT
( COLMN1 TABLE1.COLM1%TYPE,
COLMN2 TABLE1.COLM2%TYPE,
COLMN3 TABLE1.COLM3%TYPE
TYPE REC_A IS TABLE OF TYP_A;
REQ_REC A_REC;
CURSOR REQ_CUR IS SELECT COLM1,COLM2,COLM3 FROM TABLE1 WHERE <CONDITION>;
BEGIN
OPEN REQ_CUR;
LOOP
EXIT WHEN REQ_REC%NOTFOUND;
FETCH REQ_REC BULK COLLECT INTO REQ_REC LIMIT 1000;
FOR I IN 1..REQ_REC.COUNT
LOOP
<insert statement>
END LOOP;
COMMIT;
END LOOP;
END SAMPLE_SP;
Many thanks,
Kalinga
Message was edited by:
Kalinga -
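For comparison, a version of that procedure that compiles cleanly might look like the sketch below (TYPE ... AS OBJECT is not valid inside PL/SQL, the cursor is opened and fetched under the wrong name, and the fetch target must be a collection of records). Table and column names are carried over from the post:

```sql
CREATE OR REPLACE PROCEDURE sample_sp IS
  -- Use a RECORD (not an object type) for an in-PL/SQL row structure.
  TYPE typ_a IS RECORD (
    colmn1 table1.colm1%TYPE,
    colmn2 table1.colm2%TYPE,
    colmn3 table1.colm3%TYPE);
  TYPE rec_a IS TABLE OF typ_a;
  req_rec rec_a;
  CURSOR req_cur IS
    SELECT colm1, colm2, colm3 FROM table1;  -- add the WHERE condition as needed
BEGIN
  OPEN req_cur;
  LOOP
    FETCH req_cur BULK COLLECT INTO req_rec LIMIT 1000;
    EXIT WHEN req_rec.COUNT = 0;  -- test the collection, not %NOTFOUND, here
    FOR i IN 1 .. req_rec.COUNT LOOP
      NULL;  -- insert statement goes here
    END LOOP;
  END LOOP;
  CLOSE req_cur;
  COMMIT;
END sample_sp;
```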
Hi all,
I have a performance issue in the code below, where I am trying to insert data from table_stg into the target_tab and parent_tab tables, and then into child tables, via a cursor with BULK COLLECT. target_tab and parent_tab are huge tables and have a row-level trigger enabled on them; the trigger is mandatory. This block currently takes 5000 seconds to execute, and my requirement is to reduce that to 5 to 10 minutes.
Can someone please guide me here? It's a bit urgent. Awaiting your response.
declare
vmax_Value NUMBER(5);
vcnt number(10);
id_val number(20);
pc_id number(15);
vtable_nm VARCHAR2(100);
vstep_no VARCHAR2(10);
vsql_code VARCHAR2(10);
vsql_errm varchar2(200);
vtarget_starttime timestamp;
limit_in number :=10000;
idx number(10);
cursor stg_cursor is
select
DESCRIPTION,
SORT_CODE,
ACCOUNT_NUMBER,
to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
PARTY_ID,
GUID,
PAPERLESS_REF_IND,
PRODUCT_TYPE,
PRODUCT_BRAND,
PRODUCT_HELD_ID,
NOTIFICATION_PREF,
UNREAD_CORRES_PERIOD,
EMAIL_ID,
MOBILE_NUMBER,
TITLE,
SURNAME,
POSTCODE,
EVENT_TYPE,
PRIORITY_IND,
SUBJECT,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD
from table_stg;
type rec_type is table of stg_rec_type index by pls_integer;
v_rt_all_cols rec_type;
BEGIN
vstep_no := '0';
vmax_value := 0;
vtarget_starttime := systimestamp;
id_val := 0;
pc_id := 0;
success_flag := 0;
vstep_no := '1';
vtable_nm := 'before cursor';
OPEN stg_cursor;
vstep_no := '2';
vtable_nm := 'After cursor';
LOOP
vstep_no := '3';
vtable_nm := 'before fetch';
--loop
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
FOR i IN 1 .. v_rt_all_cols.COUNT
LOOP
dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
vstep_no := '4.1';
vtable_nm := 'before seq sel';
select PC_SEQ.nextval into pc_id from dual;
vstep_no := '4.2';
vtable_nm := 'before insert corres';
INSERT INTO target1_tab
(ID,
PARTY_ID,
PRODUCT_BRAND,
SORT_CODE,
ACCOUNT_NUMBER,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD,
status)
VALUES
(pc_id,
v_rt_all_cols(i).party_id,
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
v_rt_all_cols(i).sort_code,
'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
v_rt_all_cols(i).EXT_PRD_ID_TX,
v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
v_rt_all_cols(i).EXT_SYS_ID,
v_rt_all_cols(i).EXT_PTY_ID_TX,
v_rt_all_cols(i).ACCOUNT_TYPE_CD,
v_rt_all_cols(i).COM_PFR_TYP_TX,
v_rt_all_cols(i).COM_PFR_OPT_TX,
v_rt_all_cols(i).COM_PFR_RSN_CD,
NULL);
vstep_no := '4.3';
vtable_nm := 'after insert corres';
else
select COM_SEQ.nextval into id_val from dual;
vstep_no := '6';
vtable_nm := 'before insertcomm';
if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
vstep_no := '6.01';
vtable_nm := 'after if insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'IADAREMINDER',
systimestamp,
systimestamp,
0);
else
vstep_no := '6.02';
vtable_nm := 'after else insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'CORRESPONDENCE',
systimestamp,
systimestamp,
0);
END if;
vstep_no := '6.11';
vtable_nm := 'before chop';
if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then
v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.AccountNumberMasked',
v_rt_all_cols(i).ACCOUNT_NUMBER);
end if;
vstep_no := '6.1';
vtable_nm := 'before stateday';
if (v_rt_all_cols(i).crr_day is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Day',
'IB.Crsp.Date.Day',
v_rt_all_cols(i).crr_day);
end if;
vstep_no := '6.2';
vtable_nm := 'before statemth';
if (v_rt_all_cols(i).crr_month is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Month',
'IB.Crsp.Date.Month',
v_rt_all_cols(i).crr_month);
end if;
vstep_no := '6.3';
vtable_nm := 'before stateyear';
if (v_rt_all_cols(i).crr_year is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Year',
'IB.Crsp.Date.Year',
v_rt_all_cols(i).crr_year);
end if;
vstep_no := '7';
vtable_nm := 'before type';
if (v_rt_all_cols(i).product_type is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Product.ProductName',
v_rt_all_cols(i).product_type);
end if;
vstep_no := '9';
vtable_nm := 'before title';
if (trim(v_rt_all_cols(i).title) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE )
values
(id_val,
'IB.Customer.Title',
trim(v_rt_all_cols(i).title));
end if;
vstep_no := '10';
vtable_nm := 'before surname';
if (v_rt_all_cols(i).surname is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.LastName',
v_rt_all_cols(i).surname);
end if;
vstep_no := '12';
vtable_nm := 'before postcd';
if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.Addr.PostCodeMasked',
substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
end if;
vstep_no := '13';
vtable_nm := 'before subject';
if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Subject',
v_rt_all_cols(i).subject);
end if;
vstep_no := '14';
vtable_nm := 'before inactivity';
if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Inactivity',
v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
end if;
vstep_no := '14.1';
vtable_nm := 'after notfound';
end if;
vstep_no := '15';
vtable_nm := 'after notfound';
END LOOP;
end loop;
vstep_no := '16';
vtable_nm := 'before closecur';
CLOSE stg_cursor;
vstep_no := '17';
vtable_nm := 'before commit';
DELETE FROM table_stg;
COMMIT;
vstep_no := '18';
vtable_nm := 'after commit';
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
success_flag := 1;
vsql_code := SQLCODE;
vsql_errm := SUBSTR(sqlerrm,1,200);
error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
end;
Thanks
Its bit urgent
NO - it is NOT urgent. Not to us.
If you have an urgent problem you need to hire a consultant.
I have a performance issue in the below code,
Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.
where I am trying to insert data from table_stg into the target_tab and parent_tab tables, and then into child tables, via a cursor with BULK COLLECT. target_tab and parent_tab are huge tables and have a row-level trigger enabled on them; the trigger is mandatory. This block currently takes 5000 seconds to execute, and my requirement is to reduce that to 5 to 10 minutes.
Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
This code has the EXIT statement at the beginning of the loop instead of at the end
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.
You can use a COUNT test at the start of the loop but ironically you have commented it out and have now done it wrong. Either move the NOTFOUND test to the end of the loop or remove it and uncomment the COUNT test.
WHEN OTHERS THEN
ROLLBACK;
That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
Your code also has NO exception handling for any of the individual steps or blocks of code.
The code you posted also begs the question of why you are using NAME=VALUE pairs for child data rows? Why aren't you using a standard relational table for this data?
As others have noted you are using slow-by-slow (row by row processing). Let's assume that PL/SQL, the bulk collect and row-by-row is actually necessary.
Then you should be constructing the parent and child records into collections and then inserting them in BULK using FORALL.
1. Create a collection for the new parent rows
2. Create a collection for the new child rows
3. For each set of LIMIT source row data
a. empty the parent and child collections
b. populate those collections with new parent/child data
c. bulk insert the parent collection into the parent table
d. bulk insert the child collection into the child table
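The steps above might be sketched like this, reusing stg_cursor, v_rt_all_cols and limit_in from the original block (collection names here are illustrative, not from the original code):

```sql
DECLARE
  TYPE parent_tab_t IS TABLE OF parent_tab%ROWTYPE;
  TYPE child_tab_t  IS TABLE OF child_tab%ROWTYPE;
  v_parents  parent_tab_t := parent_tab_t();
  v_children child_tab_t  := child_tab_t();
BEGIN
  OPEN stg_cursor;
  LOOP
    FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
    -- a. empty the parent and child collections for this batch
    v_parents.DELETE;
    v_children.DELETE;
    -- b. populate them row by row (pure PL/SQL work, no SQL calls here)
    FOR i IN 1 .. v_rt_all_cols.COUNT LOOP
      NULL;  -- EXTEND and fill v_parents/v_children from v_rt_all_cols(i)
    END LOOP;
    -- c./d. two bulk inserts per batch instead of many single-row inserts;
    -- with SAVE EXCEPTIONS, catch ORA-24381 to report only the dirty rows
    FORALL i IN 1 .. v_parents.COUNT SAVE EXCEPTIONS
      INSERT INTO parent_tab VALUES v_parents(i);
    FORALL i IN 1 .. v_children.COUNT SAVE EXCEPTIONS
      INSERT INTO child_tab VALUES v_children(i);
    -- the %NOTFOUND test belongs at the END of the loop, as noted above
    EXIT WHEN stg_cursor%NOTFOUND;
  END LOOP;
  CLOSE stg_cursor;
END;
```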
And unless you really want to either load EVERYTHING or abandon everything, you should use bulk exception handling so that the clean data gets processed and only the dirty data gets rejected.
-
How to use BULK COLLECT in oracle forms
hi gurus,
I am using oracle forms
Forms [32 Bit] Version 10.1.2.0.2 (Production)
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
I want to use BULK COLLECT from a database table, let's say <employees>.
While working at database level with collections and records it works very well for me, but when I try to use that technique in Oracle Forms it gives me this error:
error 591: this feature is not supported in client side programming
I know I can use cursors to loop through the records of Oracle tables,
but I'm more comfortable using collections and arrays.
for example
Set Serveroutput On
Declare
Type Rec_T Is Record (
Empid Number ,
Empname Varchar2(100)
);
Type V_R Is Table Of Rec_T Index By Binary_Integer;
V_Array V_R;
Begin
Select Employee_Id , First_Name
Bulk Collect
Into V_Array
From Employees;
For Indx In V_Array.First..V_Array.Last Loop
Dbms_Output.Put_Line('employees id '||V_Array(Indx).Empid ||'and the name is '||V_Array(Indx).Empname);
End Loop;
End;I wanna use this same way on oracle forms , for certain purposes , please guide me how can I use ...
thanks...For information, you can use and populate a collection within the Forms application without using the BULK COLLECT
Francoisactually I want to work with arrays , index tables ,
like
record_type (variable , variable2);
type type_name <record_type> index by binary_integer
type_variable type_name;
and in main body of program
select something
bulk collect into type_variable
from any_table;
loop
type_variable(indx).variable , type_variable(indx).variable2;
end loop;
this is very useful for my logic on which I am working
like
type_variable(indx).variable || type_variable(indx-1);
if it's possible with cursors, then how can I use a cursor that fulfils this logic?
@Francois
if it's possible then how can i populate without using bulk collect?
thanks
and for others replies: if I can use stored procedures please give me any example..
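As a sketch of Francois' suggestion: the same index-by table can be filled with an ordinary cursor FOR loop, which client-side Forms PL/SQL does accept (table and column names as in the earlier example):

```sql
DECLARE
  TYPE rec_t IS RECORD (
    empid   NUMBER,
    empname VARCHAR2(100));
  TYPE v_r IS TABLE OF rec_t INDEX BY BINARY_INTEGER;
  v_array v_r;
  i       BINARY_INTEGER := 0;
BEGIN
  -- No BULK COLLECT: fill the array one row at a time.
  FOR r IN (SELECT employee_id, first_name FROM employees) LOOP
    i := i + 1;
    v_array(i).empid   := r.employee_id;
    v_array(i).empname := r.first_name;
  END LOOP;
  -- v_array(indx).empid / v_array(indx-1) style logic now works as before.
END;
```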
thanks -
Error while using bulk collect
Hi
I tried with the following code,
DECLARE
TYPE EmpRec IS RECORD (last_name EMP.ename%TYPE,
salary emp.sal%TYPE);
emp_info EmpRec;
TYPE empnest IS TABLE OF EMP.empno%TYPE;
empnestvar empnest;
BEGIN
empnestvar := empnest(7566,7788);
FOR i in empnestvar.first..empnestvar.last LOOP
UPDATE emp SET sal = sal * 1.1 WHERE empno = empnestvar(i)
RETURNING ename, sal BULK COLLECT INTO emp_info;
DBMS_OUTPUT.PUT_LINE('Just gave a raise to ' || emp_info.last_name ||
', who now makes ' || emp_info.salary);
ROLLBACK;
END LOOP;
END;
getting the following error:
RETURNING ename, sal BULK COLLECT INTO emp_info;
ERROR at line 11:
ORA-03113: end-of-file on communication channel
Could you please advise me on this.
Thanks
The main problem is you are bulk collecting into a "record" type variable.
SQL>
SQL> SHOW user
USER is "SCOTT"
SQL> SELECT * FROM v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for Solaris: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
SQL> SET SERVEROUT on
SQL> DECLARE
TYPE EmpRec IS RECORD(
last_name EMP.ename%TYPE,
salary emp.sal%TYPE);
TYPE emp_bl IS TABLE OF EmpRec; --Added.
emp_info emp_bl; --Changed.
TYPE empnest IS TABLE OF EMP.empno%TYPE;
empnestvar empnest;
BEGIN
empnestvar := empnest(7566, 7788);
FOR i in empnestvar.first .. empnestvar.last LOOP
UPDATE emp
SET sal = sal * 1.1
WHERE empno = empnestvar(i) RETURNING ename, sal BULK COLLECT INTO
emp_info;
DBMS_OUTPUT.PUT_LINE('Just gave a raise to ' || emp_info(1)
.last_name || ', who now makes ' || emp_info(1)
.salary);
ROLLBACK;
END LOOP;
END;
/
Just gave a raise to JONES, who now makes 3272.5
Just gave a raise to SCOTT, who now makes 3300
PL/SQL procedure successfully completed.
SQL>
Although, I must say that, because empno is the primary key here, RETURNING ... BULK COLLECT INTO doesn't make much sense: each UPDATE can only ever return one row.
I understand that Oracle 10g will rewrite your CURSOR FOR LOOP statements into a BULK COLLECT statement.
I am contemplating no longer explicitly writing the BULK COLLECT from now on as it reduces the number of lines of code and greatly simplifies code.
Can anyone see any serious flaws in this strategy?
Kind Regards
Chris
> I also think it is a good idea if people do take the
time to decide their strategy. You seem to be
suggesting that it is a bad idea to stop and think
about what you require from your loop.
Well, that depends on the type of programmer. When one deals with programmers that are not true PL/SQL developers and view PL/SQL.. well, as some kind of inferior database language (compared to something like Java for example).. you want to have templates and stuff to enforce best practises.
> I also don't agree with the 'package tuning knob'.
Each query may have different requirements and, as
with most things in programming, fixing one thing can
have a negative effect on another. It is about the
only place where I would not advocate constants.
You have a point - but even so, defining these as constants (even if it has to be inside the actual proc doing the bulk fetch, one per bulk fetch) makes it a lot easier to maintain than having to search out the actual bulk fetch statements in the code.
> But i would suggest that analysis is performed on a
close-to-live enviroment with a production level
server, large body of test data and multiple users. I
think we agree on that point.
Yeah.. but the problem there is that I have never really seen such an environment. Usually due to costs. How do you for example duplicate a large RAC, terabytes of SAN space, 1000's of users, for use as a close-to-live enviroment?
The usual approach (by management) is to spend as little as possible on development and Q&A platforms. Which at times means that the performance of dev vs. production can vary a lot.
So we have to play the hand we're dealt with unfortunately.
> Hmmm cmegar and I have never once said "don't worry
about, PL/SQL does it for you".
Yes - of course not. I'm just rambling on in general describing the usual attitudes I see when it comes to features like this.
It is manageable in a small dev team, but in larger ones.. not really. There this attitude is often prevalent in my experience. The "silver bullet" syndrome.
> Would you use the same argument with regard to unit
testing. There are not many pl/sql developers who
unit test but should that prevent me using the
technique?
Well.. to be honest, I do not think that a developer who does not write at least some basic unit tests for his/her code can be called a developer.
> >[i]Relying on implicit features to "fix" code for
you negates a deeper and better understanding of the
language and what you write
I don't see how using an implicit bulk collect is
fixing code.
Which is why I put it in brackets - "fixing" in terms of making it more performant, or "fixing" it as a FOR loop contains DMLs that can be changed to FORALLs.
> Don't tell me you've never
updated older code to take advangate of a new
feature.
I can never stop the urge to refactor old code I'm working with. :-)
> >[i]I think of features as an implicit bulk collect
behind the scenes, as crutches for mediocre
programmers.
I take it that statement is suggesting that cmedgar
and I are mediocre programmers? Not a nice way to end
an otherwise constructive argument.
How does that saying go? You claim the cloth that I cut? :-)
My sincere apologies to both you guys - I did not intend that statement to be personal at all.
Besides, I'm usually more blunt than that what it comes to throwing personal insults around. ;-)
This statement was just a general observation going back to my early days of writing Cobol and Natural. Programmers at times do not seem to care about grokking the features and applying them correctly. Actually I want to say "most programmers" and "a lot of times", but then I would be accused of generalisation. ;-)
I simply find it very frustrating dealing with programmers who do not simply love to write code. Programmers who see it as a mere job.
Someone once said that he never starts out to write beautiful code. But when he is done and the code is not beautiful and elegant (and simple), he knows he has screwed up.
In my experience.. many programmers will not understand this. -
Procedure for Insert to BULK COLLECT
hi,
I have 2 questions-
1) Say I have the code below. I want to call an insert procedure instead of INSERT INTO. If I do, would it cause any performance issues?
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 100)
IS
TYPE ARRAY IS TABLE OF all_objects%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT * FROM all_objects;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
FORALL i IN 1..l_data.COUNT
INSERT INTO t1 VALUES l_data(i);
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
END test_proc;
CREATE OR REPLACE PROCEDURE insert_proc ( col1 table.col1%Type,
col2 table.col2%Type,
col20 table.col20%Type)
IS
BEGIN
INSERT INTO HistoryTable (col1, col2, ...col20)
VALUES(val1, val2, ...val 20);
END;
2) Is there any clean method to create an insert procedure with 20 columns which I can call in another proc to do a bulk insert?
It is good that you explained your requirements, but you did not give us some data to see and work with.
If you could, help us with below details, it might be possible to help you:
1. Create table statements for your Tables (eg. Checking, Savings and history)
2. Insert Into statements for Sample data for your Tables.
3. validations that you need to perform
4. Expected output based on the Sample data provided in step 2.
Please do not forget to post your version number
select * from v$version;
Also, use {noformat}{noformat} tags, before and after SQL Statements and Expected Output, to preserve spaces and make the post more readable.
-
Dynamic sql with dynamic bulk collection variable
Hi,
I am facing an issue while bulk collecting dynamic SQL query data into a dynamic variable.
Eg:
query1:= << dynamic select query>>
Execute immediate query1 bulk collect into dynamic_variable;
here dynamic_variable is a PL/SQL table type with one column.
How do I declare "dynamic_variable" here????
please suggest...
create type t_id is table of number
SQL> create type t_id is table of number
2 /
Type created.
SQL> declare
2
3 v_tid t_id;
4 v_results sys_refcursor;
5
6 v_employee_id number;
7 v_name varchar2(100);
8
9 v_sql varchar2(1000);
10
11
12 begin
13 v_tid := t_id(7902,7934);
14
15 --
16
17
18 v_sql := 'select empno, ename from scott.emp ' ||CHR(10)
19 || 'where empno in (select column_value from table(cast(:v_tid as
t_id)))';
20
21 dbms_output.put_line(v_sql);
22 dbms_output.put_line('----------');
23
24 open v_results for v_sql using v_tid;
25
26
27 IF v_results IS NOT NULL
28 THEN
29 LOOP
30 FETCH v_results
31 INTO v_employee_id, v_name;
32
33 EXIT WHEN (v_results%NOTFOUND);
34 dbms_output.put_line(v_name);
35 END LOOP;
36
37 IF v_results%ISOPEN
38 THEN
39 CLOSE v_results;
40 END IF;
41 END IF;
42
43 end;
44 /
select empno, ename from scott.emp
where empno in (select column_value from
table(cast(:v_tid as t_id)))
FORD
MILLER -
Which is better??? for loop or bulk collect
declare
cursor test is
select * from employees;
begin
open test;
loop
exit when test%notfound;
fetch test into myvar_a(i); -- CASE A
i:=i+1;
end loop;
close test;
open test;
fetch test bulk collect into myvar_b; -- CASE B
close test;
end;
Which case is better?? A or B?
Edited by: Kakashi on May 31, 2009 12:54 AM
Depends on the meaning of better.
Generally case B should be faster although a bit more elaborate code is required.
But there may be exceptions. I think I read somewhere (I'm home now and I cannot find it at the moment) that in 10g (or 11g - not sure) 100 rows at a time are pre-fetched behind the scenes even when you use case A. So using case B with a low limit could well be slower.
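For completeness, case B with a LIMIT - the usual compromise between the two - might look like this (assuming myvar_b is a suitable collection type for the row being fetched):

```sql
OPEN test;
LOOP
  FETCH test BULK COLLECT INTO myvar_b LIMIT 100;  -- batch size to taste
  FOR i IN 1 .. myvar_b.COUNT LOOP
    NULL;  -- process myvar_b(i)
  END LOOP;
  -- %NOTFOUND test goes last so the final partial batch is still processed
  EXIT WHEN test%NOTFOUND;
END LOOP;
CLOSE test;
```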
If I can express an additional opinion case F(irst) is nearly always the best i.e. plain SQL (no loops at all). I'm aware that sometimes it cannot be used, but should be the first approach to be tried.
Regards
Etbin
FOUND: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:213366500346264333
CONTAINS
Hey Tom, love the site. I noticed in your first fetch, which was in the first for loop that did an unconditional exit:
2 for x in ( select rownum r, t1.* from big_table.big_table t1 )
3 loop
4 exit;
5 end loop;
In looking at the TKPROF output for that query, it shows the number of rows being fetched as 100. Does that prove / demonstrate the bulk collecting optimization that Oracle added in 10g, where it implicitly and automatically does a bulk collect of limit 100 behind the scenes?
This came up at a discussion at my site very recently, and I think I can just point them to your example here as a demo rather than creating my own. I assume that if you ran the same thing in 9iR2, then that first fetch of rows in TKPROF would only show 1?
Followup April 18, 2007 - 1pm US/Eastern:
yes, that demonstrates the implicit array fetch of 100 rows...
in 9i, it would show 1 row fetched.
Edited by: Etbin on 31.5.2009 10:38 -
Using bulk collect with select
I am working on Oracle 10g Release 2.
My requirement is like this
1 declare
2 type id_type is table of fnd_menus.menu_id%type;
3 id_t id_type;
4 cursor cur_menu is select menu_name from menu;
5 type name_type is table of menu.menu_name%type;
6 name_t name_type;
7 begin
8 open cur_menu;
9 fetch cur_menu bulk collect into name_t;
10 forall i in name_t.first..name_t.last
11 select menu_id into id_t(i) from fnd_menus where menu_name = name_t(i);
12* end;
SQL> /
select menu_id into id_t(i) from fnd_menus where menu_name = name_t(i);
ERROR at line 11:
ORA-06550: line 11, column 23:
PLS-00437: FORALL bulk index cannot be used in INTO clause
ORA-06550: line 11, column 31:
PL/SQL: ORA-00904: : invalid identifier
ORA-06550: line 11, column 3:
PL/SQL: SQL Statement ignored
So how can I bulk select into a table the rows that satisfy a particular condition?
A FORALL statement is used to bulk execute DML, as can be read here in the documentation: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/forall_statement.htm#LNPLS01321
I guess you want something like this:
SQL> create table menu
2 as
3 select 'A' menu_name from dual union all
4 select 'B' from dual union all
5 select 'C' from dual
6 /
Table created.
SQL> create table fnd_menus
2 as
3 select 10 menu_id, 'A' menu_name from dual union all
4 select 9, 'B' from dual union all
5 select 8, 'C' from dual union all
6 select 7, 'D' from dual
7 /
Table created.
SQL> declare
2 type id_type is table of fnd_menus.menu_id%type;
3 id_t id_type;
4 cursor cur_menu is select menu_name from menu;
5 type name_type is table of menu.menu_name%type;
6 name_t name_type;
7 begin
8 open cur_menu;
9 fetch cur_menu bulk collect into name_t;
10 forall i in name_t.first..name_t.last
11 select menu_id into id_t(i) from fnd_menus where menu_name = name_t(i);
12 end;
13 /
select menu_id into id_t(i) from fnd_menus where menu_name = name_t(i);
ERROR at line 11:
ORA-06550: line 11, column 25:
PLS-00437: FORALL bulk index cannot be used in INTO clause.
ORA-06550: line 11, column 33:
PL/SQL: ORA-00904: : invalid identifier.
ORA-06550: line 11, column 5:
PL/SQL: SQL Statement ignored.
SQL> declare
2 type id_type is table of fnd_menus.menu_id%type;
3 id_t id_type;
4 begin
5 select menu_id
6 bulk collect into id_t
7 from fnd_menus
8 where menu_name in (select menu_name from menu)
9 ;
10 for i in 1..id_t.count
11 loop
12 dbms_output.put_line(id_t(i));
13 end loop
14 ;
15 end;
16 /
10
9
8
PL/SQL procedure successfully completed.
Regards,
Rob. -
How to use Bulk Collect and Forall
Hi all,
We are on Oracle 10g. I have a requirement to read from table A and then, for each record in table A, find matching rows in table B and write the identified information in table B to the target table (table C). In the past, I had used two ‘cursor for loops’ to achieve that. To make the new procedure more efficient, I would like to learn to use ‘bulk collect’ and ‘forall’.
Here is what I have so far:
DECLARE
TYPE employee_array IS TABLE OF EMPLOYEES%ROWTYPE;
employee_data employee_array;
TYPE job_history_array IS TABLE OF JOB_HISTORY%ROWTYPE;
Job_history_data job_history_array;
BatchSize CONSTANT POSITIVE := 5;
-- Read from File A
CURSOR c_get_employees IS
SELECT Employee_id,
first_name,
last_name,
hire_date,
job_id
FROM EMPLOYEES;
-- Read from File B based on employee ID in File A
CURSOR c_get_job_history (p_employee_id number) IS
select start_date,
end_date,
job_id,
department_id
FROM JOB_HISTORY
WHERE employee_id = p_employee_id;
BEGIN
OPEN c_get_employees;
LOOP
FETCH c_get_employees BULK COLLECT INTO employee_data.employee_id.LAST,
employee_data.first_name.LAST,
employee_data.last_name.LAST,
employee_data.hire_date.LAST,
employee_data.job_id.LAST
LIMIT BatchSize;
FORALL i in 1.. employee_data.COUNT
Open c_get_job_history (employee_data(i).employee_id);
FETCH c_get_job_history BULKCOLLECT INTO job_history_array LIMIT BatchSize;
FORALL k in 1.. Job_history_data.COUNT LOOP
-- insert into FILE C
INSERT INTO MY_TEST(employee_id, first_name, last_name, hire_date, job_id)
values (job_history_array(k).employee_id, job_history_array(k).first_name,
job_history_array(k).last_name, job_history_array(k).hire_date,
job_history_array(k).job_id);
EXIT WHEN job_ history_data.count < BatchSize
END LOOP;
CLOSE c_get_job_history;
EXIT WHEN employee_data.COUNT < BatchSize;
END LOOP;
COMMIT;
CLOSE c_get_employees;
END;
When I run this script, I get
[Error] Execution (47: 17): ORA-06550: line 47, column 17:
PLS-00103: Encountered the symbol "OPEN" when expecting one of the following:
. ( * @ % & - + / at mod remainder rem select update with
<an exponent (**)> delete insert || execute multiset save
merge
ORA-06550: line 48, column 17:
PLS-00103: Encountered the symbol "FETCH" when expecting one of the following:
begin function package pragma procedure subtype type use
<an identifier> <a double-quoted delimited-identifier> form
current cursor
What is the best way to code this? Once I learn how to do this, I will apply the knowledge to the real application, in which file A would have around 200 rows and file B would have hundreds of thousands of rows.
Thank you for your guidance,
Seyed
Hello BlueShadow,
Following your advice, I modified a stored procedure that initially was using two cursor for loops to read from tables A and B to write to table C to use instead something like your suggestion listed below:
INSERT INTO tableC
SELECT …
FROM tableA JOIN tableB on (join condition);
I tried this change on a procedure writing to tableC with keys disabled. I will try this against the real table that has a primary key and indexes and report the result later.
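Applied to the tables from the original post, the set-based version might look like this (a sketch only; it assumes MY_TEST should receive one row per matching JOB_HISTORY entry, which is what the nested-loop version implies):
INSERT INTO MY_TEST (employee_id, first_name, last_name, hire_date, job_id)
SELECT e.employee_id, e.first_name, e.last_name, e.hire_date, jh.job_id
FROM EMPLOYEES e
JOIN JOB_HISTORY jh ON jh.employee_id = e.employee_id;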
Thank you very much,
Seyed -
Cursor ORDER BY Clause Changing Row Count In BULK COLLECT ... FOR LOOP?
Oracle 10g Enterprise Edition Release 10.2.0.4.0 running on Windows Server 2003
Oracle Client 10.2.0.2.0 running on Windows 2000
I have some PL/SQL code that's intended to update a column in a table based on a lookup from another table. I started out by testing it with the UPDATE statement commented out, just visually inspecting the DBMS_OUTPUT results to see if it was sane. During this testing I added/changed the cursor ORDER BY clause to make it easier to read the output, and saw some strange results. I've run the code 3 times with:
1. no ORDER BY clause
2. ORDER BY with two columns (neither indexed)
3. ORDER BY with one column (not indexed)
and get three different "rows updated" counts - in fact, when using the ORDER BY clauses it appears that the code is processing more rows than without either ORDER BY clause. I'm wondering why adding / changing an ORDER BY <non-indexed column> clause in a cursor would affect the row count?
The code structure is:
TYPE my_Table_t IS TABLE OF table1%ROWTYPE ;
my_Table my_Table_t ;
CURSOR my_Cursor IS SELECT * FROM table1 ; -- initial case - no ORDER BY clause
-- ORDER BY table1.column1, table1.column2 ; -- neither column indexed
-- ORDER BY table1.column2 ; -- column not indexed
my_Loop_Count NUMBER := 0 ;
OPEN my_Cursor ;
LOOP
FETCH my_Cursor BULK COLLECT INTO my_Table LIMIT 100 ;
EXIT WHEN my_Table.COUNT = 0 ;
FOR i IN 1..my_Table.COUNT LOOP
my_New_Value := <call a pkg.funct to retrieve expected value from another table> ;
EXIT WHEN my_New_Value IS NULL ;
EXIT WHEN my_New_Value = <an undesirable value> ;
IF my_New_Value <> my_Table(i).column3 THEN
DBMS_OUTPUT.PUT_LINE( 'Changing ' || my_Table(i).column3 || ' to ' || my_New_Value ) ;
UPDATE table1 SET column3 = my_New_Value WHERE column_pk = my_Table(i).column_pk ;
my_Loop_Count := my_Loop_Count + 1 ;
END IF ;
END LOOP ;
COMMIT ;
END LOOP ;
CLOSE my_Cursor ;
DBMS_OUTPUT.PUT_LINE( 'Processed ' || my_Loop_Count || ' Rows ' ) ;
Hello (and welcome),
Your handling of the inner cursor exit control is suspect, which will result in (seemingly) erratic record counts.
Instead of:
LOOP
FETCH my_Cursor BULK COLLECT INTO my_Table LIMIT 100 ;
EXIT WHEN my_Table.COUNT = 0 ;
FOR i IN 1..my_Table.COUNT LOOP
my_New_Value := <call a pkg.funct to retrieve expected value from another table> ;
EXIT WHEN my_New_Value IS NULL ;
EXIT WHEN my_New_Value = <an undesirable value> ;
IF my_New_Value <> my_Table(i).column3 THEN
DBMS_OUTPUT.PUT_LINE( 'Changing ' || my_Table(i).column3 || ' to ' || my_New_Value ) ;
UPDATE table1 SET column3 = my_New_Value WHERE column_pk = my_Table(i).column_pk ;
my_Loop_Count := my_Loop_Count + 1 ;
END IF ;
END LOOP ;
COMMIT ;
END LOOP ;
Try this:
LOOP
FETCH my_Cursor BULK COLLECT INTO my_Table LIMIT 100 ;
FOR i IN 1..my_Table.COUNT LOOP
my_New_Value := <call a pkg.funct to retrieve expected value from another table> ;
EXIT WHEN my_New_Value IS NULL ;
EXIT WHEN my_New_Value = <an undesirable value> ;
IF my_New_Value <> my_Table(i).column3 THEN
DBMS_OUTPUT.PUT_LINE( 'Changing ' || my_Table(i).column3 || ' to ' || my_New_Value ) ;
UPDATE table1 SET column3 = my_New_Value WHERE column_pk = my_Table(i).column_pk ;
my_Loop_Count := my_Loop_Count + 1 ;
END IF ;
EXIT WHEN my_Cursor%NOTFOUND;
END LOOP ;
END LOOP ;
COMMIT ;
Which also takes the COMMIT outside of the LOOP -- try to never have a COMMIT inside of any LOOP.
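For reference, a common shape for a LIMIT loop that avoids both a lost last batch and an endless loop is to test the collection count at the bottom of the outer loop, so the inner FOR loop needs no cursor-state check at all (a sketch, using the names from this thread):
LOOP
FETCH my_Cursor BULK COLLECT INTO my_Table LIMIT 100 ;
FOR i IN 1..my_Table.COUNT LOOP
-- process my_Table(i) here
NULL ;
END LOOP ;
EXIT WHEN my_Table.COUNT < 100 ; -- last (possibly partial) batch already processed
END LOOP ;
CLOSE my_Cursor ;
COMMIT ; -- single commit after the loop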
Additionally, not too sure about these:
my_New_Value := <call a pkg.funct to retrieve expected value from another table> ;
EXIT WHEN my_New_Value IS NULL ;
EXIT WHEN my_New_Value = <an undesirable value> ;
Any one of those EXITs will bypass your my_Loop_Count increment.
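If the intent is to skip such rows rather than stop processing entirely, an IF guard keeps the count intact (a sketch; CONTINUE is not available until 11g, and my_Undesirable_Value stands in for the placeholder above):
FOR i IN 1..my_Table.COUNT LOOP
my_New_Value := <call a pkg.funct to retrieve expected value from another table> ;
IF my_New_Value IS NOT NULL
AND my_New_Value <> my_Undesirable_Value
AND my_New_Value <> my_Table(i).column3
THEN
UPDATE table1 SET column3 = my_New_Value WHERE column_pk = my_Table(i).column_pk ;
my_Loop_Count := my_Loop_Count + 1 ;
END IF ;
END LOOP ;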
Edited by: SeánMacGC on Jul 9, 2009 8:37 AM
Had the cursor not found in the wrong place, now corrected.