Disadvantage of limit in bulk collection
Hi,
Is there any disadvantage to using LIMIT in BULK COLLECT?
And please also clarify the advantage of LIMIT. I am greatly confused: if I don't write LIMIT in the BULK COLLECT it still works fine, so what is the need for LIMIT?
Actually, I was getting the error below; that's why I want to know the advantages/disadvantages of LIMIT.
I have posted my code as well; please tell me where I should correct it.
In the second LIMIT option, if I decrease the limit to 10,000 or 5,000 (less than 50,000), then it works fine.
My cursors c and cmain contain around 5 lakh (500,000) records.
Error:
ORA-22165: given index [32768] must be in the range of [1] to [32767].
(The error message suggests a 32,767-element cap is being hit somewhere, which would explain why any LIMIT at or below 32,767 works.)
I am using BULK COLLECT and FORALL, as in the scenario below.
declare
cursor cmain is ...;
type <collectionType1> is table of tablename%rowtype;
<collectionVariable1> <collectionType1>;
begin
declare
cursor c1 is ...;
type <collectionType2> is table of tablename%rowtype;
<collectionVariable2> <collectionType2>;
begin
open c1;
loop
fetch c1 bulk collect into <collectionVariable2> limit 50000;
forall i in 1 .. <collectionVariable2>.count
stmnt ...;
exit when c1%notfound;
end loop;
close c1;
commit;
end;
open cmain;
loop
fetch cmain bulk collect into <collectionVariable1> limit 50000;
forall i in 1 .. <collectionVariable1>.count
stmnt ...;
exit when cmain%notfound;
end loop;
close cmain;
commit;
end;
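A concrete version of that skeleton might look like the sketch below. This is only an illustration: t and t_copy are hypothetical tables, and the LIMIT is kept safely under the 32,767 boundary that appears in the ORA-22165 error.

```
DECLARE
CURSOR c1 IS SELECT * FROM t;            -- t is a hypothetical source table
TYPE t_tab IS TABLE OF t%ROWTYPE;
l_rows t_tab;
BEGIN
OPEN c1;
LOOP
-- keep the batch size below 32767 to stay clear of ORA-22165
FETCH c1 BULK COLLECT INTO l_rows LIMIT 10000;
EXIT WHEN l_rows.COUNT = 0;              -- safe test for the partial last batch
FORALL i IN 1 .. l_rows.COUNT
INSERT INTO t_copy VALUES l_rows(i);     -- t_copy is a hypothetical target table
END LOOP;
CLOSE c1;
COMMIT;
END;
```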
Similar Messages
-
How to decide the limit in bulk collect clause
Hi,
We have a PL/SQL application which performs mass DML (bulk insert, update and merge) over millions of rows. Now I am a little confused about deciding the LIMIT in the BULK COLLECT clause. Is there any way to decide the optimal limit for my BULK COLLECT clause? And I want to know the key factors that affect the choice of LIMIT.
Eagerly waiting for your reply...
Thanks,
somy

Hello,
Check this example out; it might help you. It all depends on how much memory you want to allocate for the job: you have to experiment to find the optimal value (watch memory consumption and the speed of the PL/SQL block). There is no formula for finding the optimal value, as every system is configured differently, so you have to look at how your memory-related Oracle parameters are configured and monitor the system while this runs. I had used 500 for around 2.8 million rows.
DECLARE
TYPE array
IS
TABLE OF my_objects%ROWTYPE
INDEX BY BINARY_INTEGER;
data array;
errors NUMBER;
dml_errors exception;
error_count NUMBER := 0;
PRAGMA EXCEPTION_INIT (dml_errors, -24381);
CURSOR mycur
IS
SELECT *
FROM t;
BEGIN
OPEN mycur;
LOOP
FETCH mycur BULK COLLECT INTO data LIMIT 100;
BEGIN
FORALL i IN 1 .. data.COUNT
SAVE EXCEPTIONS
INSERT INTO my_new_objects
VALUES data (i);
EXCEPTION
WHEN dml_errors
THEN
errors := sql%BULK_EXCEPTIONS.COUNT;
error_count := error_count + errors;
FOR i IN 1 .. errors
LOOP
DBMS_OUTPUT.put_line( 'Error occurred during iteration '
|| sql%BULK_EXCEPTIONS(i).ERROR_INDEX
|| ' Oracle error is '
|| sql%BULK_EXCEPTIONS(i).ERROR_CODE);
END LOOP;
END;
EXIT WHEN mycur%NOTFOUND;
END LOOP;
CLOSE mycur;
DBMS_OUTPUT.put_line (error_count || ' total errors');
END;

Regards,
OrionNet
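The memory side of that experiment can also be checked directly. The sketch below (an assumption-laden illustration: it needs SELECT privileges on v$mystat and v$statname, and reuses the hypothetical my_objects table) reports how much session PGA one fetch consumes for a given LIMIT:

```
DECLARE
TYPE obj_tab IS TABLE OF my_objects%ROWTYPE;
l_rows obj_tab;
l_pga_before NUMBER;
l_pga_after NUMBER;
CURSOR c IS SELECT * FROM my_objects;
-- helper: current session PGA in bytes
FUNCTION session_pga RETURN NUMBER IS
l_val NUMBER;
BEGIN
SELECT ms.value INTO l_val
FROM v$mystat ms
JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE sn.name = 'session pga memory';
RETURN l_val;
END;
BEGIN
l_pga_before := session_pga;
OPEN c;
FETCH c BULK COLLECT INTO l_rows LIMIT 10000;  -- vary the LIMIT and re-run
CLOSE c;
l_pga_after := session_pga;
DBMS_OUTPUT.PUT_LINE('PGA grew by ' || (l_pga_after - l_pga_before) || ' bytes');
END;
```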
Edited by: OrionNet on Dec 17, 2008 12:55 AM -
Why are bulk collect and bulk update faster?
Hi all gurus,
I have one doubt. All of us know that bulk collect is faster, but I want to know why it is faster. What is the mechanism behind that?
What does Oracle do internally so that it works faster?
thanks in advance
vijay
Edited by: vijay29 on Mar 2, 2009 2:24 AM

Example for Bulk Collect.
DECLARE
TYPE myvar2 IS TABLE OF a%ROWTYPE; -- Creating a Pl/SQL table
CURSOR c1
IS
SELECT *
FROM a; -- Declaring a cursor
myvar1 myvar2; -- Declaring a PL/SQL Table variable.
BEGIN
OPEN c1;
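The example above is cut off after OPEN c1. A minimal completion of the loop, sketched with the same declarations (table a is assumed to exist), could look like this:

```
DECLARE
TYPE myvar2 IS TABLE OF a%ROWTYPE;   -- Creating a PL/SQL table type
CURSOR c1 IS SELECT * FROM a;        -- Declaring the cursor
myvar1 myvar2;                       -- Declaring a PL/SQL table variable
BEGIN
OPEN c1;
LOOP
-- each fetch is a single context switch that moves up to 100 rows
FETCH c1 BULK COLLECT INTO myvar1 LIMIT 100;
EXIT WHEN myvar1.COUNT = 0;          -- safe exit for the partial last batch
FOR i IN 1 .. myvar1.COUNT LOOP
NULL;                                -- process each fetched row here
END LOOP;
END LOOP;
CLOSE c1;
END;
```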
Here is a finding when using the 'LIMIT' clause in a 'BULK COLLECT' statement.
Example :
Note : Table EMP has 15 records.
CASE #1:
SQL> SELECT COUNT (*)
FROM emp;
COUNT(*)
15
DECLARE
TYPE numtab IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
CURSOR c1
IS
SELECT empno
FROM emp;
empnos numtab;
ROWS NATURAL := 10;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1
BULK COLLECT INTO empnos LIMIT ROWS;
EXIT WHEN c1%NOTFOUND;
FORALL i IN empnos.FIRST .. empnos.LAST
DELETE FROM emp
WHERE empno = empnos (i);
END LOOP;
CLOSE c1;
END;
Observation: When the above code segment is run, only 10 records in total will be deleted, as shown below.
SQL> SELECT COUNT (*)
FROM emp;
COUNT(*)
5
Since the statement 'EXIT WHEN c1%NOTFOUND' executes before the DML statement (DELETE in this case),
the loop terminates on the second fetch: that fetch returns only 5 rows, fewer than the LIMIT count of 10, so %NOTFOUND is TRUE and the loop exits before those 5 rows are deleted.
CASE #2:
SQL> SELECT COUNT (*)
FROM emp;
COUNT(*)
15
DECLARE
TYPE numtab IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
CURSOR c1
IS
SELECT empno
FROM emp;
empnos numtab;
ROWS NATURAL := 10;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1
BULK COLLECT INTO empnos LIMIT ROWS;
FORALL i IN empnos.FIRST .. empnos.LAST
DELETE FROM emp
WHERE empno = empnos (i);
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END;
Observation: When the above code segment is run, all 15 records will be deleted, as shown below.
SQL> SELECT COUNT (*)
FROM emp;
COUNT(*)
0
Since the statement 'EXIT WHEN c1%NOTFOUND' executes after the DML statement (DELETE in this case),
the loop deletes the remaining partial batch of 5 records as well before it exits.
Conclusion :
When handling 'LIMIT' with 'BULK COLLECT INTO' statements, it is better to place the statement
'EXIT WHEN <cursorname>%NOTFOUND' after the DML statement.
And you can also use the following approach with a set of statements:
DECLARE
TYPE numtab IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
CURSOR c1 IS
SELECT empno
FROM emp;
empnos numtab;
ROWS NATURAL := 10;
tot_recs NUMBER := 0;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1 BULK COLLECT INTO empnos LIMIT ROWS;
IF c1%ROWCOUNT > tot_recs THEN
FOR i IN empnos.FIRST .. empnos.LAST LOOP
-- Here we can include a set of PL/SQL statements, store other values into a
-- PL/SQL table, and then insert into the table (both the cursor's selected
-- values and the derived values, as below)
NULL;
END LOOP;
FORALL i IN empnos.FIRST .. empnos.LAST
INSERT INTO empno1 VALUES ...; -- i.e. the PL/SQL table variables
END IF;
tot_recs := c1%ROWCOUNT;
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END; -
Hi all,
I have a performance issue in the code below, where I am trying to insert the data from table_stg into target_tab and parent_tab, and then into child tables, via a cursor with bulk collect. target_tab and parent_tab are huge tables and have a row-level trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. My requirement is to reduce it to 5 to 10 minutes.
Can someone please guide me here? It's a bit urgent. Awaiting your response.
declare
vmax_Value NUMBER(5);
vcnt number(10);
id_val number(20);
pc_id number(15);
vtable_nm VARCHAR2(100);
vstep_no VARCHAR2(10);
vsql_code VARCHAR2(10);
vsql_errm varchar2(200);
vtarget_starttime timestamp;
limit_in number :=10000;
idx number(10);
cursor stg_cursor is
select
DESCRIPTION,
SORT_CODE,
ACCOUNT_NUMBER,
to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
PARTY_ID,
GUID,
PAPERLESS_REF_IND,
PRODUCT_TYPE,
PRODUCT_BRAND,
PRODUCT_HELD_ID,
NOTIFICATION_PREF,
UNREAD_CORRES_PERIOD,
EMAIL_ID,
MOBILE_NUMBER,
TITLE,
SURNAME,
POSTCODE,
EVENT_TYPE,
PRIORITY_IND,
SUBJECT,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD
from table_stg;
type rec_type is table of stg_rec_type index by pls_integer;
v_rt_all_cols rec_type;
BEGIN
vstep_no := '0';
vmax_value := 0;
vtarget_starttime := systimestamp;
id_val := 0;
pc_id := 0;
success_flag := 0;
vstep_no := '1';
vtable_nm := 'before cursor';
OPEN stg_cursor;
vstep_no := '2';
vtable_nm := 'After cursor';
LOOP
vstep_no := '3';
vtable_nm := 'before fetch';
--loop
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
FOR i IN 1 .. v_rt_all_cols.COUNT
LOOP
dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
vstep_no := '4.1';
vtable_nm := 'before seq sel';
select PC_SEQ.nextval into pc_id from dual;
vstep_no := '4.2';
vtable_nm := 'before insert corres';
INSERT INTO target1_tab
(ID,
PARTY_ID,
PRODUCT_BRAND,
SORT_CODE,
ACCOUNT_NUMBER,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD,
status)
VALUES
(pc_id,
v_rt_all_cols(i).party_id,
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
v_rt_all_cols(i).sort_code,
'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
v_rt_all_cols(i).EXT_PRD_ID_TX,
v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
v_rt_all_cols(i).EXT_SYS_ID,
v_rt_all_cols(i).EXT_PTY_ID_TX,
v_rt_all_cols(i).ACCOUNT_TYPE_CD,
v_rt_all_cols(i).COM_PFR_TYP_TX,
v_rt_all_cols(i).COM_PFR_OPT_TX,
v_rt_all_cols(i).COM_PFR_RSN_CD,
NULL);
vstep_no := '4.3';
vtable_nm := 'after insert corres';
else
select COM_SEQ.nextval into id_val from dual;
vstep_no := '6';
vtable_nm := 'before insertcomm';
if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
vstep_no := '6.01';
vtable_nm := 'after if insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'IADAREMINDER',
systimestamp,
systimestamp,
0);
else
vstep_no := '6.02';
vtable_nm := 'after else insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'CORRESPONDENCE',
systimestamp,
systimestamp,
0);
END if;
vstep_no := '6.11';
vtable_nm := 'before chop';
if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then
v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.AccountNumberMasked',
v_rt_all_cols(i).ACCOUNT_NUMBER);
end if;
vstep_no := '6.1';
vtable_nm := 'before stateday';
if (v_rt_all_cols(i).crr_day is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Day',
'IB.Crsp.Date.Day',
v_rt_all_cols(i).crr_day);
end if;
vstep_no := '6.2';
vtable_nm := 'before statemth';
if (v_rt_all_cols(i).crr_month is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Month',
'IB.Crsp.Date.Month',
v_rt_all_cols(i).crr_month);
end if;
vstep_no := '6.3';
vtable_nm := 'before stateyear';
if (v_rt_all_cols(i).crr_year is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Year',
'IB.Crsp.Date.Year',
v_rt_all_cols(i).crr_year);
end if;
vstep_no := '7';
vtable_nm := 'before type';
if (v_rt_all_cols(i).product_type is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Product.ProductName',
v_rt_all_cols(i).product_type);
end if;
vstep_no := '9';
vtable_nm := 'before title';
if (trim(v_rt_all_cols(i).title) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE )
values
(id_val,
'IB.Customer.Title',
trim(v_rt_all_cols(i).title));
end if;
vstep_no := '10';
vtable_nm := 'before surname';
if (v_rt_all_cols(i).surname is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.LastName',
v_rt_all_cols(i).surname);
end if;
vstep_no := '12';
vtable_nm := 'before postcd';
if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.Addr.PostCodeMasked',
substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
end if;
vstep_no := '13';
vtable_nm := 'before subject';
if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Subject',
v_rt_all_cols(i).subject);
end if;
vstep_no := '14';
vtable_nm := 'before inactivity';
if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Inactivity',
v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
end if;
vstep_no := '14.1';
vtable_nm := 'after notfound';
end if;
vstep_no := '15';
vtable_nm := 'after notfound';
END LOOP;
end loop;
vstep_no := '16';
vtable_nm := 'before closecur';
CLOSE stg_cursor;
vstep_no := '17';
vtable_nm := 'before commit';
DELETE FROM table_stg;
COMMIT;
vstep_no := '18';
vtable_nm := 'after commit';
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
success_flag := 1;
vsql_code := SQLCODE;
vsql_errm := SUBSTR(sqlerrm,1,200);
error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
end;
Thanks

It's a bit urgent
NO - it is NOT urgent. Not to us.
If you have an urgent problem you need to hire a consultant.
I have a performance issue in the code below,
Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.
where I am trying to insert the data from table_stg into target_tab and parent_tab, and then into child tables, via a cursor with bulk collect. target_tab and parent_tab are huge tables and have a row-level trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. My requirement is to reduce it to 5 to 10 minutes.
Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
This code has the EXIT statement at the beginning of the loop instead of at the end
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.
You can use a COUNT test at the start of the loop, but ironically you have commented it out and are now doing it wrong. Either move the NOTFOUND test to the end of the loop, or remove it and uncomment the COUNT test.
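Applied to the posted block, the corrected fetch loop would look roughly like this sketch (it reuses the poster's stg_cursor, v_rt_all_cols and limit_in declarations, with the per-row work elided):

```
OPEN stg_cursor;
LOOP
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
EXIT WHEN v_rt_all_cols.COUNT = 0;   -- safe even when the last batch is partial
FOR i IN 1 .. v_rt_all_cols.COUNT LOOP
NULL;                                -- the per-row processing goes here
END LOOP;
END LOOP;
CLOSE stg_cursor;
```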
WHEN OTHERS THEN
ROLLBACK;
That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
Your code also has NO exception handling for any of the individual steps or blocks of code.
The code you posted also begs the question of why you are using NAME=VALUE pairs for the child data rows. Why aren't you using a standard relational table for this data?
As others have noted, you are using slow-by-slow (row-by-row) processing. Let's assume that PL/SQL, the bulk collect, and row-by-row processing are actually necessary.
Then you should be constructing the parent and child records in collections and inserting them in BULK using FORALL.
1. Create a collection for the new parent rows
2. Create a collection for the new child rows
3. For each set of LIMIT source row data
a. empty the parent and child collections
b. populate those collections with new parent/child data
c. bulk insert the parent collection into the parent table
d. bulk insert the child collection into the child table
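The steps above can be sketched as follows. This is an outline only: the collection types mirror the target tables, the row-building logic is elided, and the LIMIT of 10000 matches the poster's limit_in value.

```
DECLARE
CURSOR stg_cursor IS SELECT * FROM table_stg;
TYPE src_t IS TABLE OF table_stg%ROWTYPE;
TYPE parent_t IS TABLE OF parent_tab%ROWTYPE;
TYPE child_t IS TABLE OF child_tab%ROWTYPE;
l_src src_t;
l_parent parent_t := parent_t();
l_child child_t := child_t();
BEGIN
OPEN stg_cursor;
LOOP
FETCH stg_cursor BULK COLLECT INTO l_src LIMIT 10000;
EXIT WHEN l_src.COUNT = 0;
l_parent.DELETE;                       -- (a) empty the collections
l_child.DELETE;
FOR i IN 1 .. l_src.COUNT LOOP
NULL;                                  -- (b) build one parent row and its child rows here
END LOOP;
FORALL i IN 1 .. l_parent.COUNT        -- (c) bulk insert the parent rows
INSERT INTO parent_tab VALUES l_parent(i);
FORALL i IN 1 .. l_child.COUNT         -- (d) bulk insert the child rows
INSERT INTO child_tab VALUES l_child(i);
END LOOP;
CLOSE stg_cursor;
END;
```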
And unless you really want to either load EVERYTHING or abandon everything you should use bulk exception handling so that the clean data gets processed and only the dirty data gets rejected. -
Bulk collect limit 1000 is looping only 1000 records out of 35000 records
In the code below I have to loop over around 35,000 records for every month of the year, from Aug-2010 to Aug-2011.
I am using bulk collect with a limit clause, but there are two problems:
a: The limit clause is returning only 1000 records.
b: It is taking too much time to process.
CREATE OR REPLACE PACKAGE BODY UDBFINV AS
F UTL_FILE.FILE_TYPE;
PV_SEQ_NO NUMBER(7);
PV_REC_CNT NUMBER(7) := 0;
PV_CRLF VARCHAR2(2) := CHR(13) || CHR(10);
TYPE REC_PART IS RECORD(
PART_NUM PM_PART_HARSH.PART_NUM%TYPE,
ON_HAND_QTY PM_PART_HARSH.ON_HAND_QTY%TYPE,
ENGG_PREFIX PM_PART_HARSH.ENGG_PREFIX%TYPE,
ENGG_BASE PM_PART_HARSH.ENGG_BASE%TYPE,
ENGG_SUFFIX PM_PART_HARSH.ENGG_SUFFIX%TYPE);
TYPE TB_PART IS TABLE OF REC_PART;
TYPE REC_DATE IS RECORD(
START_DATE DATE,
END_DATE DATE);
TYPE TB_MONTH IS TABLE OF REC_DATE;
PROCEDURE MAIN IS
/* To be called in Scheduler Programs Action */
BEGIN
/* Initializing package global variables;*/
IFMAINT.V_PROG_NAME := 'FULL_INVENTORY';
IFMAINT.V_ERR_LOG_TAB := 'UDB_ERR_FINV';
IFMAINT.V_HIST_TAB := 'UDB_HT_FINV';
IFMAINT.V_UTL_DIR_NAME := 'UDB_SEND';
IFMAINT.V_PROG_TYPE := 'S';
IFMAINT.V_IF_TYPE := 'U';
IFMAINT.V_REC_CNT := 0;
IFMAINT.V_DEL_INS := 'Y';
IFMAINT.V_KEY_INFO := NULL;
IFMAINT.V_MSG := NULL;
IFMAINT.V_ORA_MSG := NULL;
IFSMAINT.V_FILE_NUM := IFSMAINT.V_FILE_NUM + 1;
IFMAINT.LOG_ERROR; /*Initialize error log table, delete prev. rows*/
/*End of initialization section*/
IFMAINT.SET_INITIAL_PARAM;
IFMAINT.SET_PROGRAM_PARAM;
IFMAINT.SET_UTL_DIR_PATH;
IFMAINT.GET_DEALER_PARAMETERS;
PV_SEQ_NO := IFSMAINT.GENERATE_FILE_NAME;
IF NOT CHECK_FILE_EXISTS THEN
WRITE_FILE;
END IF;
IF IFMAINT.V_BACKUP_PATH_SEND IS NOT NULL THEN
IFMAINT.COPY_FILE(IFMAINT.V_UTL_DIR_PATH,
IFMAINT.V_FILE_NAME,
IFMAINT.V_BACKUP_PATH_SEND);
END IF;
IFMAINT.MOVE_FILE(IFMAINT.V_UTL_DIR_PATH,
IFMAINT.V_FILE_NAME,
IFMAINT.V_FILE_DEST_PATH);
COMMIT;
EXCEPTION
WHEN IFMAINT.E_TERMINATE THEN
IFMAINT.V_DEL_INS := 'N';
IFMAINT.LOG_ERROR;
ROLLBACK;
UTL_FILE.FCLOSE(F);
IFMAINT.DELETE_FILE(IFMAINT.V_UTL_DIR_PATH, IFMAINT.V_FILE_NAME);
RAISE_APPLICATION_ERROR(IFMAINT.V_USER_ERRCODE, IFMAINT.V_ORA_MSG);
WHEN OTHERS THEN
IFMAINT.V_DEL_INS := 'N';
IFMAINT.V_MSG := 'ERROR IN MAIN PROCEDURE ' || IFMAINT.V_PROG_NAME;
IFMAINT.V_ORA_MSG := SUBSTR(SQLERRM, 1, 255);
IFMAINT.V_USER_ERRCODE := -20101;
IFMAINT.LOG_ERROR;
ROLLBACK;
UTL_FILE.FCLOSE(F);
IFMAINT.DELETE_FILE(IFMAINT.V_UTL_DIR_PATH, IFMAINT.V_FILE_NAME);
RAISE_APPLICATION_ERROR(IFMAINT.V_USER_ERRCODE, IFMAINT.V_ORA_MSG);
END;
PROCEDURE WRITE_FILE IS
CURSOR CR_PART IS
SELECT A.PART_NUM, ON_HAND_QTY, ENGG_PREFIX, ENGG_BASE, ENGG_SUFFIX
FROM PM_PART_HARSH A;
lv_cursor TB_PART;
LV_CURR_MONTH NUMBER;
LV_MONTH_1 NUMBER := NULL;
LV_MONTH_2 NUMBER := NULL;
LV_MONTH_3 NUMBER := NULL;
LV_MONTH_4 NUMBER := NULL;
LV_MONTH_5 NUMBER := NULL;
LV_MONTH_6 NUMBER := NULL;
LV_MONTH_7 NUMBER := NULL;
LV_MONTH_8 NUMBER := NULL;
LV_MONTH_9 NUMBER := NULL;
LV_MONTH_10 NUMBER := NULL;
LV_MONTH_11 NUMBER := NULL;
LV_MONTH_12 NUMBER := NULL;
lv_month TB_MONTH := TB_MONTH();
BEGIN
IF CR_PART%ISOPEN THEN
CLOSE CR_PART;
END IF;
FOR K IN 1 .. 12 LOOP
lv_month.EXTEND();
lv_month(k).start_date := ADD_MONTHS(TRUNC(SYSDATE, 'MM'), - (K + 1));
lv_month(k).end_date := (ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -K) - 1);
END LOOP;
F := utl_file.fopen(IFMAINT.V_UTL_DIR_NAME, IFMAINT.V_FILE_NAME, 'W');
IF UTL_FILE.IS_OPEN(F) THEN
/*FILE HEADER*/
utl_file.put_line(F,
RPAD('$CUD-', 5, ' ') ||
RPAD(SUBSTR(IFMAINT.V_PANDA_CD, 1, 5), 5, ' ') ||
RPAD('-136-', 5, ' ') || RPAD('000000', 6, ' ') ||
RPAD('-REDFLEX-KA-', 13, ' ') ||
RPAD('00000000-', 9, ' ') ||
RPAD(IFMAINT.V_CDS_SPEC_REL_NUM, 5, ' ') ||
RPAD('CD', 2, ' ') ||
RPAD(TO_CHAR(SYSDATE, 'MMDDYY'), 6, ' ') ||
LPAD(IFSMAINT.V_FILE_NUM, 2, 0) ||
RPAD('-', 1, ' ') || RPAD(' ', 9, ' ') ||
RPAD('-', 1, ' ') || RPAD(' ', 17, ' ') ||
RPAD('CD230', 5, ' ') ||
RPAD(TO_CHAR(SYSDATE, 'MMDDYY'), 6, ' ') ||
LPAD(IFSMAINT.V_FILE_NUM, 2, 0) ||
LPAD(PV_REC_CNT, 8, 0) || RPAD(' ', 5, ' ') ||
RPAD('00000000', 8, ' ') || RPAD('CUD', 3, ' ') ||
RPAD(IFMAINT.V_CDS_SPEC_REL_NUM, 5, ' ') ||
RPAD(IFMAINT.V_GEO_SALES_AREA_CD, 3, ' ') ||
RPAD(IFMAINT.V_FRANCHISE_CD, 2, ' ') ||
RPAD(IFMAINT.V_DSP_REL_NUM, 9, ' ') ||
RPAD('00136REDFLEX', 12, ' ') || RPAD(' ', 1, ' ') ||
RPAD('KA', 2, ' ') || RPAD('000000', 6, ' ') ||
RPAD('00D', 3, ' ') ||
RPAD(IFMAINT.V_VENDOR_ID, 6, ' ') ||
RPAD(IFSMAINT.V_FILE_TYPE, 1, ' ') ||
RPAD('>', 1, ' ') || PV_CRLF);
/*LINE ITEMS*/
OPEN CR_PART;
FETCH CR_PART BULK COLLECT
INTO lv_cursor limit 1000;
FOR I IN lv_cursor.FIRST .. lv_cursor.LAST LOOP
SELECT SUM(A.BILL_QTY)
INTO LV_CURR_MONTH
FROM PD_ISSUE A, PH_ISSUE B
WHERE A.DOC_TYPE IN ('CRI', 'RRI', 'RSI', 'CSI')
AND A.DOC_NUM = B.DOC_NUM
AND B.DOC_DATE BETWEEN TRUNC(SYSDATE, 'MM') AND SYSDATE
AND A.PART_NUM = LV_CURSOR(i).PART_NUM;
FOR J IN 1 .. 12 LOOP
SELECT SUM(A.BILL_QTY)
INTO LV_MONTH_1
FROM PD_ISSUE A, PH_ISSUE B
WHERE A.DOC_TYPE IN ('CRI', 'RRI', 'RSI', 'CSI')
AND A.DOC_NUM = B.DOC_NUM
AND B.DOC_DATE BETWEEN lv_month(J).start_date and lv_month(J)
.end_date
AND A.PART_NUM = LV_CURSOR(i).PART_NUM;
END LOOP;
utl_file.put_line(F,
RPAD('IL', 2, ' ') ||
RPAD(TO_CHAR(SYSDATE, 'RRRRMMDD'), 8, ' ') ||
RPAD(LV_CURSOR(I).ENGG_PREFIX, 6, ' ') ||
RPAD(LV_CURSOR(I).ENGG_BASE, 8, ' ') ||
RPAD(LV_CURSOR(I).ENGG_SUFFIX, 6, ' ') ||
LPAD(LV_CURSOR(I).ON_HAND_QTY, 7, 0) ||
LPAD(NVL(LV_CURR_MONTH, 0), 7, 0) ||
LPAD(LV_MONTH_1, 7, 0) || LPAD(LV_MONTH_2, 7, 0) ||
LPAD(LV_MONTH_3, 7, 0) || LPAD(LV_MONTH_4, 7, 0) ||
LPAD(LV_MONTH_5, 7, 0) || LPAD(LV_MONTH_6, 7, 0) ||
LPAD(LV_MONTH_7, 7, 0) || LPAD(LV_MONTH_8, 7, 0) ||
LPAD(LV_MONTH_9, 7, 0) || LPAD(LV_MONTH_10, 7, 0) ||
LPAD(LV_MONTH_11, 7, 0) ||
LPAD(LV_MONTH_12, 7, 0));
IFMAINT.V_REC_CNT := IFMAINT.V_REC_CNT + 1;
END LOOP;
CLOSE CR_PART;
/*TRAILER*/
utl_file.put_line(F,
RPAD('$EOF-', 5, ' ') || RPAD('320R', 4, ' ') ||
RPAD(SUBSTR(IFMAINT.V_PANDA_CD, 1, 5), 5, ' ') ||
RPAD(' ', 5, ' ') ||
RPAD(IFMAINT.V_GEO_SALES_AREA_CD, 3, ' ') ||
RPAD(TO_CHAR(SYSDATE, 'MM-DD-RR'), 6, ' ') ||
LPAD(IFSMAINT.V_FILE_NUM, 2, 0) ||
LPAD(IFMAINT.V_REC_CNT, 8, 0) || 'H' || '>' ||
IFMAINT.V_REC_CNT);
utl_file.fclose(F);
IFMAINT.INSERT_HISTORY;
END IF;
END;
FUNCTION CHECK_FILE_EXISTS RETURN BOOLEAN IS
LB_FILE_EXIST BOOLEAN := FALSE;
LN_FILE_LENGTH NUMBER;
LN_BLOCK_SIZE NUMBER;
BEGIN
UTL_FILE.FGETATTR(IFMAINT.V_UTL_DIR_NAME,
IFMAINT.V_FILE_NAME,
LB_FILE_EXIST,
LN_FILE_LENGTH,
LN_BLOCK_SIZE);
IF LB_FILE_EXIST THEN
RETURN TRUE;
END IF;
RETURN FALSE;
EXCEPTION
WHEN OTHERS THEN
RETURN FALSE;
END;
END;

Try this:
OPEN CR_PART;
loop
FETCH CR_PART BULK COLLECT
INTO lv_cursor limit 1000;
exit when lv_cursor.count = 0;
FOR I IN lv_cursor.FIRST .. lv_cursor.LAST LOOP
SELECT SUM(A.BILL_QTY)
INTO LV_CURR_MONTH
FROM PD_ISSUE A, PH_ISSUE B
WHERE A.DOC_TYPE IN ('CRI', 'RRI', 'RSI', 'CSI')
AND A.DOC_NUM = B.DOC_NUM
AND B.DOC_DATE BETWEEN TRUNC(SYSDATE, 'MM') AND SYSDATE
AND A.PART_NUM = LV_CURSOR(i).PART_NUM;
FOR J IN 1 .. 12 LOOP
SELECT SUM(A.BILL_QTY)
INTO LV_MONTH_1
FROM PD_ISSUE A, PH_ISSUE B
WHERE A.DOC_TYPE IN ('CRI', 'RRI', 'RSI', 'CSI')
AND A.DOC_NUM = B.DOC_NUM
AND B.DOC_DATE BETWEEN lv_month(J).start_date and lv_month(J)
.end_date
AND A.PART_NUM = LV_CURSOR(i).PART_NUM;
END LOOP;
utl_file.put_line(F,
RPAD('IL', 2, ' ') ||
RPAD(TO_CHAR(SYSDATE, 'RRRRMMDD'), 8, ' ') ||
RPAD(LV_CURSOR(I).ENGG_PREFIX, 6, ' ') ||
RPAD(LV_CURSOR(I).ENGG_BASE, 8, ' ') ||
RPAD(LV_CURSOR(I).ENGG_SUFFIX, 6, ' ') ||
LPAD(LV_CURSOR(I).ON_HAND_QTY, 7, 0) ||
LPAD(NVL(LV_CURR_MONTH, 0), 7, 0) ||
LPAD(LV_MONTH_1, 7, 0) || LPAD(LV_MONTH_2, 7, 0) ||
LPAD(LV_MONTH_3, 7, 0) || LPAD(LV_MONTH_4, 7, 0) ||
LPAD(LV_MONTH_5, 7, 0) || LPAD(LV_MONTH_6, 7, 0) ||
LPAD(LV_MONTH_7, 7, 0) || LPAD(LV_MONTH_8, 7, 0) ||
LPAD(LV_MONTH_9, 7, 0) || LPAD(LV_MONTH_10, 7, 0) ||
LPAD(LV_MONTH_11, 7, 0) ||
LPAD(LV_MONTH_12, 7, 0));
IFMAINT.V_REC_CNT := IFMAINT.V_REC_CNT + 1;
END LOOP;
end loop;
CLOSE CR_PART; -
Bulk Collect into is storing less no of rows in collection when using LIMIT?
I have written the following anonymous PL/SQL block. The line dbms_output.put_line(total_tckt_col.LAST) gives me 366 as output (in DBMS_OUTPUT in SQL Developer), which is correct when no limit is set. However, if the limit is set to 100 in the FETCH statement, then dbms_output.put_line(total_tckt_col.LAST) gives me 66. What am I doing wrong here?
DECLARE
CURSOR cur_total_tckt
is
select t.ticket_id ticket_id, t.created_date created_date, t.created_by created_by, t.ticket_status ticket_status,
t.last_changed last_changed, h.created_date closed_date
from n01.cc_ticket_info t
inner join n01.cc_ticket_status_history h
on (t.ticket_id = h.ticket_id)
where t.last_changed >= '6/28/2012 17:28:59' and t.last_changed < (sysdate + interval '1' day);
type total_tckt_colcn
is
TABLE OF cur_total_tckt%rowtype;
total_tckt_col total_tckt_colcn;
total_coach_col total_tckt_colcn;
begin
total_tckt_col := total_tckt_colcn ();
total_coach_col := total_tckt_colcn ();
OPEN cur_total_tckt;
loop
fetch cur_total_tckt bulk collect into total_tckt_col;
-- fetch cur_total_tckt bulk collect into total_tckt_col limit 100;
EXIT
WHEN (cur_total_tckt%NOTFOUND);
END LOOP ;
CLOSE cur_total_tckt;
dbms_output.put_line(total_tckt_col.LAST);
FOR i IN total_tckt_col.first..total_tckt_col.last
LOOP
dbms_output.put_line(i);
END LOOP;
end;

Ishan wrote:
Here is a modified version of your code using the standard EMP table in the SCOTT schema.
Did you test it? All you demonstrate is that the last batch has 4 rows. But you print it outside the loop, so if the last batch is incomplete (has fewer than LIMIT rows) your loop doesn't process it. Assume you want to print enames:
DECLARE
CURSOR cur_total_tckt
IS
select ename
from emp; -- I have a total of 14 records in this table
type total_tckt_colcn
is
TABLE OF cur_total_tckt%rowtype;
total_tckt_col total_tckt_colcn;
BEGIN
total_tckt_col := total_tckt_colcn ();
OPEN cur_total_tckt;
LOOP
fetch cur_total_tckt bulk collect into total_tckt_col limit 5;
EXIT WHEN cur_total_tckt%NOTFOUND;
FOR v_i IN 1..total_tckt_col.count LOOP
dbms_output.put_line(total_tckt_col(v_i).ename);
END LOOP;
END LOOP ;
CLOSE cur_total_tckt;
END;
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
PL/SQL procedure successfully completed.
SQL>
As you can see, it didn't print the last batch. Why? Because %NOTFOUND is set to true when fewer rows were fetched than the exact number you asked for. The last batch has 4 rows while the code asks to fetch 5; therefore %NOTFOUND is set to true and the code exits before processing that last batch. So you have to repeat the processing code again outside the loop:
DECLARE
CURSOR cur_total_tckt
IS
select ename
from emp; -- I have a total of 14 records in this table
type total_tckt_colcn
is
TABLE OF cur_total_tckt%rowtype;
total_tckt_col total_tckt_colcn;
BEGIN
total_tckt_col := total_tckt_colcn ();
OPEN cur_total_tckt;
LOOP
fetch cur_total_tckt bulk collect into total_tckt_col limit 5;
EXIT WHEN cur_total_tckt%NOTFOUND;
FOR v_i IN 1..total_tckt_col.count LOOP
dbms_output.put_line(total_tckt_col(v_i).ename);
END LOOP;
END LOOP ;
FOR v_i IN 1..total_tckt_col.count LOOP
dbms_output.put_line(total_tckt_col(v_i).ename);
END LOOP;
CLOSE cur_total_tckt;
END;
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER
PL/SQL procedure successfully completed.
SQL>
But you must agree that repeating the processing code twice isn't the best solution. When using BULK COLLECT ... LIMIT, we should exit not on %NOTFOUND but rather on collection.count = 0:
DECLARE
CURSOR cur_total_tckt
IS
select ename
from emp; -- I have a total of 14 records in this table
type total_tckt_colcn
is
TABLE OF cur_total_tckt%rowtype;
total_tckt_col total_tckt_colcn;
BEGIN
total_tckt_col := total_tckt_colcn ();
OPEN cur_total_tckt;
LOOP
fetch cur_total_tckt bulk collect into total_tckt_col limit 6;
EXIT WHEN total_tckt_col.count = 0;
FOR v_i IN 1..total_tckt_col.count LOOP
dbms_output.put_line(total_tckt_col(v_i).ename);
END LOOP;
END LOOP ;
CLOSE cur_total_tckt;
END;
SMITH
ALLEN
WARD
JONES
MARTIN
BLAKE
CLARK
SCOTT
KING
TURNER
ADAMS
JAMES
FORD
MILLER
PL/SQL procedure successfully completed.
SQL>
SY. -
Doubt about Bulk Collect with LIMIT
Hi
I have a Doubt about Bulk collect , When is done Commit
I Get a example in PSOUG
http://psoug.org/reference/array_processing.html
CREATE TABLE servers2 AS
SELECT *
FROM servers
WHERE 1=2;
DECLARE
CURSOR s_cur IS
SELECT *
FROM servers;
TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
s_array fetch_array;
BEGIN
OPEN s_cur;
LOOP
FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
FORALL i IN 1..s_array.COUNT
INSERT INTO servers2 VALUES s_array(i);
EXIT WHEN s_cur%NOTFOUND;
END LOOP;
CLOSE s_cur;
COMMIT;
END;
If my table servers has 3 000 000 records, when is the commit done? When all records have been inserted?
Could it crash the redo log?
Using 9.2.0.8.

muttleychess wrote:
If my table Servers have 3 000 000 records, when is done commit?
Commit point has nothing to do with how many rows you process. It is purely business-driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
SY. -
How to handle a bad record while using bulk collect with limit
Hi,
How do I handle a bad record during the insert/update without losing the whole transaction?
Example:
I am inserting into a table with a LIMIT of 1000 records and I got an error at the 588th record.
I want to commit the transaction with the 588 records already inserted, log the error into an error-logging table, and then continue the transaction from the 589th record.
Can anyone suggest an approach for this case?
Regards,
yuva
>
How do I handle a bad record during the insert/update without losing the whole transaction?
>
Use the SAVE EXCEPTIONS clause of the FORALL if you are doing bulk inserts.
See SAVE EXCEPTIONS in the PL/SQL Language doc
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/tuning.htm
And then see Example 12-9 Bulk Operation that continues despite exceptions
>
Example 12-9 Bulk Operation that Continues Despite Exceptions
-- Temporary table for this example:
CREATE TABLE emp_temp AS SELECT * FROM employees;
DECLARE
TYPE empid_tab IS TABLE OF employees.employee_id%TYPE;
emp_sr empid_tab;
-- Exception handler for ORA-24381:
errors NUMBER;
dml_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(dml_errors, -24381);
BEGIN
SELECT employee_id
BULK COLLECT INTO emp_sr FROM emp_temp
WHERE hire_date < '30-DEC-94';
-- Add '_SR' to job_id of most senior employees:
FORALL i IN emp_sr.FIRST..emp_sr.LAST SAVE EXCEPTIONS
UPDATE emp_temp SET job_id = job_id || '_SR'
WHERE emp_sr(i) = emp_temp.employee_id;
-- If errors occurred during FORALL SAVE EXCEPTIONS,
-- a single exception is raised when the statement completes.
EXCEPTION
-- Figure out what failed and why
WHEN dml_errors THEN
errors := SQL%BULK_EXCEPTIONS.COUNT;
DBMS_OUTPUT.PUT_LINE
('Number of statements that failed: ' || errors);
FOR i IN 1..errors LOOP
DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during '||
'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
DBMS_OUTPUT.PUT_LINE('Error message is ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
DROP TABLE emp_temp; -
I wrote a plsql procedure using bulk collect / forall
The procedure uses bulk collect to fetch from a normal cursor, then uses FORALL to insert into the
target table. The number of rows is 234,965,470.
Question:
What should ideally be the limit for my bulk collect ?
According to below, it should be in hundreds
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1583402705463
I put a bulk collect limit of 50000 - took close to 2 hours
then i tried 10000 - just 3 mins shorter than the above time
But if you commit every 500 rows, then is there not another theory that frequent commits are not good?
user650888 wrote:
What should ideally be the limit for my bulk collect?
The answer to that is: what does a bulk collect actually do?
And no, it is a fallacy that it makes SQL faster. It does not. Never did. Never will.
A bulk process reduces the number of calls that need to be made to the SQL engine. In PL/SQL that is called context switching as the PL and SQL engines are tightly coupled.
If, for example, the SQL cursor outputs 1000 rows and single-row fetches are used, 1000 calls or context switches are required to transfer the row data from the one engine to the other. If a bulk size of 100 is used instead, then only 10 context switches are needed. That is a significant reduction in context switches.
If you do a 1000-row bulk collect, only 1 context switch is needed. But the time difference between 1 and 10 context switches is barely noticeable, so using a bulk limit of 1000 will not improve performance at all versus a 100-row limit.
There is a price for this - bulk processing needs to use very expensive private process memory on the server. Oracle calls this the PGA. Consider the difference in memory between a 100 limit and a 1000 limit. 10x more PGA is needed for a 1000 limit - and no real performance gains result as there is a negligible reduction in context switches.
A 100 limit is often bandied around as the bulk collect limit that is the best. That is not really true. If the rows are very small, impact on the PGA is less severe - a higher limit can make sense.
Likewise, if the rows are very large (100+ large columns fetched), then a 100 limit can make an unreasonable demand on PGA... which will quickly become a bad performance situation when a bunch of clients all execute this code at the same time.
So the sweet spot for a bulk limit typically varies between 10 and a 1000.
I put a bulk collect limit of 50000 - took close to 2 hours
then i tried 10000 - just 3 mins shorter than the above time
This is just plain wrong. As you've seen, you are not improving performance at all. In fact, your code can cause severe performance problems on the server due to the high demand it makes on private process memory, and the increase in work for the swap daemons that need to keep up with this demand.
Bulk processing DOES NOT INCREASE SQL performance. This is important to understand. The ONLY THING it does is reduce the number of calls between the SQL and PL/SQL engines.
But if you commit every 500 rows, then is there not another theory that frequent commits are not good?
That is not just plain wrong, but an idiotic approach. A commit is work. Why do you want to add more work to the process and expect that to increase performance? -
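The approach argued for in this thread, a modest bulk LIMIT, FORALL for the DML, and a single commit once the whole job is done, can be sketched as follows (a minimal sketch; `source_table`, `target_table` and their columns are made-up names, and record references inside FORALL require Oracle 11g or later):

```sql
DECLARE
  CURSOR c_src IS
    SELECT id, payload FROM source_table;   -- illustrative source query
  TYPE t_src_tab IS TABLE OF c_src%ROWTYPE;
  l_rows t_src_tab;
BEGIN
  OPEN c_src;
  LOOP
    -- a modest limit keeps PGA use low; a much larger one buys almost nothing
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 100;
    EXIT WHEN l_rows.COUNT = 0;   -- exit on the fetched collection, not %NOTFOUND
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table (id, payload)
      VALUES (l_rows(i).id, l_rows(i).payload);
  END LOOP;
  CLOSE c_src;
  COMMIT;   -- one commit, after the whole business transaction is complete
END;
/
```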
BULK COLLECT LIMIT affects performance in any way?
Hi,
I want to know whether BULK COLLECT LIMIT affects performance in any way?
How does one decide what limit to keep?
is there an Oracle recommendation on the same?
I agree with Bonist that you should read Tom's article, but I am going to disagree, to a minor extent, with Tom's comment about 100 rows.
When developing with BULK COLLECT I always add a parameter to stored procedures that is used to tune the LIMIT clause. Then when the code goes to unit testing, and at the beginning of integrated unit testing, the value is varied and the results graphed. For final testing the parameter is dropped. The number I hard-code for production is the value at the left side of the top of the bell curve.
What I find is that 100 is sometimes the right number, but I have one app I developed recently where 50,000 was the right number (11gR1). As with almost everything Oracle, the best answer is always that "it depends." In the case where 50K is the best solution the server has 32G of RAM, and everything processed is an SMS message averaging only 79 bytes. -
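The tuning technique described in the last reply, exposing the LIMIT as a parameter so it can be varied during testing and hard-coded afterwards, might look like this hypothetical sketch (procedure and table names are invented; record references inside FORALL require 11g or later):

```sql
CREATE OR REPLACE PROCEDURE load_sms_messages (
  p_limit IN PLS_INTEGER DEFAULT 100   -- varied during testing, hard-coded for production
) AS
  CURSOR c_msg IS
    SELECT msg_id, msg_text FROM sms_staging;   -- illustrative source
  TYPE t_msg_tab IS TABLE OF c_msg%ROWTYPE;
  l_msgs t_msg_tab;
BEGIN
  OPEN c_msg;
  LOOP
    FETCH c_msg BULK COLLECT INTO l_msgs LIMIT p_limit;
    EXIT WHEN l_msgs.COUNT = 0;
    FORALL i IN 1 .. l_msgs.COUNT
      INSERT INTO sms_archive (msg_id, msg_text)
      VALUES (l_msgs(i).msg_id, l_msgs(i).msg_text);
  END LOOP;
  CLOSE c_msg;
  COMMIT;
END load_sms_messages;
/
```

Running this with a range of p_limit values under a representative load, and graphing elapsed time against PGA consumption, gives the bell curve the reply describes; the production value goes at the left edge of the plateau.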
Bulk collect / forall which collection type?
Hi I am trying to speed up the query below using bulk collect / forall:
SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '12384'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL
GROUP BY h.cust_order_no, l.shipment_set
I would like to extract the 2 fields selected above into a new table as quickly as possible, but I’m fairly new to Oracle and I’m finding it difficult to sort out the best way to do it. The query below does not work (no doubt there are numerous issues) but hopefully it is sufficiently developed to show the sort of thing I am trying to achieve:
DECLARE
TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
TYPE xsset IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
TYPE xarray IS TABLE OF tp_a1_tab%rowtype INDEX BY BINARY_INTEGER;
v_xarray xarray;
v_xcustord xcustord;
v_xsset xsset;
CURSOR cur IS
SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '1111'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL;
BEGIN
OPEN cur;
LOOP
FETCH cur
BULK COLLECT INTO v_xarray LIMIT 10000;
EXIT WHEN v_xarray.COUNT = 0; -- must test the collection actually fetched into
FORALL i IN 1 .. v_xarray.COUNT
INSERT INTO TP_A1_TAB (cust_order_no, shipment_set)
VALUES (v_xarray(i).cust_order_no,v_xarray(i).shipment_set);
commit;
END LOOP;
CLOSE cur;
END;
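One possible correction of the block above (a sketch, reusing the scalar collection types already declared; on 9i, FORALL cannot reference fields of a record collection, so two scalar collections are fetched in parallel, and a single commit after the loop replaces the per-batch commit):

```sql
DECLARE
  CURSOR cur IS
    SELECT h.cust_order_no, l.shipment_set
      FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
     WHERE h.message_id = l.message_id
       AND h.contract = '12384'
       AND l.shipment_set IS NOT NULL
       AND h.cust_order_no IS NOT NULL
     GROUP BY h.cust_order_no, l.shipment_set;
  TYPE xcustord IS TABLE OF info.tlp_out_messaging_hdr.cust_order_no%TYPE;
  TYPE xsset    IS TABLE OF info.tlp_out_messaging_lin.shipment_set%TYPE;
  v_xcustord xcustord;
  v_xsset    xsset;
BEGIN
  OPEN cur;
  LOOP
    FETCH cur BULK COLLECT INTO v_xcustord, v_xsset LIMIT 1000;
    EXIT WHEN v_xcustord.COUNT = 0;   -- test a collection that was fetched into
    FORALL i IN 1 .. v_xcustord.COUNT
      INSERT INTO tp_a1_tab (cust_order_no, shipment_set)
      VALUES (v_xcustord(i), v_xsset(i));
  END LOOP;
  CLOSE cur;
  COMMIT;   -- one commit at the end, not per batch
END;
/
```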
I am running on Oracle 9i release 2.
Well, I suppose I can share the whole query as it stands for context and information (see below).
This is a very ugly piece of code that I am trying to improve. The advantage it has currently is
that it works; the disadvantage is that it's very slow. My thoughts were that bulk collect
and forall might be useful tools here as part of the re-write, hence my original question, but perhaps not.
So, on a more general note, any advice on how best to speed up this code would be welcome:
CREATE OR REPLACE VIEW DLP_TEST AS
WITH aa AS
(SELECT h.cust_order_no AS c_ref,l.shipment_set AS shipset,
l.line_id, h.message_id, l.message_line_no,
l.vendor_part_no AS part, l.rqst_quantity AS rqst_qty, l.quantity AS qty,
l.status AS status, h.rowversion AS allocation_date
FROM info.tlp_in_messaging_hdr h
LEFT JOIN info.tlp_in_messaging_lin l
ON h.message_id = l.message_id
WHERE h.contract = '12384'
AND h.cust_order_no IS NOT NULL
UNION ALL
SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset,
lo.line_id, ho.message_id, lo.message_line_no,
lo.vendor_part_no AS part,lo.rqst_quantity AS rqst_qty, lo.quantity AS qty,
lo.status AS status, ho.rowversion AS allocation_date
FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
WHERE ho.message_id = lo.message_id
AND ho.contract = '12384'
AND ho.cust_order_no IS NOT NULL),
a1 AS
(SELECT h.cust_order_no AS custord, l.shipment_set AS sset
FROM info.tlp_out_messaging_hdr h, info.tlp_out_messaging_lin l
WHERE h.message_id = l.message_id
AND h.contract = '12384'
AND l.shipment_set IS NOT NULL
AND h.cust_order_no IS NOT NULL
GROUP BY h.cust_order_no, l.shipment_set),
a2 AS
(SELECT ho.cust_order_no AS c_ref, lo.shipment_set AS shipset
FROM info.tlp_out_messaging_hdr ho, info.tlp_out_messaging_lin lo
WHERE ho.message_id = lo.message_id
AND ho.contract = '12384'
AND ho.message_type = '3B13'
AND lo.shipment_set IS NOT NULL
GROUP BY ho.cust_order_no, lo.shipment_set),
a3 AS
(SELECT a1.custord, a1.sset, CONCAT('SHIPSET',a1.sset) AS ssset
FROM a1
LEFT OUTER JOIN a2
ON a1.custord = a2.c_ref AND a1.sset = a2.shipset
WHERE a2.c_ref IS NULL),
bb AS
(SELECT so.contract, so.order_no, sr.service_request_no AS sr_no, sr.reference_no,
substr(sr.part_no,8) AS shipset,
substr(note_text,1,instr(note_text,'.',1,1)-1) AS Major_line,
note_text AS CISCO_line,ma.part_no,
(Select TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')
FROM ifsapp.document_text d WHERE so.note_id = d.note_id AND so.contract = '12384') AS Print_Date,
ma.note_text
FROM (ifsapp.service_request sr
LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
ON sr.service_request_no = ws.wo_no
LEFT OUTER JOIN ifsapp.shop_ord so
ON ws.order_no = so.order_no)
LEFT OUTER JOIN ifsapp.customer_order co
ON sr.reference_no = co.cust_ref
LEFT OUTER JOIN ifsapp.shop_material_alloc ma
ON so.order_no = ma.order_no
JOIN a3
ON a3.custord = sr.reference_no AND a3.sset = substr(sr.part_no,8)
WHERE sr.part_contract = '12384'
AND so.contract = '12384'
AND co.contract = '12384'
AND sr.reference_no IS NOT NULL
AND ma.part_no NOT LIKE 'SHIPSET%'),
cc AS
(SELECT
bb.reference_no,
bb.shipset,
bb.order_no,
bb.cisco_line,
aa.message_id,
aa.allocation_date,
row_number() over(PARTITION BY bb.reference_no, bb.shipset, aa.allocation_date
ORDER BY bb.reference_no, bb.shipset, aa.allocation_date, aa.message_id DESC) AS selector
FROM bb
LEFT OUTER JOIN aa
ON bb.reference_no = aa.c_ref AND bb.shipset = aa.shipset
WHERE aa.allocation_date <= bb.print_date
OR aa.allocation_date IS NULL
OR bb.print_date IS NULL),
dd AS
(SELECT
MAX(reference_no) AS reference_no,
MAX(shipset) AS shipset,
order_no,
MAX(allocation_date) AS allocation_date
FROM cc
WHERE selector = 1
GROUP BY order_no, selector),
ee AS
(SELECT
smx.order_no,
SUM(smx.qty_assigned) AS total_allocated,
SUM(smx.qty_issued) AS total_issued,
SUM(smx.qty_required) AS total_required
FROM ifsapp.shop_material_alloc smx
WHERE smx.contract = '12384'
AND smx.part_no NOT LIKE 'SHIPSET%'
GROUP BY smx.order_no),
ff AS
(SELECT
dd.reference_no,
dd.shipset,
dd.order_no,
MAX(allocation_date) AS last_allocation,
MAX(ee.total_allocated) AS total_allocated,
MAX(ee.total_issued) AS total_issued,
MAX(ee.total_required) AS total_required
FROM dd
LEFT OUTER JOIN ee
ON dd.order_no = ee.order_no
GROUP BY dd.reference_no, dd.shipset, dd.order_no),
base AS
(SELECT x.order_no, x.part_no, z.rel_no, MIN(x.dated) AS dated, MIN(y.cust_ref) AS cust_ref, MIN(z.line_no) AS line_no,
MIN(y.state) AS state, MIN(y.contract) AS contract, MIN(z.demand_order_ref1) AS demand_order_ref1
FROM ifsapp.inventory_transaction_hist x, ifsapp.customer_order y, ifsapp.customer_order_line z
WHERE x.contract = '12384'
AND x.order_no = y.order_no
AND y.order_no = z.order_no
AND x.transaction = 'REP-OESHIP'
AND x.part_no = z.part_no
AND TRUNC(x.dated) >= SYSDATE - 8
GROUP BY x.order_no, x.part_no, z.rel_no)
SELECT
DISTINCT
bb.contract,
bb.order_no,
bb.sr_no,
CAST('-' AS varchar2(40)) AS Usr,
CAST('-' AS varchar2(40)) AS Name,
CAST('01' AS number) AS Operation,
CAST('Last Reservation' AS varchar2(40)) AS Action,
CAST('-' AS varchar2(40)) AS Workcenter,
CAST('-' AS varchar2(40)) AS Next_Workcenter_no,
CAST('Print SO' AS varchar2(40)) AS Next_WC_Description,
ff.total_allocated,
ff.total_issued,
ff.total_required,
ff.shipset,
ff.last_allocation AS Action_date,
ff.reference_no
FROM ff
LEFT OUTER JOIN bb
ON bb.order_no = ff.order_no
WHERE bb.order_no IS NOT NULL
UNION ALL
SELECT
c.contract AS Site, c.order_no AS Shop_Order, b.wo_no AS SR_No,
CAST('-' AS varchar2(40)) AS UserID,
CAST('-' AS varchar2(40)) AS User_Name,
CAST('02' AS number) AS Operation,
CAST('SO Printed' AS varchar2(40)) AS Action,
CAST('SOPRINT' AS varchar2(40)) AS Workcenter,
CAST('PKRPT' AS varchar2(40)) AS Next_Workcenter_no,
CAST('Pickreport' AS varchar2(40)) AS Next_WC_Description,
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
e.part_no AS Ship_Set,
TO_DATE(TO_CHAR(TO_DATE(d.objversion,'YYYYMMDDHH24MISS'),'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')AS Action_Date,
f.cust_ref AS cust_ref
FROM ifsapp.shop_ord c
LEFT OUTER JOIN ifsapp.work_order_shop_ord b
ON b.order_no = c.order_no
LEFT OUTER JOIN ifsapp.document_text d
ON d.note_id = c.note_id
LEFT OUTER JOIN ifsapp.customer_order_line e
ON e.demand_order_ref1 = TRIM(to_char(b.wo_no))
LEFT OUTER JOIN ifsapp.customer_order f
ON f.order_no = e.order_no
JOIN a3
ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
WHERE c.contract = '12384'
AND e.contract = '12384'
AND d.objversion IS NOT NULL
UNION ALL
SELECT
a.site AS Site, a.order_no AS Shop_Order, b.wo_no AS SR_No, a.userid AS UserID,
a.user_name AS Name, a.operation_no AS Operation, a.action AS Action,
a.workcenter_no AS Workcenter, a.next_work_center_no AS Next_Workcenter_no,
(SELECT d.description FROM ifsapp.work_center d WHERE a.next_work_center_no = d.work_center_no AND a.site = d.contract)
AS Next_WC_Description,
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
e.part_no AS Ship_set,
a.action_date AS Action_Date, f.cust_ref AS cust_ref
FROM ifsapp.shop_ord c
LEFT OUTER JOIN ifsapp.work_order_shop_ord b
ON b.order_no = c.order_no
LEFT OUTER JOIN ifsapp.customer_order_line e
ON e.demand_order_ref1 = to_char(b.wo_no)
LEFT OUTER JOIN ifsapp.customer_order f
ON f.order_no = e.order_no
LEFT OUTER JOIN info.tp_hvt_so_op_hist a
ON a.order_no = c.order_no
JOIN a3
ON a3.custord = f.cust_ref AND a3.ssset = e.part_no
WHERE a.site = '12384'
AND c.contract = '12384'
AND e.contract = '12384'
AND f.contract = '12384'
UNION ALL
SELECT so.contract AS Site, so.order_no AS Shop_Order_No, sr.service_request_no AS SR_No,
CAST('-' AS varchar2(40)) AS "User",
CAST('-' AS varchar2(40)) AS "Name",
CAST('999' AS number) AS "Operation",
CAST('Shipped' AS varchar2(40)) AS "Action",
CAST('SHIP' AS varchar2(40)) AS "Workcenter",
CAST('-' AS varchar2(40)) AS "Next_Workcenter_no",
CAST('-' AS varchar2(40)) AS "Next_WC_Description",
CAST('0' AS number) AS Total_Allocated,
CAST('0' AS number) AS Total_Issued,
CAST('0' AS number) AS Total_Required,
so.part_no AS ship_set, base.dated AS Action_Date,
sr.reference_no AS CUST_REF
FROM base
LEFT OUTER JOIN ifsapp.service_request sr
ON base.demand_order_ref1 = sr.service_request_no
LEFT OUTER JOIN ifsapp.work_order_shop_ord ws
ON sr.service_request_no = ws.wo_no
LEFT OUTER JOIN ifsapp.shop_ord so
ON ws.order_no = so.order_no
WHERE base.contract = '12384'; -
Opening two cursors using open cursor with bulk collect on collections ..
Is it possible to have an implementation using bulk collect with collections and two open cursors?
first c1
second c2
open c1
loop
open c2
loop
end loop
close c2
end loop;
close c1
what I found is that for every outer-loop record of cursor c1, cursor c2 is opened and closed.
will this improve the performance?
EXAMPLE:-
NOTE: The relation between finc and minc is one to many; finc is the parent and minc is the child
function chk_notnull_blank ( colname IN number ) return number is
BEGIN
if ( colname is NOT NULL and colname not in ( -8E14, -7E14, -6E14, -5E14, -4E14, -3E14, -2E14, -1E14, -1E9 )) then
RETURN colname ;
else
RETURN 0;
end if;
END chk_notnull_blank;
procedure Proc_AnnualFmlyTotIncSummary is
CURSOR c_cur_finc IS SELECT FAMID FROM FINC ;
CURSOR c_cur_minc IS SELECT FAMID, MEMBNO , ANFEDTX, ANGOVRTX, ANPRVPNX, ANRRDEDX, ANSLTX, SALARYX, SALARYBX, NONFARMX, NONFRMBX , FARMINCX, FRMINCBX, RRRETIRX, RRRETRBX, SOCRRX, INDRETX, JSSDEDX, SSIX, SSIBX from MINC minc WHERE FAMID IN ( SELECT FAMID FROM FINC finc WHERE minc.FAMID = finc.FAMID );
v_tot_fsalaryx number := 0;
v_tot_fnonfrmx number := 0;
v_tot_ffrmincx number := 0;
v_tot_frretirx number := 0;
v_tot_findretx number := 0;
v_tot_fjssdedx number := 0;
v_tot_fssix number := 0;
v_temp_sum_fsalaryx number := 0;
v_temp_sum_fnonfrmx number := 0;
v_temp_sum_ffrmincx number := 0;
v_temp_sum_frretirx number := 0;
v_temp_sum_findretx number := 0;
v_temp_sum_fjssdedx number := 0;
v_temp_sum_fssix number := 0;
TYPE minc_rec IS RECORD (FAMID MINC.FAMID%TYPE, MEMBNO MINC.MEMBNO%TYPE , ANFEDTX MINC.ANFEDTX%TYPE, ANGOVRTX MINC.ANGOVRTX%TYPE , ANPRVPNX MINC.ANPRVPNX%TYPE , ANRRDEDX MINC.ANRRDEDX%TYPE , ANSLTX MINC.ANSLTX%TYPE, SALARYX MINC.SALARYX%TYPE , SALARYBX MINC.SALARYBX%TYPE , NONFARMX MINC.NONFARMX%TYPE , NONFRMBX MINC.NONFRMBX%TYPE, FARMINCX MINC.FARMINCX%TYPE , FRMINCBX MINC.FRMINCBX%TYPE , RRRETIRX MINC.RRRETIRX%TYPE , RRRETRBX MINC.RRRETRBX%TYPE, SOCRRX MINC.SOCRRX%TYPE , INDRETX MINC.INDRETX%TYPE , JSSDEDX MINC.JSSDEDX%TYPE , SSIX MINC.SSIX%TYPE , SSIBX MINC.SSIBX%TYPE );
v_flag_boolean boolean := false;
v_famid number ;
v_stmt varchar2(3200) ;
v_limit number := 50;
v_temp_FAMTFEDX number := 0 ;
v_temp_FGOVRETX number := 0 ;
v_temp_FPRIVPENX number := 0 ;
v_temp_FRRDEDX number := 0 ;
v_temp_FSLTAXX number := 0 ;
v_temp_FSALARYX number := 0 ;
v_temp_FNONFRMX number := 0 ;
v_temp_FFRMINCX number := 0 ;
v_temp_FRRETIRX number := 0 ;
v_temp_FINDRETX number := 0 ;
v_temp_FJSSDEDX number := 0 ;
v_temp_FSSIX number := 0 ;
BEGIN
OPEN c_cur_finc ;
LOOP
FETCH c_cur_finc BULK COLLECT INTO famid_type_tbl LIMIT v_limit;
EXIT WHEN famid_type_tbl.COUNT = 0;
FOR i in famid_type_tbl.FIRST..famid_type_tbl.LAST
LOOP
OPEN c_cur_minc ;
LOOP
FETCH c_cur_minc BULK COLLECT INTO minc_rec_type_tbl LIMIT v_limit;
EXIT WHEN minc_rec_type_tbl.COUNT = 0;
FOR j IN minc_rec_type_tbl.FIRST..minc_rec_type_tbl.LAST
LOOP
if ( famid_type_tbl(i) = minc_rec_type_tbl(j).FAMID ) THEN
v_temp_FAMTFEDX := v_temp_FAMTFEDX + chk_notnull_blank(minc_rec_type_tbl(j).ANFEDTX );
v_temp_FGOVRETX := v_temp_FGOVRETX + chk_notnull_blank(minc_rec_type_tbl(j).ANGOVRTX);
v_temp_FPRIPENX := v_temp_FPRIPENX + chk_notnull_blank(minc_rec_type_tbl(j).ANPRVPNX);
v_temp_FRRDEDX := v_temp_FRRDEDX + chk_notnull_blank(minc_rec_type_tbl(j).ANRRDEDX);
v_temp_FSLTAXX := v_temp_FSLTAXX + chk_notnull_blank(minc_rec_type_tbl(j).ANSLTX );
v_temp_FSALARYX := v_temp_FSALARYX + chk_notnull_blank(minc_rec_type_tbl(j).SALARYX ) + chk_notnull_blank(minc_rec_type_tbl(j).SALARYBX);
v_temp_FNONFRMX := v_temp_FNONFRMX + chk_notnull_blank(minc_rec_type_tbl(j).NONFARMX) + chk_notnull_blank(minc_rec_type_tbl(j).NONFRMBX);
v_temp_FFRMINCX := v_temp_FFRMINCX + chk_notnull_blank(minc_rec_type_tbl(j).FARMINCX) + chk_notnull_blank(minc_rec_type_tbl(j).FRMINCBX );
v_temp_FRRETIRX := v_temp_FRRETIRX + chk_notnull_blank(minc_rec_type_tbl(j).RRRETIRX) + chk_notnull_blank(minc_rec_type_tbl(j).RRRETRBX ) + chk_notnull_blank(minc_rec_type_tbl(j).SOCRRX);
v_temp_FINDREXT := v_temp_FINDRETX + chk_notnull_blank(minc_rec_type_tbl(j).INDRETX);
v_temp_FJSSDEDX := v_temp_FJSSDEDX + chk_notnull_blank(minc_rec_type_tbl(j).JSSDEDX);
v_temp_FSSIX := v_temp_FSSIX + chk_notnull_blank(minc_rec_type_tbl(j).SSIX ) + chk_notnull_blank(minc_rec_type_tbl(j).SSIBX);
END IF;
END LOOP;
END LOOP ;
CLOSE c_cur_minc;
UPDATE FINC SET FAMTFEDX = v_temp_FAMTFEDX WHERE FAMID = famid_type_tbl(i);
END LOOP;
END LOOP;
CLOSE c_cur_finc;
END;
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
v_err_code := SQLCODE;
v_err_msg := substr(SQLERRM, 1, 200);
INSERT INTO audit_table (error_number, error_message) VALUES (v_err_code, v_err_msg);
error_logging(p_error_code => substr(sqlerrm,1,9), p_error_message => substr(sqlerrm,12), p_package =>'PKG_FCI_APP',p_procedure => 'Proc_Annual_Deductions_FromPay ' , p_location => v_location);
end Proc_AnnualFmlyTotIncSummary ;
Is the program efficient and free from compilation errors?
thanks/kumar
Edited by: kumar73 on Sep 22, 2010 12:48 PM
function chk_notnull_blank ( colname IN number ) return number is
Maybe this function should have its own forum:
how to use case in this program
Re: how to declare a formal parameter in a function of type record and access ?
Re: how to define a function with table type parameter
Re: creation of db trigger with error ..
Re: How to write a trigger for the below scenario
how to improve the code using advanced methods
yours advice in improving the coding ..
How to use bulk in multiple cursors !!
;-) -
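For the nested-cursor question above, one common rewrite (a sketch, assuming the FINC/MINC columns shown; the sentinel-value filtering done by chk_notnull_blank would need an equivalent CASE expression in SQL) is to let a single join/aggregate query do the matching once, instead of re-scanning MINC for every batch of FINC rows, and then update the parents in bulk:

```sql
DECLARE
  TYPE t_num_tab IS TABLE OF NUMBER;
  l_famids t_num_tab;
  l_totals t_num_tab;
BEGIN
  -- one aggregated pass over the child table per family
  SELECT m.famid, SUM(NVL(m.anfedtx, 0))
    BULK COLLECT INTO l_famids, l_totals
    FROM minc m
   WHERE m.famid IN (SELECT famid FROM finc)
   GROUP BY m.famid;

  -- one bulk update of the parent table
  FORALL i IN 1 .. l_famids.COUNT
    UPDATE finc
       SET famtfedx = l_totals(i)
     WHERE famid = l_famids(i);

  COMMIT;
END;
/
```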
Can I use Bulk Collect results as input parameter for another cursor
MUSIC ==> remote MUSIC_DB database, MUSIC table has 60 million rows
PRICE_DATA ==> remote PRICING_DB database, PRICE_DATA table has 1 billion rows
These two tables once existed in the same database, but the size of the database exceeded the available hardware and hardware budget, so the PRICE_DATA table was moved to another Oracle database. I need to create a single report that combines data from both of these tables, and a distributed join with a DRIVING_SITE hint will not work because both tables are too large to push to one DRIVING_SITE location, so I wrote this PL/SQL block to process in small batches.
QUESTION: how can I use bulk collect from one cursor and pass that bulk-collected information as input to a second cursor without specifically listing each cell of the PL/SQL collection? See the sample pseudo-code below; I am trying to find a more efficient way to code this than hard-coding 100 parameter names into the 2nd cursor.
NOTE: below is truly pseudo-code; I had to change the names of everything to adhere to an NDA, but it works and is fast enough for my purposes. However, if I want to change from 100 input parameters to 200, I have to add more hard-coded values. There has got to be a better way.
DECLARE
-- define cursor that retrieves distinct SONG_IDs from MUSIC table in remote music database
CURSOR C_CURRENT_MUSIC
IS
select distinct SONG_ID
from MUSIC@MUSIC_DB
where PRODUCTION_RELEASE=1;
/* define a parameterized cursor that accepts 100 SONG_IDs and retrieves
required pricing information */
CURSOR C_get_music_price_data (
P_SONG_ID_001 NUMBER, P_SONG_ID_002 NUMBER, P_SONG_ID_003 NUMBER, P_SONG_ID_004 NUMBER, P_SONG_ID_005 NUMBER, P_SONG_ID_006 NUMBER, P_SONG_ID_007 NUMBER, P_SONG_ID_008 NUMBER, P_SONG_ID_009 NUMBER, P_SONG_ID_010 NUMBER,
P_SONG_ID_011 NUMBER, P_SONG_ID_012 NUMBER, P_SONG_ID_013 NUMBER, P_SONG_ID_014 NUMBER, P_SONG_ID_015 NUMBER, P_SONG_ID_016 NUMBER, P_SONG_ID_017 NUMBER, P_SONG_ID_018 NUMBER, P_SONG_ID_019 NUMBER, P_SONG_ID_020 NUMBER,
P_SONG_ID_021 NUMBER, P_SONG_ID_022 NUMBER, P_SONG_ID_023 NUMBER, P_SONG_ID_024 NUMBER, P_SONG_ID_025 NUMBER, P_SONG_ID_026 NUMBER, P_SONG_ID_027 NUMBER, P_SONG_ID_028 NUMBER, P_SONG_ID_029 NUMBER, P_SONG_ID_030 NUMBER,
P_SONG_ID_031 NUMBER, P_SONG_ID_032 NUMBER, P_SONG_ID_033 NUMBER, P_SONG_ID_034 NUMBER, P_SONG_ID_035 NUMBER, P_SONG_ID_036 NUMBER, P_SONG_ID_037 NUMBER, P_SONG_ID_038 NUMBER, P_SONG_ID_039 NUMBER, P_SONG_ID_040 NUMBER,
P_SONG_ID_041 NUMBER, P_SONG_ID_042 NUMBER, P_SONG_ID_043 NUMBER, P_SONG_ID_044 NUMBER, P_SONG_ID_045 NUMBER, P_SONG_ID_046 NUMBER, P_SONG_ID_047 NUMBER, P_SONG_ID_048 NUMBER, P_SONG_ID_049 NUMBER, P_SONG_ID_050 NUMBER,
P_SONG_ID_051 NUMBER, P_SONG_ID_052 NUMBER, P_SONG_ID_053 NUMBER, P_SONG_ID_054 NUMBER, P_SONG_ID_055 NUMBER, P_SONG_ID_056 NUMBER, P_SONG_ID_057 NUMBER, P_SONG_ID_058 NUMBER, P_SONG_ID_059 NUMBER, P_SONG_ID_060 NUMBER,
P_SONG_ID_061 NUMBER, P_SONG_ID_062 NUMBER, P_SONG_ID_063 NUMBER, P_SONG_ID_064 NUMBER, P_SONG_ID_065 NUMBER, P_SONG_ID_066 NUMBER, P_SONG_ID_067 NUMBER, P_SONG_ID_068 NUMBER, P_SONG_ID_069 NUMBER, P_SONG_ID_070 NUMBER,
P_SONG_ID_071 NUMBER, P_SONG_ID_072 NUMBER, P_SONG_ID_073 NUMBER, P_SONG_ID_074 NUMBER, P_SONG_ID_075 NUMBER, P_SONG_ID_076 NUMBER, P_SONG_ID_077 NUMBER, P_SONG_ID_078 NUMBER, P_SONG_ID_079 NUMBER, P_SONG_ID_080 NUMBER,
P_SONG_ID_081 NUMBER, P_SONG_ID_082 NUMBER, P_SONG_ID_083 NUMBER, P_SONG_ID_084 NUMBER, P_SONG_ID_085 NUMBER, P_SONG_ID_086 NUMBER, P_SONG_ID_087 NUMBER, P_SONG_ID_088 NUMBER, P_SONG_ID_089 NUMBER, P_SONG_ID_090 NUMBER,
P_SONG_ID_091 NUMBER, P_SONG_ID_092 NUMBER, P_SONG_ID_093 NUMBER, P_SONG_ID_094 NUMBER, P_SONG_ID_095 NUMBER, P_SONG_ID_096 NUMBER, P_SONG_ID_097 NUMBER, P_SONG_ID_098 NUMBER, P_SONG_ID_099 NUMBER, P_SONG_ID_100 NUMBER
) IS
select
from PRICE_DATA@PRICING_DB vpc
where COUNTRY = 'USA'
and START_DATE <= sysdate
and END_DATE > sysdate
and vpc.SONG_ID IN (
P_SONG_ID_001 ,P_SONG_ID_002 ,P_SONG_ID_003 ,P_SONG_ID_004 ,P_SONG_ID_005 ,P_SONG_ID_006 ,P_SONG_ID_007 ,P_SONG_ID_008 ,P_SONG_ID_009 ,P_SONG_ID_010,
P_SONG_ID_011 ,P_SONG_ID_012 ,P_SONG_ID_013 ,P_SONG_ID_014 ,P_SONG_ID_015 ,P_SONG_ID_016 ,P_SONG_ID_017 ,P_SONG_ID_018 ,P_SONG_ID_019 ,P_SONG_ID_020,
P_SONG_ID_021 ,P_SONG_ID_022 ,P_SONG_ID_023 ,P_SONG_ID_024 ,P_SONG_ID_025 ,P_SONG_ID_026 ,P_SONG_ID_027 ,P_SONG_ID_028 ,P_SONG_ID_029 ,P_SONG_ID_030,
P_SONG_ID_031 ,P_SONG_ID_032 ,P_SONG_ID_033 ,P_SONG_ID_034 ,P_SONG_ID_035 ,P_SONG_ID_036 ,P_SONG_ID_037 ,P_SONG_ID_038 ,P_SONG_ID_039 ,P_SONG_ID_040,
P_SONG_ID_041 ,P_SONG_ID_042 ,P_SONG_ID_043 ,P_SONG_ID_044 ,P_SONG_ID_045 ,P_SONG_ID_046 ,P_SONG_ID_047 ,P_SONG_ID_048 ,P_SONG_ID_049 ,P_SONG_ID_050,
P_SONG_ID_051 ,P_SONG_ID_052 ,P_SONG_ID_053 ,P_SONG_ID_054 ,P_SONG_ID_055 ,P_SONG_ID_056 ,P_SONG_ID_057 ,P_SONG_ID_058 ,P_SONG_ID_059 ,P_SONG_ID_060,
P_SONG_ID_061 ,P_SONG_ID_062 ,P_SONG_ID_063 ,P_SONG_ID_064 ,P_SONG_ID_065 ,P_SONG_ID_066 ,P_SONG_ID_067 ,P_SONG_ID_068 ,P_SONG_ID_069 ,P_SONG_ID_070,
P_SONG_ID_071 ,P_SONG_ID_072 ,P_SONG_ID_073 ,P_SONG_ID_074 ,P_SONG_ID_075 ,P_SONG_ID_076 ,P_SONG_ID_077 ,P_SONG_ID_078 ,P_SONG_ID_079 ,P_SONG_ID_080,
P_SONG_ID_081 ,P_SONG_ID_082 ,P_SONG_ID_083 ,P_SONG_ID_084 ,P_SONG_ID_085 ,P_SONG_ID_086 ,P_SONG_ID_087 ,P_SONG_ID_088 ,P_SONG_ID_089 ,P_SONG_ID_090,
P_SONG_ID_091 ,P_SONG_ID_092 ,P_SONG_ID_093 ,P_SONG_ID_094 ,P_SONG_ID_095 ,P_SONG_ID_096 ,P_SONG_ID_097 ,P_SONG_ID_098 ,P_SONG_ID_099 ,P_SONG_ID_100
)
group by
vpc.SONG_ID
,vpc.STOREFRONT_ID;
TYPE SONG_ID_TYPE IS TABLE OF MUSIC@MUSIC_DB%TYPE INDEX BY BINARY_INTEGER;
V_SONG_ID_ARRAY SONG_ID_TYPE ;
v_commit_counter NUMBER := 0;
BEGIN
/* open cursor you intent to bulk collect from */
OPEN C_CURRENT_MUSIC;
LOOP
/* in batches of 100, bulk collect ADAM_ID mapped TMS_IDENTIFIER into PLSQL table or records */
FETCH C_CURRENT_MUSIC BULK COLLECT INTO V_SONG_ID_ARRAY LIMIT 100;
EXIT WHEN V_SONG_ID_ARRAY.COUNT = 0;
/* to avoid NO DATA FOUND error when pass 100 parameters to OPEN cursor, if the arrary
is not fully populated to 100, pad the array with nulls to fill up to 100 cells. */
IF (V_SONG_ID_ARRAY.COUNT >=1 and V_SONG_ID_ARRAY.COUNT <> 100) THEN
FOR j IN V_SONG_ID_ARRAY.COUNT+1..100 LOOP
V_SONG_ID_ARRAY(j) := null;
END LOOP;
END IF;
/* pass a batch of 100 to cursor that get price information per SONG_ID and STOREFRONT_ID */
FOR j IN C_get_music_price_data (
V_SONG_ID_ARRAY(1) ,V_SONG_ID_ARRAY(2) ,V_SONG_ID_ARRAY(3) ,V_SONG_ID_ARRAY(4) ,V_SONG_ID_ARRAY(5) ,V_SONG_ID_ARRAY(6) ,V_SONG_ID_ARRAY(7) ,V_SONG_ID_ARRAY(8) ,V_SONG_ID_ARRAY(9) ,V_SONG_ID_ARRAY(10) ,
V_SONG_ID_ARRAY(11) ,V_SONG_ID_ARRAY(12) ,V_SONG_ID_ARRAY(13) ,V_SONG_ID_ARRAY(14) ,V_SONG_ID_ARRAY(15) ,V_SONG_ID_ARRAY(16) ,V_SONG_ID_ARRAY(17) ,V_SONG_ID_ARRAY(18) ,V_SONG_ID_ARRAY(19) ,V_SONG_ID_ARRAY(20) ,
V_SONG_ID_ARRAY(21) ,V_SONG_ID_ARRAY(22) ,V_SONG_ID_ARRAY(23) ,V_SONG_ID_ARRAY(24) ,V_SONG_ID_ARRAY(25) ,V_SONG_ID_ARRAY(26) ,V_SONG_ID_ARRAY(27) ,V_SONG_ID_ARRAY(28) ,V_SONG_ID_ARRAY(29) ,V_SONG_ID_ARRAY(30) ,
V_SONG_ID_ARRAY(31) ,V_SONG_ID_ARRAY(32) ,V_SONG_ID_ARRAY(33) ,V_SONG_ID_ARRAY(34) ,V_SONG_ID_ARRAY(35) ,V_SONG_ID_ARRAY(36) ,V_SONG_ID_ARRAY(37) ,V_SONG_ID_ARRAY(38) ,V_SONG_ID_ARRAY(39) ,V_SONG_ID_ARRAY(40) ,
V_SONG_ID_ARRAY(41) ,V_SONG_ID_ARRAY(42) ,V_SONG_ID_ARRAY(43) ,V_SONG_ID_ARRAY(44) ,V_SONG_ID_ARRAY(45) ,V_SONG_ID_ARRAY(46) ,V_SONG_ID_ARRAY(47) ,V_SONG_ID_ARRAY(48) ,V_SONG_ID_ARRAY(49) ,V_SONG_ID_ARRAY(50) ,
V_SONG_ID_ARRAY(51) ,V_SONG_ID_ARRAY(52) ,V_SONG_ID_ARRAY(53) ,V_SONG_ID_ARRAY(54) ,V_SONG_ID_ARRAY(55) ,V_SONG_ID_ARRAY(56) ,V_SONG_ID_ARRAY(57) ,V_SONG_ID_ARRAY(58) ,V_SONG_ID_ARRAY(59) ,V_SONG_ID_ARRAY(60) ,
V_SONG_ID_ARRAY(61) ,V_SONG_ID_ARRAY(62) ,V_SONG_ID_ARRAY(63) ,V_SONG_ID_ARRAY(64) ,V_SONG_ID_ARRAY(65) ,V_SONG_ID_ARRAY(66) ,V_SONG_ID_ARRAY(67) ,V_SONG_ID_ARRAY(68) ,V_SONG_ID_ARRAY(69) ,V_SONG_ID_ARRAY(70) ,
V_SONG_ID_ARRAY(71) ,V_SONG_ID_ARRAY(72) ,V_SONG_ID_ARRAY(73) ,V_SONG_ID_ARRAY(74) ,V_SONG_ID_ARRAY(75) ,V_SONG_ID_ARRAY(76) ,V_SONG_ID_ARRAY(77) ,V_SONG_ID_ARRAY(78) ,V_SONG_ID_ARRAY(79) ,V_SONG_ID_ARRAY(80) ,
V_SONG_ID_ARRAY(81) ,V_SONG_ID_ARRAY(82) ,V_SONG_ID_ARRAY(83) ,V_SONG_ID_ARRAY(84) ,V_SONG_ID_ARRAY(85) ,V_SONG_ID_ARRAY(86) ,V_SONG_ID_ARRAY(87) ,V_SONG_ID_ARRAY(88) ,V_SONG_ID_ARRAY(89) ,V_SONG_ID_ARRAY(90) ,
V_SONG_ID_ARRAY(91) ,V_SONG_ID_ARRAY(92) ,V_SONG_ID_ARRAY(93) ,V_SONG_ID_ARRAY(94) ,V_SONG_ID_ARRAY(95) ,V_SONG_ID_ARRAY(96) ,V_SONG_ID_ARRAY(97) ,V_SONG_ID_ARRAY(98) ,V_SONG_ID_ARRAY(99) ,V_SONG_ID_ARRAY(100)
)
LOOP
/* do stuff with data from Song and Pricing Database coming from the two
separate cursors, then continue processing more rows... */
END LOOP;
/* commit after each batch of 100 SONG_IDs is processed */
COMMIT;
EXIT WHEN C_CURRENT_MUSIC%NOTFOUND; -- exit when there are no more rows to fetch from cursor
END LOOP; -- bulk fetching loop
CLOSE C_CURRENT_MUSIC; -- close cursor that was used in bulk collection
/* commit rows */
COMMIT; -- commit any remaining uncommitted data.
END;
I've got a problem when passing a VARRAY of numbers as a parameter to a remote cursor: it takes a very long time to run, sometimes not finishing even after an hour has passed.
Continuing with my example in the original entry, I replaced the bulk collect into a PL/SQL table collection with a VARRAY, and I bulk collect into the VARRAY. This is fast, and I know it works because I can DBMS_OUTPUT.PUT_LINE cells of the VARRAY, so I know it is getting populated correctly. However, when I pass the VARRAY containing 100 cells populated with SONG_IDs as a parameter to the cursor, execution time is over an hour when I am expecting a few seconds.
The code example below strips the problem down to its raw details. I skip the bulk collect and just manually populate a VARRAY with 100 SONG_ID values, then try to pass it as a parameter to a cursor, but the execution time of the cursor is unexpectedly long, over 30 minutes, sometimes longer, when I am expecting seconds.
IMPORTANT: If I take the same 100 SONG_IDs and place them directly in the cursor query's where IN clause, the SQL runs in under 5 seconds and returns result. Also, if I pass the 100 SONG_IDs as individual cells of a PLSQL table collection, then it also runs fast.
I thought that since the VARRAY is used via a SELECT subquery it is queried locally while the cursor is remote, and that I had a distributed-query problem on my hands, so I put in the DRIVING_SITE hint to attempt to force the query against the VARRAY to go to the remote server, so the rest of the query would run there before returning the result, but that didn't work either; I still got a slow response.
Is something wrong with my code, or am I running into an Oracle problem that may require support to resolve?
DECLARE
/* define a parameterized cursor that accepts XXX number of input SONG_IDs and
retrieves required pricing information */
CURSOR C_get_music_price_data (
p_array_song_ids SYS.ODCInumberList
) IS
select /*+DRIVING_SITE(pd) */
count(distinct s.EVE_ID)
from PRICE_DATA@PRICING_DB pd
where pd.COUNTRY = 'USA'
and pd.START_DATE <= sysdate
and pd.END_DATE > sysdate
and pd.SONG_ID IN
( select column_value from table(p_array_song_ids) )
group by
pd.SONG_ID
,pd.STOREFRONT_ID;
V_ARRAY_SONG_IDS SYS.ODCInumberList := SYS.ODCInumberList();
BEGIN
V_ARRAY_SONG_IDS.EXTEND(100);
V_ARRAY_SONG_IDS( 1 ) := 31135 ;
V_ARRAY_SONG_IDS( 2 ) := 31140 ;
V_ARRAY_SONG_IDS( 3 ) := 31142 ;
V_ARRAY_SONG_IDS( 4 ) := 31144 ;
V_ARRAY_SONG_IDS( 5 ) := 31146 ;
V_ARRAY_SONG_IDS( 6 ) := 31148 ;
V_ARRAY_SONG_IDS( 7 ) := 31150 ;
V_ARRAY_SONG_IDS( 8 ) := 31152 ;
V_ARRAY_SONG_IDS( 9 ) := 31154 ;
V_ARRAY_SONG_IDS( 10 ) := 31156 ;
V_ARRAY_SONG_IDS( 11 ) := 31158 ;
V_ARRAY_SONG_IDS( 12 ) := 31160 ;
V_ARRAY_SONG_IDS( 13 ) := 33598 ;
V_ARRAY_SONG_IDS( 14 ) := 33603 ;
V_ARRAY_SONG_IDS( 15 ) := 33605 ;
V_ARRAY_SONG_IDS( 16 ) := 33607 ;
V_ARRAY_SONG_IDS( 17 ) := 33609 ;
V_ARRAY_SONG_IDS( 18 ) := 33611 ;
V_ARRAY_SONG_IDS( 19 ) := 33613 ;
V_ARRAY_SONG_IDS( 20 ) := 33615 ;
V_ARRAY_SONG_IDS( 21 ) := 33617 ;
V_ARRAY_SONG_IDS( 22 ) := 33630 ;
V_ARRAY_SONG_IDS( 23 ) := 33632 ;
V_ARRAY_SONG_IDS( 24 ) := 33636 ;
V_ARRAY_SONG_IDS( 25 ) := 33638 ;
V_ARRAY_SONG_IDS( 26 ) := 33640 ;
V_ARRAY_SONG_IDS( 27 ) := 33642 ;
V_ARRAY_SONG_IDS( 28 ) := 33644 ;
V_ARRAY_SONG_IDS( 29 ) := 33646 ;
V_ARRAY_SONG_IDS( 30 ) := 33648 ;
V_ARRAY_SONG_IDS( 31 ) := 33662 ;
V_ARRAY_SONG_IDS( 32 ) := 33667 ;
V_ARRAY_SONG_IDS( 33 ) := 33669 ;
V_ARRAY_SONG_IDS( 34 ) := 33671 ;
V_ARRAY_SONG_IDS( 35 ) := 33673 ;
V_ARRAY_SONG_IDS( 36 ) := 33675 ;
V_ARRAY_SONG_IDS( 37 ) := 33677 ;
V_ARRAY_SONG_IDS( 38 ) := 33679 ;
V_ARRAY_SONG_IDS( 39 ) := 33681 ;
V_ARRAY_SONG_IDS( 40 ) := 33683 ;
V_ARRAY_SONG_IDS( 41 ) := 33685 ;
V_ARRAY_SONG_IDS( 42 ) := 33700 ;
V_ARRAY_SONG_IDS( 43 ) := 33702 ;
V_ARRAY_SONG_IDS( 44 ) := 33704 ;
V_ARRAY_SONG_IDS( 45 ) := 33706 ;
V_ARRAY_SONG_IDS( 46 ) := 33708 ;
V_ARRAY_SONG_IDS( 47 ) := 33710 ;
V_ARRAY_SONG_IDS( 48 ) := 33712 ;
V_ARRAY_SONG_IDS( 49 ) := 33723 ;
V_ARRAY_SONG_IDS( 50 ) := 33725 ;
V_ARRAY_SONG_IDS( 51 ) := 33727 ;
V_ARRAY_SONG_IDS( 52 ) := 33729 ;
V_ARRAY_SONG_IDS( 53 ) := 33731 ;
V_ARRAY_SONG_IDS( 54 ) := 33733 ;
V_ARRAY_SONG_IDS( 55 ) := 33735 ;
V_ARRAY_SONG_IDS( 56 ) := 33737 ;
V_ARRAY_SONG_IDS( 57 ) := 33749 ;
V_ARRAY_SONG_IDS( 58 ) := 33751 ;
V_ARRAY_SONG_IDS( 59 ) := 33753 ;
V_ARRAY_SONG_IDS( 60 ) := 33755 ;
V_ARRAY_SONG_IDS( 61 ) := 33757 ;
V_ARRAY_SONG_IDS( 62 ) := 33759 ;
V_ARRAY_SONG_IDS( 63 ) := 33761 ;
V_ARRAY_SONG_IDS( 64 ) := 33763 ;
V_ARRAY_SONG_IDS( 65 ) := 33775 ;
V_ARRAY_SONG_IDS( 66 ) := 33777 ;
V_ARRAY_SONG_IDS( 67 ) := 33779 ;
V_ARRAY_SONG_IDS( 68 ) := 33781 ;
V_ARRAY_SONG_IDS( 69 ) := 33783 ;
V_ARRAY_SONG_IDS( 70 ) := 33785 ;
V_ARRAY_SONG_IDS( 71 ) := 33787 ;
V_ARRAY_SONG_IDS( 72 ) := 33789 ;
V_ARRAY_SONG_IDS( 73 ) := 33791 ;
V_ARRAY_SONG_IDS( 74 ) := 33793 ;
V_ARRAY_SONG_IDS( 75 ) := 33807 ;
V_ARRAY_SONG_IDS( 76 ) := 33809 ;
V_ARRAY_SONG_IDS( 77 ) := 33811 ;
V_ARRAY_SONG_IDS( 78 ) := 33813 ;
V_ARRAY_SONG_IDS( 79 ) := 33815 ;
V_ARRAY_SONG_IDS( 80 ) := 33817 ;
V_ARRAY_SONG_IDS( 81 ) := 33819 ;
V_ARRAY_SONG_IDS( 82 ) := 33821 ;
V_ARRAY_SONG_IDS( 83 ) := 33823 ;
V_ARRAY_SONG_IDS( 84 ) := 33825 ;
V_ARRAY_SONG_IDS( 85 ) := 33839 ;
V_ARRAY_SONG_IDS( 86 ) := 33844 ;
V_ARRAY_SONG_IDS( 87 ) := 33846 ;
V_ARRAY_SONG_IDS( 88 ) := 33848 ;
V_ARRAY_SONG_IDS( 89 ) := 33850 ;
V_ARRAY_SONG_IDS( 90 ) := 33852 ;
V_ARRAY_SONG_IDS( 91 ) := 33854 ;
V_ARRAY_SONG_IDS( 92 ) := 33856 ;
V_ARRAY_SONG_IDS( 93 ) := 33858 ;
V_ARRAY_SONG_IDS( 94 ) := 33860 ;
V_ARRAY_SONG_IDS( 95 ) := 33874 ;
V_ARRAY_SONG_IDS( 96 ) := 33879 ;
V_ARRAY_SONG_IDS( 97 ) := 33881 ;
V_ARRAY_SONG_IDS( 98 ) := 33883 ;
V_ARRAY_SONG_IDS( 99 ) := 33885 ;
V_ARRAY_SONG_IDS(100 ) := 33889 ;
/* do stuff with data from the Song and Pricing databases coming from the
   two separate cursors, then continue processing more rows... */
FOR i IN C_get_music_price_data( v_array_song_ids ) LOOP
  -- (this is the loop where I pass in v_array_song_ids populated with
  --  only 100 cells and it runs forever)
  null;
END LOOP;
END; -
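One possibility worth checking (an assumption on my part, not something stated in the thread) is that the optimizer uses a large default cardinality estimate for a TABLE() collection, which can push the distributed join into a bad plan even with DRIVING_SITE. A hedged sketch of the same query with a CARDINALITY hint telling the optimizer the true collection size:

```sql
-- hypothetical sketch: same query, with a CARDINALITY hint on the
-- collection; 100 matches the number of elements actually passed in
select /*+ DRIVING_SITE(pd) */
       count(distinct pd.EVE_ID)
from   PRICE_DATA@PRICING_DB pd
where  pd.COUNTRY = 'USA'
and    pd.START_DATE <= sysdate
and    pd.END_DATE > sysdate
and    pd.SONG_ID IN
       ( select /*+ CARDINALITY(t, 100) */ t.column_value
         from   table(p_array_song_ids) t )
group by pd.SONG_ID, pd.STOREFRONT_ID;
```

Comparing the remote execution plan with and without the hint would show whether the collection's cardinality estimate is the culprit.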
How to improve performance using bulk collects with plsql tables or arrays
Hi All,
my procedure is like this
declare
cursor c1 is select ----------------------
begin
assigning to variables
validations on that variables
--50 validations are here --
insert into a table
end;
we have created indexes on primary keys,
i want to use
DECLARE
CURSOR a_cur IS
SELECT program_id
FROM airplanes;
TYPE myarray IS TABLE OF a_cur%ROWTYPE;
cur_array myarray;
BEGIN
OPEN a_cur;
LOOP
FETCH a_cur BULK COLLECT INTO cur_array LIMIT 100;
***--------- can I assign the cursor data to the PL/SQL table variables or an array***
***and validate on the PL/SQL variables, and then ---***
insert into a table
EXIT WHEN a_cur%NOTFOUND;
END LOOP;
CLOSE a_cur;
END;
Edited by: Veekay on Oct 21, 2011 4:28 AM
The fastest way is often this:
insert /*+append */
into aTable
select * from airplanes;
commit;
The select and insert parts can even be run in parallel if needed.
However, if the operation is complex, or the data set is very large, or the programmer is decent but not excellent, then the bulk approach should be considered. It is often a stable approach that scales linearly.
The solution depends a little on the database version.
LOOP
FETCH a_cur BULK COLLECT INTO cur_array LIMIT 100;
EXIT WHEN cur_array.count = 0;
forall i in cur_array.first .. cur_array.last
insert into aTable (id)
values (cur_array(i).program_id);
END LOOP;
...If you have more than one column then you might need a separate collection for each column. Other possibilities depend on the db version.
Also: do not exit using a_cur%NOTFOUND immediately after the fetch. This is wrong! You might lose records from the end of the data set. -
Use of FOR Cursor and BULK COLLECT INTO
Dear all,
in which cases do we prefer a FOR cursor loop, and in which a cursor with BULK COLLECT INTO? The following contains two blocks that query identically, where one uses a FOR cursor loop and the other uses BULK COLLECT INTO. Which one performs better for the given task? And how do we measure the performance difference between these two?
I'm using sample HR schema:
declare
l_start number;
BEGIN
l_start:= DBMS_UTILITY.get_time;
dbms_lock.sleep(1);
FOR employee IN (SELECT e.last_name, j.job_title FROM employees e,jobs j
where e.job_id=j.job_id and e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name)
LOOP
DBMS_OUTPUT.PUT_LINE ('Name = ' || employee.last_name || ', Job = ' || employee.job_title);
END LOOP;
DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
END;
declare
l_start number;
type rec_type is table of varchar2(20);
name_rec rec_type;
job_rec rec_type;
begin
l_start:= DBMS_UTILITY.get_time;
dbms_lock.sleep(1);
SELECT e.last_name, j.job_title bulk collect into name_rec,job_rec FROM employees e,jobs j
where e.job_id=j.job_id and e.job_id LIKE '%CLERK%' AND e.manager_id > 120 ORDER BY e.last_name;
for j in name_rec.first..name_rec.last loop
DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
END LOOP;
DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
end;
/
In this code I put a timestamp in each block, but they are useless since both blocks run virtually instantaneously...
Best regards,
Val
If you want to get 100% of the benefit of bulk collect then it must be implemented as below:
declare
Cursor cur_emp
is
SELECT e.last_name, j.job_title
FROM employees e,jobs j
where e.job_id=j.job_id
and e.job_id LIKE '%CLERK%'
AND e.manager_id > 120
ORDER BY e.last_name;
l_start number;
type rec_type is table of varchar2(20);
name_rec rec_type;
job_rec rec_type;
begin
l_start:= DBMS_UTILITY.get_time;
dbms_lock.sleep(1);
/* SELECT e.last_name, j.job_title bulk collect into name_rec, job_rec
   FROM employees e, jobs j
   where e.job_id=j.job_id and e.job_id LIKE '%CLERK%' AND e.manager_id > 120
   ORDER BY e.last_name; */
OPEN cur_emp;
LOOP
FETCH cur_emp BULK COLLECT INTO name_rec, job_rec LIMIT 100;
EXIT WHEN name_rec.COUNT=0;
FOR j in 1..name_rec.COUNT
LOOP
DBMS_OUTPUT.PUT_LINE ('Name = ' || name_rec(j) || ', Job = ' || job_rec(j));
END LOOP;
EXIT WHEN cur_emp%NOTFOUND;
END LOOP;
CLOSE cur_emp;
DBMS_OUTPUT.put_line('total time: ' || to_char(DBMS_UTILITY.get_time - l_start) || ' hsecs');
end;
/
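On the measurement question: since a single run of either block finishes in well under one centisecond, one approach (a sketch under my own assumptions; the loop count of 100 is arbitrary) is to repeat each variant many times inside the same DBMS_UTILITY.GET_TIME bracket so the elapsed time becomes measurable:

```sql
-- minimal timing-harness sketch: GET_TIME returns hundredths of a second,
-- so repeat the work until the total is comfortably above the resolution
declare
  l_start pls_integer;
begin
  l_start := dbms_utility.get_time;
  for r in 1 .. 100 loop
    null;  -- put the variant under test here
  end loop;
  dbms_output.put_line('total: '
    || to_char(dbms_utility.get_time - l_start) || ' hsecs for 100 runs');
end;
/
```

Keep DBMS_OUTPUT calls out of the timed region when comparing the two variants, since the output itself can dominate the measurement.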