Which is faster (more performance-oriented): collections or a cursor FOR loop?
Hi,
Please help me regarding a dilemma.
Is using a cursor FOR loop faster than looping over a collection variable that was populated using BULK COLLECT? Why?
Thanks in advance
rollerz
You can just test it yourself, can't you? OK, I was a bit curious myself, so I did this...
TEST ROUND 1
SQL> declare
2 ltime integer;
3 begin
4 ltime := dbms_utility.get_time;
5
6 for i in (select level from dual connect by level <= 100000)
7 loop
8 null;
9 end loop;
10
11 ltime := dbms_utility.get_time - ltime;
12 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
13 end;
14 /
ExecTime:.19 seconds...
PL/SQL procedure successfully completed.
SQL> declare
2 ltime integer;
3 type my_type is table of integer;
4 lType my_type;
5 begin
6 ltime := dbms_utility.get_time;
7 select level bulk collect into lType from dual connect by level <= 100000;
8
9 for i in 1..lType.count
10 loop
11 null;
12 end loop;
13
14 ltime := dbms_utility.get_time - ltime;
15 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
16 end;
17 /
ExecTime:.14 seconds...
PL/SQL procedure successfully completed.
TEST ROUND 2
SQL> declare
2 ltime integer;
3 begin
4 ltime := dbms_utility.get_time;
5
6 for i in (select level from dual connect by level <= 100000)
7 loop
8 null;
9 end loop;
10
11 ltime := dbms_utility.get_time - ltime;
12 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
13 end;
14 /
ExecTime:.17 seconds...
PL/SQL procedure successfully completed.
SQL> declare
2 ltime integer;
3 type my_type is table of integer;
4 lType my_type;
5 begin
6 ltime := dbms_utility.get_time;
7 select level bulk collect into lType from dual connect by level <= 100000;
8
9 for i in 1..lType.count
10 loop
11 null;
12 end loop;
13
14 ltime := dbms_utility.get_time - ltime;
15 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
16 end;
17 /
ExecTime:.13 seconds...
PL/SQL procedure successfully completed.
TEST ROUND 3
SQL> declare
2 ltime integer;
3 begin
4 ltime := dbms_utility.get_time;
5
6 for i in (select level from dual connect by level <= 100000)
7 loop
8 null;
9 end loop;
10
11 ltime := dbms_utility.get_time - ltime;
12 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
13 end;
14 /
ExecTime:.16 seconds...
PL/SQL procedure successfully completed.
SQL> declare
2 ltime integer;
3 type my_type is table of integer;
4 lType my_type;
5 begin
6 ltime := dbms_utility.get_time;
7 select level bulk collect into lType from dual connect by level <= 100000;
8
9 for i in 1..lType.count
10 loop
11 null;
12 end loop;
13
14 ltime := dbms_utility.get_time - ltime;
15 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
16 end;
17 /
ExecTime:.13 seconds...
PL/SQL procedure successfully completed.
TEST ROUND 4
SQL> declare
2 ltime integer;
3 begin
4 ltime := dbms_utility.get_time;
5
6 for i in (select level from dual connect by level <= 100000)
7 loop
8 null;
9 end loop;
10
11 ltime := dbms_utility.get_time - ltime;
12 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
13 end;
14 /
ExecTime:.16 seconds...
PL/SQL procedure successfully completed.
SQL> declare
2 ltime integer;
3 type my_type is table of integer;
4 lType my_type;
5 begin
6 ltime := dbms_utility.get_time;
7 select level bulk collect into lType from dual connect by level <= 100000;
8
9 for i in 1..lType.count
10 loop
11 null;
12 end loop;
13
14 ltime := dbms_utility.get_time - ltime;
15 dbms_output.put_line('ExecTime:'||ltime/100||' seconds...');
16 end;
17 /
ExecTime:.13 seconds...
PL/SQL procedure successfully completed.
So BULK COLLECT looks faster...
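One caveat worth adding to the test above: a plain BULK COLLECT loads all 100,000 rows into memory at once. The LIMIT clause bounds the memory used; a sketch using the same query as the test (untested here):

```sql
declare
  type my_type is table of integer;
  ltype my_type;
  cursor c is select level from dual connect by level <= 100000;
begin
  open c;
  loop
    fetch c bulk collect into ltype limit 1000;  -- at most 1000 rows in memory at a time
    for i in 1..ltype.count
    loop
      null;  -- process the batch
    end loop;
    exit when c%notfound;  -- tested AFTER the batch has been processed
  end loop;
  close c;
end;
/
```

With a reasonable LIMIT you keep most of the speed benefit without holding the whole result set in PGA memory.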
Thanks,
Karthick.
Similar Messages
-
Which is more faster - IN or EXISTS?
Which is more faster - IN or EXISTS?
It depends,
but for many simple cases the optimizer generates exactly the same plan for IN and EXISTS,
so there shouldn't be any difference in speed.
explain plan for
select * from hr.jobs j
where j.job_id in ( select job_id from hr.job_history );
select * from table(dbms_xplan.display );
plan FOR succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 3539156008
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8 | 336 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 8 | 336 | 3 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL| JOBS | 19 | 627 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | JHIST_JOB_IX | 4 | 36 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("J"."JOB_ID"="JOB_ID")
15 rows selected
explain plan for
select * from hr.jobs jb
where exists (
select 1 from hr.job_history hi
where jb.job_id = hi.job_id
);
select * from table(dbms_xplan.display );
plan FOR succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 3539156008
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8 | 336 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 8 | 336 | 3 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL| JOBS | 19 | 627 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | JHIST_JOB_IX | 4 | 36 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("JB"."JOB_ID"="HI"."JOB_ID")
15 rows selected -
Cursor For Loop SQL/PL right application? Need help with PL Performance
I will preface this post by saying that I am a novice Oracle PL user, so an overexplanation would not be an issue here.
Goal: run a hierarchical query for over 120k rows and insert the output into Table 1. Currently I am using a cursor FOR loop that takes each record and puts 2 columns into the "start with" section and the "connect by" section. The hierarchical query runs and then inserts the output into another table. I do this 120k times (I know it's not very efficient). The hierarchical query by itself doesn't take too long (run on its own for many parts), but this loop process takes over 9 hours to run all 120k records. I am looking for a way to make this run faster. I've read about BULK COLLECT and FORALL, but I don't understand how they would help in my specific case.
Is there any way to rewrite the PL/SQL statement below, with the cursor FOR loop or with another methodology, to accomplish the goal significantly quicker?
Below is the code (I am leaving some parts out for space):
CREATE OR REPLACE PROCEDURE INV_BOM is
CURSOR DISPATCH_CSR IS
select materialid,plantid
from INV_SAP_BOM_MAKE_UNIQUE;
Begin
For Row_value in Dispatch_CSR Loop
begin
insert into Table 1
select column1
,column2
,column3
,column4
from( select ..
from table 3
start with materialid = row_value.materialid
and plantid = row_value.plantid
connect by prior plantid = row_value.plantid
exception...
end loop
exception..
commit

BluShadow:
The table that the cursor is pulling from ( INV_SAP_BOM_MAKE_UNIQUE) has only 2 columns
Materialid and Plantid
Example
Materialid Plantid
100-C 1000
100-B 1010
X-2 2004
I use the cursor to go down the list 1 by 1 and run a hierarchical query for each row. The only reason I do this is because I have 120,000 materialid,plantid combinations that I need to run and SQL has a limit of 1000 items in the "start with" if I'm semi-correct on that.
Structure of Table it would be inserted into ( Table 1) after Hierarchical SQL Statement runs:
Materialid Plantid User Create Column1 Col2
100-C 1000 25 EA
The hierarchical query, once run, gives the 2 columns at the end.
I am looking for a way to either run a quicker SQL or find a more efficient way of running all 120,000 materialid/plantid rows through the hierarchical query.
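Since the per-row loop only changes the START WITH anchor, one way to avoid 120,000 separate statements is to drive all the anchor rows from INV_SAP_BOM_MAKE_UNIQUE in a single INSERT ... SELECT. This is only a sketch: the table and column names are placeholders taken from the post, the CONNECT BY condition below is a placeholder and must be kept as in the real query, and it is untested:

```sql
-- Sketch: all 120k anchors driven from the driver table in ONE statement.
-- A subquery in START WITH is not subject to the 1000-item limit that
-- applies to literal IN lists.
INSERT INTO table1 (column1, column2, column3, column4)
SELECT column1, column2, column3, column4
  FROM table3 t
 START WITH (t.materialid, t.plantid) IN
            (SELECT u.materialid, u.plantid
               FROM inv_sap_bom_make_unique u)
CONNECT BY PRIOR t.plantid = t.parent_plantid;  -- placeholder condition
```

If each output row must be tagged with the anchor that produced it, CONNECT_BY_ROOT materialid and CONNECT_BY_ROOT plantid can be selected as extra columns.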
Any Advice? I really appreciate it. Thank You. -
Which is better ListIterator or Simple list.size() in for loop
Hi everybody
I have one scenario in which, I need to delete the duplicate values from the list (Note : Duplicate values are coming in sequence).
I tried two approaches, as follows:
1) Using ListIterator
I iterated over all values, and whenever I found a duplicate I called the iterator's remove() method.
2) I made another ArrrayList object, and iterated the old list using size() and for loop, and if I found unique value then I added that value to the new ArrayList.
I also created a test Java file to measure the performance of both cases, but I am not sure whether this test is correct. Please suggest which approach is better.
Code for the test:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.ListIterator;

public class TestReadonly {

    public static void main(String[] args) {
        List<String> list = new ArrayList<String>();
        long beforeHeap, afterHeap;
        long beforeTime, afterTime;

        // Case 1: remove duplicates in place with a ListIterator
        addElementsToList(list);
        Collections.sort(list);
        callGC();
        beforeHeap = Runtime.getRuntime().freeMemory();
        beforeTime = System.currentTimeMillis();
        System.out.println(" Before "+System.currentTimeMillis()+" List Size "+list.size()+" heap Size "+beforeHeap);
        new TestReadonly().deleteDuplicated1(list);
        afterHeap = Runtime.getRuntime().freeMemory();
        afterTime = System.currentTimeMillis();
        System.out.println(" After "+System.currentTimeMillis()+" List Size "+list.size()+" heap Size "+afterHeap);
        System.out.println(" Time Differance "+(afterTime-beforeTime)+" Heap Differance "+(afterHeap-beforeHeap));

        // Case 2: copy unique values into a fresh ArrayList
        list.clear();
        addElementsToList(list);
        Collections.sort(list);
        callGC();
        beforeHeap = Runtime.getRuntime().freeMemory();
        beforeTime = System.currentTimeMillis();
        System.out.println(" Before "+System.currentTimeMillis()+" List Size "+list.size()+" heap Size "+beforeHeap);
        list = new TestReadonly().deleteDuplicated2(list);
        afterHeap = Runtime.getRuntime().freeMemory();
        afterTime = System.currentTimeMillis();
        System.out.println(" After "+System.currentTimeMillis()+" List Size "+list.size()+" heap Size "+afterHeap);
        System.out.println(" Time Differance "+(afterTime-beforeTime)+" Heap Differance "+(afterHeap-beforeHeap));
    }

    private static void addElementsToList(List<String> list) {
        for (int i = 0; i < 1000000; i++) {
            list.add("List Object" + i);
        }
        // 10,000 duplicates of the first elements
        for (int i = 0; i < 10000; i++) {
            list.add("List Object" + i);
        }
    }

    private static void callGC() {
        for (int i = 0; i < 5; i++) {
            Runtime.getRuntime().gc();
        }
    }

    // Approach 1: remove duplicates in place via ListIterator
    private void deleteDuplicated1(List<String> employeeList) {
        String BLANK = "";
        String currentEmployeeNumber = null;
        String previousEmployeeNumber;
        ListIterator<String> iterator = employeeList.listIterator();
        while (iterator.hasNext()) {
            previousEmployeeNumber = currentEmployeeNumber;
            currentEmployeeNumber = iterator.next();
            if (currentEmployeeNumber.equals(previousEmployeeNumber)
                    || (BLANK.equals(currentEmployeeNumber)
                        && BLANK.equals(previousEmployeeNumber))) {
                iterator.remove();
            }
        }
    }

    // Approach 2: copy only the unique values into a new list
    private List<String> deleteDuplicated2(List<String> employeeList) {
        String currentEmployeeNumber = null;
        String previousEmployeeNumber;
        List<String> l1 = new ArrayList<String>(employeeList.size());
        for (int i = 0; i < employeeList.size(); i++) {
            previousEmployeeNumber = currentEmployeeNumber;
            currentEmployeeNumber = employeeList.get(i);
            if (!currentEmployeeNumber.equals(previousEmployeeNumber)) {
                l1.add(currentEmployeeNumber);
            }
        }
        return l1;
    }
}
Output
Before 1179384331873 List Size 1010000 heap Size 60739664
After 1179384365545 List Size 1000000 heap Size 60737600
Time Differance 33672 Heap Differance -2064
Before 1179384367545 List Size 1010000 heap Size 60739584
After 1179384367639 List Size 1000000 heap Size 56697504
Time Differance 94 Heap Differance -4042080

I think your test is OK like that, although I would have tested with two different applications, just to be sure that the heap is clean. You never know what gc() actually does.
Still, your results show what is expected:
Approach 1 (ListIterator) takes virtually no extra memory, but takes a lot of time, since the list has to be rebuilt after each remove.
Approach 2 is much faster, but takes a lot of extra memory, since you need a "copy" of the original list.
Basically, both approaches are valid. You have to decide depending on your requirements.
Approach 1 can be optimized by using a LinkedList instead of an ArrayList. When you remove an element from an ArrayList, all following elements have to be shifted which takes a lot of time. A LinkedList should behave better.
Finally, instead of searching for duplicates afterwards, consider checking for duplicates while filling the list, or even use a Map.
Tobias -
Bulk collect usage in cursor for loop
Hi Team,
I have a cursor like the one below; assume the cursor returns 3000 records:
CURSOR csr_del_frm_stg(c_source_name VARCHAR2 , c_file_type VARCHAR2)
IS
SELECT stg.last_name,stg.employee_number,stg.email
FROM akam_int.xxak_eb_contact_stg stg
MINUS
SELECT ss.last_name,ss.employee_number,ss.email
FROM akam_int.xxak_eb_contact_stg_ss ss;
I declared record and collection types as:
TYPE emp_rec IS RECORD (LAST_NAME VARCHAR2(40)
,EMPLOYEE_NUMBER VARCHAR2(50)
,EMAIL VARCHAR2(80));
TYPE emp_rec_ss IS VARRAY(3000) OF emp_rec;
I'm updating the status of those cursor records to 'C' in the FOR loop below:
FOR l_csr_del_frm_stg IN csr_del_frm_stg(p_source_name , p_file_type)
LOOP
FETCH csr_del_frm_stg BULK COLLECT INTO emp_rec_ss LIMIT 500;
FORALL i IN emp_rec_ss.FIRST..emp_rec_ss.LAST
UPDATE akam_int.xxak_eb_contact_stg stg
SET akam_status_flag = 'C'
WHERE stg.employee_number = emp_rec_ss(i).employee_number;
EXIT WHEN csr_del_frm_stg%NOTFOUND;
END LOOP;
I get the following errors when I compile the code:
PLS-00321: expression 'EMP_REC_SS' is inappropriate as the left hand side of an assignment statement
PLS-00302: component 'FIRST' must be declared

Use cursor variables:
declare
v_where varchar2(100) := '&where_clause';
v_cur sys_refcursor;
v_ename varchar2(30);
begin
open v_cur for 'select ename from emp where ' || v_where;
loop
fetch v_cur into v_ename;
exit when v_cur%notfound;
dbms_output.put_line(v_ename);
end loop;
close v_cur;
end;
Enter value for where_clause: deptno = 10
CLARK
KING
MILLER
PL/SQL procedure successfully completed.
SQL> /
Enter value for where_clause: sal = 5000
KING
PL/SQL procedure successfully completed.
SQL> /
Enter value for where_clause: job = ''CLERK''
SMITH
ADAMS
JAMES
MILLER
PL/SQL procedure successfully completed.
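Coming back to the original compile errors: they arise because the code bulk collects into the type name (emp_rec_ss) instead of into a variable of that type, and because it mixes a cursor FOR loop with an explicit FETCH of the same cursor. A hedged sketch of the usual pattern, reusing the cursor, table, and column names from the post (untested):

```sql
DECLARE
  TYPE emp_rec IS RECORD (last_name       VARCHAR2(40),
                          employee_number VARCHAR2(50),
                          email           VARCHAR2(80));
  TYPE emp_tab IS TABLE OF emp_rec;  -- nested table; no VARRAY bound to outgrow
  l_emps emp_tab;                    -- a VARIABLE of the collection type
BEGIN
  OPEN csr_del_frm_stg(p_source_name, p_file_type);  -- no FOR loop around it
  LOOP
    FETCH csr_del_frm_stg BULK COLLECT INTO l_emps LIMIT 500;
    FORALL i IN 1 .. l_emps.COUNT
      UPDATE akam_int.xxak_eb_contact_stg stg
         SET stg.akam_status_flag = 'C'
       WHERE stg.employee_number = l_emps(i).employee_number;
    EXIT WHEN csr_del_frm_stg%NOTFOUND;  -- tested after the batch is processed
  END LOOP;
  CLOSE csr_del_frm_stg;
END;
```

FORALL over 1 .. l_emps.COUNT (rather than FIRST .. LAST on a type name) also avoids the PLS-00302 on FIRST.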
SQL>

SY. -
Hi all,
I have a performance issue in the below code, where I am trying to insert the data from table_stg into the target_tab and parent_tab tables and then into child tables via a cursor with bulk collect. The target_tab and parent_tab are huge tables and have a row-wise trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. Now my requirement is to reduce it to 5 to 10 mins.
Can someone please guide me here? It's a bit urgent. Awaiting your response.
declare
vmax_Value NUMBER(5);
vcnt number(10);
id_val number(20);
pc_id number(15);
vtable_nm VARCHAR2(100);
vstep_no VARCHAR2(10);
vsql_code VARCHAR2(10);
vsql_errm varchar2(200);
vtarget_starttime timestamp;
limit_in number :=10000;
idx number(10);
cursor stg_cursor is
select
DESCRIPTION,
SORT_CODE,
ACCOUNT_NUMBER,
to_number(to_char(CORRESPONDENCE_DATE,'DD')) crr_day,
to_char(CORRESPONDENCE_DATE,'MONTH') crr_month,
to_number(substr(to_char(CORRESPONDENCE_DATE,'DD-MON-YYYY'),8,4)) crr_year,
PARTY_ID,
GUID,
PAPERLESS_REF_IND,
PRODUCT_TYPE,
PRODUCT_BRAND,
PRODUCT_HELD_ID,
NOTIFICATION_PREF,
UNREAD_CORRES_PERIOD,
EMAIL_ID,
MOBILE_NUMBER,
TITLE,
SURNAME,
POSTCODE,
EVENT_TYPE,
PRIORITY_IND,
SUBJECT,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD
from table_stg;
type rec_type is table of stg_rec_type index by pls_integer;
v_rt_all_cols rec_type;
BEGIN
vstep_no := '0';
vmax_value := 0;
vtarget_starttime := systimestamp;
id_val := 0;
pc_id := 0;
success_flag := 0;
vstep_no := '1';
vtable_nm := 'before cursor';
OPEN stg_cursor;
vstep_no := '2';
vtable_nm := 'After cursor';
LOOP
vstep_no := '3';
vtable_nm := 'before fetch';
--loop
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
FOR i IN 1 .. v_rt_all_cols.COUNT
LOOP
dbms_output.put_line(upper(v_rt_all_cols(i).event_type));
if (upper(v_rt_all_cols(i).event_type) = upper('System_enforced')) then
vstep_no := '4.1';
vtable_nm := 'before seq sel';
select PC_SEQ.nextval into pc_id from dual;
vstep_no := '4.2';
vtable_nm := 'before insert corres';
INSERT INTO target1_tab
(ID,
PARTY_ID,
PRODUCT_BRAND,
SORT_CODE,
ACCOUNT_NUMBER,
EXT_PRD_ID_TX,
EXT_PRD_HLD_ID_TX,
EXT_SYS_ID,
EXT_PTY_ID_TX,
ACCOUNT_TYPE_CD,
COM_PFR_TYP_TX,
COM_PFR_OPT_TX,
COM_PFR_RSN_CD,
status)
VALUES
(pc_id,
v_rt_all_cols(i).party_id,
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
v_rt_all_cols(i).sort_code,
'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4),
v_rt_all_cols(i).EXT_PRD_ID_TX,
v_rt_all_cols(i).EXT_PRD_HLD_ID_TX,
v_rt_all_cols(i).EXT_SYS_ID,
v_rt_all_cols(i).EXT_PTY_ID_TX,
v_rt_all_cols(i).ACCOUNT_TYPE_CD,
v_rt_all_cols(i).COM_PFR_TYP_TX,
v_rt_all_cols(i).COM_PFR_OPT_TX,
v_rt_all_cols(i).COM_PFR_RSN_CD,
NULL);
vstep_no := '4.3';
vtable_nm := 'after insert corres';
else
select COM_SEQ.nextval into id_val from dual;
vstep_no := '6';
vtable_nm := 'before insertcomm';
if (upper(v_rt_all_cols(i).event_type) = upper('REMINDER')) then
vstep_no := '6.01';
vtable_nm := 'after if insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'IADAREMINDER',
systimestamp,
systimestamp,
0);
else
vstep_no := '6.02';
vtable_nm := 'after else insertcomm';
insert into parent_tab
(ID ,
CTEM_CODE,
CHA_CODE,
CT_CODE,
CONTACT_POINT_ID,
SOURCE,
RECEIVED_DATE,
SEND_DATE,
RETRY_COUNT)
values
(id_val,
lower(v_rt_all_cols(i).event_type),
decode(v_rt_all_cols(i).product_brand,'LTB',2,'HLX',1,'HAL',1,'BOS',3,'VER',4,0),
'Email',
v_rt_all_cols(i).email_id,
'CORRESPONDENCE',
systimestamp,
systimestamp,
0);
END if;
vstep_no := '6.11';
vtable_nm := 'before chop';
if (v_rt_all_cols(i).ACCOUNT_NUMBER is not null) then
v_rt_all_cols(i).ACCOUNT_NUMBER := 'XXXX'||substr(trim(v_rt_all_cols(i).ACCOUNT_NUMBER),length(trim(v_rt_all_cols(i).ACCOUNT_NUMBER))-3,4);
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.AccountNumberMasked',
v_rt_all_cols(i).ACCOUNT_NUMBER);
end if;
vstep_no := '6.1';
vtable_nm := 'before stateday';
if (v_rt_all_cols(i).crr_day is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Day',
'IB.Crsp.Date.Day',
v_rt_all_cols(i).crr_day);
end if;
vstep_no := '6.2';
vtable_nm := 'before statemth';
if (v_rt_all_cols(i).crr_month is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Month',
'IB.Crsp.Date.Month',
v_rt_all_cols(i).crr_month);
end if;
vstep_no := '6.3';
vtable_nm := 'before stateyear';
if (v_rt_all_cols(i).crr_year is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
--'IB.Correspondence.Date.Year',
'IB.Crsp.Date.Year',
v_rt_all_cols(i).crr_year);
end if;
vstep_no := '7';
vtable_nm := 'before type';
if (v_rt_all_cols(i).product_type is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Product.ProductName',
v_rt_all_cols(i).product_type);
end if;
vstep_no := '9';
vtable_nm := 'before title';
if (trim(v_rt_all_cols(i).title) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE )
values
(id_val,
'IB.Customer.Title',
trim(v_rt_all_cols(i).title));
end if;
vstep_no := '10';
vtable_nm := 'before surname';
if (v_rt_all_cols(i).surname is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.LastName',
v_rt_all_cols(i).surname);
end if;
vstep_no := '12';
vtable_nm := 'before postcd';
if (trim(v_rt_all_cols(i).POSTCODE) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Customer.Addr.PostCodeMasked',
substr(replace(v_rt_all_cols(i).POSTCODE,' ',''),length(replace(v_rt_all_cols(i).POSTCODE,' ',''))-2,3));
end if;
vstep_no := '13';
vtable_nm := 'before subject';
if (trim(v_rt_all_cols(i).SUBJECT) is not null) then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Subject',
v_rt_all_cols(i).subject);
end if;
vstep_no := '14';
vtable_nm := 'before inactivity';
if (trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) is null or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '3' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '6' or
trim(v_rt_all_cols(i).UNREAD_CORRES_PERIOD) = '9') then
insert into child_tab
(COM_ID,
KEY,
VALUE)
values
(id_val,
'IB.Correspondence.Inactivity',
v_rt_all_cols(i).UNREAD_CORRES_PERIOD);
end if;
vstep_no := '14.1';
vtable_nm := 'after notfound';
end if;
vstep_no := '15';
vtable_nm := 'after notfound';
END LOOP;
end loop;
vstep_no := '16';
vtable_nm := 'before closecur';
CLOSE stg_cursor;
vstep_no := '17';
vtable_nm := 'before commit';
DELETE FROM table_stg;
COMMIT;
vstep_no := '18';
vtable_nm := 'after commit';
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
success_flag := 1;
vsql_code := SQLCODE;
vsql_errm := SUBSTR(sqlerrm,1,200);
error_logging_pkg.inserterrorlog('samp',vsql_code,vsql_errm, vtable_nm,vstep_no);
RAISE_APPLICATION_ERROR (-20011, 'samp '||vstep_no||' SQLERRM:'||SQLERRM);
end;
Thanks

Its bit urgent
NO - it is NOT urgent. Not to us.
If you have an urgent problem you need to hire a consultant.
I have a performance issue in the below code,
Maybe you do and maybe you don't. How are we to really know? You haven't posted ANYTHING indicating that a performance issue exists. Please read the FAQ for how to post a tuning request and the info you need to provide. First and foremost you have to post SOMETHING that actually shows that a performance issue exists. Troubleshooting requires FACTS not just a subjective opinion.
where I am trying to insert the data from table_stg into the target_tab and parent_tab tables and then into child tables via a cursor with bulk collect. The target_tab and parent_tab are huge tables and have a row-wise trigger enabled on them; the trigger is mandatory. The time taken for this block to execute is 5000 seconds. Now my requirement is to reduce it to 5 to 10 mins.
Personally I think 5000 seconds (about 1 hr 20 minutes) is very fast for processing 800 trillion rows of data into parent and child tables. Why do you think that is slow?
Your code has several major flaws that need to be corrected before you can even determine what, if anything, needs to be tuned.
This code has the EXIT statement at the beginning of the loop instead of at the end
FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
vstep_no := '4';
vtable_nm := 'after fetch';
--EXIT WHEN v_rt_all_cols.COUNT = 0;
EXIT WHEN stg_cursor%NOTFOUND;
The correct place for the %NOTFOUND test when using BULK COLLECT is at the END of the loop; that is, the last statement in the loop.
You can use a COUNT test at the start of the loop but ironically you have commented it out and have now done it wrong. Either move the NOTFOUND test to the end of the loop or remove it and uncomment the COUNT test.
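A minimal sketch of the two correct exit patterns just described, reusing the cursor and variable names from the posted code:

```sql
-- Pattern 1: COUNT test immediately after the fetch
LOOP
  FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
  EXIT WHEN v_rt_all_cols.COUNT = 0;  -- nothing fetched: leave before processing
  -- ... process v_rt_all_cols ...
END LOOP;

-- Pattern 2: %NOTFOUND test as the LAST statement of the loop
LOOP
  FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;
  -- ... process v_rt_all_cols (the last batch may be partial) ...
  EXIT WHEN stg_cursor%NOTFOUND;
END LOOP;
```

With the %NOTFOUND test at the top, the final partial batch (fewer than limit_in rows) is fetched, sets %NOTFOUND, and is then silently discarded, which is exactly the bug being pointed out.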
WHEN OTHERS THEN
ROLLBACK;
That basically says you don't even care what problem occurs or whether the problem is for a single record of your 10,000 in the collection. You pretty much just throw away any stack trace and substitute your own message.
Your code also has NO exception handling for any of the individual steps or blocks of code.
The code you posted also begs the question of why you are using NAME=VALUE pairs for child data rows? Why aren't you using a standard relational table for this data?
As others have noted you are using slow-by-slow (row by row processing). Let's assume that PL/SQL, the bulk collect and row-by-row is actually necessary.
Then you should be constructing the parent and child records into collections and then inserting them in BULK using FORALL.
1. Create a collection for the new parent rows
2. Create a collection for the new child rows
3. For each set of LIMIT source row data
a. empty the parent and child collections
b. populate those collections with new parent/child data
c. bulk insert the parent collection into the parent table
d. bulk insert the child collection into the child table
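The steps above might be sketched like this. The collection types, the row-building logic, and the assignment shown are placeholders (the real code must map each field from v_rt_all_cols), and sequence NEXTVAL in a PL/SQL expression needs 11g or later; on older versions use SELECT ... INTO:

```sql
DECLARE
  TYPE parent_tab_t IS TABLE OF parent_tab%ROWTYPE;
  TYPE child_tab_t  IS TABLE OF child_tab%ROWTYPE;
  l_parents  parent_tab_t := parent_tab_t();
  l_children child_tab_t  := child_tab_t();
BEGIN
  LOOP
    FETCH stg_cursor BULK COLLECT INTO v_rt_all_cols LIMIT limit_in;

    l_parents.DELETE;                       -- (a) empty both collections
    l_children.DELETE;

    FOR i IN 1 .. v_rt_all_cols.COUNT LOOP  -- (b) populate them in memory
      l_parents.EXTEND;
      l_parents(l_parents.LAST).id := com_seq.NEXTVAL;  -- hypothetical mapping
      -- ... fill the remaining parent fields, and append child rows ...
    END LOOP;

    FORALL i IN 1 .. l_parents.COUNT        -- (c) bulk insert the parents
      INSERT INTO parent_tab VALUES l_parents(i);
    FORALL i IN 1 .. l_children.COUNT       -- (d) bulk insert the children
      INSERT INTO child_tab VALUES l_children(i);

    EXIT WHEN stg_cursor%NOTFOUND;
  END LOOP;
END;
```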
And unless you really want to either load EVERYTHING or abandon everything you should use bulk exception handling so that the clean data gets processed and only the dirty data gets rejected. -
Query to retrieve the records which have more than one assignment_id
Hello,
I am trying to write a query to retrieve all the records from the table per_all_assignments_f that have more than one distinct assignment_id per person_id. Below is the query I have written, but it retrieves records even when a person_id only has duplicate assignment_ids; I need the records that have more than one distinct assignment_id for each person_id.
select person_id, assignment_id
From per_all_assignments_f
having count(assignment_id) >1
group by person_id, assignment_id
Thank You.
PK

Maybe something like this?
select *
From per_all_assignments_f f1
where exists (select 1
from per_all_assignments_f f2
where f2.person_id = f1.person_id
and f2.assignment_id != f1.assignment_id
);
Edited by: SomeoneElse on May 7, 2010 2:23 PM
(you can add a DISTINCT to the outer query if you need to) -
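For the original question, an aggregate form may also work; this is a sketch against the two columns given, and it returns only the person_ids, which can then be joined back to fetch the full rows:

```sql
SELECT person_id
  FROM per_all_assignments_f
 GROUP BY person_id
HAVING COUNT(DISTINCT assignment_id) > 1;
```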
How to upload a file which has more than 999 line item through BDC ?
Hello Techards
Hi to all
Can anybody tell me how to upload a file which has more than 999 line items through BDC for transaction F-02?
Thanks in advance.
Shovan

Hello Shovan,
You split it up and post two accounting documents with the help of a "suspense" a/c.
Say, you have to post the following line items below:
line 1 - dr. - GL a/c X - $1000
line 2 - cr. - GL a/c Y - $1
line 3 - cr. - GL a/c Y - $1
...
line 1001 - cr. - GL a/c Y - $1
You cannot post the above as a single doc in SAP (because of technical reasons), so you need to break it up into 2 documents as below:
Doc1
line 1 - dr - GL a/c X - $1000
line 2 - cr - GL a/c Y - $1
line 3 - cr - GL a/c Y - $1
...
line 998 - cr - GL a/c Y - $1
line 999 - cr - SUSPENSE a/c - $3
Doc2
line 1 - dr - SUSPENSE a/c - $3
line 2 - cr - GL a/c Y - $3
Note that there is no incorrect impact on accounting as first we credit suspense a/c by $3 and next we debit the same suspense a/c by $3 as a result the effect is nil. Similarly, we credit $997 to GL a/c Y (which is less by $3) in the first doc which is compensated by the second doc by crediting the shortfall of $3.
Hope this helps,
Cheers,
Sougata. -
Breaking a single query using for loop to improve performance
Hi,
This is a continuation of my previous post, which I marked Answered; a few things were left open from my side and I was advised to post a new thread...
[Will it be a good ply|http://forums.oracle.com/forums/thread.jspa?threadID=880922&tstart=0]
Will it be applicable to any query? I took this example to simplify things; my actual case is to pull data from some tables using joins and insert it into other tables.
I used a few procedures for different conditions; one of them uses a plain INSERT INTO ... SELECT and the others use the BULK COLLECT technique.
I followed the simple strategy (for DML) by Tom Kyte and Steven Feuerstein:
use a single statement whenever it is possible;
if not, use PL/SQL with BULK COLLECT to avoid a FOR loop.
Please correct me if I am wrong.
Thanks and Regards,
Hesh.

A simple test can prove the case:
SQL> set serveroutput on
SQL>
SQL> create table t
2 as
3 select * from all_objects where 1=2
4 /
Table created.
SQL> declare
2 ltime integer;
3 begin
4 ltime := dbms_utility.get_time;
5 insert into t select * from all_objects;
6 ltime := dbms_utility.get_time - ltime;
7
8 dbms_output.put_line(ltime);
9 end;
10 /
187
PL/SQL procedure successfully completed.
SQL> rollback
2 /
Rollback complete.
SQL> declare
2 ltime integer;
3 type tbl is table of t%rowtype index by pls_integer;
4 l_tbl tbl;
5 cursor c
6 is
7 select * from all_objects;
8 begin
9 ltime := dbms_utility.get_time;
10 open c;
11 loop
12 fetch c bulk collect into l_tbl limit 500;
13 exit when c%notfound;
14
15 forall i in 1..l_tbl.count
16 insert into t values l_tbl(i);
17 end loop;
18
19 ltime := dbms_utility.get_time - ltime;
20
21 dbms_output.put_line(ltime);
22 end;
23 /
390
PL/SQL procedure successfully completed.
SQL> rollback
2 /
Rollback complete. -
Collection which stores more than 2 values
Hi there ,
How do I create a collection which stores more than 2 values of different types, like int, String, Date etc., and retrieve each element separately? Will anyone help me?
Thank u
Ramya

Go to this page:
http://java.sun.com/docs/books/tutorial/
and read the tutorial on Collections. -
Which is more efficient way to get result set from database server
Hi,
I am working on a project where I need to query the database to fetch a result set and then iterate through it. What I want is to create one single piece of Java code that can call many different SQL statements and build a list out of the result set. There are two approaches available to me:
1.) To create a txt file where I can store my queries. My java program can read this file and get the appropriate query to be used.
2.) To create a stored procedure containing the queries and call the stored procedure from my Java program. Also, note that some of the queries need to be created dynamically depending upon the parameters supplied.
Out of these two approaches, which is optimal and why?
Also, following things to be noted.
1. At times I want to create the WHERE clause of the query dynamically, depending upon the parameters passed.
2. I want one single java file that will handle all database calls.
3. Parameters to the stored procedure can be passed using an array descriptor.
4. The connection is made using JNDI.
Please advise on the optimal of these two approaches. You may also suggest other approaches, if any.
Thanks,
Rajan
Edited by: RP on Jun 26, 2012 11:11 PM

RP wrote:
In case of queries stored in text files, I will need to replace some predefined placeholders with actual parameters and then pass the modified query to the DB engine. Even so, I liked the second approach as it is more easily maintainable.

There are a couple of issues. Shared SQL is one. Irrespective of the method used, the SQL cursor that is created needs to have bind variables. This ensures re-usability of the cursor, reduces the risk of Shared Pool fragmentation, lowers hard parsing and reduces CPU utilisation.
Another issue is flexibility. If the SQL cursors are created by stored procedures, this code resides on the server and abstracts the Java client from the complexities of SQL and SQL performance. The code can easily be updated and fine tuned to deliver faster/better SQL cursors, or modified to take new Oracle features, changes in data model, and so on, into consideration. This stored proc can be updated without having to touch or recompile a single byte of Java client code.
There's also the security issue. What is more secure? SQL encapsulated in stored procs in a secure database and server environment? Or SQL "encapsulated" in text files on the client?
The less code you have running on the client, the less code you have running in the wild that can be compromised without having to first compromise the server.
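A minimal sketch of the stored-procedure approach being described; the procedure, table, and column names here are hypothetical, and the bind variable is what keeps one shared cursor in the library cache:

```sql
CREATE OR REPLACE PROCEDURE get_orders (
  p_status IN  VARCHAR2,
  p_result OUT SYS_REFCURSOR)
IS
BEGIN
  -- p_status is a bind variable, so all callers share one cursor
  OPEN p_result FOR
    SELECT order_id, order_date, total
      FROM orders
     WHERE status = p_status;
END;
```

The Java client receives the ref cursor as a result set and never sees the SQL text, so the query can be tuned or rewritten server-side without touching client code.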
I was only worried about any performance issue that might happen using this approach.

Performance is not a factor of who creates the SQL cursor.
Whether Java client creates a SQL cursor, or a PL/SQL stored proc creates a SQL cursor, or a .Net client creates a SQL cursor - that SQL cursor does not know what the client is. It does not care what the client is. The SQL cursor performs as well as it is capable of.. given the execution plan, data volumes, server resources and speed/performance of the server.
The client language and SQL cursor interface used by the client (there are several in PL/SQL), determines the performance of the client's interaction with the cursor (e.g. round trips to the database when interfacing with the cursor). The client language (and its client interface to the cursor) does not dictate the actual performance of that SQL cursor on the database (does not make joins faster, or I/O faster)
One more question: will my Java program close the cursor that I opened in the procedure?
That you need to ask your Java code. Java code leaking ref cursors is unfortunately all too common. You need to make sure that your Java client's interface to SQL cursors closes the cursor handle when done. -
Which is better??? for loop or bulk collect
declare
  cursor test is
    select * from employees;
  type emp_tab is table of employees%rowtype index by pls_integer;
  myvar_a emp_tab;
  myvar_b emp_tab;
  i pls_integer := 1;
begin
  -- CASE A: row-by-row fetch
  open test;
  loop
    fetch test into myvar_a(i);
    exit when test%notfound;
    i := i + 1;
  end loop;
  close test;

  -- CASE B: single bulk fetch
  open test;
  fetch test bulk collect into myvar_b;
  close test;
end;
Which case is better?? A or B?
Edited by: Kakashi on May 31, 2009 12:54 AM
Depends on the meaning of better.
Generally case B should be faster although a bit more elaborate code is required.
But there may be exceptions. I think I read somewhere (I'm home now and I cannot find it at the moment) that in 10g (or 11g, I'm not sure) 100 rows at a time are pre-fetched behind the scenes even when you use case A. So using case B with a low limit could well be slower.
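For completeness, a bulk fetch with an explicit LIMIT looks like this (assuming the same EMPLOYEES table); with a low limit such as 10 it could indeed lose to the implicit 100-row array fetch of case A:

```sql
declare
  cursor c is select * from employees;
  type emp_tab is table of employees%rowtype index by pls_integer;
  l_rows emp_tab;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 100;  -- 100 rows per fetch
    for i in 1 .. l_rows.count loop
      null;  -- process l_rows(i) here
    end loop;
    -- check %notfound only AFTER processing the (possibly partial) batch
    exit when c%notfound;
  end loop;
  close c;
end;
/
```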
If I can express an additional opinion, case F(irst) is nearly always the best, i.e. plain SQL (no loops at all). I'm aware that sometimes it cannot be used, but it should be the first approach to be tried.
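As a sketch of "case F" (column names made up for illustration): whatever the loop would do to each fetched row can often be pushed into a single set-based statement, letting the SQL engine do all the work in one pass:

```sql
-- Instead of fetching each employee and changing it row by row in
-- PL/SQL, one set-based statement updates the whole set at once:
update employees
   set salary = salary * 1.1   -- hypothetical per-row change
 where dept_id = 10;           -- hypothetical filter
```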
Regards
Etbin
FOUND: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:213366500346264333
CONTAINS
Hey Tom, love the site. I noticed in your first fetch, which was in the first for loop that did an unconditional exit:
2 for x in ( select rownum r, t1.* from big_table.big_table t1 )
3 loop
4 exit;
5 end loop;
In looking at the TKPROF output for that query, it shows the number of rows being fetched as 100. Does that prove / demonstrate the bulk collecting optimization that Oracle added in 10g, where it implicitly and automatically does a bulk collect of limit 100 behind the scenes?
This came up at a discussion at my site very recently, and I think I can just point them to your example here as a demo rather than creating my own. I assume that if you ran the same thing in 9iR2, then that first fetch of rows in TKPROF would only show 1?
Followup April 18, 2007 - 1pm US/Eastern:
yes, that demonstrates the implicit array fetch of 100 rows...
in 9i, it would show 1 row fetched.
Edited by: Etbin on 31.5.2009 10:38 -
Update query which taking more time
Hi
I am running an update query which is taking a long time. Any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Edited by: 960991 on Apr 16, 2013 7:11 AM
960991 wrote:
Hi
I am running an update query which is taking a long time. Any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
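One typical rewrite, sketched here only since the real table structures and data are unknown, is to aggregate once and drive the update from that result set with MERGE, instead of re-running the correlated subquery for every target row:

```sql
-- Sketch: compute the aggregate once in the USING clause, then join it
-- to the target table. Assumes (vndr#, cust_type) identifies the rows.
merge into arm538e_tmp t
using (
        select m.vndr#, m.cust_type_cd,
               sum(nvl(m.net_sales_value, 0)) / 1000 as qtr_val
          from mnthly_sales_actvty m
         where m.cust_type_cd <> 13
           and m.yymm between 201301 and 201303
         group by m.vndr#, m.cust_type_cd
      ) s
   on (t.vndr# = s.vndr# and t.cust_type = s.cust_type_cd)
 when matched then
   update set t.qtr5 = s.qtr_val;
```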
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses - 4 "(" and 3 ")"
2. Unnecessary "(" ")" in the subquery "(sum" are confusing
3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant. You might improve performance by computing the value beforehand and using a variable instead of the subquery
4. Subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle)
5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns -
Which is more useful in include directive and taglibraries
I am writing some static HTML tags inside a file.
These files I need to call from JSP pages.
One option is to call them using the include directive.
The other is writing tag libraries.
Performance and optimization wise, which is more appropriate?
There is an include directive and a jsp:include tag. I would suggest that
performance differences between these are nominal, with the directive being
slightly faster due to its "inlining nature".
Don't write your own tags to do this unless you have to. That would be
silly.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com
+1.617.623.5782
WebLogic Consulting Available
-
Hi,
I want to know which database works fastest or connects with JDBC most efficiently? I have worked with MS Access to store data, but I am not happy with the performance. Can anybody give me info about other database programs like MySQL, SQL Server, Oracle or Sybase? I mean, which of them can be faster with JDBC?
It DEPENDS on what type of application you are developing. For a read-only type of website/application, MySQL is BLAZING fast for reads. You also can't beat the price: FREE!!!
If you are looking to do a heavy transaction type of application and don't want to manage the transactions yourself and let the db do it for you, then oracle is my preference (just b/c I have grown accustomed to it despite how damn expensive it is). If price is an issue you might want to look at DB2 or even Sybase. But if it is a heavy duty application then Oracle IMHO is the way to go!
Matt Veitas