FORALL and autocommit behaviour
Hi everyone,
I have to remove a large number of duplicate entries from a huge table based on three different criteria.
The affected database is running 24/7 with a lot of concurrent access including many DML commands. There is no possibility to lock the DB while the cleanup script is running.
So far I've used a PL/SQL script which is doing mainly the following steps:
1. Get a list of IDs (both primary key ID and internal ID) via BULK COLLECT into a collection (may contain up to 1,500,000 records to remove)
2. Pass the retrieved data to FORALL to actually remove these values
3. Finally commit
Tests on our test environment show that the runtime might easily exceed 6-8 hours per script (the affected table is really large). So my colleagues asked me to perform a commit every 2000 records.
I'm not sure if FORALL utilises the SQL*Plus feature "SET AUTOCOMMIT ON 2000" ?
If this is not the case (which I assume), is there any other way to force FORALL to commit every N records? Or do I have to change the code to use the BULK COLLECT LIMIT clause and feed FORALL with each batch?
Hope that someone could give some hints on that topic.
Thanks for help!
Best regards,
Sascha
user12865818 wrote:
Tests on our test environment show that the runtime might easily exceed 6-8 hours per script. So my colleagues asked me to perform a commit every 2000 records.
Your colleagues are idiots.
With that volume size, your code will be fetching across commits (a violation of ANSI SQL standards) - and it will run into ORA-01555: snapshot too old.
Firstly identify WHAT the performance problem is. That is pretty easy as you have a "really large table" that needs every single row read to be checked for duplicates.
So the performance issue is I/O. I/O is the slowest operation on a database. As your code needs to check every single row, every single row needs to be read. Indexes cannot provide a faster and better path (using an index in such a case will only cause more I/O and performance degradation).
How do you address an I/O performance problem? By doing more commits!? That is laughable. And does not in ANY way reduce the amount of I/O that needs to be done.
I/O is addressed at application level by two simple methods:
1) do less I/O
2) do I/O in parallel
You need to make sure that the method you use for identifying the duplicates is optimal: the least amount of I/O to achieve the desired result.
Secondly, as the entire table needs to be processed, you can do that in parallel. Oracle supports Parallel Query (PQ) for both DML and DDL statements. You can use DBMS_PARALLEL_EXECUTE. You can manually determine the physical row address range of that table, and break that up into multiple subranges - and execute your code in parallel, for each of the subranges.
Increasing the commit frequency simply means MORE work needs to be done... and the real performance problem, the amount of I/O, is not made one teeny bit faster.
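To make the DBMS_PARALLEL_EXECUTE suggestion concrete, here is a minimal sketch of chunking the table by rowid ranges and deleting in parallel. The task name, table name, chunk size, parallel level and the duplicate-identification subquery are all placeholder assumptions, not code from this thread:

```sql
DECLARE
  l_task VARCHAR2(30) := 'purge_dups';  -- hypothetical task name
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(l_task);

  -- Split the table into rowid ranges of roughly 10000 rows each
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => l_task,
    table_owner => USER,
    table_name  => 'BIG_TABLE',   -- hypothetical table name
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Each chunk runs (and commits) independently, in up to 8 parallel jobs.
  -- DUP_ROWS stands in for whatever query identifies the duplicates.
  DBMS_PARALLEL_EXECUTE.run_task(
    task_name      => l_task,
    sql_stmt       => 'DELETE FROM big_table t
                        WHERE t.rowid BETWEEN :start_id AND :end_id
                          AND t.id IN (SELECT id FROM dup_rows)',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 8);

  DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;
/
```

Each chunk is its own transaction, so a failure leaves only that chunk to retry; the `:start_id`/`:end_id` binds are supplied by the package for every rowid range.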
Similar Messages
-
Error in checking FORALL and FOR performance in PL/SQL!
I am comparing the FORALL and FOR constructs in terms of the time taken by each (performance).
This is my problem:
SQL> SET SERVEROUTPUT ON;
SQL> CREATE TABLE T1(C1 VARCHAR2(100));
Table created.
SQL> declare
2 t1 number;
3 t2 number;
4 t3 number;
5 begin
6 for i in 1..10
7 loop
8 t1:=dbms_utility.get_time;
9 insert into t1 values('RAVIKANTH');
10 end loop;
11 t2:=dbms_utility.get_time;
12 forall i in 1..10
13 insert into t1 values('RAVIKANTH');
14 t3:=dbms_utility.get_time;
15 dbms_output.put_line('Time taken for FOR'||TO_CHAR((t2-t1)/100));
16 dbms_output.put_line('Time taken for FORALL'||TO_CHAR((t3-t1)/100));
17 end;
18 /
declare
ERROR at line 1:
ORA-06550: line 13, column 1:
PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL
... dear friends, please help me out in resolving this.
Thanks in advance,
Ravikanth K.
FORALL works with sets of data, like index-by tables. Here's an example:
declare
t1 number;
t2 number;
t3 number;
type r_tt is table of t%rowtype
index by binary_integer;
tt r_tt;
begin
for i in 1..10
loop
t1:=dbms_utility.get_time; -- <----- this one should be outside the loop
insert into t values('RAVIKANTH');
end loop;
t2:=dbms_utility.get_time;
-- Load the index-by Table
for i in 1..10
loop
tt (i).c1 := 'RAVIKANTH';
end loop;
-- Use this index-by Table to perform the Bulk Operation
forall i in 1..10
insert into t values tt (i);
t3:=dbms_utility.get_time;
dbms_output.put_line('Time taken for FOR'||TO_CHAR((t2-t1)/100));
dbms_output.put_line('Time taken for FORALL'||TO_CHAR((t3-t1)/100));
end;
You probably won't see much difference in elapsed time: you are testing a single insert against a bulk operation.
By the way: you have a local variable with the same name as your table. -
Hi,
I am just confused... what is the difference between FORALL and FOR? Why does this work?
declare
var1 varchar2(500);
type varray_type is table of varchar2(50);
v_varray varray_type;
begin
v_varray := varray_type('smith', 'king', 'jones');
for i in 1 .. 3 loop
var1 := var1 || ',' || v_varray(i);
end loop;
dbms_output.put_line(var1);
end;
OUTPUT: ,smith,king,jones
And why does this not work?
declare
var1 varchar2(500);
type varray_type is table of varchar2(50);
v_varray varray_type;
begin
v_varray := varray_type('smith', 'king', 'jones');
forall i in varray.first .. varray.last
var1 := var1 || ',' || v_varray(i);
end loop;
dbms_output.put_line(var1);
end;
I am still confused... can't we assign values to a variable inside FORALL?
Please advise. -
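For what it's worth, the second block above fails because FORALL is not a loop: its body must be exactly one INSERT, UPDATE, DELETE or MERGE that binds the collection, so a procedural assignment like `var1 := var1 || ...` is only legal in a FOR loop. A minimal sketch of a legal FORALL (the table `emp_names` is hypothetical):

```sql
DECLARE
  TYPE varray_type IS TABLE OF VARCHAR2(50);
  v_varray varray_type := varray_type('smith', 'king', 'jones');
BEGIN
  -- One DML statement, repeated for every index of the collection
  FORALL i IN v_varray.FIRST .. v_varray.LAST
    INSERT INTO emp_names (name) VALUES (v_varray(i));
END;
/
```

String concatenation, DBMS_OUTPUT calls and similar logic belong in an ordinary FOR loop, as in the first block.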
USING IF IN FORALL AND BULK COLLECT
Hi All,
I wrote a program, and I have a doubt: can I use an IF condition with FORALL INSERT or BULK COLLECT? I can't use a FOR loop... Is there any way to do validations in a FORALL INSERT and BULK COLLECT like we do in a FOR loop?
create or replace
PROCEDURE name AS
CURSOR CUR_name IS
SELECT OLD_name,NEW_name FROM DIRECTORY_LISTING_AUDIT;
TYPE V_OLD_name IS TABLE OF DIRECTORY_LISTING_AUDIT.OLD_name%TYPE;
Z_V_OLD_name V_OLD_name ;
TYPE V_NEW_name IS TABLE OF DIRECTORY_LISTING_AUDIT.NEW_name%TYPE;
Z_V_NEW_name V_NEW_name ;
BEGIN
OPEN CUR_name ;
LOOP
FETCH CUR_name BULK COLLECT INTO Z_V_OLD_name,Z_V_NEW_name;
IF Z_V_NEW_name <> NULL THEN
Z_V_OLD_name := Z_V_NEW_name ;
Z_V_NEW_name := NULL;
END IF;
FORALL I IN Z_V_NEW_name.COUNT
INSERT INTO TEMP_DIREC_AUDIT (OLD_name,NEW_name) VALUES (Z_V_OLD_name(I),Z_V_NEW_name(I));
EXIT WHEN CUR_name%NOTFOUND;
END LOOP;
CLOSE CUR_name;
END name;
FORALL i IN v_tab.FIRST .. v_tab.LAST
INSERT ALL
WHEN v_tab (i) = 1
THEN
INTO sen_temp
(col_num)
VALUES (v_tab (i) + 5)
SELECT dummy
FROM DUAL;
This is the one you are looking for, I guess... -
A question on FORALL and Collections
It's hard to find a definitive answer to this question in the online docs. I'm doing a FORALL without a BULK COLLECT.
Givin the following;
In Package Header:
TYPE scalarArray1 is varray(500) of varchar2(50);
TYPE scalarArray2 is varray(500) of varchar2(50);
In package Body
...populate the 2 arrays
FORALL i in 1..N
EXECUTE IMMEDIATE
'UPDATE table_name SET column1 = :1
WHERE column2 = :2'
USING scalarArray1(i), scalarArray1(i);
This will give me the following error:
PLS-00430: FORALL iteration variable is not allowed in this context
PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL
I'm beginning to think that you can't have a FORALL statement without a BULK COLLECT statement.
Am I correct?
You defined VARRAY types; now you need to declare variables of those types and use the variables.
SQL> create table t (
2 column1 varchar2(30),
3 column2 varchar2(30) );
Table created.
SQL>
SQL> insert into t
2 select 'a' column1, 'y'||level column2
3 from dual
4 connect by level <= 12;
12 rows created.
SQL>
SQL> commit;
Commit complete.
SQL> select * from t;
COLUMN1 COLUMN2
a y1
a y2
a y3
a y4
a y5
a y6
a y7
a y8
a y9
a y10
a y11
a y12
12 rows selected.
SQL> DECLARE
2 TYPE scalararray1 IS VARRAY (500) OF VARCHAR2 (50);
3 TYPE scalararray2 IS VARRAY (500) OF VARCHAR2 (50);
4 --
5 sa1 scalararray1;
6 sa2 scalararray2;
7 BEGIN
8 sa1 := scalararray1 ();
9 sa2 := scalararray2 ();
10 --
11 FOR i IN 1 .. 10
12 LOOP
13 sa1.EXTEND ();
14 sa1 (i) := 'x' || i;
15 sa2.EXTEND ();
16 sa2 (i) := 'y' || i;
17 END LOOP;
18 --
19 FORALL i IN 1 .. sa1.COUNT ()
20 EXECUTE IMMEDIATE 'UPDATE t SET column1 = :1 WHERE column2 = :2'
21 USING sa1 (i), sa2 (i);
22 END;
23 /
PL/SQL procedure successfully completed.
SQL> select * from t;
COLUMN1 COLUMN2
x1 y1
x2 y2
x3 y3
x4 y4
x5 y5
x6 y6
x7 y7
x8 y8
x9 y9
x10 y10
a y11
a y12
12 rows selected.
SQL> commit;
Commit complete.
SQL> -
How to use BULK COLLECT, FORALL and TREAT
There is a need to read, match and update data from and into a custom table. The table would have about 3 million rows and holds key numbers. Based on a field value of this custom table, relevant data needs to be fetched from joins of other tables and updated in the custom table. I plan to use BULK COLLECT and FORALL.
All examples I have seen do an insert into a table. How do I go about reading all values of a given field, fetching other relevant data, and then updating the custom table with the data fetched?
Defined an object with specifics like this
CREATE OR REPLACE TYPE imei_ot AS OBJECT (
recid NUMBER,
imei VARCHAR2(30),
STORE VARCHAR2(100),
status VARCHAR2(1),
TIMESTAMP DATE,
order_number VARCHAR2(30),
order_type VARCHAR2(30),
sku VARCHAR2(30),
order_date DATE,
attribute1 VARCHAR2(240),
market VARCHAR2(240),
processed_flag VARCHAR2(1),
last_update_date DATE
);
Now within a package procedure I have defined like this:
type imei_ott is table of imei_ot;
imei_ntt imei_ott;
begin
SELECT imei_ot (recid,
imei,
STORE,
status,
TIMESTAMP,
order_number,
order_type,
sku,
order_date,
attribute1,
market,
processed_flag,
last_update_date)
BULK COLLECT INTO imei_ntt
FROM (SELECT stg.recid, stg.imei, cip.store_location, 'S',
co.rtl_txn_timestamp, co.rtl_order_number, 'CUST',
msi.segment1 || '.' || msi.segment3,
TRUNC (co.txn_timestamp), col.part_number, 'ZZ',
stg.processed_flag, SYSDATE
FROM custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
WHERE co.header_id = col.header_id
AND msi.inventory_item_id = col.inventory_item_id
AND msi.organization_id =
(SELECT organization_id
FROM hr_all_organization_units_tl
WHERE NAME = 'Item Master'
AND source_lang = USERENV ('LANG'))
AND stg.imei = col.serial_number
AND stg.processed_flag = 'U');
/* Update staging table in one go for COR order data */
FORALL indx IN 1 .. imei_ntt.COUNT
UPDATE custom_stg
SET STORE = TREAT (imei_ntt (indx) AS imei_ot).STORE,
status = TREAT (imei_ntt (indx) AS imei_ot).status,
TIMESTAMP = TREAT (imei_ntt (indx) AS imei_ot).TIMESTAMP,
order_number = TREAT (imei_ntt (indx) AS imei_ot).order_number,
order_type = TREAT (imei_ntt (indx) AS imei_ot).order_type,
sku = TREAT (imei_ntt (indx) AS imei_ot).sku,
order_date = TREAT (imei_ntt (indx) AS imei_ot).order_date,
attribute1 = TREAT (imei_ntt (indx) AS imei_ot).attribute1,
market = TREAT (imei_ntt (indx) AS imei_ot).market,
processed_flag =
TREAT (imei_ntt (indx) AS imei_ot).processed_flag,
last_update_date =
TREAT (imei_ntt (indx) AS imei_ot).last_update_date
WHERE recid = TREAT (imei_ntt (indx) AS imei_ot).recid
AND imei = TREAT (imei_ntt (indx) AS imei_ot).imei;
DBMS_OUTPUT.put_line (TO_CHAR (SQL%ROWCOUNT)
|| ' rows updated using Bulk Collect / For All.');
EXCEPTION
WHEN NO_DATA_FOUND
THEN
DBMS_OUTPUT.put_line ('No Data: ' || SQLERRM);
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ('Other Error: ' || SQLERRM);
END;
Now for the unfortunate part. When I compile the pkg, I face an error
PL/SQL: ORA-00904: "LAST_UPDATE_DATE": invalid identifier
I am not sure where I am wrong. Object type has the last update date field and the custom table also has the same field.
Could someone please throw some light and suggestion?
Thanks
uds
I suspect your error comes from the »bulk collect into« and not from the »forall loop«.
At first glance, you need to alias SYSDATE as last_update_date, and some of the other select items need to be aliased as well:
But a simplified version would be
select imei_ot (stg.recid,
stg.imei,
cip.store_location,
'S',
co.rtl_txn_timestamp,
co.rtl_order_number,
'CUST',
msi.segment1 || '.' || msi.segment3,
trunc (co.txn_timestamp),
col.part_number,
'ZZ',
stg.processed_flag,
sysdate
bulk collect into imei_ntt
from custom_orders co,
custom_order_lines col,
custom_stg stg,
mtl_system_items_b msi
where co.header_id = col.header_id
and msi.inventory_item_id = col.inventory_item_id
and msi.organization_id =
(select organization_id
from hr_all_organization_units_tl
where name = 'Item Master' and source_lang = userenv ('LANG'))
and stg.imei = col.serial_number
and stg.processed_flag = 'U';
... -
Hello Everyone,
I have gone through the document also.
I am using a dblink to get data into a table with FORALL.
Sometimes the database connection over the dblink is lost, and I want to stop the process and exit the loop. I am handling the error; it raised three errors:
ORA-02068
ORA-02063
ORA-02048
I read the description of the errors, and I am not sure which one is the correct one to handle. I want to handle this as a special error in the exception section; how can I handle this exception?
It looks like FORALL stops execution once the DB connection is lost. Is that correct? Please correct me if I am wrong.
Please help me.
Thank you in advance
You will want to map these exceptions using PRAGMA EXCEPTION_INIT and capture them as user-defined exceptions. You will find an example of doing this here:
http://www.psoug.org/reference/exception_handling.html -
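A minimal sketch of the PRAGMA EXCEPTION_INIT approach suggested above, assuming you want to trap ORA-02068 (ORA-02063 and ORA-02048 can be mapped to their own named exceptions the same way):

```sql
DECLARE
  -- Map the dblink failure (ORA-02068) to a named exception
  e_remote_failure EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_remote_failure, -2068);
BEGIN
  -- ... the BULK COLLECT / FORALL processing over the dblink goes here ...
  NULL;
EXCEPTION
  WHEN e_remote_failure THEN
    -- the connection to the remote database was lost: stop cleanly
    DBMS_OUTPUT.put_line('Remote link failed: ' || SQLERRM);
END;
/
```

The handler then gives you a single, well-defined place to log the failure and exit instead of letting the unhandled error propagate.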
Using forall and bulkcollect together
Hey group,
I am trying to use bulk collect and forall together.
I have a BULK COLLECT on 3 columns, and the insert is on more than 3 columns. Can anybody tell me how to reference those collection objects from the BULK COLLECT statement?
You can see the procedure below; I highlighted the things I am trying.
Please let me know if I am clear enough.
PROCEDURE do_insert
IS
PROCEDURE process_insert_record
IS
CURSOR c_es_div_split
IS
SELECT div_id
FROM zrep_mpg_div
WHERE div_id IN ('PC', 'BP', 'BI', 'CI', 'CR');
PROCEDURE write_record
IS
CURSOR c_plan_fields
IS
SELECT x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
FROM cp_div_xref@dm x, comp_plan@dm cp
WHERE x.comp_plan_id = cp.comp_plan_id
AND x.div = v_div
AND x.sorg_cd = v_sorg_cd
AND x.comp_plan_yr = TO_NUMBER (TO_CHAR (v_to_dt, 'yyyy'));
TYPE test1 IS TABLE OF c_plan_fields%ROWTYPE
INDEX BY BINARY_INTEGER;
test2 test1;
BEGIN -- write_record
OPEN c_plan_fields;
FETCH c_plan_fields bulk collect INTO test2;
CLOSE c_plan_fields;
ForAll X In 1..test2.last
INSERT INTO cust_hier
(sorg_cd, cust_cd, bunt, --DP
div,
from_dt,
to_dt,
cust_ter_cd, cust_rgn_cd, cust_grp_cd,
cust_area_cd, sorg_desc, cust_nm, cust_ter_desc,
cust_rgn_desc, cust_grp_desc, cust_area_desc,
cust_mkt_cd, cust_mkt_desc, curr_flag,
last_mth_flag, comp_plan_id, comp_plan_cd,
comp_plan_nm, asgn_typ, lddt)
VALUES (v_sorg_cd, v_cust_cd, v_bunt, --DP
v_div,
TRUNC (v_from_dt),
TO_DATE (TO_CHAR (v_to_dt, 'mmddyyyy') || '235959',
'mmddyyyyhh24miss'),
v_ter, v_rgn, v_grp,
v_area, v_sorg_desc, v_cust_nm, v_cust_ter_desc,
v_rgn_desc, v_grp_desc, v_area_desc,
v_mkt, v_mkt_desc, v_curr_flag,
v_last_mth_flag, test2(x).comp_plan_id,test2(x).comp_plan_cd,
test2(x).comp_plan_nm, v_asgn_typ, v_begin_dt);
v_plan_id := 0;
v_plan_cd := 0;
v_plan_nm := NULL;
v_out_cnt := v_out_cnt + 1;
IF doing_both
THEN
COMMIT;
ELSE
-- commiting v_commit_rows rows at a time.
IF v_out_cnt >= v_commit_cnt
THEN
COMMIT;
p.l ( 'Commit point reached: '
|| v_out_cnt
|| 'at: '
|| TO_CHAR (SYSDATE, 'mm/dd hh24:mi:ss'));
v_commit_cnt := v_commit_cnt + v_commit_rows;
END IF;
END IF;
END write_record;
Ugly code.
Bulk processing does what in PL? One and one thing only. It reduces context switching between the PL and SQL engines. That is it. Nothing more. It is not magic that increases performance. And there is a penalty to pay for the reduction in context switching - memory. Very expensive PGA memory.
To reduce the context switches, bigger chunks of data are passed between the PL and SQL engines. You have coded a single fetch for all the rows from the cursor. All that data collected from the SQL engine has to be stored in the PL engine. This requires memory. The more rows, the more memory. And the memory used is dedicated non-shared server memory. The worse kind to use on a server where resources need to be shared in order for the server to scale.
Use the LIMIT clause. That controls how many rows are fetched. And thus you manage just how much memory is consumed.
And why the incremental commit? What do you think you are achieving with that? Except consuming more resources by doing more commits.. not to mention risking data integrity as this process can now fail and result in only partial changes. And only changing some of the rows when you need to change all the rows is corrupting the data in the database in my book.
Also, why use PL/SQL at all? Surely you can do an INSERT INTO <table1> SELECT ... FROM <table2>, <table3> WHERE ...
And with this, you can also use parallel processing (DML). You can use direct path inserts. You do not need a single byte of PL engine code or memory. You do not have to ship data between the PL and SQL engines. What you now have is fast performing code that can scale. -
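A sketch of that single-statement approach with parallel DML and a direct-path insert. The hints, degree of parallelism and column list are illustrative placeholders, not taken from the procedure above:

```sql
-- Parallel DML is disabled by default and must be enabled per session
ALTER SESSION ENABLE PARALLEL DML;

-- One set-based statement replaces the cursor loop entirely;
-- APPEND requests a direct-path insert above the high-water mark
INSERT /*+ APPEND PARALLEL(cust_hier, 4) */ INTO cust_hier
  (sorg_cd, comp_plan_id, comp_plan_cd, comp_plan_nm)
SELECT /*+ PARALLEL(x, 4) */
       x.sorg_cd, x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
  FROM cp_div_xref@dm x
  JOIN comp_plan@dm  cp ON cp.comp_plan_id = x.comp_plan_id;

COMMIT;  -- a single commit for the whole logical unit of work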
WARNING, WARNING - Delete and AutoCommit
I've just hit a bug which has cost me a couple of days work and a fair bit of hair loss!
Deleting data seems to autocommit. You cannot roll it back even though the autocommit button is off.
There should be flashing lights in the front of the app saying that this doesn't work.
Another thing: auto commit should not be available as an option, as it's just wrong/bad... you name it. Remove it please, pretty please.
Please fix.
I was sure that I had successfully rolled back deletes, and I have just confirmed that it works fine for me (I picked a log table I have been working with whose data I didn't care about).
select count(*) from table;
43366
delete from table where rownum < 10;
Message: 9 rows deleted
select count(*) from table;
43357
rollback (via toolbar icon)
Message: Rollback complete
select count(*) from table;
43366
As far as the Auto Commit option - I wouldn't use it myself, but I can't see a problem with it being available for people to use if they want. I would definitely want it to be off by default, but it is. -
I am using MySQL, Tomcat and TopLink. How do I set autocommit off?
My persistence.xml is this:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
<persistence-unit name="SeleccionsCatPU" transaction-type="RESOURCE_LOCAL">
<provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider>
<non-jta-data-source>java:comp/env/jdbc/selcat</non-jta-data-source>
<class>com.xlives.selcat.prueba.Prueba</class>
<properties>
<property name="toplink.session.customizer" value="com.xlives.system.jpa.DataSourceSessionCustomizer"/>
<property name="toplink.logging.level" value="FINEST"/>
</properties>
</persistence-unit>
</persistence>
and my customizer is this
JNDIConnector conn = (JNDIConnector)session.getLogin().getConnector();
conn.setLookupType(JNDIConnector.STRING_LOOKUP);
session.getLogin().setTransactionIsolation(DatabaseLogin.TRANSACTION_READ_COMMITTED);
my code is
EntityTransaction et = em.getTransaction();
Prueba p = new Prueba();
em.persist(p);
p = new Prueba();
em.persist(p);
Query q = em.createQuery("SELECT p FROM Prueba p");
List<Prueba> results = q.getResultList();
System.out.println(new Date());
for (Prueba a: results) {
// System.out.println(a.getFecha());
Date b = a.getFecha();
et.rollback();
After rolling back the transaction, the table PRUEBA has 2 more rows.
I am looking for an explanation of the following:
Version 9.2.08
CREATE TABLE demo (a_col NUMBER, b_date DATE);
INSERT INTO demo (a_col, b_date) VALUES (1, TRUNC(SYSDATE - 181));
INSERT INTO demo (a_col, b_date) VALUES (1, TRUNC(SYSDATE - 182));
INSERT INTO demo (a_col, b_date) VALUES (1, TRUNC(SYSDATE - 183));
COMMIT;
DECLARE
keepdays NUMBER (9) := 180;
deldate DATE;
cntr NUMBER (9) := 0;
lastday DATE;
CURSOR c1 (in_deldate DATE)
IS
SELECT ROWID
FROM demo
WHERE b_date = in_deldate;
TYPE c1_tbl_typ IS TABLE OF VARCHAR2 (100)
INDEX BY PLS_INTEGER;
rowid_tbl c1_tbl_typ;
BEGIN
--COMMIT; --2
FOR rec IN 1 .. 3
LOOP
cntr := cntr + 1;
deldate := TRUNC (SYSDATE) - (keepdays + cntr);
DBMS_OUTPUT.put_line (deldate);
OPEN c1 (deldate);
LOOP
FETCH c1
BULK COLLECT INTO rowid_tbl LIMIT 1000;
DBMS_OUTPUT.put_line (rowid_tbl.COUNT);
FORALL i IN 1 .. rowid_tbl.COUNT
DELETE FROM demo
WHERE ROWID = rowid_tbl (i);
DBMS_OUTPUT.put_line (SQL%ROWCOUNT);
COMMIT; --1
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END LOOP;
END;
22-AUG-08
1
1
21-AUG-08
1
1
20-AUG-08
1
1
Now run the script again with no data in the table:
I do not see sql%rowcount for the first date
22-AUG-08
0
21-AUG-08
0
0
20-AUG-08
0
0
If I add commit (2) in the beginning
22-AUG-08
0
0
21-AUG-08
0
0
20-AUG-08
0
0
Now remove all commits from script
22-AUG-08
0
21-AUG-08
0
20-AUG-08
0
Thanks.
The problem that you are running into is that your collection is empty and you are trying to do a FORALL bulk bind operation with an empty collection.
It executes without error, but the cursor attribute %ROWCOUNT is NULL and you don't check for NULLs.
Try this:
DECLARE
keepdays NUMBER (9) := 180;
deldate DATE;
cntr NUMBER (9) := 0;
lastday DATE;
CURSOR c1 (in_deldate DATE)
IS
SELECT ROWID
FROM demo
WHERE b_date = in_deldate;
TYPE c1_tbl_typ IS TABLE OF VARCHAR2 (100)
INDEX BY PLS_INTEGER;
rowid_tbl c1_tbl_typ;
BEGIN
-- COMMIT; --2
FOR rec IN 1 .. 3
LOOP
cntr := cntr + 1;
deldate := TRUNC (SYSDATE) - (keepdays + cntr);
DBMS_OUTPUT.put_line (deldate);
OPEN c1 (deldate);
LOOP
FETCH c1 BULK COLLECT INTO rowid_tbl LIMIT 1000;
DBMS_OUTPUT.put_line ('Collected '||rowid_tbl.COUNT||' rows');
FORALL i IN 1 .. rowid_tbl.COUNT
DELETE FROM demo
WHERE ROWID = rowid_tbl (i);
DBMS_OUTPUT.put_line ('Deleted '||NVL(SQL%ROWCOUNT,0)||' rows');
COMMIT; --1
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END LOOP;
END;
FRED@ora111>/
22-Aug-2008 00:00:00
Collected 0 rows
Deleted 0 rows
21-Aug-2008 00:00:00
Collected 0 rows
Deleted 0 rows
20-Aug-2008 00:00:00
Collected 0 rows
Deleted 0 rows
PL/SQL procedure successfully completed. -
Hello, I have a problem with the FORALL command. I need to update database 'A' with data from database 'B'. I wrote this code:
DECLARE
TYPE Typ1 IS TABLE OF VARCHAR(255) index by binary_integer;
replace_what Typ1;
TYPE Typ2 IS TABLE OF ROWID index by binary_integer;
replace_where Typ2;
BEGIN
SELECT A.rowid, B.duns_no BULK COLLECT INTO replace_where, replace_what
FROM A, B
WHERE
A.SupplierID = B.SupplierID AND
A.ERPID = B.erpid AND
A.CompanyCode = B.companycode;
FORALL i IN 1..replace_what.COUNT
update A
set supplierduns = replace_what(i)
WHERE rowid = replace_where(i);
END;
The problem is: when I had about 1,200,000 rows everything was all right, but when I had 2,500,000 rows in table A my code did not work as I wanted; my memory filled up and the system crashed. I do not understand what happened.
In your update statement you are executing the subquery once for every row. That's very likely what is making it slow. Try it like below and you can get rid of all your code:
SQL> create table a (companycode,erpid,supplierid,supplierduns)
2 as
3 select 1, 1, 1, 123 from dual union all
4 select 1, 1, 2, 456 from dual union all
5 select 1, 1, 3, 789 from dual
6 /
Table created.
SQL> create table b (companycode,erpid,supplierid,duns_no)
2 as
3 select 1, 1, 1, 321 from dual union all
4 select 1, 1, 2, 654 from dual union all
5 select 1, 1, 4, 987 from dual
6 /
Table created.
SQL> UPDATE A
2 SET supplierduns = (
3 SELECT B.duns_no FROM B WHERE
4 B.SupplierID = A.SupplierID AND
5 B.ERPID = A.erpid AND
6 B.CompanyCode = A.companycode)
7 /
3 rows updated.
SQL> select * from a
2 /
COMPANYCODE ERPID SUPPLIERID SUPPLIERDUNS
1 1 1 321
1 1 2 654
1 1 3
3 rows selected.
SQL> rollback
2 /
Rollback complete.
SQL> update ( select a.supplierduns a_dunsno
2 , b.duns_no b_dunsno
3 from a
4 , b
5 where a.companycode = b.companycode (+)
6 and a.erpid = b.erpid (+)
7 and a.supplierid = b.supplierid (+)
8 )
9 set a_dunsno = b_dunsno
10 /
set a_dunsno = b_dunsno
ERROR at line 9:
ORA-01779: cannot modify a column which maps to a non key-preserved table
SQL> alter table b add primary key (companycode,erpid,supplierid)
2 /
Table altered.
SQL> update ( select a.supplierduns a_dunsno
2 , b.duns_no b_dunsno
3 from a
4 , b
5 where a.companycode = b.companycode (+)
6 and a.erpid = b.erpid (+)
7 and a.supplierid = b.supplierid (+)
8 )
9 set a_dunsno = b_dunsno
10 /
3 rows updated.
SQL> select * from a
2 /
COMPANYCODE ERPID SUPPLIERID SUPPLIERDUNS
1 1 1 321
1 1 2 654
1 1 3
3 rows selected.
Regards,
Rob. -
Hi All
I am having an issue in my FORALL loop and I am thinking that I might not understand this properly.
Basically I am just updating a date in a table, but the dates have timestamps. I TRUNC the date, but the column always ends up as null.
code is simple
cursor list is select date from table1;
open list
fetch list bulk collect into l_date
forall i in 1.. 100
update table2 set date = trunc(l_date(i)) where id = id(i);
end loop
close list
Notice the TRUNC inside my FORALL loop; it seems to null this value.
Is this behaviour correct?
Thanks in advance
clueless wrote:
The code was just to show what I was doing.
I understand using the word date is bad practice. What I am after is why the value in the FORALL loop ends up being null.
Outside the FORALL loop the trunc(date) would just round to the nearest day.
cursor list is select id, effdate from table1;
open list
fetch list bulk collect into l_effdate
I believe this is not the real code.
What is l_effdate's type?
forall i in 1.. 100
update table2 set effdate = trunc(l_effdate(i)) where id = id(i);
end loop
close list
Have you made sure the value of l_effdate(i) is not null? Try to show it with dbms_output.put_line.
Change your FORALL to a FOR loop if you want to run it under a debugger for testing.
If your fetch is correct, it will give you the exact same result. -
Semi-additive measure and strange behaviour
Hi,
For the first time, I'm facing strange behaviour with a semi-additive measure.
In Excel, when I cross it with a product dimension (declared as Regular type, not a time type),
the behaviour is as if we were crossing it with a time dimension:
no SUM on the column, but the last non-empty value that appears.
Product Type - Product Code    Cummulative Power
A                              16   <-- Problem: we don't see 31
  AA1                          15
  AA1                          16
Total SUM                      16   !! not 31
Does anyone have an idea about this problem ?
Thank you very much
Christophe
Hi Christophe,
your question is not clear to me. From what I understand, you're facing a problem with the measure 'Cummulative Power', which uses the aggregation function 'Last NonEmpty'.
I don't know whether this will help. In our case, the 'Last NonEmpty' function would calculate 'Cummulative Power' against 'Product' dimension attributes, while the measure value is the value of the highest (last) member of the time/date dimension. (Remember, semi-additive measures always require a time dimension.)
For example, consider the following table:
Product   Date        Cummulative Power
A         20140101    10
B         20140101    15
A         20140102    13
B         20140102    17
A         20140103    16
B         20140103    11
Here, if the 'Last NonEmpty' function is used with Cummulative Power, then for each Product we would get the following results in the cube:
Product   Last NonEmpty(Cummulative Power)
A         16
B         11
i.e. the above values correspond to the last member of the time dimension (20140103).
Saurabh Kamath -
Where to put the commit in the FORALL BULK COLLECT LOOP
Hi,
I have the following loop code using FORALL and BULK COLLECT, but didn't know where to put the
'commit':
open f_viewed;
LOOP
fetch f_viewed bulk collect into f_viewed_rec LIMIT 2000;
forall i in 1..f_viewed_rec.count
insert into jwoodman.jw_job_history_112300
values f_viewed_rec(i);
--commit; [Can I put this 'commit' here? - Jenny]
EXIT when f_viewed%NOTFOUND;
END LOOP;
commit;
Thanks,
- Jenny
mc**** wrote:
Bulk collect is normally used with large data sets. If you have a smaller data set, such as 1000-2000 records, then you cannot get such a performance improvement using bulk collect. (Please see the Oracle documents for this.)
When you update records, Oracle acquires an exclusive lock for them. So if you use commit inside the loop, it will process the number of records defined by the LIMIT parameter at once and then commit those changes.
That will release all locks acquired by Oracle and also the memory used to keep those uncommitted transactions.
If you use commit outside the loop, just assume that you insert 100,000 records: all those records will be stored in Oracle memory, and it will affect all other users' performance as well.
Furthermore, if you update 100,000 records, it will hold an exclusive lock on all 100,000 records in addition to the usage of Oracle memory.
I am using this for telco application which we process over 30 million complex records (one row has 234 columns).
When we work with large data sets we do not depend on the basic Oracle rollback function, because when you keep records without committing it uses Oracle memory and badly slows down all other processes.
Hi mc****,
What a load of dangerous and inaccurate rubbish to be telling a new Oracle developer. Commit processing should be driven by the logical unit of a transaction. This should hold true whether that transaction involves a few rows or millions. If, and only if, the transaction is so large that it affects the size constraints of the database resources, in particular, rollback or redo space, then you can consider breaking that transaction up to smaller transactions.
Why is frequent committing undesirable I hear you ask?
First of all it is hugely wasteful of rollback or redo space. This is because while the database is capable of locking at a row level, redo is written at a block level, which means that if you update, delete or insert a million rows and commit after each individual statement, then that is a million blocks that need to go into redo. As many of these rows will be in the same block, if you instead do these as one transaction, then the same block in redo can be transacted upon, making the operation more efficient. True, locks will be held for longer, but if this is new data being done in batches then users will rarely be inconvenienced. If locking is a problem then I would suggest that you should be looking at how you are doing your processing.
Secondly, committing brings into play one of the major serialization points in the database, log sync. When a transaction is committed, the log buffer needs to be written to disc. This occurs serially for multiple commits. Each commit has to wait until the commit before has completed. This becomes even more of a bottleneck if you are using Data Guard in SYNC mode, as the commit cycle does not complete until the remote log is notified as written.
This then brings us two rules of thumb that will always lead a developer in the right direction.
1. Commit as infrequently as possible, usually at the logical unit of a transaction
2. When building transactions, first of all seek to do it using straight SQL (CTAS, insert select, update where etc). If this can't be easily achieved, then use PL/SQL bulk operations.
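Applying rule 2 to the LIMIT-based pattern from the question, the commit belongs after the loop. A sketch (table names are placeholders; note the EXIT test on the collection count right after the fetch, which never skips a partial last batch):

```sql
DECLARE
  CURSOR f_viewed IS
    SELECT * FROM jw_job_history;  -- hypothetical source table
  TYPE t_rows IS TABLE OF f_viewed%ROWTYPE INDEX BY PLS_INTEGER;
  f_viewed_rec t_rows;
BEGIN
  OPEN f_viewed;
  LOOP
    FETCH f_viewed BULK COLLECT INTO f_viewed_rec LIMIT 2000;
    EXIT WHEN f_viewed_rec.COUNT = 0;  -- no %NOTFOUND pitfall
    FORALL i IN 1 .. f_viewed_rec.COUNT
      INSERT INTO jw_job_history_archive  -- hypothetical target table
      VALUES f_viewed_rec(i);
  END LOOP;
  CLOSE f_viewed;
  COMMIT;  -- one commit, at the end of the logical transaction
END;
/
```

The LIMIT clause caps PGA memory per batch while the single commit keeps the whole load atomic: either all batches go in, or a rollback leaves the target untouched.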
Regards
Andre