About procedure to commit after 1000 rows
I have a procedure
with a cursor c1 that selects from the emp table and
inserts the rows into the copy_emp table,
but I want to commit after every 1,000 records,
with exception handling.
Please help me with an example.
Thanks
Everything you have described is either bad practice in all currently supported versions or bad practice in every version since 7.0.12.
Cursor loops have been obsolete since version 9.0.1.
Incremental committing, for example every 1,000 rows, has always been a bad practice. It slows down processing and manufactures ORA-01555 (snapshot too old) exceptions.
For demos of how to properly process data sets look up BULK COLLECT and FORALL at http://tahiti.oracle.com and review the demos in Morgan's Library at www.psoug.org.
For a long list of good explanations of why you should not do incremental commits read Tom Kyte's many comments at http://asktom.oracle.com.
Incremental commits may make sense in other products. Oracle is not other products.
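To make that concrete, here is a minimal sketch of the BULK COLLECT / FORALL pattern the reply points to. The emp and copy_emp names come from the question; the 1,000-row LIMIT and the SAVE EXCEPTIONS handler are one hedged way to combine batch processing with exception handling while keeping a single commit at the end - an untested outline, not a definitive implementation:

```sql
DECLARE
  CURSOR c1 IS SELECT * FROM emp;
  TYPE emp_tab IS TABLE OF emp%ROWTYPE;
  l_rows    emp_tab;
  bulk_errs EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errs, -24381);  -- ORA-24381: errors during FORALL
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    BEGIN
      FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
        INSERT INTO copy_emp VALUES l_rows(i);
    EXCEPTION
      WHEN bulk_errs THEN
        -- report failed rows instead of aborting the whole batch
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.PUT_LINE('Row ' ||
            SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ': ' ||
            SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
  END LOOP;
  CLOSE c1;
  COMMIT;  -- one commit at the end of the business transaction
END;
/
```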
Similar Messages
-
Hi
I am inserting around 2.5 million records in a conversion project.
Please let me know how I can commit after every 10,000 rows. Can you tell me whether I can use bulk insert or bulk bind? I have never used them. Please help resolve my problem.
Thanks
Madhu, as sundar said, per the link from Tom you are better off not committing in the loop; otherwise it will give you a snapshot-too-old error.
Still, if you want to:
1. Set a counter to 0: ct number := 0;
Increment the counter in the loop: ct := ct + 1;
IF ct = 10000 THEN
COMMIT;
ct := 0;
END IF;
2. You can also use BULK COLLECT and FORALL, and commit.
But still follow the thread as per Tom.
typo
Message was edited by:
devmiral -
Hi
I am trying to insert millions of rows into a target Oracle table from a source Oracle table using ODI.
Is it possible to specify a commit after every n rows, say every 100,000 rows?
Kindly let me know any suggestions.
No, I am using an ODI interface to populate the table.
LKM : SQL to ORACLE
IKM : Oracle incremental update.
The interface has failed with an "unable to extend the tablespace" error. I want to commit the records after every - say 100,000 - rows. -
Commit after n rows while update
I have one update query:
UPDATE security
SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
WHERE security_id IN (
SELECT DISTINCT security_id
FROM security_xref
WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP' ));
I want to perform a commit after every 500 rows (due to a business requirement). How can we achieve this? Please help.
J99 wrote:
As mentioned by karthic_arp - not a good idea, but if you still want it you can do it this way:
DECLARE
CURSOR C
IS
SELECT DISTINCT security_id
FROM security_xref
WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP' );
-- counter to count records
counter number(4) default 0;
BEGIN
FOR rec in C
Loop
UPDATE security SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
WHERE security_id = rec.security_id;
counter := counter + SQL%ROWCOUNT;
If counter >= 500 then
counter := 0;
commit;
end if;
END Loop;
end ;
/
NOT tested.
Apart from committing every N rows being a completely technically stupid idea, committing inside a cursor loop just takes it one step further to stupiddom. -
Query running slow after 1000 rows in oracle
Hi.
I have one query which is fetching millions of rows. The query runs very fast up to 1,000-1,500 records; after that it runs very slow. Can you please help? What could be the reason?
Thanks
831269 wrote:
I have one query which is fetching millions of rows..
Why are you fetching that many rows? What is your client code going to do with a million rows? And why do you expect this to be fast? A million rows' worth of I/O has to be done by Oracle (that will likely be mostly from disk and not buffer cache). That million rows has to be copied from Oracle's SGA to client memory. If your client is PL/SQL code, that will be copied into the PGA. If your client is external, then that copy has to happen across platform boundaries and the network too.
Then your code churns away on processing a million rows... doing what exactly? That "what" will need to be done once per row, a million times. If it takes 10ms per row, that means almost 3h of client processing time.
Fetching that many rows..? Often a design and coding mistake. Always an exception to the rule. Will never be "fast".
And scalability and performance need to be addressed by re-examining the requirements, optimising the design that necessitates fetching that many rows, and using techniques such as parallel processing and thread safe code design. -
Procedure to delete mutiple table records with 1000 rows at a time
Hello Champs,
I have a requirement to create procedures to achieve the deletes from 5 big tables: 1 master table and 4 child tables. Each table has 28 million records. Now I have to schedule a procedure to run on a daily basis to delete approximately 500k records from each table. But the condition is: separate procedures for each table, and delete only 1000 rows at a time. The plan is below. Since I am new to writing complicated procedures, I don't have much idea how to code them; can someone help me design the procedure queries? Many thanks
1. Fetch 1000 rows from master table with where clause condition.
2. Match them with each child tables and delete the records.
3. At last, delete those 1000 rows from the master table.
4. Overall commit for all 5 x 1000 records.
5. Repeat the steps 1 to 4 till no rows to fetch from master table.
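Steps 1-5 above could be sketched in a single block like the following. All table and column names are placeholders copied from the plan, the WHERE condition is assumed, and this is an untested outline; note that committing inside a loop over a cursor on the master table risks ORA-01555, as other replies in this thread point out:

```sql
DECLARE
  TYPE key_tab IS TABLE OF mastertable.column1%TYPE;
  l_keys key_tab;
  CURSOR c IS SELECT column1 FROM mastertable WHERE /* some condition */ 1 = 1;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_keys LIMIT 1000;   -- step 1
    EXIT WHEN l_keys.COUNT = 0;                    -- step 5
    FORALL i IN 1 .. l_keys.COUNT                  -- step 2
      DELETE FROM child_table1 WHERE column1 = l_keys(i);
    FORALL i IN 1 .. l_keys.COUNT
      DELETE FROM child_table2 WHERE column1 = l_keys(i);
    FORALL i IN 1 .. l_keys.COUNT
      DELETE FROM child_table3 WHERE column1 = l_keys(i);
    FORALL i IN 1 .. l_keys.COUNT
      DELETE FROM child_table4 WHERE column1 = l_keys(i);
    FORALL i IN 1 .. l_keys.COUNT                  -- step 3
      DELETE FROM mastertable WHERE column1 = l_keys(i);
    COMMIT;                                        -- step 4
  END LOOP;
  CLOSE c;
END;
/
```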
Below is the detailed procedure plan:
----- Main procedure to fetch 1000 rows at a time, providing them as input to 5 different procedures to delete records, repeating the same steps till there are no rows to fetch, i.e. (i=0) -----
Create procedure fetch_record_from_master_table:
loop
i = Select column1 from mastertable where <somecondition> and rowcount <= 1000;
call procedure 1 and i rows as input;
call procedure 2 and i rows as input;
call procedure 3 and i rows as input;
call procedure 4 and i rows as input;
call procedure 5 and i rows as input;
commit;
exit when i=0
end loop;
----- Separate procedures to delete records from each child table ------
Create procedure 1 with input:
delete from child_table1 where column1=input rows of master table;
Create procedure 2 with input:
delete from child_table2 where column1=input rows of master table;
Create procedure 3 with input:
delete from child_table3 where column1=input rows of master table;
Create procedure 4 with input:
delete from child_table4 where column1=input rows of master table;
--- procedure to delete records from master table atlast----
Create procedure 5 with input:
delete from Master_table where column1=input rows of master table;
Oops, but this will work, won't it?
begin
execute immediate 'truncate table child_table1';
execute immediate ' truncate table child_table2';
execute immediate ' truncate table child_table3';
execute immediate ' truncate table child_table4';
for r_fk in ( select table_name, constraint_name
from user_constraints
where table_name = 'MASTER_TABLE'
and constraint_type = 'R' )
loop
execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
end loop;
execute immediate ' truncate table master_table';
for r_fk in ( select table_name, constraint_name
from user_constraints
where table_name = 'MASTER_TABLE'
and constraint_type = 'R' )
loop
execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
end loop;
end;
/
Or
truncate table child_table1;
truncate table child_table2;
truncate table child_table3;
truncate table child_table4;
begin
for r_fk in ( select table_name, constraint_name
from user_constraints
where table_name = 'MASTER_TABLE'
and constraint_type = 'R' )
loop
execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
end loop;
end;
/
truncate table master_table;
begin
for r_fk in ( select table_name, constraint_name
from user_constraints
where table_name = 'MASTER_TABLE'
and constraint_type = 'R' )
loop
execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
end loop;
end;
/ -
I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace value!
create or replace procedure delete_rows(v_days number)
is
l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
where_cond VARCHAR2(32767);
begin
where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' )) ';
l_sql_stmt := l_sql_stmt ||where_cond;
IF v_days IS NOT NULL THEN
EXECUTE IMMEDIATE l_sql_stmt;
END IF;
end;
I think I can use cursors and for every 10,000 %ROWCOUNT, I can commit, but even before posting the thread, I feel I will get bounces! ;-)
Please help me out in this!
Cheers
Sarma!
Hello
In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
Table created.
SQL>
SQL> select count(*) from dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
COUNT(*)
35726
SQL>
SQL> DECLARE

    ln_DelSize NUMBER := 10000;
    ln_DelCount NUMBER;

BEGIN

    LOOP

        DELETE
        FROM
            dt_test_delete
        WHERE
            last_ddl_time < SYSDATE - 100
        AND
            rownum <= ln_DelSize;

        ln_DelCount := SQL%ROWCOUNT;

        dbms_output.put_line(ln_DelCount);

        EXIT WHEN ln_DelCount = 0;

        COMMIT;

    END LOOP;

END;
/
10000
10000
10000
5726
0
PL/SQL procedure successfully completed.
SQL>
HTH
David
Message was edited by:
david_tyler -
Hello,
I want to create a stored procedure which will output rows based on the values that are passed as one comma-separated parameter to the stored procedure.
Suppose ,
Parameter value : person 1,person2,person3
table structure :
Project Name | officers 1 | officers 2
here, officers 1 or officers 2 may contain names of multiple people.
expected OUTPUT : distinct list(rows) of projects where person 1 or person 2 or person 3 is either officer1 or officer 2.
please explain or provide solution in detail
- Thanks
Hi Visakh,
Thanks for reply.
But the solution you provided gives me the right output only if officers 1 or officers 2 contains a single value, not a comma-separated value.
Your solution is working fine for following scenario :
Project
Officers 1
Officers 2
p1
of11
off21
p2
of12
off22
with parameter : of11,off22 : it will give expected output
And its not working in case of :
Project
Officers 1
Officers 2
p1
of11,of12
off21,off23
p2
of12,of13
off22,off24
with parameter : of11,off22 : it will not give any row in output
I need patten matching not exact match :)
ok
If that's the case, use this modified logic:
CREATE PROC GetProjectDetails
@PersonList varchar(5000)
AS
SELECT p.*
FROM ProjectTable p
INNER JOIN dbo.ParseValues(@PersonList,',')f
ON ',' + p.[officers 1] + ',' LIKE '%,' + f.val + ',%'
OR ',' + p.[officers 2] + ',' LIKE '%,' + f.val + ',%'
GO
Keep in mind that what you've done is a wrong design approach.
You should not be storing multiple values as a comma-separated list in a single column. Learn about normalization; this is a violation of First Normal Form.
Please Mark This As Answer if it solved your issue
Please Mark This As Helpful if it helps to solve your issue
Visakh
My MSDN Page
My Personal Blog
My Facebook Page -
Commit after PL/SQL procedure successfully completed?
Hello. I have a question, it may be stupid but here goes:
When I run a script like this do I have to commit after it is completed?
SQL> @merge_candidates_INC933736.sql
PL/SQL procedure successfully completed.
XerXi wrote:
Hello. I have a question, it may be stupid but here goes:
When I run a script like this do I have to commit after it is completed?
SQL> @merge_candidates_INC933736.sql
PL/SQL procedure successfully completed.
How would anyone know? You didn't post what the script does.
If the script contains nothing but DDL to create objects then you do NOT need to add a COMMIT, since Oracle will implicitly commit DDL statements.
A COMMIT should be performed at the END of a transaction. Since we don't know what, if any DML is contained in your script we have no idea how many transactions might be represented.
So for scripts that contain DML you should add a COMMIT to the script after each transaction has been completed. -
I use Oracle 10G Rel2. I'm trying to improve the performance of my database insertions. Every two days I perform a process of inserting 10000 rows in a table. I call a PL/SQL procedure
for inserting every row, which checks data and performs the insert command on the table.
Should I commit after every call to the procedure ??
Is it better to perform a single commit at the end of calling the insertion procedure 10,000 times? So the question is: is "commit" a cheap operation?
Any ideas to improve performance with this operation?
> So the question is : is "commit" a cheap operation ??
Yes. A commit for a billion rows is as fast as a commit for a single row.
So there are no commit overheads for doing a commit on a large transaction versus a commit on a small transaction. So is this the right question to ask? The commit itself does not impact performance.
But HOW you use the commit in your code does. Which is why the points raised by Daniel are important: how the commit is used. In Oracle, the "best place" is at the end of the business transaction. When the business transaction is done and dusted, commit. That is, after all, the very purpose of the commit command - protecting the integrity of the data and the business transaction. -
Avoid Commit after every Insert that requires a SELECT
Hi everybody,
Here is the problem:
I have a table of generator alarms which is populated daily. On daily basis there are approximately 50,000 rows to be inserted in it.
Currently i have one month's data in it ... Approximately 900,000 rows.
here goes the main problem.
Before each insert command, the whole table is checked to see whether the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked; together, these two columns make a unique record when checked with an AND operation in the WHERE clause.
we have also implemented partition on this table. and it is basically partitioned on the basis of OccuranceDate and each partition has 5 days' data.
say
01-Jun to 06 Jun
07-Jun to 11 Jun
12-Jun to 16 Jun
and so on
26-Jun to 30 Jun
NOW:
we have a commit command within the insertion loop, and each row is committed once inserted, making approximately 50,000 commits daily.
Question:
Can we commit data after, say, each 500 inserted rows? But my real question is: can we query the records using SELECT which are just inserted but not yet committed?
a friend told me that, you can query the records which are inserted in the same connection session but not yet committed.
Can any one help ?
Sorry for the long question but it was to make u understand the real issue. :(
Khalid Mehmood Awan
khalidmehmoodawan @ gmail.com
Edited by: user5394434 on Jun 30, 2009 11:28 PM
Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there they will give suggestions on optimizing it.
Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
Also, not committing on time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, in between, or at the end.
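On the narrower question asked above: yes, a transaction always sees its own uncommitted changes, so a SELECT existence check finds rows inserted earlier in the same session without committing each one. A small illustration with an assumed table name (generator_alarms stands in for whatever the real alarms table is called):

```sql
-- Within one session, an uncommitted insert is visible to later statements.
INSERT INTO generator_alarms (site_name, occurance_date)
VALUES ('SITE-1', SYSDATE);

-- No COMMIT yet; this same-session query still finds the row.
SELECT COUNT(*)
FROM generator_alarms
WHERE site_name = 'SITE-1'
  AND occurance_date > SYSDATE - 1;

-- Other sessions will not see the row until COMMIT.
COMMIT;
```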
Regards,
K. -
Copying table in pl/sql 1000 rows at once.
Hi everybody,
I have an PL/SQL procedure that does the following.
execute immediate 'create table ' || p_descination_table || ' as select * from ' || p_source_table;
And this copies a table from one schema to another (the procedure is in a user that has create table anywhere privilege).
The problem is if the table is very big I want to copy 1000 rows at a time and do a commit between batches.
So I can do
execute immediate 'create table ' || p_descination_table || ' as select * from ' || p_source_table || ' where 1=0';
To create the structure but how do I copy the rows 1000 at a time.
Lastly does anyone have code that generates the create index commands so I can also create the indexes on the target table.
Ben
You could test this:
CREATE TABLE BIGTABLE
AS SELECT ROWNUM RN, A.* FROM
(
SELECT x.* FROM ALL_OBJECTS x
CROSS JOIN
(SELECT 1 FROM DUAL CONNECT BY LEVEL <= 100)
) A;
SELECT COUNT(*) FROM BIGTABLE;
COUNT(*)
7129000
DECLARE
NUM NUMBER;
T1 PLS_INTEGER;
T2 PLS_INTEGER;
SRC_TABLE CONSTANT VARCHAR2(30) := 'BIGTABLE';
DST_TABLE CONSTANT VARCHAR2(30) := 'BIGTABLE_2';
BEGIN
T1 := DBMS_UTILITY.GET_TIME;
EXECUTE IMMEDIATE 'CREATE TABLE ' || DST_TABLE ||
' AS SELECT * FROM ' || SRC_TABLE ||
' WHERE 1=0';
EXECUTE IMMEDIATE ''
|| 'DECLARE '
|| ' TYPE REC_TABLE IS TABLE OF ' || SRC_TABLE || '%ROWTYPE; '
|| ' records REC_TABLE; '
|| ' '
|| ' CURSOR cur IS SELECT * FROM ' || SRC_TABLE || '; '
|| ' num NUMBER; '
|| ' NUM_REC PLS_INTEGER := 0; '
|| ' num_batch PLS_INTEGER := 0; '
|| 'BEGIN '
|| ' OPEN cur; '
|| ' LOOP '
|| ' FETCH cur BULK COLLECT INTO records LIMIT 1000;'
|| ' EXIT WHEN records.COUNT = 0; '
|| ' FORALL I IN records.FIRST .. records.LAST '
|| ' INSERT INTO ' || DST_TABLE || ' VALUES RECORDS(I);'
|| ' COMMIT; '
|| ' num_rec := num_rec + records.count; '
|| ' num_batch := num_batch + 1; '
|| ' EXIT WHEN RECORDS.COUNT < 1000; '
|| ' END LOOP; '
|| ' CLOSE CUR;'
|| ' DBMS_OUTPUT.PUT_LINE(''Batches = '' || num_batch || '
|| ' '' Records = '' || num_rec ); '
|| 'END;';
T2 := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Total time = ' || to_char( (T2-T1)/100, '99999D99' ) );
END;
/
Batches = 7130 Records = 7129000
Total time = 385.97
SELECT COUNT(*) FROM BIGTABLE_2;
COUNT(*)
7129000
SELECT count(*) FROM (
SELECT rn FROM BIGTABLE
INTERSECT
SELECT rn FROM BIGTABLE_2
);
COUNT(*)
7129000
However, look at the simple CTAS command:
DECLARE
T1 PLS_INTEGER;
T2 PLS_INTEGER;
SRC_TABLE CONSTANT VARCHAR2(30) := 'BIGTABLE';
DST_TABLE CONSTANT VARCHAR2(30) := 'BIGTABLE_1';
BEGIN
T1 := DBMS_UTILITY.GET_TIME;
EXECUTE IMMEDIATE 'CREATE TABLE ' || DST_TABLE ||
' AS SELECT * FROM ' || SRC_TABLE;
T2 := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('CTAS Time = ' || to_char( (T2-T1)/100, '99999D99' ) );
end;
/
CTAS Time = 28.55
It's your choice:
- splitting data into batches and committing - 390 seconds,
- simple CTAS - 30 seconds. -
Problems with PObject::destroy(void*) and commit after this
Hi there
I have some problems with deleting garbage objects.
For example:
I create my object
MyObject * o = new (conn,"MY#TABLE") MyObject();
if (some_condition)
PObject::destroy(o);
delete o; //this work well
//but after this
conn->commit();
I have got the error
OCI-21710: argument is expecting a valid memory address of an object
help me anybody
can I delete/destroy object if one has been init over new (conn,table)
and avoid problems with the commit after this?
Thank you for the answer,
but what do you think about this code?
MyObject * o = new (conn,"MY#TABLE") MyObject();
if (some_condition)
o->markDelete();
PObject::destroy(o);
delete o;
//but after this
conn->commit();
This is work without error.
Message was edited by:
pavel -
Hi
I want to display a message after inserting rows in a table, like 'You have inserted a new row successfully'.
I am using the CreateInsert ADF button to insert the rows in the table; after that I am committing.
After committing I want to display the message for the user. What do I need to do for this?
Please help me.
Sailaja.
user10860137
Can you please explain each line in the code briefly?
public String saveButton_action(){
BindingContainer bindings = getBindings();
OperationBinding operationBinding = bindings.getOperationBinding("Commit");
Object result = operationBinding.execute();
// note "!" operator has been removed from the default code.
if(operationBinding.getErrors().isEmpty()){
FacesContext ctx = FacesContext.getCurrentInstance();
FacesMessage saveMsg = new FacesMessage("Record Saved Successfully");
ctx.addMessage(null,saveMsg);
}
return null;
}
And I have a requirement to show the message on the facet "status bar", not in a popup window (with the above code the message shows in a popup window). The layout I am using is PanelCollection.
can you tell me what i need to do.
Thanks
Sailaja.
Edited by: sj0609 on Mar 19, 2009 8:03 AM -
Commit after 2000 records in update statement but am not using loop
Hi
My oracle version is oracle 9i
I need to commit after every 2,000 records. Currently I am using the below statement without using a loop. How can I do this?
do i need to use rownum?
BEGIN
UPDATE
(SELECT A.SKU,M.TO_SKU,A.TO_STORE FROM
RT_TEMP_IN_CARTON A,
CD_SKU_CONV M
WHERE
A.SKU=M.FROM_SKU AND
A.SKU<>M.TO_SKU AND
M.APPROVED_FLAG='Y')
SET SKU = TO_SKU,
TO_STORE=(SELECT(
DECODE(TO_STORE,
5931,'931',
5935,'935',
5928,'928',
5936,'936'))
FROM
RT_TEMP_IN_CARTON WHERE TO_STORE IN ('5931','5935','5928','5936'));
COMMIT;
end;
Thanks for your help
I need to commit after every 2000 records
Why?
Committing every n rows is not recommended....
Currently am using the below statement without using the loop.how to do this?
Use a loop? (not recommended)
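For completeness: if the business requirement stands despite the warning, the rownum-batching pattern used for DELETE elsewhere in this thread can be adapted to the SKU part of the statement above. A rough, untested sketch - the TO_STORE DECODE piece is omitted for brevity, and the loop terminates only because an updated row no longer satisfies a.sku <> m.to_sku, so each pass shrinks the candidate set:

```sql
BEGIN
  LOOP
    UPDATE (SELECT a.sku, m.to_sku
              FROM rt_temp_in_carton a, cd_sku_conv m
             WHERE a.sku = m.from_sku
               AND a.sku <> m.to_sku
               AND m.approved_flag = 'Y')
       SET sku = to_sku
     WHERE rownum <= 2000;          -- at most 2000 rows per pass
    EXIT WHEN SQL%ROWCOUNT = 0;     -- updated rows drop out of the predicate
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```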