ODI commit after N rows
Hi,
I am trying to insert millions of rows into a target Oracle table from a source Oracle database using ODI.
Is it possible to specify a commit after every n rows, say after every 100,000 rows?
Kindly let me know of any suggestions.
No, I am using an ODI interface to populate the table.
LKM: SQL to Oracle
IKM: Oracle Incremental Update
The interface has failed with an "unable to extend the tablespace" error. I want to commit the records after every - say 100,000 - rows.
Similar Messages
-
Hi
I am inserting around 2.5 million records in a conversion project.
Let me know how I can commit after every 10,000 rows. Can you tell me whether I can use bulk insert or bulk bind? I have never used them, so please resolve my problem.
Thanks
Madhu, as sundar said, per the link from Tom you are better off not committing in the loop, otherwise it will give you a "snapshot too old" error.
Still, if you want to:
1. Set a counter to 0 (ct number := 0;) and increment it in the loop (ct := ct + 1;), then:
IF ct = 10000 THEN
COMMIT;
ct := 0;
END IF;
2. You can also use BULK COLLECT and FORALL, and commit.
But still, follow the thread as per Tom.
typo
Message was edited by:
devmiral -
Commit after n rows while update
I have one update query:
UPDATE security
SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
WHERE security_id IN (
SELECT DISTINCT security_id
FROM security_xref
WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP' ));
I want to perform a commit after every 500 rows (due to a business requirement). How can we achieve this? Please help.
J99 wrote:
As mentioned by karthic_arp -- not a good idea. Still, if you want it, you can do it this way:
DECLARE
CURSOR C
IS
SELECT DISTINCT security_id
FROM security_xref
WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP' );
-- Counter to count records
counter number(4) default 0;
BEGIN
FOR rec in C
Loop
UPDATE security SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
WHERE security_id = rec.security_id;
counter := counter + SQL%ROWCOUNT;
If counter >= 500 then
counter := 0;
commit;
end if;
END Loop;
end ;
/
NOT tested.
Apart from committing every N rows being a completely technically stupid idea, committing inside a cursor loop just takes it one step further into stupiddom. -
About procedure to commit after 1000 rows
I have a procedure
with a cursor c1 that selects from the emp table and
inserts into the copy_emp table,
but I want to commit after every 1,000 records,
with exception handling.
Please help me with an example.
Thanks
Everything you have described is either bad practice in all currently supported versions or bad practice in every version since 7.0.12.
Cursor loops have been obsolete since version 9.0.1.
Incremental committing, for example every 1,000 rows, has always been a bad practice. It slows down processing and manufactures ORA-01555 exceptions.
For demos of how to properly process data sets look up BULK COLLECT and FORALL at http://tahiti.oracle.com and review the demos in Morgan's Library at www.psoug.org.
For a long list of good explanations of why you should not do incremental commits read Tom Kyte's many comments at http://asktom.oracle.com.
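As a rough illustration of the BULK COLLECT / FORALL pattern those demos cover, here is a minimal sketch using the emp and copy_emp tables from the question (batch size is arbitrary):

```sql
DECLARE
  CURSOR c_src IS SELECT * FROM emp;
  TYPE t_rows IS TABLE OF emp%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- Fetch and insert in batches of 1,000 rows
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO copy_emp VALUES l_rows(i);
  END LOOP;
  CLOSE c_src;
  COMMIT;  -- one commit at the end, not per batch
END;
/
```

The batching here is only about PGA memory for the collection; the commit still happens once, at the end of the whole transaction.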
Incremental commits may make sense in other products. Oracle is not other products. -
I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace size!
create or replace procedure delete_rows(v_days number)
is
l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
where_cond VARCHAR2(32767);
begin
where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' )) ';
l_sql_stmt := l_sql_stmt ||where_cond;
IF v_days IS NOT NULL THEN
EXECUTE IMMEDIATE l_sql_stmt;
END IF;
end;
I think I can use cursors and commit at every 10,000 rows via %ROWCOUNT, but even before posting the thread, I feel I will get bounces! ;-)
Please help me out in this!
Cheers
Sarma!
Hello
In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
Table created.
SQL>
SQL> select count(*) from dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
COUNT(*)
35726
SQL>
SQL> DECLARE
  ln_DelSize  NUMBER := 10000;
  ln_DelCount NUMBER;
BEGIN
  LOOP
    DELETE FROM dt_test_delete
    WHERE last_ddl_time < SYSDATE - 100
    AND rownum <= ln_DelSize;

    ln_DelCount := SQL%ROWCOUNT;
    dbms_output.put_line(ln_DelCount);
    EXIT WHEN ln_DelCount = 0;
    COMMIT;
  END LOOP;
END;
/
10000
10000
10000
5726
0
PL/SQL procedure successfully completed.
SQL>
HTH
David
Message was edited by:
david_tyler -
Hi
I want to display a message after inserting rows in a table, like 'You have inserted a new row successfully'.
I am using the CreateInsert ADF button to insert the rows in the table; after that I am committing it.
After committing I want to display a message for the user. What do I need to do for this?
Please help me.
Sailaja.
user10860137
Can you please explain each line in the code briefly?
public String saveButton_action() {
    BindingContainer bindings = getBindings();
    OperationBinding operationBinding = bindings.getOperationBinding("Commit");
    Object result = operationBinding.execute();
    // note "!" operator has been removed from the default code.
    if (operationBinding.getErrors().isEmpty()) {
        FacesContext ctx = FacesContext.getCurrentInstance();
        FacesMessage saveMsg = new FacesMessage("Record Saved Successfully");
        ctx.addMessage(null, saveMsg);
    }
    return null;
}
And I have a requirement to show the message in the facet "status bar", not in a popup window (with the above code the message shows in a popup window). The layout I am using is PanelCollection.
Can you tell me what I need to do?
Thanks
Sailaja.
Edited by: sj0609 on Mar 19, 2009 8:03 AM -
I use Oracle 10g Release 2. I'm trying to improve the performance of my database insertions. Every two days I perform a process of inserting 10,000 rows in a table. I call a PL/SQL procedure for inserting every row, which checks the data and performs the insert command on the table.
Should I commit after every call to the procedure?
Or is it better to perform one commit at the end, after calling the insertion procedure 10,000 times? So the question is: is "commit" a cheap operation?
Any ideas to improve the performance of this operation?
> So the question is : is "commit" a cheap operation ??
Yes. A commit for a billion rows is as fast as a commit for a single row.
So there is no commit overhead for a commit on a large transaction versus a commit on a small transaction. So is this the right question to ask? The commit itself does not impact performance.
But HOW you use the commit in your code does. Which is why the points raised by Daniel are important: how the commit is used. In Oracle, the "best place" is at the end of the business transaction. When the business transaction is done and dusted, commit. That is, after all, the very purpose of the commit command: protecting the integrity of the data and the business transaction. -
Commit after 2000 records in update statement but am not using loop
Hi
My oracle version is oracle 9i
I need to commit after every 2,000 records. Currently I am using the below statement without a loop. How do I do this?
Do I need to use rownum?
BEGIN
UPDATE
(SELECT A.SKU,M.TO_SKU,A.TO_STORE FROM
RT_TEMP_IN_CARTON A,
CD_SKU_CONV M
WHERE
A.SKU=M.FROM_SKU AND
A.SKU<>M.TO_SKU AND
M.APPROVED_FLAG='Y')
SET SKU = TO_SKU,
TO_STORE=(SELECT(
DECODE(TO_STORE,
5931,'931',
5935,'935',
5928,'928',
5936,'936'))
FROM
RT_TEMP_IN_CARTON WHERE TO_STORE IN ('5931','5935','5928','5936'));
COMMIT;
end;
Thanks for your help.
> I need to commit after every 2000 records
Why? Committing every n rows is not recommended....
> Currently am using the below statement without using the loop. how to do this?
Use a loop? (not recommended) -
Avoid Commit after every Insert that requires a SELECT
Hi everybody,
Here is the problem:
I have a table of generator alarms which is populated daily. On daily basis there are approximately 50,000 rows to be inserted in it.
Currently i have one month's data in it ... Approximately 900,000 rows.
here goes the main problem.
Before each insert command, the whole table is checked to see if the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked; combined with an AND operation in the WHERE clause, these two columns identify a unique record.
we have also implemented partition on this table. and it is basically partitioned on the basis of OccuranceDate and each partition has 5 days' data.
say
01-Jun to 06 Jun
07-Jun to 11 Jun
12-Jun to 16 Jun
and so on
26-Jun to 30 Jun
NOW:
We have a commit command within the insertion loop, and each row is committed once inserted, making approximately 50,000 commits daily.
Question:
Can we commit data after, say, each 500 inserted rows? But my real question is: can we query, using SELECT, the records which are just inserted but not yet committed?
A friend told me that you can query records which were inserted in the same connection/session but not yet committed.
Can any one help ?
Sorry for the long question, but it was to make you understand the real issue. :(
Khalid Mehmood Awan
khalidmehmoodawan @ gmail.com
Edited by: user5394434 on Jun 30, 2009 11:28 PM
Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there they will give suggestions on optimizing it.
Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
Also, not committing in time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, somewhere in between, or at the end.
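On the side question: yes, a session always sees its own uncommitted changes. A sketch (the table and column names follow the description in the post and are assumptions):

```sql
INSERT INTO generator_alarms (sitename, occurancedate)
VALUES ('SITE01', SYSDATE);

-- This same session sees the row immediately, before any COMMIT;
-- other sessions will not see it until COMMIT is issued.
SELECT COUNT(*)
FROM   generator_alarms
WHERE  sitename = 'SITE01'
AND    occurancedate > SYSDATE - 1;

COMMIT;
```

So the duplicate check will work against rows inserted earlier in the same batch even if you only commit every 500 rows.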
Regards,
K. -
Do we need to commit after a select statement in any case (in any transaction mode)?
Why do we need to commit after selecting from a table in another database using a DB link?
If I execute a SQL query, does it really start a transaction in the database?
I could not find any entry in v$transaction after executing a select statement which implies no transactions are started.
Regards,
Sandeep
Welcome to the forum!
>
Do we need to commit after a select statement in any case (in any transaction mode)?
>
Yes, you need to issue COMMIT or ROLLBACK, but only if you issue a 'SELECT .... FOR UPDATE', because that locks the selected rows and they will remain locked until released. Other sessions trying to update one of your locked rows will hang until the lock is released, or will get
>
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
>
In DB2 a SELECT will create share locks on the rows and updates of those rows by other sessions could be blocked by the share locks. So there the custom is to COMMIT or ROLLBACK after a select.
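To illustrate the one Oracle case mentioned above that does need a commit (table and column names are hypothetical):

```sql
-- Locks the selected rows until the transaction ends
SELECT order_id, status
FROM   orders
WHERE  status = 'PENDING'
FOR UPDATE;

-- ... work on the locked rows ...

COMMIT;  -- releases the row locks so other sessions can update them
```

A plain SELECT without FOR UPDATE takes no such locks, so it needs no commit.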
>
Why do we need to commit after selecting from a table in another database using a DB link?
>
See Hooper's explanation of this at http://hoopercharles.wordpress.com/2010/01/27/neat-tricks/
And see the 'Remote PL/SQL section of this - http://psoug.org/reference/db_link.html
A quote from it
>
Why does it seem that a SELECT over a db_link requires a commit after execution ?
Because it does! When Oracle performs a distributed SQL statement Oracle reserves an entry in the rollback segment area for the two-phase commit processing. This entry is held until the SQL statement is committed even if the SQL statement is a query.
If the application code fails to issue a commit after the remote or distributed select statement then the rollback segment entry is not released. If the program stays connected to Oracle but goes inactive for a significant period of time (such as a daemon, wait for alert, wait for mailbox entry, etc...) then when Oracle needs to wrap around and reuse the extent, Oracle has to extend the rollback segment because the remote transaction is still holding its extent. This can result in the rollback segments extending to either their maximum extent limit or consuming all free space in the rbs tablespace even where there are no large transactions in the application. When the rollback segment tablespace is created using extendable files then the files can end up growing well beyond any reasonable size necessary to support the transaction load of the database. Developers are often unaware of the need to commit distributed queries and as a result often create distributed applications that cause, experience, or contribute to rollback segment related problems like ORA-01650 (unable to extend rollback). The requirement to commit distributed SQL exists even with automated undo management available with version 9 and newer. If the segment is busy with an uncommitted distributed transaction Oracle will either have to create a new undo segment to hold new transactions or extend an existing one. Eventually undo space could be exhausted, but prior to this it is likely that data would have to be discarded before the undo_retention period has expired.
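So in practice the pattern is simply (the link name remote_db is a placeholder):

```sql
SELECT COUNT(*) FROM employees@remote_db;

-- Releases the rollback/undo entry reserved for two-phase commit,
-- even though the statement was only a query.
COMMIT;
```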
Note that per the Distributed manual that a remote SQL statement is one that references all its objects at a remote database so that the statement is sent to this site to be processed and only the result is returned to the submitting instance, while a distributed transaction is one that references objects at multiple databases. For the purposes of this FAQ there is no difference, as both need to commit after issuing any form of distributed query. -
Is it necessary to commit after each SELECT in Oracle? Can it influence the performance of the database (SELECTs without commits)?
Thank you for your answer.
Lenka
Hello
I would imagine it is an artifact from using SQL Server or DB2 or something similar. For certain transaction isolation levels, SQL Server (for example) has to lock the rows being queried so that a consistent view of the data can be returned, so committing after a select ensures that these locks are removed, allowing others to read and write the data.
Oracle handles things differently, writers don't block readers and readers don't block writers. It is all part of the multi version read consistency model which is covered in the concepts guide. There are also some very interesting articles on asktom:
http://asktom.oracle.com/pls/ask/f?p=4950:8:10261219059254362776::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1886476148373
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c01_02intro.htm#46633
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2414
HTH
David -
Need to commit after every 10,000 records inserted?
What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script?
DECLARE
l_max_repa_id x_received_p.repa_id%TYPE;
l_max_rept_id x_received_p_trans.rept_id%TYPE;
BEGIN
SELECT MAX (repa_id)
INTO l_max_repa_id
FROM x_received_p
WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);
SELECT MAX (rept_id)
INTO l_max_rept_id
FROM x_received_p_trans
WHERE rept_repa_id = l_max_repa_id;
INSERT INTO x_p_requests_arch
SELECT *
FROM x_p_requests
WHERE pare_repa_id <= l_max_rept_id;
DELETE FROM x_p_requests
WHERE pare_repa_id <= l_max_rept_id;
END;
/
1006377 wrote:
we are moving between 5 and 10 million records from the one table to the other table and it takes forever.
Please could you provide me with a script just to commit after every x amount of records? :)
I concur with the other responses.
Committing every N records will slow down the process, not speed it up.
The fastest way to move your data (and 10 million rows is nothing, we do those sorts of volumes frequently ourselves) is to use a single SQL statement to do an INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement as appropriate).
If those SQL statements are running slowly then you need to look at what's causing the performance issue of the SELECT statement, and tackle that issue, which may be a case of simply getting the database statistics up to date, or applying a new index to a table etc. or re-writing the select statement to tackle the query in a different way.
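Using the table names from the original script, the single-statement approach would look something like this (a sketch; the APPEND hint is optional, and direct-path loads come with their own restrictions):

```sql
INSERT /*+ APPEND */ INTO x_p_requests_arch
SELECT *
FROM   x_p_requests
WHERE  pare_repa_id <= :max_rept_id;

DELETE FROM x_p_requests
WHERE  pare_repa_id <= :max_rept_id;

COMMIT;  -- one commit for the whole move
```

Here :max_rept_id stands for the boundary value the original script computes into l_max_rept_id.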
So, deal with the cause of the performance issue, don't try and fudge your way around it, which will only create further problems. -
Hi,
I have a large amount of data that I am extracting from my client's SQL Server db. I am using "LKM SQL to ORACLE" and "IKM SQL Control Append" to extract this data. My DBA tells me that I am committing a lot in the C$ table and that I should commit every 100,000 rows only. Is there a customization I can do to the LKM to make it commit after every 100,000 rows? Kindly help. This is an urgent requirement.
Regards,
John
Try the /*+ APPEND */ hint in the insert statement, that should help you:
insert /*+ APPEND */ into <%=odiRef.getTable("L", "COLL_NAME", "A")%> -
Insert blank row after every row in ALV report
How do I insert a blank row after every row in an ALV report?
What do you mean by a 'blank row'? ALV displays tabular data with 'any' number of columns. Now if you actually want a blank row (no columns at all, just a row), then that is just not possible. If I'm not mistaken, this question was posted before, so try to do a search on SCN. See what it says.
-
Problems with PObject::destroy(void*) and commit after this
Hi there
I have some problems with deleting garbage objects.
For example:
I create my object
MyObject * o = new (conn,"MY#TABLE") MyObject();
if (some_condition)
PObject::destroy(o);
delete o; //this work well
//but after this
conn->commit();
I have got the error
OCI-21710: argument is expecting a valid memory address of an object
help me anybody
Can I delete/destroy an object that has been initialized via new (conn, table)
without problems with the commit after this?
Thank you for your answer,
but what do you mean about this code?
MyObject * o = new (conn,"MY#TABLE") MyObject();
if (some_condition)
o->markDelete();
PObject::destroy(o);
delete o;
//but after this
conn->commit();
This works without error.
Message was edited by:
pavel