Commit on thousands of records
Hello,
I've encountered the following problem while trying to update records in an Oracle 8i database:
I have a Java program that updates thousands of records from a flat file into the Oracle database. The "commit" is issued once at the end of the program. The problem is that some records are not updated in the database, yet no exception is raised!
If I commit after each update, the problem seems to go away, but of course the massive update then takes much longer, and I understand that committing after every record is not recommended.
Is there a limit on how much work a single commit can cover (a maximum number of updated records)?
Thanks greatly for your help!
Regards,
Carine
If it were a problem with the size of the rollback segments, you would have received an error.
But are you sure you don't have any swallowed errors (such as a WHEN OTHERS handler that does nothing)? In that case you would receive no error, and no rollback would be performed (but a commit instead), "saving" the modifications made up to that point.
In the book "Expert One-on-One Oracle" by Thomas Kyte, there is a chapter on what exactly a commit does.
A small extract:
Basically, a commit has a fairly flat response time, because 99.9 percent of the work is already done before you commit:
[list]
[*]you have already generated the rollback (undo) segment records in the SGA
[*]modified data blocks have been generated in the SGA
[*]buffered redo for the above two items has been generated in the SGA
[*]depending on the size of the above three and the amount of time spent, some combination of that data may have been flushed to disk already
[*]all locks have been acquired
[/list]
When you commit, all that is left is the following:
[list]
[*]generate an SCN (system change number) for our transaction
[*]LGWR writes all of our remaining buffered redo log entries to disk and records the SCN in the online redo log files as well. This step is the actual commit: if this step occurs, we have committed. Our transaction entry is removed, which shows that we have committed, and our record in the V$TRANSACTION view will 'disappear'.
[*]all locks held by our session are released, and everyone who was enqueued waiting on locks we held is released.
[*]many of the blocks our transaction modified will be visited and 'cleaned out' in a fast mode if they are still in the buffer cache.
[/list]
Flushing the redo log buffer by LGWR is the lengthiest operation.
To avoid a long wait at commit time, this flushing is done continuously as we are processing:
[list]
[*]every three seconds
[*]when the redo log buffer is one third full, or holds 1 MB of data
[*]upon any transaction commit
[/list]
For more information, do a search on asktom.oracle.com or read his book.
But it must be clear that the commit itself has no limit on the number of rows processed.
There's no limit regarding COMMIT. There is a limit on the number of rows that can be modified (updated, deleted, inserted) in a transaction (i.e. between commits). It depends on the rollback segment size (and other activity) and varies with each database (see your DBA).
If you were hitting this limit, it would normally roll back all changes to the last commit.
Ken
=======
Hello Ken,
Thanks a lot for the quick answer. The strange thing is that I do not get any error message about the rollback segments:
if I commit once at the end, after updating thousands of records, the commit appears to complete correctly, yet I see that some records have not been updated in the database (so I would not be hitting the limit, since in that case all changes would have been rolled back).
Is there a way to get a return status from the commit? Should I commit after every 1000 records, for example?
Thanks again,
Carine
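For what it's worth, one way to both commit in batches and find out which rows silently failed is to check each statement's affected-row count as you go. This is only a sketch using Python's DB-API against an in-memory SQLite table (table `t` and its columns are invented; an Oracle driver would use its own placeholder style, but the shape is the same):

```python
import sqlite3

def apply_in_batches(conn, rows, batch_size=1000):
    """Apply one UPDATE per row, committing every batch_size rows.

    Rows whose UPDATE raises, or matches nothing (rowcount == 0), are
    collected and returned instead of being silently lost."""
    cur = conn.cursor()
    failed = []
    for i, row in enumerate(rows, start=1):
        try:
            cur.execute("UPDATE t SET val = ? WHERE id = ?",
                        (row["val"], row["id"]))
            if cur.rowcount == 0:       # no error, but no row was touched
                failed.append(row)
        except sqlite3.Error:
            failed.append(row)
        if i % batch_size == 0:
            conn.commit()
    conn.commit()                        # flush the final partial batch
    return failed

# Tiny demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
missed = apply_in_batches(conn, [{"id": 1, "val": "x"},
                                 {"id": 99, "val": "y"}], batch_size=1)
print(missed)   # row id 99 matched nothing -> [{'id': 99, 'val': 'y'}]
```

Checking the per-statement row count is what catches "no exception, but nothing updated", which COMMIT itself will never report.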
Similar Messages
-
How can I make my ADODC connect faster to my SQL Server? It's taking a minute (so long) before I can view thousands of records in my ListView. Please, anyone, help me.
I'm using this code:
Public Class McheckpaymentNew
Private cn As New ADODB.Connection
Private rs As New ADODB.Recordset
Private Sub McheckpaymentNew_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Try
cn.ConnectionString = "DSN=database; UID=user; PWD=password"
cn.Open()
rs.CursorLocation = ADODB.CursorLocationEnum.adUseClient
rs.CursorType = ADODB.CursorTypeEnum.adOpenStatic
rs.LockType = ADODB.LockTypeEnum.adLockBatchOptimistic
Catch ex As Exception
MsgBox("Failed to Connect! Please check your network connections, or contact the MIS Dept. for assistance.", vbCritical, "Error while Connecting to Database..")
End
End Try
End Sub
End Class
-
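The minute-long load most likely comes from pulling every row to the client at once (a client-side static cursor materializes the whole result set). One common alternative is keyset pagination: fetch only one page per query, keyed on the last id seen. A rough sketch of the idea against an in-memory SQLite table (the `checks` table and column names are made up; on SQL Server the LIMIT clause would be `TOP` or `OFFSET ... FETCH`):

```python
import sqlite3

# Keyset pagination: fetch one page at a time, remembering the last id seen,
# instead of loading every row into the grid at once.
def fetch_page(conn, after_id, page_size=50):
    return conn.execute(
        "SELECT id, payee FROM checks WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checks (id INTEGER PRIMARY KEY, payee TEXT)")
conn.executemany("INSERT INTO checks VALUES (?, ?)",
                 [(i, f"payee-{i}") for i in range(1, 201)])

page1 = fetch_page(conn, after_id=0)
page2 = fetch_page(conn, after_id=page1[-1][0])
print(page1[0], page2[0])   # (1, 'payee-1') (51, 'payee-51')
```

Filling the ListView one page at a time keeps the first paint fast no matter how many rows the table holds.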
Commit for every 1000 records in Insert into select statment
Hi, I have the following INSERT INTO ... SELECT statement.
The SELECT statement (which has joins) returns around 6 crore (60 million) rows. I need to insert that data into another table.
Please suggest the best way to do that.
I'm using the INSERT INTO ... SELECT statement, but I want to commit every 1000 records.
How can I achieve this?
insert into emp_dept_master
select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
from emp e , dept d
where e.deptno = d.deptno ------ how to use commit for every 1000 records. Thanks

Smile wrote:
Hi I've the following INSERT into SELECT statement .
The SELECT statement (which has joins) returns around 6 crore rows. I need to insert that data into another table.

Does the other table already have records, or is it empty?
If it's empty, you can drop it and create it directly:
create table your_another_table
as
<your select statement that returns 60000000 records>
Please suggest me the best way to do that .
I'm using the INSERT INTO ... SELECT statement, but I want to commit every 1000 records.

That is not the best way. Frequent commits may lead to ORA-01555 (snapshot too old) errors.
[url http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923]A nice article from AskTom on this one
How can i achieve this ..
insert into emp_dept_master
select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
from emp e , dept d
where e.deptno = d.deptno ------ how to use commit for every 1000 records .
It depends on the reason behind wanting to split your transaction into small chunks. Most of the time there is no good reason for that.
If you are trying to improve performance by doing so, you are mistaken: it will only degrade performance.
To improve performance you can use the APPEND hint on the insert, you can try parallel DML, and if you are on 11g or above you can use [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH]DBMS_PARALLEL_EXECUTE to break your insert into chunks and run them in parallel.
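The chunking idea behind DBMS_PARALLEL_EXECUTE is easy to illustrate: split the key range into fixed-size chunks and run one unit of work per chunk. A minimal Python sketch of just the range-splitting step (the chunk size and the BETWEEN-predicate shape are assumptions; the real package builds chunks with routines such as CREATE_CHUNKS_BY_NUMBER_COL):

```python
def chunk_ranges(lo, hi, chunk_size):
    """Split the inclusive id range [lo, hi] into (start, end) chunks."""
    chunks = []
    start = lo
    while start <= hi:
        end = min(start + chunk_size - 1, hi)
        chunks.append((start, end))
        start = end + 1
    return chunks

# Each (start, end) pair would become one
# "INSERT ... SELECT ... WHERE id BETWEEN :start AND :end" unit of work.
print(chunk_ranges(1, 10, 4))   # [(1, 4), (5, 8), (9, 10)]
```

Each chunk is then an independent, restartable transaction, which is the legitimate version of "commit every N rows".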
So if you can tell the actual objective we could offer some help. -
To Fetch records from database ..in case if there thousands of records
Hi All,
Suppose I have several thousand records in the database. I want a SELECT statement that can retrieve the first 5000 rows and the last 5000 rows simultaneously.
Can anybody please tell me how to select them from the database.
It's urgent, so kindly please let me know.
Thanks,
Neethu

select * from db into table itab
  up to 5000 rows
  where <condition>
  order by primary key.

select * from db appending table itab
  up to 5000 rows
  where <condition>
  order by <key fields> descending.
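Outside ABAP, the "first 5000 plus last 5000" requirement is just two slices of the sorted rows; a small Python sketch of the logic the two SELECTs implement:

```python
def first_and_last(rows, n, key=None):
    """Return the first n and the last n elements of rows under the sort key."""
    ordered = sorted(rows, key=key)
    return ordered[:n], ordered[-n:]

first, last = first_and_last(range(1, 20001), 5000)
print(first[0], first[-1], last[0], last[-1])   # 1 5000 15001 20000
```

In SQL terms these are the ascending and descending top-N queries, appended into the same result set.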
*reward if solved* -
Commit after every 1000 records
Hi dears ,
I have to update or insert around 1 lakh (100,000) records every day on an incremental basis.
While doing it, the commit happens only after completing all the records, so if a problem occurs in between, all my processed records get rolled back.
I need to commit after every so many records, say every 1000 records.
Any one know how to do it??
Thanks in advance
Regards
Raja

Raja,
There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk-mode mappings; bulk-mode mappings commit according to the bulk size (which is also a configuration setting of the mapping).
When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values differ, Bulk Size overrides the Commit Frequency and Warehouse Builder implicitly performs a commit for every bulk size.
Regards,
Ilona -
Commit/Rollback Button, Count Records, Ctrl-End pblm?
1. How about adding a commit and rollback button on the SQL Worksheet?
2. How about a quick way to count the records in the result set in sqlworksheet and the data tab?
3. I noticed that using Ctrl-End on the data tab does not move to the last record; you still need to page down or scroll to the end. Ctrl-End should move to the last record in the table. I get 55 records, press Ctrl-End, and land on record 105, while there are thousands of records in the table. Paging down will get me there.
4. What about drag and drop selected text in SQL Worksheet?
5. On the table tab (and sub tabs...what are they called?), automatically store the filters and sorts on a user.table level (and maybe a dropdown to recall them)? These are the kind of requests that really improve your efficiency, and acceptance of the product. The user believes that as they use the product, they are building a foundation that can be easily recalled later.
As a note for any settings files: they should be stored so they can be easily transported to the next release or to another PC. They should probably be in a directory with the operator's name (like My Documents).

I think it would be good to have a function (like Ctrl-End) that allows you to jump to the last record, even if there are a million rows.
As far as a "work-around", exporting will count the records, so you could run your query and in the result table right-click, select Export -> some format. Once you select the file to export to, the Fetched Rows count is replaced with a Rows count for the export.
The export does not appear to fetch the rows in the results (unless you have already fetched to the end of the query). In my example, when I ran the query the row count was "Fetched Rows: 100". I exported and the row count was "Rows: 4992". Then when I paged down in the results, the row count was updated to "Fetched Rows: 150". From this, I would assume that it reruns the query separately if you haven't already fetched the last record in the results.
Interestingly, if I then scrolled to the end of the query (displaying "All Rows Fetched: 4992") and then did an export, it appeared to scroll through the fetched records but ended up with "Row: 4991" displayed; the full 4992 records were nevertheless in the export file.
Another interesting point: if I set up my query to return 199 records (ROWNUM < 200), when I run it, it displays "Fetched Rows: 100". Once I page down enough it shows "Fetched Rows: 150" and then "Fetched Rows: 200". Only after I page down further does it update to "All Rows Fetched: 199".
Record level commit in a multi record block
Dear all,
i have a multi record block in my form (only one block)
After entering records, the Save button commits the form.
If any error is raised, the whole commit process stops; for example, if DUP_VAL_ON_INDEX is raised.
What I want is for the form to act at record level, i.e. it should commit record by record, so that even if an exception is raised, at least the records above it are committed.
thank U
Raj
mail: [email protected]

You can have a non-database block and write "insert into <table>" statements for each row of the block in the WHEN-BUTTON-PRESSED trigger of the 'commit' button. Commit after each row insertion. This will serve your purpose.
-
Hi guys,
I have a table bound to an ObjectListDataProvider and am using Hibernate 3.
I set the pagination to 10 rows per page and enabled the paginate button.
But I was wondering why the application server, or rather JSF itself, cannot handle displaying lots of records on one page.
For example, I have 150 pages at 10 rows per page each.
If I click the paginate button to show all on a single page, the CPU utilization hits 99% and my application hangs.
But it works okay if I have 8-9 pages at 10 rows per page and show all of them on a single page.
Is there any way I can fix this or tune it up?
Has anyone experienced this before?
Thanks

Solved.
Somewhere in our code, while populating one of the columns of the table, we were resetting the VO, which was causing the issue. We modified that method call to add a parameter specifying from which row the table should be populated.
Thanks
Nagamanoj -
Commit in procedures after every 100000 records possible?
Hi All,
I am using an ODI procedure to insert data into a table.
I checked that in the ODI procedure there is an option of selecting a transaction and setting the commit option to 'Commit after every 1000 records'.
Since the record count to be inserted is 38,489,152, I would like to know whether this option is configurable.
Can I ensure that commits are made at a logical step of 100,000 records instead of every 1000?
Thank You.
Prerna

Recently added a post on this:
http://dwteam.in/commit-interval-in-odi/
Thanks
Bhabani
http://dwteam.in -
Commit after 2000 records in update statement but am not using loop
Hi
My Oracle version is Oracle 9i.
I need to commit after every 2000 records. Currently I am using the statement below, without a loop. How can I do this?
Do I need to use ROWNUM?
BEGIN
  UPDATE
    (SELECT a.sku, m.to_sku, a.to_store
       FROM rt_temp_in_carton a,
            cd_sku_conv m
      WHERE a.sku = m.from_sku
        AND a.sku <> m.to_sku
        AND m.approved_flag = 'Y')
    SET sku = to_sku,
        to_store = (SELECT DECODE(to_store,
                                  5931, '931',
                                  5935, '935',
                                  5928, '928',
                                  5936, '936')
                      FROM rt_temp_in_carton
                     WHERE to_store IN ('5931', '5935', '5928', '5936'));
  COMMIT;
END;
Thanks for your help

I need to commit after every 2000 records

Why? Committing every n rows is not recommended....

Currently am using the below statement without using the loop. How to do this?

Use a loop? (not recommended)
How to print space as thousands separator and comma as decimal separator
Hi All,
I have a requirement where I need to print amounts with a space as the thousands separator and a comma as the decimal separator.
I have a field wrshb of type mhnd-wrshb that I am currently printing. In the Adobe layout I declared this column as a decimal field.
Right now the output prints with a comma as the thousands separator and a dot as the decimal separator.
For example, the value currently prints as 32,811.41,
but I want the amount as 32 811,41.
I declared the variable as CHAR16 and, using a WRITE statement in the interface, moved the value from the currency field to the char field.
Then, while debugging, I checked that the value comes out as 32,811.41 and it goes to dump with the reason "cannot interpret as a number".
Can anyone help me fix this?
Thanks and Regards,
Karthik Ganti.

Hi Adam,
As per the initial requirement, I set the format such that the amount prints in the required format below.
Locale: Italian.
Space as thousands separator and comma as decimal separator,
for example 1 234,45.
As some currencies do not have decimals, users would now like to print the amount without decimals. For example, in my case an amount in KRW (Korean currency) also prints in the above format, which is wrong:
right now the amount prints as 55 000,00, but it should actually be 550 000. The same applies to the JPY currency, as it does not have decimals (checked in the TCURX table).
I have written some logic in the interface. below is the logic.
WRITE:
  wa_mhnd1-wrshb TO wa_item-wrshb CURRENCY wa_item-waers.
*READ TABLE lt_tcurx INTO lwa_tcurx WITH KEY currkey = wa_item-waers BINARY SEARCH.
IF sy-subrc = 0.
  IF lwa_tcurx-currdec = '0'.
    REPLACE ',' WITH space INTO wa_item-wrshb.
    REPLACE ',' WITH space INTO wa_item-wrshb.
  ELSE.
    REPLACE ',' WITH space INTO wa_item-wrshb.
    REPLACE ALL OCCURRENCES OF '.' IN wa_item-wrshb WITH ','.
  ENDIF.
ENDIF.
a. When the WRITE statement gets executed, the amount is in ',.' format (1,234.45). Then my logic executes correctly. Here the company codes are CH10 (EUR) and KR10.
b. But sometimes, after the WRITE statement executes, the amount is in '.,' format (1.234,45). In this case my logic runs but gives the wrong value. Here the company code is VN10 (EUR).
In both cases the currency is EUR.
Will the decimal format change according to the company code currency? Can you please tell me why the WRITE statement behaved differently?
Do I need to change a locale in the Adobe form, or does other logic need to be written in the interface? I have been trying for a long time but am not able to fix it.
Can you please help me achieve this?
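For reference, the target formatting rule (space as thousands separator, comma as decimal separator, and no decimals for zero-decimal currencies) can be stated compactly. A Python sketch, where the zero-decimal currency set is an assumption (in SAP, table TCURX is the authority):

```python
# Illustrative set of currencies without decimal places (in SAP this comes
# from TCURX, not a hard-coded list).
ZERO_DECIMAL = {"KRW", "JPY"}

def format_amount(value, currency):
    """Space as thousands separator, comma as decimal separator."""
    if currency in ZERO_DECIMAL:
        text = f"{round(value):,.0f}"    # e.g. '550,000'
    else:
        text = f"{value:,.2f}"           # e.g. '32,811.41'
    # Swap separators: ',' -> space, '.' -> ','
    return text.replace(",", " ").replace(".", ",")

print(format_amount(32811.41, "EUR"))   # 32 811,41
print(format_amount(550000, "KRW"))     # 550 000
```

The key point is to format with one known, fixed separator convention first and only then swap separators, so the replacement logic never depends on the ambient locale.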
Thanks and Regards,
Karthik Ganti. -
Hi,
This might be a really dumb one to ask, but I am currently working on a table that has sequential data for the steps an invoice goes through in a particular system. Here is how it looks:
ID      InvoiceID             InvoiceSteps     Timestamp
283403  0000210121_0002_2013  Post FI Invoice  2013-07-01 19:07:00.0000000
389871  0000210121_0002_2013  Clear Invoice    2013-08-25 14:02:00.0000000
Here is my extremely slow query, which converts the multiple rows of an invoice into a single one, with 'InvoiceSteps' listed in timestamp order and separated by commas.
SELECT [InvoiceID],
[InvoiceSteps] = STUFF((
SELECT ',' + ma.InvoiceSteps
FROM invoices ma
WHERE m.InvoiceID = ma.InvoiceID
ORDER BY [Timestamp]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
FROM invoices m
GROUP BY InvoiceID
ORDER BY InvoiceID;
Here is the end result:

InvoiceID             InvoiceSteps
0000210121_0002_2013  Post FI Invoice,Clear Invoice
My question: How can I improve the query so that it can process thousands of records as fast as possible (>600K in this case)?
Thank you!

There are many methods to concatenate rows into columns. Assuming you have the necessary indexes to support your query, as Rishabh suggested, if you still find performance issues then you need to look at various other approaches as well. I have seen CLR outperform the alternatives in certain places (huge data). Having said that, we need to assess each approach and come to a conclusion for your scenario.
Refer to the link below for the various approaches (please also look at the comments section):
https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/ -
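As a cross-check on whichever SQL approach is chosen, the transformation itself is small: group by InvoiceID, sort each group by timestamp, and join the step names with commas. A Python sketch of that logic, run against the thread's sample data:

```python
from collections import defaultdict

def steps_per_invoice(rows):
    """rows: (invoice_id, step, timestamp) tuples ->
       {invoice_id: 'step1,step2,...'} with steps ordered by timestamp."""
    groups = defaultdict(list)
    for invoice_id, step, ts in rows:
        groups[invoice_id].append((ts, step))
    return {inv: ",".join(step for _, step in sorted(pairs))
            for inv, pairs in groups.items()}

rows = [
    ("0000210121_0002_2013", "Clear Invoice",   "2013-08-25 14:02:00"),
    ("0000210121_0002_2013", "Post FI Invoice", "2013-07-01 19:07:00"),
]
print(steps_per_invoice(rows))
# {'0000210121_0002_2013': 'Post FI Invoice,Clear Invoice'}
```

Running a reference implementation like this over a sample makes it easy to verify that a faster SQL rewrite still produces the same ordering.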
How to remove soft deleted records?
Hi everybody,
Are there any ways to remove soft deleted records from LT table?
For instance, a table is versioned and the database has a few workspaces. The user removes some data from the LIVE workspace (or from other workspaces). The removed records are marked as deleted in the _LT table, but they are never removed from the _LT table, even when compression is executed on all workspaces.
I found that the only way to remove them is to drop all workspaces and savepoints and then run compression. After that, all soft-deleted records (records marked as deleted in the _LT table) are removed.
I have thousands of active records while hundreds of thousands are soft-deleted, which causes performance degradation.
Any suggestions?
Thanks for any input.
Edited by: dmbond on Jan 14, 2010 7:15 AM

Thanks Ben for such a quick reply.
Please correct my understanding if I am wrong somewhere...
From your last post I understood that soft-deleted records cannot be removed from a versioned table when the same data existed prior to versioning.
I made a quick test and can see version 0 on the data that was originally there, and running compression did not remove it (OWM's choice).
However, here is another case: a brand-new (empty) table was versioned (with a new workspace created after versioning), and then new data was added into LIVE and later removed. After compressing the LIVE workspace, the soft-deleted records are still in the _LT table and their version is not 0.
Here is my last test example:
create table dm_test (
column1 number primary key,
column2 number not null);
call dbms_wm.enableVersioning('DM_TEST');
call dbms_wm.createWorkspace('DUMMY');
--No records
select * from dm_test_lt;
insert into dm_test values (9,1);
insert into dm_test values (10,2);
insert into dm_test values (11,3);
insert into dm_test values (12,4);
commit;
--Shows data with version different than 0
select * from dm_test_lt;
--Delete data from LIVE workspace
delete from dm_test;
commit;
--LT has delstatus negative (-1)
select * from dm_test_lt;
--compress LIVE workspace
declare
begin
DBMS_WM.CompressWorkspace('LIVE',
compress_view_wo_overwrite => TRUE,
auto_commit => TRUE,
remove_latest_deleted_rows=>TRUE);
end;
--The data is still there after compression
select * from dm_test_lt; -
How to fetch records between two sequence numbers?
We have thousands of records with sequence numbers in the Oracle database. We need to retrieve the records between two sequence numbers, e.g. the records between sequence numbers 100 and 200. Could someone help me with a query to fetch those records?
I'll be waiting for your response.
Edited by: sumant on Jul 27, 2010 12:42 PM

Is this what you are looking for?
SQL> create table tab1 (id number);
Table created.
SQL> insert into tab1 values (1);
1 row created.
SQL> insert into tab1 values (2);
1 row created.
SQL> insert into tab1 values (3);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from tab1;
ID
1
2
3
SQL> select round(dbms_random.value(max(id)+1,max(id)+50)) random_number
2 from tab1;
RANDOM_NUMBER
43
SQL> select round(dbms_random.value(max(id)+1,max(id)+50)) random_number
2 from tab1;
RANDOM_NUMBER
39
SQL> select round(dbms_random.value(max(id)+1,max(id)+50)) random_number
2 from tab1;
RANDOM_NUMBER
13

This will generate a random number greater than the maximum value in the table and lower than maxvalue + 50. Since the range starts from the max value in the table, you will never get a generated number that already exists in the table.
As you see, the numbers generated above are all between 4 and 53.
If you want to increase the range from which the numbers are generated, just increase the value 50 that I used in the query.
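Note that the reply above generates new values outside the existing range, while the original question, fetching rows whose sequence number falls between two bounds, only needs a range predicate such as `WHERE seq BETWEEN 100 AND 200`. A quick sketch verifying the inclusive semantics against an in-memory SQLite table (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recs (seq INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO recs VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 301)])

# Inclusive range fetch: the SQL answer to the question being asked.
rows = conn.execute(
    "SELECT seq FROM recs WHERE seq BETWEEN ? AND ? ORDER BY seq",
    (100, 200),
).fetchall()
print(rows[0][0], rows[-1][0], len(rows))   # 100 200 101
```

BETWEEN is inclusive on both ends, which is why 100 through 200 yields 101 rows; an index on the sequence column makes this fast even over millions of records.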
Why can I not unlock the record when closing a page?
I developed a JSP page as a client; the JSP application uses a business component with a stateful application module. I call a database row set's lock function in the JSP page to lock a record. Now, after I close the JSP page without commit or rollback, the record is not unlocked until I restart the OC4J application server. How can I unlock the record when closing the page without commit or rollback?
Li Ping:
If you lock a row in the course of a transaction, you cannot unlock it until the end of the transaction (either rollback or commit); the DBMS doesn't allow this.
Alternatively, you can try to delay locking of the row, e.g. by using optimistic locking. Please see the Javadoc for oracle.jbo.Transaction if you're interested in optimistic locking.
Thanks.
Sung