DB dead lock during update
Hello,
In our R/3 Enterprise 4.7 Production system running on Windows NT/MSSQL, I saw update errors in t-code SM13.
Error details:
Date : 07/19/2009
No. of errors : 24
All errors are for background user BATCH-ID
Function Module : RKE_WRITE_ACT_LINE_ITEM_OP01
Status : DB dead lock during update
SM12 does not show any locks currently; I am not sure what the status was when these update errors occurred.
I did not find any SAP Notes related to this function module or similar issues.
Can you please tell me whether this is a serious issue and how to handle such errors?
Regards,
Roshan
Hi,
Can you please let us know how many times you have seen this error in SM13, and over how many days?
If you have seen it only once, there is no need to worry.
Function Module : RKE_WRITE_ACT_LINE_ITEM_OP01
Status : DB dead lock during update
From the above lines it is clear that the update for function module RKE_WRITE_ACT_LINE_ITEM_OP01 ran into a database deadlock: the rows it needed were locked by another transaction, so the update failed and was recorded in SM13 with the reason "DB dead lock during update".
Can you please refer to the system log in SM21 and to the trace files of the related work processes for the same error (red entries)? Try to analyze from there; if you run into any difficulties, paste the findings here along with your system version/DB/OS/patch level.
Regards,
Ravi
Similar Messages
-
I tried to update my NOKIA 5500 Sport using the software updater program downloaded from the NOKIA site.
After a couple of minutes it said it had lost contact and told me to unplug the USB cable. I did so and tried again. After the second time around I got frustrated, removed the phone, and tried to restart it. It would not start, so I removed the battery, and now it is dead.
What shall I do?
Please help me.
Best regards
Björn, Gothenburg, SWEDEN
Contact a Nokia care point.
They can re-flash the firmware and get it working again.
There is nothing that you can do yourself. -
How to recover an iPhone that locked during update
My iPhone 3G failed during an update to the iOS 6 software. Now when you do a hard reboot, the screen shows a USB cable pointing to iTunes, but iTunes on the computer no longer recognizes the iPhone. Does anyone have a solution for doing a hard reset back to the original configuration?
Thanks
Mark
Yes, I have exactly the same problem, can anyone help??
-
iTunes locked during update; now it indicates I will lose the video on the phone?
Can I recover video if stuck on a locked iTunes screen?
I did have Google set up as my primary account for contacts and cloud services...
I had 300 MB of storage used out of 5 GB available, but per the Apple Geniuses at the store, they don't hold video even if there is room available...
We set up a "dummy" phone at the store, signed in as me, etc., and looked to see if the video was in my cloud, but because Google doesn't keep videos it wasn't there...
Might have to restore and recover at this point... -
Hi,
I have a weird issue with locking during an update task.
I have a report which enqueues a specific lock object in exclusive cumulative mode (mode E) with scope 2.
Afterwards, I call an FM in the update task which also enqueues exactly the same object in exclusive cumulative mode with scope 2. The FM is triggered after COMMIT WORK has been executed.
For some reason, the enqueue during the update task fails (because the object is locked by another owner).
As far as I know, there should not be any locking problem, because the update task inherits the lock owner (as described in http://help.sap.com/saphelp_erp60_sp/helpdata/EN/7b/f9813712f7434be10000009b38f8cf/frameset.htm).
Does anyone know why this occurs? How can I solve this issue?
Thanks in advance,
Shai
Hi,
Thanks for the answer.
Why is that?
The lock is defined in exclusive cumulative mode, which means that the same lock can be requested by the same owner several times.
This scenario works just fine in a normal dialog process (with two sequential locks), but fails in the update task.
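For illustration only, the exclusive-cumulative owner rule described above can be sketched as a toy lock table (Python, with invented names; this is not real SAP enqueue internals): the same owner may stack the lock, while any other owner is rejected. If the update task did not actually inherit the dialog owner, its enqueue would fail exactly as observed.

```python
# Toy model of exclusive-cumulative ("E") enqueue semantics.
# Hypothetical names; illustrates only the owner concept.

class EnqueueServer:
    def __init__(self):
        self.locks = {}  # lock_key -> [owner, count]

    def enqueue(self, key, owner):
        held = self.locks.get(key)
        if held is None:
            self.locks[key] = [owner, 1]
            return True
        if held[0] == owner:          # mode E is cumulative for the SAME owner
            held[1] += 1
            return True
        return False                  # any other owner is rejected

    def dequeue(self, key, owner):
        held = self.locks.get(key)
        if held and held[0] == owner:
            held[1] -= 1
            if held[1] == 0:
                del self.locks[key]

srv = EnqueueServer()
assert srv.enqueue("ZOBJ/0001", owner="DIALOG_1")      # report locks the object
assert srv.enqueue("ZOBJ/0001", owner="DIALOG_1")      # same owner may cumulate
assert not srv.enqueue("ZOBJ/0001", owner="UPDATE_7")  # a different owner fails
```

If the second enqueue in your trace really is issued under a different owner, checking the lock owner shown in SM12 at the moment of failure may confirm whether the inheritance described in the help page is actually taking effect.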
For your knowledge,
Shai -
FOR UPDATE cursor is causing Blocking/ Dead Locking issues
Hi,
I am facing one of the more complex issues regarding blocking/deadlocking. Please find the details below and suggest the best approach to go forward.
This is in the core investment banking domain. In our day-to-day business we use many transaction tables for processing trades and placing orders. In particular, there are two main transaction tables:
1) Transaction table 1
2) Transaction table 2
Both of these tables hold a huge amount of data. In one of our applications, to maintain data integrity (during this process we do not want other users to change these rows), we have placed SELECT ... FOR UPDATE cursors on these two tables, locking all the rows during the process. We have batch jobs (shell scripts) calling this procedure several times per day, roughly one hour each (see the cron schedule below: hourly on weekdays, from morning into the evening). The reason we run the same procedure multiple times is that our business wants to see the voucher before it is finalized, because an order can be placed and then updated/cancelled several times in a single day. At the end of the day we send the finalized update to our client.
20 07 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 08 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 09 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 10 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 11 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 12 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 13 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 14 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 15 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 16 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 17 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
Current program (cleaned up; table and column names are placeholders as in the original):

CREATE OR REPLACE PROCEDURE app_prc_1
AS
  v_no transaction_table1.id_no%TYPE;

  /* Takes the order details from the source and populates transaction_table2 */
  CURSOR cursor_upload IS
    SELECT t1.col1, t1.col2 -- ...
      FROM transaction_table1 t1, source_table1 s
     WHERE t1.id_no   = s.id_no
       AND t1.id_flag = 'N'
       FOR UPDATE OF t1.id_flag;

  /* Inserts another entry if any updates happened on the source table
     for the records inserted by the first cursor */
  CURSOR cursor_update IS
    SELECT t2.col1, t2.col2 -- ...
      FROM transaction_table2 t2, transaction_table1 t1
     WHERE t1.id_no      = t2.id_no
       AND t1.id_flag    = 'Y'
       AND t1.dml_action IN ('U', 'D') -- records updated or deleted after the initial insert
       FOR UPDATE OF t1.id_no, t1.id_flag;
BEGIN
  -- Block 1
  BEGIN
    FOR v_upload IN cursor_upload
    LOOP
      INSERT INTO transaction_table2 (id_no, dml_action /* , ... */)
      VALUES (v_upload.id_no, 'I' /* , ... */)   -- 'I' marks an INSERT
      RETURNING id_no INTO v_no;
      /* Update the flag in the source table after population
         (N = order not placed yet, Y = order processed the first time) */
      UPDATE transaction_table1
         SET id_flag = 'Y'
       WHERE id_no = v_no;
    END LOOP;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE(SQLERRM);
  END;
  -- Block 2
  BEGIN
    FOR v_update IN cursor_update
    LOOP
      INSERT INTO transaction_table2 (id_no, id_prev_no /* , dml_action, ... */)
      VALUES (v_id_seq_no, v_update.id_no /* , ... */)  -- v_id_seq_no: declaration omitted in the original
      RETURNING id_no INTO v_no;
      UPDATE transaction_table1
         SET id_flag = 'Y'
       WHERE id_no = v_no;
    END LOOP;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE(SQLERRM);
  END;
END app_prc_1;  -- main block end
Sample output in transaction_table1:

Id_no | Tax_amt | Re_emburse_amt | Activ_DT    | Id_Flag | DML_ACTION
01    | 1,835   | 4300           | 12/JUN/2009 | N       | I
02    | 1,675   | 3300           | 12/JUN/2009 | Y       | U
03    | 4475    | 6500           | 12/JUN/2009 | N       | D
(DML_ACTION is set whenever a DML operation occurs on this table.)

Sample output in transaction_table2:

Id_no | Prev_id_no | Tax_amt | Re_emburse_amt | Activ_DT
001   | 01         | 1,835   | 4300           | 12/JUN/2009 11:34 AM  (populated by the 2nd cursor, because an update happened to the record below; this is the 2nd voucher)
01    | 0          | 1,235   | 6300           | 12/JUN/2009 09:15 AM  (populated by the 1st cursor when the job ran the first time)
02    | 0          | 1,675   | 3300           | 12/JUN/2009 08:15 AM
003   | 03         | 4475    | 6500           | 12/JUN/2009 11:30 AM
03    | 0          | 1,235   | 4300           | 12/JUN/2009 10:30 AM
Now the issue is:
When this process runs, our other application jobs fail because they also use these two main transaction tables, so deadlocks are detected in those applications.
Solution needed:
Can anyone suggest how to rectify these blocking/locking/deadlock issues? I want my other applications to be able to use these tables while this process runs.
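Not a definitive fix, but one pattern worth evaluating for batch jobs like these is SELECT ... FOR UPDATE SKIP LOCKED (available in recent Oracle versions), which lets each session lock and process only the rows no one else currently holds instead of blocking on them. The row-skipping idea can be sketched in Python (illustration only; the names are invented):

```python
import threading

# Sketch of SKIP LOCKED-style processing: a worker tries to lock each row
# without blocking and simply skips rows another session already holds.
row_locks = {row_id: threading.Lock() for row_id in range(5)}

def process_unlocked_rows(worker):
    handled = []
    for row_id, lock in row_locks.items():
        if lock.acquire(blocking=False):   # like FOR UPDATE SKIP LOCKED
            try:
                handled.append(row_id)     # safe: only this worker holds the row
            finally:
                lock.release()
    return handled

row_locks[2].acquire()                     # pretend another session holds row 2
print(process_unlocked_rows("job_a"))      # → [0, 1, 3, 4]: row 2 skipped, no waiting
```

Whether skipping locked rows is acceptable depends on the business logic; the skipped rows would have to be picked up on a later run.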
Regards,
Maran
Hmmm... this leads to a warning:
SQL> ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
Session altered.
CREATE OR REPLACE PROCEDURE MYPROCEDURE
AS
MYCOL VARCHAR(10);
BEGIN
SELECT col2
INTO MYCOL
FROM MYTABLE
WHERE col1 = 'ORACLE';
EXCEPTION
WHEN PIERRE THEN
NULL;
END;
SP2-0804: Procedure created with compilation warnings
SQL> show errors
Errors for PROCEDURE MYPROCEDURE:
LINE/COL ERROR
12/9 PLW-06009: procedure "MYPROCEDURE" PIERRE handler does not end in RAISE or RAISE_APPLICATION_ERROR
:) -
Dead lock error while updating data into cube
We have a scenario of a daily truncate and upload of data into a cube, with volumes arriving at 2 million records per day. We have the parallel processing setting (PSA and data targets in parallel) in the InfoPackage to speed up the data load. This entire process runs through a process chain.
We are facing a deadlock issue every day. How can we avoid this?
In general, deadlocks occur because of degenerated indexes when the volumes are very high. So my question is: does deleting the indexes of the cube every day, along with the 'delete data target contents' process, help to avoid the deadlock?
Also observed: updating values into one InfoObject is taking a long time, approximately 3 minutes per data packet. That InfoObject is placed in a dimension defined as a line item, since the volumes are very high for that specific object.
So that is the overall scenario!
two things :
1) will deletion of indexes and recreation help to avoid dead lock ?
2) Any idea why the insertion into the InfoObject is taking so long? (There is a direct read on the SID table of that object, as observed in the SQL statement.)
Regards.
Hello,
1) will deletion of indexes and recreation help to avoid dead lock ?
Ans:
To avoid this problem, we need to drop the indexes of the cube before uploading the data, and rebuild the indexes afterwards.
Also:
Find out in SM12 which process is holding the lock, and delete that lock entry.
Find out in SM66 which process has been running for a very long time, and stop that process.
Check transaction SM50 for the number of work processes available in the system. If they are not adequate, you have to increase them with the help of the Basis team.
2) any idea why the insertion into the infoobject is taking longer time (there is a direct read on sid table of that object while observed in sql statement).
Ans:
A line-item dimension is one of the ways to improve data load as well as query performance, by eliminating the need for a dimension table. So while loading/reading, there is one less table to deal with.
Check in the transformation mapping of that characteristic whether any routine/formula is written. If so, this can lead to more processing time for that InfoObject.
Storing mass data in InfoCubes at document level is generally not recommended, because when data is loaded, a huge SID table is created for the document-number line-item dimension.
Check whether your InfoObject is similar to a document number.
Regards,
Dhanya -
5300 Dead after connection was lost during update
5300 Dead after connection was lost during update. Won't turn on, power up-nothing. Help?
It is now dead.
Take it to a Nokia care point.
Care points:
UK
http://www.nokia.co.uk/A4228006
Europe:
http://europe.nokia.com/A4388379
Elsewhere:
http://www.nokia.com and select your country. -
Locks up on Find my Ipad during update
Trying to update to iOS 5; during the update my iPad 1 will not go past "Find My iPad". I can choose either option and select Next, but it will not accept it.
Thanks,
ptmooser
I also did a restore from iTunes, but it had the same result.
-
Hi,
What are deadlocks in BW?
Hi,
The concept of a deadlock is similar to Oracle: if process P1 is using resource A (locked by P1) and waiting for B, while process P2 is using resource B (locked by P2) and waiting for resource A, this creates a deadlock, as neither can proceed.
In BW, deadlocks usually occur in delta loads. As far as I have observed, they usually occur during the processing of update rules. This may be because more than one data package processes the same record, since in a delta load data moves from the change log of the lower level to the next level.
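The P1/P2 cycle described above can be sketched in Python; as a general aside (not BW-specific code), the standard cure is to make every process acquire the resources in the same global order, so no circular wait can form:

```python
import threading

# Two resources that could deadlock if taken in opposite orders.
resource_a, resource_b = threading.Lock(), threading.Lock()

def worker(first, second, log, name):
    # Both workers lock A before B, so no circular wait can form.
    with first:
        with second:
            log.append(name)

log = []
t1 = threading.Thread(target=worker, args=(resource_a, resource_b, log, "P1"))
t2 = threading.Thread(target=worker, args=(resource_a, resource_b, log, "P2"))
t1.start(); t2.start(); t1.join(); t2.join()
assert sorted(log) == ["P1", "P2"]  # both finish; no deadlock
```

Had one thread taken (A, B) and the other (B, A), the two could each hold one lock and wait forever on the other: exactly the P1/P2 situation described.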
Regards
Sushma -
Frequenet dead locks in SQL Server 2008 R2 SP2
Hi,
We are experiencing frequent deadlocks in our application. We are using SQL Server 2008 R2 SP2. When our application is configured with 5-6 app servers, this issue occurs frequently.
But when the same application is used with SQL Server 2008 R2 or SQL Server 2012, we don't see the deadlock issue. From the error log and SQL trace, the error message is thrown for the database table JobLock. We have a stored procedure that inserts/updates
this table when a job moves from one service to another. The same procedure works fine with the 2008 R2 and SQL Server 2012 versions.
Is the above issue related to the hotfix from the below url?
http://support.microsoft.com/kb/2703275
Following error message is seen frequently in the log file.
INFO : 03/24/2014 10:26:30:290 PM: [00007900:00005932] [Xerox.ISP.Workflow.ManagedActivity.PersistInTransaction] System.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 62) was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteNonQuery(DbCommand command)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteNonQuery(DbCommand command, DbTransaction transaction)
at Xerox.ISP.DataAccess.Data.Utility.ExecuteNonQuery(TransactionManager transactionManager, DbCommand dbCommand)
at Xerox.ISP.DataAccess.Data.SqlClient.SqlActivityProviderBase.ActivityReady(TransactionManager transactionManager, Int32 start, Int32 pageLength, Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1
CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName, Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS)
at Xerox.ISP.DataAccess.Domain.ActivityBase.ActivityReady(Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1 CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName,
Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS, Int32 start, Int32 pageLength)
at Xerox.ISP.DataAccess.Domain.ActivityBase.ActivityReady(Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1 CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName,
Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS)
at Xerox.ISP.Workflow.ManagedActivity.<>c__DisplayClass2f.<ActivityReady>b__2d()
at Xerox.ISP.Workflow.ManagedActivity.PersistInTransaction(Boolean createNew, PersistMethod persist)
ClientConnectionId:9e44a64f-5014-4634-9cee-4581e1b9c299
I look forward to the suggestions to get the issue resolved. Your input is much appreciated.
Thanks,
Keshava.
If you are having deadlock trouble in your SQL Server instance, this recipe demonstrates how to make sure deadlocks are logged to the SQL Server Management Studio SQL log appropriately, using the DBCC TRACEON, DBCC TRACEOFF, and DBCC TRACESTATUS commands. These commands enable, disable, and check the status of trace flags.
To determine the cause of a deadlock, we need to know the resources involved and the types of locks acquired and requested. For this kind of information, SQL Server provides Trace Flag 1222 (this flag supersedes 1204, which was frequently used in earlier versions of SQL Server):
DBCC TRACEON(1222, -1);
GO
With this flag enabled, SQL Server will provide output in the form of a deadlock graph, showing the statements executing in each session at the time of the deadlock; these are the statements that were blocked and so formed the conflict, or cycle, that led to the deadlock.
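Since the 1205 message itself says "Rerun the transaction", applications are usually expected to retry when chosen as the deadlock victim. A minimal, hypothetical sketch of that client-side pattern (Python here, although the application above is .NET; the names are invented):

```python
class DeadlockVictimError(Exception):
    """Stand-in for SQL Server error 1205 ('chosen as the deadlock victim')."""

def run_with_retry(operation, attempts=3):
    # Error 1205 explicitly says "Rerun the transaction": retry the whole
    # transaction a bounded number of times before giving up.
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except DeadlockVictimError:
            if attempt == attempts:
                raise  # out of retries: surface the deadlock to the caller

calls = {"n": 0}
def flaky_transaction():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockVictimError()  # victim on the first two tries
    return "committed"

print(run_with_retry(flaky_transaction))  # → committed
```

In a real .NET client, the equivalent is catching SqlException with error number 1205 around the whole transaction (not a single statement), often with a short randomized delay before retrying.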
Be aware that it is rarely possible to guarantee that deadlocks will never occur. Tuning for deadlocks
primarily involves minimizing the likelihood of their occurrence. Most of the techniques for minimizing the occurrence of deadlocks are similar to the general techniques for minimizing blocking problems. -
IPod touch completely dead after attempted update to 3.0
Cross-posted to "with Windows" forum...
...Not only during update to 3.0, but also during update to 2.2.1 and during a restore attempt 6 weeks ago. All 3 "dead" iPods have been replaced by Apple - I'd like a solution...
I have videoed the update taking place and the iPod dying during it - sadly it is in MOV format (439MB) and I cannot convert to an uploadable version for Youtube... By DEAD, I mean, no power and will not display ANYTHING on the screen. Simply won't power up again.
This is what happens: Download update using iTunes... Extracting software... Backing up iPod... Preparing the iPod for the software update... (iPod shows USB plug and CD - then apple logo...) then the updating progress line appears on iTunes and on touch. Updating iPod software... Verifying updated iPod software... Updating the iPod firmware... White progress line on iPod disappears after a few seconds and screen goes black. Error message displays on iTunes - "The iPod xxxx could not be updated. An unknown error occurred (6)"
iPod is now dead and needs to be replaced,
1) Has anyone experienced this?
2) Has anyone SOLVED it?
Maybe related is a problem with some videos that have happened recently (and were the cause of me trying to restore the iPod last time it died) MP4 (or MPV files) play for about 20-30 minutes, then screen goes white, then black, then apple logo appears, then iPod returns to home screen after about 30 seconds. My usual solution to this has been to remove all videos, photos (which are corrupted by this time), audio, and then re-sync back.
I'd appreciate ANY help please. I intend buying an iPhone soon and would like to know that the problem won't occur on that, every time I try to restore.
Cheers!
Martin
To begin the discussion, let's throw in a few possibilities:
1. Is your iPod in the process of being "jail-broken"? If so, I'd avoid the risk and stick with standard Apple firmware. Some hacks end up with dead iPods.
2. Do you use iTunes for syncing? Only use Apple software for managing your file system.
3. Are you Windows or Apple? For whatever it's worth, I've only seen this happen with Windows machines.
No solutions here - but maybe some hopeful suggestions. -
Restore using TSPITR results in a deadlock error
These are the steps I followed, but I am getting a deadlock error. Please give your valuable suggestions.
Product used: Oracle 11g in a Linux environment
1) Before taking the backup, get the SCN number for the restore.
Command applied: Select current_scn from v$database;
2)running Full backup of database
Command applied:
configure controlfile autobackup on;
backup database;
CROSSCHECK BACKUP;
exit;
3)Running level 0 incremental backup
Command applied:
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 TAG ='WEEKLY' TABLESPACE TEST;
exit;
4) Running level 1 incremental backup
Command applied:
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 TAG ='DAILY' TABLESPACE TEST;
5) Before the restore (TSPITR), the following procedure is applied under the SYSDBA privilege
Command applied:
SQL 'exec dbms_backup_restore.manageauxinstance ('TSPITR',1)';
6) TSPITR restore command
Command applied:
run
{
SQL 'ALTER TABLESPACE TEST OFFLINE';
RECOVER TABLESPACE TEST UNTIL SCN 1791053 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
SQL 'ALTER TABLESPACE TEST ONLINE';
}
I also tried this option (and got the same error):
Command applied:
run
{
SQL 'ALTER TABLESPACE TEST OFFLINE';
SET UNTIL SCN 1912813;
RESTORE TABLESPACE TEST;
RECOVER TABLESPACE TEST UNTIL SCN 1912813 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
SQL 'ALTER TABLESPACE TEST ONLINE';
}
The following is what I get for the above-mentioned restore command:
Recovery Manager: Release 11.2.0.1.0 - Production on Tue Aug 17 18:11:18 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: NEW10 (DBID=2860680927)
RMAN> run
2> {
3> SQL 'ALTER TABLESPACE TEST OFFLINE';
4> RECOVER TABLESPACE TEST UNTIL SCN 1791053 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
5> SQL 'ALTER TABLESPACE TEST ONLINE';
6> }
7>
using target database control file instead of recovery catalog
sql statement: ALTER TABLESPACE TEST OFFLINE
Starting recover at 17-AUG-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=404 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='BkAq'
initialization parameters used for automatic instance:
db_name=NEW10
db_unique_name=BkAq_tspitr_NEW10
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=/opt/oracle/base/flash_recovery_area
log_archive_dest_1='location=/opt/oracle/base/flash_recovery_area'
#No auxiliary parameter file used
starting up automatic instance NEW10
Oracle instance started
Total System Global Area 292933632 bytes
Fixed Size 1336092 bytes
Variable Size 100666596 bytes
Database Buffers 184549376 bytes
Redo Buffers 6381568 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
executing Memory Script
executing command: SET until clause
Starting restore at 17-AUG-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=59 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/autobackup/2010_08_17/o1_mf_s_727280767_66nmo8x7_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/autobackup/2010_08_17/o1_mf_s_727280767_66nmo8x7_.bkp tag=TAG20100817T142607
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/oracle/base/flash_recovery_area/NEW10/controlfile/o1_mf_66o0wsh8_.ctl
Finished restore at 17-AUG-10
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 8 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 9 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 7 to
"/opt/oracle/base/oradata/NEW10/test01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 8, 3, 2, 9, 7;
switch clone datafile all;
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_temp_%u_.tmp in control file
Starting restore at 17-AUG-10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00009 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnndf_TAG20100817T140128_66nl7174_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnndf_TAG20100817T140128_66nl7174_.bkp tag=TAG20100817T140128
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:45
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to /opt/oracle/base/oradata/NEW10/test01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd0_WEEKLY_66nl9m8k_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd0_WEEKLY_66nl9m8k_.bkp tag=WEEKLY
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:06:55
Finished restore at 17-AUG-10
datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1sf_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1r9_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_66o0x1vr_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=9 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1vj_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=10 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1rs_.dbf
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 8 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
sql clone "alter database datafile 9 online";
sql clone "alter database datafile 7 online";
# recover and open resetlogs
recover clone database tablespace "TEST", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 8 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 9 online
sql statement: alter database datafile 7 online
Starting recover at 17-AUG-10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00007: /opt/oracle/base/oradata/NEW10/test01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd1_DAILY_66nmf6qs_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd1_DAILY_66nmf6qs_.bkp tag=DAILY
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
starting media recovery
archived log for thread 1 with sequence 39 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_39_66nmc1dg_.arc
archived log for thread 1 with sequence 40 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_40_66nmcfw4_.arc
archived log for thread 1 with sequence 41 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_41_66nmcwcf_.arc
archived log for thread 1 with sequence 42 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_42_66nmddbw_.arc
archived log for thread 1 with sequence 43 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_43_66o0wyys_.arc
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_39_66nmc1dg_.arc thread=1 sequence=39
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_40_66nmcfw4_.arc thread=1 sequence=40
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_41_66nmcwcf_.arc thread=1 sequence=41
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_42_66nmddbw_.arc thread=1 sequence=42
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_43_66o0wyys_.arc thread=1 sequence=43
media recovery complete, elapsed time: 00:00:50
Finished recover at 17-AUG-10
database opened
contents of Memory Script:
# make read only the tablespace that will be exported
sql clone 'alter tablespace TEST read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/opt/oracle/base/flash_recovery_area''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/opt/oracle/base/flash_recovery_area''";
executing Memory Script
sql statement: alter tablespace TEST read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/opt/oracle/base/flash_recovery_area''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/opt/oracle/base/flash_recovery_area''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_BkAq":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_BkAq" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_BkAq is:
EXPDP> /opt/oracle/base/flash_recovery_area/tspitr_BkAq_82690.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TEST:
EXPDP> /opt/oracle/base/oradata/NEW10/test01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_BkAq" successfully completed at 18:25:02
Export completed
contents of Memory Script:
# shutdown clone before import
shutdown clone immediate
# drop target tablespaces before importing them back
sql 'drop tablespace TEST including contents keep datafiles';
executing Memory Script
database closed
database dismounted
Oracle instance shut down
sql statement: drop tablespace TEST including contents keep datafiles
Removing automatic instance
shutting down automatic instance
target database instance not started
Automatic instance removed
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_temp_66o1k480_.tmp deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_3_66o1k0mg_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_2_66o1jyt4_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_1_66o1jx3w_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1rs_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1vj_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_66o0x1vr_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1r9_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1sf_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/controlfile/o1_mf_66o0wsh8_.ctl deleted
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/17/2010 18:25:36
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on default channel at 08/17/2010 18:25:25
RMAN-11003: failure during parse/execution of SQL statement: drop tablespace TEST including contents keep datafiles
ORA-00604: error occurred at recursive SQL level 1
ORA-00060: deadlock detected while waiting for resource
Recovery Manager complete.
Please give your valuable suggestions; it would be very helpful for us.
Edited by: user10750009 on Aug 20, 2010 1:07 AM
Edited by: user10750009 on Aug 20, 2010 1:15 AM
I want to perform TSPITR; during this operation I faced this deadlock error.
Before that we faced a rollback segment error, for which we applied the following workaround.
When we apply this workaround before every backup and restore, we don't get any errors and everything completes successfully.
spool /tmp/Createtest.log
connect / as sysdba
REM Perform startup in case we are still down
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
SHUT IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE ARCHIVELOG;
ARCHIVE LOG START;
ALTER DATABASE OPEN;
connECT / as sysdba;
alter system set undo_management = MANUAL scope=spfile;
shutdown immediate;
startup;
Connect / as sysdba;
DROP TABLE TEST123;
create table test123 (t1 number, t2 varchar2(10));
begin
  for i in 1 .. 300000 loop
    insert into test123 values (i, 'AAAAAAAAAA');  -- was "insert into test": wrong table name
  end loop;
end;
/
delete test123;
commit;
alter system set undo_management = auto scope=spfile;
shutdown immediate ;
startup ;
We applied the above workaround before creating the tablespace and datafile; after that we faced the deadlock error while restoring with TSPITR. Do you need any more information?
Edited by: user10750009 on Aug 20, 2010 1:12 AM -
Transaction Locking during multiple Webservice - persistent web sessions
Hi all,
Yesterday evening we had a discussion concerning ESA architecture. We want to create (web)services for accessing the SAP business objects (using XI) and use these (web)services via Visual Composer, Web Dynpro or custom Java development.
It does not seem to be a big problem to perform creates and reads of transactions, but when we want to change objects, we see some problems concerning locking, committing and rollbacks.
From our GUI we would like to be able to go into edit mode, and from that moment on the transaction should be locked. We then want to change certain parameters and commit only when we push the save button.
We can invoke a webservice which tries to lock the transaction, but at the moment the XI scenario completes (= the lock is created), the program on the SAP side (= a proxy in our case) also finishes and the lock is automatically removed. How can we do locking when using webservices via XI?
The problem of the rollback and commit we could partially solve by putting more logic in the GUI, but we don't want to do that. How can we change a business object and remember this change without doing a commit on the SAP system? The same problem applies to the rollback.
Is there a way to keep a session "alive" during multiple webservice calls, or to simulate one? Every webservice invocation happens in a different context, doesn't it?
Just to make it a bit more clear:
Suppose we create 6 services related to the business object bupa (business partner):
- read
- change
- commit
- rollback
- lock
- unlock
We create a GUI which uses these services.
Step 1: we want to see a bupa in detail, so the read webservice is called and the retrieved details are shown in the GUI.
Step 2: we want to go into edit mode, so the lock webservice is called to lock the bupa. The bupa should stay locked until unlock is called. Here the problem occurs: the lock webservice is called, XI triggers the proxy on the SAP system, and this proxy locks the bupa. As soon as the proxy program completes, the bupa lock is automatically removed. We want to keep this lock!
Step 3: we change the bupa using the change webservice. Only the user who locked the bupa should be able to change it. Here a locking problem occurs as well: by default we don't know who locked the bupa (the lock is taken by the generic RFC user configured in SM59). Should we pass some kind of GUID to the proxy and build additional logic to know which end user actually holds the lock? Using the user ID isn't sufficient, because a user could log on multiple times simultaneously.
Another problem is that we want to change the bupa without committing yet. The commit should happen only when the save button is pushed. When the proxy ends and we did not commit, the changes are normally lost.
What we in fact want to do is simulate the BSP behaviour.
Step 4: we want to save the things we changed, or reset them. This means the commit or rollback webservice is called.
Step 5: we want to unlock the bupa by calling the unlock webservice.
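One common way to keep a lock alive across stateless webservice calls, along the lines of the GUID idea in Step 3, is a persistent lock entry owned by a client-generated token, released explicitly by the unlock service or by timeout. A minimal sketch of that idea (the `LockTable` class and key format are hypothetical, not an SAP API):

```python
import time
import uuid

class LockTable:
    """Application-level locks that survive across stateless calls.
    Each lock is owned by a client token and expires after ttl seconds."""
    def __init__(self, ttl=300):
        self.ttl = ttl
        self.locks = {}  # object key -> (owner token, expiry time)

    def acquire(self, key, token):
        owner = self.locks.get(key)
        if owner and owner[0] != token and owner[1] > time.time():
            return False  # locked by another, still-valid owner
        self.locks[key] = (token, time.time() + self.ttl)
        return True

    def release(self, key, token):
        owner = self.locks.get(key)
        if owner and owner[0] == token:
            del self.locks[key]
            return True
        return False

locks = LockTable()
user_a, user_b = str(uuid.uuid4()), str(uuid.uuid4())
assert locks.acquire("bupa:4711", user_a)        # Step 2: lock webservice
assert not locks.acquire("bupa:4711", user_b)    # a second session is rejected
assert locks.release("bupa:4711", user_a)        # Step 5: unlock webservice
```

The token, not the (shared) RFC user, identifies the lock owner, which also solves the "same user logged on twice" problem.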
Please give me your comments.
Kind regards
Joris
Note: Transaction Locking during multiple Webservice "sessions".
Message was edited by: Joris Verberckmoes
There are multiple strategies to resolve this. They require that the last change time is available in the changed object, and also that the client keeps the value of the change time from when it read the data.
1. First one wins
Immediately before posting the changes, the current change time is read from the server. If it differs from the value in the client buffer, the client's changes are discarded.
Example:
1. Client A reads data
2. Client B reads data
3. Client B changes its buffer
4. Client B checks if server change time has changed (result is no)
5. Client B writes his changes to the server
6. Client A changes its buffer
7. Client A checks if server change time has changed (result is yes)
8. Client A discards its changes
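The "first one wins" flow above can be sketched with an in-memory stand-in for the server; `Server`, `try_save` and the version counter are illustrative names, not a real API:

```python
# A minimal sketch of "first one wins": a change counter stands in for the
# last-change timestamp the clients compare against.

class Server:
    def __init__(self, data):
        self.data = data
        self.change_time = 0  # incremented on every successful write

    def read(self):
        return self.data, self.change_time

    def write(self, data):
        self.data = data
        self.change_time += 1

def try_save(server, buffered_data, change_time_at_read):
    """Write the client's buffer only if nobody changed the record since it was read."""
    _, current = server.read()
    if current != change_time_at_read:
        return False  # another client won; discard our changes
    server.write(buffered_data)
    return True

server = Server("original")
a_data, a_time = server.read()                     # 1. client A reads
b_data, b_time = server.read()                     # 2. client B reads
assert try_save(server, "B's change", b_time)      # 4-5. B checks, then writes
assert not try_save(server, "A's change", a_time)  # 6-8. A's check fails, changes discarded
print(server.data)                                 # B's change survived
```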
2. Last one wins
Easy. The client just writes its changes to the server, overwriting any changes that might have occurred since it read the data.
Example:
1. Client A reads data
2. Client B reads data
3. Client B changes its buffer
4. Client B writes his changes to the server
5. Client A changes its buffer
6. Client A writes its changes to the server -> changes from client B are lost
3. Everybody wins
Most complicated. In case of concurrent changes, the client is responsible for merging its changes with the changes from other clients and for resolving any conflicts.
Example:
1. Client A reads data
2. Client B reads data
3. Client B changes its buffer
4. Client B checks if server change time has changed (result is no)
5. Client B writes his changes to the server
6. Client A changes its buffer
7. Client A checks if server change time has changed (result is yes)
8. Client A merges its changes with changes from client B
9. Client A writes his changes to the server
"Last one wins" is definitely not water-proof. But even with the other strategies, data can potentially get lost in the short timeframe when the change time is checked and the actual update.
To make it more secure, server support is required. E.g. the client could pass the change time from its read access to the server. The server can then reliably reject the update if the data has been updated in between by another client.
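This server-side rejection is commonly implemented as a conditional UPDATE that carries the change time the client read in its WHERE clause, so the check and the write are one atomic statement. A sketch with SQLite (the `bupa` table and its columns are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bupa (id INTEGER PRIMARY KEY, name TEXT, change_time INTEGER)")
con.execute("INSERT INTO bupa VALUES (1, 'original', 0)")
con.commit()

def save(con, bupa_id, new_name, change_time_at_read):
    """Update only if the row still carries the change time the client read.
    rowcount == 0 means another client updated it in between: reject."""
    cur = con.execute(
        "UPDATE bupa SET name = ?, change_time = change_time + 1 "
        "WHERE id = ? AND change_time = ?",
        (new_name, bupa_id, change_time_at_read))
    con.commit()
    return cur.rowcount == 1

# Both clients read change_time = 0; B saves first, so A is rejected.
assert save(con, 1, "changed by B", 0) is True
assert save(con, 1, "changed by A", 0) is False
print(con.execute("SELECT name FROM bupa WHERE id = 1").fetchone()[0])
```

Because the condition sits inside the UPDATE itself, there is no window between check and write, unlike the pure client-side check.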
Gurus,
Please clarify the three questions I am posting below:
1) What is a deadlock situation? How does Oracle treat a deadlock situation?
2) What are the disadvantages of having an index?
3) I have two tables A and B. In table A, I have two columns (say col1, col2); col1 is the primary key column. In table B, I have two columns (say col3, col4); col3 is the primary key column. Col2 of A has a referential integrity constraint to col3 of B, and col4 of B has a referential integrity constraint to col2 of A. Now if I insert values into table A, it shows the error "parent value doesn't exist"; likewise, if I insert values into table B, the same error comes up.
How can we overcome this error?
Please advise.
Regards
Hi.
1) A deadlock is a situation where two or more sessions acquire locks which then prevent each other from moving on, i.e. session one updates row aaa in a table and session two updates row bbb (no commits). Session one then attempts to update row bbb and session two attempts to update row aaa, and both wait for the locks to clear (the default behaviour). Oracle monitors for these situations and will automatically kill one of the sessions and allow the other to complete.
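The detection Andre describes boils down to finding a cycle in the wait-for graph (which session waits for a lock held by which other session) and rolling back one member of the cycle as the victim. A toy illustration of that detection step, not Oracle's actual implementation:

```python
# waits_for maps a session to the session holding the lock it needs.
def find_deadlock(waits_for):
    """Return a cycle of mutually waiting sessions if one exists, else None."""
    for start in waits_for:
        seen = []
        node = start
        while node in waits_for:
            if node in seen:
                return seen[seen.index(node):]  # the cycle of mutual waiters
            seen.append(node)
            node = waits_for[node]
    return None

# Session 1 holds row aaa and waits for bbb; session 2 holds bbb and waits for aaa.
print(find_deadlock({"session 1": "session 2", "session 2": "session 1"}))
# A victim from the returned cycle would then be rolled back to break the deadlock.
```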
2) Indexes are used to speed up access to data in the database and, if associated with a primary or unique key, enforce uniqueness. Their disadvantages are that they take up space and slow down updates and inserts.
3) This is not a deadlock; it is a circular reference. You cannot insert into one table because the other table is expected to already hold the parent value, and vice versa. Typical ways around it are to make one of the foreign key columns nullable (insert with NULL, then update it once both rows exist), or to declare the constraints DEFERRABLE INITIALLY DEFERRED so they are only checked at commit. From a data modelling point of view an unbroken circular reference is unsupportable, like trying to be your father's son and your father's father at the same time.
Regards
Andre