DEAD LOCK ERROR
When we are using our Oracle application, our session hangs every time a user updates records. The DBA said that we have a deadlock error. The only thing he does is restart the database every time we encounter this problem, but we have this error every day, and I have no idea what a deadlock is.
You don't have a deadlock problem, because a deadlock is "solved" automatically: Oracle detects it, kills one of the statements involved, and rolls back its changes. Your DBA is wrong. You probably have a blocking problem, not a deadlock problem. Find the blocker and the waiter, and kill the blocker session.
The following query should give you an idea of who is blocking and who is waiting:
select /*+ ordered */
       a.sid blocker_sid,
       -- d.sql_text,
       a.username blocker_username,
       a.serial#,
       -- a.logon_time,
       b.type,
       b.lmode mode_held,
       b.ctime time_held,
       c.sid waiter_sid,
       c.request request_mode,
       c.ctime time_waited
from   v$lock b, v$enqueue_lock c, v$session a, v$sqltext d
where  d.address = a.prev_sql_addr
and    a.sid = b.sid
and    b.id1 = c.id1(+)
and    b.id2 = c.id2(+)
and    c.type(+) = 'TX'
and    b.type = 'TX'
and    b.block = 1
order by time_held, time_waited;
Look for blocker_sid and waiter_sid. If appropriate, kill the blocker using the following commands:
select sid, serial# from v$session where sid = <blocker_sid>;
alter system kill session '<sid>,<serial#>';
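If you are on Oracle 10g or later, a simpler way to spot the blocker is the BLOCKING_SESSION column of V$SESSION. The query below is a minimal sketch along those lines (the column names are real, but availability varies by version):

```sql
-- Sketch: list each waiter together with the session blocking it.
-- BLOCKING_SESSION is populated in V$SESSION from Oracle 10g onward.
select w.sid              waiter_sid,
       w.username         waiter_username,
       w.event            wait_event,
       w.blocking_session blocker_sid,
       b.username         blocker_username
from   v$session w
join   v$session b on b.sid = w.blocking_session
where  w.blocking_session is not null;
```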
Jaffar
Similar Messages
-
Restore using TSPITR Results Dead lock error
These are the steps I followed, but I am getting a deadlock error. Please give your valuable suggestions.
Product used: Oracle 11g in a Linux environment
1) Before taking the backup, get the SCN number to restore to.
Command applied: select current_scn from v$database;
2) Running a full backup of the database
Command applied:
configure controlfile autobackup on;
backup database;
CROSSCHECK BACKUP;
exit;
3) Running a level 0 incremental backup
Command applied:
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 TAG ='WEEKLY' TABLESPACE TEST;
exit;
4) Running a level 1 incremental backup
Command applied:
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 TAG ='DAILY' TABLESPACE TEST;
5) Before the restore (TSPITR), the following procedure is applied under sysdba privilege
Command applied:
SQL 'exec dbms_backup_restore.manageauxinstance (''TSPITR'',1)';
6) TSPITR restore command
Command applied:
run {
SQL 'ALTER TABLESPACE TEST OFFLINE';
RECOVER TABLESPACE TEST UNTIL SCN 1791053 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
SQL 'ALTER TABLESPACE TEST ONLINE';
}
I also tried this option (and got the same error):
Command applied:
run {
SQL 'ALTER TABLESPACE TEST OFFLINE';
SET UNTIL SCN 1912813;
RESTORE TABLESPACE TEST;
RECOVER TABLESPACE TEST UNTIL SCN 1912813 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
SQL 'ALTER TABLESPACE TEST ONLINE';
}
The following error appears for the above restore command:
Recovery Manager: Release 11.2.0.1.0 - Production on Tue Aug 17 18:11:18 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: NEW10 (DBID=2860680927)
RMAN> run
2> {
3> SQL 'ALTER TABLESPACE TEST OFFLINE';
4> RECOVER TABLESPACE TEST UNTIL SCN 1791053 AUXILIARY DESTINATION '/opt/oracle/base/flash_recovery_area';
5> SQL 'ALTER TABLESPACE TEST ONLINE';
6> }
7>
using target database control file instead of recovery catalog
sql statement: ALTER TABLESPACE TEST OFFLINE
Starting recover at 17-AUG-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=404 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='BkAq'
initialization parameters used for automatic instance:
db_name=NEW10
db_unique_name=BkAq_tspitr_NEW10
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=/opt/oracle/base/flash_recovery_area
log_archive_dest_1='location=/opt/oracle/base/flash_recovery_area'
#No auxiliary parameter file used
starting up automatic instance NEW10
Oracle instance started
Total System Global Area 292933632 bytes
Fixed Size 1336092 bytes
Variable Size 100666596 bytes
Database Buffers 184549376 bytes
Redo Buffers 6381568 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
executing Memory Script
executing command: SET until clause
Starting restore at 17-AUG-10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=59 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/autobackup/2010_08_17/o1_mf_s_727280767_66nmo8x7_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/autobackup/2010_08_17/o1_mf_s_727280767_66nmo8x7_.bkp tag=TAG20100817T142607
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/opt/oracle/base/flash_recovery_area/NEW10/controlfile/o1_mf_66o0wsh8_.ctl
Finished restore at 17-AUG-10
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 8 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 9 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 7 to
"/opt/oracle/base/oradata/NEW10/test01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 8, 3, 2, 9, 7;
switch clone datafile all;
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_temp_%u_.tmp in control file
Starting restore at 17-AUG-10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00009 to /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnndf_TAG20100817T140128_66nl7174_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnndf_TAG20100817T140128_66nl7174_.bkp tag=TAG20100817T140128
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:45
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to /opt/oracle/base/oradata/NEW10/test01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd0_WEEKLY_66nl9m8k_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd0_WEEKLY_66nl9m8k_.bkp tag=WEEKLY
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:06:55
Finished restore at 17-AUG-10
datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1sf_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1r9_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_66o0x1vr_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=9 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1vj_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=10 STAMP=727294911 file name=/opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1rs_.dbf
contents of Memory Script:
# set requested point in time
set until scn 1791053;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 8 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
sql clone "alter database datafile 9 online";
sql clone "alter database datafile 7 online";
# recover and open resetlogs
recover clone database tablespace "TEST", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 8 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 9 online
sql statement: alter database datafile 7 online
Starting recover at 17-AUG-10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting incremental datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00007: /opt/oracle/base/oradata/NEW10/test01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd1_DAILY_66nmf6qs_.bkp
channel ORA_AUX_DISK_1: piece handle=/opt/oracle/base/flash_recovery_area/NEW10/backupset/2010_08_17/o1_mf_nnnd1_DAILY_66nmf6qs_.bkp tag=DAILY
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
starting media recovery
archived log for thread 1 with sequence 39 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_39_66nmc1dg_.arc
archived log for thread 1 with sequence 40 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_40_66nmcfw4_.arc
archived log for thread 1 with sequence 41 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_41_66nmcwcf_.arc
archived log for thread 1 with sequence 42 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_42_66nmddbw_.arc
archived log for thread 1 with sequence 43 is already on disk as file /opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_43_66o0wyys_.arc
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_39_66nmc1dg_.arc thread=1 sequence=39
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_40_66nmcfw4_.arc thread=1 sequence=40
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_41_66nmcwcf_.arc thread=1 sequence=41
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_42_66nmddbw_.arc thread=1 sequence=42
archived log file name=/opt/oracle/base/flash_recovery_area/NEW10/archivelog/2010_08_17/o1_mf_1_43_66o0wyys_.arc thread=1 sequence=43
media recovery complete, elapsed time: 00:00:50
Finished recover at 17-AUG-10
database opened
contents of Memory Script:
# make read only the tablespace that will be exported
sql clone 'alter tablespace TEST read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/opt/oracle/base/flash_recovery_area''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/opt/oracle/base/flash_recovery_area''";
executing Memory Script
sql statement: alter tablespace TEST read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/opt/oracle/base/flash_recovery_area''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/opt/oracle/base/flash_recovery_area''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_BkAq":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX
EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_BkAq" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_BkAq is:
EXPDP> /opt/oracle/base/flash_recovery_area/tspitr_BkAq_82690.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TEST:
EXPDP> /opt/oracle/base/oradata/NEW10/test01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_BkAq" successfully completed at 18:25:02
Export completed
contents of Memory Script:
# shutdown clone before import
shutdown clone immediate
# drop target tablespaces before importing them back
sql 'drop tablespace TEST including contents keep datafiles';
executing Memory Script
database closed
database dismounted
Oracle instance shut down
sql statement: drop tablespace TEST including contents keep datafiles
Removing automatic instance
shutting down automatic instance
target database instance not started
Automatic instance removed
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_temp_66o1k480_.tmp deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_3_66o1k0mg_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_2_66o1jyt4_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/onlinelog/o1_mf_1_66o1jx3w_.log deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1rs_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_sysaux_66o0x1vj_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_undotbs1_66o0x1vr_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1r9_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/datafile/o1_mf_system_66o0x1sf_.dbf deleted
auxiliary instance file /opt/oracle/base/flash_recovery_area/NEW10/controlfile/o1_mf_66o0wsh8_.ctl deleted
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/17/2010 18:25:36
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of sql command on default channel at 08/17/2010 18:25:25
RMAN-11003: failure during parse/execution of SQL statement: drop tablespace TEST including contents keep datafiles
ORA-00604: error occurred at recursive SQL level 1
ORA-00060: deadlock detected while waiting for resource
Recovery Manager complete.
Please give your valuable suggestions; it would be very helpful for us.
Edited by: user10750009 on Aug 20, 2010 1:07 AM
Edited by: user10750009 on Aug 20, 2010 1:15 AM
I want TSPITR; during this operation I faced this deadlock error.
Before that we faced a rollback segment error, for which we applied the following workaround.
If I apply this workaround before every backup and restore, I don't get any error; everything goes through successfully.
spool /tmp/Createtest.log
connect / as sysdba
REM Perform startup in case we are still down
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
SHUT IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE ARCHIVELOG;
ARCHIVE LOG START;
ALTER DATABASE OPEN;
connECT / as sysdba;
alter system set undo_management = MANUAL scope=spfile;
shutdown immediate;
startup;
Connect / as sysdba;
DROP TABLE TEST123;
create table test123 (t1 number, t2 varchar2(10));
begin
  for i in 1 .. 300000 loop
    insert into test123 values (i, 'AAAAAAAAAA');
  end loop;
end;
/
delete test123;
commit;
alter system set undo_management = auto scope=spfile;
shutdown immediate ;
startup ;
We applied the above workaround before creating the tablespace and datafile; after that we still face the deadlock error while restoring with TSPITR. Do you need any more information?
Edited by: user10750009 on Aug 20, 2010 1:12 AM -
Dead lock error while updating data into cube
We have a scenario of a daily truncate and upload of data into a cube, with volumes arriving at about 2 million records per day. We have the parallel processing setting (PSA and data targets in parallel) in the InfoPackage to speed up the data load. This entire process runs through a process chain.
We are facing a deadlock issue every day. How can we avoid this?
In general, deadlocks occur because of degenerated indexes when volumes are very high. So my question is: does deleting the indexes of the cube every day, along with the 'delete data target contents' process, help avoid the deadlock?
Also observed: updating values into one InfoObject is taking a long time, approximately 3 minutes for each data packet. That InfoObject is placed in a dimension defined as a line item, as the volumes are very high for that specific object.
So this is the overall scenario!
Two things:
1) Will deletion and recreation of indexes help avoid the deadlock?
2) Any idea why the insert into the InfoObject is taking so long (there is a direct read on the SID table of that object, as observed in the SQL statement)?
Regards.
Hello,
1) Will deletion and recreation of indexes help avoid the deadlock?
Ans:
To avoid this problem, drop the indexes of the cube before uploading the data, and rebuild them afterwards.
Also:
In SM12, find out which process is causing the lock and delete it.
In SM66, find any process that has been running for a very long time and stop it.
Check transaction SM50 for the number of processes available in the system. If they are not adequate, you have to increase them with the help of the Basis team.
2) Any idea why the insert into the InfoObject is taking so long (there is a direct read on the SID table of that object, as observed in the SQL statement)?
Ans:
A line-item dimension is one way to improve data load as well as query performance by eliminating the need for a dimension table, so while loading/reading there is one less table to deal with.
Check the transformation mapping of that characteristic; if any routine/formula is written there, it can lead to more processing time for that InfoObject.
Storing mass data in InfoCubes at document level is generally not recommended, because when data is loaded, a huge SID table is created for the document-number line-item dimension.
Check whether your InfoObject is similar to a document number.
Regards,
Dhanya -
Hi,
One of my developers was trying to insert into one table from 2 different sessions at the same time and got a deadlock error. From my trace file I got this:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00090026-00290e72 168 227 X 148 223 S
TX-00020012-0034e086 148 223 X 168 227 S
session 227: DID 0001-00A8-000036D2 session 223: DID 0001-0094-000052FC
session 223: DID 0001-0094-000052FC session 227: DID 0001-00A8-000036D2
Rows waited on:
Session 223: obj - rowid = 00125244 - AAAAAAAIqAAATseAAA <-------------------- 00125244
(dictionary objn - 1200708, file - 554, block - 80670, slot - 0)
Session 227: obj - rowid = 00125244 - AAAAAAAIqAAATsdAAA <-------------------- 00125244
(dictionary objn - 1200708, file - 554, block - 80669, slot - 0)
It seems the 2 sessions were trying to take the same rowid and insert into it. How does that happen? Inserts should pick up rowids and proceed independently.
Please suggest what the reason could be.
Thanks Very Much
Anand
Get with your developer and see what the code is really doing, as well as how it is being used. All of the deadlocks I have encountered to date had their root cause in the application code. Spending some time with the developers to see what they are doing, and more importantly why, will lead you to a solution to your deadlock problem.
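For reference, the most common way application code produces ORA-00060 is two transactions touching the same rows in opposite order. A minimal illustration (hypothetical table `t` with rows id 1 and 2, not the poster's code):

```sql
-- Sketch of the classic ORA-00060 cycle:
--   Session A:                            Session B:
--     update t set val = 1 where id = 1;    update t set val = 2 where id = 2;
--     update t set val = 1 where id = 2;    update t set val = 2 where id = 1;  -- deadlock
--
-- Fix: have every transaction touch rows in the same order (e.g. ascending id),
-- so one session simply waits on the other instead of forming a cycle:
update t set val = 1 where id = 1;
update t set val = 1 where id = 2;
commit;
```

Insert-time deadlocks like the one in this trace can also come from unique-index or ITL contention, but the remedy is the same: inspect what both sessions do, in what order.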
-
Sessions were still active even though a deadlock was detected
Hi all,
Yesterday I saw very odd Oracle behaviour. When Oracle finds a deadlock, it should kill those sessions automatically. In my case those two sessions were still trying to run the same update command and were causing deadlocks again and again for an hour. I had to kill those sessions manually to stop the deadlocks.
How could those sessions still be trying, even though the deadlock was detected, and keep causing deadlocks? My log file is filled with this deadlock error. When I killed those sessions, it ended up with a 'snapshot too old' error.
Please suggest.
Thanks
Hi,
Just ROLLBACK or COMMIT either one of the sessions and you will be out of the deadlock.
One more thing: in a deadlock situation the sessions are not terminated. Oracle rolls back only the statement that hit the deadlock, and each session is still waiting for the locks acquired by the other.
Try this; if it does not work, please reply.
Have a nice time, and best of luck.
Dead lock caused by another user! (partition table)
I have a question about a deadlock.
Our situation is:
User "A" made a partitioned table, ACNT_WONJANG
(without any trigger, function, or procedure).
When "B", another user, tried to drop its partition,
a deadlock occurred,
but "A" dropped its own partition fine.
What can I do?
This is the trace file:
/oracle/home/admin/ACNT/udump/ora_44478_acnt.trc
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production
ORACLE_HOME = /oracle/home
System name: AIX
Node name: acnt
Release: 3
Version: 4
Machine: 000C962D4C00
Instance name: ACNT
Redo thread mounted by this instance: 1
Oracle process number: 15
Unix process pid: 44478, image: oracle@acnt (TNS V1-V3)
*** SESSION ID:(16.394) 2001-10-04 15:00:41.829
A self-deadlock among DDL and parse locks
is detected. In most cases, this self-deadlock
is handled internally.
This should be reported to Oracle Support
ONLY IF an error is signalled back to the
user on a command-line or screen.
The following information may aid in finding
the problem.
ORA-04020: deadlock detected while trying to lock object
F03P.ACNT_WONJANG
session: 440786b4 request: X
LIBRARY OBJECT HANDLE: handle=43108348
name=F03P.ACNT_WONJANG
hash=76b93583 timestamp=NULL
namespace=TABL/PRCD/TYPE flags=KGHP/TIM/SML/[02000000]
kkkk-dddd-llll=0000-0001-0001 lock=S pin=S latch=0
lwt=43108360[43108360,43108360] ltm=43108368[43108368,43108368]
pwt=43108378[43108378,43108378] ptm=431083d0[431083d0,431083d0]
ref=43108350[43108350,43108350] lnd=431083dc[4310824c,425b7ec4]
LIBRARY OBJECT: object=431080d0
flags=NEX[0002] pflags= [00] status=VALD load=0
DATA BLOCKS:
data# heap pointer status pins change
0 431082d8 43108154 I/P/A 0 NONE
HEAP DUMP OF DATA BLOCK 0:
HEAP DUMP heap name="library cache" desc=0x431082d8
extent sz=0x224 alt=32767 het=8 rec=9 flg=2 opc=0
parent=30000030 owner=431080d0 nex=0 xsz=0x0
EXTENT 0
Chunk 431080c0 sz= 196 perm "perm "
alo=196
431080C0 500000C5 00000000 00000000 000000C4 [P...............]
431080D0 43108348 431080D4 431080D4 431080DC [C..HC...C...C...]
431080E0 431080DC 00000000 00000000 00020100 [C...............]
431080F0 00000000 00000000 00000000 00000000 [................]
43108100 43108144 00000000 00000000 00000000 [C..D............]
43108110 00000000 00000000 00000000 00000000 [................]
Repeat 2 times
43108140 00000000 431082D8 00000000 43108154 [....C.......C..T]
43108150 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
43108170 00000000 00000000 00000019 00000000 [................]
43108180 00000000 [....]
Total heap size = 196
FREE LISTS:
Bucket 0 size=0
Total free space = 0
UNPINNED RECREATABLE CHUNKS (lru first):
PERMANENT CHUNKS:
Chunk 431080c0 sz= 196 perm "perm "
alo=196
Permanent space = 196
carlyfromal wrote:
Here's the thing: I myself have an iPad 3 that I got from eBay that is activation locked, and I have the same issue. I can't get the info. Since Apple conveniently decided to discontinue selling the iPad 3, the only way I could get one was to buy a used one, so it looks to me like they could have some mercy and help a person unlock the thing. We're not dishonest people that go around stealing things, yet because of Apple's brilliant (I use that term sarcastically) idea to put this stupid new scheme in place, people like us who have to buy second-hand products have to suffer and get screwed out of money we had to save up to buy this stuff! And all anyone can come up with is "well boohoo" or "tough luck" or whatever! But what about the rights of the rest of us?! Some of you may find this a tad rude, but oh well, tough luck!
On the other hand, there are those of us who appreciate the theft protection provided by the latest iOS.
There are certain things to watch out for when purchasing used devices of any sort, the first of which is to ensure that you're not buying stolen property. Since you are unable to obtain cooperation from the seller, perhaps your device was stolen! -
Frequent deadlocks in SQL Server 2008 R2 SP2
Hi,
We are experiencing frequent deadlocks in our application. We are using SQL Server 2008 R2 SP2. When our application is configured for 5-6 app servers, this issue occurs frequently.
But when the same application is used with SQL Server 2008 R2 or SQL Server 2012, we don't see the deadlock issue. From the error log and SQL trace, the error message is thrown for the database table JobLock. We have a stored procedure that inserts/updates the above table when a job moves from one service to another. The same procedure works fine with the 2008 R2 and SQL Server 2012 versions.
Is the above issue related to the hotfix from the below url?
http://support.microsoft.com/kb/2703275
Following error message is seen frequently in the log file.
INFO : 03/24/2014 10:26:30:290 PM: [00007900:00005932] [Xerox.ISP.Workflow.ManagedActivity.PersistInTransaction] System.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 62) was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoExecuteNonQuery(DbCommand command)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteNonQuery(DbCommand command, DbTransaction transaction)
at Xerox.ISP.DataAccess.Data.Utility.ExecuteNonQuery(TransactionManager transactionManager, DbCommand dbCommand)
at Xerox.ISP.DataAccess.Data.SqlClient.SqlActivityProviderBase.ActivityReady(TransactionManager transactionManager, Int32 start, Int32 pageLength, Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1
CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName, Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS)
at Xerox.ISP.DataAccess.Domain.ActivityBase.ActivityReady(Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1 CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName,
Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS, Int32 start, Int32 pageLength)
at Xerox.ISP.DataAccess.Domain.ActivityBase.ActivityReady(Nullable`1 ActivityID, Nullable`1 JobId, String ContentUrl, Nullable`1 PrevWorkStep, Nullable`1 CurrentWorkStep, String Principal, Nullable`1 Status, Nullable`1 ServerID, String HostName,
Nullable`1 LockUserID, Nullable`1& ErrorCode, Byte[]& Activity_TS)
at Xerox.ISP.Workflow.ManagedActivity.<>c__DisplayClass2f.<ActivityReady>b__2d()
at Xerox.ISP.Workflow.ManagedActivity.PersistInTransaction(Boolean createNew, PersistMethod persist)
ClientConnectionId:9e44a64f-5014-4634-9cee-4581e1b9c299
I look forward to the suggestions to get the issue resolved. Your input is much appreciated.
Thanks,
Keshava.
If you are having deadlock trouble in your SQL Server instance, this recipe demonstrates how to make sure deadlocks are logged appropriately to the SQL Server Management Studio SQL log using the DBCC TRACEON, DBCC TRACEOFF, and DBCC TRACESTATUS commands. These commands enable, disable, and check the status of trace flags.
To determine the cause of a deadlock, we need to know the resources involved and the types of locks acquired and requested. For this kind of information, SQL Server provides Trace Flag 1222 (this flag supersedes 1204, which was frequently used in earlier versions of SQL Server).
DBCC TRACEON (1222, -1);
GO
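To confirm the flag took effect, and to turn it off again once a deadlock graph has been captured, the companion commands mentioned above can be used like this (a minimal sketch):

```sql
-- Check whether trace flag 1222 is currently enabled globally
DBCC TRACESTATUS (1222, -1);
GO
-- Disable the flag again once diagnosis is complete
DBCC TRACEOFF (1222, -1);
GO
```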
With this flag enabled, SQL Server will provide output in the form of a deadlock graph, showing the executing statements
for each session, at the time of the deadlock; these are the statements that were blocked and so formed the conflict or cycle that led to the deadlock.
Be aware that it is rarely possible to guarantee that deadlocks will never occur. Tuning for deadlocks
primarily involves minimizing the likelihood of their occurrence. Most of the techniques for minimizing the occurrence of deadlocks are similar to the general techniques for minimizing blocking problems. -
FOR UPDATE cursor is causing Blocking/ Dead Locking issues
Hi,
I am facing one of the more complex issues regarding blocking/deadlocking. Please find the details below and suggest the best approach to go ahead with this.
It is a core investment banking domain; in our day-to-day business we use many transaction tables for processing trades and placing orders. Specifically, there are two main transaction tables:
1) Transaction table 1
2) Transaction table 2
Both of these tables contain a huge amount of data. In one of our applications, to maintain data integrity (during this process we do not want other users to change these rows), we have placed a SELECT … FOR UPDATE cursor on these two tables and locked all the rows during the process. Batch jobs (shell scripts) call this procedure multiple times per day, about one hour apart, starting at 7:15 AM and finishing around 5 PM. The reason we run the same procedure multiple times is that our business wants to see each voucher before it is finalized: an order can be placed and then updated or cancelled several times in a single day, so at the end of the day we send the finalized update to our client.
20 07 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 08 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 09 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 10 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 11 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 12 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 13 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 14 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 15 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 16 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
20 17 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
Current program (pseudocode; variable declarations elided):

App_Proc
DECLARE
/* Cursor 1: takes the order details (source) and populates them into the transaction table */
CURSOR cursor_upload IS
  SELECT col1, col2, … FROM transaction_table1 t1, source_table1 s
  WHERE t1.id_no = s.id_no
  AND t1.id_flag = 'N'
  FOR UPDATE OF t1.id_flag;

/* Cursor 2: inserts another entry if any updates happened on the source table for the records inserted by cursor 1 */
CURSOR cursor_update IS
  SELECT col1, col2, … FROM transaction_table2 t2, transaction_table1 t1
  WHERE t1.id_no = t2.id_no
  AND t1.id_flag = 'Y'
  AND t1.dml_action IN ('U', 'D')  -- retrieves the records updated or deleted recently for the rows inserted into transaction_table1
  FOR UPDATE OF t1.id_no, t1.id_flag;

BEGIN
-- Block 1
BEGIN
  FOR v_upload IN cursor_upload
  LOOP
    INSERT INTO transaction_table2 (id_no, dml_action, …)
    VALUES (v_upload.id_no, 'I', …)  -- 'I' stands for INSERT
    RETURNING id_no INTO v_no;
    /* Update the flag in the source table after the population ('N' = order not placed yet, 'Y' = order processed for the first time) */
    UPDATE transaction_table1
    SET id_flag = 'Y'
    WHERE id_no = v_no;
  END LOOP;
EXCEPTION WHEN OTHERS THEN
  DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;

-- Block 2
BEGIN
  FOR v_update IN cursor_update
  LOOP
    INSERT INTO transaction_table2 (id_no, id_prev_no, dml_action, …)
    VALUES (v_id_seq_no, v_update.id_no, …)
    RETURNING id_no INTO v_no;
    UPDATE transaction_table1
    SET id_flag = 'Y'
    WHERE id_no = v_no;
  END LOOP;
EXCEPTION WHEN OTHERS THEN
  DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
END;  -- main block end
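Not from the thread, but one common way to shrink the blocking window in a job like this is to fail fast instead of queueing behind other sessions' locks. A minimal sketch, reusing the hypothetical table and column names from the program above:

```sql
-- Fail immediately with ORA-00054 instead of waiting on locked rows
DECLARE
  CURSOR c IS
    SELECT id_no FROM transaction_table1
    WHERE id_flag = 'N'
    FOR UPDATE OF id_flag NOWAIT;
  resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(resource_busy, -54);
BEGIN
  FOR r IN c LOOP
    UPDATE transaction_table1 SET id_flag = 'Y' WHERE id_no = r.id_no;
  END LOOP;
EXCEPTION
  WHEN resource_busy THEN
    DBMS_OUTPUT.PUT_LINE('Rows locked by another session; retry later');
END;
/
```

From 11g onwards, FOR UPDATE SKIP LOCKED can be used instead of NOWAIT to process only the rows that are not currently locked.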
Sample output in Transaction table1:

Id_no | Tax_amt | Re_emburse_amt | Activ_DT    | Id_Flag | DML_ACTION
01    | 1,835   | 4300           | 12/JUN/2009 | N       | I
02    | 1,675   | 3300           | 12/JUN/2009 | Y       | U
03    | 4475    | 6500           | 12/JUN/2009 | N       | D
(DML_ACTION is set whenever a DML operation occurs on this table)

Sample output in Transaction table2:

Id_no | Prev_id_no | Tax_amt | Re_emburse_amt | Activ_DT
001   | 01         | 1,835   | 4300           | 12/JUN/2009 11:34 AM  (the 2nd cursor populates this row because an update happened to the record below; this is the 2nd voucher)
01    | 0          | 1,235   | 6300           | 12/JUN/2009 09:15 AM  (the 1st cursor populates this record when the job runs the first time)
02    | 0          | 1,675   | 3300           | 12/JUN/2009 08:15 AM
003   | 03         | 4475    | 6500           | 12/JUN/2009 11:30 AM
03    | 0          | 1,235   | 4300           | 12/JUN/2009 10:30 AM
Now the issue is:
When this process runs, our other application jobs fail, because they also use these two main transaction tables, so deadlocks are detected in those applications.
Solution needed:
Can anyone suggest how to rectify these blocking/locking/deadlock issues? I want my other applications to still be able to use these tables during this process.
Regards,
Maran

hmmm.... this leads to a warning:
SQL> ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
Session altered.
CREATE OR REPLACE PROCEDURE MYPROCEDURE
AS
MYCOL VARCHAR(10);
BEGIN
SELECT col2
INTO MYCOL
FROM MYTABLE
WHERE col1 = 'ORACLE';
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
/
SP2-0804: Procedure created with compilation warnings
SQL> show errors
Errors for PROCEDURE MYPROCEDURE:
LINE/COL ERROR
12/9 PLW-06009: procedure "MYPROCEDURE" OTHERS handler does not end in RAISE or RAISE_APPLICATION_ERROR
:) -
Gurus,
Please clarify me the three questions which I am posting below
1) What's the deadlock situation ? How oracle treats the dead lock situation
2) Disadvantages of having index
3) I have two tables A and B. In table A, I have two columns (say col1, col2); col1 is the primary key column. In table B, I have two columns (say col3, col4); col3 is the primary key column. Col2 of A has referential integrity to col3 of B, and col4 of B has referential integrity to col2 of A. Now if I insert values into table A, it shows the error "parent value doesn't exist"; likewise, if I insert values into table B, the same error comes up.
How can I overcome this error?
Please advise.
Regards

Hi.
1) A deadlock is a situation where two or more sessions acquire locks which then prevent each other from moving on. For example, session one updates row aaa in a table and session two updates row bbb (no commits). Session one then attempts to update row bbb and session two attempts to update row aaa, and both wait for the locks to clear (default behaviour). Oracle monitors for these situations and will automatically kill one of the sessions and allow the other to complete.
2) Indexes are used to speed up access to data in the database and if associated with a Primary or Unique Key, enforce uniqueness. They have the disadvantages of taking up space and slowing down updates and inserts.
3) This is not a deadlock. It is a circular reference. You cannot insert into one table because the other table is expected to have a parent value and vice versa. From a data modelling point of view a circular reference is unsupportable and meaningless. Like trying to be your father's son and your father's father at the same time.
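If the circular reference is genuinely required, one standard workaround (not mentioned in the thread) is to make the foreign keys DEFERRABLE, so both inserts can happen before the constraints are checked at COMMIT. A sketch with hypothetical names, assuming each foreign key targets the other table's primary key:

```sql
CREATE TABLE a (col1 NUMBER PRIMARY KEY,
                col2 NUMBER);
CREATE TABLE b (col3 NUMBER PRIMARY KEY,
                col4 NUMBER);
ALTER TABLE a ADD CONSTRAINT fk_a_b FOREIGN KEY (col2) REFERENCES b (col3)
  DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE b ADD CONSTRAINT fk_b_a FOREIGN KEY (col4) REFERENCES a (col1)
  DEFERRABLE INITIALLY DEFERRED;

-- Both rows can now be inserted in one transaction;
-- the constraints are checked only at COMMIT.
INSERT INTO a (col1, col2) VALUES (1, 10);
INSERT INTO b (col3, col4) VALUES (10, 1);
COMMIT;
```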
Regards
Andre -
Dead Lock occured while Sync index in oracle text
Hi All,
We are facing a deadlock issue while syncing the Oracle Text index. The index is built on a locally partitioned table, and the sync job has the parameters below:
parallel - 4
memory - 20M
the error message is,
System error: Plsql job execution is failed with
error code -20000 and error message ORA-20000: Oracle Text error: DRG-50610:
internal error: drvdml.ParallelDML DRG-50857: oracle error in
drvdml.ParallelDML ORA-12801: error signaled in parallel query server P003,
instance xxxx.enterprisenet.org:xxxx (1) ORA-20000: Oracle Text error:
DRG-50857: oracle error in drepdump_dollarp_insert ORA-00060: deadlock detected
while waiting for resource ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.DRVPARX",
Thanks in advance.

How many occurrences of XYZ are there per XML document?
If there are more than one, then obviously you cannot create such an index on it.
In this case, you'll need an XMLIndex, unstructured or structured, depending on the type of queries you want to run.
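For reference, an unstructured XMLIndex can be created along these lines (the table, column, and path names here are hypothetical, with XYZ standing in for the element from the thread):

```sql
-- Hypothetical: XMLType column xml_data in table xml_docs
CREATE INDEX xml_docs_xidx ON xml_docs (xml_data)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('PATHS (INCLUDE (/root/XYZ))');
```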
If there's only one occurrence, could you post a sample document and your db version?
Thanks. -
DEAD LOCKS on table ARFCSSTATE
Please help!
Essentially the problem is that anything that updates the customer master (T/C BP) causes deadlocks on table ARFCSSTATE, and the queues either slow down terribly or hang (stop). When one tries to delete an entry in the queue, a short dump DBIF_RSQL_SQL_ERROR occurs in ARFC_RUN.
We are using: CRM 3.0 with the following service packs:
SAP Basis release 610 level 38
SAP ABA Release 50A level 38
BBPCRM Release 300 level 17
Points will be given.
Thank you.

Hi Surendra,
I have experienced this error before; the Basis people resolved it for me. It is better to post this issue to them.
Does this problem occur for all DataSources, or only for a particular DataSource when scheduling the InfoPackage?
Regards
Vijay -
Hi ,
While doing some work on the BIW side, I got a runtime error (dump) of deadlock.
It is a SQL statement error:
INSERT L_TAB FROM G_U_TAB.
On this statement I get the runtime error (dump) of deadlock.
What do I have to do? Please help me.

hi,
The structure of G_U_tab should be the same as that of L_TAB:
data : G_U_tab like L_TAB occurs 0 with header line.
regards,
Santosh -
ORA-00060 dead lock happen on R12 demo database
We have EBS R12 (12.0.4) on a Red Hat Linux system. When I checked the alert.log file, I found that the demo database (VIS) has an ORA-00060 deadlock. The deadlock happens on the statement:
UPDATE FND_CONCURRENT_REQUESTS SET PP_END_DATE = SYSDATE, POST_REQUEST_STATUS = 'E' WHERE REQUEST_ID = :B1
Do I need to do anything about this ORA- error, or can I just ignore it?
Thanks.

Thank you for the answer. I checked the CM log and did NOT find any error.
I also read document 153717.1 and compared the FND_CONCURRENT_REQUESTS table definition. The initial number of transactions is 10 and the max is 255. I tried to use OEM to change the "initial transaction number" but it does not allow me to.
Any ideas???
Hi
I am running a little test that runs two threads updating the same table. Each thread tries to update several documents within a single transaction. The update is done by retrieving the document, modifying it, adding the updated document, and deleting the existing document.
I have enabled deadlock detection, and when a deadlock exception is thrown I abort the current transaction.
However, after a few iterations, deleteDocument causes a core dump and DBXML prints to stderr: "Previous deadlock return not resolved".
Is there anything else that must be resolved when a deadlock occurs, other than aborting the transaction?
This is the relevant stack trace:
#0 0x00421780 in Dbc::get () from /u/yoava/dbxml-2.2.13/install/lib/libdb_cxx-4.3.so
(gdb) where
#0 0x00421780 in Dbc::get () from /u/yoava/dbxml-2.2.13/install/lib/libdb_cxx-4.3.so
#1 0x00934289 in DbXml::SyntaxDatabase::updateStatistics (this=0x2000001c,
context=@0x8641c34, key=@0x8628070, statistics=@0x8628088) at Cursor.hpp:48
#2 0x00901d10 in DbXml::StatisticsWriteCache::updateContainer (this=0x8641bcc,
context=@0x8641c34, container=Internal: global symbol `Container' found in Container.cpp psymtab but not in symtab.
Container may be an inlined function, or may be a template function
(if a template, try specifying an instantiation: Container<type>).
) at /usr/include/c++/3.2.3/bits/stl_tree.h:202
#3 0x0093bacf in DbXml::KeyStash::updateIndex (this=0x8634df8, context=@0x8641c34,
container=0x8630e68) at KeyStash.cpp:210
#4 0x008c72b7 in DbXml::Container::deleteDocument (this=0x8630e68, txn=0x86417f8,
document=@0x8671f38, context=Internal: global symbol `UpdateContext' found in UpdateContext.cpp psymtab but not in symtab.
UpdateContext may be an inlined function, or may be a template function
(if a template, try specifying an instantiation: UpdateContext<type>).
) at Container.cpp:679
#5 0x008d3475 in DeleteDocumentFunctor2::method (this=0x2000001c, container=@0x8630e68,
txn=Internal: global symbol `Transaction' found in Transaction.cpp psymtab but not in symtab.
Transaction may be an inlined function, or may be a template function
(if a template, try specifying an instantiation: Transaction<type>).
) at TransactedContainer.cpp:121
#6 0x008d3149 in DbXml::TransactedContainer::transactedMethod (this=0x8630e68,
txn=0x3d71c8, flags=0, f=@0xb176b470) at TransactedContainer.cpp:217
#7 0x008d2fe8 in DbXml::TransactedContainer::deleteDocument (this=0x8630e68,
txn=0x86417f8, document=Internal: global symbol `Document' found in Document.cpp psymtab but not in symtab.
Document may be an inlined function, or may be a template function
(if a template, try specifying an instantiation: Document<type>).
) at TransactedContainer.cpp:26
#8 0x00906940 in DbXml::XmlContainer::deleteDocument (this=0xbfff8c94, txn=@0xb176b5a0,
document=Internal: global symbol `XmlDocument' found in XmlDocument.cpp psymtab but not in symtab.
XmlDocument may be an inlined function, or may be a template function
(if a template, try specifying an instantiation: XmlDocument<type>).
) at /u/yoava/dbxml-2.2.13/dbxml/include/dbxml/XmlDocument.hpp:72
#9 0x0804a5c0 in DoUpdates (arg=0xbfff8c90) at dbxml_test_6.cpp:99
#10 0x003dedec in start_thread () from /lib/tls/libpthread.so.0
#11 0x0037ea2a in clone () from /lib/tls/libc.so.6
thanks

I have applied them. When applying patch 6 I got an error:
compile3.server 46% patch < patch.2.2.13.6
patch.2.2.13.6: No such file or directory.
compile3.server 47% patch < patch.2.2.13.6
(Stripping trailing CRs from patch.)
can't find file to patch at input line 3
Perhaps you should have used the -p or --strip option?
The text leading up to this was:
|*** NsEventGenerator.cpp.orig Thu Dec 8 14:50:50 2005
|--- dbxml/src/dbxml/nodeStore/NsEventGenerator.cppThu Sep 28 17:24:57 2006
File to patch: dbxml/src/dbxml/DbWrapper.hpp
patching file dbxml/src/dbxml/DbWrapper.hpp
Hunk #1 FAILED at 357.
1 out of 1 hunk FAILED -- saving rejects to file dbxml/src/dbxml/DbWrapper.hpp.rej
However this patch does not seem to be lock related. -
Dead lock problem occur in Ms-Sql Server
Hi friends,
I am using the 1,Tomcat server
2, jdbc-odbc-bridge driver
In my application, under multi-user access, it throws a deadlock
exception. How do I solve the deadlock problem? Please help.
Can I modify the DB connection?
Please help me .... how to solve the deadlock problem..
please ............ it's urgent

I am using this stored procedure, which produces the deadlock condition. The tables are inserted into in this order:
Imm_tblGameTransactions - primary table
Imm_tblGameDetailsBJ - secondary table
Please check it.....
Please explain briefly..... ....
please.........
CREATE procedure IMM_BJDeal
@plid int,
@gameid int,
@betamt money,
@bal money,
@winamt money,
@usercards nvarchar(500),
@dealercards nvarchar(500),
@useracecnt int,
@dealeracecnt int,
@dealerbj int,
@userbj int,
@insurance int,
@split int,
@push int,
@sessionid int,
@ltransid int out
as
begin
declare
@transdate datetime,
@linitbal money,
@lfinalbal money,
@errormesg varchar(50)
select @linitbal=balance from Imm_players.dbo.Imm_tblPlayerbalance where playerid=@plid
select @transdate=getdate()
--set @ldealcards ='['+@dealercard1+','+@dealercard2+']'
--print @ldealcards
if(@userbj=1)
begin
select @lfinalbal= @bal
begin transaction
insert into Imm_tblGameTransactions
(playerid,gameid,Initialbalance,transactiondate,betamount,winamount,currencycode,finalbalance,sessionid)
values(@plid,@gameid,@linitbal,@transdate,@betamt,@winamt,'USD',@lfinalbal,@sessionid)
IF @@ERROR <> 0
Begin
-- There's an error b/c @ERROR is not 0, rollback
ROLLBACK
return
End
SELECT @ltransid = SCOPE_IDENTITY()  -- SCOPE_IDENTITY() avoids picking up an identity value from another scope (original used @@IDENTITY)
insert into Imm_tblGameDetailsBJ(transid,playercard,dealercard,typeid,result,statusid,split,insurance,playercardcount,dealercardcount,winvalue,betvalue)
values(@ltransid,@usercards,@dealercards ,1,1,'PB',@split,@insurance,@useracecnt,@dealeracecnt,@winamt,@betamt)
IF @@ERROR <> 0
begin
-- There's an error b/c @ERROR is not 0, rollback
ROLLBACK
return
end
update Imm_players.dbo.Imm_tblPlayerbalance set balance=@lfinalbal where playerid=@plid
IF @@ERROR <> 0
begin
-- There's an error b/c @ERROR is not 0, rollback
ROLLBACK
return
end
commit transaction
return
end
else
begin
begin transaction
insert into Imm_tblGameTransactions(playerid,gameid,Initialbalance,transactiondate,betamount,winamount,currencycode,finalbalance,sessionid)
values(@plid,@gameid,@linitbal,@transdate,@betamt,@winamt,'USD',@bal,@sessionid)
IF @@ERROR <> 0
Begin
-- There's an error b/c @ERROR is not 0, rollback
ROLLBACK
return
End
/*ELSE
COMMIT -- Success! Commit the transaction*/
SELECT @ltransid = SCOPE_IDENTITY()  -- SCOPE_IDENTITY() avoids picking up an identity value from another scope (original used @@IDENTITY)
insert into Imm_tblGameDetailsBJ(transid,playercard,dealercard,typeid,result,split,insurance,playercardcount,dealercardcount,winvalue,betvalue,statusid)
values(@ltransid,@usercards,@dealercards,1,3,@split,@insurance,@useracecnt,@dealeracecnt,@winamt,@betamt,'G')
IF @@ERROR <> 0
Begin
-- There's an error b/c @ERROR is not 0, rollback
ROLLBACK
return
End
/*ELSE
COMMIT -- Success! Commit the transaction*/
commit transaction
return
end
end
GO
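On the calling side, one common mitigation (a sketch, not part of the procedure above) is to treat error 1205 as retryable, since SQL Server chooses one session as the deadlock victim and rolls its transaction back:

```sql
-- Retry the work a bounded number of times if we are chosen as the deadlock victim
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the inserts/updates that may deadlock go here ...
        COMMIT TRANSACTION;
        SET @retries = 0;                  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;   -- deadlock victim: try again
        ELSE
        BEGIN
            SET @retries = 0;
            THROW;                         -- re-raise any other error
        END
    END CATCH
END
```

Note that THROW requires SQL Server 2012 or later; on older versions, RAISERROR can be used instead.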