Error on deadlock

Hi all,
We have a dump from a 9.2.0.8.0 database which needs to be imported into a database of the same version, so we created the new database and applied the patch upgrade as well.
The DB is now at version 9.2.0.8.0.
The thing is, while executing catalog.sql it shows the error below:
ORA-04020: deadlock detected while trying to lock object
SYS.CDC_ALTER_CTABLE_BEFORE
I don't know where the problem is, and when I try to import, it shows the error below:
IMP-00008: unrecognized statement in the export file:
8^D^GTM
IMP-00008: unrecognized statement in the export file:
9^D^GTM
IMP-00008: unrecognized statement in the export file:
:^D^GTM
IMP-00008: unrecognized statement in the export file:
;^D^GTM
IMP-00008: unrecognized statement in the export file:
<^D^GTM
IMP-00008: unrecognized statement in the export file:
=^D^GTM
IMP-00008: unrecognized statement in the export file:
After some time the import carries on again like this:
. . importing table "CEK_COMM_TRANS" 202 rows imported
. . importing table "CEK_RESERVED" 2347 rows imported
. . importing table "CEK_RESERVED2" 554 rows imported
. . importing table "CHECK_CV" 38725 rows imported
. . importing table "CHECK_CV_POLICY_LAPSE1" 67246 rows imported
. . importing table "CHECK_CV_PRODUCT_TRAD" 50 rows imported
. . importing table "CHECK_CV_PTD" 67246 rows imported
. . importing table "CHECK_POL_SUSP_B4_ACC_CHANGE" 52 rows imported
I think there is some serious problem.
Our OS is IBM AIX.
Where did the problem occur?
Is there a patch upgrade problem?
Or is the export file corrupted?
Or is the problem with the import server?
In the end the import also terminated unsuccessfully, with no message; it was like an abnormal termination.
After the import stopped I found that a core file had been generated at that time.
It looks like this:
SAMPLE OUTPUT OF CORE FILE (mostly unprintable binary; the readable fragments include the imp binary name, AIX 64-bit shared-library members such as /usr/lib/libcrypt.a, /usr/lib/libc.a, /usr/lib/libodm.a, /usr/lib/libdl.a and /usr/lib/libpthreads.a, and a truncated parser message beginning "Encountered th ... "." whe ... of the f"):
I have faced so many problems with this. I am planning to start a fresh import once again, so now I am going to drop all the users.
But before the import I will run with show=y and monitor the show=y logfile for an hour or two to check whether it works fine or hits the same errors.
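The show=y pass will be something like this (the username, password and file names here are only examples, not the real ones):
imp system/<password> file=exp_dump.dmp full=y show=y log=imp_show.log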
It is a dump file of 50 GB.
This task has been giving me trouble for the last two days.
Please give your valuable suggestions; I am waiting for them.
Regards,
M.Murali...

Hi,
Do both databases have the same NLS_LANG? Have you tried running catalog.sql after bouncing the database? Usually IMP-00008 comes when the export dump is corrupted.
As per Metalink:
Error: IMP 8
Text: unrecognized statement in the export file: n %s
Cause: Unrecognized statement in export file. This could be due to
corrupted export file or Import internal bug.
Action: If the export file is corrupted, retry with a new export file.
Else report this as Import internal error.
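For example, the database character sets on the source and target can be compared with a query like this on each side (NLS_DATABASE_PARAMETERS is a standard dictionary view):
SELECT parameter, value FROM nls_database_parameters WHERE parameter LIKE '%CHARACTERSET%';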
Regards,
Anand

Similar Messages

  • Getting ORA-00060: deadlock detected while waiting for resource

    We have an Informatica mapping, SIL_Ordertiem_Fac, that has 7 target tables, 2 of which are Oracle partitioned tables. It also has 30 targets, one of which, the OrderItem staging table, is an Oracle partitioned table. The workflow that calls this has 8 session tasks, one for each of the partitioned staging tables. We are encountering deadlock errors. We have enabled "Session retry on deadlock" in each session, and we have also set the Informatica Integration Service parameters NumOfDeadlockRetries=100000 and DeadlockSleep=1. Are there any other parameters that we need to set? How can we see how many deadlocks are happening in this workflow?
    Thanks,
    Gary

    Are you running FULL or INCREMENTAL for this task? If you are running full, try changing the target property to Normal instead of Bulk; this may help in certain cases. Otherwise, can you consider making the sessions or tasks sequential so that they do not execute in parallel? Simplest case: have you tried killing all sessions and rerunning the ETL? Also, depending on the DB version, there are DB-level parameters such as DML_LOCKS; check with your DBA and ensure that there is no DB-side setting you can edit.
    If helpful, please mark as correct or helpful.

  • How to re-issue an SQL query in Java code in a deadlock situation?

    Hi all,
    I have a Java application (in Struts) which runs on a JBoss 4.0.1 server. The database is MySQL 6.0. It is an application made to be used by multiple users. The underlying operating system is Windows Vista.
    Nowadays I am facing a peculiar problem which makes further things go wrong. I am getting the following error:
    com.mysql.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
    I searched the forums and the response I got was to restart the transaction that hit the deadlock.
    In my Java code it is a delete query where I get this exception when multiple users are accessing the application. What I have tried for the time being is as follows:
    Statement stmt = null;
    String temp = null;
    Connection conn = null;
    String sUserID = (String) session.getAttribute("username");
    try {
        conn = DBConnection.getJndiConnection();
        stmt = conn.createStatement();
        temp = "DELETE FROM temptable WHERE Login = '" + sUserID + "'";
        System.out.println("QUERY: " + temp);
        int rowCount = stmt.executeUpdate(temp);
        System.out.println("Rows affected in try = " + rowCount + " for user: " + sUserID);
    } catch (com.mysql.jdbc.exceptions.MySQLTransactionRollbackException ne) {
        // in the catch block I re-issue the delete query after the deadlock
        System.out.println("Error.... Deadlock occurred for user: " + sUserID);
        ne.printStackTrace();
        try {
            int rowCount = stmt.executeUpdate(temp);
            System.out.println("Rows affected in catch = " + rowCount + " for user: " + sUserID);
        } catch (Exception e) {
            System.out.println("Exception again after restarting transaction, with user: " + sUserID);
            e.printStackTrace();
        }
    }
    (all necessary imports are present in the code)
    Here I am properly establishing the database connection with the help of another class, DBConnection. As most of the forum posts insisted on re-issuing the transaction, I have tried one level of that with the above code, but the problem is only partially solved.
    How can I write my code so that every time there is a MySQLTransactionRollbackException, the delete query is re-issued again and again until the transaction completes?
    Can I use a GOTO statement, or is that bad programming practice? (see the retry sketch after this post)
    What are the other possible ways?
    Can someone please help in this regard?
    Thanks in advance.
    Regards
    Prasad
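    For what it's worth, the usual alternative to GOTO is a bounded retry loop. The sketch below is illustrative only: it reuses the DBConnection helper, table and column names from the post above, assumes the usual java.sql.* imports, and switches to a PreparedStatement so the SQL is not built by string concatenation.
    int maxRetries = 5;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
        Connection conn = null;
        PreparedStatement ps = null;
        try {
            conn = DBConnection.getJndiConnection();
            ps = conn.prepareStatement("DELETE FROM temptable WHERE Login = ?");
            ps.setString(1, sUserID);
            int rowCount = ps.executeUpdate();
            System.out.println("Rows affected = " + rowCount + " on attempt " + attempt);
            break; // success, stop retrying
        } catch (com.mysql.jdbc.exceptions.MySQLTransactionRollbackException ne) {
            System.out.println("Deadlock on attempt " + attempt + ", retrying...");
            try {
                Thread.sleep(100L * attempt); // brief backoff before the next attempt
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        } catch (SQLException se) {
            se.printStackTrace(); // non-retryable error, give up
            break;
        } finally {
            try { if (ps != null) ps.close(); } catch (SQLException ignore) { }
            try { if (conn != null) conn.close(); } catch (SQLException ignore) { }
        }
    }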

    Hi all,
    I am new to these kinds of database transaction operations and I am getting the following error:
    /resetattendancedata.dqlMessage: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction" Cause: null Error Message: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction" Cause: null Error code: 1205 Error state: 41000 java.sql.SQLException: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction"
    Here is the code I have written:
    if(movements.equalsIgnoreCase("ON"))
    System.out.println("Movement");
    if(submovements.equalsIgnoreCase("statusonly"))
    query="";
    query="UPDATE TRNMOVEMENT SET PROCESSED='N',STATUSPUT='N',ATTDATE=NULL,TIMEDIFF='' "
    +"WHERE ATTDATE BETWEEN '"datefrom1"' AND '"dateto"' AND EMPID IN "
    +"(SELECT SYSEMPID "
    +"FROM (((MSTPERSONALDETAILS PD LEFT JOIN MSTCATEGORY CAT ON PD.CATEGCODE=CAT.SYSCATEGCODE)"
    +"LEFT JOIN MSTUNITDETAILS U ON PD.UNITCODE=U.SYSUNITCODE) "
    +"LEFT JOIN MSTDESIGNATION DES ON PD.DESIGCODE=DES.SYSDESIGCODE) "
    +"LEFT JOIN MSTDEPARTMENT DEP ON PD.DEPTCODE=DEP.SYSDEPTCODE "strFltrString" ) ";
    leaveStmt.executeUpdate(query);
    leaveStmt.executeUpdate("commit");
    else if(submovements.equalsIgnoreCase("flagstatus"))
    query="";
    query="UPDATE TRNMOVEMENT SET PROCESSED='N',STATUSPUT='N',ATTDATE=NULL,FLAG='',TIMEDIFF='',BREAK='N'WHERE ATTDATE BETWEEN '"datefrom1"' AND '"dateto"' AND EMPID IN (SELECT SYSEMPID FROM(((MSTPERSONALDETAILS PD LEFT JOIN MSTCATEGORY CAT ON PD.CATEGCODE=CAT.SYSCATEGCODE) LEFT JOIN MSTUNITDETAILS U ON PD.UNITCODE=U.SYSUNITCODE) LEFT JOIN MSTDESIGNATION DES ON PD.DESIGCODE=DES.SYSDESIGCODE) LEFT JOIN MSTDEPARTMENT DEP ON PD.DEPTCODE=DEP.SYSDEPTCODE WHERE EMPID IS NOT NULL "strFltrString" ) ";
    leaveStmt.executeUpdate(query);
    synchronized(leaveStmt)
    //leaveStmt1.executeUpdate("set TRANSACTION ISOLATION LEVEL REPEATABLE READ"); // setting transaction level using SQL
    // leaveStmt1.executeUpdate("start transaction"); // Starting a transaction using SQL     
    query1="DELETE FROM TRNDAILYATTENDANCE WHERE ATTDATE BETWEEN '"datefrom1"' AND '"dateto"' AND EMPID IN (SELECT SYSEMPID FROM (((MSTPERSONALDETAILS PD "
    +"LEFT JOIN MSTCATEGORY CAT ON PD.CATEGCODE=CAT.SYSCATEGCODE) LEFT JOIN MSTUNITDETAILS U ON PD.UNITCODE=U.SYSUNITCODE) LEFT JOIN MSTDESIGNATION DES ON PD.DESIGCODE = DES.SYSDESIGCODE )LEFT JOIN MSTDEPARTMENT DEP ON PD.DEPTCODE=DEP.SYSDEPTCODE WHERE EMPID IS NOT NULL "strFltrString" )";
    //leaveStmt.addBatch(query1);
    leaveStmt.executeUpdate(query1);
    synchronized(leaveStmt)
    query2="UPDATE TRNLEAVEAPPLICATION SET TAKENTOATT='N' WHERE '"datefrom1"' BETWEEN FROMDATE AND "
    +"TODATE AND '"dateto"' BETWEEN FROMDATE AND TODATE AND EMPID IN (SELECT SYSEMPID FROM (((MSTPERSONALDETAILS PD LEFT JOIN MSTCATEGORY CAT ON "
    +"PD.CATEGCODE=CAT.SYSCATEGCODE) LEFT JOIN MSTUNITDETAILS U ON PD.UNITCODE=U.SYSUNITCODE) LEFT JOIN "
    +"MSTDESIGNATION DES ON PD.DESIGCODE=DES.SYSDESIGCODE) LEFT JOIN MSTDEPARTMENT DEP ON PD.DEPTCODE=DEP.SYSDEPTCODE WHERE EMPID IS NOT NULL "+strFltrString +") ";
    //leaveStmt.addBatch(query2);
    //leaveStmt.executeBatch();
    leaveStmt.executeUpdate(query2);
    //} // end of for loop
    else
    leaveStmt.executeUpdate("commit");
    Please help me solve this error. Thanks in advance.

  • Table Comparisons and Deadlocks

    Post Author: Thang Nguyen
    CA Forum: Data Integration
    Hi,
    I'm pretty new to this DI stuff, but I've got a dataflow where I'm using a Table Comparison transform to work out my updates and inserts. My database is SQL Server 2000.
    When it runs the Table Comparison I get SQL errors about a deadlock victim, and the insert fails. I ran a trace on SQL Server and the insert statement is being blocked by a select statement, so it looks like some sort of issue with the Table Comparison looking for the differences and inserting new rows at the same time.
    I've tried splitting the operation into two dataflows using Map Operation, where one does the updates and the other does the inserts, but I still get the deadlock issue.
    Has anyone else experienced this problem?
    Thanks
    Thang

    Post Author: Thang Nguyen
    CA Forum: Data Integration
    If anyone is interested, the solution I got from BO is:
    "Can you put the following parameter in your DSConfig / al_engine section: SQLServerReadUncommitted=1"
    Beware that this changes the SQL Server transaction isolation level to allow dirty reads, which isn't ideal.

  • Fatal deadlock in safepoint code

    I got the following exception when benchmarking WLS 6.1 SP1 on HPUX11.0.
              This exception resulted in an outage in one of the WLS instances in the
              cluster. Unfortunately, this problem doesn't happen often. It happened once
              after a 10 hour run and once after a 4 hour run. It also appears to happen
              when we are benchmarking under heavy loads. This exception also only
              occurred twice in a three week testing window. In other words, it is
              difficult to recreate. However, we are looking for a solution.
              # Java version:
              # Java HotSpot(TM) Server VM (mixed mode)
              # HotSpot Virtual Machine Error, Internal Error
              # Fatal: Deadlock in safepoint code. stopped at 00000000
              # Error ID:
              /CLO/Components/JAVA_HOTSPOT/Src/build/hp-ux/../../src/share/vm/runtime/safe
              point.cpp, 297
              # Problematic Thread: prio=3 tid=0x00490b70 nid=18 lwp_id=17514 runnable
              I am running JDK "1.3.1.00-release-010607-19:35-PA_RISC2.0" which comes with
              the WLS 6.1 SP1 distribution. I am tempted to try JDK 1.3.1.01 which is the
              latest from HP. Only problem is we try to stay compliant with BEA's
              recommended JVM. Any thoughts on that?
              My config is three HP L3000s (4 cpus each) - one is the DB server and two
              are the appl servers. The appl servers are only running at about 50-70% busy
              with lots of free physical memory. My JVM settings are
              JAVA_OPTIONS="-server -verbosegc -XX:NewSize=128m -XX:MaxNewSize=128m -XX:Su
              rviv
              orRatio=2 -Xms512m -Xmx512m"
              Looking at the HP Java release notes, I tried turning
              on -XX:+UseCompilerSafepoint. The doc on safe point follows.
              -XX:+UseCompilerSafepoints
              Enables compiler safe points. In this version, compiler safe points is off
              by default. Enabling compiler safepoints guarantees a more deterministic
              delay to stop all running java threads before doing a safepoint operation,
              namely garbage collection and deoptimization. For patch information, see
              "Known Problems" in these release notes.
              Unfortunately, soon (about 10 minutes) after starting up the WLS cluster, we
              got the following exception
              ========================================================================
              An unexpected exception has been detected in native code outside the VM.
              Unexpected Signal : 10 occurred at PC=0xd04190
              Function name=(N/A)
              Library=(N/A)
              NOTE: We are unable to locate the function name symbol for the error
              just occurred. Please refer to release documentation for possible
              reason and solutions.
              Current Java thread:
              "ExecuteThread: '5' for queue: 'default'" daemon prio=2 tid=0x00456040
              nid=17 lwp_id=21726 runnable [0x00000000..0x589b3478]
              Dynamic libraries:
              /app1/wl6dncp/bea/jdk131/jre/bin/../bin/PA_RISC2.0/native_threads/java
              text:0x00001000-0x00006644 data:0x00007000-0x00007324
              /app1/wl6dncp/bea/jdk131/jre/bin/../lib/PA_RISC2.0/server/libjvm.sl
              text:0xc2c00000-0xc33ba000 data:0x7f71a000-0x7f7cf000
              /usr/lib/libpthread.1
              text:0xc11a0000-0xc11b6000 data:0x7f6e7000-0x7f6ea000
              /usr/lib/libm.2
              text:0xc02c0000-0xc02e6000 data:0x7f6ea000-0x7f6f0000
              /usr/lib/libcl.2
              text:0xc0e40000-0xc0f17000 data:0x7f6f1000-0x7f70f000
              /usr/lib/libisamstub.1
              text:0xc00ce000-0xc00cf000 data:0x7f6f0000-0x7f6f1000
              /usr/lib/libCsup.2
              text:0xc1460000-0xc147b000 data:0x7f70f000-0x7f712000
              /usr/lib/libc.2
              text:0xc0100000-0xc0228000 data:0x7f7d0000-0x7f7e7000
              /usr/lib/libdld.2
              text:0xc0003000-0xc0005000 data:0x7f7cf000-0x7f7d0000
              /app1/wl6dncp/bea/jdk131/jre/lib/PA_RISC2.0/native_threads/libhpi.sl
              text:0xc0fa0000-0xc0fb1000 data:0x7f6e6000-0x7f6e7000
              /app1/wl6dncp/bea/jdk131/jre/bin/../lib/PA_RISC2.0/libverify.sl
              text:0xc1140000-0xc1150000 data:0x7f6d4000-0x7f6d5000
              /app1/wl6dncp/bea/jdk131/jre/bin/../lib/PA_RISC2.0/libjava.sl
              text:0xc1150000-0xc117c000 data:0x7f6cf000-0x7f6d4000
              /app1/wl6dncp/bea/jdk131/jre/bin/../lib/PA_RISC2.0/libzip.sl
              text:0xc1180000-0xc1193000 data:0x7f6cd000-0x7f6cf000
              /app1/wl6dncp/bea/jdk131/jre/bin/../lib/PA_RISC2.0/libnet.sl
              text:0xc0f34000-0xc0f3d000 data:0x7f42c000-0x7f42d000
              /usr/lib/libnss_dns.1
              text:0xc00c8000-0xc00cc000 data:0x7f429000-0x7f42a000
              /usr/lib/libnss_nis.1
              text:0xc0008000-0xc000e000 data:0x7f428000-0x7f429000
              /usr/lib/libnsl.1
              text:0xc0240000-0xc02bb000 data:0x58d3a000-0x58dff000
              /usr/lib/libxti.2
              text:0xc00b0000-0xc00c5000 data:0x7f423000-0x7f428000
              /usr/lib/libnss_files.1
              text:0xc0028000-0xc002f000 data:0x7f422000-0x7f423000
              /app1/wl6dncp/bea/wlserver6.1/lib/hpux11/libmuxer.sl
              text:0xc0979000-0xc097c000 data:0x7f421000-0x7f422000
              /app1/wl6dncp/bea/wlserver6.1/lib/hpux11/libweblogicunix1.sl
              text:0xc00cf000-0xc00d0000 data:0x7f420000-0x7f421000
              /usr/lib/libnsl_s.2
              text:0xc2910000-0xc2924000 data:0x7f419000-0x7f41e000
              /usr/lib/libC.2
              text:0xc24e0000-0xc24fd000 data:0x7f41e000-0x7f420000
              Local Time = Wed Oct 31 13:20:37 2001
              Elapsed Time = 493
              # The exception above was detected in native code outside the VM
              # Java VM: Java HotSpot(TM) Server VM (1.3.1
              1.3.1.00-release-010607-19:35-PA_RISC2.0 PA2.0 mixed mode)
              ========================================================================
              The -XX:+UseCompilerSafepoint option requires patch PHKL_24943. See doc
              below.
              HotSpot Compiler Safe Points
              NOTE: For both HP-UX 11.0 and 11i, using Compiler Safe Points requires a
              patch. The required patches are shown below. For information on locating and
              installing the patches, go to the "Installation" section in this document.
              HP-UX 11.0 PHKL_24943
              HP-UX 11i PHKL_24751
              In this version, compiler safe points is off by default. To turn it on, use
              the -XX:+UseCompilerSafepoints option. Enabling compiler safepoints
              guarantees a more deterministic delay to stop all running java threads
              before doing a safepoint operation, namely garbage collection and
              deoptimization.
              Unfortunately, PHKL_24943 was recalled. See below.
              HP-UX 11.00 PA-RISC Patches
              NOTE: Several of the patches shown below have dependency patches. On the web
              page from which you download the patch, click the "dependency" link and make
              sure you install the dependency patches as well.
              PHCO_23792
              PHCO_23963
              PHCO_24148
              PHKL_18543
              PHKL_23226
              PHKL_23409
              PHKL_24826
              PHKL_24943*
              PHKL_25188
              PHNE_21731
              PHNE_23456
              PHNE_24034
              PHSS_23440
              *PHKL_24943 has been recalled. A replacement patch will be available shortly
              and will be posted here as soon as it is available.
              The long and the short of it: has anybody in HPUX land run across this issue in production, and what workaround have you come up with?
              TIA
              Bernie
              

    I also encountered this problem. Did you resolve it or not? If you did, please tell me what I should do.
    Thanks very much.
              

  • Deadlock condition for transaction

    Hi all, please can anyone tell me the meaning of the following error, with a full explanation:
    java.sql.SQLException: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction "
    /resetattendancedata.dqlMessage: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction" Cause: null Error Message: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction" Cause: null Error code: 1205 Error state: 41000 java.sql.SQLException: Deadlock found when trying to get lock; Try restarting transaction, message from server: "Lock wait timeout exceeded; try restarting transaction"
    Thanks In Advance
    Abhinay

    I did find the answer. Thanks to the ones who did read my post.

  • DI API Deadlocks in SQL 2005 database server

    Hello All,
    I have an issue which comes up on large databases (> 50 GB) with more than 20 users.
    I have read the following notes:
    1269591, 1489753, 1318311, 1231444, 1344641, 1316554
    and have also contributed in the following thread:
    Workaround:
    Each SAP B1 client uses an addon which does WM (warehouse management) based on the WHSE journal. Until the upgrade to the 2007 version this was working fine; no issue came up, only the system was slow.
    The process, in pseudo code, is like this:
    'incoming cases - when an item is entering into the warehouse
    Add Logistics documents (action success = true)
    - start transaction
      - check Warehouse journal
      - record changes done by this document (based on oinm) into (UDT)
    - finish transaction
    'outgoing cases - where an item is carried out from the warehouse - create a self pick list example: sales order:
    start transaction
    - loop on every rows of the sales order
      - read ordered quantity
      - look for place where it is stored,
    - if found make  a reservation in (UDT)
      - add position to self created pick list  (UDT)
    finish  transaction
    'release from warehouse
    start transaction
      - look for pick list position (UDT)
      - reduce stock value (UDT)
      - update pick list with packed quantity (UDT)
      - create delivery note
    finish  transaction
    Depending on the activity, the users receive various error messages like Could not commit transaction / Deadlock ... / -2038 Internal error xxxx when trying to access the UDTs or any marketing documents, and the system freezes. I go to the activity monitor, check, and tell them which client is causing the deadlocks; they log off and can continue working, or, if the process runs into the timeout, the error message is sent to the SAP B1 client.
    We also have an addon which reads (only reads) data from the database, from OITM / OITW and one of the UDTs above (queries against stock values from the UDT). Here we receive 2 types of error messages:
    - Deadlock
    - Timeout
    I know these messages come from other computers/users issuing documents inside transactions.
    Any ideas to resolve this issue? Has anybody else run into it?
    All ideas and comments are welcome.
    The system is B1 2007 SP 01 PL 10. Average document positions are between 50 and 100.
    Solution Architect team? Any workaround?
    Regards
    János

    Hi Janos,
    I also faced a somewhat similar issue in one of my projects.
    Please look at this link:
    Link: [Large data processing Issue | SAP Hanging Problem using Transactions;
    After applying this, my issue was resolved.
    regards:
    Sandy
    Edited by: Sandeep Saini | Roorkee | India on Sep 6, 2010 7:16 PM

  • Ironport Application Error

    I am getting this strange error and I don't know what to do to solve it.
    The email I am getting says:
    The Critical message is:
    An application fault occurred: ('egg/coro_postgres.py _simple_query|765', "<class 'coro_postgres.QueryError'>", '_simple_query (ERROR 40P01: deadlock detected)', '[egg/quarantine_hermes.py _vacuum_main|2606] [egg/quarantine.py run_maintenance|259] [egg/quarantine.py _query|1980] [egg/quarantine.py _call_db|1954] [egg/quarantine.py _db_query|2037] [egg/coro_postgres.py query|355] [egg/coro_postgres.py _simple_query|765]')
    Please help in solving this issue.
    Thanks, 
    Maen

    You are hitting the following known defect, which also affects AsyncOS 8.5.6:
    CSCun76328 - DLP application fault seen after upgrading to 8.5.0-473
    Symptom:
    After upgrading the Email Security Appliance from AsyncOS 8.0.1 to 8.5.0-473, the following application fault might be encountered:
    An application fault occurred: ('dlp/config.py make_actions_filterset|1200', "", "'NoneType' object has no attribute 'get_default_restriction'", '[egg/filters.py config_change|11386] [dlp/config.py rebuild_dlp_actions_filterset|1177] [dlp/config.py make_actions_filterset|1200]')
    Conditions:
    Upgrading an Email Security Appliance from AsyncOS 8.0.1 to 8.5.0-473. After the initial reboot of the appliance as part of the upgrade process, the above application fault might be encountered.
    Workaround:
    None known so far.
    Further Problem Description:
    This is an intermittent issue with no known side effects. The application fault happens at the point where the message/content filter configuration is loaded; any subsequent configuration changes should not be affected by it. Making a dummy configuration change to the filters if this application fault is seen will ensure the latest consistent configuration is picked up.

  • Frequent but unpredictable DB_PAGE_NOTFOUND corruption

    Hi,
    We have developed a multi-process data processing engine that uses BDB as state storage to store queues of pointers to datums in on-disk flat files. The engine is written in Perl, using the standard BerkeleyDB CPAN module as its interface to BDB.
    Platform: Red Hat Enterprise Linux 5.1 x86-64
    Perl: 5.8.8 (with 64-bit support)
    BDB: 4.3.29 (the default for this version of RHEL)
    After running in production for some time without any errors, occasionally one of the data queues (a Btree database) has started to corrupt after a few hours of record creation/deletion by forked children. The error, which is elicited by subsequent db_put() calls, is "DB_PAGE_NOTFOUND: Requested page not found", and running db_verify on the database returns:
    "db_verify: Page 1: internal page is empty and should not be
    db_verify: queue.db: DB_VERIFY_BAD: Database verification failed"
    Worse, the error cannot be recreated on any of our development or staging environments; it just intermittently occurs in production, now maybe every 3 to 8 hours.
    Some background:
    Roughly - the child processes that seem to be causing the corruption read a bunch of key/values via a cursor, and then delete the keys from the DB.
    The environment is created with: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN
    The database is created with: DB_CREATE|DB_THREAD
    The parent process closes all Env & DB handles before forking children, then re-opens upon returning from fork().
    The child processes all open their own Env & DB handles after fork().
    There are usually around 5-8 children running in parallel, and will execute the deletes on the DB in parallel.
    Before exiting, the child processes always explicitly call db_sync() before calling db_close() - probably overkill.
    Here's where my understanding of deadlocking in BDB gets shaky:
    DB_INIT_LOCK should implement multiple-writer locking semantics, and because of the way the parent process distributes the work to the child processes, children are never competing to delete the same keys.
    I suspect the reason for the corruption is that BDB's locking may be page-based, not key (record) based, and if (say) child A deleting a key causes an underlying page split while child B is also deleting a key stored on that same page, corruption occurs. Am I on the right track here? The app is not yet doing any deadlock detection or resolution; we haven't gone down that route because no errors regarding deadlocks are surfaced in the statuses of any DB calls or in the output of db_stat().
    Interestingly, none of the db_del() calls in any of the children fail, with deadlock errors or otherwise; the corruption is only noticed by calls to db_put() into the same database during a subsequent processing run, obviously after the in-memory cache has been synced to disk.
    We haven't yet upgraded BDB to 4.7 (or even 4.4), but will attempt that if no other fix is forthcoming.
    An alternative, quicker fix we're trying out is to use DB_INIT_CDB to enforce single-writer semantics on the children, or to move the responsibility for writing back up to the parent process and have no multiple writers at all.
    I know my understanding of the pitfalls of deadlocking and how they relate to the underlying Btree store isn't great, and I suspect herein lies the real problem. Many thanks in advance to anyone with advice or recommendations.
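    For reference, the DB_INIT_CDB alternative mentioned above would look roughly like this with the BerkeleyDB CPAN module (the home directory and filename here are placeholders):
    use BerkeleyDB;

    # Concurrent Data Store: one writer at a time, many readers. There is
    # no page-level write concurrency left to go wrong, at the cost of
    # writer throughput.
    my $env = BerkeleyDB::Env->new(
        -Home  => '/var/app/bdb',
        -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
    ) or die "Env failed: $BerkeleyDB::Error";

    my $db = BerkeleyDB::Btree->new(
        -Filename => 'queue.db',
        -Env      => $env,
        -Flags    => DB_CREATE,
    ) or die "DB failed: $BerkeleyDB::Error";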

    Thanks Michael. I'll engage here for the sake of Googlers and also follow up by email.
    - Yes, the same flags are used to open the environments and db in the children; all processes use the same storage class that wraps the BDB access.
    - db_sync() before db_close() was paranoia on my part - noted and understood that it's unnecessary.
    - The db_verify output is indeed all it reports. db_dump -qa queue.db on a corrupt DB reports:
    In-memory DB structure:
    btree: 0x120200 (duplicates, open called, read-only)
    bt_meta: 0 bt_root: 1
    bt_maxkey: 0 bt_minkey: 2
    bt_compare: 0x30b2222900 bt_prefix: 0x30b2222970
    bt_lpgno: 0
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    page 0: btree metadata level: 0 (lsn.file: 0 lsn.offset: 1)
    magic: 0x53162
    version: 9
    pagesize: 8192
    type: 9
    keys: 0 records: 0
    free list: 2, 0
    last_pgno: 2
    flags: 0x1 (duplicates)
    uid: 5f 0 db 4 0 fd 0 0 1b d6 75 51 bf 5c 0 0 0 0 0 0
    maxkey: 0 minkey: 2
    root: 1
    page 1: btree internal level: 2 records: 0 (lsn.file: 0 lsn.offset: 1)
    entries: 0 offset: 8192
    page 2: invalid level: 0 (lsn.file: 0 lsn.offset: 1)
    prev: 0 next: 0 entries: 0 offset: 8192
    There are records in the queue.db, though - viewing it reveals recognisable keys.
    Other things I ought to mention, which may be giveaways:
    - Although creating the environment with DB_INIT_TXN, the app does not perform any transaction handling or checkpointing - in effect it is in auto-commit mode.
    - Since modifying the storage to use DB_INIT_CDB overnight, there has been (so far!) no corruption.
    Thanks again.

  • How do I create a non-JTS sequence connection pool using JTS

    I'm getting all kinds of errors (DB deadlocks and exceptions) using JTS with sequences. From reading several posts, it is necessary to create a separate non-JTS connection pool.
    I've seen several postings on how to do this in the sessions.xml file, but how do I do this in Java code?
    What I am trying is:
    SequencingControl seqCtrl = ((oracle.toplink.publicinterface.DatabaseSession)serverSession).getSequencingControl();
    seqCtrl.setShouldUseSeparateConnection(true);
    seqCtrl.setLogin(sequenceLogin);
    I've also tried:
    serverSession.addConnectionPool("sequencing", sequenceLogin, 2, 5);
    but neither works. The problem with these settings is that the sequence properties on my DatabaseLogin are not honored.
    My table name is T_SEQUENCES and my preAllocation size is 5. However the SQL that is generated by this setup is:
    "UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + 50 WHERE SEQ_NAME = 'T_GROUPS_SEQ'"
    I'm guessing these may be default values, but I don't know where else to override these except for on the DatabaseLogin I am passing in.
    I know this setup works, because it is the same DatabaseLogin I use for non-JTS configuration.
    Could someone provide me a code-snippet on how to do this?
    Thanks,
    Nate

    I'm not using XML files, all of my setup is in code. I'm not sure how I would use the setLoginAndApplySequenceProperties(DatabaseLogin) call or if it would address my larger problem.
    The main issue I don't want to lose here is, I've got an application where I'm trying to use JTS with JBoss and SQL Server. It's a SessionBean/POJO architecture. The problem I was having that started this thread is that sequence number allocation causes a database deadlock.
    My thought was if I opened a 2nd connection pool dedicated to sequences, it might resolve the issue. I was able to do this with the workaround I posted, but it didn't fix anything. I now get a different error related to my newly inserted objects.
    From working with this off-and-on over the last several months, I would say that I don't think TopLink/JBoss/SQL Server using JTA can be made to work.
    I know that TopLink has a plug-in architecture and in theory if I implemented ExternalTransactionController and SynchronizationListener for JBoss correctly, it should all work.
    But, I've got the JBoss 4.0.0 source that I can step through, I've got all the recent updates to SQL Server and the JDBC driver, and I'm following everything I've been told so far on how to make this all work.
    It plain doesn't work.
    Furthermore, I haven't found anyone (this group, JBoss group) that has gotten this to work (TopLink/JBoss/JTA). This is an important item to us, we'd like to get this to work, and we would be happy to work with Oracle Consulting on this or whatever it takes (already opened a TAR on this).
    Are there any other support options available to making this work?
    Nate

  • Suggestion of a possibly useful feature

    Hi Gene and Dimitri,
     in our project we face the problem of finding the subset of tree nodes in a tree hierarchy that is filtered by a transitive parent-child relationship to a specific node.
     The number of nodes is possibly in the couple of hundred thousands, so they would probably not fit economically into a single JVM and would need to be stored in a partitioned cache. Also, the tree hierarchy occasionally changes (that is a normal use case).
     I expect it could be useful if the children of each non-leaf node could be maintained in a data structure on each cache client node, effectively a reverse index of all entries in the cache based on the parent id attribute value as the extracted value.
     However, for this it would be necessary that I can quickly and consistently build the reverse index upon startup.
     The 3-parameter constructor of the ContinuousQueryCache gives an opportunity to do this; however, it would also mean an enormous overhead, because it would also store each entry in the JVM, which, as already mentioned, would be quite disastrous to the memory footprint, not to mention that it is not required anyway.
     A quick first solution to this problem would be if the ContinuousQueryCache backing map could be disabled while it still acts as an ObservableMap. This way the memory footprint overhead would be much less after the handling of the initial events, but we would still receive exactly one event per entry plus the subsequent events.
     Another improvement would be possible if, at the construction of the ContinuousQueryCache, the events corresponding to the entries returned by the initial query result could be delivered together, not one-by-one, as this would allow optimizing the memory consumption of the index structure lists by not having to gradually grow the collections holding the reverse-mapped keys for an extracted value; instead we could create them with the correct size (and some spare space) at once. However, this would require two passes over the events corresponding to the initial query, so they would need to be delivered together to allow two passes.
     Practically, it would even be worth extracting this functionality alone (an initial list of events or values corresponding to the entries in the cache, delivered together to a synchronous listener, with all following events delivered normally) into a helper class which can create a listener initializable as described, without any of the functionality additionally provided by the ContinuousQueryCache.
         If such a feature is created, then it would be easy to implement arbitrary custom (userspace) indexes remotely to the cache backing maps, or allow indexing replicated caches.
         Is indexing replicated caches actually scheduled as a feature anytime soon?
     A third possible optimization would be if the initial query did not return entire entries but ran a user-specifiable aggregation instead, which could return only the relevant extracted attributes. This would reduce the initial traffic and possibly also allow leveraging existing indexes on the cache servers, thereby bypassing deserialization of the entries altogether.
         Best regards,
         Robert

    Hi Peter,
         > Hi Rob,
         >
         > I have managed to create a wrapper over any
         > NamedCache that maintains it's own local indexes and
         > optimizes Queries when an IndexAwareFilter is used.
         > Using only one listener registered for listening to
         > "light" events I just retrieve the key from MapEvent
         > (I'm not even interested in the type of event) and
         > then look-up for what the underlying cache says about
         > the key (whether it is currently deleted or whether
         > it has some value associated) and update the indexes
         > accordingly.
          You should be aware that the listener must NOT call back into the same cache service that it received the event from; otherwise you are subjecting yourself to deadlocks under heavy load. Therefore you should not be looking up cached values from a listener.
          Also, the fact that you are doing a lookup means that your index is updated asynchronously to the change event. This means that the cache content is not guaranteed to always be in sync with your indexes.
          E.g. when your processing is slow and you have two change events for the same entry, and your lookup sees only the effect of the second change, then your index will not reflect the intermittent existence of the value after the first change; therefore it was not in sync with the cache while the cache contained the value received in the first change.
          This can cause a problem when you are doing queries for which not all filters are indexed, so that some of the filters are checking the index and some are checking the values. They might be inconsistent and lead to false results.
          For queries like that, you must check the values in the state they were in when the index was created from them. This means that you must be able to provide a snapshot of entries which is consistent with the snapshot of the indexes the filter will see (and of course the indexes must also be in a consistent state with each other). For two changes to the same entry, going from state A through state B to state C, the filter must see the indexes (let's assume two indexes) and the entry either all in state A, or all in state B, or all in state C together, but not in a mixed-up state.
          This is quite complex logic, and I don't believe it is possible to do with index updating being asynchronous to the changes.
          Also, there is no point in using lite events and then getting the value from the cache in a replicated cache: the data has already traveled to your node when you received the event, so why do another lookup instead of taking the value at once, except to see the latest version, which, as I mentioned, leads to inconsistencies.
         > The initial loading of the indexes is done after the
         > "light" listener is registered and it is simply a
          > matter of iterating over all keys of the map and
         > producing a stream of key-only events that are
         > processed in parallel with events that are produced
         > by the ObservableMap. The order of these events is
         > not important since they only specify "something has
         > changed under that key". If I get two events for the
         > same change/initial load it doesn't matter since they
         > are idempotent.
         Please see my previous comment about the consistency of the index with the cached values.
         > In each index I maintain a map from extracted value
         > to the set of keys (which is the index content) and
         > also a reverse map from key to the extracted value
         > (which is important to be able to update the index
         > when I only have current value and not the old one).
         >
         Coherence terminology is actually just the other way round.
         Reverse map is mapping from the extracted value to the cache keys from which the extracted value is extracted from (and this is called index content in IndexAwareFilter-s).
         Forward map is mapping from the cache key to the extracted value (you can use this by casting the entry to QueryMap.Entry in an index-aware filter or a parallel-aggregator, and calling QueryMap.Entry.extract(ValueExtractor) which will consult an index forward map if it exists.
         > It works perfectly for replicated cache.
         >
          Have you tried stress-testing it under high load, querying for data while you were actually changing that data, making changes to multiple attributes in a single put, with some of the changed attributes indexed and some of them not, and verifying that you do not get false results in the query result set?
          Due to the possibility of the cache not being in sync with the index, I would expect those race conditions to expose errors (or deadlocks).
         Best regards,
         Robert

  • MultiThread Problem

    My applet constructs and starts 2 identical Threads.
    While running, my debugging utility tells me that one of them quits running at some point. Moving the mouse makes it continue to run. At some other point, again one of the Threads stops running and remains stopped. The other Thread runs fine, though.
    I eliminated a runtime error as a reason for one of the Threads to stop, because I deliberately instantiated the same thread twice to test it. Still one of them quits for no reason.
    Why does the one Thread give up?
    Thanks,
    Chris.

    My software is a huge utility that runs in a thread. Recently I instantiated the same thread twice (in general the input variables are allowed to vary from thread to thread, else there would be no point in having 2 identical threads).
    This weird behavior came about, though: one of the threads quits during runtime while the other still goes strong, even when the 2 threads are identical. Therefore I have eliminated any runtime error, except perhaps a possible out-of-memory error OR a deadlock. I am going to have to build a custom debugging utility to troubleshoot this one. In general the thread instantiates its own objects, which rules out deadlock since no variables are shared, but certain variables are static, which I have to investigate further. Finally, the thread objects are entirely independent from each other since there is no communication between them. They are simply 2 instances of my thread object.
    class myThread extends Thread { /* ... */ }

    class myApplet extends Applet {
        public void init() {
            arrayOfThreads[0] = new myThread(/* ..variables.. */);
            arrayOfThreads[1] = new myThread(/* ..variables.. */);
        }
    }
    I would appreciate any comment.
    Chris.
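    One way to rule out a silently swallowed runtime error is to install a default uncaught-exception handler (available on Java 5 and later) before starting the threads; an uncaught exception kills only the thread that threw it, which looks exactly like one thread giving up. A minimal sketch, with illustrative messages:
    // Log any exception that silently terminates a thread.
    Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
        public void uncaughtException(Thread t, Throwable e) {
            System.err.println("Thread " + t.getName() + " died with:");
            e.printStackTrace();
        }
    });
    arrayOfThreads[0].start();
    arrayOfThreads[1].start();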

  • No status messaging!!?!

    On one of my sites, status messaging has just stopped, around 24 hours ago.
    I noticed an error about deadlocks in statmgr.log.
    The SQL box and the site server have been rebooted.
    I'm now seeing messages like these in statmgr.log:
    Read file "D:\Program Files\Microsoft Configuration Manager\inboxes\statmgr.box\retry\exlmsx89.sql" which represents a 137559-byte failed SQL transaction to insert 221 status messages and their 331 insertion strings and 258 attribute ID/value pairs
    into the database. SMS_STATUS_MANAGER
    15/07/2014 13:54:10 4388 (0x1124)
    Retrying a 137559-byte SQL transaction to insert 221 status messages and their 331 insertion strings and 258 attribute ID/value pairs into the database.
    SMS_STATUS_MANAGER 15/07/2014 13:54:10
    4388 (0x1124)
    But still nothing in status messages!?
    Help!?

    There might be a corrupt message, which is probably the oldest message sitting in the box. Try to take that message out and restart the SMS_EXECUTIVE.
    Back in the SMS days there was a KB article about something very similar, see:
    http://support.microsoft.com/kb/884123/en-us
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Change sequence number

    In the application somebody wants to change the sequence number. For example, the first line's number is 1, the second line's is 2, and the third line's is 3. I want to change the second line's number from 2 to 3 and the third line's number from 3 to 2 at the same time. There is an error about a deadlock when I commit them. How can I solve it?
    Thanks :)

    You will always get this deadlock issue unless you do this:
    save record 3 as something else (e.g. 99999)
    save record 2 as 3
    save 99999 as 2
    Hope that is clear.
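    In SQL terms the three-step swap looks like this (the table and column names are made up for illustration):
    UPDATE order_lines SET seq_no = 99999 WHERE seq_no = 3;
    UPDATE order_lines SET seq_no = 3 WHERE seq_no = 2;
    UPDATE order_lines SET seq_no = 2 WHERE seq_no = 99999;
    COMMIT;
    The temporary value keeps the two renumbered rows from ever holding the same number at once, which is what triggers the lock conflict.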

  • EJB3 Clustering Failover

    Assume I have an EJB that uses another EJB, injected with EJB3 annotations. The calling EJB starts executing. While it is running, the server on which the other EJB is running fails. The calling EJB then tries to invoke the EJB whose server has failed. What happens?
    Does the Oracle cluster transparently detect the failure, re-inject a valid EJB instance, and re-invoke the new instance? That would be the best case.
    Or does the Oracle AOS simply throw the standard RMI exception, which we can presumably trap? If we trap it and detect that it is an exception caused by a failed cluster element (rather than, say, a business logic error or a standard runtime error like a deadlock), can we take the usual action, i.e. get another EJB instance from a valid cluster server and re-invoke it? If so, is there any problem setting the EJB3 instance variable that was originally set by the EJB3 dependency injection framework?
    Thanks in advance for any insight.

    Hi,
    Have you found out why you got the BasicRemoteRef as a reference?
    I'm having the same problem: I have two references to different EJBs; one is resolved as a ClusterableRemoteRef, which never fails, and the one resolved as a BasicRemoteRef is the one that fails when, for example, I invoke both services while my WebLogic 11 is shutting down.
    Regards,
    Juan
