Deadlock messages in HsvEventLog
Hi,
HFM 11.1.2.1
When running a consolidation, I frequently get the following error message in HsvEventLog:
"Transaction (Process ID xxx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
What are possible reasons for this error occurring? If this is a rules issue, how can I determine the rule that generates this error?
Thanks
Hi,
We saw a similar error message once in our 9.3.1 application, and Oracle support came back saying this happens when there are concurrent updates to the same scenario's subcube.
I mean that certain dimensions fall within a subcube, so if you have multiple sessions updating the same subcube, it can generate a deadlock.
Thanks
Amith
Similar Messages
-
I read in the manual that deadlock (ORA-00060) is treated separately, as are the block-corruption and archive-hung alerts.
But I cannot find anything to manage the deadlock messages. These messages appear in the alert log file but EM 10g does not capture them.
Thanks

Looking in alertlog.pl, I see it excludes:
257, 16038 for archive hung
1157, 1578, 27048 for data block corruption
603 for session terminated
No mention of deadlock being excluded.
Perhaps it's just done via the generic alert monitoring config?
Go to the main database page, click on the link beside "Alert Log"
Then at the bottom click "Generic Alert Log Error Monitoring Configuration"
I use the following (especially for 10g SE and/or RAC):
Filter: .*ORA-0*(2097|0439|1555|0230|1587|27091|27072)\D.*
Critical: Undefined
Warning: ORA-.* -
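As a sanity check, a filter like the one above can be exercised outside EM. The sketch below is my own illustration, not an EM tool (`classify` is a made-up helper); it shows which alert-log lines the exclusion regex would suppress and which would still raise a warning:

```python
import re

# The EM "generic alert log" filter is a plain regular expression.
# These two patterns are copied from the configuration above.
FILTER = re.compile(r".*ORA-0*(2097|0439|1555|0230|1587|27091|27072)\D.*")
WARNING = re.compile(r"ORA-.*")

def classify(line):
    """Return 'ignored', 'warning', or 'none' for an alert-log line."""
    if FILTER.match(line):
        return "ignored"          # matches the exclusion filter
    if WARNING.search(line):
        return "warning"          # any other ORA- error is surfaced
    return "none"

print(classify("ORA-01555: snapshot too old"))    # ignored
print(classify("ORA-00060: deadlock detected"))   # warning
print(classify("Completed: ALTER DATABASE OPEN")) # none
```

Note that ORA-00060 is not in the exclusion list, so with this configuration a deadlock would be surfaced as a warning rather than silently dropped.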
Troubleshooting tuxedo services .. related to Oracle deadlock errors
Afternoon all,
I support an application that allows API calls to Tuxedo services using messages comprised of FML fields. Our application sits on an Oracle DB that is experiencing deadlock issues. One of the Tuxedo servers involved in the deadlock advertises services accessed by the API calls, so I'm trying to rule it out as the source.
Is there any way to trace/log the services in detail so that I can tie the deadlocks directly to a call of a tuxedo service?
I've looked into TMTRACE, but I'm not sure this is going to give me the detail I need.
Changing application code is not an option unfortunately.
Thanks,
Adam

Hi Todd,
Thanks for the quick response.
Apologies for my ignorance, but what do you mean by XA / local database transaction?
Our architecture is such that a Tuxedo server, which offers the services our APIs talk to, makes the Oracle updates. Is that XA or local?
Oracle's logging and deadlock messages show me what the updates are, but I'm trying to make 100% sure that an API-called service is instigating that update.
Hope that is clear enough!
Cheers,
Adam
Todd Little wrote:
Hi Adam,
Are the Tuxedo services using XA transactions or only local database transactions? The reason I ask is that TSAM offers the ability to track XA transactions and might be able to tell you what's happening. It can also track services and call paths, i.e., complete end-to-end call tracking from a Tuxedo client, through intervening servers (services).
If you are not using XA transactions, I'm afraid there isn't a lot that Tuxedo can provide, as Tuxedo is unaware of the underlying database requests. In fact, if you aren't using XA transactions, Tuxedo doesn't even know a database is involved in the processing of requests.
I would think you should be able to trace things from the database side, but you'd have to ask that question over in the database forums, as I'm not familiar enough with the tracing/debugging facilities on the database side to give you much help.
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
We have an application which is very heavy on object creation. When we run the application, after 4-6 hours we get OutOfMemory (OOM) errors even though there seems to be enough memory and a high enough permanent generation space (~256MB).
I have tried various options, using the parallel collector for the young generation and CMS GC for the old generation. I also tried allowing class unloading from the permanent generation space. We do not get OOM now, but the application grinds to a halt with very high CPU usage, which corresponds to long and continuous GC that takes more than 10 seconds for minor collections.
I have tried various settings to help the promotion of objects from the YG to tenured space by lowering the size of the YG, and set JVM parameters to compact the tenured space to prevent fragmentation.
The profiler does not show any suspicious memory leak in our application.
However, there is something happening in the GC logs that I cannot understand. The following is from the GC output
12199.621: [GC 12199.622: [ParNew: 16256K->0K(16320K), 0.2017510 secs] 207698K->191557K(786368K), 0.2019374 secs]
12201.795: [GC 12201.795: [ParNew: 16256K->0K(16320K), 0.2048213 secs] 207813K->192260K(786368K), 0.2050363 secs]
12203.873: [GC 12203.873: [ParNew: 16256K->0K(16320K), 0.2146450 secs] 208516K->194418K(786368K), 0.2148564 secs]
12206.033: [GC 12206.033: [ParNew: 16256K->0K(16320K), 1.0297104 secs] 329458K->314748K(786368K), 1.0299117 secs]
12208.379: [GC 12208.379: [ParNew: 16256K->0K(16320K), 0.9179062 secs] 331004K->316549K(786368K), 0.9181090 secs]
12211.448: [GC 12211.448: [ParNew: 16256K->0K(16320K), 0.9260658 secs] 332805K->318359K(786368K), 0.9262664 secs]
If you look at the output where the GC time jumps from 0.2148564 secs to 1.0299117 secs, the heap size after collection also jumps from 194418K to 314748K.
How could this be? A jump of 120330K is more than 7 times bigger than the YG size. This cannot be due to object allocation, can it?
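The size of the jump can be computed mechanically from the ParNew lines quoted above. This is a throwaway sketch of my own (not a Sun utility); it just extracts the post-collection total-heap figure from each -XX:+PrintGCDetails line:

```python
import re

# Matches the ParNew portion of a minor-collection line, e.g.
# "[ParNew: 16256K->0K(16320K), 0.21 secs] 208516K->194418K(786368K)".
LINE = re.compile(
    r"\[ParNew: (\d+)K->(\d+)K\((\d+)K\), [\d.]+ secs\] "
    r"(\d+)K->(\d+)K\((\d+)K\)")

def heap_after(gc_line):
    """Total heap occupancy (KB) after the collection on this line."""
    m = LINE.search(gc_line)
    return int(m.group(5))

log = [
 "12203.873: [GC 12203.873: [ParNew: 16256K->0K(16320K), 0.2146450 secs] 208516K->194418K(786368K), 0.2148564 secs]",
 "12206.033: [GC 12206.033: [ParNew: 16256K->0K(16320K), 1.0297104 secs] 329458K->314748K(786368K), 1.0299117 secs]",
]
jump = heap_after(log[1]) - heap_after(log[0])
print(jump)   # 120330 KB, roughly 7.4x the 16320 KB young generation
```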
The only thing I can think of is a JVM bug with class loaders. I found a class loader JVM bug (5033614) which, according to the release notes, is fixed in 1.4.2_07. I have tried 1.4.2_08 with no success.
I observed that this sort of jump happens with class unloading messages such as the ones below, where the GC time jumps from 0.3070497 secs to 1.0390087 secs.
5231.303: [GC 5231.304: [ParNew: 16256K->0K(16320K), 0.3068234 secs] 386613K->370358K(786368K), 0.3070497 secs]
5231.866: [GC 5231.866: [ParNew: 12685K->0K(16320K), 0.4980721 secs]5232.365: [CMS[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor138]
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor20]
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor29]
[Unloading class sun.reflect.GeneratedMethodAccessor13]
: 370358K->195960K(770048K), 3.4865979 secs] 383043K->195960K(786368K), 3.9850061 secs]
5236.620: [GC 5236.620: [ParNew: 16256K->0K(16320K), 1.0387359 secs] 331000K->314830K(786368K), 1.0390087 secs]
AFAIK this should only happen when there is not enough space in the Permanent Generation. But it happens even with -XX:PermSize=256m -XX:MaxPermSize=256m. Is there something else that triggers class unloading? (Turning off class unloading did not help with the GC times.)
The platform is 2 CPU Solaris 9, 1.4.2 Sun JDK, OC4J app server. And the log is produced with
java -server -XX:PermSize=192m -XX:MaxPermSize=192m -XX:NewSize=8m -XX:MaxNewSize=8m -Xms1024m -Xmx1024m -XX:SurvivorRatio=64 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -Djava.awt.headless=true -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar oc4j.jar
Any comment is greatly appreciated.

I do not have the heap statistics from when the OOM occurred, but there was enough memory available then. I am certain that the permanent generation space, which was 256MB, was not exceeded; it never exceeds 128MB in my GC output analysis. I suspected it had something to do with heap resizing, so I fixed -Xms and -Xmx to the same value and ran parallel and CMS garbage collection, and we did not get the OOM again.
However, the stack dump when OOM occurred consistently showed a line in our code with a thread in 'waiting on condition' state. The code that is implicated is only using log4j to log some messages.
"HttpRequestHandler-1450695169" prio=5 tid=0x00000001018bc6b0 nid=0x19 waiting on condition [ffffffff012ff000..ffffffff01301500]
I suspected a thread deadlock, but the stack dump shows no deadlock message. Profiling of the app also did not point out any deadlocks. Is there a known issue with log4j in this area? (I could not find any reference to this on the Internet.)
With the current settings we do not get OOM, but we get some unusual behaviour, as I posted in my original post, which shows a jump in the old generation (probably permanent generation or C heap) that is 7+ times bigger than the objects allocated in that period.
At the moment I see continuous and slow GC without any apparent need; plenty of tenured and permanent space is available. The young generation collection is taking too long, with CPU consumption stuck at ~95%. See below:
19384.564: [GC 19384.564: [ParNew: 8064K->0K(8128K), 5.5018406 secs] 563049K->555965K(1048512K), 5.5020402 secs]
19390.254: [GC 19390.255: [ParNew: 8064K->0K(8128K), 5.2093746 secs] 564029K->556598K(1048512K), 5.2095620 secs]
19395.593: [GC 19395.594: [ParNew: 8064K->0K(8128K), 5.1898968 secs] 564662K->557699K(1048512K), 5.1900956 secs]
19400.904: [GC 19400.904: [ParNew: 8064K->0K(8128K), 5.1879878 secs] 565763K->558361K(1048512K), 5.1881776 secs]
I don't think promotion of YG objects or fragmentation of the tenured space is the problem, as the YG is set to be very small at 8MB (and the old heap is 768MB) and compaction is done for the tenured space (-XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0).
I don't have a test case as I do not know the cause. The application is very large and under heavy load, with plenty of reflection code, but observation of the GC output showed the Permanent Generation remains under 128MB, and we are having this problem with settings of -XX:PermSize=256m -XX:MaxPermSize=256m.
I cannot understand the GC behaviour I described in my original post, but I did see similar bug entries with no satisfactory explanation from the Sun engineers.
I suspect the C heap might be strained and that it might be what is causing the class unloading; as I said, it occurs even when there is plenty of Permanent Generation space available (turning off class unloading did not help either). -
BI 4.0 SP01 Patch 4 even viewer errors and warnings
Hi All
I just installed BI 4.0 and patched it to SP01 Patch 4. We are using MS SQL Server and Tomcat 6. Any help removing the cause of this warning will be appreciated.
We are getting loads of warning on the Event Viewer:
Log Name: Application
Source: BusinessObjects_cms
Date: 27-06-2011 11:06:37
Event ID: 33300
Task Category: Database
Level: Warning
Keywords: Classic
User: N/A
Computer: myhostname
Description:
Error in Database Access. Cause [Microsoft][ODBC SQL Server Driver][SQL Server] Transaction (Process ID 106) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction..
Another warning is:
Log Name: Application
Source: BusinessObjects_cms
Date: 27-06-2011 11:06:37
Event ID: 33301
Task Category: Database
Level: Warning
Keywords: Classic
User: N/A
Computer: myhostname
Description:
Retryable error returned from CMS database. Will retry the request 6 time(s).
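The 33301 event above shows the CMS already reruns deadlock victims automatically as a bounded retry. For anyone writing their own client code against the same database, the generic shape of that retry-on-deadlock pattern (SQL Server native error 1205) looks like the sketch below; `DeadlockError` and `flaky_txn` are stand-ins for illustration, not BusinessObjects code:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for SQL Server error 1205 ('chosen as the deadlock victim')."""

def run_with_retry(transaction, retries=6, base_delay=0.05):
    """Rerun a transaction when it is picked as a deadlock victim.

    Mirrors the behaviour logged above: retry a bounded number of
    times, backing off a little so the competing session can finish.
    """
    for attempt in range(retries + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == retries:
                raise                      # give up, surface the error
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Demo: a transaction that is deadlocked twice, then succeeds.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("chosen as the deadlock victim")
    return "committed"

print(run_with_retry(flaky_txn))   # committed
```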
Error:
Log Name: Application
Source: Application Error
Date: 27-06-2011 11:18:57
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: myhost name
Description:
Faulting application name: java.exe, version: 6.0.170.4, time stamp: 0x4c3bb897
Faulting module name: awt.dll, version: 6.0.170.4, time stamp: 0x4c3bfa5f
Exception code: 0xc0000005
Fault offset: 0x00000000000d2210
Faulting process id: 0x1408
Faulting application start time: 0x01cc34aa7126bc49
Faulting application path: D:\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win64_x64\sapjvm\bin\java.exe
Faulting module path: D:\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win64_x64\sapjvm\jre\bin\awt.dll
Aws

Hi,
This is an old thread; it might be advisable to open a new one.
Are you actually experiencing a problem on the platform, errors or outages? I suspect not.
However, the search keywords "deadlock victim 4.0" are fruitful:
1699951 - Transaction deadlock messages in the Event Viewer when using MS SQL as the database in SAP BO BI 4.0
1640918 - Getting frequent deadlocks in CMS DB pointing to MS SQL server in BI 4.0
1774201 - CMS SQL Server database deadlocks in event viewer
1787321 - Database deadlock warnings in event viewer due to indexing of search service
Please check these resources and get back to us. (append your number to this link http://service.sap.com/sap/support/notes/ xxxxx )
regards,
H -
TrackedMessages_Copy_BizTalkMsgBoxDb job is over running
Hi, we have an application which runs on BizTalk Server 2004 and uses SQL Server 2000. It was working fine for the past couple of years. For the past 6 months we have noticed that the TrackedMessages_Copy_BizTalkMsgBoxDb job overruns and fails. It happens whenever there is a sudden overload in the number of messages coming to the BizTalk server. The application also stops working, and we are forced to reboot all the servers to get back to normal. This did not happen before.
TrackedMessages_Copy_BizTalkMsgBoxDb is scheduled to run every minute in SQL Server Agent. Nowadays, if the number of messages coming to BizTalk increases, the job takes around 4-5 hours and does not complete; we are forced to cancel the job and run it again. It also sometimes fails.
Can someone please help me with this?
Thanks and Regards, Bala

Hi Bala,
Based on my research, the TrackedMessages_Copy_BizTalkMsgBoxDb job may fail with a deadlock error. To work around such issues, locked processes should be killed on the BizTalk database SQL Server. You may also need to apply the latest service pack to BizTalk Server. For more information, I recommend you check the following articles:
BizTalk Job 'TrackedMessages_Copy_BizTalkMsgBoxDb' fails with deadlock message
FIX: The TrackedMessages_Copy_BizTalkMsgBoxDb SQL Server Agent job fails
TrackedMessages_Copy_BizTalkMsgBoxDb job fails in regular intervals
In addition, as the issue is more related to BizTalk Server, I would like to recommend that you ask the question in the BizTalk Server forums at
https://social.technet.microsoft.com/Forums/en-US/home?category=biztalkserver . It is the appropriate place, and more experts will assist you.
Thanks,
Lydia Zhang -
Hi,
I am getting the following deadlock message. Can anyone explain what is happening in the deadlock below? Also, please suggest what I can do to get rid of this deadlock.
MAXDOP is 2

Hi Tom,
Thank you so much for your reply. I was not expecting a reply on this, as I have tried multiple forums and didn't get a reply there.
For some reason I edited the original post and removed the stored procedure body and the deadlock which I had shared with everyone; please don't mind that.
I do not have a cascading delete trigger on [dbo].[OrderContactAddresses] which is trying to delete Phones.
I really like your suggestion about adding an "IsDeleted" flag and setting it to 1 on delete, but unfortunately my application has already been sold to other clients, and most of them have a huge amount of transactional data in their tables. At this point it is almost impossible for us to change the application code as well as the database. I agree that we should have done that back when we were creating the database.
So, for the time being, is there a way, solution, or recommendation from your side for deleting rows from these 20 tables in a sequence under one BEGIN TRANSACTION, without getting trapped in table locking and deadlock situations?
Your reply will be highly appreciated; thanks once again for your time looking at my post and understanding the situation.
If you want, I can restore the content I removed from the original post.
Please feel free to let me know if you have questions. -
SCOM 2012 SP1 Datawarehouse event id 31553 and after 2 minutes successful id 31572
Hello guys,
I'm running SCOM 2012 SP1, and in one of my 3 management groups I get the following events. I get them every few minutes and cannot explain it. I have checked the aggregation of the DW and it's fine. I have extended the timeout from 5 to 15 minutes. What should I do next? Please help with this. I have checked all the 2007 posts, but they are not related to my problem. I have tried a number of queries but they don't return any result. Do I have a SQL Server performance issue?
Date and Time:
10/27/2014 11:05:10 AM
Log Name:
Operations Manager
Source:
Health Service Modules
Generating Rule:
Data Warehouse related event collection
Event Number:
31553
Level:
Error
Logging Computer:
User:
N/A
Description:
Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations. Exception 'SqlException': Transaction (Process ID 71) was deadlocked on lock resources with another process and has been chosen as the deadlock
victim. Rerun the transaction.
One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.CollectPerformanceData
Instance name: server2.mymg.corp
Instance ID: {EBD49B2A-D314-07DF-9E2C-B79CF86B1A72}
Management group: MyMG
Date and Time:
10/27/2014 11:26:36 AM
Log Name:
Operations Manager
Source:
Health Service Modules
Generating Rule:
Data Warehouse related event collection
Event Number:
31553
Level:
Error
Logging Computer:
server10.mymg.corp
User:
N/A
Description:
Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations. Exception 'SqlException': Transaction (Process ID 68) was deadlocked on lock resources with another process and has been chosen as the deadlock
victim. Rerun the transaction.
One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.CollectPerformanceData
Instance name: server10.mymg.corp
Instance ID: {DBDD19A5-98B3-F90A-641E-7C4693BFD6EB}
Management group: MyMG
Event Data:
View Event Data
<DataItem type="System.XmlData" time="2014-10-27T11:26:36.2773982-05:00" sourceHealthServiceId="DBDD19A5-98B3-F90A-641E-7C4693BFD6EB">
  <EventData>
    <Data>MyMG</Data>
    <Data>Microsoft.SystemCenter.DataWarehouse.CollectPerformanceData</Data>
    <Data>server10.mymg.corp</Data>
    <Data>{DBDD19A5-98B3-F90A-641E-7C4693BFD6EB}</Data>
    <Data>SqlException</Data>
    <Data>Transaction (Process ID 68) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.</Data>
  </EventData>
</DataItem>
User:
N/A
After 2 minutes I get event 31572, which says "Data writer successfully performed data maintenance operations". What would be the reason for this?

There are a few things that can cause this. Your hint here is the deadlock message: processes are competing to write data in the DW, so you need to find out what that is. You also have a message stating that data is written to staging but times out while moving the data. If you happen to be running the Exchange MP, this creates an additional data set that can quietly clog up the works, but you would see that dataset in the event message as well.
Because you say aggregation is moving, try following these steps; they have served me well. I think you are seeing the results of data getting 'stuck', which causes the system to slowly choke itself. Note: you can focus on just the perf dataset, but it is always a good idea to make sure everything is healthy while you're at it.
Steps:
Check to see if the staging area for data written to the DW is clear and data is moving:
select count(*) from Alert.AlertStage
select count(*) from Event.EventStage
select count(*) from Perf.PerformanceStage
select count(*) from State.StateStage
These values normally rise and fall rapidly. If you open each table, they should also only contain recent data. If you find any data points older than a day, something is stuck. In one case I found State.StateStage stuck at around 14300 rows; a look at the data showed a few hundred rows with information 4 months old.
See if data is moving (if you have old data, the answer is evident):
1. select * from StandardDataset
2. Plug the appropriate GUID into exec StandardDatasetMaintenance @DatasetId='<GUID from query>'
3. See if the count decreases
If it doesn't, and the data is expendable, just truncate the table (don't be afraid to do this; if OpsMgr is choking, stop the choking and don't fret over missing data points):
truncate table state.statechange
Optional query:
Plug the DatasetId into the below query
USE [OperationsManagerDW]
DECLARE @DataSet uniqueidentifier
SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE DatasetId = '138B1324-BE31-42D7-A6CB-EA10139E309C')
EXEC StandardDatasetMaintenance @DataSet
Note: Valid current values are Event, Alert, CM, Perf, State
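The "older than a day" rule from the first step can be scripted rather than eyeballed. A minimal sketch, with fabricated sample timestamps standing in for rows read from the staging tables above:

```python
from datetime import datetime, timedelta

# Flags staging rows that have sat longer than a day, the "stuck"
# threshold described above. In practice the timestamps would come
# from the staging tables (Alert.AlertStage, Event.EventStage, ...);
# here they are fabricated sample values.
def stuck_rows(rows, now, max_age=timedelta(days=1)):
    """Return the timestamps older than max_age."""
    return [ts for ts in rows if now - ts > max_age]

now = datetime(2014, 10, 27, 11, 26)
rows = [
    now - timedelta(minutes=5),    # fresh, still draining
    now - timedelta(hours=23),     # old-ish but within a day
    now - timedelta(days=120),     # the "4 months old" case
]
print(len(stuck_rows(rows, now)))  # 1
```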
If these check out, it's time to:
1. See if any SQL jobs are running that could be interfering
2. Check whether you are close to running out of space and autogrow is killing your performance
3. Run the large-table query to see if you have unchecked perf data growth that exceeds the performance capability of the SQL box
You may indeed have a SQL performance issue, but that's the last thing you troubleshoot; make sure OpsMgr is healthy first -
Deadlocked error message when running crystal report
Hi All,
I need a bit of guidance as to the following problem.
Seems there is a lock being created in SQL when running some crystal reports.
When trying to run a report the following message is displayed:
Database connection Error: 'ADO Error Code: 0x
Source: Microsoft SQL Native Client
Description: Transaction (Process ID 302) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
SQL State: 40001
Native Error: [Database Vendor Code: 1205 ]'
The error is happening intermittently when refreshing the report. It does not seem to happen when no users are using the system.
Is there any way to stop this from happening? Or is this some kind of limitation with crystal and how it accesses the database?
Many Thanks,
Matt

Hi Matt,
I have had this problem with long-running queries as well. I ended up doing the main part of the query in a view using WITH (NOLOCK) for each table and then reporting on the view.
This had the extra benefits of making the report much simpler, and often making the query more efficient and easier to test as well.
Rob -
Revision: 21494
Revision: 21494
Author: [email protected]
Date: 2011-06-29 11:25:54 -0700 (Wed, 29 Jun 2011)
Log Message:
bug fix BLZ-581 Possible deadlock situation when sending message
change the scope of lock EndpointPushNotifier.pushNeeded to be minimal (retrieving the messages from the message buffer), that way, we can avoid the connection write failure to occupy the lock forever.
Checkintests pass
Ticket Links:
http://bugs.adobe.com/jira/browse/BLZ-581
Modified Paths:
blazeds/trunk/modules/core/src/flex/messaging/endpoints/BaseStreamingHTTPEndpoint.java

Adobe has donated BlazeDS to the Apache Flex community, and the source code is hosted on the Apache Flex website in a Git repository.
http://flex.apache.org/dev-sourcecode.html -
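The fix described in the log message, narrowing the lock to the buffer drain so a failed connection write can never hold it, can be sketched generically. This is an illustrative re-creation in Python, not the BlazeDS code:

```python
import threading

class Notifier:
    """Minimal model of a push notifier with a locked message buffer."""
    def __init__(self):
        self._lock = threading.Lock()
        self._buffer = []
        self.sent = []

    def queue(self, msg):
        with self._lock:
            self._buffer.append(msg)

    def push(self, write=None):
        # Hold the lock only while snapshotting and clearing the buffer...
        with self._lock:
            pending, self._buffer = self._buffer, []
        # ...then perform the (possibly slow or failing) write unlocked,
        # so a stalled connection cannot occupy the lock forever.
        for msg in pending:
            (write or self.sent.append)(msg)

n = Notifier()
n.queue("a"); n.queue("b")
n.push()
print(n.sent)   # ['a', 'b']
```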
Revision: 21209
Revision: 21209
Author: [email protected]
Date: 2011-05-04 18:26:18 -0700 (Wed, 04 May 2011)
Log Message:
bug fix BLZ-581 Possible deadlock situation when sending message
change the scope of lock EndpointPushNotifier.pushNeeded to be minimal (retrieving the messages from the message buffer), that way, we can avoid the connection write failure to occupy the lock forever.
Checkintests pass
Ticket Links:
http://bugs.adobe.com/jira/browse/BLZ-581
Modified Paths:
blazeds/trunk/modules/core/src/flex/messaging/endpoints/BaseStreamingHTTPEndpoint.java

Adobe has donated BlazeDS to the Apache Flex community, and the source code is hosted on the Apache Flex website in a Git repository.
http://flex.apache.org/dev-sourcecode.html -
Revision: 18873
Revision: 18873
Author: [email protected]
Date: 2010-11-30 12:22:15 -0800 (Tue, 30 Nov 2010)
Log Message:
Bug fix Watson 2724938 potential deadlock in flex.messaging.client.FlexClient
Maintain the pattern of using the FlexClient.lock and ensure that the order of locks always starts with the FlexClient.lock. This is critical to prevent deadlock cases.
Modified Paths:
blazeds/trunk/modules/core/src/flex/messaging/client/FlexClient.java -
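The rule in this log message, always acquiring FlexClient.lock first, is the classic lock-ordering discipline: if every thread takes locks in one fixed global order, no cycle of waiters can form, so deadlock is impossible. A generic sketch of the idea (not the FlexClient code):

```python
import threading

# Fixed global order: client_lock is always acquired before session_lock,
# mirroring the "always start with FlexClient.lock" rule above.
client_lock = threading.Lock()
session_lock = threading.Lock()

def update_both(log):
    with client_lock:          # first lock, per the ordering rule
        with session_lock:     # second lock, never the other way round
            log.append("updated")

log = []
threads = [threading.Thread(target=update_both, args=(log,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))   # 4
```

If one thread took session_lock first while another took client_lock first, each could end up waiting on the lock the other holds; the fixed order rules that cycle out.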
CK40N : no message log update after =cksu
Hi costing experts,
In the costing run statistics there are errors for the marking step, which I correct by entering =cksu according to SAP note 542790.
However, the message log for the marking step is not updated. It is hard to find the remaining messages in the overall message log, because there are some 400 messages in total and only 50 are still relevant according to the refreshed statistics.
Does anyone have any ideas about what can be done to update the message log?
Thanks,
Caroline.

I got the below error in the ST22 dump:
"Database error text........: "ORA-00060: deadlock detected while waiting for
resource"
Internal call code.........: "[RSQL/INSR//BIC/FZXX_C02 ]"
I was searching for OSS notes associated with the following keywords and was not able to find any notes related to my support pack level; we are on SP13 (SAPKW70013).
"DBIF_RSQL_SQL_ERROR" "CX_SY_OPEN_SQL_DB" "WRITE_ICFACT"
Any help will be greatly appreciated...
Thanks,
Raman -
No Message: InfoCube Update Completed
BW Gurus,
I am trying to load data from PSA to the cube using a DTP. During the data load, for certain data packages the load keeps on going; there is no error message as such, and it is still in yellow status under the "Updating to InfoCube ZXX_XX" node, after the "Write to fact table" tasks.
There is a message saying "No Message: InfoCube Update Completed". Could anyone help with the cause of this issue and also a fix? Is it a problem with the amount of data loaded?
Thanks,
Raman

I got the below error in the ST22 dump:
"Database error text........: "ORA-00060: deadlock detected while waiting for
resource"
Internal call code.........: "[RSQL/INSR//BIC/FZXX_C02 ]"
I was searching for OSS notes associated with the following keywords and was not able to find any notes related to my support pack level; we are on SP13 (SAPKW70013).
"DBIF_RSQL_SQL_ERROR" "CX_SY_OPEN_SQL_DB" "WRITE_ICFACT"
Any help will be greatly appreciated...
Thanks,
Raman -
Hi All,
<br/>
<br/>We are now facing some problems with our smartform printing programs. Sometimes some of the smartform programs appear to get stuck; from SM50 the processing time is endless, and in the detailed trace log there are repeated messages like the one below every ONE MINUTE:
<br/>I WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?
<br/>
<br/>I've searched the forum and also tried Google and SAP notes but could not find useful information matching my error, so could anyone please help analyse the exact cause of this problem? Any suggestions are highly appreciated.
<br/>
<br/>The smartform program itself is simple and correct; it just has one header and several items and uses the normal SSF_FUNCTION_MODULE_NAME and CALL FNAME approach to do the print work. Most of the time it works correctly, but sometimes it has a problem and then the deadlock error occurs. I guess maybe it's because a user tried to print faster than the printer could handle and then cancelled the job; in some situations the subsequent print jobs then get blocked and get stuck, but that's just a guess, and I have no proof to confirm it.
<br/>
<br/>Here I have attached the full trace log; I hope someone can give me an idea. Thank you very much!
<br/>
<br/>----
<br/>trc file: "dev_w38", trc level: 1, release: "701"
<br/>----
<br/>M Thu Dec 9 14:03:55 2010
<br/>M db_connect o.k.
<br/>M ICT: exclude compression: .zip,.cs,.rar,.arj,.z,.gz,.tar,.lzh,.cab,.hqx,.ace,.jar,.ear,.war,.css,.pdf,.js,.gzip
<br/>I MtxInit: 38 0 0
<br/>M SHM_PRES_BUF (addr: 0x700001050000000, size: 44000000)
<br/>M SHM_ROLL_AREA (addr: 0x700001060000000, size: 536870912)
<br/>M SHM_PAGING_AREA (addr: 0x700001080000000, size: 536870912)
<br/>M SHM_ROLL_ADM (addr: 0x700000006221000, size: 5506336)
<br/>M SHM_PAGING_ADM (addr: 0x7000010a0000000, size: 1311776)
<br/>M ThCreateNoBuffer allocated 544152 bytes for 1000 entries at 0x7000010b0002000
<br/>M ThCreateNoBuffer index size: 3000 elems
<br/>M ThCreateVBAdm allocated 11776 bytes (50 server) at 0x7000010d0000000
<br/>X EmInit: MmSetImplementation( 2 ).
<br/>X MM global diagnostic options set: 0
<br/>X EM/TOTAL_SIZE_MB = 262144
<br/>X mm.dump: set maximum dump mem to 96 MB
<br/>M Deactivate statistics hyper index locking
<br/>I *** INFO Shm 44 in Pool 40 18928 KB estimated 14500 KB real ( -4427 KB -24 %)
<br/>I *** INFO Shm 45 in Pool 40 12928 KB estimated 8500 KB real ( -4427 KB -35 %)
<br/>B dbntab: NTAB buffers attached
<br/>B dbntab: Buffer FTAB(hash header) (addr: 0x7000010b0088088, size: 584)
<br/>B dbntab: Buffer FTAB(anchor array) (addr: 0x7000010b00882d0, size: 1280008)
<br/>B dbntab: Buffer FTAB(item array) (addr: 0x7000010b01c0ad8, size: 5120000)
<br/>B dbntab: Buffer FTAB(data area) (addr: 0x7000010b06a2ad8, size: 122880000)
<br/>B dbntab: Buffer IREC(hash header) (addr: 0x7000010b7bd4088, size: 584)
<br/>B dbntab: Buffer IREC(anchor array) (addr: 0x7000010b7bd42d0, size: 1280008)
<br/>B dbntab: Buffer IREC(item array) (addr: 0x7000010b7d0cad8, size: 1280000)
<br/>B dbntab: Buffer IREC(data area) (addr: 0x7000010b7e452d8, size: 12288000)
<br/>B dbntab: Buffer STAB(hash header) (addr: 0x7000010b89ff088, size: 584)
<br/>B dbntab: Buffer STAB(anchor array) (addr: 0x7000010b89ff2d0, size: 1280008)
<br/>B dbntab: Buffer STAB(item array) (addr: 0x7000010b8b37ad8, size: 1280000)
<br/>B dbntab: Buffer STAB(data area) (addr: 0x7000010b8c702d8, size: 6144000)
<br/>B dbntab: Buffer TTAB(hash header) (addr: 0x7000010b924e088, size: 6720)
<br/>B dbntab: Buffer TTAB(anchor array) (addr: 0x7000010b924fac8, size: 1280008)
B dbntab: Buffer TTAB(item array) (addr: 0x7000010b93882d0, size: 3200000)
B dbntab: Buffer TTAB(data area) (addr: 0x7000010b96956d0, size: 23360000)
B db_con_shm_ini: WP_ID = 38, WP_CNT = 59, CON_ID = -1
B dbstat: TABSTAT buffer attached (addr: 0x7000010f002d2d0)
B dbtbxbuf: Buffer TABL (addr: 0x700001100000100, size: 180000000, end: 0x70000110aba9600)
B dbtbxbuf: Buffer TABLP (addr: 0x700000006763100, size: 20480000, end: 0x700000007aeb100)
B dbexpbuf: Buffer EIBUF (addr: 0x700000007aec108, size: 67108864, end: 0x70000000baec108)
B dbexpbuf: Buffer ESM (addr: 0x700001110000108, size: 4194304, end: 0x700001110400108)
B dbexpbuf: Buffer CUA (addr: 0x7000010bace2108, size: 18432000, end: 0x7000010bbe76108)
B dbexpbuf: Buffer OTR (addr: 0x700001120000108, size: 4194304, end: 0x700001120400108)
B dbcalbuf: Buffer CALE (addr: 0x70000000baee000, size: 500000, end: 0x70000000bb68120)
M CCMS: AlInitGlobals : alert/use_sema_lock = TRUE.
S *** init spool environment
S TSPEVJOB updates inside critical section: event_update_nocsec = 0
S initialize debug system
T Stack direction is downwards.
T debug control: prepare exclude for printer trace
T new memory block 0x114388060
S spool kernel/ddic check: Ok
S using table TSP02FX for frontend printing
S 1 spool work process(es) found
S frontend print via spool service enabled
S printer list size is 150
S printer type list size is 50
S queue size (profile) = 300
S hostspool list size = 3000
S option list size is 30
I *** INFO Shm 49 in Pool 40 2898 KB estimated 1632 KB real ( -1266 KB -44 %)
S found processing queue enabled
S found spool memory service RSPO-RCLOCKS at 0x7000010bbe77070
S doing lock recovery
S setting server cache root
S found spool memory service RSPO-SERVERCACHE at 0x7000010bbe78160
S using messages for server info
S size of spec char cache entry: 297032 bytes (timeout 100 sec)
S size of open spool request entry: 2512 bytes
S immediate print option for implicitely closed spool requests is disabled
A **GENER Trace switched on ***
A
A -PXA--
A PXA INITIALIZATION
A PXA: Locked PXA-Semaphore.
A System page size: 4kb, total admin_size: 237304kb, dir_size: 58960kb.
A Attached to PXA (address 0x700001130000000, size 3000000K, 4 fragments of 690676K )
A
A Thu Dec 9 14:03:59 2010
A abap/pxa = shared unprotect gen_remote
A PXA INITIALIZATION FINISHED
A -PXA--
A
A ABAP ShmAdm attached (addr=0x700000f4046c000 leng=20955136 end=0x700000f41868000)
A >> Shm MMADM area (addr=0x700000f40915418 leng=247168 end=0x700000f40951998)
A >> Shm MMDAT area (addr=0x700000f40952000 leng=15818752 end=0x700000f41868000)
A RFC rfc/signon_error_log = -1
A RFC rfc/dump_connection_info = 0
A RFC rfc/dump_client_info = 0
A RFC rfc/cp_convert/ignore_error = 1
A RFC rfc/cp_convert/conversion_char = 23
A RFC rfc/wan_compress/threshold = 251
A RFC rfc/recorder_pcs not set, use defaule value: 2
A RFC rfc/delta_trc_level not set, use default value: 0
A RFC rfc/no_uuid_check not set, use default value: 0
A RFC rfc/bc_ignore_thcmaccp_retcode not set, use default value: 0
A RFC Method> initialize RemObjDriver for ABAP Objects
M ThrCreateShObjects allocated 122630 bytes at 0x70000000c124000
N SsfSapSecin: putenv(SECUDIR=/usr/sap/PRD/DVEBMGS00/sec): ok
N
N =================================================
N === SSF INITIALIZATION:
N ===...SSF Security Toolkit name SAPSECULIB .
N ===...SSF library is /usr/sap/PRD/DVEBMGS00/exe/libsapcrypto.o .
N ===...SSF default hash algorithm is SHA1 .
N ===...SSF default symmetric encryption algorithm is DES-CBC .
N ===...SECUDIR="/usr/sap/PRD/DVEBMGS00/sec"
N ===...loading of Security Toolkit successfully completed.
N === SAPCRYPTOLIB 5.5.5C pl29 (Jan 30 2010) MT-safe
N =================================================
N MskiInitLogonTicketCacheHandle: Logon Ticket cache pointer retrieved from shared memory.
N MskiInitLogonTicketCacheHandle: Workprocess runs with Logon Ticket cache.
M JrfcVmcRegisterNativesDriver o.k.
W =================================================
W === ipl_Init() called
B dbtran INFO (init_connection '<DEFAULT>' [ORACLE:700.08]):
B max_blocking_factor = 5, max_in_blocking_factor = 5,
B min_blocking_factor = 5, min_in_blocking_factor = 5,
B prefer_union_all = 0, prefer_join = 0,
B prefer_fix_blocking = 0, prefer_in_itab_opt = 1,
B convert AVG = 0, alias table FUPD = 0,
B escape_as_literal = 1, opt GE LE to BETWEEN = 0,
B select * =0x0f, character encoding = STD / <none>:-,
B use_hints = abap->1, dbif->0x1, upto->2147483647, rule_in->0,
B rule_fae->0, concat_fae->0, concat_fae_or->0
W ITS Plugin: Path dw_gui
W ITS Plugin: Description ITS Plugin - ITS rendering DLL
W ITS Plugin: sizeof(SAP_UC) 2
W ITS Plugin: Release: 701, [7010.0.97.20020600]
W ITS Plugin: Int.version, [33]
W ITS Plugin: Feature set: [22]
W ===... Calling itsp_Init in external dll ===>
W PpioRecoverLocks, table: 0x700000f418f2778
W PpioRecoverLocks, number of file locks 256
W PpioRecoverLocks: file lock set to: (nil)
W PpioRecoverLocks: directory lock set to: (nil)
W PpioRecoverLocks: global lock set to: (nil)
W PpioRecoverLocks() done
W PprcRecoverLocks, table: 0x700000f418f27e8
W PprcRecoverLocks: directory lock set to: (nil)
W PprcRecoverLocks() done
W === ipl_Init() returns 0, ITSPE_OK: OK
W =================================================
N VSI: WP init in ABAP VM completed with rc=0
E Profile-Parameter: enque/deque_wait_answer = FALSE
E Profile-Parameter: enque/sync_dequeall = 0
E EnqId_SuppressIpc: local EnqId initialization o.k.
E EnqCcInitialize: local enqueue client init o.k.
M ThCheckPrevUser: previous user was T78/M0, clean counter 0
M ThCheckPrevUser: clean previous user T78/U26013/M0/I2/V-1
M
M Modeinfo for User T78/M0
M
M tm state = 4
M uid = 26013
M term type = 0x4
M display = 0x8
M cpic_no = 0
M cpic_idx = -1
M usr = >8000199 <
M terminal = >ceegsap20 <
M client = >800<
M conversation_ID = > <
M appc_tm_conv_idx = -1
M its_plugin = NO
M allowCreateMode = YES
M wp_ca block = -1
M appc_ca block = -1
M blockSoftCanel = NO
M session_id = >4CFF77CE4A6A0068E10080000A04C87E<
M ext_session_id = >4CFF77CE4A6A0068E10080000A04C87E<
M imode = 2
M mode state = 0x1a
M mode clean_state = 2
M task_type = ZTTADIA
M lastThFc = THFCTERM
M lastAction = TH_IACT_NO_ACTION
M th_errno = 0
M rollout_reason = 1
M last_rollout_level = 7
M async_receives = 0
M cpic_receive = 0
M em handle = 67
M roll state = 3
M abap state = 3
M em state = 2
M eg state = 1
M spa state = 3
M enq state = 0
M softcancel = 1
M cancelInitiator = DISPATCHER
M clean_state = DP_SOFTCANCEL
M next hook = T-1/U-1/M255
M master hook = T-1/U-1/M255
M slave hook = T-1/U-1/M255
M debug_tid = 255
M debug_mode = 0
M mode type = 0x1
M debug = 0
M msg_count = 6
M tcode = >ZPP015 <
M last_wp = 38
M client conversation_ID = > <
M server conversation_ID = > <
M lock = 0
M max enq infos = 0
M act enq infos = 0
M em_hyper_hdl = 0x700000f41d918e8
M plugin_info = NULL
M act_plugin_hdl = -1
M act_plugin_no = 0
M max_plugin_no = 0
M
M ThCheckPrevUser: reset spa state for user T78/U26013/M0
M ThSetDoSafeCleanup: th_do_safe_cleanup = FALSE (wanted FALSE)
M LOCK WP ca_blk 44
M ThAtWpBlk: set zttatiln to zero
M ThAtWpBlk: set zttatoln to zero
M DpVmcGetVmByTmAdm: no VM found for T78/M0/I2
M LOCK APPC ca_blk 640
M set task type ZTTADIA
M ThCleanPrevUser: clean U26013 T78 M0 I2 no VM clean state DP_SOFTCANCEL clean counter 1
M ThCleanPrevUser: saved MODE_REC = 10
M PfStatDisconnect: disconnect statistics
M ThCleanPrevUser: found soft cancel flag
M ThSoftCancel: set clean state of T78/M0 to DP_DEFAULT_CLEANING
M ThSoftCancel session in state TM_DISCONNECTED, delete mode
M ThIAMDel: delete tid/mode 78/0 (th_errno 47, release 1)
M ThIDeleteMode (78, 0, 3, ><, 0, 255, TRUE)
M ThIDeleteMode: no modes found ..
M no sub modes
M ThCheckMemoryState (0, 0, 1)
M ThRollIn: roll in T78/U26013/M0/I2 (level=7, abap_level=1, attach_em=1)
M ThCheckEmState: check ATTACH for em hdl 67
M ThCheckEmState: call EmContextAttach (em_hdl=67)
I Thu Dec 9 14:05:00 2010
I WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?
I Thu Dec 9 14:06:00 2010
I WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?
I Thu Dec 9 14:07:00 2010
I WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?
I Thu Dec 9 14:08:00 2010
I WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?

Hi Sitarama,
Thanks very much for your fast reply. Yes, the smartforms are custom-built. Besides the deadlock error, there are still errors in SP01 saying "Could not pass request to host spool system", although the preview in SP01 works fine. The detailed SP01 error log is below:
Print request processing log
Errors occurred processing this print request
Error during print request output. l_rc = 99
There may be no printout
Most important attributes of spool request
Request number 2828
Request name SMART LOCA 8000199
Client 800
Owner 8000199
Request attributes
Time created 2010120906014800
Remaining life +00007235800
Dispo 1 (Go/Hold) G
Dispo 2 (Keep/Delete) D
Dispo 3 (Indirect/Direct) D
Default output device LOCA
Default no. copies 1
Format ZTEST
Main print request characteristics
Spool request number 2828
Print request number 1
Print request attributes
Time created 2010120906020500
Output device LOCA
Format ZTEST
What do you mean by parallel processing at the table level? The smartforms extract data from different SAP tables and pass it to the smartform interface, where it is added to the form pages. Do you think this results in parallel processing on those tables?

Thank you and hope to hear more from you.
Best Regards,
Jeff
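For traces like the one above, a quick way to see whether the same mutex and owner keep recurring is to count the "MtxLock ... deadlock ?" warnings. This is just a sketch: the trace file name dev_w38 and the helper function are my own, not part of any SAP tool.

```python
import re
from collections import Counter

# Matches the warning lines seen in the trace above, e.g.
#   I  WARNING: MtxLock 0x70000000636197c rrol0046 owner=33 deadlock ?
WARN_RE = re.compile(
    r"WARNING: MtxLock (0x[0-9a-f]+) (\S+) owner=(\d+) deadlock \?"
)

def count_mtxlock_warnings(lines):
    """Return a Counter keyed by (mutex_addr, lock_name, owner_wp)."""
    hits = Counter()
    for line in lines:
        m = WARN_RE.search(line)
        if m:
            hits[m.groups()] += 1
    return hits

if __name__ == "__main__":
    # Assumed trace file name; adjust to your work process number.
    with open("dev_w38") as f:
        for key, n in count_mtxlock_warnings(f).most_common():
            print(n, *key)
```

If one (address, owner) pair dominates the output, the same work process is repeatedly stuck on the same mutex, which narrows down where to look next.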