Using MDBs for long running transactions
Although MDBs are not the best vehicles for running long transactions, I am
forced to use them for one such scenario (let's say for lack of a better
pattern). In order to let my long running MDB (with Container Managed Tx)
do its chores, I increased the time-out value to a higher number rather than
using the default of 30 secs. Strangely, I was seeing
IllegalStateExceptions in stdout. So I created a brand new test MDB with a
Thread.sleep for 60 seconds, increased my MDB's timeout value to 120 secs,
made sure there was only one MDB in the pool and ran the test again. I
still see the below error after 30 seconds.
I guess I should probably open a support case, but I thought I'd post here
as well in case there's something I am missing.
<May 27, 2003 5:26:31 PM PDT> <Notice> <EJB> <Error marking transaction for rollback: java.lang.IllegalStateException: Cannot mark the transaction for rollback. xid=64:bea55f200db2c786, status=Rolled back. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 34 seconds
Xid=64:bea55f200db2c786(-33600248),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=34,seconds left=30,activeThread=Thread[ExecuteThread: '9' for queue: 'default',5,Thread Group for Queue: 'default'],ServerResourceInfo[JMS_hmJDBCStore]=(state=started,assigned=none),SCInfo[wlcsDomain+wlcsServer]=(state=active),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=wlcsServer+155.14.3.140:7501+wlcsDomain+,Resources={})],CoordinatorURL=wlcsServer+155.14.3.140:7501+wlcsDomain+)]
java.lang.IllegalStateException: Cannot mark the transaction for rollback. xid=64:bea55f200db2c786, status=Rolled back. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 34 seconds
Xid=64:bea55f200db2c786(-33600248),Status=Active,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=34,seconds left=30,activeThread=Thread[ExecuteThread: '9' for queue: 'default',5,Thread Group for Queue: 'default'],ServerResourceInfo[JMS_hmJDBCStore]=(state=started,assigned=none),SCInfo[wlcsDomain+wlcsServer]=(state=active),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=wlcsServer+155.14.3.140:7501+wlcsDomain+,Resources={})],CoordinatorURL=wlcsServer+155.14.3.140:7501+wlcsDomain+)]
    at weblogic.transaction.internal.TransactionImpl.throwIllegalStateException(TransactionImpl.java:1486)
    at weblogic.transaction.internal.TransactionImpl.setRollbackOnly(TransactionImpl.java:466)
    at weblogic.ejb20.manager.BaseEJBManager.handleSystemException(BaseEJBManager.java:255)
    at weblogic.ejb20.manager.BaseEJBManager.setupTxListener(BaseEJBManager.java:215)
    at weblogic.ejb20.manager.StatelessManager.preInvoke(StatelessManager.java:153)
    at weblogic.ejb20.internal.BaseEJBObject.preInvoke(BaseEJBObject.java:117)
    at weblogic.ejb20.internal.StatelessEJBObject.preInvoke(StatelessEJBObject.java:63)
    at com.xoriant.hm.ejb.session.HierarchyManagerBean_fzysig_EOImpl.getHierarchyId(HierarchyManagerBean_fzysig_EOImpl.java:1477)
    at com.ebiz.application.customerprofile.hm.CPXHMController.SynchronizeMHTH(Unknown Source)
    at com.ebiz.application.customerprofile.hm.CPHMOrgGroupMsgBean.onMessage(Unknown Source)
    at weblogic.ejb20.internal.MDListener.execute(MDListener.java:254)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
Hi Adarsh,
It may be that the transaction time-out setting in the descriptor is not taking effect. The tx is still timing out after the default 30 seconds, so the later attempt to call "setRollbackOnly" fails because the transaction has already rolled back. The ignored descriptor setting is a known issue in some earlier SPs, but I'm not sure when and where it was fixed - so yes, contact customer support. The work-around is to set the default transaction time-out for the entire server to a higher value. (I'm not sure where to set this on the console, but the relevant JTA MBean field is "TimeoutSeconds".)
Tom
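As a sketch of both settings Tom mentions (element names follow the WebLogic 7.x/8.1 descriptor and config.xml schemas; verify against your release, and treat the values as examples):

```xml
<!-- Per-bean override in weblogic-ejb-jar.xml: -->
<weblogic-enterprise-bean>
  <ejb-name>CPHMOrgGroupMsgBean</ejb-name>
  <transaction-descriptor>
    <trans-timeout-seconds>300</trans-timeout-seconds>
  </transaction-descriptor>
</weblogic-enterprise-bean>

<!-- Server-wide default via the JTA MBean's TimeoutSeconds, in config.xml: -->
<JTA TimeoutSeconds="300"/>
```

If the per-bean descriptor setting is the one being ignored by your SP, the server-wide JTA default is the knob that will actually take effect.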
Adarsh Dattani wrote:
> [original message and stack trace quoted in full above]
Similar Messages
-
RZ20 - Is there an alert for long running transactions?
In RZ20 is there an alert for long running transactions?
http://help.sap.com/saphelp_nw04s/helpdata/en/9c/f78b3ba19dfe47e10000000a11402f/content.htm
This document clearly explains your problem.
"Reward points if useful" -
Is there any time out defined for long running transaction?
hi,
I have to make one big data transferring script. Though a transaction is not required here, I was planning to use one. Please tell me: is there any time-out for long running transactions? I have to run the script from the database itself.
yours sincerely

Can you show us an example of your script? You can divide the transaction into small chunks to reduce time and locking/blocking as well.
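The chunking advice above can be made concrete with a minimal sketch (plain Java, database-agnostic; the 1,000-row batch size is just an example):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedWork {
    /** Returns [start, endExclusive) row ranges of at most chunkSize rows each. */
    public static List<int[]> chunks(int totalRows, int chunkSize) {
        List<int[]> out = new ArrayList<>();
        for (int start = 0; start < totalRows; start += chunkSize) {
            out.add(new int[] { start, Math.min(start + chunkSize, totalRows) });
        }
        return out;
    }

    public static void main(String[] args) {
        // 10,500 rows committed 1,000 at a time -> 11 short transactions
        for (int[] range : chunks(10_500, 1_000)) {
            // begin transaction; process rows [range[0], range[1]); commit
        }
        System.out.println(chunks(10_500, 1_000).size()); // prints 11
    }
}
```

Each chunk commits on its own, so locks are held only for the duration of one batch rather than the whole script.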
Best Regards,
Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Long-running transactions and the performance penalty
If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
I didn't find any description of such penalties.
So my questions to gurus:
Does it create some additional persistence point(s) / commit(s) in LR orchestration/scope?
Where are these persistence points happen, especially in LR orchestration?
Leonid Ganeline [BizTalk MVP] BizTalk Development Architecture

The wording may make it sound so, but IMHO, if during the build of an orchestration we get carried away with scope shapes we end up with more persistence points, which do affect performance, so one additional point should not make so much of a difference. It may have been put in because of end-user feedback, where people may have opted for long running transactions without realizing the performance overheads and, in subsequent performance optimization sessions with Microsoft, put it on the product enhancement list as "provide us with an indication if we're to incur performance penalties". A lot of people design orchestrations like they write code (not saying that is a bad thing), where they use the scope shape along the lines of a try/catch block, and what with Microsoft marketing Long Running Transactions/Compensation blocks as USPs for BizTalk, people did get carried away into using them without understanding the implications.
Not saying that there are no additional persistence points added, but just wondering if adding one is sufficient to warrant the warning. But if I nest enough scope shapes and mark them all as long-running, they may add up.
So when I looked at things other than persistence points, I tried to think about how one might implement the long running transaction (nested, incorporating atomic, etc.). Would you be able to leverage the .NET transaction object (something the pipelines use and execute under), or would that model not handle the complexities of a Long Running Transaction, which by very definition spans days/months? Keeping .NET Transaction objects active, or serializing/de-serializing them into operating context, would cause more issues.
Regards. -
Problem handing long running transaction
Hi,
I have a long running transaction, and there is a high possibility that
someone else will make changes to one of the same objects in that
transaction. How is everyone handling this situation? Is it best to
catch the JDOException thrown and attempt the transaction one more time
or to return to the user on "failure"?
Also, would it be possible with Kodo to set Pessimistic transactions for just this case and use optimistic for "read" operations?
Kam

> Also, would it be possible with Kodo to set Pessimistic transactions for just this case and use optimistic for "read" operations?

You can easily switch between optimistic and pessimistic transactions:
pm.currentTransaction ().setOptimistic (val);
You can also use pessimistic transactions for everything, and set the
nontransactional read property to true to do reads outside of transactions.
Note, however, that you shouldn't use pessimistic locking for long-running
transactions in most cases. They use too many DB resources. It's better to
use optimistic transactions and deal with the errors (or give them to the user). -
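The "catch the exception and attempt the transaction one more time" idea from the question can be sketched generically (plain Java; with Kodo the caught type would be the JDOException mentioned above, and each attempt would begin and commit a fresh transaction):

```java
import java.util.function.Supplier;

public class RetryOnce {
    /**
     * Runs a unit of work; if it fails (e.g. an optimistic-lock conflict
     * surfacing as a runtime exception), tries exactly one more time.
     */
    public static <T> T run(Supplier<T> work) {
        try {
            return work.get();
        } catch (RuntimeException firstFailure) {
            return work.get(); // second and final attempt; its failure propagates
        }
    }
}
```

Beyond one retry, returning the failure to the user (as the original post suggests) is usually the saner choice.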
Long running transactions.
Hi all,
In real world systems, how efficient are long running flat transactions in EJB 2.0, since the transactions tend to lock several tables that are part of the transaction?
Could anyone explain in detail or direct me to any websites that share real-world experiences with transactions in an EJB environment.
Thanks in advance.

Long running transactions are real performance killers; avoid them if you can.
It's actually quite easy (and sensible) to break down long transactions into smaller transactions with a little common-sense.
For example, funds transfer. Say you want to transfer money from one account to another account from country A (which is only open on Tuesdays) to country B (only open on Fridays).
Option A: Start a long running transaction on Tuesday, commit it on Friday.
The resources used to do this are very much wasted - and will cause performance degradation.
Option B: Transfer the funds from Country A to Country C (which is open Tuesday and Friday) - in one transaction on Tuesday. Transfer funds from Country C to country B in a second transaction on Friday. If C to B fails, transfer the funds back - if it succeeds inform A of the success. It's more complex, but there's no free lunch. (This is how real banks do it - ever got a refund from the bank? Well, no, but you know what I mean; it's also why it takes five days for a cheque to clear etc etc).
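Option B is essentially the compensating-transaction pattern. A toy sketch (account names and in-memory balances are purely illustrative; each move() stands in for one short, independently committed transaction):

```java
import java.util.HashMap;
import java.util.Map;

public class CompensatingTransfer {
    final Map<String, Integer> balances = new HashMap<>();

    /** One short transaction: debit 'from', credit 'to'. */
    void move(String from, String to, int amount) {
        if (balances.getOrDefault(from, 0) < amount)
            throw new IllegalStateException("insufficient funds in " + from);
        balances.merge(from, -amount, Integer::sum);
        balances.merge(to, amount, Integer::sum);
    }

    /** A -> C now, C -> B later; if the second leg fails, compensate with C -> A. */
    boolean transferVia(String a, String b, String c, int amount) {
        move(a, c, amount);          // transaction 1 (Tuesday)
        try {
            move(c, b, amount);      // transaction 2 (Friday)
            return true;
        } catch (RuntimeException legTwoFailed) {
            move(c, a, amount);      // compensating transaction
            return false;
        }
    }
}
```

No lock spans the days between the two legs; consistency is restored by running a compensating step rather than by holding one long transaction open.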
This is only a narrative - but there are very few cases where this cannot work in a real transaction. -
Explicit commit during a long-running transaction in EclipseLink
Hi,
I am currently upgrading a J2EE application from OAS with Toplink Essentials to WL 10.3.3 with Eclipselink and have the following issue with transactions.
The application was developed to have long-running transactions for business reasons in specific scenarios. However, some other queries must be created and committed along the way to make sure that we have this specific data in the database before the final commit. This call (and subsequent code) is in an EJB method that has the "@TransactionAttribute(TransactionAttributeType.REQUIRED)" defined on it. Taking this out gives me the same behaviour.
The application has the following implementation of the process, which fails:
Code
EntityManager em = PersistenceUtil.createEntityManager();
em.getTransaction().begin();
PersistenceUtil.saveOrUpdate(em,folder);
em.getTransaction().commit(); --->>>>FAILS HERE
Error
javax.ejb.EJBTransactionRolledbackException: EJB Exception: : javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Cannot call Connection.rollback in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
So I tried the following to see if it would work, but I believe the transaction ends and anything after that will fail, since it requires a transaction to continue.
PersistenceUtil.getUnitOfWork().writeChanges();
PersistenceUtil.getUnitOfWork().commit();
Error
javax.persistence.TransactionRequiredException: joinTransaction has been called on a resource-local EntityManager which is unable to register for a JTA transaction.
Can anyone help me as to how to commit a transaction within the long running transaction in this environment? I also want to be sure that the long-running transaction does not fail or is not stopped along the way.
Thanking you in advance

You seem to be using JTA, so you cannot use JPA transactions; you must define your transaction in JTA, such as in your SessionBean.
When using JTA you should never use,
em.getTransaction().begin();
If you do not want to use JTA, then you need to set your persistence unit to be RESOURCE_LOCAL in your persistence.xml.
Also you need to ensure you use a non-jta enabled DataSource.
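For reference, a resource-local unit as described would look roughly like this in persistence.xml (the unit and data-source names here are placeholders):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
    <!-- a non-JTA data source, so em.getTransaction().begin()/commit() is legal -->
    <non-jta-data-source>jdbc/myNonJtaDS</non-jta-data-source>
  </persistence-unit>
</persistence>
```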
James : http://www.eclipselink.org -
IMDB Cache group load and long running transaction
Hello,
We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read only cache group consisting of 3 tables with approximatley, 88 million rows, 74 million rows and 570 million rows in each table. To load the cache group I run the following -
LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;

ttLogHolds shows -
Command> call ttLogHolds ;
< 0, 12161024, Long-Running Transaction , 1.1310 >
< 170, 30025728, Checkpoint , Entity.ds0 >
< 315, 29945856, Checkpoint , Entity.ds1 >
3 rows found.

I read this as saying that everything from log 0 to current must be kept for the long running transaction. From what I can see the long running transaction is the cache group load. Is this expected? I was expecting the commit in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times in the load so I can see that the commit is taking place.
Thanks
Mark

Hello,
I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
1. Autocommit left as the default -
Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
(Default setting AutoCommit=1)
Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;

Logholds shows a long running transaction -
Command> call ttlogholds ;
< 0, 11915264, Long-Running Transaction , 1.79 >
< 474, 29114368, Checkpoint , Entity.ds0 >
< 540, 1968128, Checkpoint , Entity.ds1 >
3 rows found.

And ttXactAdmin shows only the load running -
2011-01-19 14:10:03.135
/prod100/oradata/ENTITY/Entity
TimesTen Release 11.2.1.6.1
Outstanding locks
PID Context TransID TransStatus Resource ResourceID Mode SqlCmdID Name
Program File Name: timestenorad
28427 0x16fd6910 7.26 Active Database 0x01312d0001312d00 IX 0
Table 718080 W 69211971680 TRAQDBA.ENT_TO_EVIDENCE_MAP
Table 718064 W 69211971680 TRAQDBA.AADNA
Command 69211971680 S 69211971680
8.10029 Active Database 0x01312d0001312d00 IX 0
9.10582 Active Database 0x01312d0001312d00 IX 0
10.10477 Active Database 0x01312d0001312d00 IX 0
11.10332 Active Database 0x01312d0001312d00 IX 0
12.10546 Active Database 0x01312d0001312d00 IX 0
13.10261 Active Database 0x01312d0001312d00 IX 0
14.10637 Active Database 0x01312d0001312d00 IX 0
15.10669 Active Database 0x01312d0001312d00 IX 0
16.10111 Active Database 0x01312d0001312d00 IX 0
Program File Name: ttIsqlCmd
29317 0xde257d0 1.79 Active Database 0x01312d0001312d00 IX 0
Row BMUFVUAAAAKAAAAPD0 S 69211584104 SYS.TABLES
Command 69211584104 S 69211584104
11 outstanding transactions found

And the commands were
< 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
< 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >

Running the load again with autocommit off -
Command> AutoCommit
autocommit = 1 (ON)
Command> AutoCommit 0
Command> AutoCommit
autocommit = 0 (OFF)
Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;

Logholds shows a long running transaction
Command> call ttlogholds ;
< 1081, 6617088, Long-Running Transaction , 2.50157 >
< 1622, 10377216, Checkpoint , Entity.ds0 >
< 1668, 55009280, Checkpoint , Entity.ds1 >
3 rows found.

And ttXactAdmin shows only the load running -
er.oracle$ ttXactAdmin entity
2011-01-20 07:23:54.125
/prod100/oradata/ENTITY/Entity
TimesTen Release 11.2.1.6.1
Outstanding locks
PID Context TransID TransStatus Resource ResourceID Mode SqlCmdID Name
Program File Name: ttIsqlCmd
2368 0x12bb37d0 2.50157 Active Database 0x01312d0001312d00 IX 0
Row BMUFVUAAAAKAAAAPD0 S 69211634216 SYS.TABLES
Command 69211634216 S 69211634216
Program File Name: timestenorad
28427 0x2abb580af2a0 7.2358 Active Database 0x01312d0001312d00 IX 0
Table 718080 W 69212120320 TRAQDBA.ENT_TO_EVIDENCE_MAP
Table 718064 W 69212120320 TRAQDBA.AADNA
Command 69212120320 S 69212120320
8.24870 Active Database 0x01312d0001312d00 IX 0
9.26055 Active Database 0x01312d0001312d00 IX 0
10.25659 Active Database 0x01312d0001312d00 IX 0
11.25469 Active Database 0x01312d0001312d00 IX 0
12.25694 Active Database 0x01312d0001312d00 IX 0
13.25465 Active Database 0x01312d0001312d00 IX 0
14.25841 Active Database 0x01312d0001312d00 IX 0
15.26288 Active Database 0x01312d0001312d00 IX 0
16.24924 Active Database 0x01312d0001312d00 IX 0
11 outstanding transactions found

What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. Logholds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttlogholds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
Command> select sysdate from dual ;
< 2011-01-20 09:01:37 >
1 row found.
Command> call ttlogholds ;
< 2427, 39278592, Checkpoint , Entity.ds1 >
< 2580, 22136832, Checkpoint , Entity.ds0 >
2 rows found.
Command> select sysdate from dual ;
< 2011-01-20 09:01:41 >
1 row found.
Command> call ttlogholds ;
< 2427, 39290880, Long-Running Transaction , 2.50167 >
< 2580, 22136832, Checkpoint , Entity.ds0 >
< 2929, 65347584, Checkpoint , Entity.ds1 >
3 rows found.

This roughly matches the time the query that selects the rows for the first child table started in Oracle
traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
2 from v$sql_monitor sm, v$sql s
3 where sm.sql_id = 'd6fmfrymgs5dn'
4 and sm.sql_id = s.sql_id ;
SQL_ID SQL_EXEC_START SQL_FULLTEXT
d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
"TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
"."AADNA"."ADR_ADDRESS_NAME_KEY"
Elapsed: 00:00:00.00

Thanks
Mark -
Sequential Convoy and Long running transaction: Messages still referenced
Hi everyone
Being a BizTalk developer since 2006, this thing still stumps me.
I have a sequential convoy singleton orchestration that debatches messages using a rcvPipeline. The orchestration is needed in a FIFO scenario. In order to execute a rcvPipeline within an orchestration I need to encapsulate it within an atomic transaction scope.
In order to have an atomic scope the orchestration needs to be long running. I have also encapsulated the atomic transaction within a scope (using long running transactions) to have exception handling.
Everything works fine except for one major detail:
When the orchestration executes, the messages are still in the messagebox. I can even click on the orchestration instance in the MGMT console and look at the message! Tracking is disabled for the receive port as well as for the orchestration. Still, the messages do not get cleaned up.
I have set my DTA-purge to 1 hour and it works fine, but the messages are still in the orchestration.
My guess is that the long running transactions do not complete (although it looks like they should) and since the transaction is not completed the messages are not removed from the message box.
So, to summarize: Is it possible to combine long running transactions and a singleton orchestration?
//Mikael Sand (MCTS, ICC 2011) - Blog Logica Sweden

So after a day of looking for the solution it is quite clear that you are right in that the atomic transaction does not commit. I added a compensation block with trace info and it is never hit.
I also experimented with Isolation level on the atomic transaction and that did nothing.
Lastly I also made the sendport direct bound and also tried "specify later binding" to a physical port.
The messages are still being referenced by the orchestration! What can I do to make the atomic transaction commit?
//Mikael Sand (MCTS, ICC 2011) -
Blog Logica Sweden -
Alert monitor for long running background jobs
Hello,
I have to configure an alert monitor for long running background jobs which run for more than 20000 secs, using a rule-based MTE. I have created a rule-based MTE and assigned MTE class CCMS_GET_MTE_BY_CLASS to a virtual node but I don't find a node to specify the time.
could any one guide me how can i do this.
Thanks,
Kasi

Hi *,
I think the missing bit is where to set the maximum runtime. The runtime is set in the collection method and not the MTE class.
process: rz20 --> SAP CCMS Technical Expert Monitors --> All Contexts on local application server --> background --> long-running jobs. Click on 'Jobs over Runtime Limits' then properties, click the methods tab then double click 'CCMS_LONGRUNNING_JOB_COLLECT', in the parameters tab you can then set the maximum runtime.
If you need to monitor specific jobs, follow the process (http://help.sap.com/saphelp_nw70/helpdata/en/1d/ab3207b610e3408fff44d6b1de15e6/content.htm) to create the rule based monitor, then follow this process to set the runtime.
Hope this helps.
Regards,
Riyaan.
Edited by: Riyaan Mahri on Oct 22, 2009 5:07 PM
Edited by: Riyaan Mahri on Oct 22, 2009 5:08 PM -
Profiler execution plan ONLY for long running queries
The duration only applies to specific profiler events; however, I'd like to capture the execution plan ONLY for queries over 10 minutes.
Is there a way to do this using Xevents?
Anyone knows?
Thanks!
Paula

I've wanted that too but could not find a way to get it from profiler.
But it may be possible with xevents (or without xevents!) to watch for long-running queries and then get the plan from the cache, where it will probably stick for some time, using DMVs.
Josh -
Will rollback failure cause long-running transaction?
We are getting the following error for one transaction
[TimesTen][TimesTen 5.1.35 CLIENT]Communication link failure. System call select() failed with OS error 110. This operation has Timed Out. Try increasing your ODBC timeout attribute or check to make sure the target TimesTen Server is running
After that application tries to do a rollback, but rollback failed.
Will this transaction become a long-running transaction in the server?

Have you filed a metalink SR to get help on this issue?
-scheung -
Tracking completion status for long running DML operations
Does anybody know:
Is there any possibility to track a completion status for long running DML operations (for example, how many rows are inserted)?
For example if I execute an INSERT statement which is working for several hours it is very important to estimate the total time for this operation.
Thanks forward

I'm working with Oracle8 at present, and unfortunately this solution (V$SESSION_LONGOPS) cannot help me.
On Oracle8 it works, but with some restrictions:
- You must be using the cost-based optimizer
- Set the TIMED_STATISTICS or SQL_TRACE parameter to TRUE
- Gather statistics for your objects with the ANALYZE statement or the DBMS_STATS package. -
Considerations for long running publication extensions
We are considering implementing a post processing publication extension which may take several minutes to execute. One of our concerns with this strategy is that the publication extension may bog down the Adaptive Processing Server.
Are there any general considerations / recommendations for long running post processing publication extensions?
Thanks!

Generally creating a new thread is an expensive process. Well, everything is relative. My laptop can create & run & stop 7,000+ threads per second (test program below), YMMV. If you are dealing with thousands of thread creations per second, pooling may be sensible; if not, premature optimization is the root of all evil, etc.
public class ThreadSpeed
{
    public static void main(String args[])
        throws Exception
    {
        System.out.println("Ignore the first few timings.");
        System.out.println("They may include Hotspot compilation time.");
        System.out.println("I hope you are running me with \"java -server\"!");
        for (int n = 0; n < 5; n++)
            doit();
        System.out.println("Did you run me with \"java -server\"? You should!");
    }

    public static void doit()
        throws Exception
    {
        long start = System.currentTimeMillis();
        for (int n = 0; n < 10000; n++) {
            Thread thread = new Thread(new MyRunnable());
            thread.start();
            thread.join();
        }
        long end = System.currentTimeMillis();
        System.out.println("thread time " + (end - start) + " ms");
    }

    static class MyRunnable
        implements Runnable
    {
        public void run()
        {
        }
    }
}

Edited by: sjasja on Jan 14, 2010 2:20 AM
Should EntryProcessors be used for long-running operations?
Hi Gene and All,
a couple of other questions come from the seemingly unexhaustible list :-)
- what happens if the caller of an InvocableMap.invokeAll or invoke method dies?
- all entryProcessors complete regardless of the client being there or not
- all already started process method calls complete, the unprocessed entries will not get processed
- something else happens
- what happens if the caller of an InvocableMap.aggregate method with a parallel-aggregator dies
- all aggregate methods in the parallel-aggregator complete
- the aggregate methods in the parallel-aggregator stop during processing
- something else happens
- should an entryprocessor or a parallel-aware entryaggregator implement a comparably long-running operation (e.g. jdbc access), or does that seriously affect performance of other concurrent operations within the cluster node or the entire cluster (e.g. becuase of blocking other events/requests)?
- should the work manager be used instead for these kinds of things (e.g. jdbc access)?
Thanks and best regards,
Robert

Robert,
As soon as an EntryProcessor or EntryAggregator gets delivered to the server nodes, it will get executed regardless of the requestor's state.
In regard to long-running operations, the only thing you have to be conscious of is the number of worker threads allocated for such processing. Since there is a single client thread issuing each request, this suggests allocating as many worker threads (across the cache server tier) as there are client threads (across the presentation/application tier).
Regards,
Gene
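One way to keep a comparably long-running operation (such as JDBC access) off the cache service's worker threads, as the Work Manager question hints, is to hand it to a dedicated pool sized along the lines Gene describes. A stand-alone sketch using plain java.util.concurrent (not the Coherence or commonj Work Manager API; names and pool size are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class OffloadExample {
    // Size the pool to roughly match the number of client threads issuing requests.
    static final ExecutorService SLOW_WORK = Executors.newFixedThreadPool(4);

    static Future<String> submitSlowWork(final String key) {
        return SLOW_WORK.submit(() -> {
            Thread.sleep(50); // stand-in for a long-running operation, e.g. a JDBC call
            return "processed:" + key;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = submitSlowWork("order-42");
        System.out.println(result.get()); // only this caller blocks on the result
        SLOW_WORK.shutdown();
        SLOW_WORK.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Only the submitting thread waits on the Future, so other concurrent cache operations are not blocked by the slow work.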