How to trace a user's long-running transaction?

It happened a day ago. Is there a transaction I can use to trace the execution that caused the user to time out?
    Thank you.

Dear friend,
Follow this link and you will find your answer:
https://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=17826
Try transactions ST03, ST03N, SE30 and ST12.
Regards,
Kanishak

Similar Messages

  • Long-running transactions and the performance penalty

    If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
    warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
    I didn't find any description of such penalties.
    So my questions to gurus:
    Does it create additional persistence point(s)/commit(s) in a long-running orchestration/scope?
    Where do these persistence points occur, especially in a long-running orchestration?
    Leonid Ganeline [BizTalk MVP] BizTalk Development Architecture

    The wording may make it sound so, but IMHO, if during the build of an orchestration we get carried away with scope shapes, we end up with more persistence points, which do affect performance, so one additional point should not make so much of a difference.
    The warning may have been added because of end-user feedback: people may have opted for long-running transactions without realizing the performance overheads, and in subsequent performance optimization sessions with Microsoft put it on the product enhancement list as "provide us with an indication if we're going to incur performance penalties". A lot of people design orchestrations the way they write code (not saying that is a bad thing), using the scope shape along the lines of a try/catch block, and with Microsoft marketing long-running transactions/compensation blocks as USPs for BizTalk, people did get carried away into using them without understanding the implications.
    I'm not saying that no additional persistence points are added, just wondering whether adding one is sufficient to warrant the warning. But if you nest enough scope shapes and mark them all as long-running, they may add up.
    Looking beyond persistence points, I tried to think about how one might implement the long-running transaction (nested, incorporating atomic scopes, etc.). Could you leverage the .NET transaction object (something the pipelines use and execute under), or would that model not handle the complexities of a long-running transaction, which by definition can span days or months? Keeping .NET Transaction objects active, or serializing/deserializing them into the operating context, would cause more issues.
    Regards.

  • Problem handling a long-running transaction

    Hi,
    I have a long running transaction, and there is a high possibility that
    someone else will make changes to one of the same objects in that
    transaction. How is everyone handling this situation? Is it best to
    catch the JDOException thrown and attempt the transaction one more time
    or to return to the user on "failure"?
    Also, would it be possible with Kodo to use pessimistic transactions for
    just this case and optimistic for "read" operations?
    Kam

    Also, would it be possible with Kodo to use pessimistic transactions for
    just this case and optimistic for "read" operations?
    You can easily switch between optimistic and pessimistic transactions:
    pm.currentTransaction().setOptimistic(val);
    You can also use pessimistic transactions for everything, and set the
    nontransactional read property to true to do reads outside of transactions.
    Note, however, that you shouldn't use pessimistic locking for long-running
    transactions in most cases: they use too many DB resources. It's better to
    use optimistic transactions and deal with the errors (or give them to the
    user).
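    The "catch the exception and attempt the transaction one more time" approach asked about above can be sketched in plain Java. This is only an illustrative sketch, not Kodo API: the JDO-specific pieces (PersistenceManager begin/commit, JDOOptimisticVerificationException) are replaced by a `Callable` and `RuntimeException` so the sketch stays self-contained, and the names `OptimisticRetry`, `runWithRetry` and `maxAttempts` are invented for the example.

```java
import java.util.concurrent.Callable;

// Sketch of the retry-on-optimistic-failure strategy. In real JDO code the
// body would be wrapped in tx.begin()/tx.commit(), and the caught exception
// would be the optimistic verification failure, not a bare RuntimeException.
public class OptimisticRetry {

    // Runs the transaction body; on a (simulated) optimistic conflict,
    // retries up to maxAttempts times before surfacing the failure.
    public static <T> T runWithRetry(Callable<T> txBody, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return txBody.call();          // the "transaction" body
            } catch (RuntimeException e) {     // stands in for the optimistic conflict
                last = e;                      // another writer won; loop and retry
            } catch (Exception e) {
                throw new RuntimeException(e); // non-conflict failure: do not retry
            }
        }
        throw last;                            // retries exhausted: return failure to the user
    }

    public static void main(String[] args) {
        // Simulate a body that conflicts once, then succeeds on the second attempt.
        final int[] calls = {0};
        String result = runWithRetry(() -> {
            if (++calls[0] == 1) throw new RuntimeException("optimistic conflict");
            return "committed";
        }, 2);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

    Whether to retry silently or report the failure to the user stays a business decision; the loop only bounds how many times the conflict is retried.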

  • Long running transactions.

    Hi all,
    In real-world systems, how efficient are long-running flat transactions in EJB 2.0, given that such transactions tend to lock several of the tables involved?
    Could anyone explain in detail, or point me to any websites that share real-world experience with transactions in an EJB environment?
    Thanks in advance.

    Long-running transactions are real performance killers; avoid them if you can.
    It's actually quite easy (and sensible) to break a long transaction down into smaller transactions with a little common sense.
    For example, funds transfer. Say you want to transfer money from an account in country A (which is only open on Tuesdays) to an account in country B (only open on Fridays).
    Option A: Start a long-running transaction on Tuesday and commit it on Friday.
    The resources used to do this are very much wasted, and will cause performance degradation.
    Option B: Transfer the funds from country A to country C (which is open Tuesday and Friday) in one transaction on Tuesday. Transfer the funds from country C to country B in a second transaction on Friday. If C to B fails, transfer the funds back; if it succeeds, inform A of the success. It's more complex, but there's no free lunch. (This is how real banks do it - ever got a refund from the bank? Well, no, but you know what I mean; it's also why it takes five days for a cheque to clear, etc.)
    This is only a narrative, but there are very few cases where this approach cannot work in a real transaction.
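    The two-leg transfer with a compensating action can be sketched in plain Java. Everything here is illustrative, not from any real banking API: the in-memory balance map stands in for the database, each `transfer` call stands in for one short committed transaction, and `transferViaIntermediate` is an invented name.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Option B: two short transactions plus a compensating action,
// instead of one long-running transaction held open from Tuesday to Friday.
public class TwoStepTransfer {

    final Map<String, Integer> balances = new HashMap<>();

    TwoStepTransfer() {
        balances.put("A", 100);  // source account, open Tuesdays
        balances.put("B", 0);    // target account, open Fridays
        balances.put("C", 0);    // intermediate account, open both days
    }

    // One short "transaction": debit one account, credit another.
    void transfer(String from, String to, int amount) {
        if (balances.get(from) < amount)
            throw new IllegalStateException("insufficient funds in " + from);
        balances.put(from, balances.get(from) - amount);
        balances.put(to, balances.get(to) + amount);
    }

    // Tuesday: leg 1 (A -> C). Friday: leg 2 (C -> B); if leg 2 fails,
    // compensate by transferring the funds back from C to A.
    boolean transferViaIntermediate(int amount, boolean legTwoFails) {
        transfer("A", "C", amount);          // committed on Tuesday
        try {
            if (legTwoFails) throw new IllegalStateException("B unavailable");
            transfer("C", "B", amount);      // committed on Friday
            return true;
        } catch (IllegalStateException e) {
            transfer("C", "A", amount);      // compensating transaction
            return false;
        }
    }

    public static void main(String[] args) {
        TwoStepTransfer t = new TwoStepTransfer();
        boolean ok = t.transferViaIntermediate(40, false);
        System.out.println("success=" + ok + " A=" + t.balances.get("A") + " B=" + t.balances.get("B"));
    }
}
```

    The key point is that no lock outlives a single leg; the price is that the application, not the database, must guarantee the compensation runs when the second leg fails.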

  • Explicit commit during a long-running transaction in EclipseLink

    Hi,
    I am currently upgrading a J2EE application from OAS with Toplink Essentials to WL 10.3.3 with Eclipselink and have the following issue with transactions.
    The application was developed to have long-running transactions for business reasons in specific scenarios. However, some other queries must be created and committed along the way to make sure that we have this specific data in the database before the final commit. This call (and subsequent code) is in an EJB method that has the "@TransactionAttribute(TransactionAttributeType.REQUIRED)" defined on it. Taking this out gives me the same behaviour.
    The application has the following implementation of the process, which fails:
    Code
    EntityManager em = PersistenceUtil.createEntityManager();
    em.getTransaction().begin();
    PersistenceUtil.saveOrUpdate(em,folder);
    em.getTransaction().commit(); // <-- fails here
    Error
    javax.ejb.EJBTransactionRolledbackException: EJB Exception: : javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Cannot call Connection.rollback in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
    So I tried the following to see if it would work, but I believe the transaction ends there, and anything after that will fail since it requires a transaction to continue:
    PersistenceUtil.getUnitOfWork().writeChanges();
    PersistenceUtil.getUnitOfWork().commit();
    Error
    javax.persistence.TransactionRequiredException: joinTransaction has been called on a resource-local EntityManager which is unable to register for a JTA transaction.
    Can anyone help me as to how to commit a transaction within the long running transaction in this environment? I also want to be sure that the long-running transaction does not fail or is not stopped along the way.
    Thanking you in advance

    You seem to be using JTA, so you cannot use JPA transactions; you must define your transactions in JTA, for example in your session bean.
    When using JTA you should never call
    em.getTransaction().begin();
    If you do not want to use JTA, then you need to set your persistence unit to RESOURCE_LOCAL in your persistence.xml.
    You also need to ensure you use a non-JTA enabled DataSource.
    James : http://www.eclipselink.org
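    As a sketch, a RESOURCE_LOCAL persistence unit along the lines James describes might look like the fragment below. The unit name and the JDBC properties are placeholders, not taken from the original application; only the `transaction-type` attribute and the use of direct JDBC properties (rather than a JTA data source) are the point.

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <!-- RESOURCE_LOCAL makes em.getTransaction().begin()/commit() legal -->
  <persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <properties>
      <!-- non-JTA JDBC connection; placeholder values -->
      <property name="javax.persistence.jdbc.driver" value="oracle.jdbc.OracleDriver"/>
      <property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@localhost:1521:XE"/>
      <property name="javax.persistence.jdbc.user" value="scott"/>
    </properties>
  </persistence-unit>
</persistence>
```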

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating, and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables, with approximately 88 million, 74 million and 570 million rows respectively. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that logs from 0 to current must be kept for the long-running transaction. From what I can see, the long-running transaction is the cache group load. Is this expected? I was expecting the COMMIT clause in the load to allow the logs to be deleted. I am able to query the contents of the tables at various times during the load, so I can see that the commits are taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long-running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were -
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long-running transaction -
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first selects from the parent table, the second joins the parent to the first child, and the third joins the parent to the second child. ttLogHolds seems to show a long-running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttLogHolds to watch for a long-running transaction. As shown below, a long-running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle -
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • Sequential Convoy and Long running transaction: Messages still referenced

    Hi everyone,
    Being a BizTalk developer since 2006, this thing still stumps me.
    I have a sequential convoy singleton orchestration that debatches messages using a receive pipeline. The orchestration is needed in a FIFO scenario. In order to execute a receive pipeline
    within an orchestration I need to encapsulate it within an atomic transaction scope.
    In order to have an atomic scope, the orchestration needs to be long-running. I have also encapsulated the atomic transaction within a scope (using a long-running transaction) to have
    exception handling.
    Everything works fine except for one major detail:
    When the orchestration executes, the messages are still in the MessageBox. I can even click on the orchestration instance in the management console and look at the message! Tracking is disabled for the receive port as well as for the orchestration. Still, the messages
    do not get cleaned up.
    I have set my DTA purge to 1 hour and it works fine, but the messages are still referenced by the orchestration.
    My guess is that the long-running transactions do not complete (although it looks like they should), and since the transaction is not completed the messages are not removed from
    the message box.
    So, to summarize: is it possible to combine long-running transactions and a singleton orchestration?
    So, to summarize: Is it possible to combine long running transactions and a singleton orchestration?
    //Mikael Sand (MCTS, ICC 2011) -
    Blog Logica Sweden

    So after a day of looking for the solution, it is quite clear that you are right: the atomic transaction does not commit. I added a compensation block with trace info and it is never hit.
    I also experimented with the isolation level on the atomic transaction, and that did nothing.
    Lastly, I made the send port direct-bound and also tried "specify later binding" to a physical port.
    The messages are still being referenced by the orchestration! What can I do to make the atomic transaction commit?
    //Mikael Sand (MCTS, ICC 2011) -
    Blog Logica Sweden

  • Re: How to determine the long-running jobs in a patch

    Hi,
    How can I determine the long-running jobs in a patch?
    Regards

    Hi,
    Check the My Oracle Support note below:
    Note 252422.1 - Check Completed Long Running Jobs In Oracle Apps.
    Best regards,
    Rafi

  • IM34, IMCCP1, IMCCP3: how to block users from running these tcodes twice

    Hi All,
    I have plan values from CJR2 (cost element and activity type).
    Normally we use IM34 to roll up the plan values, then IMCCP1 to copy the plan values to Investment Management, and IMCCP3 to copy the plan to the project budget.
    How can we block users from running these tcodes twice? It seems that if a user runs them twice, the total plan and budget will be doubled. Is there any way to reverse this?
    Please help...
    Cheers,
    Nies

    thx

  • How to trace user activity

    Dear guru
    Please guide me on how to trace user activities in SAP. We are using version ECC 6.0.

    No interview questions.
    Search before posting.
    Read the "Rules of Engagement".
    There's plenty of information and threads about this.
    Regards
    Juan

  • How can I find out long-running queries?

    Hi,
    I have a question:
    how can I find long-running queries? I have tried using v$session but could not find them there. Please, how can I find
    these queries?

    v$session_longops has some limitations; for example, it records only some operations. See more here: http://www.gplivna.eu/papers/v$session_longops.htm
    Another possibility might be using Statspack and/or AWR: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
    From docs:
    The most current instructions and information on installing and using the Statspack package are contained in the spdoc.txt file installed with your database. Refer to that file for Statspack information. On Unix systems, the file is located in the ORACLE_HOME/rdbms/admin directory. On Windows systems, the file is located in the ORACLE_HOME\rdbms\admin directory.
    Gints Plivna
    http://www.gplivna.eu

  • How to trace users

    Hi,
    Does anybody know how to trace users who are logging into the SAP system? I want to look about 10 days back (e.g. the 17th of January) and see which users were logged into the SAP system at that time.
    Thanks
    Jhony

    Hi,
    So, I was supposed to guess that you are using SAP R/3 4.0B?
    Please think about the information you need to provide with your questions...
    Sorry, but I no longer have access to such an old release of R/3 and don't remember where you get that info.
    Regards,
    Olivier

  • RZ20 - Is there an alert for long running transactions?

    In RZ20 is there an alert for long running transactions?

    http://help.sap.com/saphelp_nw04s/helpdata/en/9c/f78b3ba19dfe47e10000000a11402f/content.htm
    This document clearly explains your problem.
    "Reward points if useful"

  • How to trace very long SQL and PL/SQL?

    Hi,
    I need advice on how to trace very long PL/SQL. I am new to my company and the business is very new to me. I am a good developer using Developer Suite, but my weak point is tracing very long packages and understanding them.
    I want to trace a procedure that has a LOT of lines of code and calls many other packages. I really need to be quick in understanding what this procedure is doing, as I should solve my problem in an appropriate time.
    I am using dbms_output.put_line for tracing, to see which path is taken and the variable values, but I think I need a totally different approach.
    There are also some SQL statements in this system that are very long; some of them join more than 10 tables at once. Any hint on how to understand what such a SQL statement is doing?
    Any help is appreciated.

    There is a trace facility for PL/SQL, but if you use it, find a GUI interface; trying to get it to work in standalone SQL*Plus was painful. I think it's supported by the major GUI tools like TOAD and PL/SQL Developer, but I have not used it myself.
    Other tools you can work with to help you debug and tune PL/SQL include SQL trace (which analyzes SQL for efficiency, with an interpreter called tkprof) and DBMS_PROFILER. DBMS_PROFILER requires some initial set-up but can analyze PL/SQL code line by line and, if used carefully, function as a sort of limited trace.
    Good luck!

  • Will a rollback failure cause a long-running transaction?

    We are getting the following error for one transaction:
    [TimesTen][TimesTen 5.1.35 CLIENT]Communication link failure. System call select() failed with OS error 110. This operation has Timed Out. Try increasing your ODBC timeout attribute or check to make sure the target TimesTen Server is running
    After that, the application tries to do a rollback, but the rollback fails.
    Will this transaction become a long-running transaction on the server?

    Have you filed a metalink SR to get help on this issue?
    -scheung
