Bypassing Transaction Boundaries

Hi Experts,
I have a process with more than one database invoke, used for polling as well as for writing data depending on certain conditions. I need to include the whole process in a single transaction. As I understand it, database invokes are included in a transaction by default. The problem is that a few activities, such as receive or wait, bypass the current transaction and create a new one, and I need all the invokes in one TX so that if an error occurs I can roll back the whole process, or finally commit it if everything goes well.
Please tell me how I can achieve this and keep everything in one TX even though those activities sit in between.
Thanks.

Hi,
Make sure all your databases are XA enabled. If you are using 10.1.3.4 or above, you will have all the DB calls in a single transaction. Avoid using the activities that cause a transaction to commit, such as invoke, receive, wait, checkpoint(), etc. Have a look at the link below.
http://www.oracle.com/technology/architect/soa-suite-series/wli-bpel-transactions.html

Similar Messages

  • Exploring the transaction boundaries between ESB and BPEL

    Hi,
    I have been working on exploring the transaction boundaries between ESB and BPEL.
    I have some confusion about how this works.
    Please help me.
    Cheers.

    Jan,
    Are you sure about this?
    http://www.oracle.com/technology/products/integration/esb/files/esb-transactions-errorhandling.pdf
    I'm not a die-hard expert on the transaction mechanism, so this is some wild guessing.
    Looking at this document, I can define sync/async processing on every routing rule.
    If I select the sync option on both DB-adapter routings, and make sure both DB-adapter connections use the XA properties in the datasource configuration, shouldn't this help him on his way?
    I haven't tested it, so I'm curious what it would do. Or am I missing something?

  • Transaction boundaries for an Integration Knowledge Module

    Hi All,
    We are currently using the SQL Incremental Update IKM to push changes from our source to target database. Our target database is denormalized to extract out data that can change over time into tables that are indexed on load timestamps that come from CDC (so that we can do time based queries on these tables to find changes between polling intervals). We want the ability to run multiple IKMs in one transaction for a parent-child type relationship, so that if we have an error in processing the child, then the parent is rolled back, leaving the target database in a consistent state.
    We have tried assigning the one transaction to all of the steps in the IKM as well as the LKM and CKM, so that the load of the data into the ODI work tables is in the same transaction as the commit to the intermediate store. If we tell ODI not to commit until the end of the integration module and there is an error inserting rows, then rather than rolling back, the errors are placed into the E$<table_name> table in the target database and the IKM finishes as normal.
    Is it possible to roll back this transaction so that errors aren't placed into these E$ tables? We would rather have human intervention fix the error in the target database and rerun the scenario than fix the error in the target database and then copy values out of these E$ tables, especially since the E$ tables are emptied on the next run. Any help would be greatly appreciated.
    Regards,
    Aaron.

    If you turn FLOW_CONTROL off, you won't get the data moved into the E$ tables; it will simply try to do the set-wise updates and inserts, and if that fails, the task will fail, causing your rollback.

  • DAO design pattern & Transaction boundaries

    I am using a tool to generate a JDBC tier that applies the DAO design pattern. However, the generated code is based on an "autocommit" strategy, i.e. there is no conn.setAutoCommit(false) statement in the code.
    I have added that statement so that I can handle the transaction myself. However, the DataAccessObject implementation class closes the connection before returning. I would like to execute the commit outside of the DAO implementation.
    Has anyone experienced the same problem?
    Best regards
    Lasse Bergström

    I'm not sure if I fully understand your question.
    However, we usually implement such a scenario by setting autocommit to false on the connection object and then passing it to the DAO. Finally, we commit on the connection in the calling class.
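    A minimal sketch of that approach, assuming a hypothetical OrderDao and OrderService (the class, table and column names are illustrative, not taken from the generated code):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Hypothetical DAO that works on a caller-supplied connection and never closes or commits it.
    class OrderDao {
        void save(Connection conn, int orderId, String item) throws SQLException {
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO orders (id, item) VALUES (?, ?)")) {
                ps.setInt(1, orderId);
                ps.setString(2, item);
                ps.executeUpdate();
            }
            // No commit and no conn.close() here: the caller owns the transaction.
        }
    }

    class OrderService {
        private final DataSource dataSource;
        private final OrderDao dao = new OrderDao();

        OrderService(DataSource dataSource) { this.dataSource = dataSource; }

        void placeOrder(int orderId, String item) throws SQLException {
            try (Connection conn = dataSource.getConnection()) {
                conn.setAutoCommit(false);      // take over transaction control
                try {
                    dao.save(conn, orderId, item);
                    // ... more DAO calls on the same connection ...
                    conn.commit();              // commit outside the DAO
                } catch (SQLException e) {
                    conn.rollback();            // undo all DAO work on failure
                    throw e;
                }
            }
        }
    }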

  • JTA Transaction--please help-----Xid not valid

    HI,
    I am writing a small application which i am posting at the end.This is decription of my application.I am writing a jsp.Later on i will be using in some other way.
    I am using Oracle XA implementation to communicatewith my RM which oracle8.1.7 .
    I am creating two XAConnection with two data instances 'test' and 'test3' .These two reside on my local machine in the same database server.
    With the code which i am sending you i have tried two cases.
    First Case
    1)i use only one XAConnection object of say 'test'.
    2)enlist its XADataSource with my transaction Object
    3) get two connection objects and execute two sql's on themMy code works fine and maintains the transaction.
    Second Case
    1) I use create two XAConnection objects. one of 'test' and other of 'test3'.
    2) enlist their XAResources with transaction object.
    3) Now i take one connection from each of XAConnection and execute two sqls, oneon each of them.
    It gives me exception while enlisting second XAResource with transaction objeectsaying that "The Xid is not valid".
    below is the stackTrace.
    javax.transaction.SystemException: start() failed on resource 'oracle.jdbc.xa.client.OracleXAResource':XAER_NOTA : The XID is not valid
    oracle.jdbc.xa.OracleXAException at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:483)
    at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:190)
    at weblogic.transaction.internal.ServerResourceInfo.start(ServerResourceInfo.java:1165)
    at weblogic.transaction.internal.ServerResourceInfo.xaStart(ServerResourceInfo.java:1108)
    <----------------------CODE------------------------------------------------------>
    <html>
    <body bgcolor=tan>
    <%@page session="true" %>
    <%@page import="java.util.Hashtable,java.sql.*,javax.naming.*,javax.transaction.*,javax.sql.*,oracle.jdbc.xa.client.OracleXADataSource,javax.rmi.PortableRemoteObject,javax.transaction.xa.XAResource" %>
    <%!
        private static XAConnection getFirstXAConnection() throws java.sql.SQLException {
            OracleXADataSource oxadsFirst = new OracleXADataSource();
            String urlFirst = "jdbc:oracle:thin:@70.7.51.80:1521:test";
            oxadsFirst.setURL(urlFirst);
            XAConnection xaConnectionFirst = oxadsFirst.getXAConnection("scott", "tiger");
            return xaConnectionFirst;
        }

        private static XAConnection getSecondXAConnection() throws java.sql.SQLException {
            OracleXADataSource oxadsSec = new OracleXADataSource();
            String urlSec = "jdbc:oracle:thin:@70.7.51.80:1521:test3";
            oxadsSec.setURL(urlSec);
            XAConnection xaConnectionSec = oxadsSec.getXAConnection("scott", "tiger");
            return xaConnectionSec;
        }
    %>
    <%
        Context ctx = null;
        Hashtable ht = new Hashtable();
        ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
        try {
            ctx = new InitialContext(ht);
            //javax.transaction.UserTransaction transaction = (javax.transaction.UserTransaction) ctx.lookup("java:comp/UserTransaction");
            System.out.println("Before Lookup JNDI UserTransaction and TransactionManager......................");
            //javax.transaction.UserTransaction userTx = (javax.transaction.UserTransaction) ctx.lookup("javax.transaction.UserTransaction");
            javax.transaction.TransactionManager transactionManager =
                (javax.transaction.TransactionManager) ctx.lookup("javax.transaction.TransactionManager");
            System.out.println("After Lookup TransactionManager......................");
            try {
                transactionManager.begin();
                Transaction transaction = transactionManager.getTransaction();
                System.out.println("Transaction Object ----------------------------->" + transaction);

                XAConnection xaConFirst = getFirstXAConnection();
                XAResource xaResourceFirst = xaConFirst.getXAResource();
                System.out.println("xaResourceFirst Object ----------------------------->" + xaResourceFirst);

                XAConnection xaConSecond = getSecondXAConnection();
                XAResource xaResourceSecond = xaConSecond.getXAResource();
                System.out.println("xaResourceSecond Object ----------------------------->" + xaResourceSecond);

                if (!xaResourceFirst.isSameRM(xaResourceSecond))
                    System.out.println("<-----------------BOTH THE RESOURCES ARE NOT THE SAME----------------------------->");

                boolean firstEnlistBool = transaction.enlistResource(xaResourceFirst);
                System.out.println("firstEnlistBool ----------------------------->" + firstEnlistBool);
                boolean secondEnlistBool = transaction.enlistResource(xaResourceSecond);
                System.out.println("secondEnlistBool -------------------------> " + secondEnlistBool);

                java.sql.Connection firstConn = xaConFirst.getConnection();
                Statement stmt = firstConn.createStatement();
                stmt.executeUpdate("insert into dept values(60,'MARKETING','NEW DELHI')");

                java.sql.Connection secondConn = xaConSecond.getConnection();
                stmt = secondConn.createStatement();
                //stmt.executeUpdate("insert into account values(20,20)");
                stmt.executeUpdate("insert into salgrade values(10,10,10)");

                if (Status.STATUS_ACTIVE == transactionManager.getStatus())
                    System.out.println("Before committing status " + transactionManager.getStatus());
                transactionManager.commit();
                System.out.println("After committing");
            } catch (SQLException sqlE) {
                sqlE.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            try {
                ctx.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    %>
    <form method="post">
    <input type="submit" name="submit" value="Call Transaction Bean">
    </form>
    </body>
    </html>
    <-------------------------------------------------------------------------------->
    Please help with this. I am stuck and don't know how to move ahead to resolve this problem.
    Best Regards
    Akhil Nagpal

    Hi,
    Hi Vicky,
    I think we are in different time zones. I am in South Korea. I have tried your suggestion but it gives me the same exception.
    Yes, there is a difference of 3.5 hours; I am in India (Mumbai).
    This is what I am trying to achieve.
    My aim is to create an application to which I can register my XADataSources, and this application should be able to handle the distributed transactions among them. I will be using the TransactionManager of some application server, so I am using WebLogic 7.0.
    For testing purposes I have created a JSP, as in the code I posted. I am very new to JTA and maybe I am doing something wrong. Can you help me with this by sharing insights from your experience? Maybe that will increase my enthusiasm :-) ...
    Please help me with this.
    I am extracting the following from the docs:
    public interface TransactionManager
    The TransactionManager interface defines the methods that allow an application server to manage transaction boundaries.
    public interface UserTransaction
    The UserTransaction interface defines the methods that allow an application to explicitly manage transaction boundaries.
    So, as per the specs, I understand your application is trying to explicitly control the boundaries of the transaction, so you should use a UserTransaction instance to begin the transaction. My understanding is that the TransactionManager comes into the picture for declarative transactions and UserTransaction for your case. I think you have tried that; I would have tried this out here, but I don't work on WebLogic. Take it slowly and try to understand the concept. Let me know the results.
    Regards
    Vicky
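    A minimal sketch of the UserTransaction approach, assuming the two XA-capable datasources are configured in the application server under the hypothetical JNDI names jdbc/testDS and jdbc/test3DS, so the container enlists the XAResources itself rather than the JSP enlisting them manually:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.Status;
    import javax.transaction.UserTransaction;

    public class TwoDatabaseInsert {
        public void insertIntoBoth() throws Exception {
            InitialContext ctx = new InitialContext();
            // Standard JNDI name for the container-provided UserTransaction.
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            // Hypothetical JNDI names; both datasources must be XA-enabled in the server config.
            DataSource ds1 = (DataSource) ctx.lookup("jdbc/testDS");
            DataSource ds2 = (DataSource) ctx.lookup("jdbc/test3DS");

            utx.begin();
            try (Connection c1 = ds1.getConnection();
                 Connection c2 = ds2.getConnection();
                 Statement s1 = c1.createStatement();
                 Statement s2 = c2.createStatement()) {
                // Both inserts join the same global transaction; the container coordinates 2PC.
                s1.executeUpdate("insert into dept values(60,'MARKETING','NEW DELHI')");
                s2.executeUpdate("insert into salgrade values(10,10,10)");
                utx.commit();
            } catch (Exception e) {
                // Roll back only if the transaction is still in progress.
                int status = utx.getStatus();
                if (status == Status.STATUS_ACTIVE || status == Status.STATUS_MARKED_ROLLBACK) {
                    utx.rollback();
                }
                throw e;
            }
        }
    }

    The difference from the JSP above is that the connections come from server-managed XA datasources, so the container takes care of enlisting each XAResource in the global transaction instead of the page calling enlistResource() itself.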

  • Can I use UserTransaction in a container-managed transaction bean

    Can I use UserTransaction to control transaction boundaries in a container-managed transaction bean method?
    Below is the method. There is a one-to-many relationship between Employees and SalaryItem:
    @TransactionAttribute(value = TransactionAttributeType.REQUIRED)
    private void initEmployeesSalary(Long salarySumId) {
        for (Employees employees : liEmployees) {
            for (int i = 0; i < 20; i++) {
                SalaryItem item = new SalaryDetailItem();
                employees.addSalaryItem(item);
            }
        }
    }
    When there are about 1000 employees, the method runs very slowly.
    What do you think I should do?

    Hi again,
    The EJB specs say that a stateful Session Bean with CMT is NOT allowed to use the UserTransaction; see page 361 of the EJB2.0 specification. So combining them will not (or should not) work.
    I suggest CMT + SessionSynchronization, combined with a flag to indicate whether notify should be called or not (see the sketch after this reply). Otherwise, you could try splitting the bean into two beans: one with CMT and another one without. The one without CMT could use the UserTransaction and notify.
    Also, you might want to check http://www.onjava.com/pub/a/onjava/2001/10/02/ejb.html
    Hope that helps a bit,
    Guy
    http://www.atomikos.com
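    A rough sketch of the CMT + SessionSynchronization idea, written with EJB 3 annotations; the bean name, the CompletionListener callback and the flag handling are assumptions for illustration, not part of the original code:

    import javax.ejb.SessionSynchronization;
    import javax.ejb.Stateful;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;

    // Hypothetical callback that should only fire after a successful commit.
    interface CompletionListener {
        void notifyCompleted();
    }

    @Stateful
    public class SalaryInitBean implements SessionSynchronization {

        private CompletionListener listener;   // supplied by the caller
        private boolean shouldNotify;          // flag set by the business method

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void initEmployeesSalary(Long salarySumId, CompletionListener listener) {
            this.listener = listener;
            // ... create the SalaryItem entities inside the container-managed transaction ...
            this.shouldNotify = true;          // only notify if this work actually ran
        }

        public void afterBegin() { /* container started the CMT transaction */ }

        public void beforeCompletion() { /* last chance to flush state before commit */ }

        public void afterCompletion(boolean committed) {
            // Called by the container once the transaction has ended.
            if (committed && shouldNotify && listener != null) {
                listener.notifyCompleted();
            }
            shouldNotify = false;
        }
    }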

  • DML, transactions and index updates

    Hi,
    It's known that adding indexes slows down DML on the table, i.e. every time the table data changes, the index has to be recalculated. What I am trying to understand is whether the index is recalculated as soon as Oracle sees the change.
    To elaborate, let's say I have a table abc with 4 columns: column1, column2, column3 and column4. I have two indexes: one unique on column1 and another non-unique index on column2.
    So when I update column4, which is not indexed, will there be any transactional data generated for this operation? Will it be generated if I update column2 (with the non-unique index)?
    What I am interested to know is how transaction boundaries impact the calculation of the index. Will Oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?

    user9356129 wrote:
    Hi,
    It's known that adding indexes slows down DML on the table, i.e. every time the table data changes, the index has to be recalculated.
    Yes, but only when involved (i.e. indexed) columns are changed.
    And indexes are not "recalculated". Assuming the index is of type B-tree (by far the most commonly used type), the B-tree is "maintained". How that's done can be found in elementary computer science materials, which you can probably find using Google.
    What I am trying to understand is whether the index is recalculated as soon as Oracle sees the change. To elaborate, let's say I have a table abc with 4 columns: column1, column2, column3 and column4. I have two indexes: one unique on column1 and another non-unique index on column2. So when I update column4, which is not indexed, will there be any transactional data generated for this operation?
    You'll need to clarify what you mean by "transactional data". But in this case the block(s) that hold(s) the table row(s) in which you have updated column4 will be changed, in memory, to reflect your update. And as column4 is not involved in any index, no index blocks will be changed.
    Will it be generated if I update column2 (with the non-unique index)?
    In this case not only table blocks will be changed to reflect your update, but also index blocks (that hold B-tree information) will be changed (in memory).
    What I am interested to know is how transaction boundaries impact the calculation of the index. Will Oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?
    Yes (to the part following 'and' in the latter sentence; I don't know what you mean by "transactional entries").
    Toon

  • CcBPM: Raising Alert breaks transactional brackets!

    Hi all!
    I am using the new functionality (available since SP19 / SP12 or so) of explicitly setting the transaction boundaries for my ccBPMs. However, during tests I found out that everything works fine UNLESS you try to raise an ALERT via a control step in a block with broader transaction boundaries set. If you do raise an alert,
    it seems to break the "transactional bracket" of the block, so no correct retry
    (e.g. via "SWF_XI_SWPR") is working.
    Does anybody know of an OSS note regarding this? Or any other workaround for
    having some alerting in my BP and STILL not losing my transaction boundaries?
    Many thanks in advance for your input!
    Andy

  • Revision (transaction) management. Am I doing OK?

    I need to track audit information about all tables within a data schema, which app users see via grants on the schema each app user maps to (no app user is supposed to connect directly to the data schema). To achieve this, I have a Revision table in the data schema, with a sequence-based primary key; two columns per data table, called insert_rev and update_rev, which are FKs to the Revision table and are populated automatically via triggers; and a definer-rights PL/SQL package that app users must execute to call new_revision and end_revision. The data table triggers raise an application error if an insert or delete is attempted outside the "scope" of a revision (i.e. not between a new_revision and an end_revision call).
    This is working OK, except that it requires apps to properly call new_revision/end_revision, i.e. apps to be well behaved (a big if IMHO, especially since we eventually plan to allow 3rd-party apps to access our schema). Also, although it is intended that revisions align with transaction boundaries, nothing enforces this.
    Is there a way to have some kind of ON COMMIT or ON ROLLBACK trigger that could automatically call end_revision?
    Is there any way to relate our ad-hoc revisions to real transaction IDs from the DB instance? (And if we did, would these IDs be meaningless if the data were moved to another instance, for example?) I've recently discovered V$TRANSACTION, and was wondering if/how this view could be useful to me.
    I'm quite new to Oracle and DBs in general, so any advice on a better design to track application data revisions, in a way similar to a SCM system like SubVersion, would be appreciated. What I have designed so far works, but on second thought I think I may be re-inventing the wheel here, and there might be better ways to do this.
    Thanks for any insights. --DD
    PS: I'm also wondering if row-level triggers for all inserts/updates of all data tables might not be a performance killer too.

    When I think of Oracle Auditing, I think of DBA-level auditing.
    I want application-visible metadata about data changes, and somehow I expected Oracle Auditing not to be visible to an end-user client app, nor organizations to be willing to expose auditing info to our apps. It could be that I'm wrong, though.
    How would a non-privileged client app view/access the auditing info? Can the auditing info be restricted to a given schema? Thanks, --DD

  • CDI event originator's EJB transaction waits for Observer(AFTER_SUCCESS) EJ

    Hi!
    I have the following scenario (pseudo-code):

    CallerObject.method() {
        SessionBean1.method1(); // through remote bean interface
        // [1]
    }

    @TransactionAttribute(REQUIRED)
    SessionBean1.method1() {
        // do something...
        Event<myClass>.fire();
        // do something...
    }

    ObserverObject.method2(@Observes(during=AFTER_SUCCESS) myClass) {
        sessionBean2.method2(); // through local bean interface
    }

    @Asynchronous
    @TransactionAttribute(REQUIRED)
    SessionBean2.method2() {
        // do something...
    }

    (with the usual container-managed transaction boundaries)
    ==> the following happens:
    [1] is only reached AFTER the transaction in SessionBean2.method2() finishes! (Although the last statement in SessionBean1.method1() is reached way before that.)
    It's as if SessionBean1.method1()'s transaction somehow isn't "released" (for want of a better word -- it does get committed immediately, before the event handler ObserverObject.method2() is called!) until the asynchronously called SessionBean2.method2()'s transaction finishes as well.
    Does anyone know how I could avoid that?
    (The point of the whole setup would be to have the long-running SessionBean2.method2() run in the background after T1's completion and have SessionBean1.method1() return as soon as possible.)
    P.S.: I have verified that
    a) T1 is committed immediately (the records go in the DB)
    b) SessionBean2.method2() is called asynchronously (control jumps to the next statement in the calling code immediately)
    c) the SessionBean1.method1() doesn't return control to the caller code until T2 finishes
    Thanks,
    Agoston
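    For clarity, here is the scenario from the pseudo-code above spelled out as compilable classes. The annotations are the real CDI/EJB ones (TransactionPhase.AFTER_SUCCESS, @Asynchronous); the no-interface views and the empty method bodies are assumptions made just to keep the sketch self-contained:

    import javax.ejb.Asynchronous;
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.enterprise.event.Event;
    import javax.enterprise.event.Observes;
    import javax.enterprise.event.TransactionPhase;
    import javax.inject.Inject;

    // Payload type fired through the CDI event (assumed shape).
    class MyClass { }

    @Stateless
    class SessionBean1 {
        @Inject
        private Event<MyClass> event;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void method1() {
            // do something in transaction T1...
            event.fire(new MyClass());   // observer runs only after T1 commits successfully
            // do something else in T1...
        }
    }

    // CDI observer, invoked by the container after SessionBean1.method1()'s transaction commits.
    class ObserverObject {
        @Inject
        private SessionBean2 sessionBean2;

        public void method2(@Observes(during = TransactionPhase.AFTER_SUCCESS) MyClass payload) {
            sessionBean2.method2();      // expected to return immediately because of @Asynchronous
        }
    }

    @Stateless
    class SessionBean2 {
        @Asynchronous
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void method2() {
            // long-running work in its own transaction T2...
        }
    }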

    There is an error in one of your web.xml files or in server.xml;
    it looks like an XML parse error.
    Did you deploy app changes before this, or update some context settings within server.xml?

  • Propagating User Transactions

    Hi,
    I am trying to check how user transactions work in context of the OC4j Containers.
    I have deployed a Stateless Session Bean in the 'standalone' OC4J container of JDev10.1.3. I then exposed the EJB as a WebService.
    Next, I wrote a client and am running it in the 'embedded' OC4J container of JDev 10.1.3. From this client, I am trying to invoke the bean methods as WebService calls. Below is my client code:
    <code>
    Hashtable env = new Hashtable();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
    env.put(Context.PROVIDER_URL, "ormi://localhost:23791/");
    env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");
    env.put(Context.SECURITY_CREDENTIALS, "welcome");
    Context context = new InitialContext(env);
    ut = (UserTransaction) context.lookup("java:comp/UserTransaction");
    ut.begin();
    // call bean methods.
    ut.commit();
    </code>
    However, I am not able to look up the JNDI Name for UserTransaction in the embedded OC4J. I tried both "java:comp/UserTransaction" and "jta/usertransaction".
    Could somebody please let me know how to lookup the User Transaction from the Context in OC4J?
    Also, I read somewhere that UserTransactions (begin .. method .. commit/rollback) are not supported in OC4J unless the EJBs are running in the same container. Is this true? In that case, my test scenario above would never work, would it?
    Many thanks in Advance.
    Regards,
    Pratul

    Hi Katalin!
    Thanks a lot for your reply.
    You are right. The port 23891 is the correct port for ORMI for embedded OC4J.
    Actually, I was trying to execute the client as a standalone Java class (public static void main() ). I eventually moved my client code to a JSP and was able to retrieve the UserTransaction object using "java:comp/UserTransaction".
    I found this strange since I have tested transaction boundaries in a similar fashion using WSAD and it worked just fine. The only reason I could think of was that the classloader in JDeveloper was not able to load the complete environment implementations of the transactions, before the lookup call is made from my code.
    However, now after having moved my client code into a JSP, the UserTransaction object retrieved is "com.sun.enterprise.distributedtx.UserTransactionImpl".
    And when I invoke the begin() method on this transaction object, I get a NullPointerException. :((
    06/10/03 15:31:07 java.lang.NullPointerException
    06/10/03 15:31:07      at com.sun.jts.jta.TransactionManagerImpl.begin(TransactionManagerImpl.java:171)
    06/10/03 15:31:07      at com.sun.jts.jta.UserTransactionImpl.begin(UserTransactionImpl.java:50)
    06/10/03 15:31:07      at com.sun.enterprise.distributedtx.UserTransactionImpl.begin(UserTransactionImpl.java:66)
    Does anybody have any idea why that particular implementation of the UserTransaction object is returned? Is it because of extra/missing jar files in the classpath? And why a NullPointerException, even when the object itself is not null?
    Please help !!
    Thanks,
    Pratul

  • ORA-01591 Lock held by in-doubt distributed transaction - help required

    In my web application I am getting an error on a particular page. I am using a JTA UserTransaction to mark the transaction boundaries. On commit I am confronted with the error "lock held by in-doubt distributed transaction". I queried PENDING_TRANS$ and DBA_2PC_PENDING and force-committed the transaction. I also purged the transaction using PURGE_LOST_DB_ENTRY. Still, after doing all this, I am facing the same problem with a new transaction number.
    I followed the steps below:
    SELECT * FROM PENDING_TRANS$
    SELECT LOCAL_TRAN_ID, GLOBAL_TRAN_ID, STATE, MIXED, HOST, COMMIT#
    FROM DBA_2PC_PENDING
    WHERE LOCAL_TRAN_ID = '??.';
    SELECT LOCAL_TRAN_ID, IN_OUT, DATABASE, INTERFACE
    FROM DBA_2PC_NEIGHBORS;
    COMMIT FORCE 'local transactionID', 'SCN';
    DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY (local transactionID); OR
    DBMS_TRANSACTION.PURGE_MIXED (local transactionID);
    SELECT s.inst_id,
    s.sid,
    s.serial#,
    p.spid,
    s.username,
    s.program
    FROM gv$session s
    JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
    WHERE s.type != 'BACKGROUND'
    and s.program='JDBC Thin Client'
    ALTER SYSTEM KILL SESSION '102,10' IMMEDIATE;
    Shutting down and restarting the database doesn't work.
    Please, can anyone help?

    This is the result of the query against v$version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    PL/SQL Release 10.2.0.3.0 - Production
    "CORE 10.2.0.3.0 Production"
    TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    I use WebSphere 6.1 and Oracle 10.2.0.3, using a JTA UserTransaction to define the boundaries. I have been facing this issue only since last week; before that it was working fine. I did the COMMIT FORCE as mentioned above to manually unlock the transaction, and since then I have been getting this error. Could you please help? I am under pressure from onsite and don't know what is causing the error in one particular flow. I have included the error logs below.
    [7/14/10 19:15:54:328 IST] 00000023 WSRdbXaResour E DSRA0304E: XAException occurred. XAException contents and details are:
    The XA Error is : -3
    The XA Error message is : A resource manager error has occured in the transaction branch.
    The Oracle Error code is : 17410
    The Oracle Error message is: Internal XA Error
    The cause is : null.
    [7/14/10 19:15:54:343 IST] 00000023 WSRdbXaResour E DSRA0302E: XAException occurred. Error code is: XAER_RMERR (-3). Exception is: <null>
    [7/14/10 19:15:54:359 IST] 00000023 XATransaction E J2CA0027E: An exception occurred while invoking commit on an XA Resource Adapter from dataSource jdbc/ScorecardDataSource, within transaction ID {XidImpl: formatId(57415344), gtrid_length(36), bqual_length(54), data(00000129d13434870000000100000008f0685880e9e9a1514500185d37d88f31da70140400000129d13434870000000100000008f0685880e9e9a1514500185d37d88f31da701404000000010000000000000000000000000001)}: oracle.jdbc.xa.OracleXAException
    [7/14/10 19:15:54:531 IST] 00000023 MCWrapper E J2CA0081E: Method cleanup failed while trying to execute method cleanup on ManagedConnection WSRdbManagedConnectionImpl@3e0a3e0a from resource jdbc/ScorecardDataSource. Caught exception: com.ibm.ws.exception.WsException: DSRA0080E: An exception was received by the Data Store Adapter. See original exception message: No more data to read from socket. with SQL State : null SQL Code : 17410
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
    [7/14/10 19:15:54:593 IST] 00000023 MCWrapper E J2CA0081E: Method destroy failed while trying to execute method destroy on ManagedConnection WSRdbManagedConnectionImpl@3e0a3e0a from resource No longer available. Caught exception: java.lang.NullPointerException
    [7/14/10 19:16:08:093 IST] 00000023 SystemErr R Caused by: java.sql.SQLException: ORA-01591: lock held by in-doubt distributed transaction 9.34.890
    Please help.

  • Another bug report on transactional behavior

    Using the same test program as documented in the other bug report, I
    have also run across the following, which appears to be a bug.
    When observing transactional boundaries (order of loop body work: start,
    query, read, commit) and using datastore transactions with retainValues
    == false and nonTR == true, I see the JDO instances fetched by the query
    being loaded just prior to being read, as one would expect. After the
    commit, as expected, they are cleared, and the same behavior occurs on
    subsequent iterations.
    What appears to be buggy is that optimistic transactions, when tested with
    the same configuration, behave in exactly the same manner as the
    datastore transactions.
    Section 13.5 of Version 0.95 of the spec states:
    "With optimistic transactions, instances queried or read from the data
    store will not be transactional unless they are modified, deleted, or
    marked by the application as transactional in the transaction."
    My understanding of this is that the instances that are fetched from the
    query and read become PNT, and hence would not have their state cleared
    at the end of the transaction and would not be reloaded when read in the
    next transaction -- contrary to the behavior observed.
    David Ezzio
    Yankee Software

    Beta 2.2 put the issues I raised here to rest.
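    For reference, a small sketch of the two transaction configurations being compared, using the standard javax.jdo API; the Account class and the property file name are hypothetical stand-ins for the actual test program:

    import java.io.FileInputStream;
    import java.util.Collection;
    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Query;
    import javax.jdo.Transaction;

    // Hypothetical persistent class; in a real test it would be JDO-enhanced.
    class Account {
        private double balance;
        public double getBalance() { return balance; }
    }

    public class TxBoundaryTest {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("jdo.properties"));   // connection settings assumed
            PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
            PersistenceManager pm = pmf.getPersistenceManager();

            // Configuration under discussion: retainValues == false, nontransactionalRead == true.
            Transaction tx = pm.currentTransaction();
            tx.setRetainValues(false);
            tx.setNontransactionalRead(true);
            tx.setOptimistic(true);   // set to false for a datastore transaction

            for (int i = 0; i < 3; i++) {
                tx.begin();                                      // start
                Query query = pm.newQuery(Account.class);        // query
                Collection results = (Collection) query.execute();
                for (Object o : results) {
                    ((Account) o).getBalance();                  // read: state loads lazily here
                }
                tx.commit();                                     // commit: per section 13.5, with an
                                                                 // optimistic tx the read instances should
                                                                 // stay persistent-nontransactional
            }
            pm.close();
            pmf.close();
        }
    }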

  • Multi "Datasource" transaction management

    I want to maintain transaction boundaries using UserTransaction for connections obtained from two different datasources.
    Context myCntxt = new InitialContext();
    UserTransaction ut =
        (UserTransaction) myCntxt.lookup("java:comp/UserTransaction");
    ut.begin();
    // Connection from Database 1
    ds1 = (javax.sql.DataSource)
        myCntxt.lookup("java:comp/env/jdbc/Database1");
    con1 = ds1.getConnection();
    stmt = con1.createStatement();
    stmt.executeUpdate(...);
    // Connection from Database 2
    ds2 = (javax.sql.DataSource)
        myCntxt.lookup("java:comp/env/jdbc/Database2");
    con2 = ds2.getConnection();
    stmt = con2.createStatement();
    stmt.executeUpdate(...);
    ut.commit();
    I want the transaction to be atomic (either both updates are done or nothing is done). I am using WebSphere 5; please let me know if you have an answer ASAP.
    Thanks,

    stmt.executeUpdate returns the number of records affected.
    If that value is greater than 0 (i.e. records were affected), you can then execute the executeUpdate against the other database, which also returns an integer value.
    If both updates affect records (return value > 0), then you call commit on both databases.
    I think you have to do all of these things programmatically.

  • Brief, independent transactions in an EE context

    For various business reasons, I'm writing my own little sequence
    generator. I'd like my generator to update its persistent sequence
    counter in its own independent transaction, and I'm wondering how to
    create that transaction.
    Section 16.1.3 of the JDO spec mentions two ways of doing this, but
    doesn't do much to explain exactly how either of them works. The first
    I'm familiar with: acquire a UserTransaction via JNDI. That's a bit
    heavyweight for our needs.
    The second option, using javax.jdo.Transaction, sounds a lot more like
    what we want: "acquiring a PersistenceManager without beginning a
    UserTransaction results
    in the PersistenceManager being able to manage transaction boundaries via
    begin, commit, and rollback methods on javax.jdo.Transaction."
    OK, but what if there's already a transaction in my context? That is,
    what if my sequence generator is called from an EJB, and that EJB created
    a transaction? How do I get my own transaction? Do I still acquire the
    UserTransaction via JNDI, and simply not ever call
    UserTransaction.begin()? Or should I acquire a PersistenceManager at
    startup somehow?
    Paul

    Paul,
    You probably want to either maintain a separate (non-managed) PMF for the
    sequence generator, or use the EEPersistenceManagerFactory's DataSource to
    obtain a separate JDBC connection.
    Also, the SequenceFactory interface in our 2.3 beta (available to the public
    Real Soon Now) is significantly improved. Let me know if you are interested
    in more details.
    -Patrick
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com
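    A minimal sketch of the first option Patrick mentions (a separate, non-managed PMF used only for the sequence counter, managing its own javax.jdo.Transaction); the SequenceCounter class and the property handling are assumptions for illustration:

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Transaction;

    // Hypothetical persistent class holding the current sequence value.
    class SequenceCounter {
        private long value;
        long next() { return ++value; }
    }

    public class SequenceGenerator {
        private final PersistenceManagerFactory pmf;

        public SequenceGenerator(Properties nonManagedProps) {
            // A separate, non-managed PMF: its transactions are independent of any
            // UserTransaction the calling EJB may already have started.
            this.pmf = JDOHelper.getPersistenceManagerFactory(nonManagedProps);
        }

        public long nextValue(Object counterOid) {
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();   // local javax.jdo.Transaction
            try {
                tx.begin();
                SequenceCounter counter =
                    (SequenceCounter) pm.getObjectById(counterOid, true);
                long next = counter.next();             // increments the persistent counter
                tx.commit();                            // commits independently of the caller's transaction
                return next;
            } finally {
                if (tx.isActive()) {
                    tx.rollback();                      // clean up if commit was never reached
                }
                pm.close();
            }
        }
    }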
