Transactional scope

          Hi there,
          I wanted to do the following but it did not work for me;
          I would appreciate it if you could help. I use WL6.0.
          Basically I want to have two methods, say, "insertTest1"
          and "insertTest2" in a stateless
          session bean, "Required" is used to define the
          transactional attribute for these two methods.
          I now want to have a third method, say, "callBoth" which just calls
          those two methods mentioned above, one by one.
          The transactional attribute for the 3rd method "callBoth"
          is also "Required".
          What I expected was that if, in the 3rd method "callBoth", the
          2nd call to "insertTest2" failed, the EJB container would
          roll back the 1st call to "insertTest1". Unfortunately,
          it did not work for me.
          Attached please find the code:
          import javax.naming.*;
          import javax.ejb.SessionBean;
          import javax.ejb.SessionContext;
          import java.rmi.RemoteException;
          import javax.ejb.EJBException;
          import javax.ejb.CreateException;
          import java.sql.*;
          import javax.sql.DataSource;

          public class CommitTestEJB implements SessionBean {
              private SessionContext ctx;

              public void setSessionContext(SessionContext context) throws EJBException {
                  ctx = context;
              }

              public void ejbActivate() throws EJBException {
              }

              public void ejbPassivate() throws EJBException {
              }

              public void ejbRemove() throws EJBException {
              }

              public void ejbCreate() throws CreateException, EJBException {
              }

              public void insertTest1(int x, Connection conn) throws EJBException {
                  try {
                      insertX("test1", x, conn);
                  } catch (SQLException e) {
                      throw new EJBException(e);
                  }
              }

              public void insertTest2(int x, Connection conn) throws EJBException {
                  try {
                      insertX("test2", x, conn);
                  } catch (SQLException e) {
                      throw new EJBException(e);
                  }
              }

              public void callBoth(int x1, int x2) throws EJBException {
                  Connection conn = getDBConnection();
                  try {
                      insertTest1(x1, conn);
                      insertTest2(x2, conn);
                  } catch (Exception e) {
                      throw new EJBException(e);
                  }
                  try {
                      conn.close();
                  } catch (Exception e) {
                  }
              }

              private Connection getDBConnection() {
                  Connection connection = null;
                  try {
                      InitialContext ic = new InitialContext();
                      DataSource ds = (DataSource) ic.lookup("oratest2");
                      connection = ds.getConnection();
                  } catch (Exception e) {
                  }
                  return connection;
              }

              private void insertX(String tableName, int x, Connection dbConnection) throws SQLException {
                  Statement stmt = dbConnection.createStatement();
                  String queryStr = "INSERT INTO " + tableName + " values(" + x + ")";
                  int resultCount = stmt.executeUpdate(queryStr);
                  stmt.close();
              }
          }
          

          Thanks a lot, Rob.
          It worked!!!! I just used a TxDataSource. I still threw EJBException and I did
          not use setRollbackOnly, and still it worked.
          Why do I have to use a TxDataSource? It is not a distributed transaction at all.
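For readers hitting the same thing: a connection obtained from a plain (non-transactional) DataSource runs in local auto-commit mode, so the container's rollback never reaches the JDBC work, while a TxDataSource enlists the connection in the container's JTA transaction. A toy sketch of the difference in plain Java (ToyConnection and TxDemo are made-up stand-ins for illustration, not WebLogic APIs):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: a "connection" that either auto-commits each insert
// (like a plain DataSource connection) or stages work until the
// surrounding "container transaction" commits (like a TxDataSource).
class ToyConnection {
    final boolean enlisted;                          // true = TxDataSource-style
    final List<String> table = new ArrayList<>();    // committed rows
    final List<String> staged = new ArrayList<>();   // uncommitted rows

    ToyConnection(boolean enlisted) { this.enlisted = enlisted; }

    void insert(String row) {
        if (enlisted) staged.add(row); else table.add(row);
    }
    void commit()   { table.addAll(staged); staged.clear(); }
    void rollback() { staged.clear(); }
}

public class TxDemo {
    // Mimics the container running callBoth under "Required":
    // the second insert fails, so the container rolls back.
    static ToyConnection runCallBoth(boolean enlisted) {
        ToyConnection conn = new ToyConnection(enlisted);
        try {
            conn.insert("test1 row");               // insertTest1 succeeds
            throw new RuntimeException("boom");     // insertTest2 fails
        } catch (RuntimeException e) {
            conn.rollback();                        // container reacts to the exception
        }
        return conn;
    }

    public static void main(String[] args) {
        // Plain DataSource: first insert already committed, so it survives.
        System.out.println(runCallBoth(false).table);  // [test1 row]
        // TxDataSource: everything was staged, so rollback removes it.
        System.out.println(runCallBoth(true).table);   // []
    }
}
```

Only the enlisted connection gets all-or-nothing behavior; the auto-commit one leaves the first row behind, which is exactly the symptom described above.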
          Also, I have two questions about WL clustering:
          1) I was told that the WL license file shows whether it
          is licensed for clustering; where can I find that?
          2) How many clustering licenses do I need to
          run a cluster? Just one on the Admin server,
          or do I also need clustering licenses for all of
          the managed servers?
          Thanks a lot
          Shiye Qiu
          Rob Woollen <[email protected]> wrote:
          >Answered in the ejb newsgroup.
          >
          >-- Rob
          >
          >Shiye Qiu wrote:
          >> [original question snipped]
          >
          >--
          >
          >----------------------------------------------------------------------
          >
          >AVAILABLE NOW!: Building J2EE Applications & BEA WebLogic Server
          >
          >by Michael Girdley, Rob Woollen, and Sandra Emerson
          >
          >http://learnWebLogic.com
          

Similar Messages

  • Using Transaction Scope in Oracle

    Following is my code to call a transaction using .NET TransactionScope:
    using (TransactionScope sc = new TransactionScope())
    {
        try
        {
            OracleConnection c = new OracleConnection(ConfigurationManager.ConnectionStrings["oConn"].ConnectionString);
            c.Open();
            OracleCommand oc = c.CreateCommand();
            oc.CommandText = "delete from test where issue_id=:issue_id";
            oc.Parameters.Add("issue_id", OracleDbType.Int32).Value = 64;
            var rowdelete = oc.ExecuteNonQuery();
            c.Close();
            OracleConnection c2 = new OracleConnection(ConfigurationManager.ConnectionStrings["oConn"].ConnectionString);
            c2.Open();
            OracleCommand oc2 = c2.CreateCommand();
            oc2.CommandText = "delete from issue_no where issue_id=:issue_id";
            oc2.Parameters.Add("issue_id", OracleDbType.Int32).Value = 64;
            rowdelete = oc2.ExecuteNonQuery();
            c2.Close();
            sc.Complete();
        }
        catch (Exception ex)
        {
            throw;
        }
    }
    It always throws an exception: Data provider internal error(-3000).
    It works fine when I remove the transaction scope. What could be the cause of this? I cannot find any information about this error.

    What version of ODP are you using? Does the behavior occur on current versions?
    What format is your connection string? I recall a known issue where using a fully qualified descriptor ((DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=...etc...))) caused issues. Using a tnsnames.ora alias, or the instant client connect string (//host:port/service_name), did not have the problem.
    Greg

  • SFSB and BMT JTA Transaction Scope confusion

    Hi,
    I'm a bit confused with the scope of a UserTransaction.
    * Classic SFSB with BMT
    @Stateful
    @TransactionManagement(TransactionManagementType.BEAN)
    public class SFSBean implements SFS {
         private @Resource UserTransaction tx;
         private @PersistenceUnit(unitName="my-db") EntityManagerFactory emf;
         private EntityManager em;

         public void start() {
              try {
                   tx.begin();
              } catch (NotSupportedException e) {
                   throw new RuntimeException(e);
              } catch (SystemException e) {
                   throw new RuntimeException(e);
              }
              em = emf.createEntityManager();
         }

         public void doJob1() {
         }

         public void doJob2() {
         }

         @Remove
         public void commit() {
              try {
                   tx.commit();
              } catch (SecurityException e) {
                   throw new RuntimeException(e);
              } catch (IllegalStateException e) {
                   throw new RuntimeException(e);
              } catch (RollbackException e) {
                   throw new RuntimeException(e);
              } catch (HeuristicMixedException e) {
                   throw new RuntimeException(e);
              } catch (HeuristicRollbackException e) {
                   throw new RuntimeException(e);
              } catch (SystemException e) {
                   throw new RuntimeException(e);
              }
         }

         @Remove
         public void rollback() {
              try {
                   tx.rollback();
              } catch (IllegalStateException e) {
                   e.printStackTrace();
              } catch (SecurityException e) {
                   e.printStackTrace();
              } catch (SystemException e) {
                   e.printStackTrace();
              }
         }
    }
    When I request this SB, the container injects a JTA transaction into my variable tx, and this transaction will be bound to the life of my SB.
    But: if a handle to this SB is associated with a SessionScoped or ConversationScoped bean (CDI contexts), or simply with an HttpSession attribute, then calls to the SB methods may occur on different threads (successive requests).
    Is this pattern supported? JTA relies on ThreadLocal, but a transaction can also be injected inside an SFSB: I'm a bit confused...
    HttpRequest1[Thread-1] : ejbHandler.start(); // conversation start
    HttpRequest2[Thread-2] : ejbHandler.doJob1(); // long running transaction
    HttpRequest3[Thread-1] : ejbHandler.doJob2(); // long running transaction
    HttpRequest4[Thread-3] : ejbHandler.commit(); // conversation end
    I tried a small use case and it seems to work with JBoss (maybe all my requests were part of the same thread, lucky me), but if an error occurs the transaction manager seems completely confused.
    If this pattern is legal (I know it is a very bad pattern...), I'll try to fix my code, if it isn't I'll have to drop JTA from this part of the code...
    Thank you.

    Yes, this is part of the required transactional behavior for a stateful session bean. A stateful session bean is the only session bean type whose bean instances can retain their association with a transaction after the business method returns. It's then the container's job to set up the correct transaction context for each successive invocation of the stateful session bean instance, until the transaction is committed or rolled back. If a system exception is thrown from the stateful session bean method the instance will be destroyed.

  • Do you have a transaction scope that spans multiple requests.

    We have an application that includes multiple tabs, which are really iframe instances. We need to maintain state for the entire time a tab is open, which may span multiple requests.
    I am not comfortable making all our backing beans "session" scope, and making them "request" scope forces us to do lots of work (DB access etc.) on every post-back to re-initialize the backing bean.
    I have been looking at both Shale and JBoss SEAM to give me this "conversational" scope. I have looked at "process" scope, however we may have the same backing bean in use for multiple tabs, therefore would need it linked to something like the viewId.
    Does ADF plan on enhancing the "process" scope functionality or is it OK to add SEAM or shale at the front-end of the ADF processing lifecycle?
    Your guidance would be appreciated.

    The processScope functionality seems pretty crude.
    I was looking to define which data elements of the backing bean need to be stored (maybe using annotations) and have them restored automatically before the APPLY_REQUEST lifecycle gets initiated. I can write this functionality, however I was looking for a more robust solution.

  • Do you have a "transaction" scope for multi-request views

    We have an application that includes multiple tabs, which are really iframe instances. We need to maintain state for the entire time a tab is open, which may span multiple requests.
    I am not comfortable making all our backing beans "session" scope, and making them "request" scope forces us to do lots of work (DB access etc.) on every post-back to re-initialize the backing bean.
    I have been looking at both Shale and JBoss SEAM to give me this "conversational" scope. I have looked at "process" scope, however we may have the same backing bean in use for multiple tabs, therefore would need it linked to something like the viewId.
    Does ADF plan on enhancing the "process" scope functionality or is it OK to add SEAM or shale at the front-end of the ADF processing lifecycle?
    Your guidance would be appreciated.

    Hi,
    have a look at the ADF developer documentation on OTN and read the chapters on task flows. You can have beans in a task flow whose scope is set to backing bean.
    If a task flow is then used multiple times in a region, the backing bean scope makes sure the two instances are isolated.
    You can't mix and match the Seam or Shale lifecycle with ADF Faces RC, and vice versa.
    Frank

  • Do I need Distributed Transaction Scope when I have Two Database in Single SQL Server Instance

    Dear Sirs.
    I have two databases in SQL Server Express 2008 R2, and I move rows from Database 1, Table 1 to Database 2, Table 1.
    Do I need a distributed transaction or just a regular transaction?
    Thank you in advance.
    Irakli Lomidze

    What you are doing does not qualify as a distributed transaction. Please read about distributed transactions at the link below:
    http://technet.microsoft.com/en-us/library/ms188721%28v=sql.105%29.aspx

  • Question about Transaction scope

    Hello all
    I failed to find the answer in the Forte documentation.
    What happens when
    - a client GUI uses transactions,
    - services use message duration AND dependent transactions?
    Do we have
    - two independent transactions,
    - an undetected error,
    - something in between?
    Thank you for any reference to the right document or an
    explanation.
    Jean-Claude Bourut
    s-mail 72-78 Grande-Rue 92310 Sevres
    e-mail [email protected]
    Tel (33-1) 41 14 86 41

    If you want to access variables outside of a method then you have to use class variables.
    regards,
    Owen
    class TestClass {
      String a;
      public void init() {
        a = "aba";
      }
      public void output() {
        System.out.println(a);
      }
      public static void main(String[] args) {
         TestClass test = new TestClass();
         test.init();
         test.output();
      }
    }

  • JMS BC & BPEL Transaction Scope

    Hi,
    I'm trying to use Open ESB with BPEL SE, JMS BC, and HTTP BC to create a store-and-forward layer in an integration architecture. The idea is that messages that are used to update a slow external system (or one that is often down) are put in a JMS queue; the JMS BC picks them up and passes them to a BPEL process via NMR. The BPEL process then calls the external system. If the external system is down, or some other system-related error occurs, then I want the transaction to roll back and the message to remain on the queue to be retried later.
    My issue is that when the JMS BC picks up the message, it sends it to BPEL via NMR and, as soon as it gets the "Done" response, it commits the associated XA transaction and the message is permanently off the JMS queue. But the "Done" response is sent by the BPEL SE as soon as it gets the message, as opposed to only once it has called the external system successfully. This breaks the model, as any failures are not retried... Is there a way around this? Am I doing something wrong? How can I get the transaction context to span the entire call flow from JMS BC to BPEL and back, with failures resulting in the message remaining in the JMS queue to be retried later?
    Thanks
    Paul

    Paul,
    The reason for the current behavior of BPEL SE (sending DONE as soon as the receive completes) is by design, to support asynchronous message communication and long-running processes. We do not want to keep the transaction open for the life of the process instance, as the transaction may time out (for long-running processes). If you want to override this behavior you can do so by setting the business process as atomic (the Atomic attribute of the business process set to true), but the effect of this is that all the operations in the BPEL engine, and also outbound invocations, would use the same (received) transaction context. Note that some of these features are still under development. I understand that this still might not solve your use case, where the best solution would be retries. This work is underway and should be available in the next release.
    Just to give you a heads-up: retry support is being provided as part of the Systemic Qualities initiative, which is underway and is targeted to provide support for wire qualities (throttling/retries), fault propagation, and security propagation, among others, across Open ESB components. The particular item of interest to you would be the wire-quality work. Once implemented, this would allow a configurable number of retries at specified intervals.
    Regards,
    Malkit

  • Oracle forms vs ADF transaction scopes

    In Oracle Forms, we tend to build a form for each atomic business function that we want to implement. When we commit or roll back, we generally affect only this function. In Oracle ADF/BC JSF, we build an application module that tends to encompass several entities and different pages implementing many different business functions. Issuing a rollback or commit in ADF affects all the components within this application module. This makes development tedious with respect to controlling CRUD management. For example, a cancel button that rolls back Ins/Upd/Del operations for a specific page is tricky, as a rollback cancels everything. What are the best practices in this respect? It would be a waste to create a separate application module for everything that resembles a form!
    Any ideas?

    Hi,
    have a look at
    http://www.oracle.com/webapps/online-help/jdeveloper/10.1.3/state/content/navId.4/navSetId._/vtAnchor.BABEAJJA/vtTopicFile.bcadfdevguide%7Cbcstatemgmt%7Ehtm/
    and read about the savepoint functionality
    Frank

  • Transaction aborts after installing ODAC 12c Release 3

    I have .NET code that uses a transaction scope, which works fine using ODAC 11g Release 4 but fails with "Unable to enlist in a distributed transaction" using ODAC 12c Release 1, 2, or 3.  The transaction is to a single database.  I am at a loss as to what the issue could be.
    This issue occurs on both Windows 7 and Windows Server 2008 R2.
    I have reviewed the trace logs for both the Microsoft Distributed Transaction Coordinator and the Oracle Services for Microsoft Transaction Server.  The MSDTC trace logs indicate that the transaction abort request was received from the calling application ("RECEIVED_ABORT_REQUEST_FROM_BEGINNER").  The ORAMTS trace logs indicate an OCI error and that there was an attempt to begin a distributed transaction without logging on ("OCI_ERROR - 2048", "ORA-02048: attempt to begin distributed transaction without logging on").
    I can reproduce this error with a simple code example that just tries to insert records into a table.  If I change the data provider to "System.Data.OracleClient", or uninstall 12c and install 11g, this code works fine.
    DataSet1TableAdapters.DataTable1TableAdapter da = new DataSet1TableAdapters.DataTable1TableAdapter();
    using (TransactionScope scope = new TransactionScope())
    {
        Transaction txn = Transaction.Current;
        try
        {
            da.Insert(0, "This ia a title");
            scope.Complete();
            lblmessage.Text = "Transaction Succeeded.";
        }
        catch (Exception ex)
        {
            txn.Rollback();
            lblmessage.Text = "Transaction Failed.";
        }
    }
    Can anyone provide any ideas what is happening?  I really would like to use ODAC 12c.
    Thanks.

    Moving to the ODP.NET forum to get a wider audience.

  • Container-managed / bean-managed transaction demarcation

    I am trying to make sure I understand container-managed and bean-managed transaction demarcation and in particular where you have one bean calling another bean. What happens where one of the beans has container-managed transaction demarcation and the other bean-managed transaction demarcation. In fact the initial question to ask is, is this allowed?
    Let's use an application scenario to illustrate the issue. The application has a payment transaction. Payments can be received in one of two ways:
    1. As a payment at a branch where the individual payment is processed on a client application and resulting in the processing of a single payment transaction.
    2. As a batch of payments received from a bank containing, potentially, thousands of payment transactions.
    The proposed implementation for this uses two session beans. The first is a Payment session bean that implements the business logic as appropriate calling entity beans to persist the change. The second is a BatchPayment session bean. This processes the batch of payment transactions received from the bank. The BatchPayment reads through the batch of payments from a bank calling the Payment session bean for each payment transaction.
    Let's look at the transactional properties of both session beans. In order to support the client application the Payment session bean can implicitly enforce transactional integrity and is therefore set to container-managed transaction demarcation. However, the BatchPayment session bean will want to explicitly specify transaction demarcation for performance reasons. The transactional "commit" process is relatively expensive. When processing a large batch of transactions, rather than performing a commit after every transaction is processed, we want to perform the commit after a number of transactions have been processed. For example, we may decide to commit after every 100 transactions have been processed. The processing will have a shorter elapsed time as we have not had to perform 99 commit processes. So the BatchPayment session bean will want to explicitly specify its transaction demarcation and will therefore be defined with bean-managed transaction demarcation.
    How would this be implemented? A possible solution is:
    Payment session bean implemented with container-managed transaction demarcation with transaction scope set to Required.
    BatchPayment session bean implemented with bean-managed transaction demarcation with transaction scope set to Required.
    When the client application is run it calls the Payment bean and the container-managed transaction demarcation ensures the transactional integrity of that transaction.
    When a BatchPayment process is run it explicitly determines the transaction demarcation. Let's say that after every 100 Payment transactions (through 100 calls to the Payment session bean) have been processed, the BatchPayment bean issues a commit. In this scenario, however, we have mixed container-managed and bean-managed transaction demarcation. Hence my original question: can container-managed and bean-managed transaction demarcation be mixed? If not, how is it possible to implement the requirements as described above?
    Thanks for any thoughts.
    Paul

    "BatchPayment session bean implemented with bean-managed transaction demarcation with transaction scope set to Required."
    Didn't quite understand this sentence... if it's BMT, it has no declarative transaction attributes such as "Required"...
    Anyway, first of all I'll have to ask: why at all would you want to commit in the middle of the business method? To get as much through as possible before a potential crash? :-)
    "Can container-managed and bean-managed transaction demarcation be mixed?"
    Yes, of course. Just remember that the "direction" matters:
    a BMT SB that propagates its transaction to a method in a CMT SB demarcated with "Required" is the simplest case. If it were "reversed", or for that matter in any case where a BMT bean might be called within an active transaction context, the bean must perform logic to manage the transaction state. For instance (and this is the most common case), checking to see if a transaction is active and, if so, not doing anything (just using the one that is already active).
    "If not, how is it possible to implement the requirements as described above?"
    You could also implement this scenario with CMTs all the way through. Your BatchPayment SB could consist of two methods, one (say, execute(Collection paymentsToExecute)) with "Supports", and another (say, executeBatchUnit(Collection paymentsToExecute, int beginIndex, int endIndex)) with "RequiresNew",
    then have the first just call the other with indexes denoting a group of payments each time.
    Still, it does seem more suitable to use BMT for these kinds of things...
    Hope this helped...
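    The chunked-commit idea in this reply can be sketched in plain Java. The toy code below only simulates what the container does for a "RequiresNew" method (commit on normal return, roll back on system exception); all names are illustrative, not a real EJB API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of executeBatchUnit(...) with "RequiresNew": each chunk of
// payments runs in its own transaction, so one bad payment only rolls
// back its own chunk, not the whole batch.
public class BatchDemo {
    static final int CHUNK = 100;
    static final List<Integer> committed = new ArrayList<>();

    // Simulates one "RequiresNew" transaction over payments[begin, end).
    static void executeBatchUnit(List<Integer> payments, int begin, int end) {
        List<Integer> staged = new ArrayList<>();
        try {
            for (int i = begin; i < end; i++) {
                int amount = payments.get(i);
                if (amount < 0) throw new IllegalArgumentException("bad payment");
                staged.add(amount);
            }
            committed.addAll(staged);     // "container commit" on normal return
        } catch (RuntimeException e) {
            // "container rollback": the staged work is discarded
        }
    }

    // Simulates the "Supports" method that walks the batch in chunks.
    static void execute(List<Integer> payments) {
        for (int begin = 0; begin < payments.size(); begin += CHUNK) {
            executeBatchUnit(payments, begin, Math.min(begin + CHUNK, payments.size()));
        }
    }

    public static void main(String[] args) {
        List<Integer> payments = new ArrayList<>();
        for (int i = 0; i < 250; i++) payments.add(i == 150 ? -1 : 10);
        execute(payments);
        // The chunk [100,200) contains the bad payment and rolls back entirely;
        // chunks [0,100) and [200,250) commit, so 150 payments survive.
        System.out.println(committed.size());
    }
}
```

    With 250 payments and one bad payment at index 150, the middle chunk of 100 rolls back and 150 payments commit, which is the tradeoff the batching design accepts in exchange for fewer commits.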

  • Handling Transaction in Stateful Session Bean

    I wrote a public method like public void doTransaction().
    It will call 2 private methods, say methodA and methodB.
    Both private methods have db access statements and will update the db. They get different db connections and close the connection when the method call finishes.
    How do I include them in one transaction? I want to be able to roll back the work of the first method when I catch an exception thrown by the second method.
    I tried simply defining the transaction type of the public method as Container and Required, but it doesn't work: the first method doesn't roll back. Of course I can let the 2 private methods share the same connection and commit after calling them both, but what if they are in different DBs?

    Ok... here it goes...
    You can do it in the following manner.
    As you said, you have 2 private methods doing d/b updates, and these are called from a public method.
    Since stateful session beans stay associated with a client across methods, you can take advantage of that: write your own user-defined transaction.
    Begin the transaction scope in your public method before calling the 1st private method. Call the 2 methods in a try block. Once you are done with these methods, you can commit and end the transaction. If you get any exception, roll back the transaction in the catch block. Alternatively, if you get an exception in the 2nd method, you can roll back the transaction there itself.
    Stateful session beans let you span a bean-managed transaction across methods: you can begin your transaction in one method and end it in a different method, or you can end the transaction after calling the methods.
    The problem you are dealing with can typically be handled very well by writing a bean-managed transaction.
    Hope this helps. If you need any more clarity on my solution, please let me know.
    -amit
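    The shape of what this answer describes can be sketched as follows. ToyUserTransaction is a made-up stand-in for the container's javax.transaction.UserTransaction, used here only to show the begin-in-one-method, commit-in-another pattern of a bean-managed transaction in a stateful bean:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a bean-managed transaction that a stateful bean
// keeps open across method calls: work is staged until commit().
class ToyUserTransaction {
    final List<String> log = new ArrayList<>();          // committed work
    private final List<String> staged = new ArrayList<>();
    void begin()            { staged.clear(); }
    void stage(String work) { staged.add(work); }
    void commit()           { log.addAll(staged); staged.clear(); }
    void rollback()         { staged.clear(); }
}

// Shape of the stateful bean: doTransaction() spans methodA/methodB
// with one transaction, rolling both back if the second fails.
public class StatefulTxDemo {
    final ToyUserTransaction tx = new ToyUserTransaction();

    void methodA() { tx.stage("update A"); }
    void methodB(boolean fail) {
        if (fail) throw new RuntimeException("db error in B");
        tx.stage("update B");
    }

    public List<String> doTransaction(boolean failB) {
        tx.begin();
        try {
            methodA();
            methodB(failB);
            tx.commit();            // both updates become visible together
        } catch (RuntimeException e) {
            tx.rollback();          // update A is discarded too
        }
        return tx.log;
    }

    public static void main(String[] args) {
        System.out.println(new StatefulTxDemo().doTransaction(false)); // [update A, update B]
        System.out.println(new StatefulTxDemo().doTransaction(true));  // []
    }
}
```

    In a real bean the begin/commit/rollback calls would go to an injected UserTransaction, and the connections would have to come from a transaction-aware DataSource so the rollback actually reaches the database work.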

  • SQL 2005 Distributed Transactions from WCF

    Hello,
    I've been redirected here from the Transaction Programming forum because I have a peculiar issue with SQL 2005 running INSERT stored procs from multiple WCF services, all within a TransactionScope.
    The original post is http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2720665&SiteID=1&mode=1
    The story goes: I have SRVC A, which starts a TransactionScope which in turn calls SRVC B & C in sequence based on processing rules.
    SRVC A is a Sequential Workflow which Starts and Completes the TransactionScope
    SRVC B Creates a new Customer into the database
    SRVC C Creates new Accounts for that Customer and Initialises the accounts with funds
    The DB Tables underneath are Customer, Account and AccountLog
    DDL
    Code Block
    CREATE TABLE [Member].[Customers](
        [CustomerId] [int] IDENTITY(1,1) NOT NULL,
        [Name] [varchar](32) NOT NULL,
        [CreatedUtc] [datetime] NOT NULL,
        CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED
        (
            [CustomerId] ASC
        )
    ) ON [PRIMARY]
    GO
    CREATE TABLE [Bank].[Accounts](
        [AccountId] [int] IDENTITY(1,1) NOT NULL,
        [CustomerId] [int] NOT NULL,
        [CurrentBalance] [money] NOT NULL,
        [LastUpdateDate] [datetime] NULL,
        [CreatedDate] [datetime] NOT NULL,
        [timestamp] [timestamp] NOT NULL,
        CONSTRAINT [PK_Bank_Account] PRIMARY KEY CLUSTERED
        (
            [AccountId] ASC
        )
    ) ON [PRIMARY]
    GO
    ALTER TABLE [Bank].[Accounts] WITH CHECK ADD CONSTRAINT [FK_Account_Customer] FOREIGN KEY([CustomerId])
    REFERENCES [Member].[Customers] ([CustomerId])
    GO
    CREATE TABLE [Bank].[AccountLog](
        [AccountLogId] [int] IDENTITY(1,1) NOT NULL,
        [AccountId] [int] NOT NULL,
        [Amount] [money] NOT NULL,
        [UtcDate] [datetime] NOT NULL,
        CONSTRAINT [PK_Bank_AccountLog] PRIMARY KEY CLUSTERED
        (
            [AccountLogId] ASC
        )
    ) ON [PRIMARY]
    GO
    ALTER TABLE [Bank].[AccountLog] WITH CHECK ADD CONSTRAINT [FK_AccountLog_Account] FOREIGN KEY([AccountId])
    REFERENCES [Bank].[Accounts] ([AccountId])
    GO
    NB. I've removed most fields not essential for this example.
    So from SRVC A I invoke SRVC B and the Customer is created; however, when I get to SRVC C and the accounts are to be created, I get a lock. Only when the transaction aborts due to timeout do I see in SQL Profiler that the call to the SP that creates the Account is executed, but it eventually rolls back as it is part of the distributed transaction.
    Now, if I set the isolation level in the TransactionScope to ReadUncommitted (urgh) the problem remains. When I set the isolation level to Read Uncommitted in the SP that creates the account the problem remains, but when I remove the FK constraint the problem disappears. The other curious thing is that with the Customer -> Account FK removed, when SRVC C inserts funds into the AccountLog (which also updates an aggregated total in the Account) from within the same transaction scope, and with the Account -> AccountLog FK constraint in place, there is no locking even with isolation level Serializable.
    I'm quite at a loss as to what could be causing these issues.  If anyone has any suggestions I would greatly appreciate any help.
    Thanks
    Andy
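
    One way to reason about the blocking described above: if SRVC C's connection enlists in a different transaction than SRVC B's, the FK validation on the Accounts insert has to read the Customer row that is still exclusively locked by the uncommitted distributed transaction, and it waits until the transaction times out. A thread-based Java sketch (threads standing in for transactions, a reentrant lock standing in for the row lock; all names invented) shows the shape of it:

    ```java
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    // Model of the FK-check blocking: the row inserted by SRVC B stays
    // exclusively locked until the distributed transaction commits, so an FK
    // check made from a *different* transaction (here: a different thread)
    // blocks and times out, while the same transaction (same thread, lock is
    // reentrant) sees the row immediately.
    public class FkLockSketch {
        static final ReentrantLock customerRowLock = new ReentrantLock();

        // FK check issued from the same transaction (same thread): reentrant, succeeds.
        static boolean fkCheckSameTransaction() {
            boolean ok = customerRowLock.tryLock();
            if (ok) customerRowLock.unlock();
            return ok;
        }

        // FK check issued from a different transaction (another thread): times out.
        static boolean fkCheckOtherTransaction() {
            final boolean[] got = {false};
            Thread srvcC = new Thread(() -> {
                try {
                    got[0] = customerRowLock.tryLock(100, TimeUnit.MILLISECONDS);
                    if (got[0]) customerRowLock.unlock();
                } catch (InterruptedException ignored) {}
            });
            srvcC.start();
            try { srvcC.join(); } catch (InterruptedException ignored) {}
            return got[0];
        }

        public static void main(String[] args) {
            customerRowLock.lock(); // SRVC B inserts the Customer: row locked until commit
            System.out.println("same tx sees parent row:  " + fkCheckSameTransaction());  // true
            System.out.println("other tx sees parent row: " + fkCheckOtherTransaction()); // false
            customerRowLock.unlock(); // commit releases the lock
        }
    }
    ```

    If this model matches the symptoms, the direction to investigate is making sure SRVC C's connection enlists in the same distributed transaction as SRVC B's, rather than lowering the isolation level or dropping the FK.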

    Andy,
    Is this still an issue?
    Thanks!
    Ed Price, Power BI & SQL Server Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • Long-running transactions and the performance penalty

    If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
    warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
    I didn't find any description of such penalties.
    So my questions to gurus:
    Does it create some additional persistence point(s) / commit(s) in LR orchestration/scope?
    Where do these persistence points happen, especially in an LR orchestration?
    Leonid Ganeline [BizTalk MVP] BizTalk Development Architecture

    The wording may make it sound so, but IMHO, if during the build of an orchestration we get carried away with scope shapes, we end up with more persistence points, which do affect performance, so one additional commit should not make so much of a difference. The warning may have been added because of end-user feedback: people may have opted for long-running transactions without realizing the performance overheads, and in subsequent performance optimization sessions with Microsoft put it on the product enhancement list as "provide us with an indication if we're going to incur performance penalties". A lot of people design orchestrations the way they write code (not saying that is a bad thing), using the scope shape along the lines of a try/catch block, and with Microsoft marketing long-running transactions/compensation blocks as USPs for BizTalk, people did get carried away into using them without understanding the implications.
    I'm not saying that no additional persistence points are added, just wondering whether adding one is sufficient to warrant the warning. But if I nest enough scope shapes and mark them all as long-running, they may add up.
    Looking at things other than persistence points, I tried to think about how one might implement the long-running transaction (nested, incorporating atomic scopes, etc.). Would you be able to leverage the .NET transaction object (something the pipelines use and execute under), or would that model not handle the complexities of a long-running transaction, which by definition can span days/months? Keeping .NET Transaction objects active, or serializing/de-serializing them into the operating context, would cause more issues.
    Regards.

  • JTA transaction unexpectedly rolled back

    I have a Spring Java web app deployed on OC4J 10.1.3.3 using TopLink for container-managed persistence. When my app is launched, a named query is executed that uses JPQL to load up collections of objects. It is failing with the subject-line exception.
    org.springframework.transaction.UnexpectedRollbackException: JTA transaction unexpectedly rolled back (maybe due to a timeout); nested exception is javax.transaction.RollbackException: Timed out
    If, however, I then modify the URL and force an action that will re-execute the same query, it works fine.
    Any ideas on which configuration settings I should investigate/change to enable this to work the first time through?
    I am not dealing with large collections here. At this point, there are 11 main objects that have children/parents. Re-execution of the query happens very fast.
    Thank you!
    Ginni

    Hello,
    It sounds like it's not the query itself that is taking too long, but all the processing done before the query in the same transaction scope. The error is that the transaction is timing out, so you should start by checking when the transaction is started and whether the timeout value needs to be increased to cover the time this process takes, or whether the transaction can be made smaller or broken up into smaller pieces. Or, if the query just returns data that isn't going to be modified, whether a transaction is required at all.
    Best Regards,
    Chris
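
    Chris's point, that the transaction clock covers everything done inside the transaction and not just the query, can be sketched with plain Java (the timeout value and timings below are invented for illustration): on first launch, slow pre-query setup work eats the budget before the fast query ever runs; on the second request the same query commits fine.

    ```java
    // Sketch of a transaction timeout: the deadline starts when the
    // transaction begins, so slow setup inside the transaction (e.g. a cold
    // cache on first launch) can cause a rollback even though the query
    // itself is fast.
    public class TxTimeoutSketch {
        static String runInTransaction(long timeoutMs, long preQueryWorkMs, long queryMs) {
            long deadline = System.nanoTime() + timeoutMs * 1_000_000; // tx begins: clock starts
            sleep(preQueryWorkMs);                                     // setup done inside the tx
            if (System.nanoTime() > deadline) return "rolled back: Timed out";
            sleep(queryMs);                                            // the query itself is fast
            if (System.nanoTime() > deadline) return "rolled back: Timed out";
            return "committed";
        }

        static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
        }

        public static void main(String[] args) {
            // First launch: warm-up work inside the tx exceeds the 50 ms budget.
            System.out.println(runInTransaction(50, 80, 5));  // rolled back: Timed out
            // Second request: caches are warm, the same fast query commits.
            System.out.println(runInTransaction(50, 1, 5));   // committed
        }
    }
    ```

    The remedies map directly onto Chris's suggestions: raise timeoutMs, shrink preQueryWorkMs by moving setup out of the transaction, or skip the transaction entirely for read-only data.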
