Transaction Management in ODI in a Distributed Environment

Dear All,
I would like to understand how ODI manages distributed transactions so that a logical unit of work is either committed completely or rolled back completely.
For example, suppose I have to develop interface(s) where the transaction is distributed, i.e. the data comes from more than one source system (SQL Server, Oracle, DB2 and so on), and the data coming from the three systems forms one logical unit of work. If one of the systems fails to provide its data, how does ODI internally roll back the complete transaction, or complete it successfully when all three sources provide the data needed to finish the logical unit of work?
How would ODI implement the concept of two-phase commit?
Thanks in Advance

I have exactly the same question about ODI integration with multiple systems and the way it deals with failures!
Let's hope for some light :-)
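
As far as I know, ODI does not expose a two-phase commit to the interface developer; it usually stages the extracted data in work tables and commits per target data server, so "all or nothing" behaviour has to be designed into the package or load plan. For background only, this is roughly what a two-phase commit across two XA-capable sources looks like in plain Java under JTA; the JNDI names and table names are made up for the example:

// Illustration only: one JTA transaction spanning two XA data sources.
// "jdbc/OracleXA" and "jdbc/SqlServerXA" are placeholder JNDI names.
import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TwoSourceUnitOfWork {

    public void loadBothOrNothing() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource oracleDs = (DataSource) ctx.lookup("jdbc/OracleXA");
        DataSource sqlServerDs = (DataSource) ctx.lookup("jdbc/SqlServerXA");

        utx.begin();
        boolean ok = false;
        try (Connection ora = oracleDs.getConnection();
             Connection mss = sqlServerDs.getConnection();
             Statement s1 = ora.createStatement();
             Statement s2 = mss.createStatement()) {
            // both updates are enlisted in the same global transaction
            s1.executeUpdate("UPDATE target_orders SET status = 'LOADED'");
            s2.executeUpdate("UPDATE extract_log SET extracted = 1");
            ok = true;
        } finally {
            if (ok) {
                utx.commit();   // the transaction manager prepares both branches, then commits
            } else {
                utx.rollback(); // any failure rolls back both branches together
            }
        }
    }
}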

Similar Messages

  • Java user-defined transaction management not working correctly???

    Hi everyone,
    I have encountered a problem when using Java user-defined transaction management in my session bean. It threw an exception but I could not work out what it means. Could anyone comment on this? Thanks.
    This BrokerBean is a stateless session bean that calls other entity beans to perform some simple operations. There are two Cloudscape databases in use: the Invoices EB uses InvoiceDB and all the other EBs use StockDB.
    If I comment out the user-defined transaction management code, everything works fine. The same is true if I comment out the Invoices EB code. It seems to me that something is wrong in the transaction management when dealing with distributed databases.
    --------------- source code ----------------------
    public void CreateInvoices(int sub_accno) {
        try {
            // user-defined (bean-managed) transaction obtained from the session context
            utx = context.getUserTransaction();
            utx.begin();
            SubAcc subAcc = subAccHome.findByPrimaryKey(new SubAccPK(sub_accno));
            String sub_name = subAcc.getSubName();
            String sub_address = subAcc.getSubAddress();
            Collection c = stockTransHome.findBySubAccno(sub_accno);
            Iterator i = c.iterator();
            ArrayList a = new ArrayList();
            while (i.hasNext()) {
                StockTrans stockTrans = (StockTrans) i.next();
                int trans_id = stockTrans.getTransID();
                String tran_type = stockTrans.getTranType();
                int stock_id = stockTrans.getStockID();
                float price = stockTrans.getPrice();
                // create the invoice in InvoiceDB, then remove the stock transaction from StockDB
                Invoices invoices = invoicesHome.create(sub_accno, sub_name, sub_address,
                                                        trans_id, stock_id, tran_type, price);
                stockTrans = stockTransHome.findByPrimaryKey(new StockTransPK(trans_id));
                stockTrans.remove();
                // note: the commit and the reset of utx sit inside the while loop
                utx.commit();
                utx = null;
            }
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                    utx = null;
                } catch (Exception ex) {
                }
            }
            // e.printStackTrace();
            throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
        }
    }
    --------------- exception ----------------------
    Initiating login ...
    Enter Username:
    Enter Password:
    Binding name:`java:comp/env/ejb/BrokerSB`
    EJB test succeed
    Test BuyStock!
    Test BuyStock!
    Test BuyStock!
    Test BuyStock!
    Test SellStock!
    Test SellStock!
    Caught an exception.
    java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
        java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
        org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.corba.ee.internal.iiop.ShutdownUtilDelegate.mapSystemException(ShutdownUtilDelegate.java:64)
        at javax.rmi.CORBA.Util.mapSystemException(Util.java:65)
        at BrokerStub.CreateInvoices(Unknown Source)
        at Client.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at com.sun.enterprise.util.Utility.invokeApplicationMain(Utility.java:229)
        at com.sun.enterprise.appclient.Main.main(Main.java:155)
    Caused by: java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
        org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.enterprise.iiop.POAProtocolMgr.mapException(POAProtocolMgr.java:389)
        at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:431)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:265)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)

    Three things:
    First, maybe you should think about putting utx.begin() just before the invoicesHome.create() call and utx.commit() just after stockTrans.remove(). It won't solve the current problem, but it will help performance once the problem is solved.
    Second, your utx.commit() appears to be outside the try block. How is the code compiling then?
    Third, add a System.out.println() call before and after invoicesHome.create() and see where the problem actually lies.
    Let us know...
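
    For what it's worth, a sketch of the first suggestion, reusing the same bean fields as in the posted code (whether it fixes the rollback problem is a separate question):

    while (i.hasNext()) {
        StockTrans stockTrans = (StockTrans) i.next();
        int trans_id = stockTrans.getTransID();
        String tran_type = stockTrans.getTranType();
        int stock_id = stockTrans.getStockID();
        float price = stockTrans.getPrice();

        // keep the transaction as short as possible:
        // begin just before the create, commit just after the remove
        utx = context.getUserTransaction();
        utx.begin();
        try {
            invoicesHome.create(sub_accno, sub_name, sub_address,
                                trans_id, stock_id, tran_type, price);
            stockTransHome.findByPrimaryKey(new StockTransPK(trans_id)).remove();
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
        }
    }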
    Hi SteveW2,
    Thanks for being so helpful. Here are my replies:
    "Can I just ask why you're not using container-managed transactions?"
    The reason I didn't use container-managed transactions is that I don't really know how to do that. I am more familiar with this user-defined transaction handling.
    I have attempted to implement the same method in an entity bean and just let the container manage the rollback itself. The same exception was thrown when running the client.
    "Also, the transaction behaviour is likely to relate to the app server you're using - which is it?"
    What do you mean by the app server? I am using J2EE 1.3.1 if that is what you meant.
    "Finally, if your code has a problem rolling back and throws an exception, you discard your exception, thereby losing useful information."
    I have tried to print the exception stack as well, but it is the same as just printing the general exception.
    This problem is very strange, because if I comment out the transaction management code, everything works fine. Or if I am only working with a single database, with this user-defined transaction handling, everything works fine as well.
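
    On the container-managed suggestion: a rough sketch of what the same method could look like with container-managed transactions (the transaction attribute itself lives in ejb-jar.xml, not in the code):

    // Container-managed version (sketch). In ejb-jar.xml, a <container-transaction>
    // entry would give CreateInvoices the transaction attribute "Required", so the
    // container begins the transaction before the method runs and commits it afterwards.
    public void CreateInvoices(int sub_accno) {
        try {
            SubAcc subAcc = subAccHome.findByPrimaryKey(new SubAccPK(sub_accno));
            Collection c = stockTransHome.findBySubAccno(sub_accno);
            for (Iterator i = c.iterator(); i.hasNext();) {
                StockTrans stockTrans = (StockTrans) i.next();
                invoicesHome.create(sub_accno, subAcc.getSubName(), subAcc.getSubAddress(),
                                    stockTrans.getTransID(), stockTrans.getStockID(),
                                    stockTrans.getTranType(), stockTrans.getPrice());
                stockTrans.remove();
            }
            // no utx.begin()/commit()/rollback() calls at all
        } catch (Exception e) {
            // throwing a system exception makes the container mark the transaction for rollback
            throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
        }
    }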
    Here is the error log from the J2EE server if you are interested.
    ------------ error log ---------------
    javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
        javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
        at InvoicesBean.ejbCreate(Unknown Source)
        at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:31)
        at InvoicesHomeStub.create(Unknown Source)
        at BrokerBean.CreateInvoices(Unknown Source)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
    javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
        javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
        at com.sun.ejb.containers.BaseContainer.checkExceptionClientTx(BaseContainer.java:1434)
        at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:1294)
        at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:403)
        at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:37)
        at InvoicesHomeStub.create(Unknown Source)
        at BrokerBean.CreateInvoices(Unknown Source)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
    What is "connection previously closed, open another
    connection"? This might be the cause of the
    exception.
    I'll keep trying till I solve the problem.
    Thanks,
    Sasuke

  • Managed to get EPM 11.1.2.1 running in a distributed environment?

    Hi,
    please see below...
    So far the installation and configuration have worked well.
    However, in Shared Services under the Essbase node there are no applications available.
    The same happens when I try to provision a user (for apps).
    Can someone give me a hint on how to take a closer look to solve the problem?
    Everything else seems to work fine. Essbase is up and running and it's accessible via the EAS console.
    Thank you in advance!
    Andre
    Hi all,
    a more general question:
    Has anybody successfully managed to get EPM 11.1.2.1 running in a distributed environment?
    If yes - please let me know, just to give me a little bit of hope.
    I have tried to install and configure EPM 11.1.2.1 like this:
    All servers are Windows 2008 R2.
    1. HTTP and J2EE server
    2. Essbase server
    3. Others server
    4. RDBMS server (repository)
    However, the OPMN process on the Essbase server does not start, and thus I cannot go on to the next steps in my install/config process.
    Thank you in advance and best regards.
    Andre

    Hi Pablo,
    Thanks for your inputs.
    - I am a bit familiar with F5's BIG-IP load-balancing methods - round-robin, least connections and dynamic ratio - while intelligently supporting session persistence.
    - We can also manage load balancing via the WebLogic Admin Console, and, as you have noted, via OHS as well - which I am not familiar with...
    This is a newbie question - wouldn't having 3 different agents managing load balancing complicate things? As the WebLogic server sits on top of OHS, I guess they work together to provide load balancing, and configuring WebLogic for clustering/load balancing should affect the OHS configuration as well. Is this how it works at a high level, or is it more complicated?
    The EPM System Configurator creates the required cluster and adds servers to the cluster when we deploy the web applications in the final step of the configuration. So we need not manually configure WebLogic for clustering. But when and where does one configure load balancing?
    Thanks again.. Essbase infrastructure is indeed as vast a topic as it is interesting... :)

  • Error connecting SQL Azure - Network access for Distributed Transaction Manager (MSDTC) has been disabled

    Sometimes I get an error connecting to SQL Azure. The error occurs in an ASP.NET application and in a Windows service running on a VM in Azure. Error details:
    System.Data.Entity.Core.EntityException: The underlying provider failed on Open. ---> System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool. ---> System.Runtime.InteropServices.COMException: The transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D024)
       at System.Transactions.Oletx.IDtcProxyShimFactory.ReceiveTransaction(UInt32 propgationTokenSize, Byte[] propgationToken, IntPtr managedIdentifier, Guid& transactionIdentifier, OletxTransactionIsolationLevel& isolationLevel, ITransactionShim& transactionShim)
       at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
       --- End of inner exception stack trace ---
       at System.Transactions.Oletx.OletxTransactionManager.ProxyException(COMException comException)
       at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken)
       at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
       at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)
       at System.Transactions.EnlistableStates.Promote(InternalTransaction tx)
       at System.Transactions.Transaction.Promote()
       at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction)
       at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts)
       at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
       at System.Data.ProviderBase.DbConnectionPool.PrepareConnection(DbConnection owningObject, DbConnectionInternal obj, Transaction transaction)
       at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
       at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.Open()
       at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.<>c__DisplayClass1.<Execute>b__0()
       at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation)
       at System.Data.Entity.Core.EntityClient.EntityConnection.Open()
       --- End of inner exception stack trace ---
       at System.Data.Entity.Core.EntityClient.EntityConnection.Open()
       at System.Data.Entity.Core.Objects.ObjectContext.EnsureConnection()
       at System.Data.Entity.Core.Objects.ObjectContext.ExecuteInTransaction[T](Func`1 func, IDbExecutionStrategy executionStrategy, Boolean startLocalTransaction, Boolean releaseConnectionOnSuccess)
       at System.Data.Entity.Core.Objects.ObjectQuery`1.<>c__DisplayClassb.<GetResults>b__9()
       at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation)
       at System.Data.Entity.Core.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption)
       at System.Data.Entity.Core.Objects.DataClasses.EntityReference`1.Load(MergeOption mergeOption)
       at System.Data.Entity.Core.Objects.DataClasses.RelatedEnd.DeferredLoad()
       at System.Data.Entity.Core.Objects.Internal.LazyLoadBehavior.LoadProperty[TItem](TItem propertyValue, String relationshipName, String targetRoleName, Boolean mustBeNull, Object wrapperObject)
       at System.Data.Entity.Core.Objects.Internal.LazyLoadBehavior.<>c__DisplayClass7`2.<GetInterceptorDelegate>b__2(TProxy proxy, TItem item)

    Hello,
    I am not an expert in MSDTC, but as we know, SQL Azure Database does not support distributed transactions. This means that SQL Azure doesn't allow the Microsoft Distributed Transaction Coordinator (MS DTC) to delegate distributed transaction handling.
    One common cause of MSDTC getting involved in Entity Framework applications is the fact that the stack closes and reopens the same connection as needed (i.e. for each query that is executed). To avoid the stack opening and closing the connection multiple times, you can simply open the connection explicitly and run the queries on the same connection.
    The following thread is about a similar issue, please refer to:
    http://answers.flyppdevportal.com/categories/azure/sqlazure.aspx?ID=d705a8cf-cba4-494c-96f6-96a136bd29e3
    What's more, you can also try the workaround that involves setting the Enlist option of the SQL Azure connection to false. For a detailed explanation, please refer to: Entity Framework and SQL Azure
    Regards,
    Fanny Liu
    TechNet Community Support
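
    For reference, the Enlist workaround mentioned above is just an extra keyword on the connection string; with Enlist=False the connection no longer auto-enlists in an ambient System.Transactions transaction, so MSDTC promotion is avoided. Server, database and credentials below are placeholders:

    Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;Enlist=False;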

  • Landscape parameter in distributed environment

    Hi,
    while working on managed system configuration, in the "landscape parameters" step at one place we have to give the path to the folder containing the database logs.
    As I am working in a distributed environment, i.e. SAP on one server and the database on another server, I gave the path to the database log directory, and it gives an error that the directory was not found.
    Is there any workaround for it?

    Jansi Rani Murugesan
    Hi,
    Thanks for your help.
    I have another issue: the ZMOR transaction type is not visible to the dispatcher and processor. I have given them authorization in their authorization role for ZMOR in all PR_TYPE, but it is still not visible.
    Can you please help me out with this issue?

  • Very urgent help needed: distributed environment

    Hi All,
    I want to know more about distributed environments. Typically we have only one PRODUCTION system, like client 100, and all the day-to-day business activities are done in real time in this system. That is where the question arises of transferring data from one system to another using ALE/EDI IDocs.
    For example:
    data between the sales organization and the warehouse, and between the manufacturing unit and the warehouse,
    and all invoices go to the respective company code in FI. Then everything should be done under one roof (client) - that client here is 100, the PRODUCTION system, as per my knowledge.
    I am expecting a clear explanation of this scenario from you guys.
    Here are my questions:
    1) What is the system?
    ANS: warehouse, sales organization, manufacturing unit - these are all systems, which we can define in one PRODUCTION system and one client. And we create these RFC destinations with a three-digit number or through an IP address.
    2) I want to know about this three-digit system: will it be created within the production system (100)?
    3) If it is created within the production client (100) system, then what is the need for an IP address?
    4) What is a client?
    ANS: a client is the highest level of the hierarchy in the SAP system. In production we maintain 100 as the one client, but in development and quality systems we maintain several, like 100, 200, 300, 400, 500, etc., according to business requirements.
    *********(Please don't give answers like "dev system to quality system")***********
    I have added my own thoughts and views; I want to understand the clear scenario, within two lines if possible.
    The earliest answer is most appreciated.
    warm regards,
    lynx.

    Hi
    1. Client: a group of users who can access some portion of the data in the SAP system database.
    SAP introduced the client concept to allow different categories of users (such as developers, testers, end users) in the same SAP box (server).
    That is why tables in the SAP database are categorized as cross-client (shared) and client-specific.
    So a production server can have one or more clients, or all users may be on the same client.
    2. Distributed processing: check this scenario, it should help.
    The customer has a separate SAP Warehouse Management System (WMS) in the R/3 landscape, where all R/3 distribution data is replicated/distributed from R/3 to the WMS system via ALE. For example, sales orders are created in R/3 - when they are delivered (Sales document -> Deliver), ALE kicks in and the same delivery document is distributed to the WMS system, but the sales order is not distributed. Any subsequent functions for the delivery, like picking, packing, goods issue or shipment, are then done from the WMS system only, so that all distribution-specific transactional data is stored and processed in the WMS system.
    Again, in order to successfully distribute the delivery documents from R/3 to the WMS system via ALE, a lot of SD master data needs to be distributed prior to these subsequent distribution business processes (like picking, packing, GI, etc.). So another set of ALE distributions is also set up to distribute SD master data via IDocs (within the ALE framework) to the WMS system every time master data is created or changed in R/3 - for example plant, warehouse, storage type, storage location, material master, etc.
    For distribution of materials from R/3 to the WMS, we use basic type/IDoc type MATMAS05, message type MATMAS. We've set up MATMAS05 using ALE filters on division, sales org., distribution channel, material type, storage location and plant, as we want only specific org data to be distributed to the WMS.

  • What is a Transaction Manager?

    For updating multiple databases in a distributed environment one has to use two-phase commit. In 2PC there is one transaction manager that manages the transaction with all the resource managers.
    I need to know what this transaction manager is. Is it a separate piece of software that has to be plugged into WebLogic, or does it come as part of the WebLogic 6.1 beta release? I also read in the javax.transaction and javax.sql packages that for updating multiple databases one needs XADataSource and XAConnection objects.
    How are these two linked with 2PC?
    Please help

    Upkar,
    WLS 6.0 and above incorporates an implementation of a transaction manager (the engine that drives a two-phase commit) written by the same engineers who worked on the transaction manager in Tuxedo.
    In a two-phase commit, instead of saying "commit" directly to the databases, the transaction manager says "prepare" and then "commit" (these are the two phases). The interface that a database provides for the TM to do this through is XAResource.
    The XAConnection is merely a connection that supports this XAResource interface.
    None of this XA stuff should be seen in your application code, unless you are writing a transaction manager or a database driver, which I don't suppose you will be doing!
    Regards,
    Peter.
    Got a Question? Ask BEA at http://askbea.bea.com
    The views expressed in this posting are solely those of the author, and BEA Systems, Inc. does not endorse any of these views.
    BEA Systems, Inc. is not responsible for the accuracy or completeness of the information provided and assumes no duty to correct, expand upon, delete or update any of the information contained in this posting.
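
    To make the two phases concrete, here is a deliberately simplified sketch of what a transaction manager does with the XAResource interface behind the scenes; real transaction managers also log their decisions so they can resolve in-doubt branches after a crash, and the Xid values would be generated by the TM:

    // Simplified sketch of the transaction manager's side of a two-phase commit.
    // Application code normally never touches XAResource directly.
    import javax.sql.XAConnection;
    import javax.sql.XADataSource;
    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    public class TwoPhaseCommitSketch {

        public void run(XADataSource ds1, XADataSource ds2, Xid xid1, Xid xid2) throws Exception {
            XAConnection xc1 = ds1.getXAConnection();
            XAConnection xc2 = ds2.getXAConnection();
            XAResource r1 = xc1.getXAResource();
            XAResource r2 = xc2.getXAResource();

            r1.start(xid1, XAResource.TMNOFLAGS);
            r2.start(xid2, XAResource.TMNOFLAGS);
            // ... SQL work happens here on xc1.getConnection() / xc2.getConnection() ...
            r1.end(xid1, XAResource.TMSUCCESS);
            r2.end(xid2, XAResource.TMSUCCESS);

            try {
                // Phase 1: each resource durably prepares and votes (it throws XAException to veto).
                int vote1 = r1.prepare(xid1);
                int vote2 = r2.prepare(xid2);
                // Phase 2: commit each branch that did real work (XA_RDONLY branches are already done).
                if (vote1 == XAResource.XA_OK) r1.commit(xid1, false);
                if (vote2 == XAResource.XA_OK) r2.commit(xid2, false);
            } catch (XAException veto) {
                // A veto or failure before phase 2 means every branch is rolled back;
                // a real TM would consult its log to resolve failures during phase 2.
                r1.rollback(xid1);
                r2.rollback(xid2);
            } finally {
                xc1.close();
                xc2.close();
            }
        }
    }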

  • Forte Transaction Management & 2PC

    Forte Transaction Management & 2PC
    The main purpose of 2PC in a distributed transaction manager is
    to enable recovery from a failure that occurs during the window
    of transaction commit processing. The Forte transaction manager was built
    with this in mind but only with respect to the "volatile" (or "in memory")
    objects that Forte manages. What this implies is that because Forte stores
    objects in memory and not persistently on disk, the requirement of recovery
    for these objects is significantly reduced (if not eliminated all together).
    Forte follows a distributed 2PC model in that tasks and messages carry
    along with them transaction identification and, during commit processing,
    every distributed participant is polled for its availability to commit
    the transaction. Applications saving persistent data to disk during a
    distributed Forte transaction need to concern themselves with the potential
    for failure during the commit processing window. Forte's prepare phase polls
    each site (confirming a communications link with each distributed participant)
    but no prepare request goes to the database primarily because (in release 1 and
    2 of forte) no database supported a general distributed two-phase commit
    (one could take issue with that in the case of Sybase, but rather than debate
    this point, suffice it to say that the general direction in the industry for
    support of this functionality was through TP monitors -- more on that later).
    Once all sites are ready to commit Forte expects that the commit will
    complete successfully. If at this moment, for example, a participating
    Sybase server terminates (with data not yet committed) while a participating
    Oracle server has already committed its unit of work, then the outcome of
    the distributed transaction is inconsistent - if no one has yet committed
    Forte will still abort the transaction. This "window of inconsistency"
    is documented in the Forte TOOL manual.
    Mission critical applications that require distributed transactions can
    address this window of inconsistency in a number of ways:
    * Utilize a TP monitor such as Encina (see below)
    * Log distributed updates in an auxiliary database table (much like a
    distributed transaction monitor's transaction-state log). This approach has
    been the traditional banking application solution prior to the commercial
    availability of products like Encina, Tuxedo, TopEnd, etc.
    This solution is somewhat complex and is usually not generic enough
    so as not to have to change code every time a new table or database
    site is introduced into the application's data model.
    * Rearrange the data model in order to eliminate the need for distributed
    transactions. This is usually only a temporary solution (with smaller
    numbers of active clients) and cannot be applied to complex legacy systems.
    With the advent of the X/Open distributed transaction architecture (the
    XA Interface) more database vendors have found that by complying with the
    XA interface they can plug their database-specific implementation of
    transaction into a globally managed transaction, with commit and abort
    processing being conducted by a central coordinator. Of course, the
    overall transaction manager coordinating the global transaction must
    itself, persistently record the state of the different distributed
    branches participating in the transaction. A significant portion of
    the functionality provided by products such as Encina, Tuxedo, TopEnd and
    OpenTP1 is to provide exactly this global transaction management.
    Rather than extend the Forte distributed transaction manager with the
    functionality necessary to manage and recover distributed transactions
    that modify data on disk, Forte has chosen to integrate with the emerging
    set of commercial transaction monitors and managers. This decision was
    built into the original design of the Forte transaction model (using XA and
    early Tuxedo white-papers as guidelines):
    * In Forte release 2 an integration with Encina was delivered.
    * In January 1997 a press release announced an integration of
    OpenTP1 with Forte for release 3.
    * The Forte engineering staff is currently investigating integration
    with other transaction management products as well.
    Neil Goodman,
    Forte Development.

    You don't ("manage" a transaction).
    There is nothing really to "manage".
    A transaction is automatically started when you make any changes to data (e.g. fire off a DML statement).
    You simply need to issue a COMMIT or ROLLBACK when needed: a COMMIT at the end of the business transaction and not before (i.e. no committing every n rows), and a ROLLBACK when hitting an exception or business logic error that requires the uncommitted changes to be undone.
    That, in a nutshell, is it. It is that simple.
    Oracle also supports creating savepoints and rolling back only some of the changes made so far in the transaction.
    The only other thing to keep in mind is that a DDL statement in Oracle issues an implicit commit. Firing off a DDL statement will cause any existing uncommitted transaction to be committed.
    Transaction "logic/management" should not be made more complex than this.
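
    A small JDBC illustration of the points above (URL, credentials and table names are placeholders); autocommit is turned off so the code decides where the business transaction ends, and the savepoint shows the partial rollback Oracle supports:

    // Sketch of the commit/rollback/savepoint pattern described above.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Savepoint;
    import java.sql.Statement;

    public class SimpleUnitOfWork {

        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
            con.setAutoCommit(false);                // we decide when the transaction ends
            try {
                Statement st = con.createStatement();
                st.executeUpdate("INSERT INTO orders (id, status) VALUES (1, 'NEW')");

                Savepoint beforeLines = con.setSavepoint();
                try {
                    st.executeUpdate("INSERT INTO order_lines (order_id, qty) VALUES (1, 5)");
                } catch (Exception lineFailed) {
                    con.rollback(beforeLines);       // undo only the order_lines insert
                }

                // remember: any DDL issued here would implicitly commit the transaction
                con.commit();                        // end of the business transaction
            } catch (Exception e) {
                con.rollback();                      // undo everything still uncommitted
                throw e;
            } finally {
                con.close();
            }
        }
    }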

  • Deploy to Application Server Failed in a distributed environment

    Hi All,
    I am trying to configure the new Hyperion version 11.1.2.2 in a distributed environment, but while configuring Calculation Manager the deployment to the application server fails with an error message like "Deploy to Application Server Failed". I am not sure what the issue is or how to fix it. I have tried to read the log files, but I am not able to understand where to look to debug this issue.
    An overview of my Hyperion environment:
    1- I have used Microsoft VMware to build my Hyperion environment.
    2- I have created 1 Windows Server 2003 domain and made 4 clients of that domain (all these systems have Windows Server 2003 installed). I have given a name to each client server, i.e. System A, System B, System C, System D.
    3- I have installed SQL Server 2005 and created databases for all Hyperion components, i.e. Shared Services, Calculation Manager, EPMA, on System A.
    5- I have installed and configured Foundation Services and the WebLogic server on System B. (On this system I have installed and configured Shared Services, WebLogic and Workspace, and I am able to deploy the application server on the same system.)
    6- On System A I am able to complete the installation of Hyperion Performance Management Architect and Calculation Manager and finish all configuration for these two components, but as soon as the system tries to configure anything related to the APPLICATION SERVER, it fails. On the configuration summary page the system shows everything is configured, but APPLICATION SERVER says FAILED written in red letters.
    I have explored the log files and found that the Calculation Manager application server failed to deploy ("Deploy to Application Server failed").
    Since I am not sure where to look or how to debug this issue, I am asking all Hyperion friends to help and guide me, as I have been trying to install this product since last Friday and still have no result....
    I will be really thankful if someone shares his or her wisdom to help me....
    Thank you all in advance.....
    Thanks,
    Safi

    Did you install all the WebLogic web applications on the foundation machine as well as on the machine they are going to be deployed to?
    "On the machine on which you plan to administer the WebLogic Server, you must install all Web applications for all applications you plan to deploy on any machine in the environment. (The WebLogic Administration Server is installed and deployed on the Foundation Services machine.)"
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Hyperion EPM Installation in a distributed environment...

    Hi all,
    I am planning to install Hyperion products in a distributed environment. I have three machines say, Machine A, Machine B, Machine C. All of the machines have Windows 2003 operating system. And only Machine B and Machine C have Application Server installed. Now I give a brief structure as what machine holds which products-
    Machine A:-
    Oracle 11g
    Essbase Server
    Machine B:-
    Foundation Services- Shared Services
    Essbase Studio
    Essbase Integration Services
    Smart Search
    Administration Services
    Provider Services
    Habnet
    Essbase Client
    Machine C:-
    EPMA
    Planning
    Workspace
    I create a separate database for each component; say, to configure Essbase I have the Hyess database, and to configure Planning I have the Hyplan database.
    And I configure Shared Services against the same database, say Hyshs, from each machine. In that case I just point to the previously configured database.
    Machine B and Machine C components are deployed to their respective application servers.
    Can anyone please tell me if this is proper or not? Any kind of modification or recommendation would be appreciated.
    Thanks.

    Hi John,
    thanks for your response. Yes, the Process Manager does not start. I had a look at the Event Viewer, and it shows an error whose description is something like this:
    Service cannot be started. Hyperion.DimensionServer.ProcessManager.Interface.ProcessManagerException: Cannot initialize the Session Manager. ---> System.Exception: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.
    at System.Data.OracleClient.OCI.DetermineClientVersion()
    at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
    at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
    at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
    at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
    at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningOb...
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • EPM 11.1.2.1 Installation in a Distributed Environment

    Hi,
    I have a requirement to install and configure the components below in a distributed environment on Windows 2008 Server; the database is SQL Server 2008.
    - Oracle Hyperion Shared Services 11.1.2.1
    - Oracle Hyperion BI+ Workspace 11.1.2.1
    - Oracle Hyperion Essbase 11.1.2.1
    - Oracle Hyperion Essbase Administration Services 11.1.2.1
    - Oracle Hyperion Analytic Provider Services 11.1.2.1
    - Oracle Hyperion Planning 11.1.2.1
    - Oracle Hyperion BI+ Financial Reporting 11.1.2
    - Oracle Hyperion Web Analysis 11.1.2
    - Oracle Data Integrator
    There are 4 servers, and here is the approach I would like to follow. Please confirm whether this is the correct sequence of installation and configuration:
    Server 1:
    Oracle Hyperion Shared Services 11.1.2.1, Oracle Hyperion BI+ Workspace 11.1.2.1. I believe the WebLogic application server will be installed on this server. I will be configuring both components in one SQL Server database.
    Components to install on Server 2 (I will be configuring the EAS component in one SQL Server database):
    - Oracle Hyperion Essbase 11.1.2.1
    - Oracle Hyperion Essbase Administration Services 11.1.2.1
    - Oracle Hyperion Analytic Provider Services 11.1.2.1
    Components to install on Server 3 (I will be configuring the Planning component in one SQL Server database):
    - Oracle Hyperion Planning 11.1.2.1
    Components to install on Server 4 (I will be configuring Financial Reporting in one schema and Web Analysis in another schema). I am also not sure how to install and configure ODI; please share your experience installing and configuring ODI and how many schemas are required for it:
    - Oracle Hyperion BI+ Financial Reporting 11.1.2
    - Oracle Hyperion Web Analysis 11.1.2
    - Oracle Data Integrator
    Thanks in advance for your ideas.
    Best Regards,
    UB

    The documentation contains lots of useful information; I really think you should study it before even attempting this. Also take note of the information provided in - http://download.oracle.com/docs/cd/E17236_01/epm.1112/epm_install_11121/ch03s03.html
    For ODI, the RCU utility creates the master and work repositories; the documentation takes you through the steps, or just search the web, as there are a number of installation guides if you trust them.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking at DynaTrace, it appears that we're hitting a brick wall in the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager, but we're not sure what they are. With a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?
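
    There isn't a single recommended switch as far as I know, but for reference this is roughly how a JTA transaction manager is usually wired up in Spring 4 when the web application runs on WebSphere, so that Spring delegates to the container's transaction service rather than driving resources itself (the class and bean names here are just the usual ones, not taken from your project):

    // Sketch only: Spring 4 configuration that delegates transaction management to
    // WebSphere's unit-of-work API instead of a resource-local transaction manager.
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.annotation.EnableTransactionManagement;
    import org.springframework.transaction.jta.WebSphereUowTransactionManager;

    @Configuration
    @EnableTransactionManagement
    public class TransactionConfig {

        @Bean
        public PlatformTransactionManager transactionManager() {
            // Uses WebSphere's UOWManager under the covers; JPA/EclipseLink then needs a
            // matching JTA data source and transaction type in persistence.xml.
            return new WebSphereUowTransactionManager();
        }
    }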

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yeah Volker, you are absolutely right: the 10-12 seconds happens when we have not used the transaction for several minutes... It looks like the transactions are moved out of the SAP buffer, or something similar, in a very short time.
    And yes, the ABAP work processes are running in pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
    I would say the performance of the Java part is much better than the ABAP part.
    Should I just remove the ABAP part of the Solution Manager from memory pool 2 and assign the Java/ABAP stack a separate huge memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from daily twice to weekly once on all systems on this box. It is running twice daily right now.
    Should I change it to weekly once on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only around 30% on average. It's i570 hardware, currently running 5 CPUs.
    So do you still think I should change this job from daily twice to weekly once on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on change management in the 3.2 version of Solution Manager, and the response times were much better than with 4.0.
    Let me know guys, and once again thanks a lot for your help and valuable input.
    Abhi

  • EP6 SP3 / WAS 6.40 Java installation in a distributed environment....

    Hi Friends,
    While installing a WAS Java 6.40 instance in a distributed environment (with the Oracle 9.2 DB instance on a separate host and WAS 6.40 on another), the installation stops at the last step (i.e. during registration of SDM of the SCS/CI installation). Kindly suggest what could be the problem.
    The details of the errors are as mentioned below:
    ERROR 2004-10-09 15:04:05
    MUT-02041   SDM call of deploySdaList ends with returncode 4. See output of logfile C:\Program Files\sapinst_instdir\WEBAS_640_J2EE_ONLY\DS\CI\callSdmViaSapinst.log.
    The details of callSdmViaSapinst.log are as follows:
    Oct 9, 2004 2:13:26 PM   Info:
    Oct 9, 2004 2:13:26 PM   Info: ============================================
    Oct 9, 2004 2:13:26 PM   Info: =   Starting to execute command 'deploy'   =
    Oct 9, 2004 2:13:26 PM   Info: ============================================
    Oct 9, 2004 2:13:26 PM   Info: Starting SDM - Software Deployment Manager...
    Oct 9, 2004 2:13:32 PM   Info: tc/SL/SDM/SDM/sap.com/SAP AG/6.3003.00.0000.20031126161800.0000
    Oct 9, 2004 2:13:33 PM   Info: SDM operation mode successfully set to: Standalone
    Oct 9, 2004 2:13:35 PM   Info: Initializing Network Manager (50317)
    Oct 9, 2004 2:13:35 PM   Info: Checking if another SDM is running on port 50318
    Oct 9, 2004 2:13:35 PM   Info: - Starting deployment -
    Oct 9, 2004 2:13:35 PM   Info: Loading selected archives...
    Oct 9, 2004 2:13:36 PM   Info: Loading archive 'I:\CD\51030277_WAS640SP3_Installation\J2EE1\J2EE-ENG\JDD\SYNCLOG.SDA'
    Oct 9, 2004 2:13:36 PM   Info: Selected archives successfully loaded.
    Oct 9, 2004 2:13:36 PM   Info: Error handling strategy: OnErrorStop
    Oct 9, 2004 2:13:36 PM   Info: Update strategy: UpdateLowerVersions
    Oct 9, 2004 2:13:36 PM   Info: Starting to execute deployment action (deploy) for "synclog".
    Oct 9, 2004 2:13:44 PM   Info: Creating connections to database "EPQ".
    Oct 9, 2004 2:13:55 PM   Info: Creating vendor connection to database.
    Oct 9, 2004 2:13:57 PM   Error: Creation of vendor connection failed.
    Original error message is:
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
    Stack trace of original Exception or Error is:
    java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:334)
         at oracle.jdbc.ttc7.TTC7Protocol.handleIOException(TTC7Protocol.java:3695)
         at oracle.jdbc.ttc7.TTC7Protocol.logon(TTC7Protocol.java:352)
         at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:362)
         at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:536)
         at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:328)
         at com.sap.sql.jdbc.NativeConnectionFactory.createNativeConnection(NativeConnectionFactory.java:149)
         at com.sap.sql.connect.OpenSQLDataSourceImpl.createConnection(OpenSQLDataSourceImpl.java:472)
         at com.sap.sql.connect.OpenSQLDataSourceImpl.getConnection(OpenSQLDataSourceImpl.java:253)
         at com.sap.sdm.serverext.servertype.dbsc.extern.DBSCConnectionManager.createSDMVendorConnection(DBSCConnectionManager.java:214)
         at com.sap.sdm.serverext.servertype.dbsc.extern.DBSCConnectionManager.createSDMConnections(DBSCConnectionManager.java:77)
         at com.sap.sdm.serverext.servertype.dbsc.ConnectionManagerDecorator.createSDMConnections(ConnectionManagerDecorator.java:73)
         at com.sap.sdm.serverext.servertype.dbsc.DatabaseTargetSystem.connect(DatabaseTargetSystem.java:140)
         at com.sap.sdm.serverext.servertype.dbsc.DBSCDeploymentActionProcessor.executeAction(DBSCDeploymentActionProcessor.java:115)
         at com.sap.sdm.app.proc.deployment.impl.PhysicalDeploymentActionExecutor.execute(PhysicalDeploymentActionExecutor.java:58)
         at com.sap.sdm.app.proc.deployment.impl.DeploymentActionImpl.execute(DeploymentActionImpl.java:181)
         at com.sap.sdm.app.proc.deployment.controllers.internal.impl.DeploymentExecutorImpl.execute(DeploymentExecutorImpl.java:51)
         at com.sap.sdm.app.proc.deployment.states.eventhandler.ExecuteDeploymentHandler.executeAction(ExecuteDeploymentHandler.java:84)
         at com.sap.sdm.app.proc.deployment.states.eventhandler.ExecuteDeploymentHandler.handleEvent(ExecuteDeploymentHandler.java:61)
         at com.sap.sdm.app.proc.deployment.states.StateBeforeNextDeployment.processEvent(StateBeforeNextDeployment.java:78)
         at com.sap.sdm.app.proc.deployment.states.InstContext.processEventServerSide(InstContext.java:88)
         at com.sap.sdm.app.proc.deployment.states.InstContext.processEvent(InstContext.java:74)
         at com.sap.sdm.app.sequential.deployment.impl.DeployerImpl.doPhysicalDeployment(DeployerImpl.java:121)
         at com.sap.sdm.app.sequential.deployment.impl.DeployerImpl.deploy(DeployerImpl.java:90)
         at com.sap.sdm.control.command.cmds.Deploy.execute(Deploy.java:162)
         at com.sap.sdm.control.command.decorator.AssureStandaloneMode.execute(AssureStandaloneMode.java:54)
         at com.sap.sdm.control.command.decorator.AssureOneRunningSDMOnly.execute(AssureOneRunningSDMOnly.java:61)
         at com.sap.sdm.control.command.decorator.SDMInitializer.execute(SDMInitializer.java:52)
         at com.sap.sdm.control.command.decorator.GlobalParamEvaluator.execute(GlobalParamEvaluator.java:60)
         at com.sap.sdm.control.command.decorator.AbstractLibDirSetter.execute(AbstractLibDirSetter.java:46)
         at com.sap.sdm.control.command.decorator.ExitPostProcessor.execute(ExitPostProcessor.java:48)
         at com.sap.sdm.control.command.decorator.CommandNameLogger.execute(CommandNameLogger.java:49)
         at com.sap.sdm.control.command.decorator.AdditionalLogFileSetter.execute(AdditionalLogFileSetter.java:65)
         at com.sap.sdm.control.command.decorator.AbstractLogDirSetter.execute(AbstractLogDirSetter.java:52)
         at com.sap.sdm.control.command.Command.exec(Command.java:42)
         at SDM.main(SDM.java:21)
    Oct 9, 2004 2:13:57 PM   Error: Execution of deployment action for "synclog" aborted:
    Db connect failed.
    Oct 9, 2004 2:13:57 PM   Error: Deployment NOT successful for synclog
    Oct 9, 2004 2:13:57 PM   Error: - At least one of the Deployments failed -
    Oct 9, 2004 2:13:57 PM   Info: Summarizing the deployment results:
    Oct 9, 2004 2:13:57 PM   Error: Aborted: I:\CD\51030277_WAS640SP3_Installation\J2EE1\J2EE-ENG\JDD\SYNCLOG.SDA
    Oct 9, 2004 2:13:57 PM   Error: Processing error. Return code: 4
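
    The root cause in this log is plain JDBC connectivity: the thin driver on the J2EE host cannot reach the Oracle listener for EPQ. A quick way to test the same connection outside SAPinst/SDM; host, port, SID and credentials below are placeholders, so substitute the values your installation actually uses:

    // Minimal connectivity check against the remote EPQ database with the Oracle thin driver.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CheckEpqConnection {

        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1527:EPQ", "sapuser", "password");
            System.out.println("Connected to: " + con.getMetaData().getDatabaseProductVersion());
            con.close();
        }
    }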

    Hi Rajenda,
    I'm really stuck in the distributed Java installation.
    Can you help me? I'd like to ask you some questions.
    My email address is: [email protected]
    Regards,
    Tibor
    p.s.: I'm using my friend's SAP SDN account (Nándor)

  • 11g TP2 ADF Task Flows and Transaction Management

    I'm wondering how ADF Task Flow Transaction Management works vis-a-vis database sessions and using stored procedure calls in an environment with connection pooling. I haven't written the code yet but am looking for a better understanding of how it works before I try.
    Example:
    I create a bounded adf task flow. I set the "transaction" property to "new-transaction" and the "data control scope" to "isolated".
    As the task flow is running, the user clicks buttons that navigate from page to page in the flow. Each button click posts the page back to the app server. On the app server a backing bean method in each page calls a stored procedure in a database package to modify some values in one or more tables in the database. The procedure does not commit these changes.
    Each time a backing bean makes a stored procedure call, will it be in the same database session? Or will connection pooling possibly return a different database connection and therefore a different database session?
    If the transaction management feature of ADF task flows guarantees me that I will always be in the same database session, then I don't have to write any extra code to make this work. Will it do that or not?

    I don't know if it is documented in the ADF documentation currently available for 11g TP2, but what you ask for is normal transaction management with connection pooling, and I can't imagine it is not implemented in the ADF BC layer the way it is in JPA or another persistence layer.
    A transaction will always be executed in the same session. Normally your web session will stay in the same database session even if you start more than one transaction. You don't have to write any code to manage the connection pooling. It is good practice to customize it at the persistence layer during installation, depending on your infrastructure.
    Take a look into the Fusion Developer's Guide... I'm sure you will find some better explanations about this.
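
    If you do end up calling the procedures yourself, one common way to make sure they run on the task flow's own connection (rather than a second pooled connection obtained in the backing bean) is to put the call in an application module method and go through its DBTransaction; the method, package and procedure names below are made up:

    // Sketch: routing a stored procedure call through the ADF BC transaction so it shares
    // the same database connection/session as the rest of the bounded task flow.
    import java.sql.CallableStatement;
    import java.sql.SQLException;
    import oracle.jbo.JboException;
    import oracle.jbo.server.ApplicationModuleImpl;
    import oracle.jbo.server.DBTransaction;

    public class OrderServiceAMImpl extends ApplicationModuleImpl {

        public void applyOrderChange(long orderId) {
            DBTransaction txn = getDBTransaction();   // the task flow's transaction
            CallableStatement stmt = null;
            try {
                stmt = txn.createCallableStatement("begin order_pkg.apply_change(?); end;", 0);
                stmt.setLong(1, orderId);
                stmt.execute();                       // same session; nothing is committed here
            } catch (SQLException e) {
                throw new JboException(e);
            } finally {
                if (stmt != null) {
                    try { stmt.close(); } catch (SQLException ignore) { /* ignore */ }
                }
            }
        }
    }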

  • Database transaction management in Web services

    Hi,
    I am using Oracle8i and firing some database queries from my web services. I want to do transaction management for them, i.e. when one of the queries fails, I want to roll back. But when I write my own transaction management, it gives me an error:
    java.sql.SQLException: Cannot call Connection.commit in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
    Can anyone please help me with how to perform database transaction management in web services?
    Thanking you in advance.
    Prashant
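
    That error normally means the connection comes from a transactional data source that is already enlisted in a container-managed (global) transaction, so commit and rollback have to go through the transaction manager instead of java.sql.Connection. A minimal sketch of that pattern, with a made-up data source name:

    // Sketch: driving the transaction through JTA instead of Connection.commit().
    import java.sql.Connection;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.Status;
    import javax.transaction.UserTransaction;

    public class OrderPersistence {

        public void saveOrder() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersDS");

            utx.begin();
            Connection con = ds.getConnection();      // enlisted in the JTA transaction
            try {
                Statement st = con.createStatement();
                st.executeUpdate("INSERT INTO orders (id) VALUES (1)");
                st.executeUpdate("INSERT INTO order_audit (order_id) VALUES (1)");
                st.close();
                utx.commit();                         // the transaction manager commits the work
            } catch (Exception e) {
                if (utx.getStatus() == Status.STATUS_ACTIVE) {
                    utx.rollback();                   // both inserts are undone together
                }
                throw e;
            } finally {
                con.close();
            }
        }
    }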

    Unfortunately, to manage transactions over web services there is no viable solution available on the market. All implementations come with restrictions, e.g. Metro works only with EJBs on GlassFish, JBossTS works on JBoss but not with JAX-WS, and Atomikos supports only Axis as of now.
    1. See the explanation above.
    2. Yes, it can be, but the conditions mentioned above apply :-)
    3. [www.oasis-open.org/committees/ws-tx/|www.oasis-open.org/committees/ws-tx/]
    4. Unfortunately, as of now I do not see an easy way around this problem.
