Revision (transaction) management. Am I doing OK?

I need to track audit information about all tables within a data schema. App users see these tables via grants to the schema each app user maps to (no app user is supposed to connect directly as the data schema). To achieve this, I have a Revision table in the data schema with a sequence-based primary key; two columns per data table, called insert_rev and update_rev, which are FKs to the Revision table and are populated automatically via triggers; and a definer-rights PL/SQL package that app users must execute to call new_revision and end_revision. The triggers on the data tables raise an application error if an insert or delete is attempted outside the "scope" of a revision (i.e. not between a new_revision and an end_revision call).
This is working OK, except that it requires apps to call new_revision/end_revision properly, i.e. it requires apps to be well behaved (a big if IMHO, especially since we eventually plan to allow 3rd-party apps to access our schema). Also, although revisions are intended to align with transaction boundaries, nothing enforces this.
Is there a way to have some kind of ON COMMIT and ON ROLLBACK trigger that could automatically call end_revision?
Is there any way to relate our ad-hoc revisions to real transaction IDs from the DB instance (and if we did, would these IDs become meaningless if the data is moved to another instance, for example)? I've recently discovered V$TRANSACTION, and was wondering if/how this view could be useful to me.
I'm quite new to Oracle and DBs in general, so any advice on a better design for tracking application data revisions, in a way similar to an SCM system like Subversion, would be appreciated. What I have designed so far works, but on second thought I may be re-inventing the wheel here, and there might be better ways to do this.
Thanks for any insights. --DD
PS: I'm also wondering whether row-level triggers for all inserts/updates of all data tables might be a performance killer.
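(For illustration, here is a minimal JDBC sketch of how a well-behaved app is expected to bracket its work. Everything except the new_revision/end_revision idea is invented for the example: the package name revision_pkg, the OUT parameter on new_revision, the table some_data_table and the connection details are all assumptions.)

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RevisionScopeDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; the app user connects to its own account,
        // not to the data schema itself.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/appsvc", "app_user", "secret")) {
            con.setAutoCommit(false);
            try {
                // Open the revision "scope" (assuming new_revision hands back the new id via an OUT parameter).
                long revId;
                try (CallableStatement begin = con.prepareCall("{call revision_pkg.new_revision(?)}")) {
                    begin.registerOutParameter(1, java.sql.Types.NUMERIC);
                    begin.execute();
                    revId = begin.getLong(1);
                }
                System.out.println("Working in revision " + revId);
                // Any DML inside the scope gets stamped by the triggers (insert_rev/update_rev).
                try (PreparedStatement ins = con.prepareStatement(
                        "insert into some_data_table (some_col) values (?)")) {
                    ins.setString(1, "value");
                    ins.executeUpdate();
                }
                // Close the revision scope...
                try (CallableStatement end = con.prepareCall("{call revision_pkg.end_revision}")) {
                    end.execute();
                }
                // ...and commit. Nothing in the design forces these last two steps to stay
                // together, which is exactly the enforcement gap described above.
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}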

When I think of Oracle Auditing, I think of DBA-level auditing.
I want application-visible metadata about data changes, and my impression was that Oracle Auditing is not meant to be visible to an end-user client app, i.e. it is aimed at DBAs rather than at exposing auditing info to our apps. It could be that I'm wrong though.
How would a non-privileged client app view/access the auditing info? Can the auditing info be restricted to a given schema? Thanks, --DD

Similar Messages

  • Does the transaction manager also release the connections?

    Hi All,
    I have a doubt regarding the release of connections in transaction-handling scenarios in EJB.
    Let us assume I have an EJB method with the transaction attribute Requires New (i.e. the EJB method uses a container-managed transaction).
    Within that method two different databases are accessed, connections are created and the databases are updated, but the connections are not released.
    The code goes similar to the one given below -
    public void beanMethod1() throws Exception {
        InitialContext cntx = new InitialContext();
        javax.sql.DataSource ds1 = (javax.sql.DataSource) cntx.lookup("dataSourceName1");
        javax.sql.DataSource ds2 = (javax.sql.DataSource) cntx.lookup("dataSourceName2");
        java.sql.Connection conn1 = ds1.getConnection();
        conn1.setAutoCommit(false);
        PreparedStatement pst1 = conn1.prepareStatement("Query1");
        pst1.executeUpdate();
        java.sql.Connection conn2 = ds2.getConnection();
        conn2.setAutoCommit(false);
        PreparedStatement pst2 = conn2.prepareStatement("Query2");
        pst2.executeUpdate();
        // note: the connections are deliberately not closed here - that is the question
    }
    Now in this context my doubt is - will the transaction manager, along with handling commit/rollback, also release the connections (once the commit/rollback is over)? Or does releasing the connections need to be handled in the bean method?
    If releasing the connections has to be handled in the bean method, then how does the transaction manager execute a commit/rollback on a released connection?
    The same doubt extends to bean-managed transactions, where the transaction boundary is demarcated using the javax.transaction.UserTransaction object's begin(), commit() and rollback() methods.
    It would be a real help if anyone could throw some light on this.
    Thanks in advance,
    Sourav

    Hi,
    Your code needs to release (i.e., close) the connections it uses; this is outside the TM's responsibility.
    The commit or rollback is not a problem, because the corresponding XAResource (which is the transaction manager's handle to your connection) can be used even after your connection has been closed in the application code. That is the trick with XA, and it is what allows the whole mechanism of connection pooling and DataSources to work properly.
    Hope that helps,
    Guy
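    For what it's worth, a minimal sketch of the pattern Guy describes, reusing the hypothetical dataSourceName1 lookup from the question (the SQL string is made up; in a container-managed transaction the bean just closes its handles and the TM completes the work through the XAResource):

    public void beanMethod1() throws Exception {
        InitialContext cntx = new InitialContext();
        javax.sql.DataSource ds1 = (javax.sql.DataSource) cntx.lookup("dataSourceName1");
        java.sql.Connection conn1 = null;
        java.sql.PreparedStatement pst1 = null;
        try {
            conn1 = ds1.getConnection();
            pst1 = conn1.prepareStatement("update t1 set c1 = ? where id = 1"); // illustrative SQL
            pst1.setInt(1, 42);
            pst1.executeUpdate();
        } finally {
            // Closing returns the logical connection to the pool; it does not end the
            // XA transaction, which the transaction manager still commits or rolls back.
            try { if (pst1 != null) pst1.close(); } catch (Exception ignore) {}
            try { if (conn1 != null) conn1.close(); } catch (Exception ignore) {}
        }
    }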

  • Does the ADF support annotation transaction management?

    hi there,
    does ADF support annotation-based transaction management, or something along those lines? How?

    Transactions are maintained in the Application Module and at the task-flow level, so yes, it does support transaction management.
    If by annotations you mean the Java language feature, then yes, it supports that too.

  • Transaction management in stateless session beans.

    Hi all,
    I am using EJB 1.1.
    I have a stateless session bean that has two methods, A and B, neither of which involves any database interaction (no inserting/updating/deleting of data in the database).
    The process flow is such that the client always calls A first, followed by a call to B.
    I have the Transaction attribute set as TX_REQUIRED at the whole bean level.
    Now my question is as follows:
    Since it is a stateless bean, ejbCreate() is called for every method's invocation.
    So does it mean that a new transaction is started for every method invocation?
    Also, since a transaction ends with a commit/rollback, the transaction associated with method A/B will never get completed, as there is no commit/rollback in the method implementation.
    So how is this transaction ended?
    Any more details about the transaction management in stateless session beans is highly appreciated.
    Any input at the earliest is highly appreciated.
    Thanks in advance.

    > Since it is a stateless bean, ejbCreate() is called for every method's invocation.
    For a stateless session bean, create() is not delegated to the instance.
    > So does it mean that a new transaction is started for every method invocation?
    This depends upon the Tx attribute and the sequence of calls. Since you have given TX_REQUIRED, if you call any method and there is no Tx context associated with the client, then a new TX will be started by the container; otherwise it will execute in the same TX context as the calling client. Note that the client can be a JSP or another EJB method.
    > The transaction associated with method A/B will never get completed as there is no commit/rollback involved in the method implementation. So how is this transaction ended?
    If you are using container-managed TX, then transaction handling (starting, ending, etc.) is handled by the container. You need not worry about that.
    > Any more details about transaction management in stateless session beans is highly appreciated.
    Some time back I read an article on how transaction management is done by the container, on the IBM site. I don't have the URL, but you can search the site.
    HTH
    -Ashwani

  • Configuration for Transaction Management

              Hi,
              I am working with Weblogic Server SP1. I am facing a problem in configuring for
              Transaction Management.
              I have a session EJB say SEJB and two entity EJB say EEJB1 and EEJB2. EEJB1 is
              for the parent table
              and EEJB2 is for the child table.
              I have two records in the database REC1 and REC2.
              REC2 has dependencies and cannot be deleted, while REC1 can be deleted.
              In weblogic-ejb-jar.xml I have configured as follows:
              <weblogic-enterprise-bean>
              <ejb-name>SEJB</ejb-name>
              <stateless-session-descriptor>
              <pool>
              <max-beans-in-free-pool>300</max-beans-in-free-pool>
              <initial-beans-in-free-pool>150</initial-beans-in-free-pool>
              </pool>
              </stateless-session-descriptor>
              <reference-descriptor>
                   <ejb-reference-description>
                   <ejb-ref-name>EEJB</ejb-ref-name>
                   <jndi-name>EEJBean</jndi-name>
                   </ejb-reference-description>
                   </reference-descriptor>
              <jndi-name>SEJBn</jndi-name>
              </weblogic-enterprise-bean>
              Further, in ejb-jar.xml I have set up the <trans-attribute> as RequiresNew for
              Session Bean while Supports
              for the EEJB. Something like this:...
              <container-transaction>
              <method>
              <ejb-name>SEJB</ejb-name>
              <method-intf>Remote</method-intf>
              <method-name>*</method-name>
              </method>
              <trans-attribute>RequiresNew</trans-attribute>
              </container-transaction>
              In spite of this setting, when, through the client, I am selecting the two records
              REC1 and REC2 at the same
              time and deleting them, REC1 gets deleted while REC2 does not and gives a TransactionRollbackException.
              Ideally, since both are part of a single transaction, both should have been rolled
              back.
              Please suggest if I am missing on some kind of configuration parameter or setting.
              I'll be more than
              happy to provide some more details to get the problem solved.
              I can also be reached at [email protected]
              Thanks in advance,
              Regards,
              Rishi
              


  • @TransactionAttribute annotation being ignored by Transaction Manager

    I am currently running jboss-4.0.4GA. I believe I must have something configured incorrectly, or I misunderstand the transaction management performed by the container. Though I have my datasource declared as local-tx, which I believe allows transactions, it appears that a call to a remote function in a stateless session bean is executed entirely in one single transaction, regardless of the @TransactionAttribute tags.
    In my example, I call a function with @TransactionAttribute = REQUIRED. This is the OUTER function. This function inserts a record into the cust table of our database. Then this function calls a second function with @TransactionAttribute = REQUIRES_NEW. This is the INNER function.
    This function should, according to the spec, start up a new transaction independent of the first function. However, the INNER function can select the (uncommitted) cust record from the OUTER function. The INNER function then proceeds to add a cust record of its own to the database.
    Control then returns to the OUTER function, which can successfully read the cust record inserted by the INNER function, which is to be expected because the INNER function should have had its transaction committed. However, my program then throws a RuntimeException in order to force a rollback, and this rollback removes both the cust record inserted by the OUTER function and the cust record inserted by the INNER function.
    To further test my belief that the transaction manager is ignoring my @TransactionAttribute annotations, I changed the TransactionAttributeType of the INNER function to NEVER. According to the spec, the code should throw an exception when this function is called within a managed transaction. However, when I run the code I get the exact same behavior as when the INNER function is REQUIRES_NEW.
    I would greatly appreciate if anyone has any insight into what I am doing wrong. Thanks!
    Client Program that Invokes TestTransImpl Stateless Session Bean
    public class Client {
        public static void main(String[] args) throws Exception {
            try {
                Properties env = new Properties();
                env.setProperty(Context.SECURITY_PRINCIPAL, "guest");
                env.setProperty(Context.SECURITY_CREDENTIALS, "guest123");
                env.setProperty(Context.PROVIDER_URL, "jnp://localhost:1099");
                env.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
                env.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.security.jndi.JndiLoginInitialContextFactory");
                InitialContext ctx = new InitialContext(env);
                TestTransRemote ttr = (TestTransRemote) ctx.lookup("TestTransImpl/remote");
                ttr.testTransactions();
            } catch (Exception e) {
                e.printStackTrace();
                throw e;
            }
        }
    }
    Remote Interface for TestTransImpl Stateless Session Bean
    public interface TestTransRemote extends Serializable {
        public void testTransactions() throws Exception;
    }
    TestTransImpl Stateless Session Bean
    @Stateless
    @Remote(TestTransRemote.class)
    public class TestTransImpl implements TestTransRemote {
        private static final long serialVersionUID = 1L;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void testTransactions() throws Exception {
            java.sql.Connection conn = getConnection();
            java.sql.PreparedStatement ps;
            ps = conn.prepareCall("insert into cust(loc,cust_no) values ('001',20)");
            ps.execute();
            System.out.println("OUTSIDE FUNCTION - Customer 20 created");
            requiredNewFunction();
            ps = conn.prepareCall("Select cust_no from cust where loc = '001' and cust_no = 24");
            java.sql.ResultSet results = ps.executeQuery();
            results.next();
            System.out.println("OUTSIDE FUNCTION - Customer Read - Cust No = " + results.getLong("cust_no"));
            throw new RuntimeException();
        }

        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        private void requiredNewFunction() throws Exception {
            java.sql.Connection conn = getConnection();
            java.sql.PreparedStatement ps;
            ps = conn.prepareCall("Select cust_no from cust where loc = '001' and cust_no = 20");
            java.sql.ResultSet results = ps.executeQuery();
            results.next();
            System.out.println("INSIDE FUNCTION - Customer Read - Cust No = " + results.getLong("cust_no"));
            ps = conn.prepareCall("insert into cust(loc,cust_no) values ('001',24)");
            ps.execute();
            System.out.println("INSIDE FUNCTION - Customer 24 created");
        }

        private java.sql.Connection getConnection() throws Exception {
            javax.naming.InitialContext ic = new javax.naming.InitialContext();
            javax.sql.DataSource ds = (javax.sql.DataSource) ic.lookup("java:MyOracleDS");
            java.sql.Connection conn = ds.getConnection();
            return conn;
        }
    }
    Datasource XML File
    <?xml version="1.0" encoding="UTF-8"?>
    <datasources>
        <local-tx-datasource>
            <jndi-name>MyOracleDS</jndi-name>
            <connection-url>jdbc:oracle:thin:XXXXX(DB Host):1521:XXXXX(DB Sid)</connection-url>
            <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
            <user-name>XXXXX(username)</user-name>
            <password>XXXXX(password)</password>
            <min-pool-size>5</min-pool-size>
            <max-pool-size>100</max-pool-size>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
                <type-mapping>Oracle10g</type-mapping>
            </metadata>
        </local-tx-datasource>
    </datasources>
    Program Output
    08:43:41,093 INFO  [STDOUT] OUTSIDE FUNCTION - Customer 20 created
    08:43:41,125 INFO  [STDOUT] INSIDE FUNCTION - Customer Read - Cust No = 20
    08:43:41,140 INFO  [STDOUT] INSIDE FUNCTION - Customer 24 created
    08:43:41,140 INFO  [STDOUT] OUTSIDE FUNCTION - Customer Read - Cust No = 24

    All ejb invocation behavior, including authorization, container-managed transactions, etc. only applies when the call is made through one of the appropriate ejb client objects. If
    TestTransImpl.testTransactions() directly invokes requiredNewFunction() it's just a normal java
    method call -- the ejb container has no idea it's happening and is not interposing. If you want
    the full ejb invocation behavior when you invoke requiredNewFunction() you'll need to
    make sure requiredNewFunction is part of a business interface, is public, and is invoked through
    the corresponding ejb reference :
    @Resource private SessionContext ctx;
    public void testTransactions() throws Exception {
    TestTransRemote testTrans = ctx.getBusinessObject(TestTransRemote.class);
    testTrans.requiredNewFunction();
    }

  • 11g TP2 ADF Task Flows and Transaction Management

    I'm wondering how ADF Task Flow Transaction Management works vis-a-vis database sessions and using stored procedure calls in an environment with connection pooling. I haven't written the code yet but am looking for a better understanding of how it works before I try.
    Example:
    I create a bounded adf task flow. I set the "transaction" property to "new-transaction" and the "data control scope" to "isolated".
    As the task flow is running, the user clicks buttons that navigate from page to page in the flow. Each button click posts the page back to the app server. On the app server a backing bean method in each page calls a stored procedure in a database package to modify some values in one or more tables in the database. The procedure does not commit these changes.
    Each time a backing bean makes a stored procedure call will it be in the same database session? Or will connection pooling possibly return a different database connection and therefore a different database session?
    If the transaction management feature of the adf task flows guarantees me that I will always be in the same database session then I don't have to write any extra code to make this work. Will it do that or not?

    I don't know if it is documented in the ADF documentation currently available for 11g TP2, but what you ask for is normal transaction management with connection pooling, and I can't imagine it is not implemented in the ADF BC layer the way it is in JPA or other persistence layers.
    A transaction will always be executed in the same database session. Normally your web session will stay bound to the same database session even if you start more than one transaction. You don't have to write any code to manage the connection pooling; it is good practice to customize it at the persistence layer during installation, depending on your infrastructure.
    Take a look at the Fusion Developer's Guide ... I'm sure you will find a better explanation of this there.

  • Java user-defined transaction management not working correctly???

    Hi everyone,
    I have encountered a problem when using Java user-defined transaction management in my session bean. It threw an exception but I could not work out what it means. Could anyone comment on this? Thanks.
    This BrokerBean is a stateless session bean calling other entity beans to perform some simple operations. There are 2 Cloudscape databases in use: Invoices (EB) uses InvoiceDB and all the other EBs use StockDB.
    If I comment out the user-defined transaction management code, then everything works fine. Or if I comment out the Invoices EB code, it is fine as well. It seems to me that there is something wrong in the transaction management when dealing with distributed databases.
    --------------- source code ----------------------
    public void CreateInvoices(int sub_accno) {
        try {
            utx = context.getUserTransaction();
            utx.begin();
            SubAcc subAcc = subAccHome.findByPrimaryKey(new SubAccPK(sub_accno));
            String sub_name = subAcc.getSubName();
            String sub_address = subAcc.getSubAddress();
            Collection c = stockTransHome.findBySubAccno(sub_accno);
            Iterator i = c.iterator();
            ArrayList a = new ArrayList();
            while (i.hasNext()) {
                StockTrans stockTrans = (StockTrans) i.next();
                int trans_id = stockTrans.getTransID();
                String tran_type = stockTrans.getTranType();
                int stock_id = stockTrans.getStockID();
                float price = stockTrans.getPrice();
                Invoices invoices = invoicesHome.create(sub_accno, sub_name, sub_address, trans_id, stock_id, tran_type, price);
                stockTrans = stockTransHome.findByPrimaryKey(new StockTransPK(trans_id));
                stockTrans.remove();
            }
            utx.commit();
            utx = null;
        } catch (Exception e) {
            if (utx != null) {
                try {
                    utx.rollback();
                    utx = null;
                } catch (Exception ex) {}
            }
            // e.printStackTrace();
            throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
        }
    }
    --------------- exception ----------------------
    Initiating login ...
    Enter Username:
    Enter Password:
    Binding name:`java:comp/env/ejb/BrokerSB`
    EJB test succeed
    Test BuyStock!
    Test BuyStock!
    Test BuyStock!
    Test BuyStock!
    Test SellStock!
    Test SellStock!
    Caught an exception.
    java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
    java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
    org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.corba.ee.internal.iiop.ShutdownUtilDelegate.mapSystemException(ShutdownUtilDelegate.java:64)
        at javax.rmi.CORBA.Util.mapSystemException(Util.java:65)
        at BrokerStub.CreateInvoices(Unknown Source)
        at Client.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at com.sun.enterprise.util.Utility.invokeApplicationMain(Utility.java:229)
        at com.sun.enterprise.appclient.Main.main(Main.java:155)
    Caused by: java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
    org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.enterprise.iiop.POAProtocolMgr.mapException(POAProtocolMgr.java:389)
        at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:431)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:265)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)

    Three things:
    first, maybe you should think of putting utx.begin() just before the invoicesHome.create() call and utx.commit() just after the stockTrans.remove() call. It won't solve the current problem, but it will help performance once the problem is solved.
    second, in the code as originally pasted, utx.commit() appeared to sit outside the try block - how was that even compiling?
    third, try doing a System.out.println (SOP) call before and after the invoicesHome.create() call and see where the problem actually lies.
    let us know...
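    For what it's worth, a rough sketch of what the first suggestion could look like, as a replacement for the loop body of the posted CreateInvoices method (same homes and context object as above; note this turns the work into one transaction per stock transaction, which is a behaviour change and only makes sense if partial completion is acceptable):

    while (i.hasNext()) {
        StockTrans stockTrans = (StockTrans) i.next();
        utx = context.getUserTransaction();
        utx.begin();
        try {
            // Create the invoice and remove the stock transaction in one small unit of work.
            invoicesHome.create(sub_accno, sub_name, sub_address,
                    stockTrans.getTransID(), stockTrans.getStockID(),
                    stockTrans.getTranType(), stockTrans.getPrice());
            stockTrans.remove();
            utx.commit();
        } catch (Exception e) {
            try { utx.rollback(); } catch (Exception ignore) {}
            throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
        }
    }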
    Hi SteveW2,
    Thanks for being so helpful. Here are my replies:
    > Can I just ask why you're not using container-managed transactions?
    The reason why I didn't use container-managed transactions is because I don't really know how to do that. I am more familiar with this user-defined transaction handling. I have attempted to implement the same method in an entity bean and just let the container manage the rollback itself. The same exception was thrown when running the client.
    > Also, the transaction behaviour is likely to relate to the app server you're using - which is it?
    What do you mean by the app server? I am using J2EE 1.3.1, if that is what you meant.
    > Finally, if your code has a problem rolling back, and throws an exception, you discard your exception, thereby losing useful information.
    I have tried to print the exception stack as well, but it is the same as just printing the general exception.
    This problem is very strange, because if I comment out the transaction management thing, then everything works fine. Or if I am only working with 1 single database, with this user-defined transaction handling, everything works fine as well.
    Here is the error log from the J2EE server if you are interested.
    ------------ error log ---------------
    javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
        at InvoicesBean.ejbCreate(Unknown Source)
        at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:31)
        at InvoicesHomeStub.create(Unknown Source)
        at BrokerBean.CreateInvoices(Unknown Source)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
    javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
        at com.sun.ejb.containers.BaseContainer.checkExceptionClientTx(BaseContainer.java:1434)
        at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:1294)
        at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:403)
        at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:37)
        at InvoicesHomeStub.create(Unknown Source)
        at BrokerBean.CreateInvoices(Unknown Source)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
    What is "connection previously closed, open another
    connection"? This might be the cause of the
    exception.
    I'll keep trying till I solve the problem.
    Thanks,
    Sasuke

  • Transaction Management in JMS adapter

    Hi,
    I am creating a BPEL process for inserting data into a JMS queue (i.e. producing messages on the queue) with transaction management, and came across the property "isTransacted". Can anyone please help me with the functionality of this property, and with how to handle transactions with a JMS queue?
    Thanks.

    First of all the JMS specification does not define the behavior for multiple clients accessing a queue. Having said that, most vendors do offer this. Since it is not defined in the specification mileage can vary from provider to provider. However, most vendors do the obvious/intuitive thing.
    When a client receives a message from a queue it is not available to anyone else until the final disposition of that message can be determined. Acknowledge and commit permanently remove the message from the queue. Recover and rollback put the message back in the queue. The death of the client results in a recover or rollback. When a message goes back to the queue, it is made available to all clients who are waiting (possibly even the same client (except in the case of the dead client)).
    With respect to your questions, the intuitive behaviors are:
    1) message goes back and someone else has a chance to get it
    2) If you have a transacted session then the acknowledge is simply ignored, and since you didn't call commit, the message is still considered outstanding (and no one else can get it). If the session is not transacted then the acknowledge causes the message to be permanently removed from the destination (and no one else can get it).
    3) if the session is transacted then the commit permanently removes the message from the destination (and no one else can get it). if the session is not transacted then the commit must throw an exception and since you didn't call acknowledge, the message is still considered outstanding (and no one else can get it).
    Client2 could only receive the same message if client1 rolled back or recovered the message (which should occur if client1 dies).
    _sjz.
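    To make the transacted-session behaviour concrete, here is a small standalone JMS producer sketch (the JNDI names jms/ConnectionFactory and jms/MyQueue are assumptions for the example; the commit/rollback semantics are the ones described above):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class TransactedProducer {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) jndi.lookup("jms/MyQueue");
            Connection connection = cf.createConnection();
            // First argument true = transacted session; the acknowledge mode is then ignored.
            Session session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            try {
                producer.send(session.createTextMessage("payload"));
                session.commit();    // only now does the message become visible on the queue
            } catch (JMSException e) {
                session.rollback();  // the sent-but-uncommitted message is discarded
                throw e;
            } finally {
                connection.close();
            }
        }
    }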

  • RE: Re[2]: Transaction Management

    Hi,
    Thanks for the reply. But my situation may require more than one DBSession per Persistence Manager because, within the same domain problem, my persistent objects are spread across multiple databases (due to some legacy and packaged systems).
    On the other hand, I think your model would be helpful in some cases. Following is what I think you are doing in your model in order to make use of this Persistence Manager:
    1. The client program retrieves a business object from the persistence manager;
    2. When the persistence manager returns the requested object, it saves a pointer to the business object;
    3. When the client calls save()/delete() on the business object, the request is then routed to the persistence manager via that pointer.
    Please correct me if I'm wrong.
    As mentioned, I have another design on this issue. When my document is ready, would you like to take a look? I just want to invite more opinions on this "framework", on which I've spent at least half a year of work.
    Best regards,
    Peter Sham.
    -----Original Message-----
    From: Dimitar Gospodinov [SMTP:[email protected]]
    Sent: Monday, May 17, 1999 5:41 PM
    To: Peter Sham (HTHK - Assistant Manager - Software Development,
    IITB)
    Cc: Vanessa Rumball; [email protected]
    Subject: Re[2]: Transaction Management
    Hello Peter,
    Well, we are using a slightly different approach. We have a SO (we call it Persistence Manager) and a DBSession SO (user visible) in one partition. This partition is load balanced.
    All database activity is in the Persistence Manager - in one partition that uses one DBSession. In this approach we do not have the possibility of deadlocks between different DBSessions, because for example an activity that involves several tables will be executed within one DBSession. And since this partition is load balanced, the access to the database will not be blocked.
    Hope this makes sense.
    Best regards,
    Dimitar mailto:[email protected]
    Monday, May 17, 1999, 1:55:35 PM, you wrote:
    PSHAMSDI> Hi,
    PSHAMSDI> I would like to add to the question on the concern about sharing a DBSession. The fact that a DBSession is shared and is blocked from other threads within a transaction makes it a candidate for "dead-lock". That's why in my application, up until now, I dare not load-balance a DBSession or involve multiple DBSessions in an update transaction. I have experienced that when multiple DBSessions are involved in an update transaction, there is a great chance that the DBSessions get dead-locked by different threads.
    PSHAMSDI> The way that we do it now is very dumb and hard to maintain. We pass the DBSession along for all the calls involved in an update transaction. However, if someone forgets to follow the convention, the application will get dead-locked and I have to use dumb status on the partitions to trace back the invoking method. It is horrible and with no guarantee of finding the source of the problem.
    PSHAMSDI> I have figured out a more extensive architecture to solve this problem. But before I fully implement my design, I would like to know if there is already an elegant solution out there.
    PSHAMSDI> Thanks for any help in advance.
    PSHAMSDI> Best regards,
    PSHAMSDI> Peter Sham.
    PSHAMSDI> -----Original Message-----
    PSHAMSDI> From: Dimitar Gospodinov [SMTP:[email protected]]
    PSHAMSDI> Sent: Monday, May 17, 1999 2:47 PM
    PSHAMSDI> To: Vanessa Rumball
    PSHAMSDI> Cc: [email protected]
    PSHAMSDI> Subject: Re: Transaction Management
    PSHAMSDI> Hello Vanessa,
    PSHAMSDI> You should use dependent transactions - the "begin transaction" statement is equal to the "begin dependent transaction" statement. So you can have several methods for saving the data in different tables - all these methods contain a "begin transaction .. end transaction" construction.
    PSHAMSDI> Then you can have one "wrapper" method that calls the above methods. This method also contains a "begin transaction .. end transaction" construction.
    PSHAMSDI> Now you have dependent transactions - if some of the transactions fail, the whole bunch of transactions will fail.
    PSHAMSDI> If you want to catch the deadlocks you may register for the AbortException exception and re-try your outermost transaction.
    PSHAMSDI> Hope this helps.
    PSHAMSDI> Best regards,
    PSHAMSDI> Dimitar mailto:[email protected]
    PSHAMSDI> Monday, May 17, 1999, 6:08:17 AM, you wrote:
    PSHAMSDI> VR> Hi there,
    PSHAMSDI> VR> I have a number of table manager classes, each of which saves data to their respective table in the database. With these tables it is likely that they may be locked by other users on occasion, so I have put in exception handlers on the managers to cater for this. The user has the option to keep trying, or give up and try again later.
    PSHAMSDI> VR> Now sometimes three or more tables may need to be updated together, and if one fails to commit then no data for the three tables should be saved to the database. In such a case the 'save' methods of the three or more table managers are called from a single method within one 'dependent' Forte transaction. Before calling the save methods, I call another method which starts a SQL 'read write wait 10' transaction reserving each table needed within the transaction.
    PSHAMSDI> VR> I have read through the Transactions chapter of the Forte Accessing Databases manual and see examples where a number of SQL statements are included within a transaction and each one commits only if all are successful at the end of the transaction. I assumed my approach would be similar, especially when using the 'begin dependent transaction' statement. But if the application gets around to saving the second table, which is locked, and the user decides not to commit, the first table is still updated in the database.
    PSHAMSDI> VR> Is it because my SQL statements are in separate methods and are committed when the method is complete? Or am I missing something somewhere?
    PSHAMSDI> VR> Any help greatly appreciated.
    PSHAMSDI> VR> Thank you.
    PSHAMSDI> VR> Vanessa.


  • What's the function of the null transactional manager server?

    how to use the null transactional manager server?

    If you need to call a service while in a transaction, but that service
    does not use a resource manager (database), then it should be in a group
    with the null TMS.
    If it is a leaf service, you could call it with TPNOTRAN, but if it has
    to call other services that are part of the transaction, then it too
    must be part of the transaction.
    The first service called by a client that initiated a transaction will
    become the coordinator, so it needs to have a TMS even if it is not
    using a database.
         Scott Orshan
         BEA Systems
    wangming wrote:
    >
    how to use the null transactional manager server?

  • Receiving Transaction Manager Errors out

    I am getting a message, while doing an Inter-Org transfer, that the Receiving Transaction Manager has errored out. There is no error code displayed or anything similar.
    Any thoughts on where I can find the log/details of this error?
    Thx

    Please see the solution in (PODAMGR And RCVOLTM Inactive And Cannot Be Started (Doc ID 726158.1)).
    Are there any errors in the database log file?
    If the above didn't help, please run cmclean.sql and ccm.sql scripts.
    Concurrent Processing - CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables (Doc ID 134007.1)
    Concurrent Processing - CCM.sql Diagnostic Script to Diagnose Common Concurrent Manager Issues (Doc ID 171855.1)
    Thanks,
    Hussein

  • Transaction Management - OIM API

    We wanted to know how to handle transactions from OIMClient when we make OIM API calls.
    eg:
    from a java client, we invoke create organization and provision resource to that organization.
    i.e we end up calling two OIM api calls
    1)
        organizationManager.create(organizationObj);
    2)
    tcOrganizationOperationsIntf.provisionObject(orgKey,resourceKey);
    Now, if the 2nd call fails, then the transaction should be rolled back and the organization creation should be rolled back as well; we want this to be achieved as a single unit of work.
    How can this be achieved?
    How do we control the transaction from OIMClient / the API?
    public String createOrganization(OrganizationVO ovo) {
      String result = "";
      OrganizationManager omgr = null; // OIMClient API
      Organization org = null; // OIMClient API
      try {
        omgr = ULMServiceLocator.getInstance().getOrganizationManager();
        org = new Organization();
        org.setAttribute("Organization Name", ovo.getOrgName());
        org.setAttribute("Organization Customer Type", ovo.getOrgType());
        result = omgr.create(org);
        tcUtilityFactory ioutilityFactory = ULMServiceLocator.getInstance().getcUtilityFactory();
        //TODO
        tcOrganizationOperationsIntf utilityFactory1 = (tcOrganizationOperationsIntf) ioutilityFactory
            .getUtility("Thor.API.Operations.tcOrganizationOperationsIntf");
        long l1 = utilityFactory1.provisionObject(Long.parseLong(result), 123l);
      } catch (oracle.jrf.UnknownPlatformException e) {
        e.printStackTrace();
      } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
      }
      return result;
    }

    Hi Abhay,
    If there is another way to add a record to the Main table and a Lookup/Qualified/Hierarchy table simultaneously, then there is no need for a transaction.
    There are no direct methods currently (as of MDM 5.5 SP3) available in the MDM API; you have to build your own logic to implement this.
    Regarding transaction management, I guess you are talking about two-phase commit scenarios. I would say you can achieve this using EJBs in which you write the business logic (in this case, the MDM API code).
    For example, adding a record to the Main table with, say, 2 fields:
    1. Free-text field: this is the straightforward case; use an A2iField object and assign some value to it.
    2. Lookup field: first get the record id for the value you are trying to add from the lookup table. If the lookup table does not contain the value, it returns zero or some negative value (of which I am not sure). Based on the return value, you can add the value to the lookup table first and then add the record to the main table.
    Just putting forward a sample scenario. Hope this helps.
    Thanks and regards
    Subbu
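    A toy, self-contained illustration of the lookup-then-main-table flow Subbu describes (this is NOT the MDM API; two in-memory collections just stand in for the lookup table and the main table so the control flow is visible):

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class LookupThenAddDemo {
        private final Map<String, Integer> lookupTable = new LinkedHashMap<>();
        private final List<String> mainTable = new ArrayList<>();
        private int nextId = 1;

        // Resolve the lookup value to a record id, creating the lookup entry if it is missing.
        int resolveLookup(String value) {
            Integer id = lookupTable.get(value);
            if (id != null) {
                return id;                  // value already present in the lookup table
            }
            int newId = nextId++;           // not found: add it to the lookup table first
            lookupTable.put(value, newId);
            return newId;
        }

        // Only after the lookup id is known is the main-table record added.
        void addMainRecord(String freeText, String lookupValue) {
            int lookupId = resolveLookup(lookupValue);
            mainTable.add(freeText + " -> lookupId=" + lookupId);
        }

        public static void main(String[] args) {
            LookupThenAddDemo demo = new LookupThenAddDemo();
            demo.addMainRecord("some free text", "Category A");
            System.out.println(demo.mainTable); // [some free text -> lookupId=1]
        }
    }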

  • How Transaction Manager work with Resource Manager, like Connection pool?

    hi,
    I'm using BEA WebLogic 8.1 with stateless session beans / DAOs / Oracle stored procs, but I'm not quite clear on how the Transaction Manager works with a Resource Manager such as a connection pool.
    My understanding is that, in a WebLogic transaction, a stateless session bean interacts with several DAOs, and for each DAO method a connection is acquired from the connection pool. I've heard that the connection will not return to the pool until the transaction commits.
    My question is: does this mean that multiple connections might be allocated to a single WebLogic transaction? And if multiple connections are allocated, then how many Oracle transactions would be started? Or do the multiple connections share the same Oracle transaction?
    It doesn't seem to make sense to start multiple Oracle transactions, because a deadlock might be incurred within a single WebLogic transaction.
    any help appreciated!

    Xin Zhuang wrote:
    > (original question quoted above)
    Hi. If you configure your WLS DataSource to keep a connection for the duration of a tx, it will do that, and in any case there can be no deadlock however many connections operate for a given XA transaction.
    Here is the best coding form for DAOs or any other user-written code
    for using WebLogic DataSources. This is important for two reasons:
    1 - Thread-safety is maintained as long as the connection is a
    method-level object.
    2 - It is crucial to notify WebLogic that you are done with a connection
    ASAP, by your calling close() on it. We will then put it back in the
    pool, or keep it under the covers for your next request if it's in a
    transaction etc. The pool is optimized for quick get-use-close scenarios.
    public void one_of_my_main_JDBC_Methods()
    {
        Connection con = null;  // Must be a method-level object for thread-safety.
                                // It will be closed by the end of the method.
        Statement stmt = null;
        ResultSet rs = null;
        try {
            con = myDataSource.getConnection(); // Get the connection in the try block,
                                                // directly from the WebLogic datasource.
            // do all the JDBC within this try block. You can pass the
            // connection to subordinate methods, but not to anywhere
            // that thinks it can use the connection later.
            stmt = con.createStatement();                  // (illustrative statement)
            rs = stmt.executeQuery("select 1 from dual");  // (illustrative query)
            rs.close();   // close any result set asap
            stmt.close(); // then close any statement asap
            // When you're done with JDBC
            con.close();  // close the connection asap
            con = null;   // nullify it so the finally block knows it's done
        } catch (Exception e) {
            // do whatever catch stuff you want. You don't
            // need a catch block if you don't want one...
        } finally {
            // It is important to close a JDBC connection ASAP when it's not needed,
            // without fail, and regardless of exit path. Do everything in your
            // finally block in its own try-catch-ignore so everything is done.
            try { if (con != null) con.close(); } catch (Exception ignore) {}
        }
    }

  • ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 237

    Hello,
    I encountered ORA-10561 while I was recovering my DB.
    Problem background:
    I took a hot backup of my DB running on Windows, edited the pfile and recreated the control file.
    I was able to successfully mount the DB.
    Then I gave the command below to apply the redo logs and recover the DB, and the errors followed as shown.
    The hot backup was copied and restored using a USB pen drive. I suspect that the datafiles and/or archive logs may have had some format issues due to the O/S change {WinXP ---> RHEL5}.
    If this is the case, then I would like to know how to convert the files (datafiles and/or redo logs) into an acceptable format.
    >
    SQL> recover database until time '2010-03-15:18:08:05' using backup controlfile;
    ORA-00279: change 3447582 generated at 03/15/2010 17:41:42 needed for thread 1
    ORA-00289: suggestion : /home/oracle/NEW/ARCHIVE/ARC0000000144_0706577643.0001
    ORA-00280: change 3447582 for thread 1 is in sequence #144
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /home/oracle/NEW/ARCHIVE/ARC00144_0706577643.001
    ORA-00283: recovery session canceled due to errors
    ORA-10562: Error occurred while applying redo to data block (file# 1, block#
    1658)
    ORA-10564: tablespace SYSTEM
    ORA-01110: data file 1: '/home/oracle/NEW/oradata/O1_MF_SYSTEM_5M9ZKSSW_.DBF'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 237
    ORA-00600: internal error code, arguments: [4502], [0], [], [], [], [], [], []
    >
    Thanks in advance.
    Regards,
    Valli

    You need to give much more information:
    What version of Oracle? 10gR2 is not a version, 10.2.0.1 is a version.
    What version of Windows, exactly? What version of linux?
    ORA-600 means you need to talk to Oracle support. There is an ora-600 lookup tool, which basically searches the knowledge base (for ora-600[4502] in your case, which brings up a bunch of really, really, really old docs).
    How exactly did you take the "hot backup?" There are a number of ways to do things with that name, some of which are just plain wrong.
    How exactly did you get the backup from one machine to another? Which exact commands did you use to copy the files to the usb and from the usb? Did you do it more than once?
    Why are you using the backup controlfile syntax? There are valid reasons, and invalid reasons to do that.
    What exactly did you change in the pfile?
    What does the alert log say about all this?
