What is a Transaction Manager?
For updating multiple databases in a distributed environment one has to use two-phase
commit. In 2PC there is one transaction manager that coordinates the
transaction with all the resource managers.
I need to know what this transaction manager is. Is it a separate piece of software
that has to be plugged into WebLogic, or does it come as part of the WebLogic 6.1
beta release? I also read in the javax.transaction and javax.sql packages that
for updating multiple databases one needs XADataSource and XAConnection objects.
How are these two linked with 2PC?
Please help
Upkar,
WLS 6.0 and above incorporates an implementation of a transaction manager (the engine that
drives a 2 phase commit) written by the same engineers who worked on the transaction
manager in Tuxedo.
In a 2 phase commit, instead of saying "commit" directly to the databases, the transaction
manager says "prepare" and then "commit" (these are the 2 phases). The interface that a
database provides for the TM to do this through is XAResource.
The XAConnection is merely a connection that supports this XAResource interface.
None of this XA stuff should be seen in your application code, unless you are writing a
transaction manager or a database driver, which I don't suppose you will be doing!
Regards,
Peter.
Got a Question? Ask BEA at http://askbea.bea.com
The views expressed in this posting are solely those of the author, and BEA
Systems, Inc. does not endorse any of these views.
BEA Systems, Inc. is not responsible for the accuracy or completeness of the
information provided
and assumes no duty to correct, expand upon, delete or update any of the
information contained in this posting.
Upkar Sharma wrote:
> [...]
Similar Messages
-
What's the function of the null transactional manager server?
how to use the null transactional manager server?
If you need to call a service while in a transaction, but that service
does not use a resource manager (database), then it should be in a group
with the null TMS.
If it is a leaf service, you could call it with TPNOTRAN, but if it has
to call other services that are part of the transaction, then it too
must be part of the transaction.
The first service called by a client that initiated a transaction will
become the coordinator, so it needs to have a TMS even if it is not
using a database.
Scott Orshan
BEA Systems
wangming wrote:
>
how to use the null transactional manager server? -
Forte Transaction Management & 2PC
The main purpose of 2PC in a distributed transaction manager is
to enable recovery from a failure that occurs during the window
of transaction commit processing. The Forte transaction manager was built
with this in mind, but only with respect to the "volatile" (or "in-memory")
objects that Forte manages. This implies that, because Forte stores
objects in memory and not persistently on disk, the requirement of recovery
for these objects is significantly reduced (if not eliminated altogether).
Forte follows a distributed 2PC model in that tasks and messages carry
along with them transaction identification and, during commit processing,
every distributed participant is polled for its availability to commit
the transaction. Applications saving persistent data to disk during a
distributed Forte transaction need to concern themselves with the potential
for failure during the commit processing window. Forte's prepare phase polls
each site (confirming a communications link with each distributed participant)
but no prepare request goes to the database, primarily because (in releases 1 and
2 of Forte) no database supported a general distributed two-phase commit
(one could take issue with that in the case of Sybase, but rather than debate
this point, suffice it to say that the general direction in the industry for
support of this functionality was through TP monitors -- more on that later).
Once all sites are ready to commit, Forte expects that the commit will
complete successfully. If at this moment, for example, a participating
Sybase server terminates (with data not yet committed) while a participating
Oracle server has already committed its unit of work, then the outcome of
the distributed transaction is inconsistent - if no one has yet committed
Forte will still abort the transaction. This "window of inconsistency"
is documented in the Forte TOOL manual.
Mission critical applications that require distributed transactions can
address this window of inconsistency in a number of ways:
* Utilize a TP monitor such as Encina (see below)
* Log distributed updates in an auxiliary database table (much like a
distributed transaction monitor's transaction-state log). This approach has
been the traditional banking application solution prior to the commercial
availability of products like Encina, Tuxedo, TopEnd, etc.
This solution is somewhat complex and is usually not generic enough
to avoid code changes every time a new table or database
site is introduced into the application's data model.
* Rearrange the data model in order to eliminate the need for distributed
transactions. This is usually only a temporary solution (with smaller
numbers of active clients) and cannot be applied to complex legacy systems.
With the advent of the X/Open distributed transaction architecture (the
XA Interface) more database vendors have found that by complying with the
XA interface they can plug their database-specific implementation of
transaction into a globally managed transaction, with commit and abort
processing being conducted by a central coordinator. Of course, the
overall transaction manager coordinating the global transaction must
itself persistently record the state of the different distributed
branches participating in the transaction. A significant portion of
the functionality provided by products such as Encina, Tuxedo, TopEnd and
OpenTP1 is to provide exactly this global transaction management.
Rather than extend the Forte distributed transaction manager with the
functionality necessary to manage and recover distributed transactions
that modify data on disk, Forte has chosen to integrate with the emerging
set of commercial transaction monitors and managers. This decision was
built into the original design of the Forte transaction model (using XA and
early Tuxedo white-papers as guidelines):
* In Forte release 2 an integration with Encina was delivered.
* In January 1997 a press release announced an integration of
OpenTP1 with Forte for release 3.
* The Forte engineering staff is currently investigating integration
with other transaction management products as well.
Neil Goodman,
Forte Development. -
You don't. ("manage" a transaction)
There is nothing really to "manage".
A transaction is automatically started when you make any changes to data (e.g. fire off a DML statement).
You simply need to issue a COMMIT or ROLLBACK when needed: a COMMIT at the end of the business transaction and not before (i.e. no committing every n number of rows), and a ROLLBACK when hitting an exception or business logic error that requires the uncommitted changes to be undone.
That in a nutshell is it. It is that simple.
Oracle also supports creating savepoints and rolling back only some changes made thus far in the transaction.
The only other thing to keep in mind is that a DDL statement in Oracle issues an implicit commit. Firing off a DDL will cause any existing uncommitted transaction to be committed.
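The commit/rollback/savepoint behaviour described above can be modelled in a few lines. This is only an illustration of the semantics, not Oracle's API; in JDBC the corresponding calls are `Connection.commit()`, `Connection.rollback()`, and `Connection.setSavepoint()`.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the transaction behaviour described above:
// changes accumulate until COMMIT, ROLLBACK undoes all of them,
// and ROLLBACK TO SAVEPOINT undoes only the changes made after it.
public class TxnModel {
    private final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();
    private final Map<String, Integer> savepoints = new HashMap<>();

    public void change(String dml)      { pending.add(dml); }
    public void savepoint(String name)  { savepoints.put(name, pending.size()); }
    // Drop only the changes made after the named savepoint.
    public void rollbackTo(String name) { pending.subList(savepoints.get(name), pending.size()).clear(); }
    public void rollback()              { pending.clear(); savepoints.clear(); }
    public void commit()                { committed.addAll(pending); rollback(); }
    public List<String> committed()     { return committed; }
}
```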
Transaction "logic/management" should not be made more complex than this. -
Coherence and EclipseLink - JTA Transaction Manager - slow response times
A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
Java 1.7
Using Spring Framework 4.0.5
EclipseLink 12.1.2
TopLink grid 12.1.2
Coherence 12.1.2
javax.persistence 12.1.2
The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. Looking at DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager, but we're not sure what the issue is. If we use a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type? -
Hi Volker/Markus,
Thanks a lot for the response.
Yeah Volker, you are absolutely right. The 10-12 seconds happen when we have not used the transaction for several minutes... It looks like the transactions are moved out of the SAP buffer, or something similar, in a very short time.
And yes, the ABAP WPs are running in Pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
I would say the performance of the Java part is much better than that of the ABAP part.
Should I just remove the ABAP part of SOLMAN from memory pool 2 and assign the Java/ABAP stack a separate huge memory pool of, say, 12-13 GB?
Is that likely to improve my performance?
No, I have not changed RSDB_TDB in TCOLL from daily twice to weekly once on all systems on this box. It is running twice daily right now.
Should I change it to weekly once on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
But my CPU utilization is anyway only about 30% on average. It's i570 hardware, currently running 5 CPUs.
So you still think I should change this job from daily twice to weekly once on all systems on this box?
Markus, did you open any messages with SAP on this issue?
I remember working on the 3.2 version of Solution Manager on change management, and the response times were much better than in 4.0.
Let me know, guys, and once again, thanks a lot for your help and valuable input.
Abhi -
@TransactionAttribute annotation being ignored by Transaction Manager
I am currently running jboss-4.0.4GA. I believe I must have something configured incorrectly, or I misunderstand transaction management performed by the container. Though I have my datasource declared as local-tx, which I believe allows transactions, it appears that a call to a remote function in a stateless session bean is executed entirely in one single transaction, regardless of the @TransactionAttribute tags.
In my example, I call a function with @TransactionAttribute = REQUIRED. This is the OUTER FUNCTION. This function inserts a record into the cust table of our database. Then this function calls a second function with @TransactionAttribute = REQUIRES_NEW. This is the INNER FUNCTION.
This function should, according to spec, start up a new transaction independent of the first function. However, the INNER function can select the (uncommitted) cust record from the OUTER function. The INNER function then proceeds to add a cust record of its own to the database.
Control then returns to the OUTER function, which can successfully read the cust record inserted by the INNER function, which is to be expected because the INNER function should have had its transaction committed. However, my program then throws a RuntimeException in order to force a rollback, and this rollback removes both the cust record inserted by the OUTER function and the cust record inserted by the INNER function.
To further my belief that the transaction manager is ignoring my @TransactionAttribute annotations, I change the TransactionAttributeType of the INNER function to "NEVER". According to spec, the code should throw an exception when this function is called within a managed transaction. However, when I run the code I get the exact same behavior as when the INNER function is "REQUIRES_NEW".
I would greatly appreciate if anyone has any insight into what I am doing wrong. Thanks!
Client Program that Invokes TestTransImpl Stateless Session Bean
public class Client {
    public static void main(String[] args) throws Exception {
        try {
            Properties env = new Properties();
            env.setProperty(Context.SECURITY_PRINCIPAL, "guest");
            env.setProperty(Context.SECURITY_CREDENTIALS, "guest123");
            env.setProperty(Context.PROVIDER_URL, "jnp://localhost:1099");
            env.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
            env.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.security.jndi.JndiLoginInitialContextFactory");
            InitialContext ctx = new InitialContext(env);
            TestTransRemote ttr = (TestTransRemote) ctx.lookup("TestTransImpl/remote");
            ttr.testTransactions();
        } catch (Exception e) {
            e.printStackTrace();
            throw e;
        }
    }
}
Remote Interface for TestTransImpl Stateless Session Bean
public interface TestTransRemote extends Serializable {
    public void testTransactions() throws Exception;
}
TestTransImpl Stateless Session Bean
@Stateless
@Remote(TestTransRemote.class)
public class TestTransImpl implements TestTransRemote {
    private static final long serialVersionUID = 1L;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void testTransactions() throws Exception {
        java.sql.Connection conn = getConnection();
        java.sql.PreparedStatement ps;
        ps = conn.prepareCall("insert into cust(loc,cust_no) values ('001',20)");
        ps.execute();
        System.out.println("OUTSIDE FUNCTION - Customer 20 created");
        requiredNewFunction();
        ps = conn.prepareCall("Select cust_no from cust where loc = '001' and cust_no = 24");
        java.sql.ResultSet results = ps.executeQuery();
        results.next();
        System.out.println("OUTSIDE FUNCTION - Customer Read - Cust No = " + results.getLong("cust_no"));
        throw new RuntimeException();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    private void requiredNewFunction() throws Exception {
        java.sql.Connection conn = getConnection();
        java.sql.PreparedStatement ps;
        ps = conn.prepareCall("Select cust_no from cust where loc = '001' and cust_no = 20");
        java.sql.ResultSet results = ps.executeQuery();
        results.next();
        System.out.println("INSIDE FUNCTION - Customer Read - Cust No = " + results.getLong("cust_no"));
        ps = conn.prepareCall("insert into cust(loc,cust_no) values ('001',24)");
        ps.execute();
        System.out.println("INSIDE FUNCTION - Customer 24 created");
    }

    private java.sql.Connection getConnection() throws Exception {
        javax.naming.InitialContext ic = new javax.naming.InitialContext();
        javax.sql.DataSource ds = (javax.sql.DataSource) ic.lookup("java:MyOracleDS");
        return ds.getConnection();
    }
}
Datasource XML File
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
<local-tx-datasource>
<jndi-name>MyOracleDS</jndi-name>
<connection-url>jdbc:oracle:thin:XXXXX(DB Host):1521:XXXXX(DB Sid)</connection-url>
<driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
<user-name>XXXXX(username)</user-name>
<password>XXXXX(password)</password>
<min-pool-size>5</min-pool-size>
<max-pool-size>100</max-pool-size>
<exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
<!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
<metadata>
<type-mapping>Oracle10g</type-mapping>
</metadata>
</local-tx-datasource>
</datasources>
Program Output
08:43:41,093 INFO [STDOUT] OUTSIDE FUNCTION - Customer 20 created
08:43:41,125 INFO [STDOUT] INSIDE FUNCTION - Customer Read - Cust No = 20
08:43:41,140 INFO [STDOUT] INSIDE FUNCTION - Customer 24 created
08:43:41,140 INFO [STDOUT] OUTSIDE FUNCTION - Customer Read - Cust No = 24
All EJB invocation behavior, including authorization, container-managed transactions, etc., only applies when the call is made through one of the appropriate EJB client objects. If
TestTransImpl.testTransactions() directly invokes requiredNewFunction() it's just a normal java
method call -- the ejb container has no idea it's happening and is not interposing. If you want
the full ejb invocation behavior when you invoke requiredNewFunction() you'll need to
make sure requiredNewFunction is part of a business interface, is public, and is invoked through
the corresponding EJB reference:
@Resource private SessionContext ctx;
public void testTransactions() throws Exception {
TestTransRemote testTrans = ctx.getBusinessObject(TestTransRemote.class);
testTrans.requiredNewFunction();
} -
11g TP2 ADF Task Flows and Transaction Management
I'm wondering how ADF Task Flow Transaction Management works vis-a-vis database sessions and using stored procedure calls in an environment with connection pooling. I haven't written the code yet but am looking for a better understanding of how it works before I try.
Example:
I create a bounded adf task flow. I set the "transaction" property to "new-transaction" and the "data control scope" to "isolated".
As the task flow is running, the user clicks buttons that navigate from page to page in the flow. Each button click posts the page back to the app server. On the app server a backing bean method in each page calls a stored procedure in a database package to modify some values in one or more tables in the database. The procedure does not commit these changes.
Each time a backing bean makes a stored procedure call will it be in the same database session? Or will connection pooling possibly return a different database connection and therefore a different database session?
If the transaction management feature of the ADF task flows guarantees me that I will always be in the same database session, then I don't have to write any extra code to make this work. Will it do that or not?
I don't know if it is documented in the ADF documentation currently available for 11g TP2, but what you ask for is normal transaction management with connection pooling, and I can't imagine it is not implemented in the ADF BC layer as it is in JPA or other persistence layers.
A transaction will always be executed in the same session. Normally your web session will stay in the same database session even if you start more than one transaction. You don't have to write any code to manage the connection pooling. It is good practice to customize it at the persistence layer during installation, depending on your infrastructure.
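The "same session" guarantee described above usually comes from the pool pinning one connection to the transaction for its whole duration. A minimal sketch of that idea (hypothetical names, not ADF BC's actual implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of transaction-scoped connection pinning: while a logical
// transaction is open, acquire() always returns the same underlying
// "connection"; between transactions the pool may hand out any one.
public class PinningPool {
    private final Deque<String> free = new ArrayDeque<>();
    private String pinned;  // connection held by the open transaction, if any

    public PinningPool(String... conns) {
        for (String c : conns) free.push(c);
    }

    public String acquire() {
        if (pinned != null) return pinned;  // inside a txn: always the same one
        return free.pop();
    }

    public void beginTransaction() { pinned = free.pop(); }

    public void endTransaction() {
        free.push(pinned);  // release the pinned connection back to the pool
        pinned = null;
    }
}
```

With this shape, every stored procedure call made between beginTransaction() and endTransaction() runs on the same connection, and therefore in the same database session.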
Take a look into the Fusion Developer Guide ... I'm sure you will find some better explanations about this. -
Let me know what is the use of "Open Item Management" in GL creation.
I read about it by pressing F1, but I didn't get clarity. I just came to know that it should be used for balance sheet items. Yet there are many balance sheet items, like bank accounts and raw material accounts, for which it should not be used...
So please clarify in your own words what "Open Item Management" is and where it should be used.
Hello,
Following is the F1 help for your reference:
Indicator: Open item management?
Determines that open items are managed for this account.
Use
Items posted to accounts managed on an open item basis are marked as open or cleared. The balance of these accounts is always equal to the balance of the open items.
Procedure
Set up accounts with open item management if offsetting entries are to be assigned to the postings made to these accounts. Postings to these accounts represent incomplete transactions.
Example:
A goods receipt/invoice receipt (GR/IR) clearing account should be managed on an open item basis so that you can check at any time whether invoices have been received for the goods received for an order.
Set up accounts without open item management if no offsetting entry is to be made against a posting to this account.
Examples
Accounts that are managed on an open item basis include:
Clearing accounts:
Bank clearing account
Payroll clearing account
Cash discount clearing account
GR/IR clearing account
Accounts that are not managed on an open item basis:
Bank accounts
Tax accounts
Raw material accounts
Reconciliation accounts
These are managed implicitly using the subledger open item function.
Profit and loss accounts
Materials Management (MM) accounts posted with a posting key that has account type 'M'
Regards,
Ravi -
User submitted Credit Card Historical Transactions Management Report by mis
A user has submitted the 'Credit Card Historical Transactions Management Report' by mistake. They have noticed that, as a result, the concurrent request appears to have deactivated all unused iExpense transactions up to and including the date they ran the program, leaving a number of affected cardholders unable to see or acquit their Visa transactions.
We believe this process can be reversed using the same report and choosing "activate transactions"; however, we need confirmation that our
assumption is correct and that the request can be reversed. Can anyone please confirm whether this can be reversed?
Thanks
Lee
I created an SR with Oracle to confirm this and they said you can run this program. I just wanted to make sure the program is safe to run for clearing out outstanding charges for a termed employee.
If Oracle support confirmed that you can run the program, then I would say go with what they said :)
Thanks,
Hussein -
Java user-defined transaction management not working correctly?
Hi everyone,
I have encountered a problem when using Java user-defined transaction management in my session bean. It threw an exception but I could not work out what that means. Could anyone comment on this? Thanks.
This BrokerBean is a stateless session bean calling other entity beans to perform some simple operations. There are 2 Cloudscape databases in use: Invoices (EB) uses InvoiceDB and all the other EBs use StockDB.
If I comment out the user-defined transaction management code, then everything works fine. Or if I comment out the Invoices EB code, it is fine as well. It seems to me that there is something wrong in transaction management when dealing with distributed databases.
--------------- source code ----------------------
public void CreateInvoices(int sub_accno) {
    try {
        utx = context.getUserTransaction();
        utx.begin();
        SubAcc subAcc = subAccHome.findByPrimaryKey(new SubAccPK(sub_accno));
        String sub_name = subAcc.getSubName();
        String sub_address = subAcc.getSubAddress();
        Collection c = stockTransHome.findBySubAccno(sub_accno);
        Iterator i = c.iterator();
        ArrayList a = new ArrayList();
        while (i.hasNext()) {
            StockTrans stockTrans = (StockTrans) i.next();
            int trans_id = stockTrans.getTransID();
            String tran_type = stockTrans.getTranType();
            int stock_id = stockTrans.getStockID();
            float price = stockTrans.getPrice();
            Invoices invoices = invoicesHome.create(sub_accno, sub_name, sub_address, trans_id, stock_id, tran_type, price);
            stockTrans = stockTransHome.findByPrimaryKey(new StockTransPK(trans_id));
            stockTrans.remove();
        }
        utx.commit();
        utx = null;
    } catch (Exception e) {
        if (utx != null) {
            try {
                utx.rollback();
                utx = null;
            } catch (Exception ex) {}
        }
        // e.printStackTrace();
        throw new EJBException("BrokerBean.CreateInvoices(): " + e.getMessage());
    }
}
--------------- exception ----------------------
Initiating login ...
Enter Username:
Enter Password:
Binding name:`java:comp/env/ejb/BrokerSB`
EJB test succeed
Test BuyStock!
Test BuyStock!
Test BuyStock!
Test BuyStock!
Test SellStock!
Test SellStock!
Caught an exception.
java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
    java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
    org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.corba.ee.internal.iiop.ShutdownUtilDelegate.mapSystemException(ShutdownUtilDelegate.java:64)
        at javax.rmi.CORBA.Util.mapSystemException(Util.java:65)
        at BrokerStub.CreateInvoices(Unknown Source)
        at Client.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at com.sun.enterprise.util.Utility.invokeApplicationMain(Utility.java:229)
        at com.sun.enterprise.appclient.Main.main(Main.java:155)
Caused by: java.rmi.RemoteException: BrokerBean.CreateInvoices(): CORBA TRANSACTION_ROLLEDBACK 9998 Maybe; nested exception is:
    org.omg.CORBA.TRANSACTION_ROLLEDBACK: vmcid: 0x2000 minor code: 1806 completed: Maybe
        at com.sun.enterprise.iiop.POAProtocolMgr.mapException(POAProtocolMgr.java:389)
        at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:431)
        at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:265)
        at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
        at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
        at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
        at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
        at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
Three things:
First, maybe you should think of putting utx.begin() just before the invoicesHome.create() call and utx.commit() just after the stockTrans.remove() call. It won't solve the current problem, but it will help performance once the problem is solved.
Second, your utx.commit() is outside the try block. How come the code is compiling, then?
Third, try doing a System.out.println() call before and after the invoicesHome.create() call and see where the problem actually lies.
let us know...
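For reference, the structure the first reply is pointing at usually looks like the sketch below. `Utx` here is a hypothetical stand-in for `javax.transaction.UserTransaction` (simplified to unchecked exceptions) so the pattern can run outside a container:

```java
// Hypothetical stand-in for javax.transaction.UserTransaction; the real
// interface offers the same begin()/commit()/rollback() verbs but with
// checked exceptions.
interface Utx {
    void begin();
    void commit();
    void rollback();
}

public class TxnPattern {
    // Begin just before the unit of work, commit just after it,
    // and roll back on any failure - then rethrow so the caller knows.
    public static void runInTransaction(Utx utx, Runnable work) {
        utx.begin();
        try {
            work.run();
            utx.commit();
        } catch (RuntimeException e) {
            utx.rollback();
            throw e;
        }
    }
}
```

Keeping begin/commit tightly around the unit of work both narrows the window in which locks are held and makes the rollback path impossible to miss.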
Hi SteveW2,
Thanks for being so helpful. Here are my replies:
> Can I just ask why you're not using container-managed transactions?
The reason why I didn't use container-managed transactions is that I don't
really know how to do that. I am more familiar with this user-defined
transaction handling.
I have attempted to implement the same method in an
entity bean and just let the container manage the
rollback itself. The same exception was thrown when
running the client.
> Also, the transaction behaviour is likely to relate to the app server you're using - which is it?
What do you mean by the app server? I am using J2EE 1.3.1, if that is what you meant.
> Finally, if your code has a problem rolling back, and throws an exception, you discard your exception, thereby losing useful information.
I have tried to print the exception stack trace as well, but it is the same as
just printing the general exception.
This problem is very strange because if I comment out the transaction
management code, then everything works fine. Or if I am working with only
a single database, with this user-defined transaction handling, everything
works fine as well.
Here is the error log from J2EE server if you are
interested.
------------ error log ---------------
javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
    at InvoicesBean.ejbCreate(Unknown Source)
    at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:31)
    at InvoicesHomeStub.create(Unknown Source)
    at BrokerBean.CreateInvoices(Unknown Source)
    at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
    at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
    at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
    at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
    at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is:
    javax.ejb.EJBException: ejbCreate: Connection previously closed, open another Connection
    at com.sun.ejb.containers.BaseContainer.checkExceptionClientTx(BaseContainer.java:1434)
    at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:1294)
    at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:403)
    at InvoicesBean_RemoteHomeImpl.create(InvoicesBean_RemoteHomeImpl.java:37)
    at InvoicesHomeStub.create(Unknown Source)
    at BrokerBean.CreateInvoices(Unknown Source)
    at BrokerBean_EJBObjectImpl.CreateInvoices(BrokerBean_EJBObjectImpl.java:261)
    at BrokerBeanEJBObjectImpl_Tie._invoke(Unknown Source)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatchToServant(GenericPOAServerSC.java:520)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.internalDispatch(GenericPOAServerSC.java:210)
    at com.sun.corba.ee.internal.POA.GenericPOAServerSC.dispatch(GenericPOAServerSC.java:112)
    at com.sun.corba.ee.internal.iiop.ORB.process(ORB.java:255)
    at com.sun.corba.ee.internal.iiop.RequestProcessor.process(RequestProcessor.java:84)
    at com.sun.corba.ee.internal.orbutil.ThreadPool$PooledThread.run(ThreadPool.java:99)
What is "connection previously closed, open another
connection"? This might be the cause of the
exception.
I'll keep trying till I solve the problem.
Thanks,
Sasuke -
RE: Re[2]: Transaction Management
Hi,
Thanks for the reply. But my situation may require more than one DBSession
per Persistence Manager, as under the same domain problem my persistent
objects are spread across multiple databases (due to some legacy and packaged
systems).
On the other hand, I think your model would be helpful in some cases.
Following is what I think you are doing in your model in order to make use
of this Persistence Manager.
1. Client program would retrieve business object from this persistence
manager;
2. When persistence manager return the requested object, it would save
its pointer to the business object.
3. When client call save()/delete() on the business object, it would
then route the request to the persistence manager by its pointer.
Please correct me if I'm wrong.
As mentioned, I have another design on this issue. When my document is
ready, would you like to take a look? I just want to invite more opinions on
this "Framework", on which I've spent at least half a year working.
Best regards,
Peter Sham.
-----Original Message-----
From: Dimitar Gospodinov [SMTP:[email protected]]
Sent: Monday, May 17, 1999 5:41 PM
To: Peter Sham (HTHK - Assistant Manager - Software Development,
IITB)
Cc: Vanessa Rumball; [email protected]
Subject: Re[2]: Transaction Management
Hello Peter,
Well, we are using a slightly different approach. We have an SO
(we call it Persistence Manager) and a DBSession SO (user visible)
in one partition. This partition is load balanced.
All database activity is in the Persistence Manager - in one
partition that uses one DBSession. In this approach we do not have
the possibility of deadlocks between different DBSessions, because,
for example, an activity that involves several tables will be
executed within one DBSession. And since this partition is load
balanced, access to the database will not be blocked.
Hope this makes sense.
Best regards,
Dimitar mailto:[email protected]
Monday, May 17, 1999, 1:55:35 PM, you wrote:
PSHAMSDI> Hi,
PSHAMSDI> I would like to add to the question on the concern about sharing DBSessions.
PSHAMSDI> The fact that a DBSession is shared, and is blocked from other threads within
PSHAMSDI> a transaction, makes it a candidate for deadlock. That's why in my
PSHAMSDI> application, up until now, I dare not load-balance a DBSession or involve
PSHAMSDI> multiple DBSessions in an update transaction. It is my experience that when
PSHAMSDI> multiple DBSessions are involved in an update transaction, there is a great
PSHAMSDI> chance that the DBSessions get deadlocked by different threads.
PSHAMSDI> The way that we do it now is very dumb and hard to maintain. We pass the
PSHAMSDI> DBSession along for all the calls involved in an update transaction.
PSHAMSDI> However, if someone forgets to follow the convention, the application will
PSHAMSDI> get deadlocked, and I have to use dump status on the partitions to trace
PSHAMSDI> back the invoking method. It is horrible, with no guarantee of finding the
PSHAMSDI> source of the problem.
PSHAMSDI> I have figured out a more extensive architecture to solve this problem. But
PSHAMSDI> before I fully implement my design, I would like to know if there is already
PSHAMSDI> an elegant solution out there.
PSHAMSDI> Thanks for any help in advance.
PSHAMSDI> Best regards,
PSHAMSDI> Peter Sham.
PSHAMSDI> -----Original Message-----
PSHAMSDI> From: Dimitar Gospodinov [SMTP:[email protected]]
PSHAMSDI> Sent: Monday, May 17, 1999 2:47 PM
PSHAMSDI> To: Vanessa Rumball
PSHAMSDI> Cc: [email protected]
PSHAMSDI> Subject: Re: Transaction Management
PSHAMSDI> Hello Vanessa,
PSHAMSDI> You should use dependent transactions - the "begin transaction"
PSHAMSDI> statement is equal to the "begin dependent transaction" statement.
PSHAMSDI> So you can have several methods for saving the data in different
PSHAMSDI> tables - all these methods contain a "begin transaction .. end
PSHAMSDI> transaction" construction.
PSHAMSDI> Then you can have one "wrapper" method that calls the above
PSHAMSDI> methods. This method also contains a "begin transaction .. end
PSHAMSDI> transaction" construction.
PSHAMSDI> Now you have dependent transactions - if one of the transactions
PSHAMSDI> fails, the whole bunch of transactions will fail.
PSHAMSDI> If you want to catch the deadlocks, you may register for the
PSHAMSDI> AbortException exception and retry your outermost transaction.
PSHAMSDI> Hope this helps.
PSHAMSDI> Best regards,
PSHAMSDI> Dimitar mailto:[email protected]
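The dependent-transaction behaviour described above can be modelled outside Forte with a small sketch (illustrative Java only, no Forte APIs): nested begin/commit calls join the outermost transaction, nothing becomes visible until the outermost commit, and a rollback aborts the whole bunch:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of dependent transactions: inner begin/commit pairs
// join the outermost transaction; only the outermost commit is real, and
// any rollback fails the whole bunch.
class DependentTx {
    private final Map<String, String> table = new HashMap<>(); // committed data
    private Map<String, String> pending;                        // uncommitted writes
    private int depth = 0;

    void begin() {                       // "begin dependent transaction"
        if (depth++ == 0) pending = new HashMap<>(table);
    }

    void write(String key, String value) {
        pending.put(key, value);
    }

    void commit() {                      // only the outermost commit flushes
        if (--depth == 0) { table.clear(); table.putAll(pending); pending = null; }
    }

    void rollback() {                    // any abort discards all pending work
        depth = 0;
        pending = null;
    }

    String read(String key) { return table.get(key); }
}
```

Under this model, if the wrapper rolls back after the first inner "save" has already committed, that save still disappears - which is exactly the all-or-nothing behaviour the original question was after.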
PSHAMSDI> Monday, May 17, 1999, 6:08:17 AM, you wrote:
PSHAMSDI> VR> Hi there,
PSHAMSDI> VR> I have a number of table manager classes, each of which saves
PSHAMSDI> VR> data to their respective table in the database. These tables
PSHAMSDI> VR> may be locked by other users on occasion, so I have put in
PSHAMSDI> VR> exception handlers on the managers to cater for this. The user
PSHAMSDI> VR> has the option to keep trying, or to give up and try again later.
PSHAMSDI> VR> Now sometimes three or more tables may need to be updated
PSHAMSDI> VR> together, and if one fails to commit then no data for the three
PSHAMSDI> VR> tables should be saved to the database. In such a case the 'save'
PSHAMSDI> VR> methods of the three or more table managers are called from a
PSHAMSDI> VR> single method within one 'dependent' Forte transaction. Before
PSHAMSDI> VR> calling the save methods, I call another method which starts a
PSHAMSDI> VR> SQL 'read write wait 10' transaction reserving each table needed
PSHAMSDI> VR> within the transaction.
PSHAMSDI> VR> I have read through the Transactions chapter of the Forte
PSHAMSDI> VR> Accessing Databases manual and see examples where a number of
PSHAMSDI> VR> SQL statements are included within a transaction, and each one
PSHAMSDI> VR> commits only if all are successful at the end of the transaction.
PSHAMSDI> VR> I assumed my approach would be similar, especially when using
PSHAMSDI> VR> the 'begin dependent transaction' statement.
PSHAMSDI> VR> But if the application gets around to saving the second table,
PSHAMSDI> VR> which is locked, and the user decides not to commit, the first
PSHAMSDI> VR> table is still updated in the database.
PSHAMSDI> VR> Is it because my SQL statements are in separate methods and
PSHAMSDI> VR> are committed when the method is complete? Or am I missing
PSHAMSDI> VR> something somewhere?
PSHAMSDI> VR> Any help greatly appreciated.
PSHAMSDI> VR> Thank you.
PSHAMSDI> VR> Vanessa.
PSHAMSDI> VR> ===========================================< @ >>===========================================
PSHAMSDI> VR> Vanessa Rumball
PSHAMSDI> VR> Analyst Programmer    Phone: (03) 479 8285
PSHAMSDI> VR> A.T.S.                Fax:   (03) 479 5080
PSHAMSDI> VR> University of Otago   Email: [email protected]
PSHAMSDI> VR> PO Box 56
PSHAMSDI> VR> Dunedin
PSHAMSDI> VR> New Zealand
PSHAMSDI> VR> ===========================================< @ >>===========================================
PSHAMSDI> VR> -
PSHAMSDI> VR> To unsubscribe, email '[email protected]' with
PSHAMSDI> VR> 'unsubscribe forte-users' as the body of the message.
PSHAMSDI> VR> Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
Hi Bolun,
If you have 2 different UOMs (KG and PC) you will see only 2 sub-totals. If you have more, you will see more...
You can try some workarounds:
Option 1: Convert the other units of measure into KG or PC, or
Option 2: Add one more indicator at the InfoProvider level, populate it accordingly based on UOM, and use it in the report, or
Option 3: Make UOM a user input and restrict the report based on that...
Hope it helps,
Srini -
Need info regarding BPEL Transaction Management
Hi All,
Can anybody give me information on how much support BPEL has for transaction management?
Up to what level can BPEL (10.1.3.3) offer transactional behaviour so far? Can WS-Transaction still be expected from this current version? If somebody gives me a link to any relevant document, that would be very fruitful to me.
Thanks in advance.
Refer to the link: http://download.oracle.com/technology/tech/soa/soa_best_practices_1013x_drop3.pdf
Under the section transaction you'll get an overview of BPEL transaction management.
Regards,
Mandrita. -
Hi Gurus,
I need your help on a concurrent request (Receiving Transaction Manager) showing as Inactive/No Manager.
EBS: R12.1.2 on Linux
Database: 11.2.0.3 on HP UX Itanium
A user is submitting the concurrent request - "Receiving Transaction Manager" (It is a concurrent program not a concurrent manager). The request ends up as Inactive/No Manager. When I click on the diagnostics button this is what it shows -
"No concurrent manager is defined to process this request, so it cannot be processed.
Contact your system administrator to define a concurrent manager to process this request or to verify that existing concurrent managers have the correct specialization rules."
I ran analyzereq.sql and the analysis is
Analysis:
Request 1427025 is in phase "Pending" with status " Normal".
(phase_code = P) (status_code = I)
This request is an unconstrained request. (queue_method_code = I)
It is in a "Pending/Normal" status, ready
to be run by the next available manager.
Checking managers available to run this request...
There is no manager defined that can run this request
This should show on the form as Inactive/No Manager
Check the specialization rules for each
manager to make sure they are defined correctly.
Additional information (from apps.FND_CONC.DIAGNOSE):
No concurrent manager is defined to process this request, so it cannot be processed.
Contact your system administrator to define a concurrent manager to process this request or to verify that existing concurrent managers have the correct specialization rules.
I ran another SQL to find out which manager runs the program :
select CONTROLLING_MANAGER,request_id
from fnd_concurrent_requests
where request_id=1427025;
It returned blank (NULL) for CONTROLLING_MANAGER.
select b.USER_CONCURRENT_QUEUE_NAME from fnd_concurrent_processes a,
fnd_concurrent_queues_vl b, fnd_concurrent_requests c
where a.CONCURRENT_QUEUE_ID = b.CONCURRENT_QUEUE_ID
and a.CONCURRENT_PROCESS_ID = c.controlling_manager
and c.request_id = 1427025;
Returned Nothing.
I thought that when no specific manager is defined for a concurrent program, the Standard Manager runs it by default. Is my understanding wrong? Has anyone of you faced this issue? How did you fix it?
I did check the profile - Concurrent: Conflict Domain for the responsibility and it is blank.
Appreciate any help.
Thanks
Cherrish Vaidiyan
Was this working before? If yes, have any changes been done recently?
Your understanding is correct, unless this concurrent program is excluded from being processed by standard manager.
If the request was not processed then the no rows output is an expected behavior -- (Doc ID 344011.1)
Have you tried the solution in (Doc ID 1303315.1)?
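As a further diagnostic sketch (the FND table/column names are real but worth verifying against your release, and the program short name is a placeholder you must supply), you can list any specialization rules that mention the program, to see whether the Standard Manager excludes it or another queue claims it:

```sql
-- Hypothetical check: which queues include (I) or exclude (E) this program?
-- Verify names against your release; supply your program's short name.
SELECT q.concurrent_queue_name, c.type_code, c.include_flag
FROM   fnd_concurrent_queue_content c,
       fnd_concurrent_queues q
WHERE  c.concurrent_queue_id = q.concurrent_queue_id
AND    c.type_code = 'P'        -- program-based specialization rule
AND    c.type_id IN (SELECT concurrent_program_id
                     FROM   fnd_concurrent_programs
                     WHERE  concurrent_program_name = '&your_program_short_name');
```

If no row comes back with INCLUDE_FLAG = 'I' for any running manager, that would match the "no manager defined" diagnostic you are seeing.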
Thanks,
Hussein -
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 237
Hello,
I encountered ORA-10561 while I was recovering my DB.
Problem background:
I took a hot backup of my DB running on Windows, edited the pfile, and recreated the control file.
I was able to successfully mount the DB.
Then I gave the command below to apply the redo logs and recover the DB, and the errors followed as mentioned.
The hot backup was copied and restored using a USB pen drive. I suspect that the datafiles and/or archive logs may have had some format issues due to the O/S change {WINXP ---> RHEL5}.
If this is the case, I would like to know how to convert the files (datafiles and/or redo logs) into an acceptable format.
>
SQL> recover database until time '2010-03-15:18:08:05' using backup controlfile;
ORA-00279: change 3447582 generated at 03/15/2010 17:41:42 needed for thread 1
ORA-00289: suggestion : /home/oracle/NEW/ARCHIVE/ARC0000000144_0706577643.0001
ORA-00280: change 3447582 for thread 1 is in sequence #144
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/oracle/NEW/ARCHIVE/ARC00144_0706577643.001
ORA-00283: recovery session canceled due to errors
ORA-10562: Error occurred while applying redo to data block (file# 1, block#
1658)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '/home/oracle/NEW/oradata/O1_MF_SYSTEM_5M9ZKSSW_.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 237
ORA-00600: internal error code, arguments: [4502], [0], [], [], [], [], [], []
>
Thanks in advance.
Regards,
Valli
You need to give much more information:
What version of Oracle? 10gR2 is not a version, 10.2.0.1 is a version.
What version of Windows, exactly? What version of linux?
ORA-600 means you need to talk to Oracle support. There is an ora-600 lookup tool, which basically searches the knowledge base (for ora-600[4502] in your case, which brings up a bunch of really, really, really old docs).
How exactly did you take the "hot backup?" There are a number of ways to do things with that name, some of which are just plain wrong.
How exactly did you get the backup from one machine to another? Which exact commands did you use to copy the files to the usb and from the usb? Did you do it more than once?
Why are you using the backup controlfile syntax? There are valid reasons, and invalid reasons to do that.
What exactly did you change in the pfile?
What does the alert log say about all this? -
What is Corporate Performance Management
Hi all
I am an SAP PLM consultant.
Can you please explain what Corporate Performance Management is, and what we do in SAP for this module?
Thanks
shashank agavekar
Hello,
Corporate performance management (CPM) is the area of business intelligence involved
with monitoring and managing an organization's performance, according to key performance indicators
(KPIs) such as revenue, return on investment (ROI), overhead, and operational costs.
For online businesses, CPM includes additional factors such as page views, server load,
network traffic and transactions per second. CPM is also known as business performance management (BPM)
or enterprise performance management (EPM).
Further info :
https://www.sdn.sap.com/irj/sdn/cpm-elearning
Regards,
Jean -
I am unable to find any sample code for using Tyrex's Transaction Manager and the JTA API. Can anyone help me here? There are two things I need help with, but if you can help me with either of them that would be huge!
1. Setting up / configuring Tyrex
2. Adding MS SQL as a DataSource (and a Transaction Domain? It's not clear what that means exactly) (I have the beta JDBC driver from MS)
I searched around Google but couldn't find any helpful tutorial about Tyrex, and Tyrex's website has next to nothing in help too!
fatmond
Not really. This is a forum about Java reflection, and your question is not related to that in the least. The JDBC forum is also supposed to include questions about JTA, but I doubt you're going to get an answer there either, because of the obscurity of the software you're asking about. But it's worth a try.
Or you could post on Tyrex's mailing list.
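Tyrex specifics aside, the client-side JTA flow itself is small; the hard part is the container wiring. Below is a minimal sketch of the begin/commit/rollback pattern. Note that UserTransaction here is a hand-rolled stand-in mirroring javax.transaction.UserTransaction, because the real one must be obtained from JNDI (typically java:comp/UserTransaction) in a configured environment:

```java
// Stand-in for javax.transaction.UserTransaction (assumption: a real one
// would come from JNDI in a container or from Tyrex's configuration).
interface UserTransaction {
    void begin() throws Exception;
    void commit() throws Exception;
    void rollback() throws Exception;
}

// Trivial implementation so the sketch is runnable outside a container.
class DemoTransaction implements UserTransaction {
    boolean committed, rolledBack;
    public void begin()    { }
    public void commit()   { committed = true; }
    public void rollback() { rolledBack = true; }
}

class TransferExample {
    // Typical JTA client code: begin, do the work (e.g. updates against two
    // XADataSources), commit; on any failure, roll everything back.
    static boolean transfer(UserTransaction utx, Runnable work) throws Exception {
        utx.begin();
        try {
            work.run();          // the resources enlisted here span databases
            utx.commit();        // the TM drives prepare/commit underneath
            return true;
        } catch (RuntimeException e) {
            utx.rollback();      // the TM tells every enlisted resource to roll back
            return false;
        }
    }
}
```

The application only sees begin/commit/rollback; as the WLS answer earlier in this thread notes, the XAResource prepare/commit conversation stays inside the transaction manager.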