Creating a non-transactional datasource
All,
I am using Quartz to fire Hibernate jobs. The Quartz scheduler runs as a servlet in Tomcat; the Hibernate job is a standalone app.
The Scheduler calls the app like so:
Quartz>Shell>Hibernate.
When I start the Hibernate jobs, they instantly go into a blocked state, basically DB deadlocks. When I view the connections with
MySQL Administrator, I can see the connections are sleeping!
I heard through the grapevine that a non-transactional datasource would solve this problem.
How do I do this? Is it done in Tomcat, or do I need to change my Hibernate DB connection?
Any ideas?
You're deadlocking with another process - quite possibly with one of your earlier tasks that stalled for some reason.
Turning off transactions is a dumb way to fix this - it avoids you finding out what the real problem is, thus curing the symptom but probably not the disease. Find out what's deadlocking and why, then fix that problem properly.
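To make the "fix the real problem" advice concrete, here is a minimal sketch (illustrative, not from the thread) of the usual alternative to disabling transactions: keep the transaction and retry the unit of work when it deadlocks. The simulated failure stands in for a real driver deadlock error; production code should retry only on deadlock/lock-timeout conditions (e.g. MySQL SQLSTATE 40001/41000), not on every exception.

```java
import java.util.concurrent.Callable;

// Sketch of a deadlock-retry wrapper (illustrative, not from the thread).
// Real code should inspect the exception and retry only when the driver
// signals a deadlock or lock timeout, not on every failure.
public class DeadlockRetry {
    public static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(50L * attempt); // simple linear backoff before retrying
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulate a job that deadlocks twice, then commits on the third try.
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("simulated deadlock");
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The point is that the whole transaction is re-run from the top on each attempt, so the database can resolve the conflict by aborting one victim without losing the work permanently.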
Similar Messages
-
JDev 9.0.3.3 Data not saved to DB when using Non-Transaction DataSource
Hi,
Env: JDev 9.0.3.3/WL 6.0 sp1/Oracle 8i
We have successfully deployed our application in 3-tier(remote mode) in JDev 9.0.3.2. using JClient, EO/VO, EJB Session Facade (BMT).
Now we are planning to use JDev 9.0.3.3 (build 1205)
We are using ejb.txn.type=local and a WebLogic DataSource (non-txn).
In JDev 9.0.3.3, after commit, the data is not getting saved to the DB. There are no errors in the log below.
This works fine in JDev 9.0.3.2.
It does not work with a simple master-detail form, nor with a single-row simple form.
==========================================
[281] BaseSQLBuilder Executing DML ... (Update)
[282] Executing DML...
[283] UPDATE CISDBA.DCX_BASE_COST_V BaseCost SET ITEM_STAT=?,TID=? WHERE PART_NUM=? AND MY=?
[284] cStmt = conn.prepareCall(" UPDATE CISDBA.DCX_BASE_COST_V BaseCost SET ITEM_STAT=?,TID=? WHERE PART_NUM=? AND MY=?"); // JBO-JDBC-INTERACT
[285] cStmt.setObject(1, new BigDecimal((double) 2.0)); /*ItemStat*/ // JBO-JDBC-INTERACT
[286] cStmt.setObject(2, "t2733bx"); /*Tid*/ // JBO-JDBC-INTERACT
[287] cStmt.setObject(3, "04782612AA"); /*PartNum*/ // JBO-JDBC-INTERACT
[288] cStmt.setObject(4, "2004"); /*My*/ // JBO-JDBC-INTERACT
[289] cStmt.execute(); // JBO-JDBC-INTERACT
[290] cStmt.close(); // JBO-JDBC-INTERACT
BaseCostImpl: after doDML
BaseCostImpl: End of doDML()...
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
[291] BaseSQLBuilder: releaseSavepoint 'BO_SP' ignored
[292] BaseSQLBuilder: setSavepoint 'BO_SP' ignored
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
[293] BaseSQLBuilder: releaseSavepoint 'BO_SP' ignored
[294] EJBTxnHandler: Commited txn
[BaseCostInvestCostViewImpl.afterCommit] Enter
[295] BaseCostInvestCostView2 notify COMMIT ...
[BaseCostInvestCostViewImpl.afterCommit] Exit
[BaseCostInvestCostViewImpl.afterCommit] Enter
[296] BaseCostInvestCostView1 notify COMMIT ...
[BaseCostInvestCostViewImpl.afterCommit] Exit
[297] SubDept2SubProgView1 notify COMMIT ...
[298] InvSubDeptLOV1 notify COMMIT ...
[299] SubProg2SubDeptView1 notify COMMIT ...
[300] SubProgramLOV1 notify COMMIT ...
[301] StdCostView1 notify COMMIT ...
[302] AltCostView1 notify COMMIT ...
[303] PlantCodeView1 notify COMMIT ...
[304] PaperCarView1 notify COMMIT ...
[305] InvestCostItemView1 notify COMMIT ...
[306] SavedSearchView1 notify COMMIT ...
[307] AltCostView1_BaseInvestToAltViewLink_AltCostView notify COMMIT ...
[308] InvestCostItemView1_BaseInvestToInvestItemViewLink_InvestCostItemView notify COMMIT ...
[309] PaperCarView_BaseCostTrackedVehicleViewLink_PaperCarView notify COMMIT ...
[310] VehicleProgramLOV1 notify COMMIT ...
[311] SubDeptLOV1 notify COMMIT ...
[312] Transaction timeout set to 28800 secs
[313] [NavigationEvent: BaseCostInvestCostView1 From 0 to 1]
[314] Column count: 14
[315] ViewObject : Reusing defined prepared Statement
[316] Binding param 1: 769661
[317] Binding param 2: 2004
[318] [RangeRefreshEvent: AltCostView1 start=-1 count=0]
[319] Column count: 13
[320] ViewObject : Reusing defined prepared Statement
[321] Binding param 1: 769661
[322] [RangeRefreshEvent: PaperCarView1 start=0 count=6]
[323] Column count: 4
[324] ViewObject : Reusing defined prepared Statement
[325] Binding param 1: INV37
[326] [RangeRefreshEvent: InvestCostItemView1 start=0 count=1]
[327] [NavigationEvent: AltCostView1 From -1 to -1]
[328] [NavigationEvent: PaperCarView1 From -1 to 0]
[329] [NavigationEvent: InvestCostItemView1 From -1 to 0]
========================================================
Hi Carsten,
I tried to reproduce your problem, but couldn't. Let me explain what steps I executed and perhaps you can advise where I've not matched your steps.
--Using build jdeveloper 9.0.3.3.1203, I built a new bc4j project containing a dept-emp default bc4j project (deptEntity, empEntity, deptView, empView, deptempFKAssoc, deptempFKViewLink, ApplicationModule).
--In dos shell, I went to the directory \jdevdir\jdev\bin and ran setvars -go to set the correct jdk version
--In the dos shell, in the directory \jdevdir\j2ee\home I executed the following command to install oc4j:
java -jar oc4j.jar (defaults pswd to welcome for admin)
--I remoted the appmodule to EJB Session Bean (BMT) and created a new deployment profile using the 9ias configuration for the application module.
--I deployed the bc4j objects to oc4j
--I created a new project
--In this project I created a new jclient master-detail form using the above project's application module for the data model
--I saved all and compiled the jclient project
--I ran the jclient form and inserted a master record
--I committed the transaction successfully
--I browsed records, then edited a record
--I committed the transaction successfully, then browsed.
Is there something I've missed? Did you migrate your project and not start by creating a new project? Is there something special about the database schema you are using?
Thanks,
Amy -
Websphere 5.0 and non-jts-datasource = 2PC exception!
Hello all
We're migrating a working WebLogic 8 app to WebSphere 5.0, and we've run into this problem.
TopLink tries to enlist the non-JTS datasource in the global transaction. In WebLogic we defined our non-JTS datasource as a non-transactional datasource, but there is no such option in WebSphere. What is going on??
Please help
TIA
- Russ -
Hello Rustam,
WebSphere 5 throws exceptions when you try to get a non-JTA datasource while in a transaction; it seems to try to enlist it in the transaction.
This is really a WebSphere issue, since it means you cannot read outside of the transaction.
There are 3 options:
1) Don't define a non-JTA datasource at all in TopLink. The drawback is that there may be problems reading when there is no transaction, such as when you are using cache synchronization.
2) Create your own datasource (outside of WebSphere) and place it in JNDI. Then have TopLink access it as a non-jta datasource. Your datasource must be completely independent of WebSphere so that it does not attempt to associate with JTA.
3) Use a TopLink-maintained read connection pool. You can use the non-jts-connection-url sessions.xml tag, which will use the login settings defined in your project.xml. I've not tested it, but you can also override the read pool in a preLogin event that should look something like:
public void preLogin(SessionEvent event) {
    // minCon/maxCon are assumed to be defined elsewhere
    DatabaseLogin dbLogin = new DatabaseLogin(new Oracle9iPlatform());
    dbLogin.setUserName("name");
    dbLogin.setPassword("password");
    dbLogin.setConnectionString("jdbc:oracle:thin:@ip:port:sid");
    ConnectionPool readPool = new ReadConnectionPool("read", dbLogin, minCon, maxCon, event.getSession());
    event.getSession().setReadConnectionPool(readPool);
}
Best Regards,
Chris Delahunt -
What happened when using non-TX datasource connection in a Transaction?
Hi,
I am trying to update a timestamp table in a transactional EJB method invocation.
The table is in the same DB where the entity bean is stored. I get the
connection via a DataSource (non-transactional), issue the SQL statement,
and then close the connection.
My question is: is my update part of the EJB transaction? If yes, how do
I manage to get the timestamp update out of the scope of the transaction,
since I just want to timestamp no matter whether the EJB transactional method
invocation succeeds or fails? Do I need to explicitly suspend and resume
the transaction?
Thanks!
-Chunbo
Hi. Your update is non-transactional. It is not part of the EJB transaction,
and will succeed or fail at the time you do it, regardless of the eventual
fate of the EJB transaction.
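A toy illustration of this point (the Map-based "database" and staged-write buffer below are purely illustrative stand-ins, not a real JDBC or JTA API): writes staged inside the transaction disappear when it rolls back, while the write applied immediately through the non-transactional, auto-commit path sticks regardless.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model only: "committed" plays the role of the database, and the
// buffer plays the role of writes held inside an open transaction.
public class NonTxTimestampDemo {
    static Map<String, String> committed = new HashMap<>();

    public static void main(String[] args) {
        Map<String, String> txBuffer = new HashMap<>(); // transactional writes staged here

        txBuffer.put("order", "pending");               // part of the EJB transaction
        committed.put("last_attempt", "2004-01-01");    // non-tx path: applied immediately

        boolean commit = false;                         // the transaction rolls back
        if (commit) committed.putAll(txBuffer);         // staged writes would land only on commit

        System.out.println("order present: " + committed.containsKey("order"));
        System.out.println("timestamp present: " + committed.containsKey("last_attempt"));
    }
}
```

So, as the answer says, no explicit suspend/resume is needed just to make the timestamp survive; it was never inside the transaction to begin with.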
Joe -
Impact of creating a non-clustered index on a huge transaction table?
Hello Everyone,
We have a transaction table containing 10 million records, and every day a million records are inserted. We don't have a clustered index on this table, as it is a transaction table (more than 10 columns are needed to uniquely identify a row). We
do have some SPs which generate reports using this table. In order to improve the performance of one SP, we created a non-clustered index on this table, and we found a huge performance gain.
Here comes my question: will this creation of a non-clustered index impact my table's data-load performance or other report generation?
Any suggestions will be appreciated.
Many Thanks!
Rajasekhar.
Hello Rajasekhar,
First, identify this table's usage and the corresponding columns. The SP_depends system procedure will show you this table's dependencies.
Then look at the complex queries and their execution plans; the plan output can recommend appropriate missing indexes.
Now you can try to create the appropriate indexes. I always suggest limiting the index count if you are inserting/updating large volumes of records. Also, if possible, create a clustered index.
One more option: you can horizontally partition the table and move the data to multiple filegroups. Depending on the range of data, your query performance can also improve a lot.
To apply partitioning to an existing table, you should take a backup and recreate the table from scratch.
Check this link : http://www.mssqltips.com/sqlservertip/2888/how-to-partition-an-existing-sql-server-table/
Best Regards,
Ashokkumar.
Ashokkumar -
How to create loosely coupled transaction branches with DBMS_XA ?
Hello,
I use DBMS_XA to start several sessions in the same database instance participating in one global transaction. The local transaction branches are tightly coupled.
Do you know a way (Oracle 11.2) for the branches to be loosely coupled using DBMS_XA in a single instance?
Best Regards,
Gregory
For a non-XA datasource, create a JDBC connection with the data source class
oracle.jdbc.pool.OracleDataSource
For an XA datasource, create a JDBC connection with the data source class
oracle.jdbc.xa.client.OracleXADataSource -
Updating 2 schemas in 1 DB = non-emulated datasource
When updating two databases from within a single tx, you must use a non-emulated datasource (i.e., JTA support is required).
Is this true of updating two database schemas (same database) in the same tx as well? The Oracle documentation seems not to discuss this.
Regards,
Manoj.
Manoj -- My apologies. I misread the question. If you reference another schema but only use a single data source definition, then it is a 1PC. You only need to configure 2PC if you will be creating transactions across two different data sources and you want true 2PC semantics. In fact, I'm not sure that in the pure Oracle database view of the world you wouldn't have to do the same thing, but it has been a while since I looked at that. Don't forget that you can also use basic database mechanisms like synonyms and views to reference the other schema.
Thanks -- Jeff -
Help with setting up a Data Source: can't be created with non-existent Pool
I want to use Oracle 9i with WebLogic 7.
I have the following in my config.xml:
<JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
Properties="user=SYSTEM" TestTableName="OID"
URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
<JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
The console seems happy, no error messages, but in the log I get:
####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <101047>
<[ServletContext(id=4110316,name=console,context-path=/console)]
FileServlet: Using standard I/O>
####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <001082> <Creating Data Source named DB_DS for pool
Thin.Pool>
####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <001059> <Error during Data Source creation:
weblogic.common.ResourceException: DataSource(DB_DS) can't be created
with non-existent Pool (connection or multi) (Thin.Pool)
at weblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:127)
at weblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
at weblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:293)
at weblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:375)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:154)
at java.lang.reflect.Method.invoke(Native Method)
at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:732)
at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:714)
at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:417)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
at weblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(ConfigurationMBeanImpl.java:578)
at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:419)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
at $Proxy16.addDeployment(Unknown Source)
at weblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(DynamicMBeanImpl.java:1784)
at weblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java:1737)
at weblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:1715)
at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
at weblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBeanImpl.java:1711)
at weblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.java:1035)
at weblogic.management.internal.ConfigurationMBeanImpl.setAttribute(ConfigurationMBeanImpl.java:353)
at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1358)
at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1333)
at weblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanServerImpl.java:898)
at weblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
at $Proxy13.setTargets(Unknown Source)
at java.lang.reflect.Method.invoke(Native Method)
at weblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanAttribute.java:92)
at weblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMBeanAction.java:145)
at weblogic.management.console.actions.internal.ActionServlet.doAction(ActionServlet.java:171)
at weblogic.management.console.actions.internal.ActionServlet.doPost(ActionServlet.java:85)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1058)
at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:401)
at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:306)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:5445)
at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3105)
at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2588)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
Why does it say:
can't be created with non-existent Pool
Thanks,
Add a "Targets" attribute to the connection pool. You can
get an idea of how it looks by searching config.xml
for "targets". If target servers are not set, the pool won't be
deployed and cannot be used to create a datasource.
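For example (illustrative only; "myserver" is a placeholder for your actual target server name), the elements from the question with Targets attributes added would look like:

```xml
<JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
    Properties="user=SYSTEM" TestTableName="OID"
    Targets="myserver"
    URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
<JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool" Targets="myserver"/>
```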
Regards,
Slava Imeshev
"BBaker" <[email protected]> wrote in message
news:[email protected]...
I am wanting to use an Oracle 9i with WebLogic 7
I have the following in my config.xml:
<JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
Properties="user=SYSTEM" TestTableName="OID"
URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
<JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
The console seems happy, no error mesages but in the log I get:
####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <101047>
<[ServletContext(id=4110316,name=console,context-path=/console)]
FileServlet: Using standard I/O>
####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <001082> <Creating Data Source named DB_DS for pool
Thin.Pool>
####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
<ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
identity> <> <001059> <Error during Data Source creation:
weblogic.common.ResourceException: DataSource(DB_DS) can't be created
with non-existent Pool (connection or multi) (Thin.Pool)
atweblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:
127)
atweblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
atweblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:29
3)
atweblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270
atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
arget.java:375)
atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
arget.java:154)
at java.lang.reflect.Method.invoke(Native Method)
atweblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl
.java:732)
atweblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:7
14)
atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
nImpl.java:417)
atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
mpl.java:952)
atweblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(Confi
gurationMBeanImpl.java:578)
atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
nImpl.java:419)
atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
mpl.java:952)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
at $Proxy16.addDeployment(Unknown Source)
atweblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(D
ynamicMBeanImpl.java:1784)
atweblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java
:1737)
atweblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:17
15)
atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
r.java:780)
atweblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBean
Impl.java:1711)
atweblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.
java:1035)
atweblogic.management.internal.ConfigurationMBeanImpl.setAttribute(Configurati
onMBeanImpl.java:353)
atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:135
8)
atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:133
3)
atweblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanS
erverImpl.java:898)
atweblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
at $Proxy13.setTargets(Unknown Source)
at java.lang.reflect.Method.invoke(Native Method)
atweblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanA
ttribute.java:92)
atweblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMB
eanAction.java:145)
atweblogic.management.console.actions.internal.ActionServlet.doAction(ActionSe
rvlet.java:171)
atweblogic.management.console.actions.internal.ActionServlet.doPost(ActionServ
let.java:85)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
atweblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(Servle
tStubImpl.java:1058)
atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
:401)
atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
:306)
atweblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(W
ebAppServletContext.java:5445)
atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
r.java:780)
atweblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletCo
ntext.java:3105)
atweblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java
:2588)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
Why does it say:
can't be created with non-existent Pool
Thanks, -
Error While Creating Material Classification Characteristic DataSource -
Hi All,
I am creating the Material Classification DataSource 1CL_OMAT001 with CTBW in R/3 with the following info:
Basis DataSource = 0MATERIAL_ATTR; Client = 200; Class Type = 001;
Obj table = MARA; DataSource Type = ATTR, DataSource = 1CL_OMAT001
I have added all the characteristics with 'New Entries' button and the following are the details after saving it:
DataSource = 1CL_OMAT001
Characteristic Name = ABCDXYZ
Item No = ' ' (blank)
Maintain Status = N
Characteristic = 1CL_AABC001
Typ = CHA
While adding MATERIAL_GROUP as a characteristic, it got an error because of a Reference Characteristic, since it references table MARA. In the end, I didn't add it.
When I click the 'Generate' button to generate the DataSource, it gives the error <b>'BW Metadata is incomplete'</b> (CLBW008).
What is wrong, and how can I fix this problem, please?
Thanks in advance.
I am not too sure, but this might be the cause of the error: the data doesn't need to be in client 100, but the characteristic needs to be defined in the class, if I remember correctly.
To do a quick check, open report program RCLBW_STRUCTURE_GEN in SE38; there is a call to FM RSA1_SINGLE_OLTPSOURCE_GET (around line 169). Put a breakpoint there in the editor (display mode).
Now run your CTBW generation; it should stop at the breakpoint. Check the value of field GT_CLBW_SOURCES-IOBJNM in the debugger; it will likely be the characteristic name.
Now go to SE37, do a single test of FM RSA1_SINGLE_OLTPSOURCE_GET, specify this same value for field I_OLTPSOURCE, and execute. You will likely get a non-zero sy-subrc.
Repeat this in client 200 and check sy-subrc.
If you get sy-subrc 0 in 200 and non-zero in 100, you will need either to create the classification in 100 or to generate it in 200.
If this doesn't work, let us know. -
[Solved] if(Transaction specified for a non-transactional database) then
I am getting started with BDB XML 2.4.14 transactions and XQuery update functionality, and I am having some difficulty with 'node insert ...': transactions fail with 'Transaction specified for a non-transactional database'.
Thanks for helping out.
Setup:
I have coded up a singleton manager for the XmlManger with a ThreadLocal holding the transaction and a query method to execute XQueries. The setup goes like this:
environmentConfig = new EnvironmentConfig();
environmentConfig.setRunRecovery(true);
environmentConfig.setTransactional(true);
environmentConfig.setAllowCreate(true);
environmentConfig.setInitializeCache(true);
environmentConfig.setTxnMaxActive(0);
environmentConfig.setInitializeLocking(true);
environmentConfig.setInitializeLogging(true);
environmentConfig.setErrorStream(System.err);
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
environmentConfig.setJoinEnvironment(true);
environmentConfig.setThreaded(true);

xmlManagerConfig = new XmlManagerConfig();
xmlManagerConfig.setAdoptEnvironment(true);
xmlManagerConfig.setAllowAutoOpen(true);
xmlManagerConfig.setAllowExternalAccess(true);

xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setAllowValidation(false);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setNodeContainer(true);

// initialize
instance.xmlManager = new XmlManager(instance.getEnvironment(), instance.getXmlManagerConfig());
instance.xmlContainer = instance.xmlManager.openContainer(containerName, instance.getXmlContainerConfig());
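One thing worth double-checking in the setup above (an assumption on my part, not something visible in the snippet): in BDB XML the container itself must also be opened transactionally, otherwise passing an XmlTransaction to operations on it can fail with exactly the "Transaction specified for a non-transactional database" error. A sketch against the 2.4 Java API, as a config fragment:

```java
// Hypothetical addition to the container config above: mark the container
// transactional so operations that take an XmlTransaction are legal.
xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setAllowValidation(false);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setNodeContainer(true);
xmlContainerConfig.setTransactional(true); // key flag: container opened for transactional use
```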
private ThreadLocal<XmlTransaction> transaction = new ThreadLocal<XmlTransaction>();

public XmlTransaction getTransaction() throws Exception {
    if (transaction.get() == null) {
        XmlTransaction t = xmlManager.createTransaction();
        log.info("Transaction created, id: " + t.getTransaction().getId());
        transaction.set(t);
    } else if (log.isDebugEnabled()) {
        log.debug("Reusing transaction, id: "
                + transaction.get().getTransaction().getId());
    }
    return transaction.get();
}

private XmlQueryContext createQueryContext(String docName) throws Exception {
    XmlQueryContext context = xmlManager.createQueryContext(
            XmlQueryContext.LiveValues, XmlQueryContext.Lazy);
    List<NamespacePrefix> namespacePrefixs = documentPrefixes.get(docName);
    // declare ddi namespaces
    for (NamespacePrefix namespacePrefix : namespacePrefixs) {
        context.setNamespace(namespacePrefix.getPrefix(), namespacePrefix.getNamespace());
    }
    return context;
}

public XmlResults xQuery(String query) throws Exception {
    XmlQueryExpression xmlQueryExpression = null;
    // docName is a field of the manager (elided in the original post)
    XmlQueryContext xmlQueryContext = createQueryContext(docName);
    try {
        xmlQueryExpression = xmlManager.prepare(getTransaction(), query, xmlQueryContext);
        log.info(query);
    } catch (Exception e) {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        throw new DDIFtpException("Error prepare query: " + query, e);
    }
    XmlResults rs = null;
    try {
        rs = xmlQueryExpression.execute(getTransaction(), xmlQueryContext);
    }
    // catch deadlock and implement retry
    catch (Exception e) {
        throw new DDIFtpException("Error on query execute of: " + query, e);
    } finally {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        xmlQueryExpression.delete();
    }
    return rs;
}
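The ThreadLocal pattern used for the transaction above can be sketched in isolation (the StubTxn class is a stand-in for XmlTransaction, purely illustrative): each thread lazily creates its own instance on first access and then reuses it, which is exactly why the log shows "Reusing transaction" with the same id within one thread.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates per-thread lazy creation and reuse with ThreadLocal.
// StubTxn stands in for XmlTransaction; ids just make instances distinguishable.
public class ThreadLocalTxnDemo {
    static final AtomicInteger ids = new AtomicInteger();
    static class StubTxn { final int id = ids.incrementAndGet(); }

    static final ThreadLocal<StubTxn> txn = ThreadLocal.withInitial(StubTxn::new);

    public static void main(String[] args) throws InterruptedException {
        StubTxn first = txn.get();
        StubTxn second = txn.get();               // reused on the same thread
        System.out.println("reused: " + (first == second));

        Thread other = new Thread(() ->           // a different thread gets its own instance
            System.out.println("same as main's: " + (txn.get() == first)));
        other.start();
        other.join();
    }
}
```

A consequence worth noting for the original code: because the transaction lives as long as the thread, it is never committed or aborted unless some other code path does so explicitly.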
<?xml version="1.0" encoding="UTF-8"?>
<Test version="0.1">
<Project id="test-project" agency="dda">
<File id="large-doc.xml" type="ddi"/>
<File id="complex-doc.xml" type="ddi"/>
</Project>
<Project id="2nd-project" agency="test.org"/>
</Test>
Problem:
All the queries are run through the xQuery method, and I do delete the XmlResults afterwards. How do I get around the 'Transaction specified for a non-transactional database' error? What are the transactions doing? How do I get state information out of a transaction? What am I doing wrong here?
1 First I insert a node:
Transaction created, id: -2147483647
Adding document: large-doc.xml to xml container
Reusing transaction, id: -2147483647
Working doc: ddieditor.xml
Root element: Test
Reusing transaction, id: -2147483647
insert nodes <Project id="JUnitTest" agency="test.org"></Project> into doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test
Reusing transaction, id: -2147483647
2 Then do a query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
3 The same query again:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
4 Delete a node:
Reusing transaction, id: -2147483647
delete node for $x in doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project where $x/@id = '2nd-project' return $x
Reusing transaction, id: -2147483647
5 Then an error on query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
Transaction specified for a non-transactional database
com.sleepycat.dbxml.XmlException: Error: Invalid argument, errcode = DATABASE_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlResults_hasNext(Native Method)
at com.sleepycat.dbxml.XmlResults.hasNext(XmlResults.java:136)
Message was edited by:
jannikj
Ok, got it solved by increasing the locks, lockers, and mutexes. I also increased the log buffer size:
environmentConfig = new EnvironmentConfig();
// general environment
environmentConfig.setAllowCreate(true);
environmentConfig.setRunRecovery(true); // light recovery on startup
//environmentConfig.setRunFatalRecovery(true); // heavy recovery on startup
environmentConfig.setJoinEnvironment(true); // reuse of environment: ok
environmentConfig.setThreaded(true);
// log subsystem
environmentConfig.setInitializeLogging(true);
environmentConfig.setLogAutoRemove(true);
environmentConfig.setLogBufferSize(128 * 1024); // default 32KB
environmentConfig.setInitializeCache(true); // shared memory region
environmentConfig.setCacheSize(250 * 1024 * 1024); // 250 MB cache
// transaction
environmentConfig.setTransactional(true);
environmentConfig.setTxnMaxActive(0); // live forever, no timeout
// locking subsystem
environmentConfig.setInitializeLocking(true);
environmentConfig.setMutexIncrement(22);
environmentConfig.setMaxMutexes(200000);
environmentConfig.setMaxLockers(200000);
environmentConfig.setMaxLockObjects(200000); // default 1000
environmentConfig.setMaxLocks(200000);
// deadlock detection
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
The Oracle docs give only limited information about the impact of these settings and their options. Can you guys point me in a direction where I can find some written answers or hands-on guidance? -
NON-transactional session bean access entity bean
We are currently profiling our product using the Borland OptimizeIt tool, and we
found some interesting issues. Due to our design, we have many session beans which
are non-transactional, and these session beans access entity beans for read-only
operations such as getWeight and getRate. Since the access is read only, there is
no need for the transaction commit overhead, which really takes time, as the
profile clearly shows. I know WebLogic supports read-only entity beans, but it
seems that only benefits the ejbLoad call; my test program shows that WebLogic
still creates a local transaction even when I specify the method as transaction
not-supported, and Transaction.commit() is always called in postInvoke(). From
the profile, we see that for a single method call such as getRate(), 80% of the
time is spent in postInvoke(). Any suggestion on this? BTW, most of our entity
beans use an Exclusive lock; that is the reason we use non-transactional session
beans, to avoid deadlock problems.
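To make the concurrency cost of Exclusive locking concrete: an exclusive lock serializes even pure readers, while a read/write lock lets them proceed in parallel. Below is a plain-Java analogy using only java.util.concurrent; it sketches the locking behavior in question, not WebLogic's actual container internals.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ExclusiveVsReadWrite {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock exclusive = new ReentrantLock();
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        CountDownLatch holding = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        // a "reader" thread holds both kinds of lock
        Thread reader = new Thread(() -> {
            exclusive.lock();
            rw.readLock().lock();
            holding.countDown();
            try { release.await(); } catch (InterruptedException ignored) {}
            rw.readLock().unlock();
            exclusive.unlock();
        });
        reader.start();
        holding.await();

        // a second reader cannot acquire the exclusive lock...
        System.out.println("exclusive available: " + exclusive.tryLock());   // false
        // ...but can share the read lock with the first reader
        boolean gotRead = rw.readLock().tryLock();
        System.out.println("read lock shared:    " + gotRead);               // true
        if (gotRead) rw.readLock().unlock();

        release.countDown();
        reader.join();
    }
}
```

With Exclusive concurrency every getRate() call behaves like the first case: readers queue up behind one another even though none of them writes.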
Thanks
Slava,
Thanks for the link; actually I read it before, and the following is what I extracted
from the doc:
<weblogic-doc>
Do not set db-is-shared to "false" if you set the entity bean's concurrency
strategy to the "Database" option. If you do, WebLogic Server will ignore the
db-is-shared setting.
</weblogic-doc>
Thanks
"Slava Imeshev" <[email protected]> wrote:
Hi Jinsong,
You may want to read this to get more detailed explanation
on db-is-shared (cache-between-transactions for 7.0):
http://e-docs.bea.com/wls/docs61/ejb/EJB_environment.html#1127563
Let me know if you have any questions.
Regards,
Slava Imeshev
"Jinsong HU" <[email protected]> wrote in message
news:[email protected]...
Thanks.
But the db-is-shared setting is still not clear to me: if I specify the entity
lock as database lock, I assume db-is-shared is useless, because for each new
transaction the entity bean will reload its data anyway. Correct me if I am wrong.
Jinsong
"Slava Imeshev" <[email protected]> wrote:
Jinsong,
See my answers inline.
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
Hi Slava,
Thanks for your reply. Actually, I agree with you: we need to review our db
schema and separate business logic to avoid db locks. I cannot just say, guys,
we need to change this and that, since it's a big application, developed since
the EJB 1.0 spec; I think they are afraid to make such a big change.
Total rewrite is the worst thing that can happen to an app. The
better approach would be identifying the most critical piece and
making a surgery on it.
Following are questions in my mind:
(1) I think there should be many companies using WebLogic Server to develop
large enterprise applications; I am just wondering what's the main
transaction/lock mechanism that is used? Transactional session / database lock,
db-is-shared entity
I can't say for the whole community; in my experience the standard
usage pattern is session facades calling entity EJBs with the
Required TX attribute, plus plain transacted JDBC calls for bulk
reads or inserts.
is the dominant one? It seems that if you specify database lock, the
db-is-shared should be true, right?
Basically it's not true. One will need db-is-shared only if there are changes
to the database done from outside of the app server.
(2) For an RO bean, if I specify read-idle-timeout to 0, it should only
load once, at first use, right?
I assume read-timeout-seconds was meant. That's right, but if
an application constantly reads new RO data, RO beans will be
constantly dropped from cache and new ones will be loaded.
You may want to look at the server console to see if there's a lot
of passivation for RO beans.
(3) For the clustering part, has anyone used it in a real enterprise application?
My concern: since database lock is the only way to choose, what about the effect
of ejbLoad on performance? Since most transactions are short-lived, if high-volume
transactions are in processing, I am just scared to death about the
ejbLoad overhead.
ejbLoad is a part of the bean's lifecycle; how would you be scared of it?
If ejbLoads take too much time, it could be a good idea to profile the
SQL used. The right index optimization can make a huge difference.
Also you may want to consider using CMP beans to let WebLogic
take care of load optimization.
(4) If using Optimistic lock, every ejbStore needs to do a version
or timestamp check, right? What about this overhead?
As for optimistic concurrency, it performs quite well as you can
use lighter isolation levels.
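The version check mentioned in (4) is typically one extra predicate on the UPDATE plus an increment. Here is a minimal plain-Java sketch of the pattern; the VersionedRow class is invented for illustration and is not WebLogic's actual ejbStore machinery.

```java
public class OptimisticLockDemo {
    static class StaleDataException extends RuntimeException {}

    /** A row with a version column, as an optimistic-locking store sees it. */
    static class VersionedRow {
        private int version = 0;
        private String value = "initial";

        /** Mirrors "UPDATE t SET value = ?, version = version + 1
         *  WHERE id = ? AND version = ?": fails if someone updated in between. */
        synchronized void update(String newValue, int expectedVersion) {
            if (version != expectedVersion) throw new StaleDataException();
            value = newValue;
            version++;
        }
        synchronized int version() { return version; }
        synchronized String value() { return value; }
    }

    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        int seen = row.version();            // both "transactions" read version 0
        row.update("first writer", seen);    // succeeds, bumps version to 1
        try {
            row.update("second writer", seen);   // stale: version is now 1
        } catch (StaleDataException e) {
            System.out.println("second writer must re-read and retry");
        }
    }
}
```

The overhead per store is one comparison and one increment, which is why optimistic schemes pair well with lighter isolation levels.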
HTH,
Slava Imeshev
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
We are using Exclusive Lock for entity beans because we do not want to load
data in each new transaction. If we use Database lock, that means we delegate
data access calls to the database; if a database deadlock happens, it's hard to
detect, while using Exclusive lock we could detect this deadlock at the
container level.
The problem is, using Exclusive concurrency mode you serialize
access to data represented by the bean. This approach has a negative
effect on the ability of the application to process concurrent requests. As a
result the app may have performance problems under load.
Actually, at the beginning, we did use database lock and transactional
The fact that you had database deadlocking issues tells that the
application logic / database schema may need some review.
Normally, to avoid deadlocking it's good to group database
operations, mixing updates and inserts into one place, so
that the db locking sequence is not spread out in time. Moving to
forced serialized data access just hides design/implementation
problems.
session beans, but the database deadlocks and frequent ejbLoads really killed
us, so we decided to move to Exclusive lock, and to avoid deadlock we changed
some session beans to non-transactional.
Making session beans non-transactional makes the container
create short-lived transactions for each call to entity bean
methods. It's a costly process and it puts additional load on
both the container and the database.
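The advice about keeping the db locking sequence together is essentially the classic lock-ordering rule: every transaction acquires its locks in the same global order, so no cycle can form. A plain-Java sketch follows; the Account class and its ids are invented, and in the database case the same idea means touching tables and rows in a consistent order in every transaction.

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    static class Account {
        final int id;                       // global ordering key
        final ReentrantLock lock = new ReentrantLock();
        int balance;
        Account(int id, int balance) { this.id = id; this.balance = balance; }
    }

    /** Always lock the lower-id account first, so two concurrent
     *  transfers in opposite directions can never deadlock. */
    static void transfer(Account from, Account to, int amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100), b = new Account(2, 100);
        // opposite-direction transfers: would deadlock without the ordering rule
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(a.balance + b.balance);   // 200
    }
}
```

If instead each thread locked "its" first account first, the two threads could each hold one lock and wait forever for the other, which is exactly the wait-for cycle a database deadlock detector reports.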
We could use ReadOnly lock for some entity beans, but since WebLogic Server
will always create a local transaction for an entity bean, and we found
transaction commit is expensive, I am asking why we need to create a
container-level transaction for a read-only bean.
First, read-only beans still need to load data. Also, you may see RO beans
constantly loading data if db-is-shared is set to true. Another reason can be
that RO semantics are not applicable to the data presented by the RO bean (for
instance, you have a reporting engine that constantly produces "RO" data, while
the application consuming that data retrieves only new data and never asks
for "old" data). RO beans are good when there is relatively stable data
accessed repeatedly for read-only access.
You may want to tell us more about your app, we may be of help.
Regards,
Slava Imeshev
I will post the performance data; let's see how costly Transaction.commit is.
"Cameron Purdy" <[email protected]> wrote:
I am worried that you have made some decisions based on an improper
understanding of what WebLogic is doing.
First, you say "non transactional", but from your description you should
have those marked as tx REQUIRED to avoid multiple transactions (since
non-transactional just means that each database operation becomes its own
little transaction).
Second, you say you are using exclusive lock, which you should only use if
you are absolutely sure that you need it (and note that it does not work in
a cluster).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
-
Send message in a non-transactional context
I've coded a simple Log4j appender to send error messages to a queue (the JMSAppender provided by Log4j is not good for me because it uses a publish/subscribe model with topics instead of queues). The appender has to work in a non-transactional context, so it must always deliver the message to the queue even if the global transaction rolls back. I've created the queue session with the transacted parameter set to false and acknowledgeMode set to AUTO_ACKNOWLEDGE, but the message is not delivered to the queue if the global transaction, started by an EJB stateless session bean, rolls back. In my environment (IBM WebSphere and IBM MQ), the queue connection factory is configured as a non-XA resource.
Any suggestions?
Below the source code.
Thanks in advance
context = new InitialContext();
// look up the (non-XA) queue connection factory configured in WebSphere
queueFactory = (QueueConnectionFactory) context.lookup(getQueueConnectionFactoryBindingName());
queueConnection = queueFactory.createQueueConnection();
// transacted = false, auto-acknowledge: the session itself is non-transacted
queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
queue = (Queue) context.lookup(getQueueBindingName());
queueSender = queueSession.createSender(queue);
queueConnection.start();
Rob
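Even though the session above is created non-transacted, the symptom (message lost on rollback) suggests the send is still being enlisted in the surrounding unit of work. The toy model below, in plain Java with an invented ToyTransaction class and no JMS or WebSphere APIs, shows the difference that enlistment makes: an enlisted send disappears on rollback, while a genuinely non-transactional send survives.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model of a global transaction that buffers enlisted work
 *  and discards it on rollback (the behavior seen when the
 *  connection factory participates in the transaction). */
class ToyTransaction {
    private final List<Runnable> enlisted = new ArrayList<>();
    void enlist(Runnable work) { enlisted.add(work); }
    void commit()   { enlisted.forEach(Runnable::run); enlisted.clear(); }
    void rollback() { enlisted.clear(); }
}

public class NonTransactedSend {
    public static void main(String[] args) {
        List<String> queue = new ArrayList<>();   // stands in for the MQ queue
        ToyTransaction tx = new ToyTransaction();

        // enlisted send: discarded when the transaction rolls back
        tx.enlist(() -> queue.add("enlisted message"));
        // direct (non-transactional) send: delivered immediately
        queue.add("direct message");

        tx.rollback();
        System.out.println(queue);   // [direct message]
    }
}
```

In other words, for the log message to survive a rollback, the send must happen on a path that the transaction manager never enlists.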
-
Concept: How can I handle a hole in a non-sap datasource?
Hello,
I want to discuss a problem with a non-SAP datasource:
We load data from an Oracle db with the DB Connect technique. Each record we load has an idate (record created) and a udate (record changed). Based on the udate we create a kind of delta load:
The udate is the selection criterion of the InfoPackage (full upload).
In the start routine of the transfer rule we detect the oldest udate and store it in the TVARC table. This udate is the low selection criterion for the udate of the next load. So we reload only the records which were changed after the last load.
The data are transferred to an ODS (overwrite) and so on.
That works perfectly!
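That udate watermark logic can be modeled compactly in plain Java. The Rec class and loadDelta method below are invented names; the poster persists the watermark in TVARC and uses the oldest udate of the load as a conservative watermark, while this sketch uses the maximum for simplicity.

```java
import java.util.ArrayList;
import java.util.List;

public class UdateDelta {
    static class Rec {
        final String key;
        final long udate;           // "changed on" timestamp from the source
        Rec(String key, long udate) { this.key = key; this.udate = udate; }
    }

    /** Select only records changed since the stored watermark and
     *  return the new watermark for the next run. */
    static long loadDelta(List<Rec> source, long watermark, List<Rec> target) {
        long next = watermark;
        for (Rec r : source) {
            if (r.udate > watermark) {   // the "low selection criterion" on udate
                target.add(r);           // overwritten into the ODS downstream
                next = Math.max(next, r.udate);
            }
        }
        return next;
    }

    public static void main(String[] args) {
        List<Rec> src = List.of(new Rec("A", 10), new Rec("B", 20), new Rec("C", 30));
        List<Rec> ods = new ArrayList<>();
        long wm = loadDelta(src, 15, ods);   // only B and C qualify
        System.out.println(ods.size() + " records loaded, new watermark " + wm);
    }
}
```

The weakness described next follows directly from this model: a row deleted in the source simply never appears in `source` again, so no udate comparison can ever notice it.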
But now we find out that in this non-SAP datasource it's possible to delete records directly. In SAP we usually have the procedure that a reversal document is created to delete a record (for example FI documents). In effect, this non-SAP datasource creates a hole in the database: a record is deleted -> no udate change -> no change in BW. That means the record is still in BW!
Did you have a similar problem? Or do you have an idea how we could fix this problem?
I cannot load all data every day by full upload; that needs too much time (more than 2 million records from five different datasources).
Thank you for your attention
Ralf
Hello,
To close my post, here is my solution:
- ODS A is filled by delta using the UDATE field.
- ODS B is filled by full upload. It contains only the key fields of ODS A (the load needs
only 20 minutes).
- ODS C is filled from ODS A (key fields only) per full upload. In the start routine of the
update rule there is a check: delete all data which are in ODS B.
Result: ODS C contains only the data which have to be deleted in ODS A.
- Full upload ODS C to ODS A. Set the recordmode to 'R' in the start routine.
Result: these records are deleted from ODS A and the change log of ODS A is
updated.
Before the next load the content of ODS B and C is deleted.
(cool picture:)
..........CUBE X
..............|
.(delta)....|
..............|........ODS C(diff. A - B)
..............|........|....|
..............|..(R).|....|.\ (check)
..............|........|....|..\
............ODS A....ODS B(keys)
.(delta)...|................| (full)
..........non sap systems
It follows the principle of Sudhi's idea. But I did not do it with the PSA, because I have five different DataSources, so I would need five calls to identify the records which have to be deleted and five calls to edit the PSA ...
This way I load everything into the ODS and run the procedure only once.
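Viewed abstractly, ODS C is a key-set difference: keys present in ODS A (the delta-maintained copy) but missing from ODS B (a fresh full load of source keys) must have been deleted in the source. A plain-Java sketch of that one step, with class and method names invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

public class DeletedKeyDetection {
    /** ODS C = keys in odsA that no longer exist in the source (odsB). */
    static Set<String> deletedKeys(Set<String> odsA, Set<String> odsB) {
        Set<String> odsC = new HashSet<>(odsA);
        odsC.removeAll(odsB);   // what remains must be deleted (recordmode 'R')
        return odsC;
    }

    public static void main(String[] args) {
        Set<String> odsA = Set.of("doc1", "doc2", "doc3");   // BW copy
        Set<String> odsB = Set.of("doc1", "doc3");           // current source keys
        System.out.println(deletedKeys(odsA, odsB));         // [doc2]
    }
}
```

Because only key fields are compared, the full load of ODS B stays cheap (the 20 minutes mentioned above) even for millions of records.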
All the best Ralf -
How to create new RF transaction in R/3?
Hi Guys,
Can anyone tell me how to create new RF transaction In WM. If possible step by step.
Thanks heeps.
Narahari
Hi again,
well if you need it for an infoObject ZMATERIAL,
Create the infoObject ZMATERIAL with reference to 0MATERIAL and you will have all 0MATERIAL hierarchies available for ZMATERIAL.
Nothing additional necessary.
The creation of similar Hierarchy-Extractors in R/3 is, as far as I know, not possible.
The proper order for the FF-Elements when uploading the self-defined hierarchy depends on the hierarchy-settings in the DataSource, for instance for sorted hierarchies:
Node ID NODEID NUMC 8
InfoObject Name INFOOBJECT CHAR 30
Node Name NODENAME CHAR 32
Link Name LINK CHAR 1
Parent Node PARENTID NUMC 8
First Subnode CHILDID NUMC 8
Next Node Along NEXTID NUMC 8
Language Key LANGU CHAR 1
Description - Short TXTSH CHAR 20
Description - Medium TXTMD CHAR 40
Description - Long TXTLG CHAR 60
Look also for the Function module for the FTP in se38 by FTP.
hth
cheers
sven -
JDev 9.0.3.3 Commit issue when using Transaction DataSource
Hi,
Env: JDev 9.0.3.3/WL 6.0 sp1/Oracle 8i
We have successfully deployed our application in 3-tier(remote mode) in JDev 9.0.3.2. using JClient, EO/VO, EJB Session Facade (BMT).
Now we are planning to use JDev 9.0.3.3
In JDev 9033 with the same code base we have issues after Commit.
We are using ejb.txn.type=global (default) and Weblogic Transactional DataSource. It gives following error after Committing and when we navigate to some other row:
=====================================================
[690] BaseSQLBuilder Executing DML ... (Update)
[691] Executing DML...
[692] UPDATE CISDBA.DCX_X_SAVED_SEARCH SavedSearch SET DCX_MODEL_YEAR=?,DCX_USAGE_VEHICLE_FAMILY=?,DCX_END_ITEMS=?,DCX_RECORD_STATUS=? WHERE DCX_ID=?
[693] cStmt = conn.prepareCall(" UPDATE CISDBA.DCX_X_SAVED_SEARCH SavedSearch SET DCX_MODEL_YEAR=?,DCX_USAGE_VEHICLE_FAMILY=?,DCX_END_ITEMS=?,DCX_RECORD_STATUS=? WHERE DCX_ID=?"); // JBO-JDBC-INTERACT
[694] cStmt.setObject(1, "2004"); /*DcxModelYear*/ // JBO-JDBC-INTERACT
[695] cStmt.setObject(2, "LX"); /*DcxUsageVehicleFamily*/ // JBO-JDBC-INTERACT
[696] cStmt.setObject(3, "All"); /*DcxEndItems*/ // JBO-JDBC-INTERACT
[697] cStmt.setObject(4, "All"); /*DcxRecordStatus*/ // JBO-JDBC-INTERACT
[698] // ERROR: Unknown data type java.lang.Long // JBO-JDBC-INTERACT
[699] cStmt.setObject(5, "206"); /*DcxId*/ // JBO-JDBC-INTERACT
[700] cStmt.execute(); // JBO-JDBC-INTERACT
[701] cStmt.close(); // JBO-JDBC-INTERACT
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
[702] BaseSQLBuilder: releaseSavepoint 'BO_SP' ignored
[703] BaseSQLBuilder: setSavepoint 'BO_SP' ignored
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
BaseCostInvestCost VO before postChanges...
this.getWhereClause(): null
isDirty() before executeQuery...
this.getWhereClause(): null
isDirty() after executeQuery...
[704] BaseSQLBuilder: releaseSavepoint 'BO_SP' ignored
[705] EJBTxnHandler: Commited txn
[706] BaseCostInvestCostView2 notify COMMIT ...
[707] BaseCostInvestCostView1 notify COMMIT ...
[708] StdCostView1 notify COMMIT ...
[709] AltCostView1 notify COMMIT ...
[710] PaperCarView1 notify COMMIT ...
[711] InvestCostItemView1 notify COMMIT ...
[712] SavedSearchView1 notify COMMIT ...
[713] AltCostView1_BaseInvestToAltViewLink_AltCostView notify COMMIT ...
[714] InvestCostItemView1_BaseInvestToInvestItemViewLink_InvestCostItemView notify COMMIT ...
[715] PaperCarView_BaseCostTrackedVehicleViewLink_PaperCarView notify COMMIT ...
[716] VehicleProgramLOV1 notify COMMIT ...
[717] SubDeptLOV1 notify COMMIT ...
[718] Transaction timeout set to 28800 secs
[719] Column count: 14
[720] ViewObject : Reusing defined prepared Statement
[721] QueryCollection.executeQuery failed...
[722] java.sql.SQLException: The transaction is no longer active (status = Committed). No further JDBC access is allowed within this transaction.
void weblogic.jdbcbase.jts.Connection.checkIfRolledBack()
Connection.java:468
void weblogic.jdbcbase.jts.Statement.setMaxRows(int)
Statement.java:179
void weblogic.jdbc.rmi.internal.StatementImpl.setMaxRows(int)
StatementImpl.java:82
void weblogic.jdbc.rmi.SerialStatement.setMaxRows(int)
SerialStatement.java:132
void oracle.jbo.server.QueryCollection.executeQuery(java.lang.Object[], int)
QueryCollection.java:534
void
===================================================
Hi Carsten,
I tried to reproduce your problem, but couldn't. Let me explain what steps I executed and perhaps you can advise where I've not matched your steps.
--Using build jdeveloper 9.0.3.3.1203, I built a new bc4j project containing a dept-emp default bc4j project (deptEntity, empEntity, deptView, empView, deptempFKAssoc, deptempFKViewLink, ApplicationModule).
--In dos shell, I went to the directory \jdevdir\jdev\bin and ran setvars -go to set the correct jdk version
--In the dos shell, in the directory \jdevdir\j2ee\home I executed the following command to install oc4j:
java -jar oc4j.jar (defaults pswd to welcome for admin)
--I remoted the appmodule to EJB Session Bean (BMT) and created a new deployment profile using the 9ias configuration for the application module.
--I deployed the bc4j objects to oc4j
--I created a new project
--In this project I created a new jclient master-detail form using the above project's application module for the data model
--I saved all and compiled the jclient project
--I ran the jclient form and inserted a master record
--I committed the transaction successfully
--I browsed records, then edited a record
--I committed the transaction successfully, then browsed.
Is there something I've missed? Did you migrate your project and not start by creating a new project? Is there something special about the database schema you are using?
Thanks,
Amy