Kodo 4.0
We would also like to know the tentative release date for 4.0. There
are features in JDO 2.0 which make an upgrade worth the time and effort.
Since EA3 is out, I am assuming the final release is not too far away,
but a roadmap or tentative dates for future releases would help.
I agree with Daune. We are not asking for an official commitment here; we
are looking for an estimate. SolarMetric has done a great job of keeping
up to speed with the JDO features by providing them as preview features,
but we would like a rough estimate of when things will change. I am
interested in the following:
1. Deletion by query
2. Fetch plan configuration
3. JDO 2.0 compliance in general
On a somewhat different note:
4. Enhancements to the Workbench. It would help a LOT if the Workbench had
more hot keys (minimizing and maximizing the JDOQL editor and hiding the
log pane by mouse takes forever). I figured out some of the hot keys by
chance; for instance, I didn't know auto-completion was provided, although
I did find it in the docs later.
J-F Daune wrote:
As customers, it is important for us to know at least estimates, so
that we can organize the release dates of our own products and inform our
own customers.
I can hardly imagine turning to them and saying I have no idea when the
next release will be out.
We don't want you to 'sign in blood' on a roadmap; we just want to know
whether it will be next month or more than six months away.
Regards,
J-F
It will depend on a variety of factors, so we cannot commit to a
roadmap or tentative release date. Note that JDO 2 has not been finalized
yet, so that will be the first hurdle.
Khurram wrote:
We would also like to know the tentative release date for 4.0. There
are features in JDO 2.0 which make an upgrade worth the time and effort.
Since EA3 is out, I am assuming the final release is not too far away,
but a roadmap or tentative dates for future releases would help.
Similar Messages
-
Error while configuring kodo to lookup a datasource
We had a working application using EEPersistenceManagerFactory.
I changed kodo.properties to look up a non-XA JDBC datasource.
After that the application works fine (it creates, updates, deletes, and
finds records in the DB), but SystemOut.log shows the following error for
every operation.
We are using Kodo 2.5.0, WebSphere 5.0, and Oracle 8.
How can we avoid this error? We tried to find a property on the WebSphere
datasource that could be altered to avoid it, but no luck.
Thanks
Paresh
[10/7/03 15:30:45:467 IST] 3d8b2d1a MCWrapper E J2CA0081E: Method destroy failed while trying to execute method destroy on ManagedConnection com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl@437f6d23 from resource <null>. Caught exception: com.ibm.ws.exception.WsException: DSRA0080E: An exception was received by the Data Store Adapter. See original exception message: Cannot call 'cleanup' on a ManagedConnection while it is still in a transaction..
at com.ibm.ws.rsadapter.exceptions.DataStoreAdapterException.<init>(DataStoreAdapterException.java:222)
at com.ibm.ws.rsadapter.exceptions.DataStoreAdapterException.<init>(DataStoreAdapterException.java:172)
at com.ibm.ws.rsadapter.AdapterUtil.createDataStoreAdapterException(AdapterUtil.java:182)
at com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.cleanupTransactions(WSRdbManagedConnectionImpl.java:1826)
at com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.destroy(WSRdbManagedConnectionImpl.java:1389)
at com.ibm.ejs.j2c.MCWrapper.destroy(MCWrapper.java:1032)
at com.ibm.ejs.j2c.poolmanager.FreePool.returnToFreePool(FreePool.java:259)
at com.ibm.ejs.j2c.poolmanager.PoolManager.release(PoolManager.java:777)
at com.ibm.ejs.j2c.MCWrapper.releaseToPoolManager(MCWrapper.java:1304)
at com.ibm.ejs.j2c.ConnectionEventListener.connectionClosed(ConnectionEventListener.java:195)
at com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.processConnectionClosedEvent(WSRdbManagedConnectionImpl.java:843)
at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.closeWrapper(WSJdbcConnection.java:569)
at com.ibm.ws.rsadapter.jdbc.WSJdbcObject.close(WSJdbcObject.java:132)
at com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.close(SQLExecutionManagerImpl.java:814)
at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.release(JDBCStoreManager.java(Inlined Compiled Code))
at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java(Compiled Code))
at com.solarmetric.kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java(Compiled Code))
at com.solarmetric.kodo.runtime.StateManagerImpl.preSerialize(StateManagerImpl.java:784)
at com.paresh.core.vo.Release.jdoPreSerialize(Release.java)
at com.paresh.core.vo.Release.writeObject(Release.java)
at java.lang.reflect.Method.invoke(Native Method)
at com.ibm.rmi.io.IIOPOutputStream.invokeObjectWriter(IIOPOutputStream.java:703)
at com.ibm.rmi.io.IIOPOutputStream.outputObject(IIOPOutputStream.java:671)
at com.ibm.rmi.io.IIOPOutputStream.simpleWriteObject(IIOPOutputStream.java:146)
at com.ibm.rmi.io.ValueHandlerImpl.writeValueInternal(ValueHandlerImpl.java:217)
at com.ibm.rmi.io.ValueHandlerImpl.writeValue(ValueHandlerImpl.java:144)
at com.ibm.rmi.iiop.CDROutputStream.write_value(CDROutputStream.java:1590)
at com.ibm.rmi.iiop.CDROutputStream.write_value(CDROutputStream.java:1107)
at com.paresh.core.interfaces._EJSRemoteStatelessValidation_da16513c_Tie.findCorrectionAction(_EJSRemoteStatelessValidation_da16513c_Tie.java:309)
at com.paresh.core.interfaces._EJSRemoteStatelessValidation_da16513c_Tie._invoke(_EJSRemoteStatelessValidation_da16513c_Tie.java:104)
at com.ibm.CORBA.iiop.ServerDelegate.dispatchInvokeHandler(ServerDelegate.java:582)
at com.ibm.CORBA.iiop.ServerDelegate.dispatch(ServerDelegate.java:437)
at com.ibm.rmi.iiop.ORB.process(ORB.java:320)
at com.ibm.CORBA.iiop.ORB.process(ORB.java:1544)
at com.ibm.rmi.iiop.Connection.doWork(Connection.java:2063)
at com.ibm.rmi.iiop.WorkUnitImpl.doWork(WorkUnitImpl.java:63)
at com.ibm.ejs.oa.pool.PooledThread.run(ThreadPool.java:95)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:592)
kodo.properties
com.solarmetric.kodo.LicenseKey=
#com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=TransactionFactory
TransactionManagerMethod=com.ibm.ejs.jts.jta.TransactionManagerFactory.getTransactionManager
#com.solarmetric.kodo.ee.ManagedRuntimeClass=com.solarmetric.kodo.ee.InvocationManagedRuntime
com.solarmetric.kodo.ee.ManagedRuntimeClass=com.solarmetric.kodo.ee.AutomaticManagedRuntime
#javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.ee.EEPersistenceManagerFactory
javax.jdo.option.ConnectionFactoryName=ds/kodo/DataSource1
javax.jdo.option.Optimistic=true
javax.jdo.option.RetainValues=true
javax.jdo.option.NontransactionalRead=true
#com.solarmetric.kodo.DataCacheClass=com.solarmetric.kodo.runtime.datacache.plugins.CacheImpl
# Changing these to a non-zero value will dramatically increase
# performance, but will cause in-memory databases such as Hypersonic
# SQL to never exit when your main() method exits, as the pooled
# connections in the in-memory database will cause a daemon thread to
# remain running.
javax.jdo.option.MinPool=5
javax.jdo.option.MaxPool=10

We do have a makeTransientAll() before the object returns from the Session
Bean.
We also tried the JCA path. After installing the JCA RAR and looking up
the PersistenceManagerFactory, the same code does not throw any exception.
The exception is thrown only when the datasource is used.
Thanks
Paresh
Marc Prud'hommeaux wrote:
Paresh-
It looks like you are returning a collection of instances from an EJB,
which causes them to be serialized. The serialization happens outside the
context of a transaction, and Kodo needs to obtain a connection; WebSphere
seems not to like that.
You have a few options:
1. Call makeTransientAll() on all the instances before you return them
from your bean methods.
2. Manually instantiate all the fields yourself before sending them back.
You could use a bogus ObjectOutputStream to do this.
3. In 3.0, you can use the new detach() API to detach the instances
before sending them back to the client.
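Option 2 can be sketched in plain Java. This is only an illustration of the "bogus ObjectOutputStream" idea: the `PreSerialize` class and its helper name are ours, not a Kodo API. Run inside a transaction, serializing the result graph to a throwaway buffer forces each instance's writeObject() (and hence the enhancer-generated jdoPreSerialize()) to fire while a connection is still available:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Hypothetical helper: serialize the object graph to a discarded buffer
// before returning it from the bean, so any lazily-loaded fields are
// populated inside the transaction rather than during IIOP marshalling.
public class PreSerialize {

    /** Serializes obj into a throwaway buffer; returns the byte count. */
    public static int preSerialize(Object obj) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(sink)) {
            out.writeObject(obj); // triggers writeObject()/jdoPreSerialize()
        }
        return sink.size();
    }

    public static void main(String[] args) throws IOException {
        // Any Serializable graph works; a persistent collection would too.
        int n = preSerialize(new java.util.ArrayList<>(java.util.List.of("a", "b")));
        System.out.println("serialized " + n + " bytes");
    }
}
```

The bytes are thrown away; the only purpose is the side effect of walking the whole graph before it leaves the transaction.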
In article <[email protected]>, Paresh wrote:
> We had a working application using EEPersistenceManagerFactory
> I changed the kodo.properties to lookup a non XA JDBC datasource.
> [...]
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Unit test fails after upgrading to Kodo 4.0.0 from 4.0.0-EA4
I have a group of 6 unit tests failing after upgrading to the new Kodo
4.0.0 (with BEA) from Kodo-4.0.0-EA4 (with Solarmetric). I'm getting
exceptions like the one at the bottom of this email. It seems to be an
interaction with the PostgreSQL driver, though I can't be sure. I
haven't changed my JDO configuration or the related classes in months
since I've been focusing on using the objects that have already been
defined. The .jdo, .jdoquery, and .java code are below the exception,
just in case there's something wrong in there. Does anyone have advice
as to how I might debug this?
Thanks,
Mark
Testsuite: edu.ucsc.whisper.test.integration.UserManagerQueryIntegrationTest
Tests run: 15, Failures: 0, Errors: 6, Time elapsed: 23.308 sec
Testcase: testGetAllUsersWithFirstName(edu.ucsc.whisper.test.integration.UserManagerQueryIntegrationTest): Caused an ERROR
The column index is out of range: 2, number of columns: 1.
<2|false|4.0.0> kodo.jdo.DataStoreException: The column index is out of range: 2, number of columns: 1.
at kodo.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4092)
at kodo.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:82)
at kodo.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:66)
at kodo.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:46)
at kodo.jdbc.kernel.SelectResultObjectProvider.handleCheckedException(SelectResultObjectProvider.java:176)
at kodo.kernel.QueryImpl$PackingResultObjectProvider.handleCheckedException(QueryImpl.java:2460)
at com.solarmetric.rop.EagerResultList.<init>(EagerResultList.java:32)
at kodo.kernel.QueryImpl.toResult(QueryImpl.java:1445)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:1136)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:901)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:865)
at kodo.kernel.DelegatingQuery.execute(DelegatingQuery.java:787)
at kodo.jdo.QueryImpl.executeWithArray(QueryImpl.java:210)
at kodo.jdo.QueryImpl.execute(QueryImpl.java:137)
at edu.ucsc.whisper.core.dao.JdoUserDao.findAllUsersWithFirstName(JdoUserDao.java:232)
at edu.ucsc.whisper.core.manager.DefaultUserManager.getAllUsersWithFirstName(DefaultUserManager.java:252)
NestedThrowablesStackTrace:
org.postgresql.util.PSQLException: The column index is out of range: 2, number of columns: 1.
at org.postgresql.core.v3.SimpleParameterList.bind(SimpleParameterList.java:57)
at org.postgresql.core.v3.SimpleParameterList.setLiteralParameter(SimpleParameterList.java:101)
at org.postgresql.jdbc2.AbstractJdbc2Statement.bindLiteral(AbstractJdbc2Statement.java:2085)
at org.postgresql.jdbc2.AbstractJdbc2Statement.setInt(AbstractJdbc2Statement.java:1133)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at com.solarmetric.jdbc.PoolConnection$PoolPreparedStatement.setInt(PoolConnection.java:440)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at com.solarmetric.jdbc.LoggingConnectionDecorator$LoggingConnection$LoggingPreparedStatement.setInt(LoggingConnectionDecorator.java:1257)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at com.solarmetric.jdbc.DelegatingPreparedStatement.setInt(DelegatingPreparedStatement.java:390)
at kodo.jdbc.sql.DBDictionary.setInt(DBDictionary.java:980)
at kodo.jdbc.sql.DBDictionary.setUnknown(DBDictionary.java:1299)
at kodo.jdbc.sql.SQLBuffer.setParameters(SQLBuffer.java:638)
at kodo.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:539)
at kodo.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:512)
at kodo.jdbc.sql.SelectImpl.execute(SelectImpl.java:332)
at kodo.jdbc.sql.SelectImpl.execute(SelectImpl.java:301)
at kodo.jdbc.sql.Union$UnionSelect.execute(Union.java:642)
at kodo.jdbc.sql.Union.execute(Union.java:326)
at kodo.jdbc.sql.Union.execute(Union.java:313)
at kodo.jdbc.kernel.SelectResultObjectProvider.open(SelectResultObjectProvider.java:98)
at kodo.kernel.QueryImpl$PackingResultObjectProvider.open(QueryImpl.java:2405)
at com.solarmetric.rop.EagerResultList.<init>(EagerResultList.java:22)
at kodo.kernel.QueryImpl.toResult(QueryImpl.java:1445)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:1136)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:901)
at kodo.kernel.QueryImpl.execute(QueryImpl.java:865)
at kodo.kernel.DelegatingQuery.execute(DelegatingQuery.java:787)
at kodo.jdo.QueryImpl.executeWithArray(QueryImpl.java:210)
at kodo.jdo.QueryImpl.execute(QueryImpl.java:137)
at edu.ucsc.whisper.core.dao.JdoUserDao.findAllUsersWithFirstName(JdoUserDao.java:232)
--- DefaultUser.java -------------------------------------------------
public class DefaultUser
    implements User
{
    /** The account username. */
    private String username;

    /** The account password. */
    private String password;

    /** A flag indicating whether or not the account is enabled. */
    private boolean enabled;

    /** The authorities granted to this account. */
    private Set<Authority> authorities;

    /** Information about the user, including their name and text that
        describes them. */
    private UserInfo userInfo;

    /** The set of organizations where this user works. */
    private Set<Organization> organizations;
}
--- DefaultUser.jdo --------------------------------------------------
<?xml version="1.0"?>
<!DOCTYPE jdo PUBLIC
"-//Sun Microsystems, Inc.//DTD Java Data Objects Metadata 2.0//EN"
"http://java.sun.com/dtd/jdo_2_0.dtd">
<jdo>
<package name="edu.ucsc.whisper.core">
<sequence name="user_id_seq"
factory-class="native(Sequence=user_id_seq)"/>
<class name="DefaultUser" detachable="true"
table="whisper_user" identity-type="datastore">
<datastore-identity sequence="user_id_seq" column="userId"/>
<field name="username">
<column name="username" length="80" jdbc-type="VARCHAR" />
</field>
<field name="password">
<column name="password" length="40" jdbc-type="CHAR" />
</field>
<field name="enabled">
<column name="enabled" />
</field>
<field name="userInfo" persistence-modifier="persistent"
default-fetch-group="true" dependent="true">
<extension vendor-name="jpox"
key="implementation-classes"
value="edu.ucsc.whisper.core.DefaultUserInfo" />
<extension vendor-name="kodo"
key="type"
value="edu.ucsc.whisper.core.DefaultUserInfo" />
</field>
<field name="authorities" persistence-modifier="persistent"
table="user_authorities"
default-fetch-group="true">
<collection
element-type="edu.ucsc.whisper.core.DefaultAuthority" />
<join column="userId" delete-action="cascade"/>
<element column="authorityId" delete-action="cascade"/>
</field>
<field name="organizations" persistence-modifier="persistent"
table="user_organizations" mapped-by="user"
default-fetch-group="true" dependent="true">
<collection
element-type="edu.ucsc.whisper.core.DefaultOrganization"
dependent-element="true"/>
<join column="userId"/>
<!--<element column="organizationId"/>-->
</field>
</class>
</package>
</jdo>
--- DefaultUser.jdoquery ---------------------------------------------
<?xml version="1.0"?>
<!DOCTYPE jdo PUBLIC
"-//Sun Microsystems, Inc.//DTD Java Data Objects Metadata 2.0//EN"
"http://java.sun.com/dtd/jdo_2_0.dtd">
<jdo>
<package name="edu.ucsc.whisper.core">
<class name="DefaultUser">
<query name="UserByUsername"
language="javax.jdo.query.JDOQL"><![CDATA[
SELECT UNIQUE FROM edu.ucsc.whisper.core.DefaultUser
WHERE username==searchName
PARAMETERS java.lang.String searchName
]]></query>
<query name="DisabledUsers"
language="javax.jdo.query.JDOQL"><![CDATA[
SELECT FROM edu.ucsc.whisper.core.DefaultUser WHERE
enabled==false
]]></query>
<query name="EnabledUsers"
language="javax.jdo.query.JDOQL"><![CDATA[
SELECT FROM edu.ucsc.whisper.core.DefaultUser WHERE
enabled==true
]]></query>
<query name="CountUsers"
language="javax.jdo.query.JDOQL"><![CDATA[
SELECT count( this ) FROM edu.ucsc.whisper.core.DefaultUser
]]></query>
</class>
</package>
</jdo>

I'm sorry, I have no idea. I suggest sending a test case that
reproduces the problem to support. -
Horizontal mapping not working in Kodo 4.1.2
Hello,
I am having trouble getting the class mapping I want in Kodo 4.1.2.
I want to go from Kodo 3.3 to Kodo 4.1, and I am still in the evaluation process. Basically, all I want is for my package.jdo files to work in Kodo 4.1, with minimal modifications, before moving to JPA.
The mapping is defined in my package.jdo using the <extension vendor-name="kodo" key="jdbc-class-map" value="XXX"/> element, where XXX can be one of horizontal, base, or flat (I don't use vertical in this application). This element does not seem to be properly recognized by the new mapping tool, and all my classes are mapped into the same base table.
After some digging in the docs and in the examples provided, I found the <inheritance strategy="XXX"/> element, which can be put in my package.jdo file. This is not clearly stated in the docs (it seems this element is only mentioned in the new ORM DTD), but it is used in the sample/jdo/horizontal/package.jdo file.
Then I modified my package.jdo files with this new element, where XXX is one of subclass-table, new-table (with no <join/> nested element), or superclass-table. But the result is not the one expected: all the classes are mapped into the same table.
I then gave the provided example a try: I compiled, enhanced, and mapped the sample/jdo/horizontal classes provided with the distribution, since this example covers exactly what I want to do. It seems to me that this example does not work either.
The package.jdo says:
<jdo>
<package name="samples.jdo.horizontal">
<sequence name="sjvm" factory-class="sjvm" strategy="nontransactional"/>
<class name="LastModified">
<inheritance strategy="subclass-table"/>
</class>
The mapping file I get says:
<mapping>
<package name="samples.jdo.horizontal">
<class name="LastModified">
<jdbc-class-map type="base" pk-column="ID" table="LASTMODIFIED"/>
<jdbc-version-ind type="version-number" column="VERSN"/>
<jdbc-class-ind type="in-class-name" column="TYP"/>
<field name="creationDate">
<jdbc-field-map type="value" column="CREATIONDATE"/>
</field>
<field name="lastModificationDate">
<jdbc-field-map type="value" column="LASTMODIFICATIONDATE"/>
</field>
</class>
The enhancement is done using jdoc, and the mapping file is generated using the following command line (the DB is empty):
mappingtool -a refresh -f mapping.xml samples.jdo.horizontal.LastModified
I would expect a horizontal mapping, with a class element containing only a <jdbc-class-map type="horizontal"/> element, as is the case with my Kodo 3.3 mapping tool.
I tried the enhancement / mapping-file generation process with two configurations of the kodo.properties file: the first with kodo3 as the MappingFactory and the second with jdo. The result is the same in both cases. I verified, by setting the log level to TRACE for the Tool, which factory was used. In fact, in both cases it is MappingFileDeprecatedJDOMappingFactory, which seems weird to me.
By the way, setting the MappingFactory to jpa leads to the following IllegalArgumentException during enhancement:
Exception in thread "main" org.apache.openjpa.lib.util.ParseException: Instantiation of plugin "MetaDataFactory" with value "jpa" caused an error "java.lang.IllegalArgumentException : java.lang.ClassNotFoundException: jpa". The alias or class name may have been misspelled, or the class may not have be available in the class path. Valid aliases for this plugin are: [jdo, kodo3]
I'd be glad to get any hints if I'm wrong about anything, or any workaround or patch to make my case work.
Thank you in advance,
Jose

If the exact same app and code works with 4.0 with the same ForeignKeyDeleteAction setting, I suggest that you open a case with support.
This property hasn't changed since 4.0
http://e-docs.bea.com/kodo/docs41/full/html/ref_guide_mapping_defaults.html
Laurent -
Kodo and tangosol integration: ClassCastException in TangosolQueryCache
Hello,
I have tried to prepare my WebLogic environment to work with Kodo JDO 3.4.0 and Tangosol Coherence.
So I prepared Kodo's ra.xml as:
<config-property>
<description>DataCache.</description>
<config-property-name>DataCache</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tangosol(TangosolCacheName=dist-kodo)</config-property-value>
</config-property>
<config-property>
<description>Plugin used to cache query results loaded from the data store. Must implement kodo.datacache.QueryCache.</description>
<config-property-name>QueryCache</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tangosol(TangosolCacheName=dist-kodo)</config-property-value>
</config-property>
<config-property>
<description>Remote Commit Provider.</description>
<config-property-name>RemoteCommitProvider</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>sjvm</config-property-value>
</config-property>
When I start WebLogic 8.1, I find the following in STDOUT:
* Tangosol Coherence(tm): Enterprise Edition is licensed by Tangosol, Inc.
* License details are available at: http://www.tangosol.com/license.jsp
* Licensed for evaluation use with the following restrictions:
* Effective Date : 1 Sep 2006 00:00:00 GMT
* Termination Date : 1 Nov 2006 00:00:00 GMT
* A production license is required for production use.
* Copyright (c) 2000-2006 Tangosol, Inc.
Tangosol Coherence Version 3.2/357 (Pre-release)
2006-09-11 17:43:49.303 Tangosol Coherence 3.2/357 (Pre-release) <Info> (thread=main, member=n/a): sun.misc.AtomicLong is not supported on this JVM; using a syncrhonized counter.
but when I try to use DataCache=tangosol, I get a ClassCastException:
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): An exception occurred while dispatching the following event:
CacheEvent: MapListenerSupport$FilterEvent{DistributedCache$BinaryMap deleted: key=Binary(length=141, value=0x0BACED00057372000C6B6F646F2E7574696C2E4964DDFCC3A1DB13765D0300024A000269644C00095F747970654E616D657400124C6A6176612F6C616E672F537472696E673B78700000000000000001740039636F6D2E626561722E66692E74726164656875622E646F6D61696E2E627573696E6573732E436C656172616E6365437573746F6D65724A444F78), value=null, filters=[MapEventFilter(mask=DELETED)]}
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): The following exception was caught by the event dispatcher:
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4):
java.lang.ClassCastException
at kodo.datacache.TangosolQueryCache.entryDeleted(TangosolQueryCache.java:291)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:550)
at com.tangosol.coherence.component.util.SafeNamedCache.translateMapEvent(SafeNamedCache.CDB:7)
at com.tangosol.coherence.component.util.SafeNamedCache.entryDeleted(SafeNamedCache.CDB:1)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.dispatch(DistributedCache.CDB:22)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.entryDeleted(DistributedCache.CDB:1)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.coherence.component.util.CacheEvent.run(CacheEvent.CDB:18)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onNotify(Service.CDB:14)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
at java.lang.Thread.run(Thread.java:534)
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): (The service event thread has logged the exception and is continuing.)
When I look at TangosolQueryCache.jad (the Kodo part, decompiled from .class with jad) I find:

public void entryDeleted(MapEvent evt)
{
    QueryKey queryKey = (QueryKey) evt.getKey();
    boolean isExpired = true;
    keyRemoved(queryKey, isExpired);
}

but in com.tangosol.util.MapEvent (tangosol.jar):

public Object getKey()
{
    return m_oKey;
}

So TangosolQueryCache tries to cast the Object returned by getKey() to QueryKey.
Any ideas how to resolve this problem?
And in general, where can I find an example of integrating the Kodo data cache with Tangosol Coherence?
The examples at http://edocs.bea.com/kodo/docs40/full/html/ref_guide_caching.html are not very complete.
Thanks for the answer.

Hi Pavel,
Key in this event is an instance of kodo.util.Id:
00: 0B AC ED 00 05 73 72 00 0C 6B 6F 64 6F 2E 75 74  .....sr..kodo.ut
10: 69 6C 2E 49 64 DD FC C3 A1 DB 13 76 5D 03 00 02  il.Id......v]...
20: 4A 00 02 69 64 4C 00 09 5F 74 79 70 65 4E 61 6D J..idL.._typeNam
30: 65 74 00 12 4C 6A 61 76 61 2F 6C 61 6E 67 2F 53 et..Ljava/lang/S
40: 74 72 69 6E 67 3B 78 70 00 00 00 00 00 00 00 01 tring;xp........
50: 74 00 39 63 6F 6D 2E 62 65 61 72 2E 66 69 2E 74 t.9com.bear.fi.t
60: 72 61 64 65 68 75 62 2E 64 6F 6D 61 69 6E 2E 62 radehub.domain.b
70: 75 73 69 6E 65 73 73 2E 43 6C 65 61 72 61 6E 63 usiness.Clearanc
80: 65 43 75 73 74 6F 6D 65 72 4A 44 4F 78           eCustomerJDOx

So either kodo.datacache.TangosolQueryCache is listening to the wrong cache, or the application attempts to remove something that is not removable. Did you try asking BEA support about this?
Anyway, this exception doesn't affect anything - The service event thread has logged the exception and is continuing.
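One thing worth noting from the original ra.xml: DataCache and QueryCache are both configured with TangosolCacheName=dist-kodo, so the query cache's listener receives events for data-cache entries keyed by kodo.util.Id, which cannot be cast to QueryKey. A possible configuration to try (this is our assumption, not a confirmed fix; the cache names dist-kodo-data and dist-kodo-query are made up for illustration) is to give each cache its own named Tangosol cache:

```xml
<config-property>
  <description>DataCache.</description>
  <config-property-name>DataCache</config-property-name>
  <config-property-type>java.lang.String</config-property-type>
  <!-- hypothetical name: holds only object data keyed by kodo.util.Id -->
  <config-property-value>tangosol(TangosolCacheName=dist-kodo-data)</config-property-value>
</config-property>
<config-property>
  <description>Plugin used to cache query results loaded from the data store.</description>
  <config-property-name>QueryCache</config-property-name>
  <config-property-type>java.lang.String</config-property-type>
  <!-- hypothetical name: holds only query results keyed by QueryKey -->
  <config-property-value>tangosol(TangosolCacheName=dist-kodo-query)</config-property-value>
</config-property>
```

With separate caches, the query cache listener would only ever see QueryKey entries.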
Regards,
Dimitri -
Need best-practice for Kodo web apps
Please forgive this question from a JDO newbie if it is already answered
elsewhere (if so please let me know, I can't find it)...
We are building an enterprise application consisting of multiple
cooperating web-apps, one servlet per web app. Following the classic MVC
pattern (using Struts) we have the servlets operating as controllers that
concurrently field requests from multiple users. In response to each
concurrent request the servlets instantiate the corresponding command
instances to process the requests. The command objects then interact with
a common shared model implemented as a collection of persistent "model"
objects and business rules. Finally, the commands then create java beans
that are sent to JSPs for display to the users.
No problem thus far - we have successfully used Kodo to implement the
architecture described above. To date we have used a single persistence
manager (PM) that is global to the servlets, existing as long as the
application is running. Although this works, I believe that it won't
scale for concurrent access by multiple users (correct me if I am wrong)
since a PM has one and only one database connection. Our concern, however,
is regarding persistence managers versus multiple users' concurrent access
to the common model (i.e., one user can update information which is then
displayed to all other users).
I believe the answer is in using multiple PMs, but how should the PMs be
allocated? One per user stored in the user's session? If so, how do
multiple users share access to the same model instances? My mental model
is that a PM is like a "sandbox" - objects in one sandbox cannot see
objects existing in another PM's "sandbox". Is that true?
I know this is a long winded question. I was hoping to provide enough
context so someone could suggest a PM utilization that scales well for
applications like ours.
Any help/suggestions are GREATLY appreciated!
Steve
You can use transportable tablespaces for the database tier node.
https://blogs.oracle.com/stevenChan/entry/10gr2_xtts_ebs11i
https://blogs.oracle.com/stevenChan/entry/call_for_xtts_eap_participants
For the application tier node, please see:
https://blogs.oracle.com/stevenChan/entry/migrate_ebs_apptiers_linux
https://blogs.oracle.com/stevenChan/entry/migrating_oracle_applications_to_new_platforms
Thanks,
Hussein -
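Back on the PersistenceManager question above: a PM is indeed a sandbox-like scope, and the usual remedy is one short-lived PM per request, obtained from a shared PersistenceManagerFactory, rather than one global PM. A self-contained sketch of that lifecycle; StubPM and StubPMF here are stand-ins for javax.jdo.PersistenceManager and PersistenceManagerFactory, not the real API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PmPerRequest {
    static final AtomicInteger open = new AtomicInteger();

    // Stand-in for javax.jdo.PersistenceManager.
    static class StubPM {
        StubPM() { open.incrementAndGet(); }
        void close() { open.decrementAndGet(); }
    }
    // Stand-in for javax.jdo.PersistenceManagerFactory: thread-safe, shared.
    static class StubPMF {
        StubPM getPersistenceManager() { return new StubPM(); }
    }

    static final StubPMF pmf = new StubPMF();

    // One fresh PM per request, always closed in finally so its JDBC
    // connection goes back to the pool even if the command throws.
    static String handleRequest(String query) {
        StubPM pm = pmf.getPersistenceManager();
        try {
            // ... run queries, build the beans handed to the JSPs ...
            return "result for " + query;
        } finally {
            pm.close();
        }
    }

    public static void main(String[] args) {
        handleRequest("a");
        handleRequest("b");
        System.out.println(open.get()); // prints 0: no PM leaks across requests
    }
}
```

Instances loaded in one PM are indeed not visible as managed objects in another; concurrent users see each other's committed changes through the datastore (and, optionally, a shared second-level data cache).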
ForeignKeyDeleteAction=null not working with Kodo 4.1.2 and Mysql 5
Hello,
i am trying to use Kodo 4.1.2, together with mysql 5.0
my kodo.properties says:
openjpa.jdbc.MappingDefaults: jdo(ForeignKeyDeleteAction=null)
and i am expecting an sql-statement something like:
ALTER TABLE mytable ADD FOREIGN KEY (bar) REFERENCES othertable(foo) ON DELETE SET NULL;
but all i got is:
ALTER TABLE mytable ADD FOREIGN KEY (bar) REFERENCES othertable(foo);
Kodo 4.0 didn't have any problems with that. Also, in Kodo 4.1.2 other ForeignKeyDeleteActions (like 'cascade' or 'default') are working as expected.
What am I doing wrong? Where can I find upgrade instructions? Is this a bug?
Thanks in advance,
Markus
If the exact same app and code works with 4.0 with the same ForeignKeyDeleteAction setting, I suggest that you open a case with support.
This property hasn't changed since 4.0
http://e-docs.bea.com/kodo/docs41/full/html/ref_guide_mapping_defaults.html
Laurent -
GenerationType.AUTO doesn't work correctly with Kodo and MySQL
Greetings,
I'm migrating applications from JBoss/Hibernate to WebLogic/Kodo. The following column annotation:
@Id
@Column(name="CUSTOMER_ID")
@GeneratedValue(strategy = GenerationType.AUTO)
public long getCustomerId()
is supposed to generate an auto-increment primary key in MySQL, and this is what happens with Hibernate. With Kodo, this seems to be equivalent to GenerationType.TABLE, as a table named openjpa_sequence_table is created, containing sequence values. So what should one do in order to use a true auto-increment strategy with MySQL and Kodo?
Many thanks in advance,
Nicolas
Hi Nicolas,
By setting the generation strategy to AUTO, you're essentially letting the JPA provider choose which strategy to use. It looks like Kodo is using the TABLE strategy by default and Hibernate is using the IDENTITY strategy here. You can set the strategy to IDENTITY if you want Kodo to behave similarly to Hibernate. However, it's worth pointing out that there may be a reason for Kodo not using the IDENTITY strategy by default.
The docs at: http://edocs.bea.com/wls/docs103/kodo/full/html/ref_guide_pc_oid.html#ref_guide_pc_oid_pkgen_autoinc
point out the following:
"Statements inserting into tables with auto-increment / identity columns cannot be batched. After each insert, Kodo must go back to the database to retrieve the last inserted auto-increment value to set back in the persistent object. This can have a negative impact on performance. "
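Following the suggestion above to set the strategy explicitly, the annotation change might look like the fragment below (an untested sketch; the getter body and backing field are assumptions, not from the thread):

```java
@Id
@Column(name = "CUSTOMER_ID")
// IDENTITY maps to MySQL AUTO_INCREMENT; note the batching caveat quoted above
@GeneratedValue(strategy = GenerationType.IDENTITY)
public long getCustomerId() {
    return customerId; // assumed backing field
}
```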
- Matt -
Am I correct in my impression that in Kodo 3 all mapping related
extensions that could formerly have been done in the *.jdo file via Kodo
extensions (table=, size=, ordered=, etc.) can now be done in the mapping
file or are there exceptions?
Scott
No. Currently, only mapping data can go in the .mapping file. This
does not include things like column sizes, index attributes, etc. It
only includes the mapping of classes and fields to tables and columns.
In the future, we plan on supporting additional mapping formats, in
particular JDO 2 metadata, which allows you to separate out all
schema+mapping information, or to include it all in your main JDO file,
all without needing extensions. -
Kodo 3.0.0 Released
Hello All,
SolarMetric is happy to announce the general availability of Kodo JDO 3.0.0.
The latest version of Kodo JDO is the most powerful ever. Full release notes
can be found at http://docs.solarmetric.com/relnotes.html. A list of the
major changes can be found at the bottom of this email. If you have a valid
and up-to-date maintenance and support contract, you can download the latest
release at http://www.solarmetric.com/Software/Purchase/download.php.
If you don't, please contact your sales representative
([email protected]) to determine how to upgrade.
Major changes in Kodo JDO 3.0.0 include:
- Kodo now supports direct SQL queries and stored procedure calls through
the javax.jdo.Query interface.
- Technology preview of Kodo Management / Monitoring capability.
- JRun 4 is now supported.
- Built-in support for Borland JDataStore, as well as Microsoft Access and
Microsoft Visual FoxPro (using a JDBC-ODBC server bridge like DataDirect,
but not the Sun JDBC-ODBC bridge).
- Refactored data caching to not lock the cache as a whole when loading data
from it or storing data into it. This change improves the concurrency of the
cache.
- Support for detaching and attaching instances from a persistence manager,
allowing applications to more easily use a "data transfer object" pattern.
- Support for collection and map fields that are backed by large result
sets.
- Support for additional settings to control the handling of large result
sets.
- Better exception messages when mappings can't be found or fail validations
against the schema.
- Expanded smart proxies to include change-tracking for lists.
- New externalization system for storage of field types not supported by JDO
without resorting to serializing or requiring custom mappings. Replaces the
old stringification mapping.
- Support for aggregates and projections in queries. See the Query
Aggregates and Projections documentation for more details.
- New mapping system, providing more flexible mapping options.
- Pluggable system for storing mapping information, with built-in options
for storing mapping information in the database, in JDO metadata extensions,
and in a separate mapping file. See the section on the Mapping Factory for
more information.
- Support for embedded 1-1 mappings, including nested embedded mappings with
no limit on nesting depth. See the section on Embedded One-to-One Mapping
for more information.
- Support for timestamp and state-based optimistic lock versioning, and for
custom versioning systems. See the section on Version Indicators for more
information.
- Configurable eager fetching of 1-1, 1-many and many-many relations.
Potentially reduces the number of database queries required when iterating
through an extent or query result and accessing relation fields of each
instance.
- Improved documentation and error messaging.
Enjoy,
-Greg
Hi,
Will this plug-in work with Kodo 4.0.1 and Maven 1.0.2?
Thanks
Guy -
Kodo 4.1.2 license key problem
Has anyone tried the JDO tutorial provided with Kodo 4.1.2? Everything works fine until I try to create the mappings with 'mappingtool -p jdo.properties package.jdo'. Then I get:
Exception in thread "main" com.solarmetric.license.LicenseException: No product
license key was found. Please ensure that you either have a valid "license.bea"
file in your CLASSPATH or in the directory specified by the "bea.home" environme
nt variable, or else that you have a valid license key specified in the "kodo.Li
censeKey" property of you Kodo configuration. If you need to request an evaluati
on license, please go to http://commerce.bea.com or contact [email protected]
at com.solarmetric.license.License.<init>(License.java:67)
at kodo.conf.KodoCapabilities.newLicense(KodoCapabilities.java:133)
at kodo.conf.LicenseKey.getLicense(LicenseKey.java:38)
at kodo.conf.LicenseKey.getLicense(LicenseKey.java:22)
at kodo.jdbc.meta.KodoClassMapping.validateMapping(KodoClassMapping.java
I'm running the tutorial using the kodocmd.bat and I changed the properties to point to kodo's install directory and my jdk home directory.
I've downloaded the xml document at 'http://ftpmain.bea.com/download/pub/license/kodo/kodo40/license.bea', named it license.bea and put it in kodo's install directory which is on the classpath.
I think the problem is getting Kodo to recognize the license outside of a web application. I am trying to use Kodo in a standalone application and obviously need it to recognize the license key.
Regards,
Jake
Hi Jake,
I don't think the problem is getting Kodo to recognize the license outside
the web application (I am also working on a standalone application). I think
the problem is that the code still requires the "old" solarmetric license
key next to the bea license key. I actually run into the same problem if I
don't specify both keys. Since I still have several important bugs already
open for months, and registering an issue requires writing complete test cases
including manuals on how to use them, I actually didn't even bother
to register this one.
kind regards,
Christiaan -
Kodo 4.1.2 download incomplete?
The current Kodo documentation states that internally, Kodo still uses the kodo.util.Id class for datastore identity. None of the JAR files in the Kodo 4.1.2 distribution I just downloaded contains this class. This was first evident to me when attempting to build my application against the Kodo 4.1.2 JARs; I saw a huge list of compilation errors.
Please advise.
Thanks,
Brian
Turns out that the kodo.util.Id class has been deprecated and replaced with org.apache.openjpa.util.Id. Unfortunately the current documentation that ships with the Kodo 4.1.2 distribution is out of date: it incorrectly states that Kodo still uses kodo.util.Id internally to manage datastore identity. Bottom line: anyone using Kodo 4.1.2 should use org.apache.openjpa.util.Id when dealing with datastore identity ID objects.
-
Kodo 4.1.2 in Weblogic 10 Problem
I was told by BEA that Kodo/OpenJPA is included in Weblogic 10. However, now I have Weblogic 10 but I could not locate many Kodo classes in the Weblogic libraries. I searched all the JARs under BEA_HOME\wlserver_10.0\server\lib.
I also tried to migrate a Kodo/JPA application from Weblogic 9.2 to Weblogic 10. My application depends on a Kodo JCA deployment in a managed environment. The application and Kodo JCA deployed fine into Weblogic 10, but when I tested the application, it failed when I tried to create an EntityManager from the EntityManagerFactory:
Caused by: <4|false|0.9.7> org.apache.openjpa.persistence.ArgumentException: config-error
at weblogic.kodo.event.ClusterRemoteCommitProvider.endConfiguration(ClusterRemoteCommitProvider.java:112)
at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:447)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiate(RemoteCommitProviderValue.java:122)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:103)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:95)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newRemoteCommitProviderInstance(OpenJPAConfigurationImpl.java:708)
at org.apache.openjpa.event.RemoteCommitEventManager.(RemoteCommitEventManager.java:56)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getRemoteCommitEventManager(OpenJPAConfigurationImpl.java:720)
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:177)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:139)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:187)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:140)
at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:144)
at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:23)
at com.psi.vida.ejb.JPASessionBean.list(JPASessionBean.java:165)
at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl.list(JPASessionEJB_lvtqkz_EOImpl.java:134)
at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:174)
... 17 more
Caused by: java.lang.Exception: <0|true|0.9.7> org.apache.openjpa.persistence.PersistenceException: no-trasport
at org.apache.openjpa.util.Exceptions.replaceNestedThrowables(Exceptions.java:230)
at org.apache.openjpa.persistence.ArgumentException.writeObject(ArgumentException.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:890)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1333)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1369)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1341)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291)
at weblogic.rmi.extensions.server.CBVOutputStream.writeObject(CBVOutputStream.java:84)
at weblogic.rmi.internal.ServerRequest.unmarshalThrowable(ServerRequest.java:349)
at weblogic.rmi.internal.ServerRequest.getThrowable(ServerRequest.java:62)
at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:203)
... 17 more
I was told by BEA that Kodo/OpenJPA is included in Weblogic 10. However, now I have Weblogic 10 but I could not locate many Kodo classes in the Weblogic libraries. I searched all the JARs under BEA_HOME\wlserver_10.0\server\lib.
They're in the (new) modules directory. weblogic.jar refers to stuff in the modules directory via its manifest classpath.
I also tried to migrate a Kodo/JPA application from Weblogic 9.2 to Weblogic 10. My application depends on a Kodo JCA deployment in a managed environment. The application and Kodo JCA deployed fine into Weblogic 10, but when I tested the application, it failed when I tried to create an EntityManager from the EntityManagerFactory:
Interesting. I do not know what the status of Kodo JCA testing is in WebLogic 10, but it sounds like something is a bit wonky.
Basically, in a WLS environment, the default remote commit provider is automatically set to the new weblogic.kodo.event.ClusterRemoteCommitProvider, which uses the WLS clustering protocol to communicate cache notifications. The error that you're seeing indicates that cluster services are not available in the execution context. You can probably get around this by explicitly setting the 'kodo.RemoteCommitProvider' option to 'sjvm' (if you're not running in a cluster), or to whatever you had it set to in the past. (I'm guessing that it was unset in the past, as otherwise, the configuration should be picking up that instead of the new default.)
However, personally, I much prefer the new persistence.xml configuration file format, compared to JCA configuration. (You can trivially use the persistence.xml format with Kodo JDO, even though it's a JPA-specified feature.) You might want to look into moving away from JCA and to the persistence.xml style instead.
If you do this, you'll end up putting a META-INF/persistence.xml file in your EAR (and possibly a META-INF/persistence-configuration.xml file, if you want to use the strongly-typed Kodo XML configuration format), and replacing your JNDI lookups with java:comp/env/persistence/<persistence-unit-name>. (I think that's the right location. I might be mistaken, though.)
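If you go that route, a minimal persistence.xml along these lines might work; the unit name is illustrative and the kodo.RemoteCommitProvider property mirrors the 'sjvm' advice above (treat this as an untested sketch, not a verified WebLogic 10 configuration):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <!-- unit name "example-unit" is illustrative, not from this thread -->
  <persistence-unit name="example-unit">
    <properties>
      <!-- 'sjvm' = single-JVM cache notifications; use only when not clustered -->
      <property name="kodo.RemoteCommitProvider" value="sjvm"/>
    </properties>
  </persistence-unit>
</persistence>
```

This would then be matched by a java:comp/env/persistence/&lt;persistence-unit-name&gt; lookup, as described above.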
Also, I can't guarantee that WebLogic 10 really handles JCA configuration all that well; some bits of that exception make it look like maybe some resources are not available in the classloader, which is surprising. So, it's possible that there is some sort of more fundamental JCA problem here (and not just a problem with the new remote commit provider).
-Patrick -
Relational queries through JDBC with the help of Kodo's metadata for O/R mapping
Due to JDOQL's limitations (inability to express joins, when relationships
are not modeled as object references), I find myself needing to drop down to
expressing some queries in SQL through JDBC. However, I still want my Java
code to remain independent of the O/R mapping. I would like to be able to
formulate the SQL without hardcoding any knowledge of the relational table
and column names, by using Kodo's metadata. After poking around the Kodo
Javadocs, it appears as though the relevant calls are as follows:
ClassMetaData cmd = ClassMetaData.getInstance(MyPCObject.class, pm);
FieldMetaData fmd = cmd.getDeclaredField( "myField" );
PersistenceManagerFactory pmf = pm.getPersistenceManagerFactory();
JDBCConfiguration conf = (JDBCConfiguration)
((EEPersistenceManagerFactory)pmf).getConfiguration();
ClassResolver resolver = pm.getClassResolver(MyPCObject.class);
Connector connector = new PersistenceManagerConnector(
(PersistenceManagerImpl) pm );
DBDictionary dict = conf.getDictionary( connector );
FieldMapping fm = ClassMapping.getFieldMapping(fmd, conf, resolver, dict);
Column[] cols = fm.getDataColumns();
Does that look about right?
Here's what I'm trying to do:
class Foo
String name; // application identity
String bar; // foreign key to Bar
class Bar
String name; // application identity
int weight;
Let's say I want to query for all Foo instances for which its bar.weight >
100. Clearly this is trivial to do in JDOQL, if Foo.bar is an object
reference to Bar. But there are frequently good reasons for modeling
relationships as above, for example when Foo and Bar are DTOs exposed by the
remote interface of an EJB. (Yeah, yeah, I'm lazy, using my
PersistenceCapable classes as both the DAOs and the DTOs.) But I still want
to do queries that navigate the relationship; it would be nice to do it in
JDOQL directly. I will also want to do other weird-ass queries that would
definitely only be expressible in SQL. Hence, I'll need Kodo's O/R mapping
metadata.
Is there anything terribly flawed with this logic?
Ben
I have one point before I get to this:
There is nothing wrong with using PC instances as both DAO and DTO
objects. In fact, I strongly recommend this for most J2EE/JDO design.
However, there should be no need to expose the foreign key values... use
application identity to quickly reconstitute an object id (which can in
turn find the persistent version), or like the j2ee tutorial, store the
object id in some form (Object or String) and use that to re-find the
matching persistent instance at the EJB tier.
Otherwise, there is a much easier way of finding ClassMapping instances
and in turn FieldMapping instances (see ClassMapping.getInstance() in
the JavaDocs).
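The ClassMapping route mentioned above can be sketched roughly as below. This is pseudocode against the Kodo 3 mapping API; the exact getInstance() signature and the accessor names are assumptions and should be checked against the JavaDocs:

```java
// Pseudocode sketch -- verify names against the Kodo 3 JavaDocs.
ClassMapping cls = ClassMapping.getInstance(MyPCObject.class, conf);
FieldMapping field = cls.getFieldMapping("myField");
String table = cls.getTable().getName();   // mapped table name
Column[] cols = field.getDataColumns();    // mapped column(s) for the field
// Build SQL from table + cols[i].getName() without hardcoding schema names.
```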
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com -
Class Cast Exception running Kodo JCA in Weblogic 8.1
Hi,
I have a stateless session EJB that accesses Kodo through the JCA adapter.
The database I'm connecting to is mysql.
The problem I'm having is that the persistence manager throws a class cast
exception when trying to commit the transaction. See below for the stack
trace. I suspect it has something to do with the mapping, but the error
doesn't give me enough information to tell.
Any idea where to start looking to solve this problem?
Merrill
<Sep 14, 2004 9:09:40 AM PDT> <Error> <EJB> <BEA-010026> <Exception
occurred during commit of transaction Name=[EJB ossj
inventory.bean.impl.JVTInventorySessionBean.createEntitySpecificationByValue(javax.oss.cbe.EntitySpecificationValue)],X
id=BEA1-0003E542E34E0D33F21F(21266875),Status=Rolled back.
[Reason=kodo.util.FatalException: java.lang.ClassCastExceptio
n
NestedThrowables:
java.lang.ClassCastException],numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds
since begin=13,seconds left=30,SCInfo[os
sj+myserver]=(state=rolledback),properties=({weblogic.transaction.name=[EJB
ossj.inventory.bean.impl.JVTInventorySession
Bean.createEntitySpecificationByValue(javax.oss.cbe.EntitySpecificationValue)]}),OwnerTransactionManager=ServerTM[Server
CoordinatorDescriptor=(CoordinatorURL=myserver+10.4.110.92:7001+ossj+t3+,
XAResources={},NonXAResources={})],Coordinator
URL=myserver+10.4.110.92:7001+ossj+t3+): kodo.util.FatalException:
java.lang.ClassCastException
NestedThrowables:
java.lang.ClassCastException
at
kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:825)
at
weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1010)
at
weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:115)
at
weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1184)
at
weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:1910)
at
weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:273)
at
weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:244)
at
weblogic.ejb20.internal.BaseEJBObject.postInvoke(BaseEJBObject.java:299)
at
weblogic.ejb20.internal.StatelessEJBObject.postInvoke(StatelessEJBObject.java:140)
at
ossj.inventory.bean.impl.JVTInventorySession_h5aqa8_EOImpl.createEntitySpecificationByValue(JVTInventorySessi
on_h5aqa8_EOImpl.java:4968)
at
ossj.inventory.bean.impl.JVTInventorySession_h5aqa8_EOImpl_WLSkel.invoke(Unknown
Source)
at
weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
at
weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.java:108)
at
weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
at
weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
at
weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
at
weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
at
weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
Caused by: java.lang.ClassCastException
at kodo.jdbc.sql.AbstractRow.toSQL(AbstractRow.java:657)
at kodo.jdbc.runtime.RowImpl.flush(RowImpl.java:250)
at
kodo.jdbc.runtime.PreparedStatementManager.flush(PreparedStatementManager.java:125)
at
kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:361)
at
kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:168)
at
kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:73)
at
kodo.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:590)
at
kodo.runtime.DelegatingStoreManager.flush(DelegatingStoreManager.java:152)
at
kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:964)
at
kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:813)
Can you use 3.1.5? Can you also be sure that you don't have multiple
versions of Kodo around, i.e. in the system classpath, the JCA kodo.rar
directory, etc.?
Merrill Higginson wrote:
Abe White wrote:
What version of Kodo?
I'm using Kodo v3.1.2
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com -
Kodo class/resource loading approach will be a problem in managed environment.
Hello,
Kodo does not work with Apache Tomcat 3 and 4 unless the Kodo jars and my
persistent classes are loaded by the same class loader. Kodo uses
Class.forName() and class.getResources() on class loader which loaded Kodo
classes thus it is not able to find system.prefs or package.jdo (and even if
it was able to find system.prefs how would it handle multiple system.prefs
from multiple contexts each managed by its own class loader?)
The problem as I see it is that web containers usually have container
classloader(s) and a class loader per web context. Desired configuration
would be to share Kodo classes among all contexts and implement jndi factory
to provide PersistentManagerFactory lookup in each web container
environment. According to Tomcat specs I placed Kodo jars in
$tomcat_home/lib (class loader shared among all contexts) and deployed my
persistent classes along with system.prefs and *.jdo files to my context
WEB-INF/lib directory. As a result Kodo would not find system.prefs resource
and eventually fail with error - required resource db/url could not be
loaded. I tried to place the Kodo jars together with my persistent classes
in WEB-INF/lib; it did not help (this was really strange, since all classes
should have been loaded by the same context class loader). The only
configuration that works is when I put Kodo and my persistent classes in
tomcat's common directory or on the system class path. This is of course not
the right way to go, as context-specific classes such as my persistent classes
should be shared
Without source code it is hard to figure out what is going on and how
resources get loaded, so could somebody look into it. It is quite important
for us to resolve this issue as we are planning to use Kodo with tomcat.
Also, I am not sure that system.prefs is a good idea. As I mentioned, when
integrated with an appserver/web container JNDI environment, it is up to the JNDI
factory to create and configure PersistentManagerFactories. Different
PersistentManagerFactories will be bound to different jndi names often in
different contexts under common or separate class loaders and they should
not be in my opinion dependent on common system.prefs
I would greatly appreciate your suggestion on proper use of Kodo in web
container environment
Thank you very much in advance
Alex Roytman
I want to add that even if we put both the Kodo jars and the persistent classes jar in WEB-INF/lib, to be loaded by the context class loader, it does not work due to the following piece of code in Prefs.java:
private static synchronized void loadSystem() {
    try {
        Enumeration enumeration = (com.techtrader.util.app.Prefs.class)
            .getClassLoader().getResources("system.prefs");
        InputStream inputstream;
        for (; enumeration.hasMoreElements(); inputstream.close()) {
            URL url = (URL) enumeration.nextElement();
            inputstream = url.openStream();
            _cache.parse(inputstream, url.toString(), "");
        }
    } catch (Exception exception) {
        throw new ParseException(exception);
    }
}
However, if we replace it with the following, it will work:
private static synchronized void loadSystem() {
    try {
        URL url = com.techtrader.util.app.Prefs.class
            .getClassLoader().getResource("system.prefs");
        InputStream inputstream = url.openStream();
        _cache.parse(inputstream, url.toString(), "");
        inputstream.close();
    } catch (Exception exception) {
        throw new ParseException(exception);
    }
}
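For what it is worth, a common servlet-friendly pattern (my own sketch, not
actual Kodo code) is to consult the thread context class loader first and
fall back to the loader that defined the class. That way a copy of the
resource in WEB-INF/lib is found even when the library itself lives in a
shared loader:

```java
import java.net.URL;

public class ResourceLookup {
    // Try the context class loader (set per web application by the container)
    // before the loader that defined this class (possibly a shared loader).
    public static URL findResource(String name) {
        ClassLoader ctx = Thread.currentThread().getContextClassLoader();
        URL url = (ctx != null) ? ctx.getResource(name) : null;
        if (url == null) {
            url = ResourceLookup.class.getClassLoader().getResource(name);
        }
        return url;
    }
}
```

With this ordering, each web context would resolve its own system.prefs,
while stand-alone use still works through the defining loader.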
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
Hello,
Kodo does not work with Apache Tomcat 3 and 4 unless the Kodo jars and my
persistent classes are loaded by the same class loader. Kodo uses
Class.forName() and Class.getResources() on the class loader which loaded the
Kodo classes, and thus is not able to find system.prefs or package.jdo (and
even if it were able to find system.prefs, how would it handle multiple
system.prefs files from multiple contexts, each managed by its own class
loader?).
The problem, as I see it, is that web containers usually have container
class loader(s) and a class loader per web context. The desired configuration
would be to share the Kodo classes among all contexts and implement a JNDI
factory to provide PersistenceManagerFactory lookup in each web container
environment. Following the Tomcat specs, I placed the Kodo jars in
$tomcat_home/lib (the class loader shared among all contexts) and deployed my
persistent classes along with system.prefs and the *.jdo files to my context's
WEB-INF/lib directory. As a result, Kodo would not find the system.prefs
resource and would eventually fail with the error: required resource db/url
could not be loaded. I tried placing the Kodo jars together with my persistent
classes in WEB-INF/lib, but it did not help (this was really strange, since
all classes should then have been loaded by the same context class loader).
The only configuration that works is when I put Kodo and my persistent classes
in Tomcat's common directory or on the system class path, which is of course
not the right way to go, as context-specific classes such as my persistent
classes should not be shared across contexts.
Without source code it is hard to figure out what is going on and how
resources get loaded, so could somebody look into it? It is quite important
for us to resolve this issue, as we are planning to use Kodo with Tomcat.
Also, I am not sure that system.prefs is a good idea. As I mentioned, when
integrating with an app server or web container environment via JNDI, it is
up to the JNDI factory to create and configure PersistenceManagerFactories.
Different PersistenceManagerFactories will be bound to different JNDI names,
often in different contexts under common or separate class loaders, and in
my opinion they should not depend on a common system.prefs.
I would greatly appreciate your suggestions on the proper use of Kodo in a
web container environment.
Thank you very much in advance.
Alex Roytman