Caching in Kodo
Hi,
We are using Tomcat 4.0 for our application. We have JSPs and Java
classes. I wanted to know, if we intend to use caching, whether we can use it in
our current environment. I mean, are EJBs compulsory for caching? And
do we have to use the Id class for caching?
Or will caching just work with any app environment that uses a data store?
thanks
adib
Adib,
Our caching plug-in is not at all dependent on an application server. You
should be able to use our caching features to increase performance in any
Kodo-based application.
-Patrick
On 7/20/02 9:56 PM, "adib motiwala" <[email protected]> wrote:
Patrick Linskey [email protected]
SolarMetric Inc. http://www.solarmetric.com
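For context on Patrick's point that no application server is needed: in later Kodo releases, enabling the data cache is purely a matter of configuration. A minimal kodo.properties fragment might look like the following; the property names are from the Kodo 3.x manual (check the data cache chapter for your version), and sjvm assumes a single-JVM deployment such as one Tomcat instance.

```properties
# Turn on Kodo's L2 data cache (Kodo 3.x property names)
kodo.DataCache: true
# sjvm = single-JVM commit notification; suitable for one Tomcat instance
kodo.RemoteCommitProvider: sjvm
```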
Similar Messages
-
Class kodo.util.ObjectNotFoundException
Hi,
I'm getting this exception when running my application
class kodo.util.ObjectNotFoundException
The instance of type "com.jario.server.entitymodel.EntityLocation" with oid
"com.jario.server.entitymodel.EntityLocation-753" no longer exists in the
data store. This may mean that you deleted the instance in a separate
persistence manager, but this persistence manager still has a cached
version.
kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java:2667)
kodo.runtime.StateManagerImpl.loadField(StateManagerImpl.java:2744)
kodo.runtime.StateManagerImpl.isLoaded(StateManagerImpl.java:1116)
com.jario.server.entitymodel.EntityLocation.jdoGetfileStorageType(EntityLoca
tion.java)
But the object is in the database with JDOID 753.
When I restart the instance, it finds it without any bother.
I'm running Kodo 3.0.1 Ent. (Licenced)
Why is this happening?
Kind Regards
Graham Cruickshanks
Kodo 3.0.2 seems to have fixed this issue.
Cheers
Graham Cruickshanks
"Graham Cruickshanks" <[email protected]> wrote in message
news:buqr0f$7qk$[email protected]..
>
"Abe White" <[email protected]> wrote in message
news:buq0t4$kg9$[email protected]..
Are you using the data cache?
No, Kodo.properties are as follows:
javax.jdo.PersistenceManagerFactoryClass:
kodo.jdbc.runtime.JDBCPersistenceManagerFactory
kodo.LicenseKey: <REMOVED>
kodo.PersistenceManagerImpl: CloseOnManagedCommit=true
kodo.jdbc.SequenceFactory: PrimaryKeyColumn=PKX, SequenceColumn=SEQUENCEX,
TableName=JDO_SEQUENCEX
kodo.FlushBeforeQueries: true
javax.jdo.option.IgnoreCache: false
javax.jdo.option.ConnectionDriverName: com.mysql.jdbc.Driver
javax.jdo.option.ConnectionPassword:
javax.jdo.option.ConnectionURL:
jdbc:mysql://localhost/JarioServer?autoReconnect=true
javax.jdo.option.ConnectionUserName: JarioServer
kodo.jdbc.DBDictionary: mysql(StoreCharsAsNumbers=false, TableType=InnoDB)
Have you possibly deleted the object and flushed within the same
transaction?
I have no delete actions in this part of my application, so this is an
impossibility.
If you enable SQL logging, you can see the last SELECT statement that is
failing. What happens if you run this against your DB directly?
I'm experiencing other problems beyond the listed error. I'm experiencing
sporadic NullPointerExceptions on calls that should return values from
the database.
The SQL trace shows the SELECT query has been done, and if I try running
it directly against the database it returns results. The tables where this
happens have not been written to since application startup,
so it can't be a flushing/data cache error.
The database and driver have not changed from Kodo 2.x, which worked fine.
Details
Database: 4.10.13-nt
Driver: mysql-connector-java-3.0.9-stable-bin.jar (Tried 3.0.10 as well,
same issue)
Java version: 1.4.2_01-b06
Kodo: 3.0.1 -
I am sending DTOs between client and server instead of using attach/detach
to send the persistent objects back and forth between the presentation tier
and business tier. Since persistent objects have a JDOVERSION, do I also need
to add a version to the DTOs? Is Kodo handling the versioning for me, or do I
have to do it myself? Let me explain my problem with an example.
Let's say there is a table called CONSUMER in the database. It has the
following columns.
JDOID 1
JDOVERSION 1
NAME John Smith
SSN 111-11-1111
The corresponding persistent object is
Consumer
-- name
-- ssn
The DTO is
ConsumerDTO
-- objectId
-- name
-- ssn
Client A calls getConsumer() on the Session Facade and got the following
consumer information.
ConsumerDTO
============================
JDOID 1
NAME John Smith
SSN 111-11-1111
Client B also calls getConsumer() on the Session Facade and got the
following consumer information.
ConsumerDTO
============================
JDOID 1
NAME John Smith
SSN 111-11-1111
Client A changes the consumer name to Mary Smith, calls updateConsumer() on
the Session Facade, and commits the transaction. The consumer info in the
database table gets changed. I think the JDOVERSION will be changed to 2.
Now, Client B also changes the SSN to 222-22-2222 and calls the Session
Facade to update the consumer and commit. Is Kodo smart enough to know that
Client B cannot make the commit since the version has changed? Or do we
need to do the versioning ourselves?
Thanks in advance for your help!
Since my persistent objects are not simply data objects (they
contain business logic), I do not want to simply detach them and expose
them to the client. Secondly, I do not want to couple the presentation tier and
business tier by sharing the persistent objects between the UI and server
sides. I would like to use DTOs to decouple the UI and server and give the UI
much more flexibility.
If I write my own versioning system using DTOs, what is Kodo going to do
with JDOVERSION? Don't you think this is a JDO limitation if the user wants to
use DTOs?
For my second question, I am using a PM. I mentioned in my previous entry
that the PM will be opened and closed between the two requests. You just
answered "if not". What about "if yes"? If I use the PM, then what happens?
Thanks a lot!
Stephen Kim wrote:
First, note that the detach/attach feature solves these problems for you, as
it provides DTO functionality in a much simpler package.
As for your question, versioning is transparent to the persistent
object. In fact, exposing it by mapping a field to the column would be
problematic. However, the state-image version indicator simply uses
the fields themselves as the versioning info, so that a specific version
number doesn't have to be transferred. Alternatively, you can use Kodo's
runtime APIs to retrieve that information for use in some
versioning system. And finally, you can write your own DTO-specific
versioning system using a persistent field which encapsulates the
behavior you need.
As for your second question, it depends on whether you are using the same PM
or not. If not, no. The cache only stores the -data- and not the
persistent object itself.
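The last option Stephen mentions, a DTO-specific versioning scheme, amounts to the classic optimistic check: the DTO carries the version it was read at, and the facade rejects an update whose snapshot is stale. Below is a minimal, self-contained simulation of that check; the names (ConsumerDTO, updateConsumer, and so on) mirror the example in this thread, but none of it is Kodo API.

```java
import java.util.HashMap;
import java.util.Map;

public class VersionCheckDemo {

    // DTO carries the JDOVERSION value it was read at.
    static class ConsumerDTO {
        long objectId;
        long version;
        String name;
        String ssn;
    }

    // Stands in for the CONSUMER row (JDOID -> data + JDOVERSION).
    static class Record {
        long version = 1;
        String name;
        String ssn;
    }

    static final Map<Long, Record> store = new HashMap<>();

    static ConsumerDTO getConsumer(long id) {
        Record r = store.get(id);
        ConsumerDTO dto = new ConsumerDTO();
        dto.objectId = id;
        dto.version = r.version;  // snapshot the version at read time
        dto.name = r.name;
        dto.ssn = r.ssn;
        return dto;
    }

    static void updateConsumer(ConsumerDTO dto) {
        Record r = store.get(dto.objectId);
        if (r.version != dto.version)  // the optimistic check
            throw new IllegalStateException(
                "stale DTO: row is now at version " + r.version);
        r.name = dto.name;
        r.ssn = dto.ssn;
        r.version++;  // mimic the JDOVERSION bump on commit
    }

    public static void main(String[] args) {
        Record r = new Record();
        r.name = "John Smith";
        r.ssn = "111-11-1111";
        store.put(1L, r);

        ConsumerDTO a = getConsumer(1L);  // Client A reads at version 1
        ConsumerDTO b = getConsumer(1L);  // Client B reads at version 1

        a.name = "Mary Smith";
        updateConsumer(a);                // commits; version becomes 2

        b.ssn = "222-22-2222";
        try {
            updateConsumer(b);            // rejected: B's snapshot is stale
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Client B's update fails exactly as in the CONSUMER example above, because B read the row at version 1 but the row is at version 2 by the time B commits.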
Gene Shen wrote:
Abe White wrote:
Answered via support. See documentation for version indicators.
Thanks, Abe. I read the documentation for version indicators. I still
didn't get my answer. We are using the default version-number indicator.
The problem is that I am using DTOs, which keep the identity in order to
re-associate the persistent object. My question is: does the DTO need to
keep the version or not?
BTW, I have another question.
I have two client requests. The persistence manager will be opened and closed
between these two requests.
The first request is to get a consumer by name. Once I get the consumer
persistent object, I create a DTO and set the identity and other
attributes from the persistent object.
The second request is to update the consumer after the UI has made some
changes. I am using the identity in the DTO to re-associate the persistent
object using:
consumer =
(Consumer) pm.getObjectById(pm.newObjectIdInstance(Consumer.class, objId),
true);
What does Kodo do after I call the above method? Does it return the same
persistent object as in my first request using the cache, or is Kodo going to
query the database and give me a new persistent object? Thanks!
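As a toy model of Stephen's answer above (the data cache stores data rather than live persistent objects, and each PersistenceManager materializes its own instances), the following self-contained sketch uses a per-PM identity map. It is an illustration only, not Kodo code; ToyPM and the oid-keyed maps are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class PmCacheDemo {

    static class Consumer {
        String name;
        Consumer(String name) { this.name = name; }
    }

    // Shared "data cache": oid -> raw field data, no live objects.
    static final Map<Long, String> dataCache = new HashMap<>();

    static class ToyPM {
        // Per-PM identity map: oid -> the one live instance in this PM.
        private final Map<Long, Consumer> identityMap = new HashMap<>();

        Consumer getObjectById(long oid) {
            // Same PM + same oid -> the exact same instance; otherwise
            // materialize a fresh instance from the cached data.
            return identityMap.computeIfAbsent(
                oid, id -> new Consumer(dataCache.get(id)));
        }
    }

    public static void main(String[] args) {
        dataCache.put(1L, "John Smith");

        ToyPM pm1 = new ToyPM();
        Consumer first = pm1.getObjectById(1L);
        Consumer again = pm1.getObjectById(1L);
        System.out.println("same PM, same instance: " + (first == again));

        ToyPM pm2 = new ToyPM();  // PM closed and reopened between requests
        Consumer second = pm2.getObjectById(1L);
        System.out.println("new PM, same instance: " + (first == second));
    }
}
```

Within one PM, repeated getObjectById calls for the same oid return the identical instance; a fresh PM builds a new instance, even when the field data comes from the shared cache rather than the database.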
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com -
Using Tangosol Coherence in conjunction with Kodo JDO for distributing caching
JDO currently has a perception problem in terms of performance. Transparent
persistence is perceived to have a significant performance overhead compared
to hand-coded JDBC. That was certainly true a while ago, when the first JDO
implementations were evaluated. They typically performed about half as well
and with higher resource requirements. No doubt JDO vendors have closed that
gap by caching PreparedStatements, queries, data, and by using other
optimizations.
Aside from the ease of programming through transparent persistence, I
believe that using JDO in conjunction with distributed caching techniques in
a J2EE managed environment has the opportunity to transparently give
scalability, performance, and availability improvements that would otherwise
be much more difficult to realize through other persistence techniques.
In particular, it looks like Tangosol is doing a lot of good work in the
area of distributed caching for J2EE. For example, executing parallelized
searches in a cluster is a capability that is pretty unique and potentially
very valuable to many applications. There would appear to me to be a lot of
synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
implementation of Kodo JDO's distributed cache would be a natural desire for
enterprise applications that have J2EE clustering requirements for high
scalability, performance, and availability.
I'm wondering if Solarmetric has any ideas or plans for closer integration
(e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is just my
personal opinion, but I think a partnership between your two organizations
to do this integration would be mutually advantageous, and it would
potentially be very attractive to your customers.
Ben
Marc,
Thanks for pointing that out. That is truly excellent!
Ben
"Marc Prud'hommeaux" <[email protected]> wrote in message
news:[email protected]...
Ben-
We do currently have a plug-in for backing our data cache with a
Tangosol cache.
See: http://docs.solarmetric.com/manual.html#datastore_cache_config
In article <[email protected]>, Ben Eng wrote:
--
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
[Repost] Kodo do not use cache when resolving references to objects?
Abe,
I noticed that if object A references object B and we read multiple A
objects which reference the same B object, this B object will be re-read
from the database for every fetched A object. If I read 100 A objects
referencing the same B, it will do 100 selects on the same B. I use
optimistic transactions with retain values = true.
Could you please comment on it
Thank you very much
Alex
Was there something else about this you'd like me to comment on?
No, Abe, that is it. I was not sure I wasn't doing something wrong. It
seemed very strange to me that internal reference resolution completely
ignores the cache.
I think it is quite important - you wouldn't believe how much it slows down
object graph retrieval. It does not seem to be hard to implement. :-)
I guess when resolving a reference you would have to use the same
validate true/false settings as for retrieving the object?
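The slowdown Alex describes is the classic N+1 pattern. The self-contained sketch below is not Kodo internals; selectB merely stands in for a database read. It shows how caching the resolved reference by oid collapses 100 lookups into one:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReferenceResolutionDemo {
    static int selectCount = 0;

    // Stands in for a SELECT against the B table.
    static String selectB(long oid) {
        selectCount++;
        return "B#" + oid;
    }

    // One SELECT per fetched A, even when every A points at the same B.
    static List<String> resolveNaively(List<Long> bRefs) {
        List<String> out = new ArrayList<>();
        for (long oid : bRefs)
            out.add(selectB(oid));
        return out;
    }

    // Cache resolved references by oid: the same B is read only once.
    static List<String> resolveWithCache(List<Long> bRefs) {
        Map<Long, String> cache = new HashMap<>();
        List<String> out = new ArrayList<>();
        for (long oid : bRefs)
            out.add(cache.computeIfAbsent(oid, ReferenceResolutionDemo::selectB));
        return out;
    }

    public static void main(String[] args) {
        List<Long> refs = Collections.nCopies(100, 42L);  // 100 As, one B

        selectCount = 0;
        resolveNaively(refs);
        System.out.println("naive selects: " + selectCount);

        selectCount = 0;
        resolveWithCache(refs);
        System.out.println("cached selects: " + selectCount);
    }
}
```

The naive path issues 100 selects for the 100 A objects; the cached path issues exactly one, which is the optimization Abe agrees below could be done "a little more intelligently".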
-----Original Message-----
From: White, Abe [mailto:[email protected]]
Sent: Monday, August 06, 2001 6:10 PM
To: JDO-ListServ
Subject: RE: [Repost] Kodo do not use cache when resolving references to
objects?
this B object will be re-read
from database for every fetched A object
Yes, this seems like inefficient behavior that we should optimize. The
same B instance is returned from each relation, so it is not a bug, but
obviously it is something we could be doing a little more intelligently.
Thanks for pointing it out!
Was there something else about this you'd like me to comment on? -
I'm using Kodo 2.3.2 running in a managed environment (JBoss 3.x). I have
it configured to use the "LocalCache" for caching of object instances and I
can see that the cache is working.
Does anyone know how I can get a reference to the DataCache? There is
example code in section 7.3.2 of the Kodo Manual, but that code doesn't
compile for me.
The specific code from the manual is:
PersistenceManagerFactoryImpl factory = (PersistenceManagerFactoryImpl)
pm.getPersistenceManagerFactory();
factory.getDataCache().pin(JDOHelper.getObjectId(o));
The PersistenceManagerFactoryImpl doesn't have a method "getDataCache".
Thanks in advance.
Oops; that should be 'factory.getConfiguration().getDataCache()'.
-Fred
In article <amphkp$bes$[email protected]>, TJanusz wrote:
Fred Lucas
SolarMetric Inc.
202-595-2064 x1122
http://www.solarmetric.com -
Cache and/or Connection problems under load
I have a Kodo web app that's been running just fine in
production for many months now. However, recently the web
traffic has shot up by a huge amount, literally overnight.
But unfortunately, it's caused the app to fail very ungracefully
under the strain.
It's been a crazy few days, and I haven't been able to do
very much analysis because of higher priorities. But from
what I have been able to glean, it now looks like Kodo is
the most likely culprit. From what I've read in other messages
here, it appears others may have been experiencing similar
problems.
My environment: Redhat Linux 8, Postgres 7.3.4 with the
included JDBC3 driver, Apache 1.3.x, Tomcat 4.1.x and the
webapp connector. Similar behavior was seen with Apache 2.x,
Tomcat 4.1.x and the JK2 connector (that was on the new machine
I setup to handle the new traffic, which, of course, died the
night before).
As I mentioned, this app has been running reliably for
months with no problems. But when placed under heavy load,
it appears to get into some sort of pathological state where
it slows down dramatically (asymptotically?) to the point where
it's effectively locked up. In one case, where the app was
left running for several hours in this state, requests were
taking 90 minutes to complete (normal is 1-5 seconds).
From what I can deduce, there seem to be four things
going on, three of which have been mentioned in recent threads
here:
1) Excessive memory consumption. When the app is
operating normally, I see fairly flat memory usage for
the JVM process. Under load, the JVM steadily expands
until it hits its heap limit. I've gotten OutOfMemory
exceptions with a heap size of 350MB, which should be plenty.
2) Level 2 cache locking issues. I've seen dozens of
threads waiting on a lock in the DataCache code. Not sure
if there's a deadlock happening here or just that the
threads are waiting on a lock that's being held for a long time.
3) Database Connection leaks or contention. I see threads
spinning in the DataSource code trying to get a connection.
I also see dozens of connections from the Postgres side which
seem to be sitting idle, but in the middle of a transaction.
When things get bad, I also see exceptions being thrown because
of timeouts waiting for a connection to become available. It's
a web app, PMs should not be tied up for more than a few seconds.
4) CPU usage pegged or nearly so for the JVM. I suspect
this is related to #3. Something very bad is going on here.
If I stop all inbound requests to the JVM when it's in this
bad state, it will continue to burn CPU at 90%+ for a very
long time. I think it will eventually finish what it's doing,
but I haven't had the luxury of waiting for it. It's definitely
not a linear slowdown proportional to the load.
Attached are my kodo.properties file and some thread stack
traces along with some comments. Any advice would be greatly
appreciated. This is not a complicated app nor am I doing
anything unusual. It doesn't seem logical that Kodo could
breakdown so dramatically under load, so I'm hoping it's some
sort of interaction thing that I can work around.
Thanks.
Ron Hitchens {mailto:[email protected]} RonSoft Technologies
(510) 494-9597 (Home Office) http://www.ronsoft.com
(707) 924-3878 (fax) Bit Twiddling At Its Finest
"Born with a broken heart" -Kenny Wayne Shepard
Please read prior posts regarding the level 2 cache. It is unusable under
stress as far as I am concerned. Basically the entire cache gets locked on any
database read, which makes it very unscalable.
Are you using 2.5.3? It will request a connection from the pool every time it
resolves a reference to a PC, even if it is cached in the PM and therefore Kodo
does not need to read anything. As a result, if you iterate over 100 objects in
your query and for each object resolve a reference to another object (always
the same one), Kodo will request 100 database connections from the pool (and
note they issue a rollback every time they return a connection to the pool, so
getting a connection might be fairly expensive).
In conjunction with level 2 cache contention, this causes the application to go
into a stupor.
Try to go back to 2.5.2 (or maybe 2.5.4, which they have promised in the near
future with a workaround), or use "persistent-manager" connection retention if
you discard the PM after each HTTP invocation - that will take care of the
connection pooling issue. As for the L2 cache, I was unable to find any
workaround so far - see if you might be better off without the cache. You might
be, if your object graph is not very complex.
"Ron Hitchens" <[email protected]> wrote in message
news:[email protected]...
With cache enabled, 2.5.3
Here the app had recently slowed down and then effectively locked up.
There were many outstanding web requests that were not receiving output.
At this point most threads seemed to be waiting at the same location.
There were a large number of active database connections and most of
them had open transactions (according to pg_stat_activity). The app
was not responding to any web requests.
It would seem that db transactions had been started, then the thread
got stuck for a long time on a synchronization lock in the cache lookup.
Below are two randomly chosen thread stack dumps.
Thread-72[1] where
[1] java.lang.Object.wait (native method)
[2] java.lang.Object.wait (Object.java:429)
[3] oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$ReaderLock.acquire (WriterPreferenceReadWriteLock.java:169)
[4] com.solarmetric.kodo.runtime.datacache.AbstractCacheImpl.acquireReadLock (AbstractCacheImpl.java:384)
[5] com.solarmetric.kodo.runtime.datacache.TimedDataCache.acquireReadLock (TimedDataCache.java:256)
[6] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load (DataCacheStoreManager.java:595)
[7] com.solarmetric.kodo.runtime.StateManagerImpl.loadFields (StateManagerImpl.java:2330)
[8] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:897)
[9] com.europeasap.data.City.jdoGetname (null)
[10] com.europeasap.data.City.getName (City.java:39)
[11] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo (CustomerBookingForm.java:922)
[12] com.europeasap.form.CustomerBookingForm.onetimeInit (CustomerBookingForm.java:871)
[13] com.europeasap.form.CustomerBookingForm.populatePackageInfo (CustomerBookingForm.java:880)
[14] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
[15] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1787)
[16] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1586)
[17] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
[18] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
[19] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
[20] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
[21] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
[22] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
[23] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[24] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[25] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[26] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
[27] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[28] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
[29] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[30] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[31] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[32] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2415)
[33] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
[34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[35] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
[36] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[37] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
[38] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[39] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[40] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[41] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
[42] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[43] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[44] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[45] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
[46] org.apache.catalina.connector.warp.WarpConnection.run (null)
[47] java.lang.Thread.run (Thread.java:534)
Thread-64[1] where
[1] java.lang.Object.wait (native method)
[2] java.lang.Object.wait (Object.java:429)
[3] oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$ReaderLock.acquire (WriterPreferenceReadWriteLock.java:169)
[4] com.solarmetric.kodo.runtime.datacache.AbstractCacheImpl.acquireReadLock (AbstractCacheImpl.java:384)
[5] com.solarmetric.kodo.runtime.datacache.TimedDataCache.acquireReadLock (TimedDataCache.java:256)
[6] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load (DataCacheStoreManager.java:595)
[7] com.solarmetric.kodo.runtime.StateManagerImpl.loadField (StateManagerImpl.java:2248)
[8] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:899)
[9] com.europeasap.data.HotelPrices.jdoGetseasonalPrices (null)
[10] com.europeasap.data.HotelPrices.normalizeIndex (HotelPrices.java:113)
[11] com.europeasap.data.HotelPrices.getCost (HotelPrices.java:45)
[12] com.europeasap.logic.CostHelper.findLowestHotel (CostHelper.java:181)
[13] com.europeasap.logic.CostHelper.computeBasePackageCost (CostHelper.java:297)
[14] com.europeasap.logic.CostHelper.computeFinalPackageCost (CostHelper.java:246)
[15] com.europeasap.form.CustomerBookingForm.updateDisplayCosts (CustomerBookingForm.java:1440)
[16] com.europeasap.form.CustomerBookingForm.updateCustomizeDisplayInfo (CustomerBookingForm.java:1407)
[17] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:68)
[18] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1787)
[19] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1586)
[20] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
[21] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
[22] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
[23] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
[24] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
[25] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
[26] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[27] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[28] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[29] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
[30] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[31] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
[32] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[33] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[34] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[35] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2415)
[36] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
[37] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[38] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
[39] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[40] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
[41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[42] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[43] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[44] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
[45] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[46] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
[47] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
[48] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
[49] org.apache.catalina.connector.warp.WarpConnection.run (null)
[50] java.lang.Thread.run (Thread.java:534)
while running slow, 2.5.3
At this point, the app had been running several hours normally, then
apparently slowed down and locked up while I was away. When looking
at the app threads and database activity, everything appeared idle.
No transactions seemed to be open in the db. But the app was not
behaving normally. Web requests that did not make use of JDO worked
fine (though slowly), but requests that hit the db either blocked or
were very slow to respond.
Looking back at the log, there had been a large number of requests
that threw exceptions because they could not get a connection within
five seconds.
Most threads were idle, waiting on read, but some were in the state
shown by the following two stack dumps. Unlike the cache threads above,
they did not seem to be waiting for a lock to be granted; they seemed
to be spinning in the connection management code, apparently trying
to get a connection. I suspended and resumed the same thread repeatedly,
and it always seemed to be doing the same thing. Single stepping was
very difficult because the debugger was slow to respond, apparently
because other threads were also busy spinning.
Postgres indicated that there were lots of connections open and
that they were all idle, so there should not have been a shortage
of connections in the pool. There are two mysteries here: 1) why
can't this thread get a connection? and 2) why is it busy spinning?
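One classic way a pool can look idle at the database yet starve the application is leaked checkouts: connections handed out to request threads and never returned. The following is only a hypothetical plain-Java sketch (the class, pool size, and method names are invented, not Kodo's actual DataSourceImpl internals) of how that failure mode produces exactly these symptoms: the database reports idle connections while the application times out waiting for one.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical bounded pool (not Kodo's real DataSourceImpl): the
// database would see POOL_SIZE idle connections, but once every permit
// is checked out and never returned, all further requests time out.
public class LeakyPoolDemo {
    private final Semaphore permits;

    public LeakyPoolDemo(int poolSize) {
        this.permits = new Semaphore(poolSize);
    }

    // Try to check a "connection" out of the pool within timeoutMs.
    public boolean checkout(long timeoutMs) {
        try {
            return permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // A request handler that forgets to return its connection.
    public void leakyRequest() {
        checkout(0);
        // ... work happens, but no release() on any code path ...
    }

    public static void main(String[] args) {
        LeakyPoolDemo pool = new LeakyPoolDemo(2);
        pool.leakyRequest();
        pool.leakyRequest();
        // The pool is now exhausted even though the database is idle:
        System.out.println(pool.checkout(100));  // prints "false"
    }
}
```

Whether this particular mode explains the spinning observed above is a separate question; it would at least explain the "could not get a connection within five seconds" exceptions.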
Thread-56[1] where
[1] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:184)
[2] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:169)
[3] com.solarmetric.datasource.ConnectionWrapper.prepareStatement(ConnectionWrapper.java:199)
[4] com.solarmetric.kodo.impl.jdbc.schema.dict.AbstractDictionary.isClosed (AbstractDictionary.java:1,912)
[5] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnectionFromFactory (SQLExecutionManagerImpl.java:186)
[6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnection(SQLExecutionManagerImpl.java:147)
[7] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.newSQLExecutionManager (JDBCStoreManager.java:828)
[8] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getSQLExecutionManager (JDBCStoreManager.java:714)
[9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getDatastoreConnection (JDBCStoreManager.java:287)
[10] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.getDatastoreConnection (DataCacheStoreManager.java:465)
[11] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load(DataCacheStoreManager.java:591)
[12] com.solarmetric.kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java:2,330)
[13] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded(StateManagerImpl.java:897)
[14] com.europeasap.data.City.jdoGetname (null)
[15] com.europeasap.data.City.getName (City.java:39)
[16] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo(CustomerBookingForm.java:922)
[17] com.europeasap.form.CustomerBookingForm.onetimeInit(CustomerBookingForm.java:871)
[18] com.europeasap.form.CustomerBookingForm.populatePackageInfo(CustomerBookingForm.java:880)
[19] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
[20] org.apache.struts.action.ActionServlet.processActionPerform(ActionServlet.java:1,787)
[21] org.apache.struts.action.ActionServlet.process(ActionServlet.java:1,586)
[22] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
[23] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
[24] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
[25] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:247)
[26] org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:193)
[27] org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:260)
[28] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[29] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[30] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[31] org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
[32] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[33] org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
[34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[35] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[36] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[37] org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2,415)
[38] org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:180)
[39] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[40] org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:170)
[41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[42] org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:172)
[43] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[44] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[45] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[46] org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:174)
[47] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[48] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[49] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[50] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
[51] org.apache.catalina.connector.warp.WarpConnection.run (null)
[52] java.lang.Thread.run (Thread.java:534)
Thread-56[1] where
[1] com.solarmetric.datasource.DataSourceImpl$AbstractPool.findConnection (DataSourceImpl.java:826)
[2] com.solarmetric.datasource.DataSourceImpl$AbstractPool.getConnection(DataSourceImpl.java:605)
[3] com.solarmetric.datasource.DataSourceImpl.getConnection(DataSourceImpl.java:363)
[4] com.solarmetric.datasource.DataSourceImpl.getConnection(DataSourceImpl.java:356)
[5] com.solarmetric.kodo.impl.jdbc.runtime.DataSourceConnector.getConnection (DataSourceConnector.java:63)
[6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnectionFromFactory (SQLExecutionManagerImpl.java:185)
[7] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnection(SQLExecutionManagerImpl.java:147)
[8] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.newSQLExecutionManager (JDBCStoreManager.java:828)
[9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getSQLExecutionManager (JDBCStoreManager.java:714)
[10] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getDatastoreConnection (JDBCStoreManager.java:287)
[11] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.getDatastoreConnection (DataCacheStoreManager.java:465)
[12] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.initialize (DataCacheStoreManager.java:519)
[13] com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState(StateManagerImpl.java:215)
[14] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter (PersistenceManagerImpl.java:1,278)
[15] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1,179)
[16] com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery$CachedResultList.get (CacheAwareQuery.java:432)
[17] java.util.AbstractList$Itr.next (AbstractList.java:421)
[18] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo(CustomerBookingForm.java:919)
[19] com.europeasap.form.CustomerBookingForm.onetimeInit(CustomerBookingForm.java:871)
[20] com.europeasap.form.CustomerBookingForm.populatePackageInfo(CustomerBookingForm.java:880)
[21] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
[22] org.apache.struts.action.ActionServlet.processActionPerform(ActionServlet.java:1,787)
[23] org.apache.struts.action.ActionServlet.process(ActionServlet.java:1,586)
[24] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
[25] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
[26] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
[27] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:247)
[28] org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:193)
[29] org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:260)
[30] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[31] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[32] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[33] org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
[34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[35] org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
[36] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[37] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[38] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[39] org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2,415)
[40] org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:180)
[41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[42] org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:170)
[43] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[44] org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:172)
[45] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
[46] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[47] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[48] org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:174)
[49] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[50] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[51] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[52] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
[53] org.apache.catalina.connector.warp.WarpConnection.run (null)
[54] java.lang.Thread.run (Thread.java:534)
With cache disabled 2.4.3
This run was an accident: I inadvertently ran the app with the older
2.4.3 version of Kodo, with the cache disabled. This one got into trouble
almost immediately. It also seemed to lock up, with lots of open transactions
in the db. It's also interesting that these two threads seem to be
hanging around the same method as in 2.5.3.
Thread-63[1] where 0x9f9
[1] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:184)
[2] com.solarmetric.datasource.ConnectionWrapper.prepareStatement(ConnectionWrapper.java:377)
[3] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.prepareStatementInternal (SQLExecutionManagerImpl.java:807)
[4] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedQueryInternal (SQLExecutionManagerImpl.java:761)
[5] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQueryInternal (SQLExecutionManagerImpl.java:691)
[6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery(SQLExecutionManagerImpl.java:372)
[7] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery(SQLExecutionManagerImpl.java:356)
[8] com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.loadByPK(ClassMapping.java:950)
[9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.initialize(JDBCStoreManager.java:263)
[10] com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState(StateManagerImpl.java:174)
[11] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter (PersistenceManagerImpl.java:1,023)
[12] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:942)
[13] com.solarmetric.kodo.impl.jdbc.ormapping.OneToOneMapping.load(OneToOneMapping.java:147)
[14] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:375)
[15] com.solarmetric.kodo.runtime.StateManagerImpl.loadField(StateManagerImpl.java:2,035)
[16] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded(StateManagerImpl.java:720)
[17] com.europeasap.data.CityMarkup.jdoGetcity (null)
[18] com.europeasap.data.CityMarkup.getCity (CityMarkup.java:30)
[19] com.europeasap.logic.CostHelper.getCityMarkup (CostHelper.java:81)
[20] com.europeasap.logic.CostHelper.computeBasePackageCost(CostHelper.java:289)
[21] com.europeasap.logic.CostHelper.computeFinalPackageCost(CostHelper.java:246)
[22] com.europeasap.form.CustomerBookingForm.updateDisplayCosts(CustomerBookingForm.java:1,440)
[23] com.europeasap.form.CustomerBookingForm.updateCustomizeDisplayInfo(CustomerBookingForm.java:1,407)
[24] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:68)
[25] org.apache.struts.action.ActionServlet.processActionPerform(ActionServlet.java:1,787)
[26] org.apache.struts.action.ActionServlet.process(ActionServlet.java:1,586)
[27] org.apache.struts.action.ActionServlet.doPost (ActionServlet.java:510)
[28] javax.servlet.http.HttpServlet.service (HttpServlet.java:760)
[29] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
[30] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:247)
[31] org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:193)
[32] org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:260)
[33] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
[34] org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)
[35] org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)
[36] org.apache.catal -
Caching problem w/ primary-foreign key mapping
I have seen this a couple of times now. It is not consistent enough to
create a simple reproducible test case, so I will have to describe it to you
with an example and hope you can track it down. It only occurs when caching
is enabled.
Here are the classes:
class C1 { int id; C2 c2; }
class C2 { int id; C1 c1; }
Each class uses application identity using static nested Id classes: C1.Id
and C2.Id. What is unusual is that the same value is used for both
instances:
int id = nextId();
C1 c1 = new C1(id);
C2 c2 = new C2(id);
c1.c2 = c2;
c2.c1 = c1;
This all works fine using optimistic transactions with caching disabled.
Although the integer values are the same, the oids are unique because each
class defines its own unique oid class.
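The point about the oids staying unique even with identical integer values can be sketched in plain Java. These are hypothetical, simplified Id classes (not Tom's actual code): because each persistent class has its own oid class, and equals() checks the oid type, a C1 oid never equals a C2 oid.

```java
// Simplified sketch of per-class oid classes. Same integer value,
// different oid types: the oids compare as unequal.
public class OidDemo {
    public static class C1Id {
        public final int id;
        public C1Id(int id) { this.id = id; }
        public boolean equals(Object o) {
            return o instanceof C1Id && ((C1Id) o).id == id;
        }
        public int hashCode() { return id; }
    }

    public static class C2Id {
        public final int id;
        public C2Id(int id) { this.id = id; }
        public boolean equals(Object o) {
            return o instanceof C2Id && ((C2Id) o).id == id;
        }
        public int hashCode() { return id; }
    }

    public static void main(String[] args) {
        // Distinct oid classes keep the identities distinct:
        System.out.println(new C1Id(753).equals(new C2Id(753)));  // prints "false"
    }
}
```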
Here is the schema and mapping (this works with caching disabled but fails
with caching enabled):
table t1: column id integer, column revision integer, primary key (id)
table t2: column id integer, column revision integer, primary key (id)
<jdo>
<package name="test">
<class name="C1" objectid-class="C1$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t1"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c2">
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
<class name="C2" objectid-class="C2$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t2"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c1">
<extension vendor-name="kodo" key="dependent" value="true"/>
<extension vendor-name="kodo" key="inverse-owner" value="c2"/>
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="table" value="t1"/>
<extension vendor-name="kodo" key="ref-column.id" value="id"/>
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
</package>
</jdo>
Because the ids are known to be the same, the primary key values are also
used as foreign key values. Accessing C2.c1 always yields a non-null value
when caching is disabled. With caching enabled, C2.c1 is usually non-null
but sometimes null. When it is null, we get warnings about dangling
references to deleted instances with id values of 0, and other similar warnings.
The workaround is to add a redundant column with the same value. For some
reason this works around the caching problem (this is unnecessary with
caching disabled):
table t1: column id integer, column id2 integer, column revision integer,
primary key (id), unique index (id2)
table t2: column id integer, column revision integer, primary key (id)
<jdo>
<package name="test">
<class name="C1" objectid-class="C1$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t1"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c2">
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="column.id" value="id2"/>
</extension>
</field>
</class>
<class name="C2" objectid-class="C2$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t2"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c1">
<extension vendor-name="kodo" key="dependent" value="true"/>
<extension vendor-name="kodo" key="inverse-owner" value="c2"/>
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="table" value="t1"/>
<extension vendor-name="kodo" key="ref-column.id" value="id2"/>
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
</package>
</jdo>
Needless to say, the extra column adds a lot of overhead, including the
addition of a second unique index, for no value other than working around
the caching defect.
Tom-
The first thing that I think of whenever I see a problem like this is
that the equals() and hashCode() methods of your application identity
classes are not correct. Can you check them to ensure that they are
written in accordance with the guidelines at:
http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
If that doesn't help address the problem, can you post the code for your
application identity classes so we can double-check, and we will try to
determine what might be causing the problem.
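As a reference point, a minimal application identity class that satisfies those guidelines might look like the following. This is illustrative only, not Tom's actual code; the usual JDO contract requires a public no-arg constructor, a String constructor that inverts toString(), and equals()/hashCode() based solely on the primary key fields.

```java
// Sketch of a JDO application identity class for a class with a
// single int primary key field named "id". Hypothetical, for
// illustration of the equals()/hashCode() contract.
public class C1Id {
    public int id;

    // JDO requires a public no-arg constructor.
    public C1Id() {}

    // And a String constructor that inverts toString().
    public C1Id(String str) { id = Integer.parseInt(str); }

    public String toString() { return String.valueOf(id); }

    public boolean equals(Object o) {
        if (this == o) return true;
        // Exact type check keeps oids of different classes unequal.
        if (o == null || o.getClass() != getClass()) return false;
        return ((C1Id) o).id == id;
    }

    // Must be consistent with equals(): same pk fields, same hash.
    public int hashCode() { return id; }
}
```

If either equals() or hashCode() deviates from this shape (for example, by including non-key fields, or by not being stable), cache lookups can miss or collide in exactly the intermittent way described above.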
In article <[email protected]>, Tom Landon wrote:
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Kodo and tangosol integration: ClassCastException in TangosolQueryCache
Hello.
I have tried to prepare my WebLogic environment to work with Kodo JDO 3.4.0 and Tangosol Coherence.
So, I prepared Kodo's ra.xml as:
<config-property>
<description>DataCache.</description>
<config-property-name>DataCache</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tangosol(TangosolCacheName=dist-kodo)</config-property-value>
</config-property>
<config-property>
<description>Plugin used to cache query results loaded from the data store. Must implement kodo.datacache.QueryCache.</description>
<config-property-name>QueryCache</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tangosol(TangosolCacheName=dist-kodo)</config-property-value>
</config-property>
<config-property>
<description>Remote Commit Provider.</description>
<config-property-name>RemoteCommitProvider</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>sjvm</config-property-value>
</config-property>
When I start WebLogic 8.1, I find the following in STDOUT:
* Tangosol Coherence(tm): Enterprise Edition is licensed by Tangosol, Inc.
* License details are available at: http://www.tangosol.com/license.jsp
* Licensed for evaluation use with the following restrictions:
* Effective Date : 1 Sep 2006 00:00:00 GMT
* Termination Date : 1 Nov 2006 00:00:00 GMT
* A production license is required for production use.
* Copyright (c) 2000-2006 Tangosol, Inc.
Tangosol Coherence Version 3.2/357 (Pre-release)
2006-09-11 17:43:49.303 Tangosol Coherence 3.2/357 (Pre-release) <Info> (thread=main, member=n/a): sun.misc.AtomicLong is not supported on this JVM; using a syncrhonized counter.
but when I try to use DataCache=tangosol, I get a ClassCastException:
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): An exception occurred while dispatching the following event:
CacheEvent: MapListenerSupport$FilterEvent{DistributedCache$BinaryMap deleted: key=Binary(length=141, value=0x0BACED00057372000C6B6F646F2E7574696C2E4964DDFCC3A1DB13765D0300024A000269644C00095F747970654E616D657400124C6A6176612F6C616E672F537472696E673B78700000000000000001740039636F6D2E626561722E66692E74726164656875622E646F6D61696E2E627573696E6573732E436C656172616E6365437573746F6D65724A444F78), value=null, filters=[MapEventFilter(mask=DELETED)]}
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): The following exception was caught by the event dispatcher:
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4):
java.lang.ClassCastException
at kodo.datacache.TangosolQueryCache.entryDeleted(TangosolQueryCache.java:291)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:550)
at com.tangosol.coherence.component.util.SafeNamedCache.translateMapEvent(SafeNamedCache.CDB:7)
at com.tangosol.coherence.component.util.SafeNamedCache.entryDeleted(SafeNamedCache.CDB:1)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.dispatch(DistributedCache.CDB:22)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.entryDeleted(DistributedCache.CDB:1)
at com.tangosol.util.MapEvent.dispatch(MapEvent.java:199)
at com.tangosol.coherence.component.util.CacheEvent.run(CacheEvent.CDB:18)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onNotify(Service.CDB:14)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
at java.lang.Thread.run(Thread.java:534)
2006-09-11 17:45:57.253 Tangosol Coherence 3.2/357 (Pre-release) <Error> (thread=DistributedCache:EventDispatcher, member=4): (The service event thread has logged the exception and is continuing.)
When I have a look at TangosolQueryCache.jad (the Kodo part, decompiled from .class with jad), I find:
public void entryDeleted(MapEvent evt) {
    QueryKey queryKey = (QueryKey) evt.getKey();
    boolean isExpired = true;
    keyRemoved(queryKey, isExpired);
}
but in com.tangosol.util.MapEvent (tangosol.jar):
public Object getKey() {
    return m_oKey;
}
So TangosolQueryCache tries to cast the Object returned by getKey() to QueryKey.
Any ideas how to resolve this problem?
And in general, where can I find an example of integrating the Kodo data cache with Tangosol Coherence? The examples at http://edocs.bea.com/kodo/docs40/full/html/ref_guide_caching.html are not very complete.
Thanks for any answers.
Hi Pavel,
Key in this event is an instance of kodo.util.Id:
00: 0B AC ED 00 05 73 72 00 0C 6B 6F 64 6F 2E 75 74 .....sr..kodo.ut
10: 69 6C 2E 49 64 DD FC C3 A1 DB 13 76 5D 03 00 02 il.Id......v]...
20: 4A 00 02 69 64 4C 00 09 5F 74 79 70 65 4E 61 6D J..idL.._typeNam
30: 65 74 00 12 4C 6A 61 76 61 2F 6C 61 6E 67 2F 53 et..Ljava/lang/S
40: 74 72 69 6E 67 3B 78 70 00 00 00 00 00 00 00 01 tring;xp........
50: 74 00 39 63 6F 6D 2E 62 65 61 72 2E 66 69 2E 74 t.9com.bear.fi.t
60: 72 61 64 65 68 75 62 2E 64 6F 6D 61 69 6E 2E 62 radehub.domain.b
70: 75 73 69 6E 65 73 73 2E 43 6C 65 61 72 61 6E 63 usiness.Clearanc
80: 65 43 75 73 74 6F 6D 65 72 4A 44 4F 78          eCustomerJDOx
So, either kodo.datacache.TangosolQueryCache is listening to the wrong cache, or the application attempts to remove something which is not removable - did you try asking BEA support about this?
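One way to make the listener defensive (a sketch only - QueryKey, keyRemoved, and MapEvent here are simplified stand-ins for the real Kodo and Tangosol types, not the actual sources) is to check the key's type before casting, so events carrying foreign keys such as kodo.util.Id are ignored instead of killing the dispatch with a ClassCastException:

```java
import java.util.ArrayList;
import java.util.List;

public class SafeQueryCacheListener {

    // Stand-in for kodo.datacache.QueryKey (hypothetical).
    public static class QueryKey {
        public final String query;
        public QueryKey(String query) { this.query = query; }
    }

    // Stand-in for com.tangosol.util.MapEvent; only getKey() is modeled.
    public static class MapEvent {
        private final Object key;
        public MapEvent(Object key) { this.key = key; }
        public Object getKey() { return key; }
    }

    public final List<QueryKey> removed = new ArrayList<QueryKey>();

    // Guarded version of entryDeleted: keys of unexpected types are
    // skipped instead of triggering a ClassCastException on the
    // Coherence event dispatcher thread.
    public void entryDeleted(MapEvent evt) {
        Object key = evt.getKey();
        if (!(key instanceof QueryKey)) {
            return; // e.g. a serialized kodo.util.Id on the same cache
        }
        removed.add((QueryKey) key);
    }
}
```

The real fix would have to go into kodo.datacache.TangosolQueryCache itself (or into the cache wiring, so the query cache and the data cache do not share one named cache), which is why asking BEA support is probably the right move.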
Anyway, this exception doesn't affect anything - the service event thread has logged the exception and is continuing.
Regards,
Dimitri -
Best practice - caching objects
What is the best practice when many transactions requires a persistent
object that does not change?
For example, in a ASP model supporting many organizations, organization is
required for many persistent objects in the model. I would rather look the
organization object up once and keep it around.
It is my understanding that once the persistence manager is closed the
organization can no longer be part of new transactions with other
persistence managers. Aside from looking it up for every transaction, is
there a better solution?
Thanks in advance
Gary
The problem with using object ID fields instead of PC object references in your
object model is that it makes your object model less useful and intuitive.
Taken to the extreme (replacing all object references with their IDs), you
will end up with objects that look like rows in a JDBC result set. Plus, if you use a PM per
HTTP request, it will not do you any good, since the organization data won't be in
the PM anyway, so it might even be slower (no optimizations such as Kodo batch
loads).
So we do not do that.
What you can do:
1. Do nothing special; just use the JVM-level or distributed cache provided by
Kodo. You will not need to access the database to get your organization data, but
the object creation cost in each PM is still there (do not forget that the cache we
are talking about is a state cache, not a PC object cache) - good because it is
transparent.
2. Designate a single application-wide PM for all your read-only big
things - lookup screens etc. Use a PM per request for the rest. Not
transparent - affects your application design.
3. If a large portion of your system is read-only, use PM pooling. We did this
pretty successfully. The requirement is to be able to recognize all PCs
which are updateable and evict/makeTransient those when a PM is returned to
the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
all managed objects of a certain class) so you do not have stale data in your
PM. You can use Apache Commons Pool to do the pooling, and make sure your pool
is able to shrink. It is transparent and increases performance considerably.
That is one approach we use.
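A minimal sketch of the pooling idea in option 3. This is a hand-rolled stand-in, not Apache Commons Pool, and the factory/cleaner hooks (e.g. PersistenceManagerFactory.getPersistenceManager and an evictAll-style cleanup) are placeholders for your own wiring:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class PmPool<PM> {
    private final BlockingQueue<PM> idle;
    private final Supplier<PM> factory;   // e.g. pmf::getPersistenceManager
    private final Consumer<PM> cleaner;   // e.g. pm -> pm.evictAll()

    public PmPool(int max, Supplier<PM> factory, Consumer<PM> cleaner) {
        this.idle = new ArrayBlockingQueue<>(max);
        this.factory = factory;
        this.cleaner = cleaner;
    }

    // Reuse an idle PM if one is available, otherwise create a new one.
    public PM borrow() {
        PM pm = idle.poll();
        return pm != null ? pm : factory.get();
    }

    // Evict updateable state before the PM goes back to the pool, so later
    // borrowers never see stale data; if the pool is full the PM is simply
    // dropped, which is what lets the pool shrink under low load.
    public void giveBack(PM pm) {
        cleaner.accept(pm);
        idle.offer(pm);
    }
}
```

In a real application the cleaner is where you would evict or makeTransient the updateable instances, as described above, before the PM is handed to the next request.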
"Gary" <[email protected]> wrote in message
news:[email protected]...
>
What is the best practice when many transactions requires a persistent
object that does not change?
For example, in a ASP model supporting many organizations, organization is
required for many persistent objects in the model. I would rather look the
organization object up once and keep it around.
It is my understanding that once the persistence manager is closed the
organization can no longer be part of new transactions with other
persistence managers. Aside from looking it up for every transaction, is
there a better solution?
Thanks in advance
Gary -
Kodo 3.0.0 Released
Hello All,
SolarMetric is happy to announce the general availability of Kodo JDO 3.0.0.
The latest version of Kodo JDO is the most powerful ever. Full release notes
can be found at http://docs.solarmetric.com/relnotes.html. A list of the
major changes can be found at the bottom of this email. If you have a valid
and up-to-date maintenance and support contract, you can download the latest
release at http://www.solarmetric.com/Software/Purchase/download.php.
If you don't, please contact your sales representative
([email protected]) to determine how to upgrade.
Major changes in Kodo JDO 3.0.0 include:
- Kodo now supports direct SQL queries and stored procedure calls through
the javax.jdo.Query interface.
- Technology preview of Kodo Management / Monitoring capability.
- JRun 4 is now supported.
- Built-in support for Borland JDataStore, as well as Microsoft Access and
Microsoft Visual FoxPro (using a JDBC-ODBC server bridge like DataDirect,
but not the Sun JDBC-ODBC bridge).
- Refactored data caching to not lock the cache as a whole when loading data
from it or storing data into it. This change improves the concurrency of the
cache.
- Support for detaching and attaching instances from a persistence manager,
allowing applications to more easily use a "data transfer object" pattern.
- Support for collection and map fields that are backed by large result
sets.
- Support for additional settings to control the handling of large result
sets.
- Better exception messages when mappings can't be found or fail validations
against the schema.
- Expanded smart proxies to include change-tracking for lists.
- New externalization system for storage of field types not supported by JDO
without resorting to serializing or requiring custom mappings. Replaces the
old stringification mapping.
- Support for aggregates and projections in queries. See the Query
Aggregates and Projections documentation for more details.
- New mapping system, providing more flexible mapping options.
- Pluggable system for storing mapping information, with built-in options
for storing mapping information in the database, in JDO metadata extensions,
and in a separate mapping file. See the section on the Mapping Factory for
more information.
- Support for embedded 1-1 mappings, including nested embedded mappings with
no limit on nesting depth. See the section on Embedded One-to-One Mapping
for more information.
- Support for timestamp and state-based optimistic lock versioning, and for
custom versioning systems. See the section on Version Indicators for more
information.
- Configurable eager fetching of 1-1, 1-many and many-many relations.
Potentially reduces the number of database queries required when iterating
through an extent or query result and accessing relation fields of each
instance.
- Improved documentation and error messaging.
Enjoy,
-Greg
Hi,
Will this plug-in work with Kodo 4.0.1 and Maven 1.0.2?
Thanks
Guy -
Kodo 4.1.2 in Weblogic 10 Problem
I was told by BEA that Kodo/OpenJPA is included in Weblogic 10. However, now I have Weblogic 10 but I could not locate many Kodo classes in the Weblogic libraries. I searched all the JARs under BEA_HOME\wlserver_10.0\server\lib.
I also tried to migrate a Kodo/JPA application from Weblogic 9.2 to Weblogic 10. My application depends on Kodo JCA deployment in a managed environment. The application and Kodo JCA deployed fine into Weblogic 10. But when I tested the application, the test failed when I tried to create an EntityManager from the EntityManagerFactory:
Caused by: <4|false|0.9.7> org.apache.openjpa.persistence.ArgumentException: config-error
at weblogic.kodo.event.ClusterRemoteCommitProvider.endConfiguration(ClusterRemoteCommitProvider.java:112)
at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:447)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiate(RemoteCommitProviderValue.java:122)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:103)
at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:95)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newRemoteCommitProviderInstance(OpenJPAConfigurationImpl.java:708)
at org.apache.openjpa.event.RemoteCommitEventManager.(RemoteCommitEventManager.java:56)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getRemoteCommitEventManager(OpenJPAConfigurationImpl.java:720)
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:177)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:139)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:187)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:140)
at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:144)
at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:23)
at com.psi.vida.ejb.JPASessionBean.list(JPASessionBean.java:165)
at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl.list(JPASessionEJB_lvtqkz_EOImpl.java:134)
at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl_WLSkel.invoke(Unknown Source)
at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:174)
... 17 more
Caused by: java.lang.Exception: <0|true|0.9.7> org.apache.openjpa.persistence.PersistenceException: no-trasport
at org.apache.openjpa.util.Exceptions.replaceNestedThrowables(Exceptions.java:230)
at org.apache.openjpa.persistence.ArgumentException.writeObject(ArgumentException.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:890)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1333)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1369)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1341)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291)
at weblogic.rmi.extensions.server.CBVOutputStream.writeObject(CBVOutputStream.java:84)
at weblogic.rmi.internal.ServerRequest.unmarshalThrowable(ServerRequest.java:349)
at weblogic.rmi.internal.ServerRequest.getThrowable(ServerRequest.java:62)
at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:203)
... 17 more
I was told by BEA that Kodo/openJPA is included in
Weblogic 10. However, now I have Weblogic 10 but I
could not locate many Kodo classes in the Weblogic
libraries. I searched all the JARs under
BEA_HOME\wlserver_10.0\server\lib.
They're in the (new) modules directory. weblogic.jar refers to stuff in the modules directory via its manifest classpath.
I also tried to migrate a Kodo/JPA application from
Weblogic 9.2 to Weblogic 10. My application depends
on Kodo JCA deployment in a managed environment. The
application and Kodo JCA deployed fine into Weblogic
10. But when I tested the application, the test failed
when I tried to create an EntityManager from the
EntityManagerFactory:
Interesting. I do not know what the status of Kodo JCA testing is in WebLogic 10, but it sounds like something is a bit wonky.
Basically, in a WLS environment, the default remote commit provider is automatically set to the new weblogic.kodo.event.ClusterRemoteCommitProvider, which uses the WLS clustering protocol to communicate cache notifications. The error that you're seeing indicates that cluster services are not available in the execution context. You can probably get around this by explicitly setting the 'kodo.RemoteCommitProvider' option to 'sjvm' (if you're not running in a cluster), or to whatever you had it set to in the past. (I'm guessing that it was unset in the past, as otherwise, the configuration should be picking up that instead of the new default.)
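For example, the override might look like this in Kodo's configuration (a sketch - where the property lives depends on whether you configure Kodo via kodo.properties, the JCA descriptor, or persistence.xml):

```properties
# Not running in a cluster: keep commit notifications in-process
# instead of the WLS cluster-based default provider.
kodo.RemoteCommitProvider: sjvm
```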
However, personally, I much prefer the new persistence.xml configuration file format, compared to JCA configuration. (You can trivially use the persistence.xml format with Kodo JDO, even though it's a JPA-specified feature.) You might want to look into moving away from JCA and to the persistence.xml style instead.
If you do this, you'll end up putting a META-INF/persistence.xml file in your EAR (and possibly a META-INF/persistence-configuration.xml file, if you want to use the strongly-typed Kodo XML configuration format), and replacing your JNDI lookups with java:comp/env/persistence/<persistence-unit-name>. (I think that's the right location. I might be mistaken, though.)
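A minimal persistence.xml along those lines might look like this (a sketch - the unit name, entity class, and property values are placeholders to be adapted; check the details against the JPA 1.0 schema and the Kodo docs):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="my-unit">
    <!-- placeholder entity class -->
    <class>com.example.MyEntity</class>
    <properties>
      <!-- avoid the cluster-based remote commit provider outside a cluster -->
      <property name="kodo.RemoteCommitProvider" value="sjvm"/>
    </properties>
  </persistence-unit>
</persistence>
```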
Also, I can't guarantee that WebLogic 10 really handles JCA configuration all that well; some bits of that exception make it look like maybe some resources are not available in the classloader, which is surprising. So, it's possible that there is some sort of more fundamental JCA problem here (and not just a problem with the new remote commit provider).
-Patrick -
V2.2.1 is waisting connections / kodo 2.2.1 usable?
Hi,
Summary of the problem: since 2.2.1, the depth of the object mapping
seems to have a direct correlation to the minimum number of connections
needed to execute any query. Insufficient connections cause a Kodo system
lock.
I'm currently moving from version 2.1.3 to 2.2.1 and noticed a behaviour
that seems to be quite problematic:
If the pool part of system.prefs is missing, any depth of object mapping
can be resolved by version 2.2.1. Good.
If the pool part of system.prefs is provided with min=1/max=1 and the query
accesses a 1:1 mapping, the target objects of the query can be resolved.
Also good.
If the pool part of system.prefs is provided with min=1/max=1 and the query
accesses a 1:n mapping, the target objects can't be provided because Kodo
locks up. Very bad.
If I change the max pool size to '2', the query can be executed.
We are currently working with a more complex extent, or rather class
graph. Querying our class graph at the root level was no problem with kodo
2.1.3, but using kodo 2.2.1 we need a pool size of _13_ connections to query
some objects!!!! I wouldn't call this expected behaviour.
If I don't use the kodo connection pooling and change the data source by
calling JDBCPersistenceManagerFactory.setConnectionFactoryName(jndiPath) to
use our own connection pools, the problem is exactly the same: the pool must
have at least _13_ connections for our extent, or more generally, at
least 2 connections for querying a 1:n mapping. I didn't check the number
for n:m, nor the exact number for first-class objects that are part of other
FCOs, depending on their mapping.
So, as a summary, it seems as if the nesting of SQL statements and their
connection caching during execution has changed in 2.2.1 in a way that seems
quite problematic (or even unusable), and not only for complex queries.
If a team is working on the same project it is usually also working on the
same test database, which is sometimes also used by other development teams.
Opening _13_ or even more connections per connection pool (often more than
just one connection pool is needed) per team member to a central database
will easily lead to resource problems. This won't be the case at production
time, but code has to be developed first, doesn't it :)
Did I overlook anything? Or is this problem known to anyone?
Cheers,
Christian Elsen
Hi Patrick,
> Summary of the problem: Since 2.2.1, the deepness of the object
> mapping seems to have a direct correlation to the minimum number
> of connections needed to execute any query. Insufficient
> connections cause a kodo system lock.
It took a lot of trial-and-error research, but I found an indicator of the
problem: first-class objects that belong to the default fetch group.
In the attached example I use a class D with an ArrayList of class E
objects and query for all D objects. The pool size is always min=1/max=1.
-case 1: the list of E objects of class D does not belong to
the default fetch group => The query is executed as expected
-case 2: the list of E objects of class D belongs to
the default fetch group => The query is locked until the
timeout duration for the pool elapses (default=10sec).
If I increase the pool max to 2, the query is executed as expected.
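In JDO metadata, taking the collection out of the default fetch group looks roughly like this (a sketch for the D/E example; the package and field names are made up and should be matched to your package.jdo):

```xml
<jdo>
  <package name="example">
    <class name="D">
      <!-- case 1 above: keep the 1:n out of the default fetch group so
           loading a D does not immediately pull in its E objects -->
      <field name="es" default-fetch-group="false">
        <collection element-type="E"/>
      </field>
    </class>
    <class name="E"/>
  </package>
</jdo>
```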
I didn't try a default-fetch-group recursion, but I expect recursive
behaviour: having an FCO in the default fetch group that has an FCO in
the default fetch group that has an FCO in the default fetch group...
leads to an increased number of connections needed to execute a query.
We currently work with a quite complex mapping and have to query for the
'root' of the extent in some cases => therefore we currently need a pool
with 14! connections to execute the query. The problem arose with kodo
version 2.2.1; in 2.1.3 one connection was enough to execute the query.
I wonder if the problem depends just on version 2.2.1 or maybe on our
environment, but the observations point to the kodo implementation.
In production this behaviour won't be acceptable, as I described in my
first comment. It would be nice to hear whether the results are reproducible.
If they are, we may have to look for other JDO implementations, as our
production date is in about 3 weeks, although we're otherwise quite
satisfied with the implementation (the big problem described above aside).
Regards,
Christian
PS: Are there any guidelines on how to size the pools depending on the
mapping used?
I cannot reproduce your problem. I'm trying with a test case with three
classes: A, B, and C. A has a list of B's and B has a list of C's. Both
of these lists are one-to-manys. I run a query on A whose filter digs
through B into C. pool/min is set to 0, and pool/max is set to 1. The
query works.
I've attached my test class and package.jdo. Can you alter this to make
it reproduce your problem, or provide me with code that displays the
behavior that you described?
Further, what database are you running against? I've only run my test
against InstantDB.
-Patrick -
Missing table extension in Kodo 3.0
Hi,
I liked the class extension key="table" value="..." in the jdo metadata
xml files in Kodo 2.x.
Unfortunately I can't find out how to specify the table name in my jdo metadata
for Kodo 3.0. I want to generate the mapping with the mapping tool, but it
always creates its own table names.
Thank you for your advice in advance,
Michael Tamm
Michael,
I had the same problems when switching to 3.x. I am now using the xdoclet
jdo tags and have got this to work using the following on my class
definition:
* @jdo.persistence-capable identity-type="datastore"
requires-extent="true"
* @jdo.class-vendor-extension vendor-name="kodo" key="jdbc-class-map"
value="base"
* @jdo.class-vendor-extension vendor-name="kodo"
key="jdbc-class-map/table" value="AppParam"
* @jdo.class-vendor-extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number"
* @jdo.class-vendor-extension vendor-name="kodo"
key="jdbc-version-ind/column" value="_version"
* @jdo.class-vendor-extension vendor-name="kodo" key="jdbc-class-ind"
value="in-class-name"
* @jdo.class-vendor-extension vendor-name="kodo"
key="jdbc-class-ind/column" value="_class"
* @jdo.class-vendor-extension vendor-name="kodo" key="data-cache-timeout"
value="60"
* @jdo.class-vendor-extension vendor-name="kodo" key="data-cache"
value="true"
You have to use the Kodo key jdbc-class-map/table to specify the mapped
table. You also have to set your kodo.jdbc.MappingFactory property to
"metadata".
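For reference, that property would be set like this (a sketch; the file is typically kodo.properties, but the location depends on how you configure Kodo):

```properties
# Read O/R mapping information from the JDO metadata extensions
# (e.g. the jdbc-class-map/table extension) instead of a mapping file.
kodo.jdbc.MappingFactory: metadata
```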
Michael Tamm wrote:
Hi,
I liked the class extention key="table" value="..." in the jdo metadata
xml files ind Kodo 2.x.
Unfortunately I can't find how to specify the tablename in my jdo metadata
for Kodo 3.0. I want to generate the mapping with the mapping tool, but it
always creates its own table names.
Thank you for your advise in advance,
Michael Tamm