Scoping the TopLink cache to the application

Hi all, per the Active Cache guide, the cache configuration can be scoped at the application level. This is what I am trying to do.
I have created shared libraries for coherence.jar, the active-cache JAR, and the TopLink Grid JAR, and am including them in my application's weblogic-application.xml as library references. The idea is that the cluster will be scoped within my application's class loader.
Now, the next part is to also scope the cache configuration to my application. I have created a JAR file which contains the cache configuration XML file, created a shared library from it, and included it as above.
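For reference, the library references in weblogic-application.xml look roughly like this (the library names below are placeholders for whatever names the shared libraries were registered under):
<weblogic-application>
    <library-ref>
        <library-name>coherence</library-name>
    </library-ref>
    <library-ref>
        <library-name>active-cache</library-name>
    </library-ref>
    <library-ref>
        <library-name>toplink-grid</library-name>
    </library-ref>
    <library-ref>
        <library-name>my-cache-config</library-name>
    </library-ref>
</weblogic-application>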
However, with this, the log output is as below:
<Sep 16, 2010 11:11:22 AM CST> <Notice> <Stdout> <BEA-000000> <2010-09-16 11:11:22.391/691.970 Oracle Coherence GE 3.5.3/465p2 <Info> (thread=[STANDBY] ExecuteT
hread: '3' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "zip:D:/Oracle/Middleware11.1.1.3/coherence_3.5/lib/
coherence.jar!/reports/report-group.xml">
I have also tried placing the configuration JAR file in the application EAR's lib directory; I still get the above log statement.
So, the question is: how do I make Coherence pick up my application-scoped configuration? I have read the Active Cache documentation, but I am not clear about this part.
Thanks.

You can use the system property -Dtangosol.coherence.cacheconfig
to tell Coherence which cache configuration you want to use, for example:
-Dtangosol.coherence.cacheconfig=<location>/hibernate-cache-config.xml
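For completeness, the same override can be applied programmatically. This is only a sketch (the config path and cache name are made up), and the property has to be set before anything in the JVM touches CacheFactory:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheConfigOverride {
    public static void main(String[] args) {
        // Must run before the first CacheFactory call in this JVM, otherwise
        // the default coherence-cache-config.xml has already been loaded.
        System.setProperty("tangosol.coherence.cacheconfig",
                "META-INF/my-app-cache-config.xml");
        NamedCache cache = CacheFactory.getCache("example-cache");
        System.out.println("Loaded cache: " + cache.getCacheName());
    }
}

On WebLogic the equivalent is usually to add the -D flag to the managed server's Java start arguments.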

Similar Messages

  • Querying the toplink cache under high-load

    We've had some interesting experiences with "querying" the TopLink Cache lately.
    It was recently discovered that our "read a single object" method was incorrectly
    setting query.checkCacheThenDB() for all ReadObjectQueries. This was brought to light
    when we upgraded our production servers from 4 cores to 8. We immediately started
    experiencing very long response times under load.
    We traced this down to the following stack: (TopLink version 10.1.3.1.0)
    at java.lang.Object.wait(Native Method)
    - waiting on <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
    at java.lang.Object.wait(Object.java:474)
    at oracle.toplink.internal.helper.ConcurrencyManager.acquireReadLock(ConcurrencyManager.java:179)
    - locked <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
    at oracle.toplink.internal.helper.ConcurrencyManager.checkReadLock(ConcurrencyManager.java:167)
    at oracle.toplink.internal.identitymaps.CacheKey.checkReadLock(CacheKey.java:122)
    at oracle.toplink.internal.identitymaps.IdentityMapKeyEnumeration.nextElement(IdentityMapKeyEnumeration.java:31)
    at oracle.toplink.internal.identitymaps.IdentityMapManager.getFromIdentityMap(IdentityMapManager.java:530)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.checkCacheForObject(ExpressionQueryMechanism.java:412)
    at oracle.toplink.queryframework.ReadObjectQuery.checkEarlyReturnImpl(ReadObjectQuery.java:223)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.checkEarlyReturn(ObjectLevelReadQuery.java:504)
    at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:564)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
    at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
    We moved the query back to the default, query.checkByPrimaryKey() and this issue went away.
    The bottleneck seemed to stem from the read lock on the CacheKey from IdentityMapKeyEnumeration:
    public Object nextElement() {
        if (this.nextKey == null) {
            throw new NoSuchElementException("IdentityMapKeyEnumeration nextElement");
        }
        // CR#... Must check the read lock to avoid
        // returning half built objects.
        this.nextKey.checkReadLock();
        return this.nextKey;
    }
    We had many threads that were having contention while searching the cache for a particular query.
    From the stack we know that the contention was limited to one class. We've since refactored that code
    not to use a query in that code path.
    Question:
    Armed with this better knowledge of how TopLink queries the cache, we do have a few objects that we
    frequently read by something other than the primary key. A natural key, but not the oid.
    We have some other caching mechanisms in place (JBoss TreeCache) to help eliminate queries to the DB
    for these objects. But the TreeCache also tries to acquire a read lock when accessing the cache.
    Presumably a read lock over the network to the cluster.
    Is there anything that can be done about the read lock on CacheKey when querying the cache in a high load
    situation?

    CheckCacheThenDatabase will check the entire cache for a match using a linear search. This can be inefficient if the cache is very large. Typically it is more efficient to access the database if your cache is large and the field you are querying on is indexed in the table.
    The cache concurrency was greatly improved in TopLink 11g/EclipseLink, so you may wish to try it out.
    Supporting indexes in the TopLink/EclipseLink cache is desirable (feel free to log an enhancement request against EclipseLink). You can simulate this to some degree using a named query and a query cache.
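    As a rough illustration of the named-query-plus-query-cache idea (the entity, field, and query name below are invented, and the hint shown is the EclipseLink JPA one):
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.NamedQuery;
    import javax.persistence.QueryHint;

    @Entity
    @NamedQuery(
        name = "Employee.byEmployeeNumber",
        query = "SELECT e FROM Employee e WHERE e.employeeNumber = :num",
        hints = { @QueryHint(name = "eclipselink.query-results-cache", value = "true") })
    public class Employee {
        @Id private long id;
        private String employeeNumber;
    }
    Repeated executions of the named query with the same parameter are then answered from the query results cache, instead of either scanning the object cache linearly or going back to the database.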
    -- James : http://www.eclipselink.org

  • Keeping the toplink cache insync across a cluster

    We would like to deploy our application across a cluster of 10g application servers, and we are using TopLink as our persistence framework with POJOs. I have been looking around and I have been unable to find anything very helpful on how to set up TopLink to keep the cache in sync across the cluster. Has anyone done this on 10g with POJOs? If so, can you please point me to an example or a good piece of documentation?
    Thanks
    Marc

    The documentation on cache synchronization can be found starting on page 8-3 of the 10g (9.0.4) release documentation.

  • How do I sync Toplink Cache with my Database?

    Hey guys,
    We are running macros through an Excel sheet that connects to the database and performs updates and inserts.
    Since this database update is NOT taking place through TopLink (in a unit of work), we do not see the database changes through the web app on the front end unless we bounce our web server. Presumably the TopLink cache is rebuilt on start-up, so then we can see the changes.
    My question is, what can I do to make sure the TopLink cache is aware of the database changes we have made through the macro, without having to bounce the server? Is there a refresh or sync command that can be run?
    This task is sort of a one-time thing, so I don't want a solution that involves the cache syncing itself on a schedule or anything like that. Maybe bouncing is the best solution?
    Thoughts?
    We are using Toplink 9.0.4
    Thanks.

    Hello,
    Because it is a one-time thing, if you can make sure no other TopLink processes are going on, you can probably get away with the initializeAllIdentityMaps() or initializeIdentityMap(Class) methods on the session. These will clear the identity maps, with the obvious drawback of removing all object identity, which will cause problems for running processes. It might be better than bouncing the server - it depends on the application. Logging out of the session and logging back in has the same effect of clearing the cache, but with a bit more overhead. The benefit is that running processes will get errors if they continue to use the session, rather than strange behavior if they continue to use objects after identity is lost.
    Another alternative is to run refresh queries on the data you know might be in the cache or that might have been affected. The drawback is that this brings objects into the cache if they are not already there.
    TopLink 9.0.4 is quite a few versions back, and in newer versions there is cache invalidation. An object marked as invalid is not removed, so object identity is maintained, but on the next query the data will be refreshed - ensuring subsequent queries get results from the database without having to be told explicitly to refresh or being set to always refresh. Invalidation can be triggered on particular objects, classes or the entire cache, or policies can be set to define a time to live, etc.
    Except for bouncing the server or logging out of the session, all of the above leave some possibility that a concurrent user will still have a reference to stale data and continue to use it after the process has run on the database. So I hope you use optimistic locking and that your batch process updates the version to prevent other processes from overwriting with stale data.
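    A minimal sketch of the identity-map clearing option described above (how you obtain the session depends on your environment, so it is simply passed in here; shown with the 10g oracle.toplink.sessions.Session interface):
    import oracle.toplink.sessions.Session;

    public class CacheFlush {
        public static void flushAll(Session session) {
            // Drops every cached object; concurrent users lose object identity,
            // so only do this when no other TopLink work is in flight.
            session.initializeAllIdentityMaps();
            // In 10g and later the same call is also available via
            // session.getIdentityMapAccessor().initializeAllIdentityMaps().
        }
    }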
    Best Regards,
    Chris

  • Cluster config. - toplink cache

    hi all!
    A simple question...
    I've built an application using ADF and TopLink.
    Currently the application runs on a single iAS instance.
    Now, for load reasons, I'll need to migrate to a cluster configuration.
    Is there any kind of problem with the TopLink cache in this setup?
    Thanks.
    Luca


  • Querying TopLink Cache

    My system demands caching the result sets obtained from a ReadAllQuery. I might make several queries over this static set of cached data, but the data to be cached is small. I use TopLink's FullIdentityMap (configured on the descriptor via session.getProject()) to cache the output of a ReadAllQuery that I execute at system start-up.
    But I am not able to query the cache from an external API later (which could be several minutes later). How can I use the TopLink cache APIs to get this done? Kindly reply.

    Hi Manoj,
    If I understand you correctly, you persist some objects and then later query them. You don't get the results you expect when you use checkCacheOnly(). You need to use checkCacheThenDatabase(), and when you do this you're seeing SQL, I expect.
    If your cache type for the class is FullIdentityMap then TopLink will never release objects of that class once read and your checkCacheOnly() query should work.
    I'm guessing that you're using a different TopLink session. You mention you have a number of services. What environment are you running in and what is your architecture (e.g., servlet or EJB)? Statics don't solve sharing problems especially in an application server or web application environment in which multiple classloaders are employed.
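    As an illustration of the cache-only query described above, something along these lines should work, assuming the class is mapped with a FullIdentityMap so everything read stays cached (the attribute name is a stand-in):
    import java.util.List;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.sessions.Session;

    public class CacheOnlyLookup {
        public static List findByDepartment(Session session, Class cachedClass, String dept) {
            ReadAllQuery query = new ReadAllQuery(cachedClass);
            query.checkCacheOnly();   // answer entirely from the session cache, no SQL
            query.setSelectionCriteria(new ExpressionBuilder().get("department").equal(dept));
            return (List) session.executeQuery(query);
        }
    }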
    --Shaun

  • TOPLINK-38 ClassCastException if specifying toplink.cache.type.<ENTITY>

    Some background info:
    I am running a 10.1.3.3 OC4J standalone. On the same instance there is a resource adapter also using TopLink JPA, which gets its entity manager from the factory instead of the container-managed one. In my application, I also use an application-managed EM. When deploying by myself, there is no issue. When it is deployed with the resource adapter, I get the ClassCastException.
    If I remove the properties in the persistence.xml file that contain "toplink.cache.type.<ENTITY>", the exception goes away and everything works fine. Following is my persistence.xml file.
    <persistence-unit name="oc4j" transaction-type="JTA">
        <provider>oracle.toplink.essentials.PersistenceProvider</provider>
        <jta-data-source>java:jdbc/OcmsXdmsDs</jta-data-source>
        <class>oracle.sdp.xcapdatamodel.jpa.XCAPAppUsage</class>
        <class>oracle.sdp.xcapdatamodel.jpa.XCAPUser</class>
        <class>oracle.sdp.xcapdatamodel.jpa.XCAPDocument</class>
        <class>oracle.sdp.xcapdatamodel.jpa.AppUsageSchema</class>
        <properties>
            <property name="toplink.session-name" value="xdms-multinode-session"/>
            <property name="toplink.logging.level" value="INFO"/>
            <!-- Optimize toplink caching -->
            <property name="toplink.cache.type.XCAPUser" value="None"/>
            <property name="toplink.cache.type.XCAPDocument" value="None"/>
            <property name="toplink.cache.type.XCAPAppUsage" value="Full"/>
            <property name="toplink.cache.type.AppUsageSchema" value="Full"/>
        </properties>
    </persistence-unit>

    Exception [TOPLINK-38] (Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.DescriptorException
    Exception Description: Identity map constructor failed because an invalid identity map was specified.
    Internal Exception: java.lang.ClassCastException: oracle.toplink.essentials.internal.identitymaps.FullIdentityMap
    Descriptor: RelationalDescriptor(oracle.sdp.xcapdatamodel.jpa.AppUsageSchema --> [DatabaseTable(APPUSAGESCHEMA)])
    at oracle.toplink.essentials.exceptions.DescriptorException.invalidIdentityMap(DescriptorException.java:778)
    at oracle.toplink.essentials.internal.identitymaps.IdentityMapManager.buildNewIdentityMap(IdentityMapManager.java:293)
    at oracle.toplink.essentials.internal.identitymaps.IdentityMapManager.getIdentityMap(IdentityMapManager.java:716)
    at oracle.toplink.essentials.internal.identitymaps.IdentityMapManager.acquireDeferredLock(IdentityMapManager.java:119)
    at oracle.toplink.essentials.internal.sessions.IdentityMapAccessor.acquireDeferredLock(IdentityMapAccessor.java:85)
    at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:479)
    at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildWorkingCopyCloneNormally(ObjectBuilder.java:451)
    at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildObjectInUnitOfWork(ObjectBuilder.java:421)
    at oracle.toplink.essentials.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:387)
    at oracle.toplink.essentials.queryframework.ReportQueryResult.processItem(ReportQueryResult.java:220)
    at oracle.toplink.essentials.queryframework.ReportQueryResult.buildResult(ReportQueryResult.java:182)
    at oracle.toplink.essentials.queryframework.ReportQueryResult.<init>(ReportQueryResult.java:98)
    at oracle.toplink.essentials.queryframework.ReportQuery.buildObject(ReportQuery.java:594)
    at oracle.toplink.essentials.queryframework.ReportQuery.buildObjects(ReportQuery.java:643)
    at oracle.toplink.essentials.queryframework.ReportQuery.executeDatabaseQuery(ReportQuery.java:804)
    at oracle.toplink.essentials.queryframework.DatabaseQuery.execute(DatabaseQuery.java:628)
    at oracle.toplink.essentials.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:692)
    at oracle.toplink.essentials.queryframework.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:746)
    at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2233)
    at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:952)
    at oracle.toplink.essentials.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:924)
    at oracle.toplink.essentials.internal.ejb.cmp3.base.EJBQueryImpl.executeReadQuery(EJBQueryImpl.java:367)
    at oracle.toplink.essentials.internal.ejb.cmp3.base.EJBQueryImpl.getResultList(EJBQueryImpl.java:478)
    at oracle.sdp.xcapconfigmanager.JPAApplicationUsageImpl.loadXmlXsd(JPAApplicationUsageImpl.java:203)
    at oracle.sdp.xcapconfigmanager.XCAPConfigManager.loadJPAAppUsage(XCAPConfigManager.java:447)
    at oracle.sdp.xcapconfigmanager.XCAPConfigManager.loadApplicationUsages(XCAPConfigManager.java:402)
    at oracle.sdp.xcapconfigmanager.XCAPConfigManager.start(XCAPConfigManager.java:366)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at oracle.sdp.jmxframework.ModelerBeanDeployer.invokeLifeCycleOperation(ModelerBeanDeployer.java:315)
    at oracle.sdp.jmxframework.ModelerBeanDeployer.startService(ModelerBeanDeployer.java:266)
    at oracle.sdp.jmxframework.ModelerBeanDeployer.initializeServices(ModelerBeanDeployer.java:359)
    at oracle.sdp.jmxframework.ModelerBeanDeployer.preRegister(ModelerBeanDeployer.java:384)
    at com.sun.jmx.mbeanserver.BaseMetaDataImpl.preRegisterInvoker(BaseMetaDataImpl.java:83)
    at com.sun.jmx.mbeanserver.MetaDataImpl.preRegisterInvoker(MetaDataImpl.java:237)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:923)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:337)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:497)
    at oracle.oc4j.admin.jmx.server.state.ApplicationStateFilterMBeanServer.registerMBean(ApplicationStateFilterMBeanServer.java:349)
    at com.evermind.server.Application.registerApplicationMBeans(Application.java:2978)
    at com.evermind.server.Application.addJ2EEApplicationMBean(Application.java:1682)
    at com.evermind.server.ApplicationStateRunning.initializeApplication(ApplicationStateRunning.java:201)
    at com.evermind.server.Application.setConfig(Application.java:438)
    at com.evermind.server.Application.setConfig(Application.java:339)
    at com.evermind.server.ApplicationServer.addApplication(ApplicationServer.java:1895)
    at com.evermind.server.ApplicationServer.initializeDeployedApplications(ApplicationServer.java:1651)
    at com.evermind.server.ApplicationServer.setConfig(ApplicationServer.java:1034)
    at com.evermind.server.ApplicationServerLauncher.run(ApplicationServerLauncher.java:131)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.ClassCastException: oracle.toplink.essentials.internal.identitymaps.FullIdentityMap
    at oracle.toplink.essentials.internal.identitymaps.IdentityMapManager.buildNewIdentityMap(IdentityMapManager.java:289)

  • Toplink cache coordination problem with opmn lookup

    Dear all,
    We encounter some problems when we use an OPMN URL and JMS to implement TopLink cache coordination.
    Scenario:
    1. Using Oracle Application Server 10g, create three processes (JVMs) on one OC4J instance
    2. Use the OC4J in-memory JMS server for cache coordination
    3. Use an OPMN URL to look up the JMS topic connection factory and connection; OPMN URL: opmn:ormi://shasudv4:6004:OC4J_TTS/tts
    When we start the OC4J instance, we find in the log that all the TopLink cache coordination properties have been set on the TopLink framework, and the remote command manager has been initialized successfully. The in-memory JMS server has also been started successfully, and there are three listeners on the topic.
    But when we test the synchronization, the data can't be synchronized between all the processes: two of them are OK, but the third has problems coordinating with the other two. There are no exceptions, and we have set the TopLink log level to 'all'.
    I have checked the thread dump, and I find that "HTTPThreadGroup", "RMICallHandler" and "JMSRequestHandler" threads are involved in cache synchronization.
    So we are obliged to change the OPMN URL to an ORMI URL, so that we can set a bound for the RMI ports of the processes, because the ORMI port is dynamically assigned.
    <port id="rmi" range="12405-12407" />
    This seems OK, and the involved threads are "RMICallHandler" and "JMSRequestHandler". But we are also concerned that we have restricted the ORMI port range of every instance.
    Thanks for any advice.

    I raised a TAR with Oracle. The short answer is that you can't do cache coordination either with ADF business objects, or with Toplink Essentials (as opposed to Oracle Toplink, which doesn't yet support JPA).

  • TopLink Cache Out of Sync

    I have a situation where the application needs to handle inserts and updates, and deletes are handled by an oracle stored procedure on the database. My problem is after issuing any inserts/updates, then doing a delete of a record, it seems like that record is still in the toplink cache because trying to re-insert that deleted record fails until I restart the app server, then it works fine (until it's deleted again). Is there some way after calling the stored procedure to delete a record, to get the object in toplink updated correctly?
    I've tried a few different things including
    getUnitOfWork().unregisterObject(object);
    and
    getSession().getIdentityMapAccessor().invalidateObject(getObject(object));
    with no luck. Any suggestions?
    Nick

    You can set existence checking to "check database" instead of "check cache".
    http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/_html/uowadv001.htm#CACFHAAJ
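    A rough sketch of that suggestion as a descriptor amendment ("after load") method; the class it is registered against and the surrounding setup are up to your project, and the calls shown are from the 10.1.3 descriptor API:
    import oracle.toplink.descriptors.ClassDescriptor;

    public class ExistenceCheckAmendment {
        public static void amendDescriptor(ClassDescriptor descriptor) {
            // Decide insert vs. update by checking the database, not the cache,
            // so a row deleted by the stored procedure can be re-inserted.
            descriptor.getQueryManager().checkDatabaseForDoesExist();
        }
    }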

  • Toplink Cache issues in Cluster

    Hi
    Our production environment is clustered and we have been noticing the following problem: when a user tries to save a record she repeatedly encounters the "TOPLINK-5006" exception that I have included below.
    TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    [TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    <Jul 19, 2006 4:49:23 PM PDT> <Error> <EJB> <BEA-010026> <Exception occurred during commit of transaction Name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)],Xid=BEA1-795A6481D2E1938A8EAD(115171166),Status=Rolled back. [Reason=Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]],numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=60,XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=rolledback,assigned=MS15_mjp),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@6dcf50b),SCInfo[mjp+MS15_mjp]=(state=rolledback),properties=({weblogic.transaction.name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)], weblogic.jdbc=t3://10.253.129.56:2323}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+, XAResources={},NonXAResources={})],CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+): Local Exception Stack:
    Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
         at oracle.toplink.exceptions.OptimisticLockException.objectChangedSinceLastReadWhenUpdating(Ljava/lang/Object;Loracle/toplink/queryframework/ObjectLevelModifyQuery;)Loracle/toplink/exceptions/OptimisticLockException;(OptimisticLockException.java:109)
    What is puzzling is that the occurrence of this problem has increased with user load, and the TopLink cache does not seem to have been refreshed after it encounters the first optimistic lock exception. We have run several tests and this is not reproducible in the DEV environment, where we do not have a clustered setup. After making a few updates to a record, users start experiencing the problem ... for some it persists for a really long time.
    We do not have cache synchronization enabled.
    The cluster setup is as follows:
    There are 4 boxes and each box has one admin and 4 managed servers.
    I have included toplink-cmp-people.xml (this is the particular entity bean we have a problem with). Our application server is WebLogic and we have TopLink version 9.0.4.2.
    <toplink-ejb-jar>
        <session>
            <name>People</name>
            <project-class>com.rhii.mjplus.fo.people.beans.PeopleToplink</project-class>
            <login>
                <datasource>MJPool</datasource>
                <non-jts-datasource>MJPool</non-jts-datasource>
            </login>
            <use-remote-relationships>true</use-remote-relationships>
            <customization-class>com.rhii.mjplus.common.TopLinkCustomization</customization-class>
        </session>
    </toplink-ejb-jar>
    I would appreciate any kind of feedback
    Thanks
    Lakshmi

    Can you refresh that record using a query before you save it ?
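    In plain TopLink API terms that suggestion looks roughly like the sketch below; with WLS CMP the container owns the session, so treat this only as an illustration of the idea:
    import oracle.toplink.sessions.Session;

    public class RefreshBeforeSave {
        public static Object refresh(Session session, Object possiblyStale) {
            // Re-reads the row and overwrites the cached copy (including the
            // optimistic-lock version) with current database state.
            return session.refreshObject(possiblyStale);
        }
    }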

  • Toplink cache growing until OutOfMemoryError

    Hello,
    We are using OC4J 9.0.3 with toplink 9.0.3 on IBM AIX J2RE 1.3.1. From time to time, the JVM crashes with an OutOfMemoryError. Analysing the JVM heap dump, we have found that there are a lot of objects of the same type in the toplink cache:
    # of objects: 48 047
    Total size in bytes: 165 874 992
    The identity map settings for the project caching this object type are:
    Type: SoftCacheWeakIdentityMap
    Size: 100
    My understanding is that most of the 48 047 objects should be referenced as weak references and that some of them should have been garbage collected by the VM instead of an OutOfMemoryError being thrown.
    Could it be a problem in the JVM garbage collector ?
    May I use a CacheIdentityMap to work around the problem ?
    Do you have any advice ?
    Thank you in advance for any hint,
    Pierre Laroche

    Being in production, I am not sure how much you can debug the system, but to determine if it is an issue with the weak references you could clear out the cache through an initializeAllIdentityMaps call on the session when the system is using most of the memory. This will remove all objects from the TopLink identity maps, removing the WeakReferences. This call should not be made on a system still servicing requests, though. Is it possible that there is a leak in the application that is preventing the WeakReferences from being garbage collected?
    --Gordon

  • Toplink cache performance

    I have the following problem:
    A J2EE Struts application is deployed over a cluster of 4 application servers (10g). The machines are Sun Solaris, configured identically with one OC4J container and three processes (3 JVMs). Memory keeps growing until it reaches the maximum configured in the Java options, which is set to 1.5 GB (we have tried a lower Xmx of 1 GB and even 2 GB)...
    Once that number is hit, the CPU usage goes high, and the controller ping process of the application server forcefully terminates the process(es) that can not be reached by the ping process.
    Load: approximately 1500 users per day, with a high degree of updates and inserts.
    My question: can the TopLink cache (most of the classes are configured with SoftCacheWeakIdentityMap) be the cause of this ridiculous memory growth?

    It seems like your application has a memory leak somewhere. You may wish to analyze your app server's memory usage with memory profiling tools, such as the JProbe memory profiler.
    Unless you have a very large cache size, I would not expect the SoftCacheWeakIdentityMap to cause a memory issue. You can verify this by changing your caching type to WeakIdentityMap. Also double-check that you are not using a FullIdentityMap anywhere, nor a very large cache size.
    Also verify that your application is not holding references to objects and preventing them from being garbage collected.
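    For reference, switching the caching type as suggested is typically done in a descriptor amendment method; a sketch with invented names (10.1.3-style API):
    import oracle.toplink.descriptors.ClassDescriptor;

    public class CacheTypeAmendment {
        public static void amendDescriptor(ClassDescriptor descriptor) {
            // Weak references only: cached objects become collectable as soon
            // as the application stops referencing them.
            descriptor.useWeakIdentityMap();
            descriptor.setIdentityMapSize(100);
        }
    }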

  • Toplink cache

    Hi,
    I am having severe memory problems using TopLink Essentials. What I am experiencing is growing memory consumption every time an entity is persisted, even though I set in persistence.xml
    <property name="toplink.cache.type.default" value="Weak"/>
    and
    <property name="toplink.cache.size.default" value="10"/>
    I used the Java VisualVM supplied with the Sun 1.6.0_07 JDK to analyze the heap.
    The behaviour I am expecting is:
    when a persistent class is not referenced by my application anymore, it should be kept in the toplink cache with a weak reference, to be garbage collected as soon as the JVM is running out of memory.
    What I see instead :
    for each class I persist I find roughly 3 instances in memory, and they seem to be strongly referenced. In addition, after a while the program starts slowing down (each persist takes longer; I guess this is the cost of the cache lookup, but in my real application it ends up taking more than a second for each persist!)
    Workaround: I found EntityManager.clear() wipes out the cache, freeing the memory, but I am not sure this is free from side effects (all the entities are detached, I guess)
    I attached a test case: just the simplest entity possible, filled with a random integer and persisted. Leave it running for maybe 10 minutes.
    After a while I have:
    2000 entities persisted
    6000 instances in the heap
    12000 oracle.toplink.essentials.internal.helper.IdentityHashtable$Entry
    Persisting is now taking ~50ms instead of 5ms (and growing)
    Am I doing something wrong?
    This is my entity:
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class AnEntity {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column
        protected int anEntityID;
        protected int data;

        protected AnEntity() {
        }

        public AnEntity(final int data) {
            this.data = data;
        }
    }
    My main application:
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.EntityTransaction;
    import javax.persistence.Persistence;
    public class Test {
        public static void main(final String[] args) throws InterruptedException {
            new Test();
        }

        final EntityManagerFactory emFTest = Persistence.createEntityManagerFactory("test_pu");
        private final EntityManager emTest = emFTest.createEntityManager();

        public Test() throws InterruptedException {
            int i = 0;
            while (true) {
                final AnEntity entity = new AnEntity((int) (Math.random() * 1000000));
                final long time = System.nanoTime();
                try {
                    final EntityTransaction trans = emTest.getTransaction();
                    trans.begin();
                    emTest.persist(entity);
                    trans.commit();
                } catch (final Exception e) {
                    System.err.println(e.getMessage());
                }
                System.out.println("It took:" + (System.nanoTime() - time) / 1000000 + "ms for " + i);
                i++;
                Thread.sleep(100);
            }
        }
    }
    And my persistence.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence">
              <persistence-unit name="test_pu" transaction-type="RESOURCE_LOCAL">
                        <provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider>
                        <class>AnEntity</class>
                        <properties>
                             <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver" />
                             <property name="toplink.jdbc.url" value="jdbc:mysql://test:3306/test" />
                             <property name="toplink.jdbc.user" value="test" />
                             <property name="toplink.jdbc.password" value="test" />
                             <property name="toplink.logging.level" value="INFO" />
                             <property name="toplink.ddl-generation" value="create-tables" />
                             <property name="toplink.cache.type.default" value="Weak"/>
                        </properties>
              </persistence-unit>
    </persistence>

    Hi Chris,
    I see your point, and this is what I am trying to obtain; I am sure I am getting the underlying JPA philosophy wrong (as a side note, I already moved to EclipseLink :) )
    What I have is a fleet of vehicles, each one sending its GPS position once per second. All the positions are stored in the database.
    The communication between each vehicle and the "server application" is through JMS, so I effectively receive a bunch of positions every second - no easy way to put them in the same transaction, but it is not a big hit on the database so I am not concerned about this.
    What I noticed is that after something like 30 minutes the application starts struggling, every persist takes 300ms, positions start accumulating as the software does not manage to write them as fast as they are generated, and everything starts collapsing badly - it took 15 days to find out the problem was with the TopLink cache.
    I am trying to work out the best way of doing things.
    What I noticed is, if I recreate the EM, all the entities are detached (and I understand why - entities are managed by the entity manager!).
    I understand there are two caches, the session cache and the UoW cache (the EM).
    What I want at a conceptual level is to keep my entities managed until I do not reference them anymore. If I need them again, I have to retrieve them back with em.find, or I can create new ones.
    Unfortunately things seem not to be working this way - the EM keeps everything in its cache even if I do not have any reference to an instance.
    I have quite a good level of control over the global cache, but no control at all over the EM cache (apart from completely wiping it), so I have to recreate the EM every time, but the objects are not managed this way - forcing me to use merge in place of persist (I basically would like the best of both worlds).
    This is not a problem by itself, but this forces me to write code like
    entity=em.merge(entity)
    or better:
    AnEntity tmp=em.merge(entity)
    entity.anEntityID=tmp.anEntityID
    to handle the case where this is a new entity. This is my new code, every time I have to update the DB:
    final EntityManager emTest = emFTest.createEntityManager();
    final EntityTransaction trans = emTest.getTransaction();
    trans.begin();
    final AnEntity tmp = emTest.merge(entity);
    trans.commit();
    entity.anEntityID = tmp.anEntityID;
    emTest.close();
    while with a global EM it was:
    final EntityTransaction trans = emTest.getTransaction();
    trans.begin();
    emTest.persist(entity);
    trans.commit();

  • How to Add Index in Toplink Cache

    Hi
    We are using TopLink as our ORM, with the default cache option (soft cache weak identity map) provided by TopLink. We need to optimize object retrieval from the cache; is there any option to create an index (like a database index) on the TopLink cache?

    The TopLink cache is indexed by primary key only. It does not support additional indexes.
    Doug

  • Refresh toplink cache from trigger

    How do we update the TopLink cache when data is changed in the database by some external process or procedure?
    This question was posted some time back, and one of the suggestions was to create a trigger on the table holding the data and implement a callout to the TopLink cache to refresh it. I would appreciate it if anyone can let me know where I can find more information on implementing such a callout from a trigger on the database table.
    We are accessing the TopLink objects from an OC4J container, where a singleton is managing the calls to the TopLink objects. We already have methods in place to refresh the cached objects based on timeouts, but the new requirement is to refresh the objects only if the data has changed in the database.
    Thanks
    Ahmad

    I get a URL error on this thread: How to refresh cache in TopLink, turn off cache
    Discussion Forums Error
    We cannot process your request at this time. Please try again later.
    Thanks
