Querying TopLink Cache

My system needs to cache the result set obtained from a ReadAllQuery and then run several queries against that static set of cached data; the data to be cached is small. I configure the class descriptors (via session.getProject()) to use a FullIdentityMap so that the output of the ReadAllQuery I execute at system startup stays cached.
But I am not able to query the cache from an external API later (which could be several minutes later). How can I use the TopLink cache APIs to get this done? Kindly reply.

Hi Manoj,
If I understand you correctly, you persist some objects and then query them later. You don't get the results you expect when you use checkCacheOnly(); you need to use checkCacheThenDatabase(), and when you do this you see SQL being issued, which is what I would expect.
If your cache type for the class is FullIdentityMap then TopLink will never release objects of that class once read and your checkCacheOnly() query should work.
I'm guessing that you're using a different TopLink session. You mention you have a number of services: what environment are you running in, and what is your architecture (e.g., servlet or EJB)? Statics don't solve sharing problems, especially in an application server or web application environment where multiple classloaders are employed.
--Shaun
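As a rough illustration of this point (not from the original reply), a single shared session plus an in-memory query could look like the sketch below. The Product class, the "category" attribute, and the sharedSession name are placeholders; the point is only the FullIdentityMap/checkCacheOnly() behaviour against the same session that did the preload.

import java.util.Vector;

import oracle.toplink.expressions.ExpressionBuilder;
import oracle.toplink.queryframework.ReadAllQuery;
import oracle.toplink.sessions.Session;

public class CachedProductLookup {

    // Warm the cache once at startup; with a FullIdentityMap the objects are never released.
    public static void preload(Session sharedSession) {
        sharedSession.executeQuery(new ReadAllQuery(Product.class));
    }

    // Later calls run entirely against the identity map; no SQL is issued.
    // This only works when executed against the SAME session that did the preload.
    public static Vector findByCategory(Session sharedSession, String category) {
        ReadAllQuery query = new ReadAllQuery(Product.class);
        ExpressionBuilder product = query.getExpressionBuilder();
        query.setSelectionCriteria(product.get("category").equal(category));
        query.checkCacheOnly(); // in-memory query: objects never loaded will not be found
        return (Vector) sharedSession.executeQuery(query);
    }
}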

Similar Messages

  • Querying the toplink cache under high-load

    We've had some interesting experiences with "querying" the TopLink Cache lately.
    It was recently discovered that our "read a single object" method was incorrectly
    setting query.checkCacheThenDatabase() for all ReadObjectQueries. This was brought to light
    when we upgraded our production servers from 4 cores to 8. We immediately started
    experiencing very long response times under load.
    We traced this down to the following stack: (TopLink version 10.1.3.1.0)
    at java.lang.Object.wait(Native Method)
    - waiting on <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
    at java.lang.Object.wait(Object.java:474)
    at oracle.toplink.internal.helper.ConcurrencyManager.acquireReadLock(ConcurrencyManager.java:179)
    - locked <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
    at oracle.toplink.internal.helper.ConcurrencyManager.checkReadLock(ConcurrencyManager.java:167)
    at oracle.toplink.internal.identitymaps.CacheKey.checkReadLock(CacheKey.java:122)
    at oracle.toplink.internal.identitymaps.IdentityMapKeyEnumeration.nextElement(IdentityMapKeyEnumeration.java:31)
    at oracle.toplink.internal.identitymaps.IdentityMapManager.getFromIdentityMap(IdentityMapManager.java:530)
    at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.checkCacheForObject(ExpressionQueryMechanism.java:412)
    at oracle.toplink.queryframework.ReadObjectQuery.checkEarlyReturnImpl(ReadObjectQuery.java:223)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.checkEarlyReturn(ObjectLevelReadQuery.java:504)
    at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:564)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
    at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
    We moved the query back to the default, query.checkCacheByPrimaryKey(), and this issue went away.
    The bottleneck seemed to stem from the read lock on the CacheKey taken in IdentityMapKeyEnumeration:
    public Object nextElement() {
        if (this.nextKey == null) {
            throw new NoSuchElementException("IdentityMapKeyEnumeration nextElement");
        }
        // CR#... Must check the read lock to avoid
        // returning half built objects.
        this.nextKey.checkReadLock();
        return this.nextKey;
    }
    We had many threads contending while searching the cache for a particular query.
    From the stack we know that the contention was limited to one class. We've since refactored
    that code path not to use a query.
    Question:
    Armed with this better knowledge of how TopLink queries the cache: we do have a few objects that we
    frequently read by something other than the primary key, i.e. a natural key rather than the OID.
    We have some other caching mechanisms in place (JBoss TreeCache) to help eliminate queries to the DB
    for these objects. But the TreeCache also tries to acquire a read lock when accessing the cache.
    Presumably a read lock over the network to the cluster.
    Is there anything that can be done about the read lock on CacheKey when querying the cache in a high load
    situation?

    CheckCacheThenDatabase will check the entire cache for a match using a linear search. This can be inefficient if the cache is very large. It is typically more efficient to access the database when your cache is large and the field you are querying on is indexed in the table.
    The cache concurrency was greatly improved in TopLink 11g/EclipseLink, so you may wish to try it out.
    Supporting indexes in the TopLink/EclipseLink cache is something desirable (feel free to log the enhancement request on EclipseLink). You can simulate this to some degree using a named query and a query cache.
    -- James : http://www.eclipselink.org
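    As a rough sketch of the named query + query cache suggestion above (assumptions: a hypothetical Customer class with a natural key "accountNumber", and that ReadQuery.cacheQueryResults() and Session.addQuery() are available in your TopLink version):

    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadObjectQuery;
    import oracle.toplink.sessions.Session;

    public class NaturalKeyQueryCache {

        // Register once at startup. The query cache keys results by the bound argument
        // values, which roughly approximates an index on the natural key.
        public static void register(Session session) {
            ReadObjectQuery query = new ReadObjectQuery(Customer.class);
            ExpressionBuilder customer = query.getExpressionBuilder();
            query.setSelectionCriteria(
                    customer.get("accountNumber").equal(customer.getParameter("acct")));
            query.addArgument("acct");
            query.cacheQueryResults(); // enable the per-query results cache
            session.addQuery("customerByAccount", query);
        }

        // Repeated calls with the same argument are served from the query cache,
        // avoiding both the SQL and the linear identity-map scan described above.
        public static Object findByAccount(Session session, String accountNumber) {
            return session.executeQuery("customerByAccount", accountNumber);
        }
    }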

  • Querying by PK against Toplink Cache

    Let's say I have a table A whose primary key is id.
    If toplink generates SQL like this:
    "select id from A where 1=1 and id=3"
    Assuming the data is already cached against that PK, will TopLink retrieve the data from the TopLink cache for this SQL? Or, since I used 1=1, will it fail to find it in the cache and require a database retrieval?
    Thanks,
    Zev.

    Doug, thanks for the response. I expected it since we have seen a large increase in SQL. Let me explain the reason for this and maybe you can point us in a better direction.
    We generate a Map of CriterionValue objects. Then, in our base object, we iterate over the map and, based on the criteria, build the appropriate expression.
    Here is the code for the iteration.
              CriterionValue criterionValueObj;
              selectionExpression = builder.prefixSQL("1=1");
              while (keyIterator.hasNext()) {
                  String key = (String) keyIterator.next();
                  criterionValueObj = (CriterionValue) criteriaHash.get(key);
                  selectionExpression = getExpression(builder, selectionExpression, criterionValueObj, key, firstTimeThroughLoop);
                  firstTimeThroughLoop = false;
              }
    As you can see, we add a prefix of 1=1. The reason for this is that the first time through the loop we need to say:
    ExpressionBuilder builder; //just showing the declaration type
    selection = builder.get(key).containsSubstring(value.toString());
    if it is the second time through (with conjunction)
    ExpressionBuilder builder; //just showing the declaration type
    selection = selection.and (builder.get(key).containsSubstring(value.toString()));
    so we would otherwise need two separate code blocks. Prefixing 1=1 is a tremendous code simplification in our base class: it is as if we are always on the second pass.
    Do you have any suggestions for us? I do not want to modify the caching options programmatically.
    Thanks,
    Zev.
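    For what it's worth, a rough sketch of building the same conjunction without the 1=1 prefix, so that TopLink has a chance to recognize a pure primary-key comparison and satisfy it from the cache. The CriterionValue and criteriaHash names mirror the snippet above; how the criterion's value is obtained (toString() here) is an assumption.

    import java.util.Iterator;
    import java.util.Map;

    import oracle.toplink.expressions.Expression;
    import oracle.toplink.expressions.ExpressionBuilder;

    public class CriteriaBuilderHelper {

        // Returns null when criteriaHash is empty; pass the result to
        // query.setSelectionCriteria(), where null simply means "read all".
        public static Expression buildCriteria(ExpressionBuilder builder, Map criteriaHash) {
            Expression selection = null;
            for (Iterator keys = criteriaHash.keySet().iterator(); keys.hasNext();) {
                String key = (String) keys.next();
                Object value = criteriaHash.get(key); // your CriterionValue; toString() as in the original
                Expression next = builder.get(key).containsSubstring(value.toString());
                // The first criterion stands alone; later ones are AND-ed on, so the
                // first-time/second-time branches (and the 1=1 prefix) disappear.
                selection = (selection == null) ? next : selection.and(next);
            }
            return selection;
        }
    }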

  • Toplink Caching issue

    I have three entities, say C1, P1 and Card. The relationships between these
    entities are as follows:
    P1 to Card ( one - one mapping )
    C1 to Card ( one - many mapping )
    I have three use cases. Use case 1 uses the main entity C1: it
    creates one record through C1 (keep in mind that there is no data for the Card entity at that time).
    The second use case, Use case 2, uses entity P1 and creates data in P1 as
    well as in Card.
    The third use case again takes C1 and tries to get the Card data that
    was inserted through P1.
    But unfortunately it comes back as null. However, if I restart the server and run the third use case, I get all the Card data from the C1 entity that was saved through the P1 entity.
    To summarise my findings, the issue is with TopLink caching. When I
    ran the first use case, C1 was cached. In the second use case I updated the Card entity through P1. When I ran the third use case, TopLink returned the cached entity; TopLink is not able to detect whether a related entity has changed.
    The workaround is not to use the cache for the C1 entity. That works, but it is
    not the correct solution.
    Has anyone faced this issue in TopLink?
    Correct me if my findings are not correct.

    Hello,
    It's not that the cache isn't working; it's that you are updating relationships in the database but not in the cache. Take the case of a 1:M from B->C used in a previous response. It looks like the foreign key used in this relationship (in the C table) is mapped to the C object either as a direct-to-field mapping, or through a 1:1 to the A object. When you change it so that it now somehow points to the B object, you haven't updated the B object to include the C in its collection. Because B has a 1:M mapping collection of Cs, when you next get that B object from the cache, its collection will not contain the newly referenced C (assuming indirection is not used or has previously been triggered). This is a consequence of mapping the 1:M in the first place, since the collection is stored in the object.
    If you are going to make changes that affect relationships and yet not update the relationships directly I would suggest either:
    A) refreshing affected objects from the database when done. In this case, refreshing the B object will result in TopLink requerying the B->C relation and picking up any changes (depending on your refresh options on the B descriptor though - please check the docs if you need more info).
    B) Not mapping these relationships. Instead, when you need to find C objects referenced from B, query for them.
    Both options are less efficient as they cause more hits to the database. This is why it would be preferred if you updated the relationships when you change fields that affect those relationships, but it is up to you to decide which will work best for your application.
    Please note that the cache is working; in fact, the situation you are seeing is a result of it working. I am not sure how your situation would work with the DAO approach you mentioned in a previous post, but it sounds like it would always hit the database. This situation is a result of making changes in the database without changing the object model to reflect those changes, which shows that the cache is indeed working. While you mention that some objects are not in your use case, they are in the database, just not mapped in TopLink. In the case of a 1:M mapping, the database has a 1:1 back relationship that you have not mapped in your object model. So this is really a case of your object model not matching your database model and not being taken into account in your use cases.
    Best Regards,
    Chris
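    As a rough sketch of option (A) above (placeholder B class; obtain the Session from wherever your application holds it), a refreshing read that also cascades to the mapped relationships might look like:

    import oracle.toplink.queryframework.ReadObjectQuery;
    import oracle.toplink.sessions.Session;

    public class RelationshipRefresh {

        // Option A: re-read the stale B so its one-to-many collection of Cs is rebuilt.
        public static Object refreshOwner(Session session, Object staleB) {
            ReadObjectQuery query = new ReadObjectQuery(staleB);
            query.refreshIdentityMapResult(); // overwrite the cached copy with database data
            query.cascadeAllParts();          // refresh mapped relationships as well
            return session.executeQuery(query);
        }
    }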

  • How do I sync Toplink Cache with my Database?

    Hey guys,
    We are running macro's through an excel sheet that connects to the database and performs Updates and Inserts.
    Since this database update is NOT taking place through TopLink (in a Unit of Work) - we do not see the database changes through the web app on the front end unless we bounce our webserver. Presumably the Toplink cache is re-built on start up...so then we can see the changes.
    My question is: what can I do to make the TopLink cache aware of the database changes we have made through the macro, without having to bounce the server? Is there a refresh or sync command that can be run?
    This task is a one-time thing, so I don't want a solution that involves the cache syncing itself on a schedule or anything like that. Maybe bouncing is the best solution?
    Thoughts?
    We are using Toplink 9.0.4
    Thanks.

    Hello,
    Because it is a one-time thing, if you can make sure no other TopLink processes are going on, you can probably get away with the initializeAllIdentityMaps() or initializeIdentityMap(Class) methods on the session. These clear the identity maps, with the obvious drawback of removing all object identity, which will cause problems for running processes. It might still be better than bouncing the server; it depends on the application. Logging out of the session and logging back in has the same effect of clearing the cache, but with a bit more overhead. The benefit is that running processes will get errors if they continue to use the session, rather than strange behaviour if they continue to use objects after identity is lost.
    Another alternative is to run refresh queries on the data you know might be in the cache or might have been affected. The drawback is that this brings objects into the cache if they are not already there.
    TopLink 9.0.4 is quite a few versions back, and newer versions have cache invalidation. An object marked as invalid is not removed, so object identity is maintained, but on the next query its data will be refreshed, ensuring subsequent queries get results from the database without having to be told explicitly to refresh or being set to always refresh. Invalidation can be triggered on particular objects, classes or the entire cache, and policies can be set to give a time to live, etc.
    Except for bouncing the server or logging out of the session, all of the above leave some possibility that a concurrent user will still have a reference to stale data and continue to use it after the batch process has run against the database. So I hope you use optimistic locking and that your batch process updates the version to prevent other processes from overwriting with stale data.
    Best Regards,
    Chris
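    For reference, a minimal sketch of the two approaches above, written against the 10.1.3-style IdentityMapAccessor API; names may differ slightly in 9.0.4, and the invalidation call only exists in the newer releases mentioned:

    import oracle.toplink.sessions.Session;

    public class CacheMaintenance {

        // One-off after the external batch: clear the identity maps completely.
        // Object identity is lost, so make sure no other TopLink work is in flight.
        public static void clearAll(Session session) {
            session.getIdentityMapAccessor().initializeAllIdentityMaps();
        }

        // Newer releases: mark cached instances of a class stale without losing identity;
        // the next query that reads them refreshes from the database.
        public static void invalidate(Session session, Class updatedClass) {
            session.getIdentityMapAccessor().invalidateClass(updatedClass);
        }
    }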

  • ValueHolder Indirection and TopLink Cache

    We have a parent object A (main table) and a child object B (a lookup table), mapped through ValueHolder indirection.
    We use ReadAllQuery to build SQL and retrieve object A(s), and associated B(s).
    The query result is used to populate a web page table.
    The problem is that when the table is populated, the exact same lookup query against object B is repeated without checking cache. That data field is populated by object_A.object_b.description. For example, below same query would be repeated many times when loading up the web page table:
    Select description_id, description from Table_B where description_id = 1
    Why does the query generated by this indirection not check the cache? Is there any way to force it to check the TopLink cache first before querying the database?
    Thanks for any help!
    Jeffrey

    Thanks for the reply.
    We use JDeveloper 10.1.3.2. TopLink map in JDev is used to map all table objects and their relationships. So In TopLink map, object(table) A has a ValueHolder object(table) B through indirection. The primary key of B is: description_id, which is used in the table reference mapping.
    We use the default TopLink settings in JDev, so object B has below settings in TopLink map:
    Identity Map: FullIdentityMap
    Size: 50 (there are only about 20 records in this lookup table)
    Existence Checking: Check Cache
    We don't have any other caching mechanism other than TopLink's. EJB 3.0 is used as service bean, and External Transaction Controller (OC4J) is used.
    How can I check whether B is already in the TopLink cache? I heard that ReadAllQuery always goes to the database without checking the TopLink cache, but in this case the query generated by the lazy-loading indirection happens after the ReadAllQuery execution (when the web page table is loading).
    Jeffrey
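    For what it's worth, a minimal sketch of checking whether a B instance is already in the identity map. DescriptionB and the descriptionId argument are placeholders for your mapped lookup class and its primary key value.

    import java.util.Vector;

    import oracle.toplink.sessions.Session;

    public class CacheInspector {

        // Returns true if the lookup object with the given primary key is already cached.
        public static boolean isCached(Session session, Object descriptionId) {
            Vector primaryKey = new Vector(1);
            primaryKey.add(descriptionId);
            return session.getIdentityMapAccessor()
                    .getFromIdentityMap(primaryKey, DescriptionB.class) != null;
        }

        // Alternatively, dump the whole cache to the log while debugging:
        // session.getIdentityMapAccessor().printIdentityMaps();
    }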

  • Toplink Cache not refreshed

    In my project TopLink is used. CRUD operations are carried out through stored procedures on all objects;
    TopLink queries are not used.
    In some cases, since we wanted to use the existing stored procedures to run the business logic, TopLink is not used at all.
    After running these procedures, we refresh the TopLink cache by using the "refreshObject(java.lang.Object p1)" method of UnitOfWork before we read the object.
    When we read this object, the data appears to be in sync with what is present in the database tables.
    But when we write the data once again to the database, TopLink somehow refers to the old data (the data that was present before the refresh).
    It seems to be a strange issue.
    Kindly help.

    If you call refresh on an object already loaded in the UnitOfWork, and the object was modified before you call refresh, the refresh will override the modification done to the object.
    You should never need to call refresh.
    What you should do instead of refreshing is invalidate the object in the session cache; then, whenever you read the object later, it will be refreshed if it was invalidated, which is more efficient.
    If you use the TopLink 11 JPA API, you need to flush so that the current transaction/UnitOfWork sees the modified data in the database.
    But realistically, stored procedures do not work well with an O-R framework, and you say TopLink queries are not used, so why not remove TopLink from the equation? If you want more convenient JDBC you can use iBatis or Spring.
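    A minimal sketch of the invalidation suggestion (this assumes the IdentityMapAccessor invalidation API available in TopLink 10.1.3 and later):

    import oracle.toplink.sessions.Session;

    public class AfterStoredProcedure {

        // Instead of refreshObject(): mark the cached copy stale so the next read
        // refreshes it from the database, and pending in-memory changes are not
        // silently overwritten by an immediate refresh.
        public static void markStale(Session session, Object modifiedByProcedure) {
            session.getIdentityMapAccessor().invalidateObject(modifiedByProcedure);
        }
    }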

  • Toplink Cache not getting refreshed after executing UpdateAllQuery

    After executing UpdateAllQuery, the records in the database are getting updated, however the Toplink cache is not getting refreshed with the new data, it still has stale data which is causing issues.
    We're also setting
    updateQuery.setCacheUsage(UpdateAllQuery.INVALIDATE_CACHE);
    Thanks.

    Toplink version is 10.1.3.0.0
    Here is the code
    UpdateAllQuery updateQuery = new UpdateAllQuery(RegisterImpl.class);
    updateQuery.setCacheUsage(UpdateAllQuery.INVALIDATE_CACHE);
    ExpressionBuilder registerBuilder = updateQuery.getExpressionBuilder();
    updateQuery.addArgument("userIdArg");
    updateQuery.addArgument("channelArg");
    updateQuery.addArgument("tokenArg");
    updateQuery.addArgument("dateArg");
    updateQuery.addArgument("businessFuncIdArg");
    Expression reservedDetailRegExp = registerBuilder.get("reservedDetail");
    // build expressions
    Expression userIdExp = reservedDetailRegExp.get("userId").equal(
    registerBuilder.getParameter("userIdArg"));
    Expression channelExp = reservedDetailRegExp.get("channel").equal(
    registerBuilder.getParameter("channelArg"));
    Expression tokenExp = reservedDetailRegExp.get("token").equal(
    registerBuilder.getParameter("tokenArg"));
    Expression dateExp = reservedDetailRegExp.get("date").equal(
    registerBuilder.getParameter("dateArg"));
    Expression busFuncExp = registerBuilder.get("businessFuncId").equal(
    registerBuilder.getParameter("businessFuncIdArg"));
    // set selection criteria
    updateQuery.setSelectionCriteria(userIdExp.and(channelExp.and(tokenExp.and(dateExp
    .and(busFuncExp)))));
    // substitute the values
    updateQuery.addUpdate(reservedDetailRegExp.get("userId"), "");
    updateQuery.addUpdate(reservedDetailRegExp.get("channel"), "");
    updateQuery.addUpdate(reservedDetailRegExp.get("token"), "");
    updateQuery.addUpdate(reservedDetailRegExp.get("date"), "");
    updateQuery.addUpdate(registerBuilder.get("businessFuncId"), "");
    In the object model for the query, RegisterImpl has an aggregate mapping (ReservedDetail), which in turn has a number of direct-to-field mappings and a one-to-one mapping to businessFunction (for our query we use a direct query key "businessFuncId").
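    For completeness, a sketch (not from the post) of executing the query above with the five arguments bound in the order they were added, followed by an explicit class-level invalidation as a fallback in case INVALIDATE_CACHE leaves the aggregate-mapped fields stale:

    import java.util.Vector;

    import oracle.toplink.queryframework.UpdateAllQuery;
    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;

    public class RegisterBulkUpdate {

        public static void execute(Session session, UpdateAllQuery updateQuery,
                                   String userId, String channel, String token,
                                   String date, String businessFuncId) {
            Vector args = new Vector(5);
            args.add(userId);         // userIdArg
            args.add(channel);        // channelArg
            args.add(token);          // tokenArg
            args.add(date);           // dateArg
            args.add(businessFuncId); // businessFuncIdArg

            UnitOfWork uow = session.acquireUnitOfWork();
            uow.executeQuery(updateQuery, args);
            uow.commit();

            // Fallback: drop every cached RegisterImpl so later reads go to the database.
            session.getIdentityMapAccessor().invalidateClass(RegisterImpl.class);
        }
    }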

  • Toplink Cache problem

    I am using JDev version "Studio Edition Version 10.1.3.1.0.3984,
    Build JDEVADF_10.1.3.0_NT_061009.1404.3984".
    I have an application with JSF and TopLink and am facing issues with the TopLink cache mechanism while testing in JDev. The scenario is as follows:
    1) When clicking on a link, say 'Get status', I query data from the database and display it in JSF.
    2) Later, I update a particular record.
    3) When I click on the link again (Get status), the updated data is not getting reflected.
    Here is the code to get the data; I call this method whenever I click the 'Get status' link.
    public Vector getMaintenanceStatusList() {
        Vector lockersStatus = null;
        try {
            Session sess = ToplinkDBUtils.getTopLinkSession();
            sess.validateCache();
            ReadAllQuery query = new ReadAllQuery();
            query.dontCheckCache();
            query.dontMaintainCache();
            query.setReferenceClass(LockerMaintenanceStatus.class);
            query.useDistinct();
            lockersStatus = sess.readAllObjects(LockerMaintenanceStatus.class);
        } catch (DatabaseException e) {
            e.printStackTrace();
        }
        // ******** Printing the updated value.
        for (Iterator i = lockersStatus.iterator(); i.hasNext();) {
            LockerMaintenanceStatus tmp = (LockerMaintenanceStatus) i.next();
            System.out.println("Status value :: " + tmp.getStatus());
        }
        return lockersStatus;
    }
    Here I am getting the old status value the second time, even though I have updated the database with a new status (this is the scenario when I work in the same browser with the same TopLink session).
    I only get the updated values when I restart the JDev OC4J.
    Please let me know how I can solve the problem.
    Thanks!
    Veeraswami K

    Veeraswami,
    In your code you are constructing a query but are not executing it. Instead you are simply calling session.readAllObjects. The result of this call will be the cached versions (based on default cache config).
    I would recommend doing:
    ReadAllQuery query = new ReadAllQuery(LockerMaintenanceStatus.class);
    query.refreshIdentityMapResult();
    lockersStatus = (Vector) sess.executeQuery(query);
    The other settings you had used would bypass the cache and would not cause the cached results to be refreshed.
    Doug

  • Toplink Cache issues in Cluster

    Hi
    Our production environment is clustered, and we have been noticing the following problem: when a user tries to save a record, she repeatedly encounters the "TOPLINK-5006" exception that I have included below.
    TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    [TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    <Jul 19, 2006 4:49:23 PM PDT> <Error> <EJB> <BEA-010026> <Exception occurred during commit of transaction Name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)],Xid=BEA1-795A6481D2E1938A8EAD(115171166),Status=Rolled back. [Reason=Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]],numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=60,XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=rolledback,assigned=MS15_mjp),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@6dcf50b),SCInfo[mjp+MS15_mjp]=(state=rolledback),properties=({weblogic.transaction.name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)], weblogic.jdbc=t3://10.253.129.56:2323}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+, XAResources={},NonXAResources={})],CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+): Local Exception Stack:
    Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
         at oracle.toplink.exceptions.OptimisticLockException.objectChangedSinceLastReadWhenUpdating(Ljava/lang/Object;Loracle/toplink/queryframework/ObjectLevelModifyQuery;)Loracle/toplink/exceptions/OptimisticLockException;(OptimisticLockException.java:109)
    What is puzzling is that occurrences of this nature have increased with user load, and the TopLink cache does not seem to be refreshed after it encounters the first optimistic lock exception. We have run several tests and this is not reproducible in the DEV environment, where we do not have a clustered setup. After making a few updates to a record, users start experiencing the problem, and for some it persists for a really long time.
    We do not have cache synchronization.
    The cluster setup is as follows:
    there are 4 boxes and each box has one admin and 4 managed servers.
    I have included the toplink-cmp-people.xml (this is the particular entity bean we have a problem with). Our application server is WebLogic and our TopLink version is 9.0.4.2.
    <toplink-ejb-jar>
    <session>
    <name>People</name>
    <project-class>
    com.rhii.mjplus.fo.people.beans.PeopleToplink
    </project-class>
    <login>
    <datasource>MJPool</datasource>
    <non-jts-datasource>MJPool</non-jts-datasource>
    </login>
    <use-remote-relationships>true</use-remote-relationships>
    <customization-class>com.rhii.mjplus.common.TopLinkCustomization
    </customization-class>
    </session>
    </toplink-ejb-jar>
    I would appreciate any kind of feedback
    Thanks
    Lakshmi

    Can you refresh that record using a query before you save it?
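    A rough sketch of that suggestion (placeholder attribute name "id"; in a CMP setup the UnitOfWork would come from the container-managed session): re-read the record with a refreshing query before applying the user's changes, so the update and its optimistic-lock version are based on the current row rather than a stale cached copy.

    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadObjectQuery;
    import oracle.toplink.sessions.UnitOfWork;

    public class RefreshBeforeSave {

        public static Object refreshForEdit(UnitOfWork uow, Class peopleClass, Object primaryKey) {
            ReadObjectQuery query = new ReadObjectQuery(peopleClass);
            ExpressionBuilder people = query.getExpressionBuilder();
            query.setSelectionCriteria(people.get("id").equal(primaryKey));
            query.refreshIdentityMapResult(); // overwrite the stale cached copy before editing
            return uow.executeQuery(query);
        }
    }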

  • Query result caching on oracle 9 and 10 vs indexing

    I am trying to improve performance on oracle 9i and 10g.
    We use some queries that take up to 30 minutes to execute.
    I heard that there are some products to cache query results.
    Would this have any advantage over using indexes or materialized views?
    Does anyone know of any products that I can use to cache the results of these queries on disk?
    Personally I think that by using query result caching I would reduce the CPU time needed to process the query.
    Is this true?

    Your message post pushes all the wrong buttons starting with the fact that 9i and 10g are marketing labels not version numbers.
    You don't tune queries by spending money and throwing resources at them. You tune them by identifying the problem queries, running explain plans, visualizing their output using DBMS_XPLAN, and addressing the root cause.
    If you want help post full version numbers, the SQL statements, and the DBMS_XPLAN outputs.
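    For example, a minimal JDBC sketch (placeholder SQL string) of producing the DBMS_XPLAN output being asked for:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowPlan {

        // Runs EXPLAIN PLAN for the given statement and prints the DBMS_XPLAN output.
        public static void print(Connection connection, String sql) throws Exception {
            Statement stmt = connection.createStatement();
            try {
                stmt.execute("EXPLAIN PLAN FOR " + sql);
                ResultSet rs = stmt.executeQuery("SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY())");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                rs.close();
            } finally {
                stmt.close();
            }
        }
    }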

  • Pinning the sql query in cache

    Hi All,
    I want to pin the SQL query in the cache because the physical reads are very high. Can anyone tell me the steps to pin the SQL query in the cache? Current version: 10.2.1.0. OS: Windows.
    Physical Reads  Executions  Reads per Exec  %Total  CPU Time (s)  Elapsed Time (s)  SQL Id
    19,836          38          522.0           2.1     25.40         50.00             1r0wh3v6bayyk
    With regards
    kccrga

    Oscar,
    I've read Cary's paper and a good paper it is.
    The point I was trying to get across is that there should be no rules of thumb (bar this one, of course ;) ).
    It all depends. Should one concentrate on reducing CPU usage on a disk-bound system? Is doing, say, 100 single-block reads from disk faster than doing 100 current-mode gets?

  • Using the client result cache without the query result cache

    I have constructed a client in C# using ODP.NET to connect to an Oracle database and want to perform client result caching for some of my queries.
    This is done using a result_cache hint in the query.
    select /*+ result_cache */ * from table
    As far as I can tell query result caching on the server is done using the same hint, so I was wondering if there was any way to differentiate between the two? I want the query results to be cached on the client, but not on the server.
    The only way I have found to do this is to disable all caching on the server, but I don't want to do this as I want to use the server cache for PL/SQL function results.
    Thanks.

    You haven't provided ANY information about how you configured the result cache. Different parameters are used for configuring the client versus the server result cache so you need to post what, if anything, you configured.
    Post the code you executed when you set the 'client_result_cache_lag' and 'client_result_cache_size' parameters so we can see what values you used. Also post the results of querying those parameters after you set them that show that they really are set.
    You also need to post the app code showing that you are using the OCI statements that are required for client-side result caching.
    See the OCI dev guide
    http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci10new.htm#sthref1491
    Statement Caching in OCI
    Statement caching refers to the feature that provides and manages a cache of statements for each session. In the server, it means that cursors are ready to be used without the need to parse the statement again. Statement caching can be used with connection pooling and with session pooling, and will improve performance and scalability. It can be used without session pooling as well. The OCI calls that implement statement caching are:
      OCIStmtPrepare2()
      OCIStmtRelease()

  • How to Add Index in Toplink Cache

    Hi
    We are using TopLink as our ORM, with the default cache option (soft cache, weak identity) provided by TopLink. We need to optimize object retrieval from the cache. Is there any option to create an index (like a database index) on the TopLink cache?

    The TopLink cache is indexed by primary key only. It does not support additional indexes.
    Doug

  • Refresh toplink cache from tirgger

    How do I update the TopLink cache when the data is changed in the database by some external process or procedure?
    This question was posted some time back, and one of the suggestions was to create a trigger on the table holding the data and implement a callout to the TopLink cache to refresh it. I would appreciate it if anyone can let me know where I can find more information on implementing such a callout method from a trigger on the database table.
    We access the TopLink objects from an OC4J container, where a singleton manages the calls to the TopLink objects. We already have methods in place to refresh the cached objects based on timeouts, but the new requirement is to refresh the objects only if the data has changed in the database.
    Thanks
    Ahmad
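    A rough sketch of what the callout target could look like on the application side (placeholder names; the database trigger or external process would need some way, such as a servlet or queue listener, to reach it):

    import oracle.toplink.sessions.Session;

    public class CacheCallout {

        private final Session sharedSession; // the session held by the OC4J singleton

        public CacheCallout(Session sharedSession) {
            this.sharedSession = sharedSession;
        }

        // Invoked by whatever endpoint (servlet, MDB, scheduled poll of a "changes" table)
        // the database-side process reaches after the trigger fires.
        public void onExternalChange(Class changedClass) {
            sharedSession.getIdentityMapAccessor().invalidateClass(changedClass);
        }
    }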

    I get a URL error on this thread: How to refresh cache in TopLink, turn off cache
    Discussion Forums Error
    We cannot process your request at this time. Please try again later.
    Thanks
