Kodo Data Cache Usage

I'm using Kodo 2.3.2 running in a managed environment (JBoss 3.x). I have
it configured to use the "LocalCache" for caching of object instances and I
can see that the cache is working.
Does anyone know how I can get a reference to the DataCache? There is
example code in section 7.3.2 of the Kodo Manual, but that code doesn't
compile for me.
The specific code from the manual is:
PersistenceManagerFactoryImpl factory =
    (PersistenceManagerFactoryImpl) pm.getPersistenceManagerFactory();
factory.getDataCache().pin(JDOHelper.getObjectId(o));
The PersistenceManagerFactoryImpl doesn't have a method "getDataCache".
Thanks in advance.

Oops; that should be 'factory.getConfiguration().getDataCache()'.
-Fred
In article <amphkp$bes$[email protected]>, TJanusz wrote:
> [original message quoted above]
Fred Lucas
SolarMetric Inc.
202-595-2064 x1122
http://www.solarmetric.com
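Putting Fred's correction together with the manual's example gives the working version below (a minimal sketch against the Kodo 2.3.x API; 'pm' is an open PersistenceManager, 'o' a persistent instance, and the import for PersistenceManagerFactoryImpl is omitted since its package depends on the Kodo distribution):
// Pin an object's cached data by its JDO identity (Kodo 2.3.x sketch).
PersistenceManagerFactoryImpl factory =
    (PersistenceManagerFactoryImpl) pm.getPersistenceManagerFactory();
factory.getConfiguration().getDataCache().pin(JDOHelper.getObjectId(o));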

Similar Messages

  • Livecache data cache usage - table monitor_caches

    Hi Team,
    We have a requirement to capture the data cache usage of liveCache on an hourly basis.
    Instead of doing it manually by going into LC10 and copying the data into Excel, is there a table that captures this data periodically, which we could use to pull the report in a single shot?
    "monitor_caches" is one table that holds this data, but we are not sure how to get the data out of it, or even how to view its contents.
    As "monitor_caches" is a MaxDB table and I have never worked with MaxDB before, I am not sure how to query it.
    Has anyone had this requirement?
    Warm Regards,
    Venu

    Hi,
    For cache usage, the tables below can be referred to:
    Data Cache Usage - total (table MONITOR_CACHES)
    Data Cache Usage - OMS Data (table MONITOR_CACHES)
    Data Cache Usage - SQL Data (table MONITOR_CACHES)
    Data Cache Usage - History/Undo (table MONITOR_CACHES)
    Data Cache Usage - OMS History (table MONITOR_CACHES)
    Data Cache Usage - OMS Rollback (table MONITOR_CACHES)
    Out Of Memory Exceptions (table SYSDBA.MONITOR_OMS)
    OMS Terminations (table SYSDBA.MONITOR_OMS)
    Heap Usage (table OMS_HEAP_STATISTICS)
    Heap Usage in KB (table OMS_HEAP_STATISTICS)
    Maximum Heap Usage in KB (table ALLOCATORSTATISTICS)
    System Heap in KB (table ALLOCATORSTATISTICS)
    Parameter OMS_HEAP_LIMIT (KB) (dbmrfc command param_getvalue OMS_HEAP_LIMIT)
    For reporting purposes, look into the following BW extractors and develop a BW report:
    /SAPAPO/BWEXDSRC APO -> BW: Data Source - Extractor
    /SAPAPO/BWEXTRAC APO -> BW: Extractors for Transactional Data
    /SAPAPO/BWEXTRFM APO -> BW: Formula to Calculate a Key Figure
    /SAPAPO/BWEXTRIN APO -> BW: Dependent Extractors
    /SAPAPO/BWEXTRMP APO -> BW: Mapping Extractor Structure Field
    Hope this helps.
    Regards,
    Deepak Kori
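    If you want to pull MONITOR_CACHES programmatically instead of through LC10, a JDBC sketch along these lines may work (assumptions: the SAP MaxDB JDBC driver class com.sap.dbtech.jdbc.DriverSapDB is on the classpath, host/database/credentials are placeholders, and your user is allowed to read the table; columns are printed generically because they vary by liveCache version):
    import java.sql.*;
    public class CacheUsageDump {
        public static void main(String[] args) throws Exception {
            // MaxDB JDBC URL format: jdbc:sapdb://<host>/<database>
            Class.forName("com.sap.dbtech.jdbc.DriverSapDB");
            Connection con = DriverManager.getConnection(
                    "jdbc:sapdb://lchost/LC1", "CONTROL", "secret");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM MONITOR_CACHES");
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next())
                for (int i = 1; i <= md.getColumnCount(); i++)
                    System.out.println(md.getColumnName(i) + " = " + rs.getString(i));
            rs.close();
            stmt.close();
            con.close();
        }
    }
    Scheduling that hourly (cron or an SAP background job) would give the periodic snapshots asked about above.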

  • Data Cache usage 96%

    Hi Experts,
    With a total of 7 planning versions and a total size of 9 GB, the data cache is constantly filling up.
    A total of 14 GB is allocated, and each time we create a new version the usage goes up by 10%.
    There are no old sessions of more than 4 hours, and when 5 planners start working simultaneously the cache starts filling up.
    Thanks,
    Naren

    > With a total of 7 planning versions and a total size of 9 GB, the data cache is constantly filling up.
    >
    > A total of 14 GB is allocated, and each time we create a new version the usage goes up by 10%.
    >
    > There are no old sessions of more than 4 hours, and when 5 planners start working simultaneously the cache starts filling up.
    Now what?
    You create multiple versions of liveCache objects in parallel - of course the cache usage will increase (and also the used space in the data area).
    That's how the multi-version-data-handling works.
    A copy of the current version is created for each new version.
    So either add more cache, reduce your planning version complexity/size, or do less in parallel if you want to prevent the cache from being used 100%.
    regards,
    Lars

  • High cache usage of free memory

    I don't know if this is normal, but my fresh Arch Linux x86_64 install with Xfce uses around 300 MB of the 4 GB RAM after boot. Then, after I start and close some programs, I get to 3 GB used of 3.87 GB free memory when all applications are closed. I understand that the kernel caches some data in memory, but I can't clear it all with these commands:
    sync
    echo 1 > /proc/sys/vm/drop_caches
    echo 2 > /proc/sys/vm/drop_caches
    echo 3 > /proc/sys/vm/drop_caches
    After I do that, the system reports 540 MB used, compared to 300 MB at boot. Is there a way to limit the amount of cache usage so I don't end up having all free memory used by cache?
    Last edited by Anril (2009-12-29 13:19:52)

    RAM that is unused is wasted RAM; the Linux kernel manages it very well, so no worries there.
    If you have swap usage, it's most probably because some things stored in RAM are not used often but cannot be discarded either, so things used more often take precedence.
    If you think you have enough RAM for all your needs, then why not disable swap? I've read that there might be a speed penalty for running without swap, but I've been running my system without it (arch64, 4 GB RAM) and I've never had any bad surprises.
    There are a few things that might prompt swap usage (don't quote me on that, though; it just seems to me to be the case): if you copy many files from one place to another, the kernel will use all the available RAM to try to cache them, and if you copy many GB then all the RAM will get filled (and maybe other things will get evicted to swap).
    It's not a bug; free RAM is there to be used. Things that have been used or needed recently get to stay in RAM, and other things can go to swap. If you can cache a file that has been used recently and you need it again shortly after, it's much faster to read it from RAM than from the hard disk again. The logic behind it is good; it's just that some corner cases may make it behave badly.

  • Problems with data cache plugin - The ResultList has been closed

    I'm testing out the data cache to see if it helps some of my performance
    problems, but I now get lots of exceptions that I didn't get before I
    enabled the cache. Here's how I enabled it:
    # CACHE
    com.solarmetric.kodo.DataCacheClass=com.solarmetric.kodo.runtime.datacache.plugins.CacheImpl
    com.solarmetric.kodo.RemoteCommitProviderClass=com.solarmetric.kodo.runtime.event.impl.SingleJVMRemoteCommitProvider
    The exception I'm getting follows. I'm curious whether anyone has any insight into what's going on. I'm sure there is a problem with my code -- I'm forgetting to close something or other -- but since it works fine without the cache, I'm really stuck as to what it is.
    Thanks
    Michael
    22:17:32,792 ERROR ObjectFinder - Exception =
    com.solarmetric.kodo.runtime.FatalUserException: The ResultList has been closed.
    NestedThrowables:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    com.solarmetric.kodo.runtime.FatalUserException: The ResultList has been closed.
    NestedThrowables:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.checkClosed(EagerResultList.java:66)
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.get(EagerResultList.java:84)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.get(CachingRandomAccessList.java:124)
    at java.util.AbstractList$Itr.next(AbstractList.java:416)
    at java.util.AbstractList.equals(AbstractList.java:604)
    at serp.util.WeakCollection$WeakValue.equals(WeakCollection.java:123)
    at java.util.HashMap.eq(HashMap.java:270)
    at java.util.HashMap.removeEntryForKey(HashMap.java:525)
    at java.util.HashMap.remove(HashMap.java:507)
    at java.util.HashSet.remove(HashSet.java:198)
    at serp.util.RefValueCollection.removeFilter(RefValueCollection.java:272)
    at serp.util.RefValueCollection.remove(RefValueCollection.java:235)
    at com.solarmetric.kodo.runtime.datacache.query.QueryCacheImpl.unregisterClassChangeListener(QueryCacheImpl.java:160)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.abortCaching(CachingRandomAccessList.java:103)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.get(CachingRandomAccessList.java:149)
    at java.util.AbstractList$Itr.next(AbstractList.java:416)
    at java.util.AbstractList.hashCode(AbstractList.java:624)
    at serp.util.WeakCollection$WeakValue.<init>(WeakCollection.java:93)
    at serp.util.WeakCollection.createRefValue(WeakCollection.java:64)
    at serp.util.RefValueCollection.addFilter(RefValueCollection.java:193)
    at serp.util.RefValueCollection.add(RefValueCollection.java:166)
    at serp.util.RefValueCollection.add(RefValueCollection.java:157)
    at com.solarmetric.kodo.runtime.datacache.query.QueryCacheImpl.registerClassChangeListener(QueryCacheImpl.java:151)
    at com.solarmetric.kodo.runtime.datacache.query.CachingRandomAccessList.<init>(CachingRandomAccessList.java:76)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.wrapList(CacheAwareQuery.java:146)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.execute(CacheAwareQuery.java:270)
    at com.verideon.veriguard.persistence.ObjectFinder.getObjects(ObjectFinder.java:62)
    at com.verideon.veriguard.persistence.ObjectFinder.getObject(ObjectFinder.java:44)
    at com.verideon.veriguard.persistence.ObjectFinder.getRole(ObjectFinder.java:91)
    at com.verideon.veriguard.services.VeriguardService.createCustomerAccount(VeriguardService.java:210)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.createTestUser(TestVeriguardServiceMonitors.java:133)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.setUp(TestVeriguardServiceMonitors.java:80)
    at junit.framework.TestCase.runBare(TestCase.java:125)
    at junit.framework.TestResult$1.protect(TestResult.java:106)
    at junit.framework.TestResult.runProtected(TestResult.java:124)
    at junit.framework.TestResult.run(TestResult.java:109)
    at junit.framework.TestCase.run(TestCase.java:118)
    at junit.framework.TestSuite.runTest(TestSuite.java:208)
    at junit.framework.TestSuite.run(TestSuite.java:203)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)
    NestedThrowablesStackTrace:
    com.solarmetric.kodo.runtime.ClosurePoint: Closure point of object held in embedded stack trace.
    at com.solarmetric.kodo.runtime.objectprovider.EagerResultList.close(EagerResultList.java:78)
    at com.solarmetric.kodo.impl.jdbc.query.JDBCQuery.close(JDBCQuery.java:127)
    at com.solarmetric.kodo.query.QueryImpl.closeAll(QueryImpl.java:637)
    at com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery.closeAll(CacheAwareQuery.java:343)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.closeQueries(PersistenceManagerImpl.java:934)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.close(PersistenceManagerImpl.java:914)
    at com.solarmetric.kodo.runtime.PersistenceManagerImpl.close(PersistenceManagerImpl.java:884)
    at com.verideon.veriguard.services.PMService.close(PMService.java:120)
    at com.verideon.veriguard.services.PMService.close(PMService.java:111)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.deleteTestUser(TestVeriguardServiceMonitors.java:127)
    at com.verideon.veriguard.services.TestVeriguardServiceMonitors.tearDown(TestVeriguardServiceMonitors.java:103)
    at junit.framework.TestCase.runBare(TestCase.java:130)
    at junit.framework.TestResult$1.protect(TestResult.java:106)
    at junit.framework.TestResult.runProtected(TestResult.java:124)
    at junit.framework.TestResult.run(TestResult.java:109)
    at junit.framework.TestCase.run(TestCase.java:118)
    at junit.framework.TestSuite.runTest(TestSuite.java:208)
    at junit.framework.TestSuite.run(TestSuite.java:203)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)

    Michael,
    Could you send your test case to [email protected] so that we
    can take a look at what's going on to cause this exception?
    Thanks,
    -Patrick
    On Thu, 22 May 2003 17:18:50 -0400, Michael wrote:
    > [original message and stack trace quoted in full above]
    --
    Patrick Linskey
    SolarMetric Inc.
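    One pattern worth trying while chasing this (a sketch only, not a confirmed diagnosis of the code above): materialize the query results into a plain ArrayList before the Query or PersistenceManager is closed, so that nothing touches the lazy, cache-aware result list after closure.
    import java.util.ArrayList;
    import java.util.List;
    import javax.jdo.Query;
    // 'pm' is an open PersistenceManager; 'Role' is a placeholder class.
    Query q = pm.newQuery(Role.class, "name == nameParam");
    q.declareParameters("String nameParam");
    List results = (List) q.execute("admin");
    List copy = new ArrayList(results); // force all rows out of the lazy list now
    q.closeAll();                       // safe: the ResultList is never read again
    // 'copy' survives pm.close(); whether its elements' fields stay readable
    // afterwards depends on the RetainValues setting.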

  • Caching objects in the data cache as a result of an extent.

    Patrick -
    I wanted to post this since it's related to a question I posted about extents and the data cache on 11/8.
    I discovered that the com.solarmetric.kodo.DefaultFetchBatchSize setting affects how many objects
    get put into the data cache as a result of running an extent (in 2.3.2). If I have:
    com.solarmetric.kodo.DefaultFetchBatchSize=20
    then as soon as I execute the second line below:
    Iterator anIterator = results.iterator();
    Object anObject = anIterator.next();
    I see 20 objects in my data cache. In a prior reply you indicated that you were going to check this
    behavior in 2.4 so I wanted to send you this additional information. This behavior isn't a problem
    for me.
    Les

    Les,
    This is expected behavior -- the DefaultFetchBatchSize setting instructs Kodo to retrieve objects from the scrollable ResultSet in groups of 20. So, getting the first item from the iterator will cause a page of 20 objects to be pulled from the result set.
    -Patrick
    Les Selecky wrote:
    > [original message quoted above]
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com
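    To make the paging concrete, a sketch (the Magazine class and 'pm' are placeholders; the property value is the one from Les's setup):
    // kodo.properties: com.solarmetric.kodo.DefaultFetchBatchSize=20
    import java.util.Iterator;
    import javax.jdo.Extent;
    Extent extent = pm.getExtent(Magazine.class, false); // no subclasses
    Iterator anIterator = extent.iterator();
    Object anObject = anIterator.next(); // pulls one page of 20 rows from the
                                         // ResultSet; all 20 land in the data cache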

  • Possible to set CacheSize for the single-JVM version of the data cache?

    Hi -
    I'm using Kodo 2.3.2.
    Can I set the size for the data cache when I'm using a subclass of the single-JVM version of the
    data cache? Or is this only a feature for the distributed version?
    Here are more details...
    I'm using a subclass of LocalCache so I can display the cache contents.
    The kodo.properties contains these lines:
    com.solarmetric.kodo.DataCacheClass=com.siemens.financial.jdoprototype.app.TntLocalCache
    com.solarmetric.kodo.DataCacheProperties=CacheSize=10
    When my test program starts, it displays getConfiguration().getDataCacheProperties() and the value
    is displayed as "CacheSize=10".
    But when I load 25 objects by OID and display the contents of the cache, I see that all the objects
    have been cached (not just 10). Can you shed light on this?
    Thanks,
    Les

    The actual size of the cache is a bit more complex than just the CacheSize
    setting. The CacheSize is the number of hard references to maintain in the
    cache. So, the most-recently-used 10 elements will have hard refs to them,
    and the other 15 will be moved to a SoftValueCache. Soft references are not
    garbage-collected immediately, so you might see the cache size remain at
    25 until you run out of memory. (The JVM has a good deal of flexibility in
    how it implements soft references. The theory is that soft refs should stay
    around until absolutely necessary, but many JVMs treat them the same as
    weak refs.)
    Additionally, pinning objects into the cache has an impact on the cache
    size. Pinned objects do not count against the cache size. So, if you have
    15 pinned objects, the cache size could be 25 even if there are no soft
    references being maintained.
    -Patrick
    In article <aqrpqo$rl7$[email protected]>, Les Selecky wrote:
    > [original message quoted above]
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com
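    The hard/soft split Patrick describes can be pictured with a small standalone sketch (illustrative only -- this is not Kodo's CacheMap, just the same idea: the N most-recently-used entries are held with hard references, and older entries are demoted to soft references that the JVM may or may not collect promptly):
    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;
    public class TwoTierCache {
        private final int hardSize;
        private final Map soft = new HashMap(); // key -> SoftReference
        private final LinkedHashMap hard;       // LRU tier of hard references
        public TwoTierCache(final int hardSize) {
            this.hardSize = hardSize;
            // access-ordered map; the eldest entry is demoted, not dropped
            this.hard = new LinkedHashMap(16, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry eldest) {
                    if (size() > TwoTierCache.this.hardSize) {
                        soft.put(eldest.getKey(),
                            new SoftReference(eldest.getValue()));
                        return true; // evict from the hard tier only
                    }
                    return false;
                }
            };
        }
        public void put(Object key, Object value) {
            hard.put(key, value);
        }
        public Object get(Object key) {
            Object v = hard.get(key);
            if (v != null)
                return v;
            SoftReference ref = (SoftReference) soft.remove(key);
            v = (ref == null) ? null : ref.get();
            if (v != null)
                hard.put(key, v); // promote back to the hard tier
            return v;
        }
    }
    Pinned objects would simply live in a third map that neither tier ever evicts, which is why they don't count against the configured size.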

  • How to guarantee that all events regarding Data Cache are dispatched when application is terminating? Urgent

    Hello,
    we are seeing the phenomenon that when an application commits a transaction and then terminates, often not all Data Cache events are dispatched by the TCP kodo.RemoteCommitProvider.
    It seems that on termination the JVM does not wait until the RemoteCommitProvider has dispatched all Data Cache events. This way we sometimes lose cache synchronization, and some of our customers run into serious problems.
    Is there a way to guarantee that all Data Cache events are dispatched before the application terminates (maybe by implementing a shutdown hook?).
    best regards
    Guenther Demetz
    Wuerth-Phoenix SRL

    Hi,
    as nobody answered my question, I'll try to put it more simply:
    Are the TCP kodo.RemoteCommitProvider threads user threads, or daemon threads?
    I hope someone can answer soon.
    best regards
    Guenther Demetz
    Wuerth-Phoenix SRL
    G.Demetz wrote:
    > [original message quoted above]
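    On the shutdown-hook idea, a sketch of the shape it could take (with assumptions flagged: JDO 1.0's javax.jdo.PersistenceManagerFactory has no close(), so whether your Kodo release exposes one on its factory implementation -- and whether closing it flushes pending remote-commit events -- must be verified against your version):
    // Sketch: give the RemoteCommitProvider a chance to drain before JVM exit.
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            // Preferred, if your Kodo release supports it (hypothetical cast):
            // ((KodoFactoryImpl) pmf).close();
            try {
                Thread.sleep(2000); // crude fallback: let dispatch threads finish
            } catch (InterruptedException ignored) {
            }
        }
    });
    If the provider's threads turn out to be daemon threads, a hook like this is their only chance to finish before the JVM kills them.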

  • IPhone Apps - Data Cache

    When I go to the iPhone menu: Settings > General > Usage, I can see the size of each app.
    Most apps are relatively small, but the info & data size grows every day as I use them.
    For example, the Facebook app itself is only 14 MB, but my data size is more than 1.4 GB.
    Besides deleting the app and reinstalling, is there any way to reset this data? Apple only has a Delete button on that page.
    I know deleting and reinstalling will bring the app back to 14 MB, and there's no data lost on my FB page. But this method is not very efficient when there are hundreds of apps' data caches to clear...
    Does anyone have an idea how to clear this data cache for apps?

    Hi pvonk,
    Thanks for the pointer. I've tried that, but it didn't clear the app data cache.

  • Data Cache

    I am using the data cache and don't seem to have any way of querying it
    at a point in time to see its effectiveness. In the DataCache
    interface, I don't see methods that might let me know:
         * how many hard references are currently in memory
         * about how much memory is being used by the cache
         * some measure of the "hit ratio",
         * some measure of how much is being discarded
    This instrumentation would let me tune the cache to the environment. I
    don't see any sort of tuning that is possible under the current
    implementation, so your answer "more is better", appears to be the best
    that I can do. I might like to change
    com.solarmetric.kodo.CacheReferenceSize but by what motivation?
    Thanks,
    Dan

    Patrick,
    Looking at the box, tonight, I see that the cached objects have
    increased to around 2835. I find this very odd since the max is set to
    2500. I'm getting very confused by this behaviour since it doesn't
    correspond to expectations.
    As a note about garbage collection, I did force a gc() in the code, and the cached objects went from 2835 to 2507. Maybe your theory isn't quite right, eh? <g>
    Maybe the way it works is that it caches objects until some sort of garbage collection is requested, and then it throws away objects until some criterion is reached. Can you check against the source, please?
    (I don't have any objects pinned.)
    Dan
    Patrick Linskey wrote:
    Dan,
    There isn't anything that I can tell you based on the source that isn't
    just a reiteration of something in the JavaDoc for the relevant classes.
    I'd suggest generating some output that shows you whenever something is
    added to / removed from the DataCache, or whenever a mutation operation
    is performed on the DataCache's CacheMap.
    Regarding garbage collection, I doubt that that's the issue, since the
    CacheMap maintains hard references to all objects unless the cache's size
    is greater than the cache size + the pinned objects count.
    -Patrick
    On Mon, 13 Jan 2003 19:59:25 -0500, Dan Finkelstein wrote:
    Thanks, Patrick. Since you have access to the source code, would you mind perusing the pertinent data caching routines and finding out under what circumstances, other than a full cache, objects will be evicted? On my system, the cache size seems to go back and forth between around 1600 and 2000 again and again in a slow sort of cycle. My cache is set for 2500 and I would really like to know what would cause it to go down. For example, does garbage collection play a role?
    Dan
    Patrick Linskey wrote:
    Dan,
    Based on the information below, it's hard to say for sure why the cache
    would not reach its full size.
    For your second point, we don't do any work to calculate the size of
    the objects put into our cache. You should be able to determine this on
    your own by looking at the objects that you're putting into the cache.
    However, bear in mind that performing such a calculation will have a
    negative impact on the cache performance.
    It may be more beneficial to do some back-of-the-envelope calculations
    by estimating the average size of the variable-length fields in the
    cached objects.
    -Patrick
    On Mon, 13 Jan 2003 17:13:45 -0500, Dan Finkelstein wrote:
    Marc,
    I implemented an extension to LocalCache as you described. Right now,
    I've only used it to display the current number of objects in the
    cache.
    I assumed the method in CacheMap to use is size(), although it
    appeared undocumented. Anyway, what I've noticed is that the number
    of items in the cache can suddenly diminish (say from around 1950 to
    1600). My cache size is set to 2500. Could you explain under what
    circumstances objects are removed from the cache prior to it reaching
    its full capacity?
    Second, since the cache is competing for memory with the JVM, I'd like
    to have a gauge on how much memory is consumed by the objects in the
    cache. How do I retrieve this information? (Patrick has referred me
    to you on this one!)
    Dan
    Marc Prud'hommeaux wrote:
    Dan-
    Those are good ideas for improvements: I've made an enhancement
    request for this at:
    http://bugzilla.solarmetric.com/show_bug.cgi?id=527
    In the meantime, it wouldn't be too difficult to subclass the cache
    implementation to use a decorating CacheMap that tracks statistics, a
    la:
    public class TrackingCache extends LocalCache {
        protected CacheMap newCacheMap() {
            final CacheMap orig = super.newCacheMap();
            return new CacheMap() {
                // pass-through methods to "orig" that track stats
            };
        }
    }
    In article <[email protected]>, Dan Finkelstein
    wrote:
    Let me give you an idea.
    clearStatistics()
         -- clears statistics from cache
    numberOfAccesses()
         -- total number of requests for an object
    numberFromCache()
         -- total number of times a request was satisfied from the cache
    From this, I can calculate the hit percentage, or the effectiveness
    of the cache over a period of time:
    double hitPercentage = 0;
    if (accesses > 0)
        hitPercentage = 100 * (double) accessesFromCache / accesses;
    Does this make sense?
    Dan
    Patrick Linskey wrote:
    Dan,
    I'm not quite sure what you're looking for aside from the method in
    LocalCache to retrieve its CacheMap. Given this information, you
    have access to quite a bit of underlying cache data.
    -Patrick
    Dan Finkelstein wrote:
    Patrick,
    May I suggest you add an enhancement request for exposing the
    underlying properties of the data cache? This appears to be a
    feature that would be useful to a great many.
    I don't really care to spend a day or two exposing this data -- it
    really seems to be in Kodo's domain, but if it were available, I
    could easily display it (or log it) and see how that component is
    functioning.
    Dan
    Patrick Linskey wrote:
    In 2.3.x, you can extend LocalCache to get access to its
    underlying cache implementation (a CacheMap). In 2.4, this
    underlying implementantion is available via a get method.
    To collect instrumentation details, you can extend the appropriate
    cache and intercept the various get etc. calls.
    -Patrick
    Dan Finkelstein wrote:
    > [original message quoted at the top of this thread]
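    Combining Marc's decorator idea with the counters Dan proposed, a small self-contained sketch of the requested statistics (illustrative only: the class name is made up, and it wraps a plain java.util.Map rather than Kodo's CacheMap):
    import java.util.Map;
    public class HitRatioMap {
        private final Map delegate;
        private long accesses; // total number of requests for an object
        private long hits;     // requests satisfied from the cache
        public HitRatioMap(Map delegate) {
            this.delegate = delegate;
        }
        public Object get(Object key) {
            accesses++;
            Object value = delegate.get(key);
            if (value != null)
                hits++;
            return value;
        }
        public void put(Object key, Object value) {
            delegate.put(key, value);
        }
        public void clearStatistics() {
            accesses = 0;
            hits = 0;
        }
        // Dan's hit percentage: cache effectiveness over a period of time.
        public double hitPercentage() {
            return accesses == 0 ? 0 : 100.0 * hits / accesses;
        }
    }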

  • Error in Starting Oracle BAM Active Data Cache

    I am not able to start "Oracle BAM Active Data Cache" on my machine.
    The other two components "Oracle BAM Event Engine" and "Oracle BAM Report Cache" are starting properly.
    When I see the event log file of my Computer I could see the details as below:
    Event Type: Error
    Event Source: Oracle BAM Active Data Cache
    Event Category: None
    Event ID: 0
    Date: 2/7/2007
    Time: 3:51:25 PM
    User: N/A
    Computer: CHNANDA-WXP
    Description:
    ActiveDataCache: The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> Oracle.DataAccess.Client.OracleException ORA-12541: TNS:no listener at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure)
    at Oracle.DataAccess.Client.OracleConnection.Open()
    at Oracle.DataAccess.Client.OracleConnection.Open()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    Could anyone pls help me?
    Thanks and Regards,
    Chinmaya Nanda

    Hi Chinmaya - can you tell us your company name and project? Your problem is very simple: BAM ADC is not able to reach the Oracle DB (hence ORA-12541: TNS:no listener). From your DOS prompt, try tnsping <yrDB> [the default is oraclebam or orcl]. Also see the FAQ pages; there is a requirement on the DOS prompt setting, with <clientforBAM> as the 1st parameter.

  • Unable to change stock posting date at usage decision while inspecting HUs

    If we were using materials without WM it would be simple: there's a button on the stock posting screen with which we can change the document date and posting date. But we're using WM and the screen is slightly different: the button I'm referring to is gone!
    So, how do I change the posting date when posting stock in QA11 when we're using handling units? Some end-user told me that in version 4.7 this was possible. I don't think so... unless something was customized in the WM IMG, or maybe they were using a USER EXIT to bring up a pop-up window for this (I'm starting to believe that this was implemented). This is the first time I've worked with HUs, so I don't know how to manage this.
    Anyone?
    Seba
    Edited by: Sebastian Sniezyk on Apr 3, 2009 10:16 AM

    I solved it in this topic: Changing posting date at usage decision for handle units. How?

  • Web Application - Data caching of enterprise data

    Sorry in advance if this is off-topic but I can't find anywhere else to post this type of question.
    I am looking for information/suggestions such as books, technology or design methodology for my enterprise web applications. These sites are currently up and functional using only JSP, servlets and regular Java classes stored in a web application session to provide data caching and access. We are using Weblogic Server 6.1 running on an AIX Unix system at this time. I'm not sure that this is the best design architecture as our web sessions are getting too large but I can't think of any other Java technology to use and I need some help. Here's an overview of our environment and our needs.
    Our core data is held in a mainframe based IMS system. Some DB2 is also used. Access to this data is through IMS COBOL transactions which we can execute with IMS Connect. We also use some JDBC to get to the DB2 tables directly where available.
    Some overall application data is cached when the web application is deployed. We use singleton classes which are created and refreshed at deployment and they then refresh themselves from the sources every 24 hours.
    Each time a user logs in, we execute several IMS transactions and JDBC calls to cache user-specific data in regular Java classes, which are then simply placed in the user's web session, where we access them from JSPs, servlets and other Java classes. The fields in these Java classes range from any type of primitive field to TreeMaps of other Java classes. As the data is cached, it is sorted, and other fields are calculated and stored in these classes. As the user progresses through the system, we may have to make several other IMS transaction and JDBC calls to collect other types of data. All of this is then also added to the user's session. Most of this is inquiry. We do allow transactions, but those are built from user input and data already cached, and then we just execute the IMS transactions with that input.
    As our application has grown, these Java classes have gotten larger and larger, and since they are simply stored in server memory in the web sessions, the sessions are also getting huge. I'm concerned that this is not the best way for this application to be architected. Is there something else we should be doing? I simply don't understand how Entity Java Beans could be used, but then again I don't know much about them. I wouldn't think that caching the data to a local database and accessing it from there would be any more efficient; it would probably just slow down the system with all the I/O.
    Any help or direction would be greatly appreciated.

    The best book you can buy is 'Professional Java Server Programming, j2ee edition' by Wrox. It is by far the best reference I've used. Another quick reference consideration might be the j2ee book provided by codenotes... its quick and to the point.

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Google and found that we need to add something in Essbase.cfg file like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually in our Server Config file(essbase.cfg) we dont have below data  added.
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current setup allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to standard growth or to a recent change that has had an unexpectedly significant impact.
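    As a worked example of the sizing step (numbers assumed for illustration, not taken from this system): with a 24 KB block size and CALCLOCKBLOCKHIGH set to 500, the data cache must hold at least 500 x 24 KB = 12,000 KB (about 12 MB) just for the lockable blocks, on top of whatever else the calculation needs.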

  • Xi Data Cache Refresh Problem

    Hello,
    We have the following problem:
    We send an XML request with the HTTP adapter to XI and get data from R/3; this works fine.
    But when we send a second request, we get an error.
    The error occurs because, in the Java runtime, XI writes wrong values into the message.
    When we do an XI data cache refresh for INTEGRATION_SERVER and INTEGRATION_RUNTIME, we can send a message and everything works fine again.
    Any idea why XI caches data from one request and sets those values in the message of a request that is sent later?
    Thanks,
    Robin

    Hi,
    I think both are the same, because in SXI_ADM we also define our IS, so I think it may be the same...
    If I am wrong, please share your knowledge in this thread for more clarification.
    Regards,
    Amit
