WrapperNamedCache is a NamedCache?

Hi All,
When I create a WrapperNamedCache I am not able to see the newly created cache among the JMX MBeans in JConsole. Is the WrapperNamedCache a fully network-aware cache or just a local cache? Does it sit in the Coherence cluster?
Regards
S

Hi JK,
Thanks for your reply. Is there any API class that works like a NamedCache? I want to bundle all my domain's temporary collections into a NamedCache, put it on the cache servers/storage nodes, and later take these collections from the cache server and work with them like a NamedCache.
The problem is we have too many temporary collections; creating a NamedCache for each temporary collection is very expensive, and it could degrade the performance of the management node and other nodes as well.
I am trying to create a Custom CacheFactory to manage the temporary collections.
Regards
S.

Similar Messages

  • Coherence entries do not expire

    Hello,
I have a replicated scheme and set the expiration time (30 minutes) in the put method of NamedCache. For the first hour and a half the behavior was as expected; from the logs I could see the entries expired after 30 minutes. But after that they seem not to expire. My problem is that some old data comes back from the cache, even though I could see the new data was stored. So I'm afraid the old data remains somewhere, maybe on some nodes, and is probably replicated back to all nodes.
I'm not sure what happens, but I only found this post: Problem with Coherence replicated cache and put operation with cMillis para, where it says that a replicated cache does not fully support per-entry expiration. Has anyone heard of or experienced something similar?
And do you know how I can remove the entries from the cache?

    Hi,
Given that the reply in this thread Problem with Coherence replicated cache and put operation with cMillis para was from Jason, who works on the Coherence engineering team, and was made only a few months ago, I would be inclined to believe him when he says per-entry expiry is not supported in replicated caches.
An alternative to a replicated cache is a Continuous Query Cache (CQC) backed by a distributed cache. This behaves like a replicated cache but gets around a lot of the limitations of replicated caches, such as per-entry expiry.
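(For context, a CQC is just a live local view created by wrapping a distributed cache with a filter; a minimal sketch, with a made-up cache name:)
NamedCache underlying = CacheFactory.getCache("dist-orders");
NamedCache cqc = new ContinuousQueryCache(underlying, AlwaysFilter.INSTANCE);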
The easiest way to build a replicated-style cache from a CQC is a class-scheme and a custom wrapper class, like this.
    Your cache configuration would look something like this...
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
                  xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
        <defaults>
            <serializer>pof</serializer>
        </defaults>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>Replicated*</cache-name>
                <scheme-name>cqc-scheme</scheme-name>
                <init-params>
                    <init-param>
                        <param-name>underlying-cache</param-name>
                        <param-value>ReplicatedUnderlying*</param-value>
                    </init-param>
                </init-params>
            </cache-mapping>
            <cache-mapping>
                <cache-name>ReplicatedUnderlying*</cache-name>
                <scheme-name>replicated-underlying-scheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <class-scheme>
                <scheme-name>cqc-scheme</scheme-name>
                <class-name>com.thegridman.coherence.CQCReplicatedCache</class-name>
                <init-params>
                    <init-param>
                        <param-type>{cache-ref}</param-type>
                        <param-value>{underlying-cache}</param-value>
                    </init-param>
                    <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>{cache-name}</param-value>
                    </init-param>
                </init-params>
            </class-scheme>
            <distributed-scheme>
                <scheme-name>replicated-underlying-scheme</scheme-name>
                <service-name>ReplicatedUnderlyingService</service-name>
                <backing-map-scheme>
                    <local-scheme/>
                </backing-map-scheme>
            </distributed-scheme>
        </caching-schemes>
</cache-config>
Any cache with a name prefixed by "Replicated", e.g. ReplicatedTest, maps to the cqc-scheme, which is our custom class scheme. This also creates another distributed cache prefixed with "ReplicatedUnderlying"; e.g. for the ReplicatedTest cache we would also get the ReplicatedUnderlyingTest cache.
Now, because the data goes into the distributed scheme, we can do all the normal things with this cache that we can do with any distributed cache: we can add cache stores, listeners work properly, we know where entry processors will go, etc...
    The com.thegridman.coherence.CQCReplicatedCache is our custom NamedCache implementation that is basically a CQC wrapper around the underlying cache.
    The simplest form of this class is...
    package com.thegridman.coherence;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    import com.tangosol.util.filter.AlwaysFilter;
public class CQCReplicatedCache extends WrapperNamedCache {
    public CQCReplicatedCache(NamedCache underlying, String cacheName) {
        super(new ContinuousQueryCache(underlying, AlwaysFilter.INSTANCE), cacheName);
    }
}
Now in your application code you can still get caches just like normal; your code has no idea that the cache is really a wrapper around a CQC. So you can just do normal stuff like this in your code...
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1");This though still does not quite support per-entry expiry reliably. The way that expiry works in Coherence is that entries are only expired when another action happens on a cache, such as a get, put, size, entrySet and so on. The problem with the code above is that say you do this...
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1", 5000);
    Thread.sleep(6000);
    Object value = cache.get("Key-1");...you would expect to get back null, but you will still get back "Value-1". This is because the value will be from the CQC and not hit the underlying cache. To make everything work for eviction then for certain CQC operations we need to poke the underlying cache in our wrapper, so we change the CQCReplicatedCache class like this
package com.thegridman.coherence;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.ContinuousQueryCache;
import com.tangosol.net.cache.WrapperNamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.InvocableMap.EntryProcessor;
import com.tangosol.util.filter.AlwaysFilter;
import java.util.Collection;
import java.util.Comparator;
import java.util.Map;
import java.util.Set;
public class CQCReplicatedCache extends WrapperNamedCache {
    private final NamedCache underlying;

    public CQCReplicatedCache(NamedCache underlying, String cacheName) {
        super(new ContinuousQueryCache(underlying, AlwaysFilter.INSTANCE), cacheName);
        this.underlying = underlying;
    }

    public NamedCache getUnderlying() {
        return underlying;
    }

    @Override
    public Set entrySet(Filter filter) {
        underlying.size();
        return super.entrySet(filter);
    }

    @Override
    public Set entrySet(Filter filter, Comparator comparator) {
        underlying.size();
        return super.entrySet(filter, comparator);
    }

    @Override
    public Map getAll(Collection colKeys) {
        underlying.size();
        return super.getAll(colKeys);
    }

    @Override
    public Object invoke(Object oKey, EntryProcessor agent) {
        underlying.size();
        return super.invoke(oKey, agent);
    }

    @Override
    public Map invokeAll(Collection collKeys, EntryProcessor agent) {
        underlying.size();
        return super.invokeAll(collKeys, agent);
    }

    @Override
    public Map invokeAll(Filter filter, EntryProcessor agent) {
        underlying.size();
        return super.invokeAll(filter, agent);
    }

    @Override
    public Set keySet(Filter filter) {
        underlying.size();
        return super.keySet(filter);
    }

    @Override
    public boolean containsValue(Object oValue) {
        underlying.size();
        return super.containsValue(oValue);
    }

    @Override
    public Object get(Object oKey) {
        underlying.size();
        return super.get(oKey);
    }

    @Override
    public boolean containsKey(Object oKey) {
        underlying.size();
        return super.containsKey(oKey);
    }

    @Override
    public boolean isEmpty() {
        underlying.size();
        return super.isEmpty();
    }

    @Override
    public int size() {
        return underlying.size();
    }
}
We have basically poked the underlying cache by calling size() on it before any operation that would not normally hit the underlying cache, which causes eviction to trigger. I think I have covered all the relevant methods above, but it should be obvious how to override any others I have missed.
    If you now run this code
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1", 5000);
    Thread.sleep(6000);
    Object value = cache.get("Key-1");it should work as expected.
    JK

  • Purpose and ambition level with WrapperNamedCache class?

    I recently found the WrapperNamedCache in the Coherence API and would like to know a bit about its intended purpose and its ambition level (i.e. what kind of cache is it compatible with and how complete is the compatibility)? Is this a class that is intended for end-user use or is it mainly intended for internal use in Coherence?
    I see one possible use of this class as a way to "mock" a coherence cache for testing purposes - is this one of the intended uses?
    Does it support indexes?
    Does it support aggregation and invocables?
    etc
    /Magnus

    Hi Magnus,
    You are correct. WrapperNamedCache is used both internally and for mock testing.
    WrapperNamedCache can wrap any Map instance. It supports all of the NamedCache APIs, but in some cases does nothing if the underlying Map is not an instance of NamedCache. So, for example, it supports addIndex/removeIndex but only if the underlying Map is a QueryMap.
    I think that WrapperNamedCache is extended by some end users to create a custom NamedCache.
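For illustration, a minimal sketch of the mock-testing use, wrapping a plain HashMap so the "cache" never touches the cluster (the class and cache name below are made up):
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.WrapperNamedCache;
import java.util.HashMap;
public class WrapperNamedCacheMockExample {
    public static void main(String[] args) {
        // Wrap an in-process HashMap; no cluster services are started.
        NamedCache cache = new WrapperNamedCache(new HashMap(), "mock-cache");
        cache.put("Key-1", "Value-1");
        System.out.println(cache.get("Key-1")); // prints Value-1
    }
}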
--Tom

  • How can I access the namedCache

    Hi,
As you know, we can access the Oracle Coherence cache within CEP by using the setMap procedure. This procedure only gives us a Map object, and we can load the Coherence cache through this Map object.
But we also need the memberId and memberName of the Coherence server for each node. How can I get the Coherence member information (memberId, memberName) of the member I am currently using?
I am using the piece of code below to get the member id, but we get the error shown at the bottom of the page. The piece of code is:
public void setMap(Map map)
{
    galataCache = map;
    NamedCache cache = CacheFactory.getCache("galataCustomerCache");
    cache = (NamedCache) map;
    CacheService service = cache.getCacheService();
    Cluster cluster = service.getCluster();
    Member memberThis = cluster.getLocalMember();
    Global.LOG.info("My Member id is:" + memberThis.getId());
    Global.LOG.info("My Member machine id is:" + memberThis.getMachineId());
    Global.LOG.info("My Member name is:" + memberThis.getMemberName());
}
And the Oracle CEP server gives this error:
    ####<Jan 11, 2011 10:17:59 PM EET> <Info> <org.springframework.beans.factory.support.DefaultListableBeanFactory> <> <myServer> <[ACTIVE] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'> <> <> <> <1294777079117> <BEA-000000> <Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@21a6abab: defining beans [CacheBeanFactoryPostProcessor,ServiceDependencyBeanFactoryPostProcessor,ContextLifecycleControlBean,com.bea.wlevs.spring.BeanPostProcessorServiceProcessor,com.bea.wlevs.spring.EventTypeRepositoryFactoryBean#0,com.bea.wlevs.spring.CachingSystemFactoryBean#coherence_provider#0,galataCache,customerCacheLoader,wlevs_stage_proxy_forgalataCustomerCache,wlevs_stage_proxy_forgalataCustomerCachedefaultcachestage,galataCustomerCache,com.bea.wlevs.spring.support.ServiceInjectionBeanPostProcessor,org.springframework.osgi.extensions.annotation.ServiceReferenceInjectionBeanPostProcessor,com.bea.wlevs.spring.ApplicationIdentityAwareProcessor,com.bea.wlevs.spring.MBeanRegistrationBeanPostProcessor,com.bea.wlevs.spring.WorkManagerPostProcessor,com.bea.wlevs.spring.CacheBeanPostProcessor,com.bea.wlevs.spring.ActivationBeanPostProcessor,com.bea.wlevs.spring.BlockingSenderBeanPostProcessor,com.bea.wlevs.spring.RunnableBeanPostProcessor,com.bea.wlevs.spring.DisposableBeanPostProcessor,com.bea.wlevs.spring.ResourceInjectionBeanPostProcessor]; root of factory hierarchy>
    ####<Jan 11, 2011 10:17:59 PM EET> <Info> <OSGiLogReaderAdapter> <> <myServer> <Log Event Dispatcher> <> <> <> <1294777079119> <BEA-000000> <Bundle[214] GalataCoherence4, Message (ServiceEvent UNREGISTERING null), Exception (null), Time (1294777079118)>
    ####<Jan 11, 2011 10:17:59 PM EET> <Error> <Deployment> <> <myServer> <[ACTIVE] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'> <> <> <> <1294777079121> <BEA-2045010> <The application context "GalataCoherence4" could not be started: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'customerCacheLoader' defined in URL [bundleentry://214.fwk539854707/META-INF/spring/CoherenceCustomer.context.xml]: Error setting property values; nested exception is org.springframework.beans.PropertyBatchUpdateException; nested PropertyAccessExceptions (1) are:
    PropertyAccessException 1: org.springframework.beans.MethodInvocationException: Property 'map' threw exception; nested exception is (Wrapped: Failed to load the factory) java.lang.reflect.InvocationTargetException
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'customerCacheLoader' defined in URL [bundleentry://214.fwk539854707/META-INF/spring/CoherenceCustomer.context.xml]: Error setting property values; nested exception is org.springframework.beans.PropertyBatchUpdateException; nested PropertyAccessExceptions (1) are:
    PropertyAccessException 1: org.springframework.beans.MethodInvocationException: Property 'map' threw exception; nested exception is (Wrapped: Failed to load the factory) java.lang.reflect.InvocationTargetException
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1279)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
         at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
         at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
         at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
         at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429)
         at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:729)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355)
         at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:320)
         at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:139)
         at org.springframework.scheduling.commonj.DelegatingWork.run(DelegatingWork.java:62)
         at weblogic.work.commonj.CommonjWorkManagerImpl$WorkWithListener.run(CommonjWorkManagerImpl.java:196)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused By: org.springframework.beans.PropertyBatchUpdateException; nested PropertyAccessExceptions (1) are:
    PropertyAccessException 1: org.springframework.beans.MethodInvocationException: Property 'map' threw exception; nested exception is (Wrapped: Failed to load the factory) java.lang.reflect.InvocationTargetException
         at org.springframework.beans.AbstractPropertyAccessor.setPropertyValues(AbstractPropertyAccessor.java:104)
         at org.springframework.beans.AbstractPropertyAccessor.setPropertyValues(AbstractPropertyAccessor.java:59)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1276)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1011)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
         at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
         at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
         at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
         at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
         at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429)
         at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:729)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355)
         at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85)
         at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:320)
         at org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:139)
         at org.springframework.scheduling.commonj.DelegatingWork.run(DelegatingWork.java:62)
         at weblogic.work.commonj.CommonjWorkManagerImpl$WorkWithListener.run(CommonjWorkManagerImpl.java:196)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    >
    ####<Jan 11, 2011 10:17:59 PM EET> <Info> <OSGiLogReaderAdapter> <> <myServer> <Log Event Dispatcher> <> <> <> <1294777079123> <BEA-000000> <Bundle[214] GalataCoherence4, Message (BundleEvent STOPPED), Exception (null), Time (1294777079123)>
    ####<Jan 11, 2011 10:17:59 PM EET> <Info> <OSGiLogReaderAdapter> <> <myServer> <Log Event Dispatcher> <> <> <> <1294777079124> <BEA-000000> <Bundle[214] GalataCoherence4, Message (BundleEvent UNRESOLVED), Exception (null), Time (1294777079124)>
    ####<Jan 11, 2011 10:17:59 PM EET> <Info> <OSGiLogReaderAdapter> <> <myServer> <Log Event Dispatcher> <> <> <> <1294777079124> <BEA-000000> <Bundle[214] GalataCoherence4, Message (BundleEvent UNINSTALLED), Exception (null), Time (1294777079125)>
    ####<Jan 11, 2011 10:17:59 PM EET> <Notice> <Deployment> <> <myServer> <[ACTIVE] ExecuteThread: '11' for queue: 'weblogic.kernel.Default (self-tuning)'> <> <> <> <1294777079124> <BEA-2045001> <The application bundle "GalataCoherence4" was undeployed successfully>
    ####<Jan 11, 2011 10:17:59 PM EET> <Warning> <com.bea.wlevs.cluster.gbcast.support.AbstractBroadcastGroup> <> <myServer> <Invocation:com.bea.wlevs.cluster.coherence.BroadcastTxInvocationService> <> <> <> <1294777079124> <BEA-000000> <AtomicGroupBroadcastListener.onCommit [[email protected]54] threw an exception
    java.lang.IllegalStateException: Deployment transaction could not be completed, the local deployment failed
         at com.bea.wlevs.deployment.cluster.ClusterDeploymentListener.onPrepare(ClusterDeploymentListener.java:63)
         at com.bea.wlevs.cluster.gbcast.support.AbstractBroadcastGroup.prepareTxInternal(AbstractBroadcastGroup.java:302)
         at com.bea.wlevs.cluster.coherence.BroadcastGroupImpl$BroadcastTransactionImpl.run(BroadcastGroupImpl.java:407)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService.onInvocationRequest(InvocationService.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.onReceived(InvocationService.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    >

The easiest way (in my opinion) would be to implement the com.bea.wlevs.ede.api.cluster.GroupMembershipListener interface. Any time there is a membership change (which includes the initial startup of a CEP application on a clustered server), the onMembershipChange() method is invoked. Example:
@Override
public void onMembershipChange(Server localIdentity, Configuration groupConfiguration) {
    int serverID = localIdentity.getIdentity();
    int coordID = groupConfiguration.getCoordinator().getIdentity();
}
In this example, "serverID" is the local server's ID, and "coordID" is the ID of the primary server for the cache cluster. This method is invoked at startup and again each time there is a cluster membership change.
    You would need to import the following:
    com.bea.wlevs.ede.api.cluster.GroupMembershipListener
    com.bea.wlevs.ede.api.cluster.Configuration
    com.bea.wlevs.ede.api.cluster.Server
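Putting it together, a minimal complete sketch (the class name MembershipLogger is made up; the calls are those from the snippet above):
import com.bea.wlevs.ede.api.cluster.Configuration;
import com.bea.wlevs.ede.api.cluster.GroupMembershipListener;
import com.bea.wlevs.ede.api.cluster.Server;
public class MembershipLogger implements GroupMembershipListener {
    @Override
    public void onMembershipChange(Server localIdentity, Configuration groupConfiguration) {
        int serverID = localIdentity.getIdentity();                      // this server's ID
        int coordID = groupConfiguration.getCoordinator().getIdentity(); // primary server's ID
        System.out.println("server=" + serverID + ", coordinator=" + coordID);
    }
}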
    Hope this helps. See http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e14303/com/bea/wlevs/ede/api/cluster/GroupMembershipListener.html
    --Jim Leary

  • Error exporting NamedCache to file via PofWriter

    I'd like to dump some NamedCaches with a large number of entries (> 200,000) to files, and then be able to re-load the NamedCaches from those files. I realize that there may be a feature in an upcoming version of Coherence to do this sort of thing, but I'm using 3.5 for now and need to roll my own for the time being.
    I already have working code that takes the entire NamedCache and writes it as one single Map to a file using the writeMap method of a PofBufferWriter (a variation of the code shown below).
The problem is that attempting to load a POF file containing one single Map with 200,000+ entries usually causes an out-of-memory error. This is because it is not possible to know ahead of time the number of elements in the Map in the POF file, which means it isn't possible to set the initial capacity of the Map that the file's contents are read into via the PofReader.readMap method. The consequence is that the Map's growth algorithm kicks in, doubling the size of the map over and over, and pretty soon the map is giant and there is insufficient heap. It's not that the objects in the file wouldn't fit in memory; the problem is that the Map growth algorithm creates a Map too big for the heap.
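(For reference, the standard Java idiom for pre-sizing a HashMap when the entry count n is known is:)
int n = 200000;                      // known entry count
float loadFactor = 0.75f;
Map map = new HashMap((int) Math.ceil(n / loadFactor), loadFactor);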
    So, I've been thinking about a variation on the theme that avoids the Map growth problem. A variety of potential solutions come to mind, and I've tried many of them.
    First, I could write an int in position 0 of the PofWriter stream to say how many elements the Map in position 1 contains. That would allow me to compute an initial capacity for the Map that the file is loaded into (via PofReader) that avoids the Map growth algorithm problems.
    Second, I can iterate the elements of the NamedCache and build a number of smaller sub-Maps of a known size, and then write those sub-Maps in consecutive positions of the PofWriter stream, probably with an int in position 0 that says how many Maps follow.
A third alternative is to iterate the elements of the NamedCache and write each key/value pair to the Nth position in the PofWriter stream (again, with an int in position zero saying how many key/value pairs follow).
    The code shown below is basically strategy #3, but whether I try strategy #1 or #2 or #3, I always get the following exception upon the second invocation of a "write" method on the PofWriter:
    <pre>
    java.lang.IllegalArgumentException: not in a complex type
    at com.tangosol.io.pof.PofBufferWriter.beginProperty(PofBufferWriter.java:1940)
    at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1426)
    at com.mycompany.util.coherence.NamedCacheExporter.exportCache(NamedCacheExporter.java:182)
    </pre>
    Just to be clear, if I only write one entry to the PofWriter (e.g., the entire NamedCache), it does not throw this exception.
    I am confident that all of the objects being written implement PortableObject (correctly) and are registered in the pof config file. The NamedCacheEntry object used below is a simple key/value wrapper that implements PortableObject and has only two Object-valued instance variables named "key" and "value".
    <pre>
public static void exportCacheNamedTo(String cacheName, File file)
throws Exception
{
    WrapperBufferOutput wrappedBufferOutput = null;
    try
    {
        NamedCache cache = CacheFactory.getCache( cacheName );
        FileOutputStream fileOutputStream = new FileOutputStream( file );
        BufferedOutputStream bufferedOutputStream =
            new BufferedOutputStream( fileOutputStream, 1024 * 1024 );
        DataOutputStream dataOutputStream = new DataOutputStream( bufferedOutputStream );
        wrappedBufferOutput = new WrapperBufferOutput( dataOutputStream );
        ConfigurablePofContext pofContext =
            (ConfigurablePofContext) cache.getCacheService().getSerializer();
        PofWriter pofWriter = new PofBufferWriter( wrappedBufferOutput, pofContext );
        //this works fine
        //pofWriter.writeMap( 0, cache.getAll( cache.keySet() ) );
        //this fails with the above error
        //pofWriter.writeInt( 0, cache.size() );
        //pofWriter.writeMap( 1, cache.getAll( cache.keySet() ) );
        //this fails with the above error
        int pofIndex = 0;
        //index 0 contains an int that says how many more POF indexes to expect
        pofWriter.writeInt( pofIndex++, cache.size() );
        //index N contains a NamedCacheEntry (a simple key/value wrapper)
        for ( Object o : cache.entrySet() )
        {
            NamedCacheEntry nce = new NamedCacheEntry( (Entry) o );
            pofWriter.writeObject( pofIndex++, nce );
        }
    }
    finally
    {
        if ( wrappedBufferOutput != null )
        {
            wrappedBufferOutput.close();
        }
    }
}
    </pre>
    Anybody see what I'm doing wrong or have any idea what the cause of the exception above is?
    Thanks in advance

    Hi DM
Can't you just use the ConfigurablePofContext you already have to serialize the entries straight to the WrapperBufferOutput?
public static void exportCacheNamedTo(String cacheName, File file)
throws Exception
{
    WrapperBufferOutput wrappedBufferOutput = null;
    try
    {
        NamedCache cache = CacheFactory.getCache( cacheName );
        FileOutputStream fileOutputStream = new FileOutputStream( file );
        BufferedOutputStream bufferedOutputStream =
            new BufferedOutputStream( fileOutputStream, 1024 * 1024 );
        DataOutputStream dataOutputStream = new DataOutputStream( bufferedOutputStream );
        wrappedBufferOutput = new WrapperBufferOutput( dataOutputStream );
        ConfigurablePofContext pofContext =
            (ConfigurablePofContext) cache.getCacheService().getSerializer();
        for ( Object o : cache.entrySet() )
        {
            NamedCacheEntry nce = new NamedCacheEntry( (Map.Entry) o );
            pofContext.serialize(wrappedBufferOutput, nce);
        }
    }
    finally
    {
        if ( wrappedBufferOutput != null )
        {
            wrappedBufferOutput.close();
        }
    }
}
And then read them back like this:
public static void importCacheNamedTo(String cacheName, File file)
throws Exception
{
    WrapperBufferInput wrappedBufferInput = null;
    try
    {
        NamedCache cache = CacheFactory.getCache( cacheName );
        FileInputStream fileInputStream = new FileInputStream( file );
        BufferedInputStream bufferedInputStream =
            new BufferedInputStream( fileInputStream, 1024 * 1024 );
        DataInputStream dataInputStream = new DataInputStream( bufferedInputStream );
        wrappedBufferInput = new WrapperBufferInput( dataInputStream );
        ConfigurablePofContext pofContext =
            (ConfigurablePofContext) cache.getCacheService().getSerializer();
        while (wrappedBufferInput.available() > 0)
        {
            NamedCacheEntry nce = (NamedCacheEntry) pofContext.deserialize(wrappedBufferInput);
            // Add the entry back into the cache
            cache.put(nce.????, nce.????);
        }
    }
    finally
    {
        if ( wrappedBufferInput != null )
        {
            wrappedBufferInput.close();
        }
    }
}
I have not tested this so I cannot guarantee it works, especially the wrappedBufferInput.available() > 0 condition in the while loop. I am not sure that is the best way to check for the end of the stream.
    JK

  • How to set the expiration time for a namedcache programmatically?

    I have a named cache with near cache configuration and no expiration specified in the cache-config. The backing map which is a distributed scheme has a size limited eviction policy but no expiration. I have an algorithm to determine the expiry delay of this named cache at run time. How can I set this delay programmatically on the named cache?
    Thanks
    Sairam

    Hi Sairam,
    You would need to get the backing map for the cache and then set the expiry on the backing map; assuming that the backing map supports expiry.
    For example:
    NamedCache cache = CacheFactory.getCache("test");
    CacheService service = cache.getCacheService();
    DefaultConfigurableCacheFactory.Manager bmm = (DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager();
    Map bm = bmm.getBackingMap("test");
    if (bm instanceof ConfigurableCacheMap)
    System.out.println("Setting expiry delay");
    ((ConfigurableCacheMap) bm).setExpiryDelay(100000);
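Note that the backing map only exists on the storage-enabled members that own the data, so code like this has to run on each cache server (for example via an invocation service), not on a storage-disabled client.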
    -John

  • Get data from NamedCache in an EntryProcessor

    Hi,
         I've got CacheA and CacheB, and i'm calling invokeAll((Filter) null, agent) on CacheA.
         In the agent I need to read all entries in CacheB that are related to the entries in CacheA.
And then, based on the data in CacheB, update the entries in CacheA. This could potentially involve updating every single object in CacheA and reading every object in CacheB.
Both caches are distributed, but the CacheB entries relating to CacheA entries should be located in the same partition, because the KeyAssociator on the key objects of both caches is the same. So I'm guessing there will be no network access and hence performance will be acceptable.
However I'm not sure what the best approach is. Should I just call entrySet() on CacheB from within the EntryProcessor and supply a filter that narrows the entrySet down to values that match the current Entry being processed? Or is there a more efficient or correct way of doing this?
Currently, values in CacheA don't know the keys of the values in CacheB, but values in CacheB have a value that matches the key of entries in CacheA. So I need to query the cache with a value filter.
         Cheers,
         Henric

    Hi Paul,
         Thanks for the reply.
How can I tell if the caches are managed by different cache services? Also, it is important that the entries in cache B are located in the same partition as the entries in cache A if they have the same association key.
If I increase the thread pool, in what scenarios would I risk deadlocks?
         i'll post some code to give it some more context.
private void correctDeposit()
{
    NamedCache cache = CacheFactory.getCache("AccountCache");
    // Create an agent that will sum the deposit for an account's Positions
    AbstractProcessor agent = new AbstractProcessor()
    {
        public Object process(InvocableMap.Entry entry)
        {
            Account acc = (Account) entry.getValue();
            // Get the PositionCache
            NamedCache posCache = CacheFactory.getCache("PositionCache");
            // Filter that will match positions to the account id
            Filter filter = new EqualsFilter(new ReflectionExtractor("getAccountId"), acc.getAccountId());
            // Aggregate the deposit - This method call is illegal
            // Blows up with poll() is not allowed on the service thread.
            BigDecimal totalDeposit = (BigDecimal) posCache.aggregate(filter, new BigDecimalSum("getDeposit"));
            acc.setDeposit(totalDeposit);
            entry.setValue(acc);
            return null;
        }
    };
    // run this agent on the account cache.
    cache.invokeAll((Filter) null, agent);
}

  • NamedCache remove() fail

I had an interesting experience today.
    I have a NamedCache backed by a CacheStore backed by an Oracle table.
    Called remove() and CacheStore.erase() was called, so the record was removed from the table.
    But the object remained in the NamedCache.
    When I changed the code to keySet().remove(), it worked as expected.
I suspect there is an issue with Coherence using an opaque binary representation of my key for equality comparison.
    My key object contains serialized fields which do not participate in equals() / hashCode(), but also do not vary.
    For the moment, I'm happy with the keySet().remove() solution. It's also a slight performance improvement.
    I enshrined this code in our NamedCacheUtil class as a static call.
    Leonard

    Leonard,
In order for Coherence to work correctly in all cases, equals and hashCode for the serialized form (Binary) and the de-serialized form (Object) need to be symmetric for keys.
    Patrick Peralta (from the Coherence Engineering team) has written an excellent blog post on how to implement keys here:
    http://blackbeanbag.net/wp/2010/06/06/coherence-key-howto/
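For illustration, a minimal sketch of a key class where every serialized field participates in equals() and hashCode(), so the Binary and Object forms compare consistently (the class and fields are made up):
public class OrderKey implements java.io.Serializable {
    private final String orderId;
    private final String region;
    public OrderKey(String orderId, String region) {
        this.orderId = orderId;
        this.region = region;
    }
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderKey)) return false;
        OrderKey that = (OrderKey) o;
        return orderId.equals(that.orderId) && region.equals(that.region);
    }
    @Override
    public int hashCode() {
        return 31 * orderId.hashCode() + region.hashCode();
    }
}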
    /Christer

  • Sending context information to namedcache.get

I have a distributed cache backed by a data store. When I invoke NamedCache.get(), apart from the key, I would like to pass additional context to the get method that is then passed on to the load method of the database store class. I want to achieve something like:
NamedCache.get(myKey, Object myContext),
resulting in load(myKey, myContext)
Is there a means to do this?
    Thank You

I am hoping the key->table relationship is a static mapping? If so, maybe you could instantiate the CacheLoader with knowledge of this mapping, and it could then determine the table to use for the load. For example:
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>com.hraja.coherence.cachestore.KRMCacheStore</class-name>
                                       <init-params>
                                            <!-- xml version -->
                                            <init-param>
                                                 <param-type>xml</param-type>
                                                 <param-value>
                                                      <krm-map>
                                                           <key-relational-mapping>
                                                                <key>key1</key>
                                                                <table>table1</table>
                                                           </key-relational-mapping>
                                                           <key-relational-mapping>
                                                                <key>key2</key>
                                                                <table>table2</table>
                                                           </key-relational-mapping>
                                                      </krm-map>
                                                 </param-value>
                                            </init-param>
                                            <!--
                                                 # delimited string version <init-param>
                                                 <param-type>string</param-type>
                                                 <param-value>key1=table1,key2=table2</param-value>
                                                 </init-param>
                                            -->
                                       </init-params>
                                  </class-scheme>
                         </cachestore-scheme>
The other alternative is to take the JPA Entity-annotation approach: decorate your class with this metadata and analyze it in your load method...
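For illustration, a rough sketch of how such a cache store might hold the delimited-string form of the mapping (untested; the constructor and the elided SQL are assumptions):
import com.tangosol.net.cache.CacheLoader;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
public class KRMCacheStore implements CacheLoader
{
    private final Map keyToTable = new HashMap();

    // Built from the string <init-param>, e.g. "key1=table1,key2=table2"
    public KRMCacheStore(String mapping)
    {
        for (String pair : mapping.split(","))
        {
            String[] kv = pair.split("=");
            keyToTable.put(kv[0], kv[1]);
        }
    }

    public Object load(Object oKey)
    {
        String table = (String) keyToTable.get(oKey);
        // ... query 'table' for oKey and build the value ...
        return null; // placeholder
    }

    public Map loadAll(Collection colKeys)
    {
        Map results = new HashMap();
        for (Object key : colKeys)
        {
            results.put(key, load(key));
        }
        return results;
    }
}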
--harvey

  • Why am I seeing "Restarting NamedCache" messages in my logs

    Periodically I see the following message in my log files. This message shows up at about the same time in different log files.
    INFO [Logger@9243969 3.5.3/465] (Log4j.CDB:3) - 2010-04-21 15:05:08.137/8422.071 Oracle Coherence GE 3.5.3/465 <Info> (thread=PlacementWorker:7, member=7): Restarting NamedCache: dist-order-cache
    INFO [Logger@9243969 3.5.3/465] (Log4j.CDB:3) - 2010-04-21 16:22:08.304/13042.238 Oracle Coherence GE 3.5.3/465 <Info> (thread=OrderWorker:7, member=7): Restarting NamedCache: dist-allocation-cache
What is the significance of these messages, and can this lead to data loss?

    Hi Jehangir,
Usually this message is an indication that the node has been "disconnected" from the cluster and is trying to reconnect. If you see this on storage-enabled nodes and more than one node departs the cluster at once, then you can indeed have data loss.
    Regards,
    Gene

  • Bad logic in NamedCache remove method?

    Why does the remove method in a NamedCache invoke the load method in the CacheStore in a distributed cache? Why force a database load call for the object if it doesn't exist in the cache only to immediately remove it?

    Hi 786216,
The reason for that comes down to the contract of the java.util.Map#remove(Object) method, which NamedCache inherits: it must return the "old" value. Basically, the net effect should be the same as:
oValue = map.get(oKey);
map.removeBlind(oKey); // pretend such a method exists
return oValue;
The good news is that the removeBlind() behavior does exist. It looks like:
map.keySet().remove(oKey);
and will not cause the store.load() operation to occur.
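In other words (a quick illustrative comparison; the cache name is made up):
NamedCache cache = CacheFactory.getCache("orders");
Object oOld = cache.remove(oKey);  // Map contract: may call CacheStore.load() to produce the old value
cache.keySet().remove(oKey);       // "blind" remove: no load(), returns only a boolean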
    Regards,
    Gene

  • Cost/benefits of release() on NamedCache

    Based on the javadoc I'm a little concerned about the cost of calling release() on NamedCache every time the cache reference is looked up and used.
    I'm running an app inside Weblogic 8.1 SP2 using the Coherence RAR. Every invocation of the cache will occur inside a transaction. I'm using the following as my basic pattern:
CacheAdapter adapter = getCacheAdapter();
try
{
    NamedCache map = adapter.getNamedCache("CacheName", getClass().getClassLoader());
    try
    {
        // Do cache stuff
    }
    finally
    {
        map.release();
    }
}
finally
{
    closeCacheAdapter(adapter);
}
    Note that methods like this may be called hundreds of times inside a single transaction. Moreover, in some circumstances, 10k+ objects may be added to the cache (on system startup).
    I guess my first question is, do I really need the map.release()? I'm always using the class loader of my cache wrapper, so I'd imagine that the container can't release that class loader in any case.
    Second, how can I estimate how much time the release() call is going to cost me?
    Any help would be appreciated. Thanks.
--Peter

    Gene,
    Thank you for your quick response. You've helped clarify the issue.
    I would like to request a clarification in the javadoc in a future version. I can't seem to find anywhere in the doc where it says that you shouldn't call release() on a NamedCache returned by a CacheAdapter.
    As far as the 10k+ goes, I'm not crazy about it either. But there is a legacy code base here, with legacy behavior. Essentially we're using the AppCycleListener interface to allow us to listen to startup events and preload the cache. This is effectively exclusive, but I believe that Weblogic treats it as if we're inside a transaction. I know that it did for our earlier revision, which used the server initializer interface.
    Thanks again for the help.
--Peter

  • WrapperNamedCache

    I'm having some trouble getting the WrapperNamedCache to work.
    When I try to use the cache from a client it throws the following exception:
    2009-04-17 14:05:10.736/6.157 Oracle Coherence GE 3.4.1/407 <Warning> (thread=DistributedCache, member=1): Application code running on "DistributedCache" service thread(s) should not call ensureCache as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
    2009-04-17 14:05:10.751/6.172 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:59)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.size(DistributedCache.CDB:13)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.isEmpty(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$KeySet.isEmpty(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCollection.isEmpty(ConverterCollections.java:483)
         at com.tangosol.util.AbstractKeySetBasedMap.isEmpty(AbstractKeySetBasedMap.java:48)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:25)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.collections.WrapperMap.put(WrapperMap.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.put(Grid.CDB:13)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$StorageIdRequest.onReceived(DistributedCache.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
at java.lang.Thread.run(Unknown Source)
Here's the configuration:
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>distributed-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>entitled-scheme</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!-- Wrapped caching scheme. -->
        <class-scheme>
          <scheme-name>entitled-scheme</scheme-name>
          <class-name>com.tangosol.net.cache.WrapperNamedCache</class-name>
          <init-params>
            <init-param>
              <param-type>{cache-ref}</param-type>
              <param-value>{cache-name}</param-value>
            </init-param>
            <init-param>
              <param-type>string</param-type>
              <param-value>{cache-name}</param-value>
            </init-param>
          </init-params>
        </class-scheme>
      </caching-schemes>
    </cache-config>

    Edited by: pards on Apr 17, 2009 11:10 AM

    The WrapperNamedCache cannot be used as a backing map; in fact, no cache can be used as a backing map.
    I assume that you tried this approach as an alternative to the WrapperNamedCache used in the proxy for security. Please see my reply to that thread:
    Re: Row-level security?
    --David
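    For reference, the supported use of WrapperNamedCache is in application code: it wraps an ordinary Map so that the map can be driven through the NamedCache API. It is purely local, is not clustered, and cannot back a distributed scheme. A minimal sketch (the cache name here is made up for illustration):

        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.WrapperNamedCache;

        import java.util.HashMap;
        import java.util.Map;

        public class WrapperExample
            {
            public static void main(String[] args)
                {
                // Wrap a plain local map; the wrapper is not network-aware and
                // will not show up as a cluster cache in JMX.
                Map map = new HashMap();
                NamedCache cache = new WrapperNamedCache(map, "temp-collection");

                cache.put("key", "value");
                System.out.println(cache.get("key")); // prints "value"
                }
            }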

  • Determining Cluster Members Using a NamedCache

    Is it possible to get the cluster members using a particular NamedCache?
    Is it possible to add a listener which is notified when members obtain or release (ensure/release) a particular NamedCache?
    Is there a concept of groups within a cluster membership? That is, some way of declaring what a particular node will be doing (say, a "nodata" member which holds no objects in its partitions and a "data" member which does, both sharing the same configuration file with a parameter to select the role), and of identifying it from the cluster info (get all cluster members of the "data" type)?
    --Tim

    > 1) It's possible to get all cluster members that are
    > using (or providing storage for) a given cache
    > service:
    Excellent. Just what I was looking for.
    > More generally, starting with Coherence 3.2, there
    > are a number of <a [link truncated in the original]
    This is what we need. I used storage-enabled as an example, but we want to have roles for cluster members, so this is the better solution. We're currently on 3.0, but I guess this is a good impetus to upgrade.
    --Tim
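    A minimal sketch of the APIs being discussed (the methods shown are the standard Coherence 3.x membership API; the cache name and the "data" role are made-up examples). Note that a MemberListener reports changes in the service's membership, which is the closest standard hook to members obtaining or releasing a given cache:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.CacheService;
        import com.tangosol.net.Member;
        import com.tangosol.net.MemberEvent;
        import com.tangosol.net.MemberListener;
        import com.tangosol.net.NamedCache;

        import java.util.Iterator;
        import java.util.Set;

        public class ServiceMembers
            {
            public static void main(String[] args)
                {
                NamedCache cache = CacheFactory.getCache("example"); // hypothetical name

                // 1) All members running (or providing storage for) this cache's service
                CacheService service = cache.getCacheService();
                Set members = service.getInfo().getServiceMembers();

                // 3) Since 3.2 a member can declare a role; filter on it to find "data" nodes
                for (Iterator iter = members.iterator(); iter.hasNext(); )
                    {
                    Member member = (Member) iter.next();
                    if ("data".equals(member.getRoleName()))
                        {
                        System.out.println("data member: " + member);
                        }
                    }

                // 2) Be notified as members join or leave the service
                service.addMemberListener(new MemberListener()
                    {
                    public void memberJoined(MemberEvent evt)  { System.out.println("joined: "  + evt.getMember()); }
                    public void memberLeaving(MemberEvent evt) { System.out.println("leaving: " + evt.getMember()); }
                    public void memberLeft(MemberEvent evt)    { System.out.println("left: "    + evt.getMember()); }
                    });
                }
            }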

  • Querying objects not in the NamedCache

    The wiki topic on querying (http://wiki.tangosol.com/display/COH32UG/Querying+the+Cache ) points out that a query will "apply only to currently cached data".
    This seems fairly logical, because it seems unreasonable to expect the cache to hold onto information that it has already evicted.
    If you were to design a DAO layer (following http://wiki.tangosol.com/display/COH32UG/Managing+an+Object+Model ) using the first of the following architectures:
    1. Direct cache access:
       App <---> NamedCache <---> CacheStore <---> DB
    2. Direct cache and DB-DAO access:
       App
        |
       CacheAwareDAO <---> CacheStore <---> DB
        |
       NamedCache
        |
       CacheStore
        |
       DB
    you would then have a situation where you would not be able to query evicted data.
    So by using the 2nd strategy I assume you would probably always want to bypass the cache for all queries other than by primary key, to ensure that you are always querying the entire persistent population.
    This seems a little coarse-grained and also reduces the utility of the Coherence cache (unless the bulk of your queries are by primary key).
    Can anybody tell me if my assumption is wrong and if there are any usage strategies that mitigate this aspect?
    Thx,
    Ben
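    One way to picture the second arrangement (everything below is a hypothetical sketch: the class names, the "orders" cache, and the DB-backed DAO are invented for illustration; only the NamedCache calls are standard Coherence API):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        import java.util.Collection;
        import java.util.Collections;

        // Hypothetical domain object and DB-backed DAO, stubbed for brevity.
        class Order { }

        class OrderDbDao
            {
            Collection findByCustomer(String customer)
                {
                return Collections.EMPTY_LIST; // would run a JDBC/ORM query here
                }
            }

        public class CacheAwareDao
            {
            private final NamedCache cache = CacheFactory.getCache("orders");
            private final OrderDbDao dbDao = new OrderDbDao();

            public Order findById(Long id)
                {
                // Key-based access goes through the cache; on a miss the
                // configured CacheStore read-through loads the row from the DB.
                return (Order) cache.get(id);
                }

            public Collection findByCustomer(String customer)
                {
                // A cache query would see only currently cached entries, so
                // non-key queries bypass the cache and hit the full population.
                return dbDao.findByCustomer(customer);
                }
            }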

    Hi Rob,
    > Why would you need 2 separate caches?
    The first cache would have an eviction policy and cache values, but would not have indexes; the second would not have eviction and would not store data, but would have index updates on changes.
    This way you have a fully indexed but not stored data-set, similar to the difference between stored and indexed attributes in Lucene.
    > Why not just
    > maintain a index within each cache so that every
    > entry causes the index to get updated inline (i.e.
    > synchronously within the call putting the data into
    > the cache)?
    >
    You cannot manually maintain an index, because that is not a configurable extension point (it is not documented how an index should be updated manually). You have to rely on Coherence to do it for you upon changes to entries in the owned partitions.
    And since Coherence does remove index references to evicted or removed data, the index would not know about the non-cached data.
    Or did I misunderstand how you imagine the indexes to be maintained? Did you envision an index separate from what Coherence has?
    > (You may have to change Coherence to do this.....)
    Changing Coherence was exactly what I was trying to avoid. I tried to come up with things within the specified extension points and the allowed operations, although it is possible that I still did not manage to remain within the allowed set.
    Of course, if changing Coherence is allowed, adding an option to filter out index changes caused by eviction events is probably the optimal solution.
         Of course, if changing Coherence is allowed, allowing an option of filtering index changes to non-eviction events is probably the optimal solution.
    > And I don't think that the write-behind issue would
    > be a problem, as the current state cache of the cache
    > (and it's corresponding index) reflects the future
    > state of the backing store (which accordingly to
    > Coherence's resilience guarantee will definitely
    > occur).
    >
    The index on the second cache in the write-behind scenario would be out-of-sync only if the second cache were updated by invocations to the cache store of the first cache. If it is updated upon changes to the backing map, then it won't be. Obviously, if you don't have 2 caches but only one, it cannot be out-of-sync.
    > So you would have a situation where cache evictions
    > occur regularly but the index just overflows to disk
    > in such a fashion that relevant portions of it can be
    > recalled in an intelligent fashion, leveraging some
    > locality of reference for example.
    >
    I don't really see how this could be done. AFAIK, all the indexes Coherence has are maintained in memory and do not overflow to disk. I may be wrong on this, or I may have misunderstood what you mean by index handling.
    > a) you leverage locality of reference by using as
    > much keyed data access as possible
    > b) have Coherence do the through-reading
    > c) use database DAO for range querying
    > d) if you were to use Hibernate for (c), you might be
    > able to double dip by using Coherence as an L2 cache.
    > (I don't know if this unecessarily duplicates cached
    > data....)
    >
    > Any thoughts on this?
    a: If you know the ids on your own, then this is the optimal solution, provided cache hit rates can be driven high enough. If you have to query for the ids, the latency might be too high.
    b: Read-through can become suboptimal since, AFAIK, the cache store currently reads rows one by one; only read-ahead uses loadAll (though I may be wrong on this). Loading from the database can be optimized for multiple-id loading as well, to be faster than the same operation via the cache store. So it is very important that the cache hit rate be very high for performance-relevant data in the read-through case.
    c: Use the database DAO for complex querying, possibly for almost anything more complex than straight top-down queries. Make performance tests for both solutions, try to rely on partition affinity, and try to come up with data structures that allow indexes which can be queried with as few queries as possible and with not too high an index access count.
    d: You cannot query Hibernate second-level cache data with Coherence, as Hibernate second-level caches do not contain structured data; they contain byte[][]s or byte[]s holding the column values serialized into them (separately or in the same byte[], I don't remember which).
    Best regards,
    Robert
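    For reference, this is how an application asks Coherence to build and maintain an index, and how a filter query then uses it (addIndex, ReflectionExtractor, and EqualsFilter are standard Coherence API; the cache and method names are made-up examples). As Robert notes, the index tracks only currently cached entries:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.extractor.ReflectionExtractor;
        import com.tangosol.util.filter.EqualsFilter;

        import java.util.Set;

        public class IndexExample
            {
            public static void main(String[] args)
                {
                NamedCache cache = CacheFactory.getCache("people"); // hypothetical name

                // Coherence builds and maintains the index itself; there is no
                // public hook for updating index contents by hand, and entries
                // evicted or removed from the cache are dropped from the index.
                cache.addIndex(new ReflectionExtractor("getLastName"), false, null);

                // The query uses the index where available, but it still sees
                // only the data that is currently in the cache.
                Set entries = cache.entrySet(new EqualsFilter("getLastName", "Smith"));
                System.out.println("matches in cache: " + entries.size());
                }
            }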
