WrapperNamedCache

I'm having some trouble getting the WrapperNamedCache to work.
When I try to use the cache from a client it throws the following exception:
2009-04-17 14:05:10.736/6.157 Oracle Coherence GE 3.4.1/407 <Warning> (thread=DistributedCache, member=1): Application code running on "DistributedCache" service thread(s) should not call ensureCache as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
2009-04-17 14:05:10.751/6.172 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:59)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.size(DistributedCache.CDB:13)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.isEmpty(DistributedCache.CDB:1)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$KeySet.isEmpty(DistributedCache.CDB:1)
     at com.tangosol.util.ConverterCollections$ConverterCollection.isEmpty(ConverterCollections.java:483)
     at com.tangosol.util.AbstractKeySetBasedMap.isEmpty(AbstractKeySetBasedMap.java:48)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:25)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
     at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
     at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
     at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
     at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
     at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
     at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
     at com.tangosol.coherence.component.util.collections.WrapperMap.put(WrapperMap.CDB:1)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.put(Grid.CDB:13)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$StorageIdRequest.onReceived(DistributedCache.CDB:40)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
     at java.lang.Thread.run(Unknown Source)

Here's the configuration:
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- Distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <serializer>
        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      </serializer>
      <backing-map-scheme>
        <class-scheme>
          <scheme-ref>entitled-scheme</scheme-ref>
        </class-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- Wrapped caching scheme. -->
    <class-scheme>
      <scheme-name>entitled-scheme</scheme-name>
      <class-name>com.tangosol.net.cache.WrapperNamedCache</class-name>
      <init-params>
        <init-param>
          <param-type>{cache-ref}</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
        <init-param>
          <param-type>string</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
  </caching-schemes>
</cache-config>

Edited by: pards on Apr 17, 2009 11:10 AM

The WrapperNamedCache cannot be used as a backing map; indeed, no NamedCache can be.
I assume that you tried this approach as an alternative to the WrapperNamedCache used in the proxy for security. Please see my reply to that thread:
Re: Row-level security?
--David
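For reference, here is a hedged configuration sketch of the proxy-side alternative David refers to, modeled on the class-scheme pattern shown in the CQC reply further down this page. The "Entitled" name prefixes and scheme names are illustrative assumptions, not the FAQ's actual example, and the macro-based mapping may require a newer Coherence release than 3.4. The key point: the wrapper is mapped as the cache itself on the storage-disabled proxy node, never inside <backing-map-scheme>.

  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Entitled*</cache-name>
      <scheme-name>entitled-scheme</scheme-name>
      <init-params>
        <init-param>
          <param-name>underlying-cache</param-name>
          <param-value>EntitledUnderlying*</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>EntitledUnderlying*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- The wrapper is the cache on the proxy, not a backing map -->
    <class-scheme>
      <scheme-name>entitled-scheme</scheme-name>
      <class-name>com.tangosol.net.cache.WrapperNamedCache</class-name>
      <init-params>
        <init-param>
          <param-type>{cache-ref}</param-type>
          <param-value>{underlying-cache}</param-value>
        </init-param>
        <init-param>
          <param-type>string</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>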

Similar Messages

  • WrapperNamedCache is a NamedCache ?

    Hi All,
    When I create a WrapperNamedCache I am not able to see the newly created cache in the JConsole JMX MBeans. Is the WrapperNamedCache a fully network-aware cache or just a local cache? Does it sit in the Coherence cluster?
    Regards
    S

    Hi JK,
    Thanks for your reply. Is there any API class that works like a NamedCache? I want to bundle all my domain temporary collections into a NamedCache, put it on the cache servers/storage nodes, and later take these collections from the cache server and work with them like a NamedCache.
    The problem is we have too many temporary collections; creating a NamedCache for each temp collection is very expensive, and it could degrade the performance of the management node and other nodes as well.
    I am trying to create a Custom CacheFactory to manage the temporary collections.
    Regards
    S.
    Edited by: user594809 on 26-Apr-2010 08:31

  • Purpose and ambition level with WrapperNamedCache class?

    I recently found the WrapperNamedCache in the Coherence API and would like to know a bit about its intended purpose and its ambition level (i.e. what kind of cache is it compatible with and how complete is the compatibility)? Is this a class that is intended for end-user use or is it mainly intended for internal use in Coherence?
    I see one possible use of this class as a way to "mock" a coherence cache for testing purposes - is this one of the intended uses?
    Does it support indexes?
    Does it support aggregation and invocables?
    etc
    /Magnus

    Hi Magnus,
    You are correct. WrapperNamedCache is used both internally and for mock testing.
    WrapperNamedCache can wrap any Map instance. It supports all of the NamedCache APIs, but in some cases does nothing if the underlying Map is not an instance of NamedCache. So, for example, it supports addIndex/removeIndex but only if the underlying Map is a QueryMap.
    I think that WrapperNamedCache is extended by some end users to create a custom NamedCache.
    --Tom
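    As a small illustration of the mock-testing use Tom describes, here is a minimal sketch (class and cache names invented for illustration) that wraps a plain HashMap:
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    import java.util.HashMap;
    public class MockCacheExample {
        public static void main(String[] args) {
            // wraps a purely local HashMap: no cluster, no MBeans, no network
            NamedCache cache = new WrapperNamedCache(new HashMap(), "mock-test");
            cache.put("Key-1", "Value-1");
            System.out.println(cache.get("Key-1")); // prints Value-1
            // addIndex and query features are no-ops here because a HashMap
            // is not a QueryMap, as Tom notes above
        }
    }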

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache. But when I try to add an object to it I get the exception below. However, I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
               <!-- Replicated caching scheme. -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: user1945969 on Jun 5, 2010 4:16 PM

    By default, it should have used FIXED as the unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> to your cache config for the replicated cache?
    Or try inserting an object (both key and value) that implements Binary.
    Check the unit-calculator part of this link:
    http://wiki.tangosol.com/display/COH35UG/local-scheme
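    Applied to the configuration above, the suggested change would look something like this (a sketch of the poster's replicated-scheme with only the unit-calculator element added):
    <replicated-scheme>
         <scheme-name>MY-replicated-cache-scheme</scheme-name>
         <service-name>ReplicatedCache</service-name>
         <backing-map-scheme>
              <local-scheme>
                   <!-- count each entry as one unit instead of its serialized size -->
                   <unit-calculator>FIXED</unit-calculator>
              </local-scheme>
         </backing-map-scheme>
         <lease-granularity>member</lease-granularity>
         <autostart>true</autostart>
    </replicated-scheme>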

  • Extend Security FAQ Example Broken?

    I have been trying out the Extend Security example in the Coherence FAQ here http://coherence.oracle.com/pages/viewpage.action?pageId=1343626
    Basically the way it works is that the Extend proxy uses a class scheme that uses a sub-class of com.tangosol.net.cache.WrapperNamedCache to wrap the "real" cache. This sub-class can then override methods you want to secure to do an access check before forwarding the method call to the wrapped cache.
    Now, this all appeared to work fine until I tried to execute queries against the cache. The queries will execute against the "wrapped" cache which resides in the storage enabled nodes of the cluster, as the Extend proxies are storage disabled. I started to get back errors that the methods I was querying on did not exist in the objects I had put into the cache.
    For example: Missing or inaccessible method: com.tangosol.util.Binary#getIntValue[]
    The reason for this, it turns out, is that the "put" method of the WrapperNamedCache in the Extend proxy gets instances of com.tangosol.util.Binary for its key and value parameters, as the Extend client has POF-serialized the values to send over the wire. When WrapperNamedCache calls "put" on the real cache it presumably sends these com.tangosol.util.Binary values. It then appears that these are serialized again to go over the wire to the real cache, so the underlying real cache ends up containing a serialized value of a serialized value, and hence my queries fail.
    Is this "double" serializing due to me mis-configuring the caches, or am I stuck with it?
    Obviously it is pretty impractical to de-serialize the objects in the methods of the WrapperNamedCache sub-class.
    Presumably making the Extend proxies storage enabled nodes of the cluster wouldn't make any difference either.
    I am beginning to give up on ever having a secure Coherence cluster as so many things related to security in Coherence seem broken.
    Banging my head in frustration...
    JK.

    I haven't been able to get this to work, and I'm using Noah's updated code.
    I'm trying to implement row-level security using the EntitledNamedCache, so basically I'll be intercepting calls to get() and checking the client's privileges against the data they're trying to read.
    The problem - as Jonathan experienced - is that inside the EntitledNamedCache the super.get() call to the WrapperNamedCache returns a com.tangosol.util.Binary instead of the actual object that was put() in.
    Is there a way for WrapperNamedCache.get() to return the actual object?

  • ClassCastException in CompositeAggregator.Parallel

    As you Tangosol folks have proved so useful lately, I have another conundrum :)
    I am using the Tangosol library of aggregators to run a parallel query across two distributed cache service members. The code I am running is as follows:
    cache.aggregate(
        new EqualsFilter("isRejected", false),
        GroupAggregator.Parallel.createInstance("getCcyPair",
            CompositeAggregator.Parallel.createInstance(
                new InvocableMap.ParallelAwareAggregator[] {
                    new Count(),
                    new LongSum("getBalance")
                })));
    where I have confirmed that isRejected returns a boolean primitive and getBalance a long primitive.
    This appears to run fine on one node but across two nodes throws an exception:
    An exception occurred while processing a AggregateFilterRequest
    java.lang.ClassCastException: [Lcom.tangosol.util.InvocableMap$EntryAggregator;
         at com.tangosol.util.aggregator.CompositeAggregator$Parallel.getParallelAggregator(CompositeAggregator.java:330)
         at com.tangosol.util.aggregator.GroupAggregator$Parallel.getParallelAggregator(GroupAggregator.java:445)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:46)
         at com.tangosol.coherence.component.util.SafeNamedCache.aggregate(SafeNamedCache.CDB:1)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.aggregate(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.comm.messageFactory.NamedCacheFactory$AggregateFilterRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.comm.message.Request.run(Request.CDB:13)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.process(NamedCacheProxy.CDB:7)
         at com.tangosol.coherence.component.comm.Channel.onReceive(Channel.CDB:94)
         at com.tangosol.coherence.component.comm.Connection.onReceive(Connection.CDB:64)
         at com.tangosol.coherence.component.comm.ConnectionManager$MessageTask.run(ConnectionManager.CDB:4)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:41)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
         at java.lang.Thread.run(Thread.java:595)
    Any ideas? I have tried using an EntryAggregator[] array in the CompositeAggregator.Parallel constructor instead.
    Regards, James

    I am using Tangosol Coherence 3.3, and I get the same error.
    InvocableMap.EntryAggregator aggregator = GroupAggregator.createInstance(multiExtractor,
        CompositeAggregator.createInstance(
            new InvocableMap.EntryAggregator[]{
                new LongSum("getRiskQty"),
                new LongSum("getRiskDelta"),
                new LongSum("getRiskMtm"),
                new LongSum("getRiskGamma"),
                new LongSum("getRiskTheta"),
                new LongSum("getRiskVega"),
                new LongSum("getRiskCashVega"),
                new LongSum("getDiscRiskMtm"),
                new LongSum("getDiscRiskDelta")}));
    results in this error:
    Exception in thread "main" com.tangosol.io.pof.PortableException (Remote: An exception occurred while processing a AggregateFilterRequest) java.lang.ClassCastException: [Lcom.tangosol.util.InvocableMap$EntryAggregator; cannot be cast to [Lcom.tangosol.util.InvocableMap$ParallelAwareAggregator;
         at com.tangosol.util.aggregator.CompositeAggregator$Parallel.getParallelAggregator(CompositeAggregator.java:330)
         at com.tangosol.util.aggregator.GroupAggregator$Parallel.getParallelAggregator(GroupAggregator.java:445)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:44)
    I have a data client connecting to a Tangosol cluster using a TCP proxy (Tangosol*Extend).
    The same code worked when the client was started on the same machine without using Extend.
    Can somebody please help with this? Could this be a licensing issue? We are still evaluating the product.
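    Based only on the cast reported in the exception, one thing to try is declaring the array as ParallelAwareAggregator[] (LongSum qualifies because it extends AbstractAggregator, which is parallel-aware in Coherence 3.x). A hedged sketch with an assumed extractor method, abbreviated to two aggregators; note that if the array type is lost when the request is serialized over Extend, the error could persist, which would fit the local-versus-remote behavior described above:
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.aggregator.CompositeAggregator;
    import com.tangosol.util.aggregator.GroupAggregator;
    import com.tangosol.util.aggregator.LongSum;
    public class RiskAggregators {
        public static InvocableMap.EntryAggregator create() {
            // the array is typed ParallelAwareAggregator[], so the cast in
            // CompositeAggregator$Parallel.getParallelAggregator can succeed
            return GroupAggregator.createInstance("getCcyPair", // assumed extractor
                CompositeAggregator.createInstance(
                    new InvocableMap.ParallelAwareAggregator[] {
                        new LongSum("getRiskQty"),
                        new LongSum("getRiskDelta")
                    }));
        }
    }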

  • Simple get vs entry processor for processing 'get' requests

    We're wondering about the best way to handle a simple get where we want to do some processing on the request:
    1. we might want to authenticate/authorize a proxy as well as an 'implied' user sitting behind the proxy
    2. we might need to look up the cache entry by modifying the key presented by the user: for example, changing capitalization
    3. we might even have multiple aliases for the cache entry but only store it once to avoid data duplication; in this case, we'd need to resolve an alias to another name entirely.
    Would it be best to use an entry processor to do this (a 'GetProcessor'), or is there a better way using a simple 'get' in the basic API with some server-side logic to intercept the request and do the processing? If the latter, can you please explain how to do it?
    And please point out any performance considerations if you can.
    Thanks!
    Edited by: Murali on Apr 25, 2011 2:51 PM

    Hi Murali,
    You would probably be better off using an Invocable and InvocationService for what you want to do. The main reason for this would be points 2 and 3 where you say you might want to modify the key or have aliases (I presume you mean aliases for keys).
    If you use a get or an EntryProcessor these two requests would be routed to the storage member that owns the key of the get request or EntryProcessor invoke. If you then wanted to modify the key the get or EntryProcessor may now be on the wrong node as a different node may own the new key.
    If your data access requests are all coming from client applications over Extend, and not from within cluster members, then you could intercept calls to the cache on the Extend proxy and do your extra processing and key modification there. There are a couple of different ways of doing this depending on what version of Coherence you are using and whether this is restricted to a few caches or all caches. Coherence 3.6 and above make this easier, as they introduce methods and configuration as part of the security API that allow you to intercept calls and easily wrap caches. It is still possible in 3.5, but a bit more work.
    Probably the easiest way on an Extend proxy is to create a wrapper class that can wrap the real cache and intercept the required methods. You can do this by extending WrapperNamedCache and overriding the methods you want to intercept, such as get() (see the sketch after this reply). Actually making Coherence use your wrapper instead of the real cache can be done in a number of ways, depending again on which version of Coherence you have.
    Are all your data access requirements just gets, or do you intend to use Filter queries? Obviously any query made against a cache, where the Filter targets the key rather than the value, would fail if the filter was using unmodified key values. You would also need to cope with getAll requests.
    If you can expand a bit on where the requests for data will come from (i.e. are they all form Extend clients) and which version of Coherence you have then it would be possible to give a better answer as right now there are quite a few possibilities.
    JK
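    A minimal sketch of the wrapper approach JK describes, extending WrapperNamedCache to intercept get(); the class name and the key-normalization rule (upper-casing String keys) are assumptions for illustration only:
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    public class InterceptingNamedCache extends WrapperNamedCache {
        public InterceptingNamedCache(NamedCache cache) {
            super(cache, cache.getCacheName());
        }
        @Override
        public Object get(Object oKey) {
            // hypothetical processing: authorize the caller here, then
            // normalize the key before delegating to the real cache
            Object key = (oKey instanceof String) ? ((String) oKey).toUpperCase() : oKey;
            return super.get(key);
        }
    }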

  • Coherence-Extend and Continuous Query performance

    Hi,
    I am trying to evaluate the performance impact of continous queries, when using coherence extend (TCP). The idea is that desktop clients will be running continuous queries against a cluster, and other processes will be updating the data in that cluster. The clients themselves take a purely read-only view of the data.
    In my tests, I find that the updater process takes about 250ms to update 5000 values in the cache (using a putAll operation). When I have a continuous query running against a remote cache, linked with coherence extend, the update time increases to about 1500ms. This is not CPU bound.
    Is this what people would expect?
    If so this raises questions to me about:
    1) slow subscribers - what if one of my clients is very badly behaved? Can I detect this and/or take action?
    2) conflation of updates - can Coherence do conflation?
    3) can I get control to send object deltas over the wire rather than entire objects?
    Is this a use case for which Coherence*Extend and continuous queries were designed?
    Robert

    Yes, it is certainly possible, although depending on your requirements it may be more or less additional coding. You have a few choices. For example, since you have a CQC on the cache, you could conceivably aggregate locally (on any event). In other words, since all the data are local, there is no need to do the parallel aggregation (unless it is CPU limited). Depending on the aggregation, you may only have to recalculate part of it.
    You can access the internal data structure (Map) within the CQC as follows:
    Map map = cqc.getInternalCache();
    // now we can do aggregation
    NamedCache cache = new WrapperNamedCache(map, cqc.getCacheName());
    cache.aggregate(AlwaysFilter.INSTANCE, new Count()); // for example
    More complex approaches would only recalculate portions based on the event, or (depending on the function) might use the event to adjust the aggregated results.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/

  • Coherence entries do not expire

    Hello,
    I have a replicated scheme and set the expiration time (30 minutes) in the put method from NamedCache. For the first hour and a half the behavior was as expected; from the logs I could see the entries expired after 30 minutes. But after that they seem not to expire. My problem is that some old data comes from the cache, although I can see the new data was stored. So I'm afraid the old data remains somewhere, maybe on some nodes, and is probably replicated back to all nodes.
    I'm not sure what happens, but I only found this post: Problem with Coherence replicated cache and put operation with cMillis para where it says that replicated cache does not fully support per-entry expiration. Has anyone heard of or experienced something similar?
    And do you know how can I remove the entries from the cache?

    Hi,
    Given that the reply in this thread Problem with Coherence replicated cache and put operation with cMillis para was from Jason, who works in the Coherence Engineering Team, and was only a few months ago, I would be inclined to believe him when he says per-entry expiry is not supported in replicated caches.
    An alternative to replicated caches is to use a Continuous Query Cache that is backed by a distributed cache. This will work like a replicated cache but gets around a lot of the limitations you have with replicated cache, like per-entry expiry.
    The easiest way to make a replicated cache using a CQC is to use a class-scheme and custom wrapper class like this.
    Your cache configuration would look something like this...
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
                  xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
        <defaults>
            <serializer>pof</serializer>
        </defaults>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>Replicated*</cache-name>
                <scheme-name>cqc-scheme</scheme-name>
                <init-params>
                    <init-param>
                        <param-name>underlying-cache</param-name>
                        <param-value>ReplicatedUnderlying*</param-value>
                    </init-param>
                </init-params>
            </cache-mapping>
            <cache-mapping>
                <cache-name>ReplicatedUnderlying*</cache-name>
                <scheme-name>replicated-underlying-scheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <class-scheme>
                <scheme-name>cqc-scheme</scheme-name>
                <class-name>com.thegridman.coherence.CQCReplicatedCache</class-name>
                <init-params>
                    <init-param>
                        <param-type>{cache-ref}</param-type>
                        <param-value>{underlying-cache}</param-value>
                    </init-param>
                    <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>{cache-name}</param-value>
                    </init-param>
                </init-params>
            </class-scheme>
            <distributed-scheme>
                <scheme-name>replicated-underlying-scheme</scheme-name>
                <service-name>ReplicatedUnderlyingService</service-name>
                <backing-map-scheme>
                    <local-scheme/>
                </backing-map-scheme>
            </distributed-scheme>
        </caching-schemes>
    </cache-config>
    Any cache with a name prefixed by "Replicated", e.g. ReplicatedTest, would map to the cqc-scheme, which is our custom class scheme. This will create another distributed cache prefixed with "ReplicatedUnderlying", e.g. for the ReplicatedTest cache we would also get the ReplicatedUnderlyingTest cache.
    Now, because the data will go into the distributed scheme, we can do all the normal things with this cache that we can do with any distributed cache, e.g. we could add cache stores, listeners work properly, we know where entry processors will go, etc...
    The com.thegridman.coherence.CQCReplicatedCache is our custom NamedCache implementation that is basically a CQC wrapper around the underlying cache.
    The simplest form of this class is...
    package com.thegridman.coherence;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    import com.tangosol.util.filter.AlwaysFilter;
    public class CQCReplicatedCache extends WrapperNamedCache {
        public CQCReplicatedCache(NamedCache underlying, String cacheName) {
            super(new ContinuousQueryCache(underlying, AlwaysFilter.INSTANCE), cacheName);
        }
    }
    Now in your application code you can still get caches just like normal; your code has no idea that the cache is really a wrapper around a CQC. So you can just do normal stuff like this in your code...
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1");This though still does not quite support per-entry expiry reliably. The way that expiry works in Coherence is that entries are only expired when another action happens on a cache, such as a get, put, size, entrySet and so on. The problem with the code above is that say you do this...
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1", 5000);
    Thread.sleep(6000);
    Object value = cache.get("Key-1");
    ...you would expect to get back null, but you will still get back "Value-1". This is because the value comes from the CQC and never hits the underlying cache. To make eviction work, for certain CQC operations we need to poke the underlying cache in our wrapper, so we change the CQCReplicatedCache class like this:
    package com.thegridman.coherence;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.InvocableMap.EntryProcessor;
    import com.tangosol.util.filter.AlwaysFilter;
    import java.util.Collection;
    import java.util.Comparator;
    import java.util.Map;
    import java.util.Set;
    public class CQCReplicatedCache extends WrapperNamedCache {
        private NamedCache underlying;
        public CQCReplicatedCache(NamedCache underlying, String cacheName) {
            super(new ContinuousQueryCache(underlying, AlwaysFilter.INSTANCE), cacheName);
            this.underlying = underlying;
        }
        public NamedCache getUnderlying() {
            return underlying;
        }
        @Override
        public Set entrySet(Filter filter) {
            underlying.size(); // poke the underlying cache to trigger eviction
            return super.entrySet(filter);
        }
        @Override
        public Set entrySet(Filter filter, Comparator comparator) {
            underlying.size();
            return super.entrySet(filter, comparator);
        }
        @Override
        public Map getAll(Collection colKeys) {
            underlying.size();
            return super.getAll(colKeys);
        }
        @Override
        public Object invoke(Object oKey, EntryProcessor agent) {
            underlying.size();
            return super.invoke(oKey, agent);
        }
        @Override
        public Map invokeAll(Collection collKeys, EntryProcessor agent) {
            underlying.size();
            return super.invokeAll(collKeys, agent);
        }
        @Override
        public Map invokeAll(Filter filter, EntryProcessor agent) {
            underlying.size();
            return super.invokeAll(filter, agent);
        }
        @Override
        public Set keySet(Filter filter) {
            underlying.size();
            return super.keySet(filter);
        }
        @Override
        public boolean containsValue(Object oValue) {
            underlying.size();
            return super.containsValue(oValue);
        }
        @Override
        public Object get(Object oKey) {
            underlying.size();
            return super.get(oKey);
        }
        @Override
        public boolean containsKey(Object oKey) {
            underlying.size();
            return super.containsKey(oKey);
        }
        @Override
        public boolean isEmpty() {
            underlying.size();
            return super.isEmpty();
        }
        @Override
        public int size() {
            return underlying.size();
        }
    }
    We have basically poked the underlying cache, by calling size() on it prior to doing any other operation that would not normally hit the underlying cache, so causing eviction to trigger. I think I have covered all the methods above, but it should be obvious how to override any others I have missed.
    If you now run this code
    NamedCache cache = CacheFactory.getCache("ReplicatedTest");
    cache.put("Key-1", "Value-1", 5000);
    Thread.sleep(6000);
    Object value = cache.get("Key-1");
    ...it should work as expected.
    JK

  • KeyPartitioningStrategy and PartitionedFilter?

    I have been playing around with the WrapperNamedCache class. I created a mock partitioned cache service that uses our application-specific KeyPartitioningStrategy. I did, however, not manage to get the PartitionedFilter to filter entries according to the strategy and the passed-in partitions.
    When looking at the PartitionedFilter class I actually fail to see how it can handle a custom KeyPartitioningStrategy, since none of the arguments passed in can be used to retrieve it...
    Some experiments with my debugger, as well as a small test program (included below), seemed to confirm this... My test program shows that the partitioned filter does not require Coherence to be running (no logo text printed etc), so it does not seem to retrieve the KeyPartitioningStrategy in some undocumented behind-the-scenes way...
    And I am using Coherence 3.5.
    Is this broken or am I missing something here?
    /Magnus
    package test;
    import com.tangosol.util.filter.PartitionedFilter;
    import com.tangosol.util.filter.AlwaysFilter;
    import com.tangosol.net.partition.PartitionSet;
    import java.util.Map;
    public class FilterTest {
        private static final class FakeEntry<K, V> implements java.util.Map.Entry<K, V> {
            private final K key;
            private V value;
            private FakeEntry(K key, V value) {
                this.key = key;
                this.value = value;
            }
            public K getKey() {
                return key;
            }
            public V getValue() {
                return value;
            }
            public V setValue(V value) {
                V oldValue = this.value;
                this.value = value;
                return oldValue;
            }
            public String toString() {
                return key.toString();
            }
        }
        // A filter that prints the keys that are evaluated
        private static final class VerboseAlwaysFilter extends AlwaysFilter {
            @Override
            public boolean evaluateEntry(Map.Entry entry) {
                System.out.println("Evaluating " + entry);
                return super.evaluateEntry(entry);
            }
        }
        public static void main(String[] args) {
            PartitionSet set = new PartitionSet(5);
            set.add(0);
            set.add(2);
            set.add(4);
            PartitionedFilter pf = new PartitionedFilter(new VerboseAlwaysFilter(), set);
            for (int i = 0; i < set.getPartitionCount(); i++) {
                pf.evaluateEntry(new FakeEntry<Integer, Integer>(i, i));
            }
        }
    }
    Edited by: MagnusE on Nov 19, 2009 10:16 AM

    Hi Magnus,
    I believe that the PartitionedFilter is handled as a wrapper filter (similarly to KeyAssociatedFilter and LimitFilter): the distributed cache service on the node sending the filter looks for it as the outermost filter, retrieves the partition set and the wrapped filter from the PartitionedFilter, and uses the partition set to decide which partitions to dispatch the wrapped filter to.
    This is even more likely as in earlier versions (3.4) PartitionedFilter was not even a PortableObject, which indicates that it is not supposed to be sent over the network (and this caused a problem when trying to use it from a TCP*Extend client).
    The WrapperNamedCache may or may not support this functionality; my best guess is that it does not.
    Best regards,
    Robert
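    For contrast, a hedged sketch of how PartitionedFilter is typically used against a real partitioned cache, where the service strips the outermost filter as Robert describes; the cache name and partition count are assumptions:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.partition.PartitionSet;
    import com.tangosol.util.filter.AlwaysFilter;
    import com.tangosol.util.filter.PartitionedFilter;
    import java.util.Set;
    public class PartitionedQuery {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example"); // assumed cache name
            int cParts = 257; // assumed partition-count (the default)
            PartitionSet parts = new PartitionSet(cParts);
            for (int iPart = 0; iPart < cParts; iPart++) {
                parts.add(iPart);
                // the service peels off the outermost PartitionedFilter and runs
                // the wrapped filter only against the selected partitions
                Set setEntries = cache.entrySet(new PartitionedFilter(new AlwaysFilter(), parts));
                System.out.println("partition " + iPart + ": " + setEntries.size() + " entries");
                parts.remove(iPart);
            }
        }
    }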
