Concurrency with Entry Processors on Replicated Cache

Hi,
In the documentation it says that entry processors on replicated caches are executed on the initiating node.
How is concurrency handled in this situation?
What happens if two or more nodes are asked to execute something on an entry at the same time?
What happens if the node initiating the execution is a storage disabled node?
Thanks!

Jonathan.Knight wrote:
In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of an owner for a replicated cache, or is it random?
At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
JK

Hi Jonathan,
in a replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is always owned by the last node to have carried out a successful modification on it, where a modification may be a put or a remove, but it can also be a lock operation. Lease granularity is per entry.
Practically, the lock operation in the code Dimitri pasted serves two purposes: first, it ensures no other node can lock the entry; second, it brings the lease to the locking node, so the entry processor can then be executed correctly against the local copy of the entry (see the sketch below).
Best regards,
Robert
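
A minimal sketch of the lock-then-invoke pattern described above; the cache name, key and MyEntryProcessor class are placeholders, not from the original thread:
NamedCache cache = CacheFactory.getCache("repl-cache");   // hypothetical replicated cache
Object key = "some-key";                                   // hypothetical key
if (cache.lock(key, -1)) {       // -1 waits indefinitely for the lock (lease)
    try {
        // The lease is now held by this node, so the entry processor
        // executes against the locally owned copy of the entry.
        cache.invoke(key, new MyEntryProcessor());
    } finally {
        cache.unlock(key);
    }
}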

Similar Messages

  • Partition-level Transactions - Entry Processor access "other" cache.

    Hi there,
    I'm playing around with Coherence 3.7 and like the look of the new Partition-level Transactions. I'm trying to implement an Entry Processor that accesses an "other" cache in order to create an audit record.
    The slides from Brian's SIG presentation show the following code for an example EP doing this:
    public Object process(Entry entry) {
        // Update an entry in another cache.
        ((BinaryEntry) entry).getBackingMapContext("othercache").getBackingMap().put("othercachekey", value);
        return null;
    }
    The problem I'm facing is that the API doesn't seem to have an implementation of BinaryEntry.getBackingMapContext(String cacheName). It just has a no-arg version which accesses the cache of the current entry. It's not just an error in the API docs, as the code doesn't compile either.
    Any ideas what to do here?
    Cheers,
    Steve

    Care to expand on that, Charlie?
    The reason I ask is that since I posted my reply to JK, I noticed that I was getting ClassCastExceptions on my audit record insert in the server logs. I had to resort to the use of "converters" to get my newly created audit record into binary format before the "put" into the second (also distributed) cache would succeed without errors:
    BackingMapManagerContext ctx = ((BinaryEntry)entry).getContext();
    ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
    ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
    ctx.getValueToInternalConverter().convert(pa));
    The "PositionAudit" cache is the one I want to create a new entry in each time an EntryProcessor is invoked against Position objects in the "Position" cache. The object I'm creating for that second cache, "pa" is the newly created audit object based on data in the "position" object the entry processor is actually invoked against.
    So the requirement is pretty simple, at least in my mind: Position objects in the "positions" cache get updated by an EntryProcessor. When the EntryProcessor fires, it must also write an "audit" record/object to a second cache, based upon data in the Position object just manipulated. As all of the caches are distributed caches - which store their data in Binary format, AFAIK - I still have to do the explicit conversion to binary format of my new audit object to get it successfully "put" into the audit cache.
    Seems to still be quite a bit of messing around (including the KeyAssociator stuff to make sure these objects are in the same partitions in the first place for all this to work), but at least I now get "transactionally atomic" operations across both the positions and positions audit caches, something that couldn't be done from an EP prior to 3.7.
    As I say, it works now. Just want to make sure I'm going about it the right way. :)
    Any comments appreciated.
    Cheers,
    Steve
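
    Pulling the pieces above together, a rough sketch of such an entry processor might look like the following; Position, PositionAudit and the business update itself are hypothetical stand-ins, while the backing-map and converter calls mirror the snippet earlier in this post:
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.util.BinaryEntry;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class PositionAuditProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
            // Update the Position entry (Position and PositionAudit are hypothetical classes).
            Position position = (Position) entry.getValue();
            // ... apply the business update to the position here ...
            entry.setValue(position);

            // Build the audit record from the updated position.
            PositionAudit pa = new PositionAudit(position);

            // Write the audit record into the backing map of the "PositionAudit" cache,
            // converting key and value to the internal (Binary) format first.
            BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
            ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
                    ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
                    ctx.getValueToInternalConverter().convert(pa));

            return null;
        }
    }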

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
    re-entrant calls into the same Coherence service are strongly recommended against.
    For more about it, please look at the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert
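
    One way to stay within that constraint is to let the entry processor update only the parent entry and return the new child data, then perform the second put from the calling thread rather than from the service thread. A rough sketch, with AddChildProcessor, parentKey and childKey as hypothetical names:
    NamedCache tree = CacheFactory.getCache("tree-cache");    // hypothetical cache name

    // The processor appends childKey to the parent's child list and returns the
    // child value to store; it never calls back into the cache service itself.
    Object childValue = tree.invoke(parentKey, new AddChildProcessor(childKey));

    // The second update runs on the client thread, so it is not re-entrant.
    tree.put(childKey, childValue);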

  • Read through Scheme in Replicated Cache with berkely db

    Hi, I have 20 GB of data, and when restarting the server I need to populate all of it into the Coherence cache. If I create a preload Java class it will take roughly 30 minutes to an hour to load the data into the cache; while it is loading, how can I serve requests that come in? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to combine read-through + replicated cache + Berkeley DB? If yes, please post sample code with full references. Thanks in advance.
    Edited by: 875786 on Dec 5, 2011 8:10 PM

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see...
    http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE:
    To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.
    So it would appear that you cannot do read-through with a replicated cache - which makes sense really when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data - or do you, in which case you must have tuned GC very well. You say you are concerned about NFRs and network latency, yet you are building a system that will require either very big heaps (and hence, at some point, long GCs) or, if you cannot hold all the data in memory, expiry configuration and then the latency of reading the data back from the DB.
    If you are using read-through then presumably your use case does not require all the data to be in the cache - i.e. all your data access is get-by-key and you do not run any filter queries. If that is the case then use a distributed cache where you can store all the data, or use read-through. If all your access is key-based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs hold the data and configure near caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than the replicated cache you are suggesting.
    JK
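
    For the preload class itself, a rough sketch of batched loading; the cache name and the loadNextBatch() helper are hypothetical placeholders, and batching the puts keeps the number of network round trips down while the cache warms up:
    NamedCache cache = CacheFactory.getCache("big-cache");     // hypothetical cache name

    Map batch;
    while (!(batch = loadNextBatch(1000)).isEmpty()) {         // hypothetical Berkeley DB reader
        cache.putAll(batch);                                   // one network call per 1000 entries
    }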

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an admin-type module from which data is inserted/updated, and the other application reads data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store for the DB operations.
    I have the following Coherence configuration. It works fine, and the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it throws the exception, the application works as expected. One other thing: I am preloading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>TestCache</cache-name>
    <scheme-name>TestCacheDB</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <replicated-scheme>
    <scheme-name>TestCacheDB</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
    <local-scheme>
    <scheme-name>TestDBLocal</scheme-name>
    <cachestore-scheme>
    <class-scheme>
    <class-name>test.TestCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>TEST_SUPPORT</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </local-scheme>
    </backing-map-scheme>
    <listener/>
    <autostart>true</autostart>
    </replicated-scheme>
    <!--
    Proxy Service scheme that allows remote clients to connect to the
    cluster over TCP/IP.
    -->
    <proxy-scheme>
    <scheme-name>proxy</scheme-name>
    <service-name>ProxyService</service-name>
    <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address system-property="tangosol.coherence.extend.address">localhost</address>
    <port system-property="tangosol.coherence.extend.port">7001</port>
    <reusable>true</reusable>
    </local-address>
    </tcp-acceptor>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-type>String</param-type>
    <param-value>pof-config.xml</param-value>
    </init-param>
    </init-params>
    </serializer>
    </acceptor-config>
    <autostart system-property="tangosol.coherence.extend.enabled">false</autostart> </proxy-scheme>
    </caching-schemes>
    </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.Clas
    sCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
    at test.TestCacheStore.store(TestCacheStore.java:137)
    at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
    at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
    at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
    at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
    at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceiv
    ed(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedC
    ache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the di
    stribution due to 128 pending configuration updates
    TestBean.java
     public class TestBean implements PortableObject, Serializable {
          private static final long serialVersionUID = 1L;
          private String name;
          private String number;
          private String taskType;
          public String getName() {
               return name;
          }
          public void setName(String name) {
               this.name = name;
          }
          public String getNumber() {
               return number;
          }
          public void setNumber(String number) {
               this.number = number;
          }
          public String getTaskType() {
               return taskType;
          }
          public void setTaskType(String taskType) {
               this.taskType = taskType;
          }
          @Override
          public void readExternal(PofReader reader) throws IOException {
               name = reader.readString(0);
               number = reader.readString(1);
               taskType = reader.readString(2);
          }
          @Override
          public void writeExternal(PofWriter writer) throws IOException {
               writer.writeString(0, name);
               writer.writeString(1, number);
               writer.writeString(2, taskType);
          }
     }
    TestCacheStore.java
     public class TestCacheStore extends Base implements CacheStore {
          @Override
          public void store(Object oKey, Object oValue) {
               if (logger.isInfoEnabled())
                    logger.info("store :" + oKey);
               TestBean testBean = (TestBean) oValue; // Throws the ClassCastException here
               // Doing some processing here over testBean
               ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
               // Get the connection
               Connection con = connectionFactory.getConnection();
               if (con != null) {
                    // Code to insert into the database
               } else {
                    logger.error("Connection is NULL");
               }
          }
     }
    Edited by: user8279242 on Aug 30, 2010 11:44 PM

    Hello,
    The problem is that read-write backing maps (and hence cache stores) are not supported with replicated caches.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed, which might be the best option if the values are large and accessed infrequently, or
    if you only care about having an up-to-date keyset locally, you can pass false as the third argument to the CQC constructor.
    To get data from the CQC you can use
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();

  • POF serialization with replicated cache?

    Sorry again for the newbie question.
    Can you use POF serialized objects in a replicated cache?
    All of the examples show POF serialized objects being used with a partitioned cache.
    If you can do this, are there any caveats involved with the "replication" cache? I assume it would have to be started using the same configuration as the "master" cache.

    Thanks Rob.
    So, you just start up the Coherence instance on the (or at the) replication site, using the same configuration as the "master"? (Of course with appropriate classpaths and such set correctly)

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what situations do I have to lock a node?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the use of the Distributed cache topology for transactional data (and not Replicated). Distributed is in fact fully synchronous.
    The other reasons people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact most of the complexity of the Java Memory Model arises from the desire to avoid the requirement for simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • Can not access cache in entry processor

    Why can't the cache where the entry object lives be accessed from within an entry processor?
    The calculation in an entry processor will not always touch just one grid object; one object may be related to other objects.
    If we cannot do such things in an entry processor, we have to perform the calculation outside of it. However, calculating outside the grid would not be a very efficient way of generating the result.
    Does anyone have a good idea about this?

    Hi, Robert,
    I understood your meaning.
    Invocable is an alternative solution for processing data; used outside of an entry processor, no deadlock would happen.
    I have pasted a piece of my code here. The following code cannot work properly - just as you said, we cannot wait for a result inside an entry processor. My original purpose was to get the "Nation" property from another data object. Now I can only do that outside of the entry processor.
    public class UpdateFxProcessor extends AbstractProcessor {
         public Object process(Entry entry) {
              HtWsMeasuresUsdFx hfx = (HtWsMeasuresUsdFx) entry.getValue();
              InvocationService service = (InvocationService)
                   CacheFactory.getConfigurableCacheFactory().ensureService("InvocationService");
              Map map = service.query(new NationInvocable(hfx.getWsNationCode()), null);
              String nation = (String) map.get(service.getCluster().getLocalMember());
              hfx.setNation(nation);
              // ... other logic for calculating hfx ...
              return null;
         }
    }
    My data grid environment will store more than 10 million HtWsMeasuresUsdFx objects. I hope that calculation on these objects could be executed in parallel. Maybe I misunderstood the function of entry-processor. If we use Invocable outside of entry-processor, it is hard to control its calculation in parallel, isn't it?
    Best regards,
    Copper

  • Accessing Cache From an Entry Processor

    Is it possible to access another cache to perform operations on it when calling cache.invoke()? In this call I pass in the key and a processor that accesses another cache.

    Hi Dan,
         If that other cache is in a cache service different from the cache service of the cache on which the entry-processor is running, then you can.
         Otherwise you should not, because your code will be prone to deadlocks.
         Best regards,
         Robert
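
     A rough sketch of the allowed case; the cache name is hypothetical, and "audit-cache" is assumed to be mapped to a different cache service than the cache the processor is invoked against:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     public class CrossServiceProcessor extends AbstractProcessor {
         public Object process(InvocableMap.Entry entry) {
             // Safe only because "audit-cache" belongs to a different cache service;
             // calling back into the same service from here risks deadlock.
             NamedCache audit = CacheFactory.getCache("audit-cache");
             audit.put(entry.getKey(), entry.getValue());
             return null;
         }
     }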

  • Are put()'s to a replicated cache atomic?

    We are using Coherence for a storage application and we're observing what may be a Coherence issue with latency on put()'s to a replicated cache.
    We have two software nodes on different servers sharing information in the cache and I need to know if I can count on a put() being atomic with respect to all software and hardware nodes in the grid.
    Does anyone know if these operations are guaranteed to be atomic on replicated caches, and that an entry will be visible to ALL nodes when the put() returns?
    Thanks,
    Stacy Maydew

    You could use explicit locking, for example:
    if (cache.lock(key, timeout)) {
        try {
            Object value = cache.get(key);
            cache.put(key, value);
        } finally {
            cache.unlock(key);
        }
    } else {
        // decide what to do if you cannot obtain a lock
    }
    Note that when using explicit locking, you will require multiple trips to the cache server: to lock the entry, to retrieve the value, to update it and to unlock it. This increases the latency.
    You can also use an entry processor which carries the information needed for the update.
    An entry processor eliminates the need for explicit concurrency control.
    An example
    public class KlantNaamEntryProcessor extends AbstractProcessor implements Serializable {
        public KlantNaamEntryProcessor() {
        }
        public Object process(InvocableMap.Entry entry) {
            Klant klant = (Klant) entry.getValue();
            klant.setNaam(klant.getNaam().toLowerCase());
            entry.setValue(klant);
            return klant;
        }
    }
    and its usage:
    cache.invokeAll(filter, new KlantNaamEntryProcessor()).entrySet().iterator();
    Better examples can be found here: http://wiki.tangosol.com/display/COH33UG/Transactions,+Locks+and+Concurrency

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
    <scheme-name>local-repl-scheme</scheme-name>
    <backing-map-scheme>
    <local-scheme>
    <scheme-ref>base-local-scheme</scheme-ref>
    </local-scheme>
    </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
    <scheme-name>base-local-scheme</scheme-name>
    <eviction-policy>LRU</eviction-policy>
    <high-units>50</high-units>
    <low-units>20</low-units>
    <expiry-delay/>
    <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick-in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
    Message was edited by: planeski
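
    A rough sketch of the loop described above, using the cache name from the post; the values are illustrative:
    for (int pass = 0; pass < 2; pass++) {
        NamedCache cache = CacheFactory.getCache("TestReplCache");
        for (int i = 0; i < 50; i++) {
            cache.put(Integer.valueOf(i), "value-" + i);
        }
        System.out.println("size after 50 puts: " + cache.size());    // expected 50
        cache.put(Integer.valueOf(50), "value-50");                   // should trigger eviction
        System.out.println("size after 51st put: " + cache.size());   // expected low-units (20)
        cache.release();   // releasing (rather than destroying) is what exposes the problem on pass 2
    }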

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachments: coherence-cache-config.xml (rename 545.bin after download), LruTest.java (rename 546.bin after download).

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and near cache together with expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to have an expiry-delay configured on the backing scheme and that the near cache object would be evicted when the backing object was evicted. But according to our tests we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there will not be automatic eviction on the near cache (front scheme)?
    With this config, near cache is never cleared:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>
    With this config (added expiry-delay on front-scheme), the near cache gets cleared:
            <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme>
                                 <expiry-delay>15s</expiry-delay>
                            </local-scheme>
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>

    Hi Jakkke,
    The near cache scheme allows you to have configurable levels of cache coherency, from the most basic expiry-based cache, to an invalidation-based cache, to data versioning, depending on the coherency requirements. The near cache is commonly used to achieve the read performance of a replicated cache without losing the scalability of a distributed cache; this is achieved by having a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete set of data in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events to invalidate the entries in the <front-scheme>, based on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want to expire the entries in both the <front-scheme> and the <back-scheme>, you need to specify an expiry-delay on both schemes, as you did in the last example. If you are expiring the items in the <back-scheme> only so that they get reloaded from the cache store, while the <front-scheme> keys stay the same (only the values should be refreshed from the cache store), then you need not set an expiry-delay on the <front-scheme>; instead set the invalidation strategy to present. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you need to configure it on the <front-scheme> as well.
    The near cache can keep the front-scheme and back-scheme data in sync, but the expiry of entries is not synced. The front scheme is always a subset of the back scheme.
    Hope this helps!
    Cheers,
    NJ

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same: you need to specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    In a partitioned cache, each cache key has an owner node, which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings; although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition: the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
    Best regards,
    Robert
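
    A small sketch of querying that ownership at runtime; the cache name and key are hypothetical, and this assumes the cache runs on a partitioned (distributed) service:
    NamedCache cache = CacheFactory.getCache("example-cache");               // hypothetical cache
    PartitionedService service = (PartitionedService) cache.getCacheService();
    Member owner = service.getKeyOwner("some-key");                          // owner derived from the key
    System.out.println("Key is owned by member " + owner.getId());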

  • Strange read-through operation after entry processor work

    Hi.
    We use a combination of a cache listener and an entry processor to perform some actions when data comes into Coherence. We use Oracle Coherence Version 3.5.3/465.
    Just after the entry processor has set the new value for the entry, a new "get" operation is issued against the cache and a JDBC hit is made for this key.
    Here is the entry processor:
    public Object process(Entry entry) {
            if (!entry.isPresent()) {
                // No entities exist for this CoreMatchingString - creating new Matching unit
                MatchingUnit newUnit = new MatchingUnit(newTrade);
                entry.setValue(newUnit, true);
                return null;
            }
            ((MatchingUnit) entry.getValue()).processMatching(newTrade);
            return null;
        }
    Very interestingly, if I use entry.setValue(value) without the second parameter, I receive the DB hit right on the setValue method. According to the docs, setValue() with one parameter returns the previous value, so it is logical that the cache hit (and therefore the DB hit) happens on set. But I use the overloaded version void setValue(java.lang.Object oValue, boolean fSynthetic), which is said to be lightweight and should not fetch the previous version of the object. But it is fetched anyway - not on setValue itself, but just after the process() method is called.
    Actually it's strange that Coherence tries to fetch the previous value in a case where it didn't even exist! The cache.invoke(matchingStr, new CCPEntryProcessor(ccp)) is invoked on a non-existing record, which is created only on invocation. Maybe it's a bug, or a place for optimization.
    Thanks

    bitec wrote:
    Thanks, Robert, for such detailed answer.
    It is still not clear to me why synthetic inserts are debatable. There are lots of cases where the client simply updates/inserts a record (using setValue()) and does not need to receive the previous value. If he needs it, he will call the method:
    Hi Anton,
    it is debatable because the purpose of the fSynthetic flag is NOT so that you can optimize a cache store operation away. Synthetic event means that this is not real change in the data set triggered by the user, it is something Coherence has done on the backing map / on the cache for its own reasons and decisions to be able to provide high-availability to the data, and it only changes that particular Coherence node's subset of data but does not have a meaning related to the actually existing full data set. Such reasons are partition movement and cache eviction (or possibly any other reasons why Coherence would want to change the content of the backing map without telling the user that anything has changed).
    If you set the synthetic flag, you are trying to masquerade a real data change as an event which Coherence decided to trigger. This is why it is debatable. Also, synthetic backing map events may not always lead to dispatching cache events (for partition rebalance they definitely not). This optimization may also be extended to synthetic cache events.
    java.lang.Object setValue(java.lang.Object oValue)
    and receive the previous value. If he doesn't, he calls:
    void setValue(java.lang.Object oValue, boolean fSynthetic)
    and DOESN'T receive the previous value, as the method is declared void. Thus he cannot get the previous value at all through this API, except via a direct manual DB call.
    Yep, because Coherence is not interested in the old value in the case of a synthetic event. The synthetic methods exist so that an entry can be changed in Coherence (usually by Coherence itself) in a way that indicates a synthetic event, so that listeners are not notified.
    Some valid uses for such functionality for setValue invoked by user code could be compacting some cached value and replacing the value the backing map stores with the compacted representation, which does not mean a change in the meaning of the actual cached value, only the representation changes. Of course if the setValue(2) method does not actually honor the synthetic flag, then such functionality will still incur all the costs of a normal setValue(1) call.
    But the previous value is fetched by Coherence itself just after process() anyway, and the client never gets it!
    But any listeners on the cache may need it, for cache-semantics reasons.
    In this case I regard this as a bug, because the client using this API doesn't expect a cache hit to take place (there is no return value for this overloaded setValue() method), yet it happens and leads to extra problems resulting from the read-through mechanism.
    I would not regard it as a bug, it is probably the case of documenting a possible optimization too early, when it ultimately did not get implemented. I definitely would not try to abuse it to set a value without triggering a db fetch, as again, the intention of the synthetic flag is related not only to the cache loader functionality, but also to events and marking that a change indicates a real data change or a Coherence data management action.
    Now I understand why coherence does not know, whether this is inserted or updated value, thanks for details.
    Anton.
    *Edited: I thought about this problem from the point of view of the Oracle user, but maybe this additional hit is necessary for event producers, which need to build events containing old/new values. In that case this seems to be the correct behaviour... It seems I need some other workaround to avoid the DB hit. The best workaround is an empty load() method for the cache store...
    You can try to find a workaround, but it is an ugly can of worms, because of the scenario where more than one thread tries to load the same entry, and some of them try to load it for a real reason.
    You may try to put some data into a thread-local to indicate that you don't really want to load that particular key. The problem is that, depending on configuration and race conditions, your cache loader may not be called to clean up that thread-local, as some other thread may be invoked at just that moment, and in that case its return value is going to be returned to all the other threads too, so you may end up with a polluted thread-local.
    Best regards,
    Robert
    Edited by: robvarga on Oct 15, 2010 4:25 PM
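
    A minimal sketch of the empty-load() workaround mentioned above; the class name is hypothetical, and the caveats about concurrent loads still apply:
    import com.tangosol.net.cache.AbstractCacheLoader;

    public class NoOpCacheLoader extends AbstractCacheLoader {
        public Object load(Object oKey) {
            // Never hit the database on a cache miss; read-through becomes a no-op.
            return null;
        }
    }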
