Failed to replicate entries in replicated cache

We are using a replicated cache scheme to store configuration items in the cluster. Once in a while we need to refresh those items in the cache. We have noticed that during this process some items fail to replicate to other nodes, even though the update succeeded on the primary node.
We are using Coherence 3.7.1.8
We have observed the following messages in the logs of the nodes where replication failed:
Rejected update: Lease: EQ_ETN_STATIC (Cache=mds_guest, Size=77, Version=1/3, IssuerId=18, HolderId=0, Status=LEASE_AVAILABLE, Last locked at Thu Jan 01 01:00:00 GMT 1970)
by Lease: EQ_ETN_STATIC (Cache=mds_guest, Size=77, Version=1/1, IssuerId=18, HolderId=0, Status=LEASE_AVAILABLE, Last locked at Mon Dec 30 13:36:10 GMT 2013)

Hello,
Most non-serializable object issues come from one of the following two causes:
1. An application code issue
2. A WLS product bug (occasionally)
Please refer to the following Oracle KM article for more on troubleshooting serialization issues.
==> How To Troubleshoot Serialization Problems (Doc ID 1390822.1)
You can also refer to the KM article below, which includes a test JSP page that can detect non-serializable objects in a web application.
==> Session Replication Fails Due To Non-Serializable Object: JSP Test Page (Doc ID 1073386.1)
Hope this helps.
Regards,
Suraj

Similar Messages

  • %APF-4-CREATE_PMK_CACHE_FAILED: apf_pmkcache.c:561 Attempt to insert PMK to the key cache failed. unable to insert a new entry in PMK cache list.Length: 32. Station:

    I received this error with an Apple iPad on iOS 7.1.2 connecting to a Cisco WiSM controller running version 7.0.250; the access points are set up in local/H-REAP mode.
    Below are the syslog messages:
    : *mmListen: Aug 25 10:38:35.962: %APF-4-CREATE_PMK_CACHE_FAILED: apf_pmkcache.c:561 Attempt to insert PMK to the key cache failed. unable to insert a new entry in PMK cache list.Length: 32. Station:98:fe:94:90:70:ef
    : *mmListen: Aug 25 10:38:35.962: %MM-4-PMKCACHE_ADD_FAILED: mm_listen.c:6479 Failed to create PMK/CCKM cache entry for station 98:fe:94:90:70:ef with update from controller
    : *dot1xMsgTask: Aug 25 12:09:27.883: %DOT1X-3-MAX_EAP_RETRIES: 1x_auth_pae.c:3092 Max EAP identity request retries (3) exceeded for client 98:fe:94:90:70:ef
    : *dot1xMsgTask: Aug 25 12:57:32.503: %DOT1X-3-MAX_EAP_RETRIES: 1x_auth_pae.c:3092 Max EAP identity request retries (3) exceeded for client 98:fe:94:90:70:ef

    Hi Ravi Rai,
    I'm sorry to hear about the issue you are having with your Mac. If you are having hard freezes or restart issues that don't appear to be drive-related, even after reinstalling Mavericks, you may want to try the troubleshooting steps outlined in the following article:
    OS X: When your computer spontaneously restarts or displays "Your computer restarted because of a problem."
    Regards,
    - Brenden

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    The documentation says that entry processors on replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK

    Hi Jonathan,
    In a replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is always owned by the last node to have carried out a successful modification on it, where a modification may be a put/remove but can also be a lock operation. Lease granularity is per entry.
    Practically, the lock operation in the code Dimitri posted serves two purposes: first, it ensures no other node can lock the entry; second, it brings the lease to the locking node, so it can correctly execute the entry processor locally on the entry.
    Best regards,
    Robert

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single Coherence node with a replicated cache, but when I try to add an object to it I get the exception below. I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I might be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Replicated caching scheme.
    -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: user1945969 on Jun 5, 2010 4:16 PM

    By default it should have used FIXED as the unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> to your cache config for the replicated cache?
    Or try inserting an object (both key and value) that implements Binary.
    Check the unit-calculator part on this link
    http://wiki.tangosol.com/display/COH35UG/local-scheme
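    For reference, the element would go inside the local backing-map scheme; a minimal sketch, reusing the scheme names from the poster's config:

```xml
<replicated-scheme>
    <scheme-name>MY-replicated-cache-scheme</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
        <local-scheme>
            <!-- Count each entry as one unit instead of its serialized size -->
            <unit-calculator>FIXED</unit-calculator>
        </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
</replicated-scheme>
```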

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    Write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc) or you can write your own if you don't find a suitable one. Configuration is the same, you need to specify the cache store class name appropriately in the <cache-store-scheme> child element of the <read-write-backing-map> element.
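    A sketch of that configuration (the CacheStore class name here is hypothetical):

```xml
<distributed-scheme>
    <scheme-name>db-backed-scheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
        <read-write-backing-map-scheme>
            <internal-cache-scheme>
                <local-scheme/>
            </internal-cache-scheme>
            <cachestore-scheme>
                <class-scheme>
                    <!-- Hypothetical custom CacheStore implementation -->
                    <class-name>com.example.MyCacheStore</class-name>
                </class-scheme>
            </cachestore-scheme>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>
```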
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings, although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know, who owns a certain partition. Hence, the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
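    As a simplified, self-contained illustration (this is not Coherence's actual algorithm, which maps the serialized binary form of the key to one of a configured number of partitions), the key property is that the mapping is deterministic:

```java
public class PartitionDemo {
    // Simplified stand-in for Coherence's key-to-partition mapping:
    // the same key always lands in the same partition, so ownership
    // depends only on which node currently owns that partition.
    public static int partitionOf(Object key, int partitionCount) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        int p1 = partitionOf("order-42", 257);
        int p2 = partitionOf("order-42", 257);
        System.out.println(p1 == p2); // always true: the mapping is deterministic
    }
}
```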
    Best regards,
    Robert

  • Getting All Entries from a cache

    Hi Folks,
         Just a small interesting observation. In an attempt to get back all the data from my partitioned cache I tried the following approaches:
          //EntrySet
          NamedCache cache = CacheFactory.getCache("MyCache");
          Iterator<Entry<MyKeyObj, MyObj>> iter = cache.entrySet().iterator();
          //iterate over entries and get values

          //KeySet & getAll
          NamedCache cache = CacheFactory.getCache("MyCache");
          Map results = cache.getAll(cache.keySet());
          Iterator<Entry<MyKeyObj, MyObj>> iter = results.entrySet().iterator();
          //iterate over entries and get values
          Retrieving ~47k objects from 4 nodes takes 21 seconds using the entrySet approach and 10 seconds for the keySet/getAll approach.
          Does that sound right to you? That implies that the entrySet iterator is lazily loaded, using get(key) for each entry.
         Regards,
         Max

    Hi Gene,
         I actually posted the question, because we are currently performance-tuning our application, and there are scenarios where (due to having a large amount of badly organized legacy code with the bottom layers ported to Coherence) there are lots of invocations getting all the entries from some caches, sometimes even hundreds of times during the processing of a HTTP request.
         In some cases (typically with caches having a low cache-size) we found, that the entrySet-AlwaysFilter solution was way faster than the keyset-getall solution, which was about as fast as the solution iterating over the cache (new HashMap(cache)).
          I just wanted to ask if there are some rules of thumb on up to what cache size it is efficient to use the AlwaysFilter on distributed caches, and where it starts to be better to use the keySet-getAll approach (from a naive test case, keySet-getAll seemed to be better upwards of a couple thousand entries).
          Also, we are considering moving some of the caches (static data mostly, with usually fewer than 1000 entries, sometimes as few as a dozen entries in a named cache, and in very few cases as many as 40000 entries) to a replicated topology, which is why I asked about the effect of using replicated caches...
         I expect the entrySet-AlwaysFilter to be slower than the iterating solution, since it effectively does the same, and also has some additional filter evaluation to be done.
         The keySet-getall will be something similar to the iterating solution, I guess.
         What is to be known about the implementation of the values() method?
         Can it be worth using in some cases? Does it give us an instant snapshot in case of replicated caches? Is it faster than the entrySet(void) in case of replicated caches?
         Thanks and best regards,
         Robert

  • Problem using Binary wrappers in a Replicated cache

    I'd like to store my keys and values as Binary objects in a replicated cache so I can monitor their size using a BINARY unit-calculator. However, when I attempt to put a key and value as com.tangosol.util.Binary, I get the enclosed IOException.
    I have a very simple sample to reproduce the exception, but I don't see a way to attach it here.
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Failed to deserialize a key for cache MyMap
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): An exception (java.io.IOException) occurred reading Message LeaseUpdate Type=9 for Service=ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=2, Version=3.0, OldestMemberId=1}
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Terminating ReplicatedCache due to unhandled exception: java.io.IOException
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1):
    java.io.IOException: unsupported type / corrupted stream
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2162)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:3)
         at com.tangosol.coherence.component.net.message.LeaseMessage.read(LeaseMessage.CDB:11)
         at com.tangosol.coherence.component.net.message.leaseMessage.ResourceMessage.read(ResourceMessage.CDB:5)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:110)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
         at java.lang.Thread.run(Thread.java:619)
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <D5> (thread=ReplicatedCache, member=1): Service ReplicatedCache left the cluster

    Hi John,
    Currently the Replicated cache service uses Binary only for internal value representation and does not allow storing arbitrary Binary objects in replicated caches. You should have no problems using partitioned (Distributed) cache service with this approach.
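    If you do switch to a distributed scheme, the BINARY unit-calculator you mention can be configured along these lines (scheme names and the high-units value are illustrative):

```xml
<distributed-scheme>
    <scheme-name>binary-metered-scheme</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
        <local-scheme>
            <!-- Cap the backing map at roughly 100 MB of serialized data -->
            <high-units>104857600</high-units>
            <!-- Meter cache size by serialized (binary) entry size -->
            <unit-calculator>BINARY</unit-calculator>
        </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>
```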
    Regards,
    Gene

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
        <scheme-name>local-repl-scheme</scheme-name>
        <backing-map-scheme>
            <local-scheme>
                <scheme-ref>base-local-scheme</scheme-ref>
            </local-scheme>
        </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
        <scheme-name>base-local-scheme</scheme-name>
        <eviction-policy>LRU</eviction-policy>
        <high-units>50</high-units>
        <low-units>20</low-units>
        <expiry-delay/>
        <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick-in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
    Message was edited by: planeski

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.

    Attachment: coherence-cache-config.xml (*To use this attachment you will need to rename 545.bin to coherence-cache-config.xml after the download is complete.)
    Attachment: LruTest.java (*To use this attachment you will need to rename 546.bin to LruTest.java after the download is complete.)

  • Are put()'s to a replicated cache atomic?

    We are using Coherence for a storage application and we're observing what may be a Coherence issue with latency on put()'s to a replicated cache.
    We have two software nodes on different servers sharing information in the cache and I need to know if I can count on a put() being atomic with respect to all software and hardware nodes in the grid.
    Does anyone know if these operations are guaranteed to be atomic on replicated caches, and that an entry will be visible to ALL nodes when the put() returns?
    Thanks,
    Stacy Maydew

    You could use explicit locking, for example:

    if (cache.lock(key, timeout)) {
        try {
            Object value = cache.get(key);
            cache.put(key, value);
        } finally {
            cache.unlock(key);
        }
    } else {
        // decide what to do if you cannot obtain a lock
    }

    Note that when using explicit locking you will require multiple trips to the cache server: to lock the entry, to retrieve the value, to update it, and to unlock it, which increases the latency.
    You can also use an entry processor, which carries the information needed for the update.
    An entry processor eliminates the need for explicit concurrency control.
    An example:

    public class KlantNaamEntryProcessor extends AbstractProcessor implements Serializable {
        public KlantNaamEntryProcessor() {
        }

        public Object process(InvocableMap.Entry entry) {
            Klant klant = (Klant) entry.getValue();
            klant.setNaam(klant.getNaam().toLowerCase());
            entry.setValue(klant);
            return klant;
        }
    }

    and its usage:

    cache.invokeAll(filter, new KlantNaamEntryProcessor()).entrySet().iterator();

    Better examples can be found here: http://wiki.tangosol.com/display/COH33UG/Transactions,+Locks+and+Concurrency

  • Failed to replicate non-serializable object  Weblogic 10 3 Cluster environ

    Hi,
    We have a problem in a cluster environment: it reports that all objects in the session need to be serializable. Is there any tool or other way to find which objects in the session are not serializable? There was no issue in WLS 8, but when the application was set up again in WLS 10 we started facing this session replication problem.
    The setup has two managed server instances in a cluster, set to multicast (we also tried unicast).
    stacktrace:
    ####<Jun 30, 2010 7:11:16 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939476284> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    ####<Jun 30, 2010 7:11:19 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939479750> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    Thanks,

    Hi
    Irrespective of WLS 8.x, 9.x, or 10.x, in general any object that needs to be synced/replicated across the servers in a cluster must implement Serializable; the object should be able to marshal and unmarshal. The simple reason it did not show up in WLS 8.x may be that that version did not surface these details as an Error; it may have logged them as Info or Warn, or just ignored them. WebLogic Server became more and more stable and efficient over the years, from its oldest versions (4.x, 5.x, 6.x, 7.x) to the latest 10.x, so my guess is that more logic and functionality was added to capture all possible errors and scenarios. I worked on WLS 8.1 SP4 to SP6 a long time back and cannot remember whether I saw these errors for a cluster domain with non-serializable objects (I vaguely remember seeing them for portal domains, but I am not sure). I no longer have 8.x installed, otherwise I would have given it a quick try to confirm.
    So even though it did not show up in WLS 8.x, the underlying rule is still that any object that needs to be replicated in a cluster must implement the Serializable interface.
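    To find the offending attribute yourself, a quick self-contained check (a hypothetical helper, not a WLS API) is to attempt standard Java serialization on each session attribute:

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

public class SerializationCheck {
    // Returns true if the whole object graph can be Java-serialized.
    public static boolean isSerializable(Object o) {
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(OutputStream.nullOutputStream())) {
            oos.writeObject(o);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable("a string"));   // String implements Serializable
        System.out.println(isSerializable(new Object())); // plain Object does not
    }
}
```

    In a web application you would loop over session.getAttributeNames() and log each attribute for which this returns false.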
    Thanks
    Ravi Jegga

  • Replicated Cache Data Visibility

    Hello
    When is the cache entry visible when doing a put(..) to a Replicated cache?
    I've been reading the documentation for the Replicated Cache Topology, and it's not clear when data that has been put(..) is visible to other nodes.
    For example, if I have a cluster comprised of 26 nodes (A through Z), and I invoke replicatedCache.put("foo", "bar") from member A, at what point is the Map.Entry("foo", "bar") present and queryable on member B? Is it as soon as it has been put into the local storage on B? Or is it only just before the call to put(..) on member A returns successfully? While the put(..) from member A is "in flight", is it possible for two simultaneous reads on members F and Y to return different results because the put(..) hasn't yet completed on one of the nodes?
    Regards
    Pete

    Hi Pete,
    As the data replication is done asynchronously (you may refer to this post: Re: Performance of replicated cache vs. distributed cache), you may read different results on different nodes.
    Furthermore, may I ask what your use case for the replicated cache is?
    Regards,
    Rock
    Oracle ACS

  • Read through Scheme in Replicated Cache with berkely db

    Hi, I have 20 GB of data, and when restarting the server I need to populate all of it into the Coherence cache. If I create a pre-load Java class it will take 30 minutes to an hour to load the data into the cache, and if any request arrives while loading, how can I respond to it? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to combine read-through + replicated cache + Berkeley DB? If yes, please post sample code with a full reference. Thanks in advance.
    Edited by: 875786 on Dec 5, 2011 8:10 PM

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
    "To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching."
    So it would appear that you cannot do read-through with a replicated cache - which makes sense when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data, or do you - in which case you must have tuned GC very well. You say you are bothered about NFRs and network latency yet you are building a system that will require either very big heaps, and hence at some point there will be long GCs or you cannot hold all the data in memory so you have to configure expiry and then you have the latency of reading the data from the DB.
    If you are using read-through then presumably your use case does not require all the data to be in the cache - i.e. all your data access is get-by-key and you do not do any filter queries. If this is the case then use a distributed cache, where you can store all the data or use read-through. If all your access is key-based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs hold the data and configure near-caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
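    A near-cache configuration of the kind suggested above might look like this (scheme names and sizes are hypothetical):

```xml
<near-scheme>
    <scheme-name>near-data-scheme</scheme-name>
    <front-scheme>
        <local-scheme>
            <!-- Small front cache held in the application JVM -->
            <high-units>10000</high-units>
        </local-scheme>
    </front-scheme>
    <back-scheme>
        <distributed-scheme>
            <scheme-name>data-scheme</scheme-name>
            <service-name>DistributedCache</service-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
    </back-scheme>
    <!-- Invalidate front-cache entries when they change in the back cache -->
    <invalidation-strategy>present</invalidation-strategy>
</near-scheme>
```

    The application JVMs would then join the cluster as storage-disabled members, with dedicated cache server JVMs holding the partitioned data.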
    JK

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an admin-style module from which data will be inserted/updated, and the other application reads data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store that performs the DB operations.
    I have the following Coherence configuration. It works fine, and the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it is throwing the exception, the application is working as expected. One other thing: I am pre-loading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
     <?xml version="1.0"?>
     <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
     <cache-config>
         <caching-scheme-mapping>
             <cache-mapping>
                 <cache-name>TestCache</cache-name>
                 <scheme-name>TestCacheDB</scheme-name>
             </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
             <replicated-scheme>
                 <scheme-name>TestCacheDB</scheme-name>
                 <service-name>ReplicatedCache</service-name>
                 <backing-map-scheme>
                     <local-scheme>
                         <scheme-name>TestDBLocal</scheme-name>
                         <cachestore-scheme>
                             <class-scheme>
                                 <class-name>test.TestCacheStore</class-name>
                                 <init-params>
                                     <init-param>
                                         <param-type>java.lang.String</param-type>
                                         <param-value>TEST_SUPPORT</param-value>
                                     </init-param>
                                 </init-params>
                             </class-scheme>
                         </cachestore-scheme>
                     </local-scheme>
                 </backing-map-scheme>
                 <listener/>
                 <autostart>true</autostart>
             </replicated-scheme>
             <!--
             Proxy Service scheme that allows remote clients to connect to the
             cluster over TCP/IP.
             -->
             <proxy-scheme>
                 <scheme-name>proxy</scheme-name>
                 <service-name>ProxyService</service-name>
                 <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
                 <acceptor-config>
                     <tcp-acceptor>
                         <local-address>
                             <address system-property="tangosol.coherence.extend.address">localhost</address>
                             <port system-property="tangosol.coherence.extend.port">7001</port>
                             <reusable>true</reusable>
                         </local-address>
                     </tcp-acceptor>
                     <serializer>
                         <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                         <init-params>
                             <init-param>
                                 <param-type>String</param-type>
                                 <param-value>pof-config.xml</param-value>
                             </init-param>
                         </init-params>
                     </serializer>
                 </acceptor-config>
                 <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
             </proxy-scheme>
         </caching-schemes>
     </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.ClassCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
    at test.TestCacheStore.store(TestCacheStore.java:137)
    at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
    at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
    at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
    at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
    at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the distribution due to 128 pending configuration updates
    TestBean.java
    import java.io.IOException;
    import java.io.Serializable;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    public class TestBean implements PortableObject, Serializable {
         private static final long serialVersionUID = 1L;
         private String name;
         private String number;
         private String taskType;

         public String getName() {
              return name;
         }
         public void setName(String name) {
              this.name = name;
         }
         public String getNumber() {
              return number;
         }
         public void setNumber(String number) {
              this.number = number;
         }
         public String getTaskType() {
              return taskType;
         }
         public void setTaskType(String taskType) {
              this.taskType = taskType;
         }

         @Override
         public void readExternal(PofReader reader) throws IOException {
              name = reader.readString(0);
              number = reader.readString(1);
              taskType = reader.readString(2);
         }

         @Override
         public void writeExternal(PofWriter writer) throws IOException {
              writer.writeString(0, name);
              writer.writeString(1, number);
              writer.writeString(2, taskType);
         }
    }
    TestCacheStore.java
    import com.tangosol.net.cache.CacheStore;
    import com.tangosol.util.Base;

    public class TestCacheStore extends Base implements CacheStore {
         @Override
         public void store(Object oKey, Object oValue) {
              if (logger.isInfoEnabled())
                   logger.info("store :" + oKey);
              TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
              // Doing some processing here over testBean
              ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
              // Get the connection
              Connection con = connectionFactory.getConnection();
              if (con != null) {
                   // Code to insert into the database
              } else {
                   logger.error("Connection is NULL");
              }
         }
    }
    Edited by: user8279242 on Aug 30, 2010 11:44 PM

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave
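    For reference, the usual fix is to re-map the same cache onto a distributed scheme whose backing map is a read-write-backing-map wrapping the cache store. A minimal sketch, reusing the scheme, class, and parameter names from the config posted above:

    ```
    <distributed-scheme>
      <scheme-name>TestCacheDB</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <!-- in-memory map actually holding the entries -->
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <!-- store/load is invoked only on the partition owner -->
          <cachestore-scheme>
            <class-scheme>
              <class-name>test.TestCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>TEST_SUPPORT</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```

    Unlike the replicated scheme, here the store method is called with the deserialized object on the storage-owning member, which also avoids the Binary ClassCastException shown in the trace.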

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extremely) low-latency requirement.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed - which might be the best option if the values are large and accessed infrequently, or if you only care about having an up-to-date keyset locally - you can pass false as the third argument to the CQC constructor.
    To get data from the CQC you can use:
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
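    A sketch of that key-only variant, putting the pieces together (the cache name is a placeholder; this fragment needs a running Coherence cluster and coherence.jar on the classpath, so it is illustrative rather than standalone-runnable):

    ```java
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.filter.AlwaysFilter;

    public class KeyOnlyCqcExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("somecache");

            // Third argument false => only keys are materialized locally;
            // values are fetched from the back cache on demand by get().
            ContinuousQueryCache keysOnly =
                new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, false);

            // The keyset is kept up to date by event delivery from the cluster.
            System.out.println("Local keyset size: " + keysOnly.keySet().size());

            CacheFactory.shutdown();
        }
    }
    ```

    This trades the doubled storage footprint of a value-caching CQC for an extra network hop on each cache miss.
    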

  • Service SRM PO fail to replicate to SAP with error"Tax code Mand. for ERS"

    HI
    We are facing an issue posting service POs for a vendor enabled with the ERS functionality.
    Service purchase orders with the ERS flag and complete tax details fail to replicate to the MM PO. The error found in the XML on the ECC interface is: "Enter Tax code in case of Evaluated receipt settlement".
    We have complete tax details in the XML, but they are not recognized. When we manually feed the details in debug, it allows us to post the document.
    The PO is created with an account-assigned service item.
    Has anyone come across a similar situation?
    Regards
    Prashanth K Saralaya

    Hi Prashanth,
    Do you have a tax code maintained in the SRM PO? The SRM PO is pushed to ERP using RFC calls, not XML (in standard scenarios), so I am wondering why you are checking the XML in debug mode for an SRM PO making its way to ERP.
    May I have the background of your requirement?
    Regards
    Virender Singh
