One concern about replicated caches

For replicated caches, I understand that changes are replicated asynchronously. How, then, should I understand the claim that update operations achieve "bad" performance when many nodes exist? Could you please explain the costs involved? Thanks!
Best regards,
Michael

user8024986 wrote:
Hi Robert,
That sounds reasonable: unicast and multicast messages are sent out without having to wait for responses, and multicast reduces the amount of traffic on the network at the same time.
My remaining concern is this: for a replicated cache, any changes are sent as messages to the other nodes asynchronously (no need to wait for the response, whether unicast or multicast), so the cost is mainly the sending of the changes, which depends on the state of the network. If the network capacity is high enough, and since the messages are sent asynchronously, the performance impact should be limited, right?
Thanks,
Michael

Michael,
it may not have been clear, but what Aleks said is still true. The interleaving means that messages are sent out to the recipient nodes without waiting for a response before sending to the next node, but the cache.put() call returns only after a positive response has arrived from every cache node confirming that the update was incorporated into its own copy of the data (or after the death of a recipient node was detected, in which case its response is no longer waited for).
So the overall cost on the network consists of both the sends and the responses, and since a response generally goes to a single node (the sender of the message being replied to), even a multicast update results in several unicast responses.
But yes, the more cluster nodes there are, the larger the load this puts on the network.
There are several measures in Coherence that try to reduce this effect on the network, e.g. bundling together messages or ACKs to the same destination, which allows them to be sent in fewer packets than if they were sent alone (particularly effective for small messages and ACKs). This works best, however, when there are many threads on each node doing cache operations, as that increases the likelihood of multiple messages/ACKs being sent to the same node at roughly the same time.
But in general, if you have frequent writes to a replicated cache, you can't really scale it beyond a point (a certain number of cluster nodes) without saturating the network, and you should consider switching to a partitioned cache (distributed cache) instead. Even near and continuous query caches are not really effective for write-heavy caches (more writes than reads).
Even if the network is able to keep up, more messages still lengthen the queue of messages each node has to respond to, so more messages would probably mean longer response times.
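For reference, a minimal sketch of the partitioned alternative mentioned above (scheme and service names are illustrative, not from this thread):

<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

With a distributed scheme, each update travels only to the partition owner (plus its backups) instead of to every node, so write traffic scales with the backup count rather than with the cluster size.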
Best regards,
Robert

Similar Messages

  • Using CacheLoader for replicated cache in Coherence 3.6

    Hi,
    Is it possible to configure a CacheLoader for a replicated cache in Coherence 3.6? The backing map will be a local scheme for this cache.
    Regards,
    CodeSlave

    We have a "start of day" process that just runs up a Java client (full cluster member, but storage disabled node) that clears and then repopulates a number of "reference data" replicated caches we use in our application. Use the hints-n-tips in the Coherence Developer's Guide (bulk operations, etc.) to get decent performance. We load the data from an Oracle database. Again, tune the extract side (JDBC/JPA batching, etc.) to get that side of things performing well.
    For ad-hoc, intra day updates to the replicated caches (and you should look to minimise these), we use a "listener" that attaches to an Oracle database DCN (data change notification) stream.
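    A minimal sketch of the kind of batched "start of day" load described above (the cache name, batch size, and the loadRowsFromOracle() helper are illustrative assumptions, not from the original post):

    import java.util.HashMap;
    import java.util.Map;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class StartOfDayLoader {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("reference-data"); // illustrative cache name
            cache.clear();
            Map<Object, Object> batch = new HashMap<Object, Object>();
            for (Map.Entry<Object, Object> row : loadRowsFromOracle()) { // hypothetical JDBC extract
                batch.put(row.getKey(), row.getValue());
                if (batch.size() == 1000) {       // bulk putAll instead of per-entry put
                    cache.putAll(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                cache.putAll(batch);
            }
        }

        // Hypothetical placeholder for the JDBC/JPA extract side.
        private static Iterable<Map.Entry<Object, Object>> loadRowsFromOracle() {
            return new HashMap<Object, Object>().entrySet();
        }
    }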
    Cheers,
    Steve

  • Problem with Expiry Period for Multiple Caches in One Configuration File

    I need a cache system with multiple expiry periods, i.e. some records should exist for, let's say, 1 hour, some for 3 hours, and others for 6 hours. To achieve this, I am trying to define multiple caches in the config file and, based on the data, I choose the cache with the appropriate expiry period. That's where I am facing this problem. I am able to create the caches in the config file, and they have different eviction policies, i.e. 1 hour for Cache1 and 3 hours for Cache2. However, the data stored in Cache1 does not expire after 1 hour; it expires after the expiry period of the other cache, i.e. Cache2.
    Please correct me if I am not following the correct way of achieving this. I am attaching the config file here: near-cache-config1.xml

    Hi Rajneesh,
    In your cache mapping section, you have two wildcard mappings ("*"). These provide an ambiguous mapping for all cache names.
    Rather than doing this, you should have a cache mapping for each cache scheme that you are using -- in your case the 1-hour and 3-hour schemes.
    I would suggest removing one (or both) of the "*" mappings and adding entries along the lines of:
    <pre>
    <cache-mapping>
      <cache-name>near-1hr-*</cache-name>
      <scheme-name>default-near</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>near-3hr-*</cache-name>
      <scheme-name>default-away</scheme-name>
    </cache-mapping>
    </pre>
    With this scheme, any cache that starts with "near-1hr-" (e.g. "near-1hr-Cache1") will have 1-hour expiry. And any cache that starts with "near-3hr-" will have 3-hour expiry. Or, to map your cache schemes on a per-cache basis, in your case you may replace "near-1hr-*" and "near-3hr-*" with Cache1 and Cache2 (respectively).
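    For completeness, a sketch of how the two referenced schemes might differ only in their expiry period (scheme names and values are illustrative; in a near-cache definition the expiry-delay belongs on the appropriate local scheme):

    <local-scheme>
      <scheme-name>scheme-1hr</scheme-name>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
    <local-scheme>
      <scheme-name>scheme-3hr</scheme-name>
      <expiry-delay>3h</expiry-delay>
    </local-scheme>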
    Jon Purdy
    Tangosol, Inc.

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs and forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what circumstances do I have to lock an entry?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance: note, however, that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the Distributed cache topology (not Replicated) for transactional data. Distributed is in fact fully synchronous.
    The other reason people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact, most of the complexity of the Java Memory Model arises from the desire to avoid requiring simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is that, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
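    As a concrete illustration, a minimal sketch of the explicit locking mentioned above (the cache name and key are illustrative):

    NamedCache cache = CacheFactory.getCache("example");
    if (cache.lock("foo", -1)) {   // -1 waits indefinitely for the lock (lease)
        try {
            String value = (String) cache.get("foo");
            cache.put("foo", value + "-updated");   // the read-modify-write is now race-free
        } finally {
            cache.unlock("foo");
        }
    }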
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault-tolerant system that will replicate cache entries across a small set of systems. I want the cache to be persistent even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation, or is there a configuration that uses off-the-shelf pluggable caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through operation, how does Coherence figure out which member of the cluster will do the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same either way: you specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
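    A minimal sketch of that configuration (scheme and class names are illustrative):

    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.example.MyCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>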
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node, which is determined algorithmically from the key itself and from the distribution of partitions among nodes (neither of which depends on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings; although if you changed the latter, it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition: the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
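    If you are curious, you can ask Coherence directly which member owns a given key; a sketch (the cache name and key are illustrative):

    NamedCache cache = CacheFactory.getCache("example");
    PartitionedService service = (PartitionedService) cache.getCacheService();
    Member owner = service.getKeyOwner("someKey"); // the node that handles operations on this key
    System.out.println("Key owner: " + owner);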
    Best regards,
    Robert

  • Replicated Cache Data Visibility

    Hello
    When is the cache entry visible when doing a put(..) to a Replicated cache?
    I've been reading the documentation for the Replicated Cache topology, and it's not clear when data that has been put(..) becomes visible to other nodes.
    For example, if I have a cluster comprised of 26 nodes (A through Z), and I invoke replicatedCache.put("foo", "bar") from member A, at what point is the Map.Entry("foo", "bar") present and queryable on member B? Is it as soon as it has been put into the local storage on B? Or is it only just before the call to put(..) on member A returns successfully? While the put(..) from member A is "in flight", is it possible for two simultaneous reads on members F and Y to return different results because the put(..) hasn't yet been applied successfully on one of the nodes?
    Regards
    Pete

    Hi Pete,
    Data replication is done asynchronously (you may refer to this post: Re: Performance of replicated cache vs. distributed cache), so while an update is in flight you may read different results on different nodes.
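    If it helps, a sketch of observing when an update becomes visible on a given node, using a MapListener (the cache name is illustrative):

    NamedCache cache = CacheFactory.getCache("example");
    cache.addMapListener(new MultiplexingMapListener() {
        protected void onMapEvent(MapEvent evt) {
            // Fires on this node once the change has been applied to its local copy.
            System.out.println("Visible here: " + evt);
        }
    });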
    Furthermore, may I know your use case on replicated cache?
    Regards,
    Rock
    Oracle ACS

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an Admin kind of module from which data is inserted/updated, and the other application reads data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store that performs the DB operation.
    I have the following Coherence configuration. It works fine, and the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it throws the exception, the application works as expected. One other thing: I am pre-loading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TestCache</cache-name>
          <scheme-name>TestCacheDB</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <replicated-scheme>
          <scheme-name>TestCacheDB</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-name>TestDBLocal</scheme-name>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>test.TestCacheStore</class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>TEST_SUPPORT</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </local-scheme>
          </backing-map-scheme>
          <listener/>
          <autostart>true</autostart>
        </replicated-scheme>
        <!--
        Proxy Service scheme that allows remote clients to connect to the
        cluster over TCP/IP.
        -->
        <proxy-scheme>
          <scheme-name>proxy</scheme-name>
          <service-name>ProxyService</service-name>
          <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="tangosol.coherence.extend.address">localhost</address>
                <port system-property="tangosol.coherence.extend.port">7001</port>
                <reusable>true</reusable>
              </local-address>
            </tcp-acceptor>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
          </acceptor-config>
          <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.ClassCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
         at test.TestCacheStore.store(TestCacheStore.java:137)
         at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
         at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
         at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
         at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
         at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the distribution due to 128 pending configuration updates
    TestBean.java
    import java.io.IOException;
    import java.io.Serializable;

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    public class TestBean implements PortableObject, Serializable {
        private static final long serialVersionUID = 1L;
        private String name;
        private String number;
        private String taskType;

        public String getName() {
            return name;
        }
        public void setName(String name) {
            this.name = name;
        }
        public String getNumber() {
            return number; // the original post had "return productId;" here, which does not compile
        }
        public void setNumber(String number) {
            this.number = number;
        }
        public String getTaskType() {
            return taskType;
        }
        public void setTaskType(String taskType) {
            this.taskType = taskType;
        }

        @Override
        public void readExternal(PofReader reader) throws IOException {
            name = reader.readString(0);
            number = reader.readString(1);
            taskType = reader.readString(2);
        }

        @Override
        public void writeExternal(PofWriter writer) throws IOException {
            writer.writeString(0, name);
            writer.writeString(1, number);
            writer.writeString(2, taskType);
        }
    }
    TestCacheStore.java
    public class TestCacheStore extends Base implements CacheStore {
        @Override
        public void store(Object oKey, Object oValue) {
            if (logger.isInfoEnabled())
                logger.info("store: " + oKey);
            TestBean testBean = (TestBean) oValue; // ClassCastException thrown here
            // Some processing over testBean, then persist it:
            ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
            // Get the connection
            Connection con = connectionFactory.getConnection();
            if (con != null) {
                // Code to insert into the database
            } else {
                logger.error("Connection is NULL");
            }
        }
        // The logger field and the remaining CacheStore methods (load, loadAll,
        // storeAll, erase, eraseAll) were omitted from the original post.
    }
    Edited by: user8279242 on Aug 30, 2010 11:44 PM

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out:
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM:
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., the pause times it introduces) could break your (extreme) low-latency demand.
    You could try the CQC with the AlwaysFilter:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.filter.AlwaysFilter;

    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code results in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values are cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed, which might be the best option if the values are large and accessed infrequently, or if you only care about having an up-to-date keyset locally, you can pass false as the third argument to the CQC constructor.
    To get data from the CQC you can use:
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
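    And, per the note above about caching only keys, the keys-only view would be constructed like this (a sketch; the key is illustrative):

    ContinuousQueryCache keysOnly = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, false);
    Object value = keysOnly.get("someKey"); // values are fetched from the back cache on demand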

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    The documentation says that entry processors invoked against a replicated cache are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of an owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK

    Hi Jonathan,
    in the replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is always owned by the last node to have carried out a successful modification on it, where a modification may be a put/remove but also a lock operation. Lease granularity is per entry.
    Practically, the lock operation in the code Dimitri pasted serves two purposes: first, it ensures that no other node can lock the entry; second, it brings the lease to the locking node, so it can correctly execute the entry processor locally on the entry.
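    A sketch of that lock-then-invoke pattern (the cache name, key, and SomeProcessor are hypothetical; Dimitri's original code is not shown in this thread):

    NamedCache cache = CacheFactory.getCache("example");
    if (cache.lock(key, -1)) {                      // brings the lease to this node
        try {
            cache.invoke(key, new SomeProcessor()); // hypothetical EntryProcessor, now executed locally
        } finally {
            cache.unlock(key);
        }
    }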
    Best regards,
    Robert

  • One ibot for multiple reports (or all reports in one topic/folder)

    Hello,
    Is it possible to set up the iBot for report caching to consider a whole bunch of reports, rather than setting up one iBot for every single report?
    Regards,
    metalray

    Hello ADB,
    Thanks for that. I wonder how to prove that the caching worked, or that it has been done.
    I noticed the .TBL files in the cache directory, but that does not tell me much. Using the OBI Administration tool and looking at the cache on the physical layer did not help me understand either how all the reports in my dashboard are now cached by running the iBot at midnight.
    Out of the BI Answers, Delivers and Dashboards manual:
    "NOTE: The Oracle BI Server maintains its own cache. This cache is separate from the Oracle BI
    Presentation Services cache.
    When you select a specific dashboard or request, Oracle BI Presentation Services checks its cache
    to determine if the identical results have recently been requested. If so, Oracle BI Presentation
    Services returns the most recent results, thereby avoiding unnecessary processing by the Oracle BI
    Server and the back-end database. If not, the request is issued to the Oracle BI Server for
    processing."
    So basically I want to know how I can see that BI Presentation Services has really cached the reports in my dashboard, i.e. whether my iBot works.
    Thanks,
    metalray

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache, but when I try to add an object to it I get the exception below. However, I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Replicated caching scheme.
    -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: user1945969 on Jun 5, 2010 4:16 PM

    By default, it should have used FIXED as the unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> to your cache config for the replicated cache?
    Or just try inserting an object (both key and value) which implements Binary.
    Check the unit-calculator part at this link:
    http://wiki.tangosol.com/display/COH35UG/local-scheme
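    That is, something along these lines inside your local scheme (a sketch based on the suggestion above):

    <local-scheme>
      <unit-calculator>FIXED</unit-calculator>
    </local-scheme>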

  • Problem using Binary wrappers in a Replicated cache

    I'd like to store my keys and values as Binary objects in a replicated cache so I can monitor their size using a BINARY unit-calculator. However, when I attempt to put a key and value as com.tangosol.util.Binary, I get the enclosed IOException.
    I have a very simple sample to reproduce the exception, but I don't see a way to attach it here.
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Failed to deserialize a key for cache MyMap
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): An exception (java.io.IOException) occurred reading Message LeaseUpdate Type=9 for Service=ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=2, Version=3.0, OldestMemberId=1}
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Terminating ReplicatedCache due to unhandled exception: java.io.IOException
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1):
    java.io.IOException: unsupported type / corrupted stream
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2162)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:3)
         at com.tangosol.coherence.component.net.message.LeaseMessage.read(LeaseMessage.CDB:11)
         at com.tangosol.coherence.component.net.message.leaseMessage.ResourceMessage.read(ResourceMessage.CDB:5)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:110)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
         at java.lang.Thread.run(Thread.java:619)
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <D5> (thread=ReplicatedCache, member=1): Service ReplicatedCache left the cluster

    Hi John,
    Currently the Replicated cache service uses Binary only for internal value representation and does not allow storing arbitrary Binary objects in replicated caches. You should have no problems using partitioned (Distributed) cache service with this approach.
    Regards,
    Gene

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
      <scheme-name>local-repl-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>base-local-scheme</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
      <scheme-name>base-local-scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>50</high-units>
      <low-units>20</low-units>
      <expiry-delay/>
      <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop: step 4 reports a cache size of less than 50. This happens with LRU, LFU, and HYBRID, so my initial characterization of the problem was incorrect. The salient details appear to be that I am using the same cache name on each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
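    For reference, a condensed sketch of the loop described above (entry keys and values are illustrative):

    for (int pass = 0; pass < 2; pass++) {
        NamedCache cache = CacheFactory.getCache("TestReplCache");
        for (int i = 0; i < 50; i++) {
            cache.put(Integer.valueOf(i), "value-" + i);
        }
        System.out.println("after 50 puts: " + cache.size());   // expected 50
        cache.put(Integer.valueOf(50), "value-50");             // should trigger eviction down to low-units
        System.out.println("after eviction: " + cache.size());  // expected 20
        cache.release();  // releasing (rather than destroying) is what provokes the behavior
    }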
    Message was edited by: planeski

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachments: coherence-cache-config.xml, LruTest.java
