Read-through scheme in a replicated cache with Berkeley DB

Hi, I have about 20 GB of data, and on restarting the server I need to populate all of it into the Coherence cache. If I write a preload Java class, it will take around 30 minutes to an hour to load the data into the cache, and I do not know how to answer requests that arrive while that load is still running. I have gone through the read-through scheme and it looks good, but I do not know how to implement it with a replicated cache. Is it possible to combine read-through + replicated cache + Berkeley DB? If yes, please post sample code with a full reference. Thanks in advance.

If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
"To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching."
So it would appear that you cannot do read-through with a replicated cache, which makes sense really when you think about it.
As I already asked - why are you trying to put 20GB in a replicated cache?
Presumably you do not have JVMs with heaps well over 20GB to hold all that data; or if you do, you must have tuned GC very well. You say you are bothered about NFRs and network latency, yet you are building a system that will require either very big heaps (and hence, at some point, long GC pauses) or, if you cannot hold all the data in memory, expiry configuration and then the latency of reading the data back from the DB.
If you are using read-through then presumably your use case does not require all the data to be in the cache, i.e. all your data access is by key-based gets and you do not run any filter queries. If that is the case then use a distributed cache, where you can either store all the data or use read-through. If all your access is key-based gets then you do not need to co-locate the caches and your application in a single JVM: run separate cache server JVMs to hold the data and configure near caches in your application, as sketched below.
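For example, something along these lines (only a sketch; the scheme names and the Berkeley DB cache loader class are illustrative, not from this thread):

    <near-scheme>
      <scheme-name>app-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>10000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>app-distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <autostart>true</autostart>
    </near-scheme>

    <distributed-scheme>
      <scheme-name>app-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <!-- illustrative CacheLoader that reads entries from Berkeley DB on a cache miss -->
              <class-name>com.example.BerkeleyDbCacheLoader</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

With a layout like this the cluster starts serving requests immediately and entries are faulted in from Berkeley DB on first access, so there is no 30-60 minute preload blocking startup.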
There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
JK

Similar Messages

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed (which might be the best option if the values are large and accessed infrequently, or if you only care about having an up-to-date key set locally), you can pass false as the third argument to the CQC constructor (sketched below).
    To get data from the CQC you can use
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
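    For instance, a keys-only CQC would look something like this (an untested sketch; the cache name is reused from the snippet above):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.filter.AlwaysFilter;
    public class KeysOnlyCqcExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("somecache");
            // third argument false: cache only keys locally; values are fetched from the back cache on demand
            ContinuousQueryCache localKeys = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, false);
            for (Object key : localKeys.keySet()) {
                Object value = localKeys.get(key); // read through to the back cache
                System.out.println(key + " -> " + value);
            }
        }
    }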

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an Admin kind of module from which data will be inserted/updated, and the other application will read data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store for the DB operations.
    I have the following Coherence configuration. It works fine and the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster I get the following exception in the cache store. If I use a distributed cache the same cache store works fine without any issues.
    Also note that even though it throws the exception, the application works as expected. One more thing: I am preloading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>TestCache</cache-name>
                <scheme-name>TestCacheDB</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <replicated-scheme>
                <scheme-name>TestCacheDB</scheme-name>
                <service-name>ReplicatedCache</service-name>
                <backing-map-scheme>
                    <local-scheme>
                        <scheme-name>TestDBLocal</scheme-name>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>test.TestCacheStore</class-name>
                                <init-params>
                                    <init-param>
                                        <param-type>java.lang.String</param-type>
                                        <param-value>TEST_SUPPORT</param-value>
                                    </init-param>
                                </init-params>
                            </class-scheme>
                        </cachestore-scheme>
                    </local-scheme>
                </backing-map-scheme>
                <listener/>
                <autostart>true</autostart>
            </replicated-scheme>
            <!--
            Proxy Service scheme that allows remote clients to connect to the
            cluster over TCP/IP.
            -->
            <proxy-scheme>
                <scheme-name>proxy</scheme-name>
                <service-name>ProxyService</service-name>
                <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
                <acceptor-config>
                    <tcp-acceptor>
                        <local-address>
                            <address system-property="tangosol.coherence.extend.address">localhost</address>
                            <port system-property="tangosol.coherence.extend.port">7001</port>
                            <reusable>true</reusable>
                        </local-address>
                    </tcp-acceptor>
                    <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                            <init-param>
                                <param-type>String</param-type>
                                <param-value>pof-config.xml</param-value>
                            </init-param>
                        </init-params>
                    </serializer>
                </acceptor-config>
                <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
            </proxy-scheme>
        </caching-schemes>
    </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.ClassCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
    at test.TestCacheStore.store(TestCacheStore.java:137)
    at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
    at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
    at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
    at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
    at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the distribution due to 128 pending configuration updates
    TestBean.java
    package test;

    import java.io.IOException;
    import java.io.Serializable;
    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    public class TestBean implements PortableObject, Serializable {
        private static final long serialVersionUID = 1L;
        private String name;
        private String number;
        private String taskType;

        public String getName() {
            return name;
        }
        public void setName(String name) {
            this.name = name;
        }
        public String getNumber() {
            return number;
        }
        public void setNumber(String number) {
            this.number = number;
        }
        public String getTaskType() {
            return taskType;
        }
        public void setTaskType(String taskType) {
            this.taskType = taskType;
        }

        @Override
        public void readExternal(PofReader reader) throws IOException {
            name = reader.readString(0);
            number = reader.readString(1);
            taskType = reader.readString(2);
        }

        @Override
        public void writeExternal(PofWriter writer) throws IOException {
            writer.writeString(0, name);
            writer.writeString(1, number);
            writer.writeString(2, taskType);
        }
    }
    TestCacheStore.java
    public class TestCacheStore extends Base implements CacheStore {
        @Override
        public void store(Object oKey, Object oValue) {
            if (logger.isInfoEnabled())
                logger.info("store :" + oKey);
            TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
            // Doing some processing here over testBean
            ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
            // Get the connection
            Connection con = connectionFactory.getConnection();
            if (con != null) {
                // Code to insert into the database
            } else {
                logger.error("Connection is NULL");
            }
        }
        // (the remaining CacheStore methods and the logger/ConnectionFactory setup were truncated in the post)
    }

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
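    A common workaround is the cache-aside pattern: load from the database in application code and put into the cache. A sketch, with a made-up DAO interface standing in for the database access layer:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class CacheAsideExample {
        // illustrative stand-in for the JDBC access layer
        public interface Dao {
            Object loadFromDb(Object key);
        }
        public static Object get(Dao dao, Object key) {
            NamedCache cache = CacheFactory.getCache("TestCache");
            Object value = cache.get(key);
            if (value == null) {
                // cache miss: load from the database outside the cache service, then put
                value = dao.loadFromDb(key);
                if (value != null) {
                    cache.put(key, value);
                }
            }
            return value;
        }
    }
    This keeps the replicated cache's backing map free of cache stores, which is what the linked documentation requires.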
    Best regards,
    -Dave

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache, but when I try to add an object to it I get the exception below. However, I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
               <!-- Replicated caching scheme. -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>

    By default it should have used FIXED as the unit-calculator, but from the stack trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> to your cache config for the replicated cache?
    Or just try inserting an object (both key and value) which implements Binary.
    Check the unit-calculator part on this link:
    http://wiki.tangosol.com/display/COH35UG/local-scheme
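    For example, based on the config in the question, the element would go inside the local-scheme used as the backing map (a sketch):
    <replicated-scheme>
        <scheme-name>MY-replicated-cache-scheme</scheme-name>
        <service-name>ReplicatedCache</service-name>
        <backing-map-scheme>
            <local-scheme>
                <unit-calculator>FIXED</unit-calculator>
            </local-scheme>
        </backing-map-scheme>
        <lease-granularity>member</lease-granularity>
        <autostart>true</autostart>
    </replicated-scheme>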

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and near cache together with expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to have an expiry-delay configured on the backing scheme and that the near-cache object would be evicted when the backing object was evicted. But according to our tests we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there will not be automatic eviction on the near cache (front scheme)?
    With this config, near cache is never cleared:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
          </distributed-scheme>
    With this config (added expiry-delay on front-scheme), near cache gets cleared:
            <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme>
                                 <expiry-delay>15s</expiry-delay>
                            </local-scheme>
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>

    Hi Jakkke,
    The near cache scheme allows you to have configurable levels of cache coherency, from a basic expiry-based cache, to an invalidation-based cache, to a data-versioning cache, depending on the coherency requirements. The near cache is commonly used to achieve the read performance of a replicated cache without giving up the scalability of a partitioned cache. This is achieved by keeping a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete set of data in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events to invalidate entries in the <front-scheme>, based on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want to expire the entries in both the <front-scheme> and the <back-scheme>, you need to specify an expiry-delay on both schemes, as you did in the last example. If you are expiring items in the <back-scheme> only so that they get reloaded from the cache store, while the <front-scheme> keys stay the same (only the values should be refreshed from the cache store), then you need not set an expiry-delay on the <front-scheme>; instead set the invalidation strategy to "present", as sketched below. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you need to configure the expiry on the <front-scheme> as well.
    The near cache can keep the front-scheme and back-scheme data in sync, but the expiry of entries is not synced. The front scheme is always a subset of the back scheme.
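    A minimal sketch of that "present" strategy, based on the first config above (only the relevant elements shown):
    <near-scheme>
        <scheme-name>config-part</scheme-name>
        <front-scheme>
            <local-scheme/>
        </front-scheme>
        <back-scheme>
            <distributed-scheme>
                <scheme-ref>config-part-distributed</scheme-ref>
            </distributed-scheme>
        </back-scheme>
        <invalidation-strategy>present</invalidation-strategy>
        <autostart>true</autostart>
    </near-scheme>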
    Hope this helps!
    Cheers,
    NJ

  • Replicated cache scheme with cache store

    Hi All,
    I have the following configuration for the UserCacheDB cache in coherence-cache-config.xml.
    I have a cache store class which inserts data into the database, and the cache is loaded from the database on application start-up.
    I need to make this cache replicated so that the other application will have this data. Can anyone please guide me on what my configuration should be to make this cache replicated with the cache store class?
    <distributed-scheme>
                   <scheme-name>UserCacheDB</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <internal-cache-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.util.ObservableHashMap</class-name>
                                  </class-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>test.UserCacheStore</class-name>
                                       <init-params>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>PC_USER</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                             <read-only>false</read-only>
                             <!--
                                  To make this a write-through cache just change the value below to
                                  0 (zero)
                             -->
                             <write-delay-seconds>0</write-delay-seconds>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <listener />
                   <autostart>true</autostart>
              </distributed-scheme>
    Thanks in Advance.

    Hi,
    You should be able to use a cachestore with a local-scheme.
          <replicated-scheme>
            <scheme-name>UserCacheDB</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>coherence-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
            <backing-map-scheme>
              <local-scheme>
                <scheme-name>UserCacheDBLocal</scheme-name>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>test.UserCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>PC_USER</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
              </local-scheme>
            </backing-map-scheme>
            <listener/>
            <autostart>true</autostart>
          </replicated-scheme>

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what situations do I have to lock a node?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the use of the Distributed cache topology for transactional data (and not Replicated). Distributed is in fact fully synchronous.
    The other reasons people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact most of the complexity of the Java Memory Model arises from the desire to avoid the requirement for simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
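    For example, a guarded update with explicit locking looks something like this (a sketch; the cache name and key are illustrative):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class LockExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example-cache");
            Object key = "some-key";
            // -1 means wait indefinitely until the lock is acquired
            if (cache.lock(key, -1)) {
                try {
                    Object value = cache.get(key);
                    // ... modify value ...
                    cache.put(key, value);
                } finally {
                    cache.unlock(key);
                }
            }
        }
    }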
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • POF serialization with replicated cache?

    Sorry again for the newbie question.
    Can you use POF serialized objects in a replicated cache?
    All of the examples show POF serialized objects being used with a partitioned cache.
    If you can do this, are there any caveats involved with the "replication" cache? I assume it would have to be started using the same configuration as the "master" cache.

    Thanks Rob.
    So, you just start up the Coherence instance on the (or at the) replication site, using the same configuration as the "master"? (Of course with appropriate classpaths and such set correctly)

  • Set a new expiry delay for entries added in the cache by a read through

    Hi,
    I have an application that uses an Oracle Coherence cache backed by a database. Each item that is added in the cache must have a custom expiry delay computed from its content.
    Is it possible to set a new expiry delay for the entries loaded in the cache from the database by a read through operation?
    I tried setting a new expiry delay in the load method of the BinaryEntryStore, but it is ignored and the default expiry of the corresponding cache is used instead. Also I tried using a MapTrigger, but the entries added in the cache by a read through operation are not intercepted by the MapTrigger.
    Thanks,
    Adrian

    What we do (not sure if it is the proper way, but it worked for us) is extend LocalCache and override the put method:
    // illustrative subclass name; the original post omitted the class declaration
    public class CustomExpiryLocalCache extends com.tangosol.net.cache.LocalCache {
        @Override
        public Object put(Object oKey, Object oValue, long expiry) {
            long newexpiry = xxxxx; // compute the custom expiry from the value here
            return super.put(oKey, oValue, newexpiry);
        }
    }
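    Presumably the subclass then has to be wired in as the backing map via a class-scheme, something like this (a sketch, not from the original thread):
    <distributed-scheme>
        <scheme-name>custom-expiry-scheme</scheme-name>
        <backing-map-scheme>
            <class-scheme>
                <!-- the LocalCache subclass with the overridden put, as above -->
                <class-name>com.example.CustomExpiryLocalCache</class-name>
            </class-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>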

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    In the documentation is says that entry processors in replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of an owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK
    Hi Jonathan,
    in the replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is always owned by the last node to have carried out a successful modification on it, where a modification may be a put/remove but can also be a lock operation. Lease granularity is per entry.
    Practically, the lock operation in the code Dimitri pasted serves two purposes. First, it ensures no other node can lock the entry; second, it brings the lease to the locking node, so that node can correctly execute the entry processor locally on the entry.
    Best regards,
    Robert

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault-tolerant system that will replicate cache entries across a small set of systems. I want the cache to be persistent even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation, or is there a configuration that I can use with off-the-shelf pluggable caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through operation how does Coherence figure out which member of the cluster will do the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc) or you can write your own if you don't find a suitable one. Configuration is the same, you need to specify the cache store class name appropriately in the <cache-store-scheme> child element of the <read-write-backing-map> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node, which is determined algorithmically from the key itself and the distribution of partitions among nodes (neither of which depends on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings; although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition. The owner of that partition is the owner of the key, and that node is tasked with every operation related to that key; you can even ask the service for a key's owner from code, as sketched below.
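    For illustration (a sketch; the cache name is illustrative and the cast assumes a distributed cache service):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Member;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.PartitionedService;
    public class KeyOwnerExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example-cache");
            PartitionedService service = (PartitionedService) cache.getCacheService();
            // the member currently owning the partition this key maps to
            Member owner = service.getKeyOwner("some-key");
            System.out.println("Key owner: " + owner);
        }
    }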
    Best regards,
    Robert

  • Refresh Ahead Cache with JPA

    I am trying to use refresh-ahead caching with the JpaCacheStore. My backing-map config is given below. I am using the same JPA example as given in the Coherence tutorial. The cache only loads the data from the database when the server starts; when I change the data in the DB, the change is not reflected in the cache. I am not sure I am doing the right thing. Need your help!!
    <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <!--Define the cache scheme-->
                             <internal-cache-scheme>
                                  <local-scheme>
                                        <expiry-delay>1m</expiry-delay>
                                  </local-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                       <init-params>
                                            <!--
                                            This param is the entity name
                                            This param is the fully qualified entity class
                                            This param should match the value of the
                                            persistence unit name in persistence.xml
                                            -->
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>com.oracle.handson.{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>JPA</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                              <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
    Thanks in advance.
    John

    I guess this is the answer
    Sorry for the dumb question :)
    Note: For use with Partitioned (Distributed) and Near cache
    topologies: Read-through/write-through caching (and variants) are
    intended for use only with the Partitioned (Distributed) cache
    topology (and by extension, Near cache). Local caches support a
    subset of this functionality. Replicated and Optimistic caches should
    not be used.

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
        <scheme-name>local-repl-scheme</scheme-name>
        <backing-map-scheme>
            <local-scheme>
                <scheme-ref>base-local-scheme</scheme-ref>
            </local-scheme>
        </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
        <scheme-name>base-local-scheme</scheme-name>
        <eviction-policy>LRU</eviction-policy>
        <high-units>50</high-units>
        <low-units>20</low-units>
        <expiry-delay/>
        <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick-in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachment: coherence-cache-config.xml (rename 545.bin to coherence-cache-config.xml after the download is complete)
    Attachment: LruTest.java (rename 546.bin to LruTest.java after the download is complete)

  • Read-through configuration problems

    Hi!
    I have a problem with the read-through example. The DataLoader object is created (somewhere in Coherence) but the method load(Object key) is never called. Please help.
    package com.example;

    import java.util.HashMap;
    import com.tangosol.net.cache.AbstractCacheLoader;

    public class DataLoader extends AbstractCacheLoader {
        private static HashMap<String, String> hashMap = new HashMap<String, String>();
        static {
            hashMap.put("A", "a");
            hashMap.put("B", "b");
            hashMap.put("C", "c");
            hashMap.put("D", "d");
            hashMap.put("E", "f");
            hashMap.put("F", "g");
        }
        public DataLoader() {
            System.out.println("Instance of DataLoader");
        }
        public Object load(Object key) {
            return hashMap.get(key);
        }
    }
    Config files:
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>ABC</cache-name>
                <scheme-name>SamplePartitionedDatabaseScheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>SamplePartitionedDatabaseScheme</scheme-name>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <scheme-ref>SampleDatabaseScheme</scheme-ref>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
            </distributed-scheme>
            <read-write-backing-map-scheme>
                <scheme-name>SampleDatabaseScheme</scheme-name>
                <internal-cache-scheme>
                    <local-scheme>
                        <scheme-ref>SampleMemoryScheme</scheme-ref>
                    </local-scheme>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <class-name>com.example.DataLoader</class-name>
                    </class-scheme>
                </cachestore-scheme>
            </read-write-backing-map-scheme>
            <local-scheme>
                <scheme-name>SampleMemoryScheme</scheme-name>
            </local-scheme>
        </caching-schemes>
    </cache-config>
    and
    <coherence>
        <cluster-config>
            <member-identity>
                <cluster-name>thecluster</cluster-name>
            </member-identity>
            <multicast-listener>
                <port>9100</port>
                <time-to-live>0</time-to-live>
            </multicast-listener>
        </cluster-config>
    </coherence>
    Starting server
    package com.example;
    import com.tangosol.net.DefaultCacheServer;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    public class Server {
        public static void main(String[] args) {
            DefaultConfigurableCacheFactory factory = new DefaultConfigurableCacheFactory();
            DefaultCacheServer dcs = new DefaultCacheServer(factory);
            dcs.startAndMonitor(5000);
        }
    }
    Getting data
    package com.example;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class Reader {
        public static void main(String[] args) {
            CacheFactory.ensureCluster();
            NamedCache cache = CacheFactory.getCache("ABC");
            System.out.println("Value in cache is: " + cache.get("A"));
            System.out.println("Value in cache is: " + cache.get("B"));
            System.out.println("Value in cache is: " + cache.get("C"));
            System.out.println("Value in cache is: " + cache.get("D"));
        }
    }
    Result
    Value in cache is: null
    Value in cache is: null
    Value in cache is: null
    Value in cache is: null

    Hi St,
    Actually, looking at the code again, your server will not be using your custom configuration.
    I'm not sure why you have written the code like that but there is no need to create a DefaultConfigurableCacheFactory instance; Coherence will take care of that. As you have used the default constructor for DefaultConfigurableCacheFactory it will be using a configuration file called coherence-cache-config.xml; is that what your file is called?
    All your Server main method needs to do is this...
    package com.example;
    import com.tangosol.net.DefaultCacheServer;
    public class Server {
        public static void main(String[] args) {
            DefaultCacheServer.main(args);
        }
    }
    Which is actually nothing more than running DefaultCacheServer, so you do not even need a custom Server class.
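    If your file has a different name, you can point Coherence at it explicitly with the tangosol.coherence.cacheconfig system property, for example (the path is illustrative):
    java -Dtangosol.coherence.cacheconfig=/path/to/my-cache-config.xml com.tangosol.net.DefaultCacheServer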
    JK

  • Coherence 3.3.1 Version, Write Behind Replicated Cache Error

    Hi,
    I am using Coherence version 3.3.1. I have a write-behind cache, and the put method is throwing the following exception:
    java.lang.IllegalArgumentException: Invalid internal format: Inactive
    at com.tangosol.coherence.component.util.BackingMapManagerContext.addInternalValueDecoration(BackingMapManagerContext.CDB:11)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:737)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.performUpdate(ReplicatedCache.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onLeaseUpdateRequest(ReplicatedCache.CDB:22)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache$LeaseUpdateRequest.onReceived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
    at java.lang.Thread.run(Thread.java:534)
    ============
    The same cache works fine if I change the value of the <write-delay-seconds> parameter to 0, i.e. if I make the cache write-through.
    Could someone help me out with this issue.
    -thanks
    Krishan

    Write-behind caching is not supported with Replicated cache. Even with write-through, you'll end up generating replicated writes back to the back-end database, drastically increasing load.
    For more details, please see:
    http://wiki.tangosol.com/display/COH33UG/Read-Through,+Write-Through,+Refresh-Ahead+and+Write-Behind+Caching
    For applications where write-behind would be used, the partitioned (distributed) cache is almost always a far better option. Is there a reason to not use this?
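    For reference, a minimal write-behind configuration on a distributed scheme looks something like this (a sketch; the scheme name and cache store class are illustrative):
    <distributed-scheme>
        <scheme-name>example-write-behind</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <internal-cache-scheme>
                    <local-scheme/>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <!-- illustrative CacheStore implementation -->
                        <class-name>com.example.ExampleCacheStore</class-name>
                    </class-scheme>
                </cachestore-scheme>
                <!-- a non-zero delay makes the map write-behind instead of write-through -->
                <write-delay-seconds>10</write-delay-seconds>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>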
    Jon Purdy
    Oracle
