ClassCastException in CacheStore implementation

I have been trying to work out how Coherence uses backing maps, with limited success.
     I'm struggling with two problems.
     One is the configuration of a cache in the config files. I have declared a distributed cache scheme, which references a read-write backing map scheme.
     The read-write backing map scheme has an internal cache with a reference to a local cache scheme.
     The local cache scheme has a cachestore scheme that specifies my CacheStore implementation class.
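     For reference, the nesting described above looks roughly like this (scheme and class names are placeholders, not my real config):
          <distributed-scheme>
               <scheme-name>example-distributed</scheme-name>
               <backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <internal-cache-scheme>
                              <local-scheme>
                                   <cachestore-scheme>
                                        <class-scheme>
                                             <class-name>com.example.MyCacheStore</class-name>
                                        </class-scheme>
                                   </cachestore-scheme>
                              </local-scheme>
                         </internal-cache-scheme>
                    </read-write-backing-map-scheme>
               </backing-map-scheme>
          </distributed-scheme>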
     Why do I need to specify an internal cache for the backing map? Can't I just configure the distributed cache to use a backing map, or even point it straight at a CacheStore?
     My second problem is that I have miraculously got the cache to connect to my CacheStore (given my problems with the above, I wasn't expecting to get that far!).
     But the cache store is receiving com.tangosol.util.Binary objects as the keys and values in its methods.
     Any clues or pointers would be gratefully received.

Thanks Rob,
     the example helped me understand and simplify my configuration, and the store is now working well.
     It brought up another architecture issue though. Is it normal to create a separate cache for each type of object stored, with a dedicated cache store on each cache, or is it common to have a store that is more of a facade and delegates persistence to other stores depending on the class of the object?
     In other words, what is the best practice: one cache per class, or one cache for a multitude of classes with a store that delegates persistence according to the class?
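     A facade of the kind I mean might look like this sketch (all names hypothetical, assuming each delegate is itself a CacheStore):
          import java.util.HashMap;
          import java.util.Map;
          import com.tangosol.net.cache.AbstractCacheStore;
          import com.tangosol.net.cache.CacheStore;

          // Hypothetical facade store: routes each operation to a per-class delegate.
          public class DelegatingCacheStore extends AbstractCacheStore
          {
               private final Map<Class<?>, CacheStore> delegates =
                         new HashMap<Class<?>, CacheStore>();

               // Delegates are registered per value class, e.g. at startup.
               public void register(Class<?> type, CacheStore store)
               {
                    delegates.put(type, store);
               }

               public void store(Object key, Object value)
               {
                    CacheStore delegate = delegates.get(value.getClass());
                    if (delegate == null)
                    {
                         throw new IllegalArgumentException("No store registered for " + value.getClass());
                    }
                    delegate.store(key, value);
               }

               public Object load(Object key)
               {
                    // On a load only the key is available, so the key itself would
                    // have to encode the target type (typed key class, prefix, etc.).
                    throw new UnsupportedOperationException("key must encode the target type");
               }
          }
     The load() side is part of what makes me ask: with one cache per class, the store already knows which type it is loading.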
     I'd welcome people's opinions based on their experiences.
     Cheers
     Mike

Similar Messages

  • Do I have to write the cachestore implementation class??

    Hi gurus,
         Do I have to write a Java class that implements the CacheStore interface in order to cache
         data from a database, or is there such a class already included with the product?

    The CacheStore interface defines a mapping between an object and a back-end data source. This mapping needs to be defined somehow, in most cases via an OR/M product like Hibernate or TopLink, but in some cases manually by implementing JDBC calls directly inside the CacheStore implementation.
         If you already have this metadata defined in an OR/M product, then you can use one of the prepackaged CacheStore implementations (JPA, TopLink Essentials or Hibernate).
         Otherwise, you will need to provide a CacheStore implementation.
         Jon Purdy
         Oracle Corporation
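
    To illustrate the "manually by implementing JDBC calls" route Jon mentions, a minimal hand-rolled sketch (the person table, its columns and the injected DataSource are assumptions; the MERGE syntax shown is H2-style and varies by database):
         import java.sql.Connection;
         import java.sql.PreparedStatement;
         import java.sql.ResultSet;
         import java.sql.SQLException;
         import javax.sql.DataSource;
         import com.tangosol.net.cache.AbstractCacheStore;

         // Assumed schema: person(id, name). Values cached as plain Strings.
         public class JdbcPersonCacheStore extends AbstractCacheStore
         {
              private final DataSource ds;

              public JdbcPersonCacheStore(DataSource ds)
              {
                   this.ds = ds;
              }

              public Object load(Object key)
              {
                   try (Connection con = ds.getConnection();
                        PreparedStatement ps = con.prepareStatement(
                                "SELECT name FROM person WHERE id = ?"))
                   {
                        ps.setObject(1, key);
                        try (ResultSet rs = ps.executeQuery())
                        {
                             return rs.next() ? rs.getString("name") : null;
                        }
                   }
                   catch (SQLException e)
                   {
                        throw new RuntimeException(e);
                   }
              }

              public void store(Object key, Object value)
              {
                   try (Connection con = ds.getConnection();
                        PreparedStatement ps = con.prepareStatement(
                                "MERGE INTO person (id, name) KEY (id) VALUES (?, ?)"))
                   {
                        ps.setObject(1, key);
                        ps.setString(2, (String) value);
                        ps.executeUpdate();
                   }
                   catch (SQLException e)
                   {
                        throw new RuntimeException(e);
                   }
              }
         }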

  • CacheStore implementation for multiple tables

    Hi,
    Is it possible to configure the <cachestore-scheme> element, or implement a CacheStore, to get data from multiple
    tables? Currently the XML and CacheStore sample in the tutorial are configured for a single table.
    Thanks
    -thiru

    hello Aleks,
    "As Rob mentioned, you can implement CacheStore to do pretty much anything you want: do a join across multiple tables, execute multiple queries to get data from multiple tables, or even access non-database system, such as a legacy system or a web service in order to retrieve data. Coherence really doesn't care how you implement your cache store and where you get the data from, as long as you return a single object for the load method that needs to be inserted into the cache that the cache store is configured for (or in the case of loadAll, multiple objects in a map)."
    When you say "as long as you return a single object for the load method that needs to be inserted into the cache that the cache store is configured for", how can we configure the cache store to be configured for an object?
    I am new to Coherence, so correct me if I am wrong.
    As I see it, the cache store is configured against a cache, right?
    <cachestore-scheme>
      <class-scheme>
        <class-name>com.company.MyCacheStore</class-name>
        <init-params>
          <init-param>
            <param-type>java.lang.String</param-type>
            <param-value>cache-info</param-value>
          </init-param>
        </init-params>
      </class-scheme>
    </cachestore-scheme>
    Here it is referring to the cache "cache-info", and in the cache "cache-info" I can have Object1 with Key1, Object2 with Key2 and Object3 with Key3. As long as I return the matching object (Object1, Object2 or Object3) for a given key, it is fine, right?
    My cache has 3 objects: Object1, Object2 and Object3.
    In the cache store, based on the type of the key, I will perform the processing and return the corresponding object. Is that wrong?
    Please comment.
    Thanks in Advance.
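
    A sketch of the kind of key-based dispatch described above (the prefixes and helper bodies are made up for illustration):
         // One store serving several object types, dispatching on a key prefix.
         public class MultiTypeCacheStore extends com.tangosol.net.cache.AbstractCacheStore
         {
              public Object load(Object key)
              {
                   String sKey = (String) key;
                   if (sKey.startsWith("CU-"))
                   {
                        return readCurrency(sKey.substring(3));
                   }
                   if (sKey.startsWith("AC-"))
                   {
                        return readAccount(sKey.substring(3));
                   }
                   throw new IllegalArgumentException("Unknown key type: " + sKey);
              }

              private Object readCurrency(String id) { return "currency-" + id; } // stub
              private Object readAccount(String id)  { return "account-" + id; }  // stub
         }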

  • CacheStore implementation on cluster

    Hi, all!
    I have a Coherence cluster with 3 nodes, configured as in the attached file.
    Each node has the AfCacheStore library, implementing the CacheLoader and CacheStore interfaces. When a cache.put operation occurs, AfCacheStore.store is called back on the node where the put occurs.
    The chapter "Write-Through Limitations" on the wiki explains this situation:
    http://wiki.tangosol.com/display/COH32UG/Read-Through%2C+Write-Through%2C+Refresh-Ahead+and+Write-Behind+Caching
    However, I need to configure my cluster with the following behavior: only one node calls back AfCacheStore.store for all cache.put operations occurring on every node in my cluster.
    Is it possible to configure the cluster this way?
    Thanks. [Attachment: af-cache-config.xml]

    Hi Alexander,
    I don't think that all your expectations can be fulfilled at the same time, because of the following:
    With write-through caching, the data is written to the database by the cache store on the node which holds the primary copy of the data being written. The identity of the cache server node holding the primary copy depends on the key used in the put operation, and different keys are spread across all the storage-enabled nodes in the entire cluster by the key-to-partition association algorithm.
    So you would have two ways to ensure that all your entries are written to the db from the same node:
    1. Have only one storage-enabled cluster node, but in that case
    - you have no failover, as there are no backup nodes,
    - performance is severely limited, and
    - there is a limit on the amount of data that can be held in that cache, as that single node has to hold all cache entries.
    2. Have a KeyAssociator for that cache that always returns the same associated key value. This leads to the primary copy of the data always landing on the same cache store, but this also
    - severely limits performance,
    - puts a limit on the amount of data that can be held in that cache, as that single node has to hold all cache entries, and
    - can possibly cause other problems I am not aware of.
    This would essentially be the first case, except that backup nodes would be available.
    So I don't really think you should go in this direction.
    What are your reasons for choosing write-through, and what are your reasons for trying to limit DB connectivity to one node?
    How many cache entries would you like to write in one transaction? Is the order between them important (write-through does not guarantee ordering within the transaction even on one node, as far as I know, although I might be wrong)?
    Of course this is only my opinion.
    Best regards,
    Robert Varga
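
    For illustration, the degenerate KeyAssociator from option 2 might look like this sketch (class name and constant are made up):
         import com.tangosol.net.PartitionedService;
         import com.tangosol.net.partition.KeyAssociator;

         // Every key maps to the same associated key, so all entries land in the
         // same partition, and therefore on a single primary owner node.
         public class SingleOwnerKeyAssociator implements KeyAssociator
         {
              public void init(PartitionedService service)
              {
              }

              public Object getAssociatedKey(Object oKey)
              {
                   return "THE-ONE-PARTITION";
              }
         }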

  • Cachestore for distributed scheme not getting invoked to "load" the entries

    Hi,
    We have a distributed scheme with CacheStore as below,
    SERVER PART
     <?xml version="1.0"?>
     <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
     <cache-config>
       <caching-scheme-mapping>
         <cache-mapping>
           <cache-name>ReferenceData-Cache</cache-name>
           <scheme-name>RefData_Distributed_Scheme</scheme-name>
         </cache-mapping>
         <!--
         FEW OTHER DISTRIBUTED CACHE
         -->
       </caching-scheme-mapping>
       <caching-schemes>
         <!-- definition of other cache schemes, including one proxy scheme -->
         <distributed-scheme>
           <scheme-name>RefData_Distributed_Scheme</scheme-name>
           <service-name>RefData_Distributed_Service</service-name>
           <serializer>
             <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
             <init-params>
               <init-param>
                 <param-type>string</param-type>
                 <param-value>TradeEngine-POF.xml</param-value>
               </init-param>
             </init-params>
           </serializer>
           <backing-map-scheme>
             <read-write-backing-map-scheme>
               <internal-cache-scheme>
                 <local-scheme>
                   <expiry-delay>1m</expiry-delay>
                 </local-scheme>
               </internal-cache-scheme>
               <cachestore-scheme>
                 <class-scheme>
                   <class-name>com.csfb.fid.gtb.referencedatacache.cachestore.RefDataCacheStore</class-name>
                   <init-params>
                     <init-param>
                       <param-type>string</param-type>
                       <param-value>{cache-name}</param-value>
                     </init-param>
                   </init-params>
                 </class-scheme>
               </cachestore-scheme>
               <read-only>true</read-only>
               <refresh-ahead-factor>.5</refresh-ahead-factor>
             </read-write-backing-map-scheme>
           </backing-map-scheme>
           <backup-count>1</backup-count>
           <autostart system-property="tangosol.coherence.distributed-service.enabled">false</autostart>
         </distributed-scheme>
       </caching-schemes>
     </cache-config>
     The above configuration is used on the TCP extend proxy node with localstorage=false.
     There is a similar configuration on the storage node,
     - with no proxy,
     - with the same "ReferenceData-Cache" (autostart=true),
     - and localstorage=true.
     Following is my CacheStore implementation.
     NOTE: This CacheStore is only for loading cache entries from the cache store (from an Excel file, in my case), i.e. only the load() and loadAll() methods.
     NO store() or storeAll().
     package com.csfb.fid.gtb.referencedatacache.cachestore;

     import java.util.Collection;
     import java.util.HashMap;
     import java.util.Iterator;
     import java.util.List;
     import java.util.Map;

     import com.creditsuisse.fid.gtb.common.FileLogger;
     import com.csfb.fid.gtb.referencedatacache.Currency;
     import com.csfb.fid.gtb.utils.refdada.DBDetails;
     import com.csfb.fid.gtb.utils.refdada.ReferenceDataReaderUtility;
     import com.tangosol.net.NamedCache;
     import com.tangosol.net.cache.CacheStore;

     public class RefDataCacheStore implements CacheStore
     {
          private DBDetails dbDetails = null;
          private ReferenceDataReaderUtility utils = null;

          public RefDataCacheStore(String cacheName)
          {
               System.out.println("RefDataCacheStore constructor..");
               //dbDetails = DBDetails.getInstance();
               utils = new ReferenceDataReaderUtility();
          }

          public Object load(Object key)
          {
               return utils.readCurrency(key);
          }

          // Read-only store: the write-side methods are intentionally no-ops.
          public void store(Object oKey, Object oValue) { }

          public void erase(Object oKey) { }

          public void eraseAll(Collection colKeys) { }

          public Map loadAll(Collection colKeys)
          {
               System.out.println("RefDataCacheStore loadAll..");
               // Note: loads all currencies, regardless of the requested colKeys.
               Map<String, Object> objectMap = new HashMap<String, Object>();
               List<Object> list = utils.readAllCurrencies();
               Iterator<Object> listItr = list.iterator();
               while (listItr.hasNext())
               {
                    Object obj = listItr.next();
                    if (obj != null)
                    {
                         String key = "CU-" + ((Currency) obj).getId();
                         objectMap.put(key, (Currency) obj);
                    }
               }
               return objectMap;
          }

          public void storeAll(Map mapEntries) { }
     }
    CLIENT PART
     I connect to this cache using an extend client with the following config file,
     <?xml version="1.0"?>
     <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
     <cache-config>
       <caching-scheme-mapping>
         <cache-mapping>
           <cache-name>ReferenceData-Cache</cache-name>
           <scheme-name>coherence-remote-scheme</scheme-name>
         </cache-mapping>
       </caching-scheme-mapping>
       <caching-schemes>
         <remote-cache-scheme>
           <scheme-name>coherence-remote-scheme</scheme-name>
           <initiator-config>
             <serializer>
               <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
               <init-params>
                 <init-param>
                   <param-type>string</param-type>
                   <param-value>TradeEngine-POF.xml</param-value>
                 </init-param>
               </init-params>
             </serializer>
             <tcp-initiator>
               <remote-addresses>
                 <socket-address>
                   <address>169.39.30.182</address>
                   <port>9001</port>
                 </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
             </tcp-initiator>
             <outgoing-message-handler>
               <request-timeout>3000s</request-timeout>
             </outgoing-message-handler>
           </initiator-config>
         </remote-cache-scheme>
       </caching-schemes>
     </cache-config>
     PROBLEM
     From my test case (with the extend client file as configuration), when I try to get a cache handle for this cache, as
     refDataCache = CacheFactory.getCache("ReferenceData-Cache");
     I get the following error on the server side,
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2): BackingMapManager com.tangosol.net.DefaultConfigurableCacheFactory$Manager: failed to instantiate a cache: ReferenceData-Cache
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2):
    java.lang.IllegalArgumentException: No scheme for cache: "ReferenceData-Cache"
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:507)
         at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:3486)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:22)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.collections.WrapperMap.put(WrapperMap.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.put(Grid.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$StorageIdRequest.onReceived(DistributedCache.CDB:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
     However, even with this error I am able to do normal put operations, and get operations against those puts.
     But when I try to get a cache entry which doesn't exist in the cache, like
     refDataCache.get("CU-1");
     I expected that the call would go to RefDataCacheStore.load. But in my case I don't see that happening.
     I also ran my JVMs in debug mode and put a class-load breakpoint on RefDataCacheStore, but even that is not hit. I have RefDataCacheStore on my server classpath.
     Hope to see a reply on this soon.
     Thanks
     Manish

    Hi Manish,
    <<previous advice deleted>>
    user13113666 wrote:
    Hi,
    I have my server picking up the correct configuration files for "ReferenceData-Cache". In my local jconsole I can see that the service named "RefData_Distributed_Service" is started under the service mbean. Also, I am able to perform a put and a get on "ReferenceData-Cache", and I can even see the StorageManager mbean showing my inserted entries for "RefData_Distributed_Service".
    With the local jconsole, are you monitoring the server/proxy node, or the TCP*Extend client node?
    The client can have the service while the server still does not have it.
    Could you please post the startup log for the storage node on the server...
    Best regards,
    Robert
    Edited by: robvarga on May 17, 2010 12:33 PM

  • Exception thrown when trying to use CacheStore

    This message is for Mr. Rob Misek.
    Hi, Rob,
    As per our talk on the phone this morning, I have attached my cache config file and a stubbed version of our CacheStore implementation (this is all we have at this moment), and also the exception.
    Thank you. [Attachments: TDCacheStore.java, td-cache-config.xml, error.txt]

    Hi Hong,
    Take a look at the attached XML snippet that includes the internal cache scheme. Also you may want to take a look at using the com.tangosol.net.cache.AbstractCacheStore.
    Later,
    Rob Misek
    Tangosol, Inc. [Attachment: internal-snippet.xml]

  • Throwing Exceptions from CacheStore

    If I encounter an exception whilst persisting a cache's contents via a CacheStore implementation, how can I pass this exception up to the code that put the entry in the cache? If it doesn't make it into the database, I'd like to throw something to let the client code know.

    Hi Mike,
         Rethrowing synchronous CacheStore exceptions to the calling thread is now supported in Coherence 3.0. To do so, set the rollback-cachestore-failures configuration element in your read-write-backing-map-scheme or versioned-backing-map-scheme caching scheme to true.
         Jason
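
    In configuration terms, that looks something like this sketch (internal cache and store elided):
         <read-write-backing-map-scheme>
              <internal-cache-scheme>
                   <local-scheme/>
              </internal-cache-scheme>
              <cachestore-scheme>
                   <!-- your CacheStore class-scheme here -->
              </cachestore-scheme>
              <rollback-cachestore-failures>true</rollback-cachestore-failures>
         </read-write-backing-map-scheme>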

  • WARNING when running Sample Controllable CacheStore

    Hi,
    I am trying to run the Sample Controllable CacheStore1 with Coherence 3.4.1 on Linux. The sample may be found here [http://coherence.oracle.com/display/COH34UG/Sample+CacheStores]
    I configured two distributed cache schemes: the control cache uses SafeHashMap as the backing map, and the other distributed cache uses the ControllableCacheStore1 in the sample code. I see the following warning in the Coherence log:
    2009-02-20 09:43:09.090/32.012 Oracle Coherence GE 3.4.1/407 <Warning> (thread=DistributedCache, member=1): Application code running on "DistributedCache" service thread(s) should not call ensureCache as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
    2009-02-20 09:43:09.090/32.012 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
    It seems that invoking CacheFactory.getCache(CONTROL_CACHE) from the ControllableCacheStore1 causes the warning. Does anyone see the same warning? How do we make it work? The ControllableCacheStore2 (with the CacheStoreAware interface) works, but since the control is at a very granular level, I'd prefer the option with a control cache (if it works).
    Thanks.

    Hi,
    the cache store always runs from a distributed cache service, and the controllable cache store is recommended to take the flag controlling writes to the DB from a replicated cache (the flag is very read-heavy information and very latency-sensitive, so it is best if it is resolvable within the local JVM, and the cache is of negligible size; all of these point toward using a replicated cache).
    If you follow this recommendation, then the controllable cache store inherently goes to a different cache service.
    Best regards,
    Robert

  • How to Prevent CacheStore from Getting Called when Loading from DB

    Hi,
    I have a cache with write-behind enabled. The issue is that when I'm initializing the cache from its persistence store (SQL Server 2005), I don't want it to call its CacheStore implementation. The Coherence book written by Aleksandar Seovic recommends using another cache to control writing to different caches, sort of like a global flag, but that only works in a write-through scenario and not in write-behind. One of my theories is to use a MapTrigger: when I'm loading from the DB, I intercept the call and tell the object not to write to the DB, maybe through writing directly to the backing map, though I'm not sure if writing to the backing map prevents the calling of the CacheStore. Please let me know. Tks.

    Hi user13402724,
    The documentation covers a scenario like this here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appsampcachestore.htm#sthref512
    JK

  • Clearing a cache backed by a CacheStore

    I have a distributed cache with a read-write backing map implementation that is backed by a CacheStore. The CacheStore reads and writes the cached objects into a database.
    I need to implement "remove all" functionality. I was considering using the clear method on the cache, but wasn't exactly sure what this does. Does the clear method eventually call down to erase on the CacheStore? If so, will this be an inefficient process, since a separate database call will be made to remove each object from the underlying database tables?
    Is there a better approach to implementing the remove all functionality? I also considered bypassing the cache to remove all the underlying database records and then following up with a call to destroy the cache. Is this a feasible alternative?
    Thanks.
    Chuck

    Hi Chuck,
    Yes, if you are using standard ReadWriteBackingMap implementation, CacheStore.erase will be called for each object removed from the cache.
    While there are a few other options, what you proposed might be the simplest: execute a query directly against the database to remove the records and then clear the cache. However, keep in mind that CacheStore.erase will still be called, so you need to make sure that it is implemented as a no-op within your CacheStore implementation (it probably makes sense to implement eraseAll as a no-op as well).
    Regards,
    Aleks
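
    A sketch of that combination (the table name and the helper's shape are hypothetical):
         // Inside the CacheStore implementation: make erase/eraseAll no-ops so
         // that clearing the cache does not issue one DELETE per entry.
         public void erase(Object key) { }
         public void eraseAll(java.util.Collection keys) { }

         // "Remove all": delete the rows directly, then clear the cache.
         public void removeAll(javax.sql.DataSource ds, com.tangosol.net.NamedCache cache)
                   throws java.sql.SQLException
         {
              try (java.sql.Connection con = ds.getConnection();
                   java.sql.Statement stmt = con.createStatement())
              {
                   stmt.executeUpdate("DELETE FROM my_table"); // table name assumed
              }
              cache.clear(); // erase() still fires per entry, but is now a no-op
         }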

  • Get all keys from Cache

    Hi,
    I have a scenario where I have a backing store attached to the cache, and the server runs in a fault-tolerant mode. Because of fault tolerance, when a new node joins the cluster it is required to recover the data from the cache. When using NamedCache.keySet() I get only the keys which are in the cache, and not the ones persisted in the backing store.
    How do I go about getting the whole set of keys from the cache and the backing store using the Tangosol API?

    Here is my cache-config file. The CacheStore implementation is quite big and I need to check with my manager to share it on the forum. Maybe this will continue through the regular support channel.
    Message was edited by: pkdhar [Attachment: coherence-cache-config.xml]

  • Write-Behind Caching and Re-entrant Calls

    Support Team -
         The Coherence User Guide states that:
         "The CacheStore implementation must not call back into the hosting cache service. This includes OR/M solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call will "consume" a cache service thread and could result in deadlock if a cache service threadpool is exhausted)."
         I have load-tested a use case wherein I have two caches: ABCache and BACache. ABCache is accessed by the application for write operations, BACache for read operations. ABCache is a write-behind cache whose CacheStore populates BACache by reversing the key and value of each cache entry stored in ABCache.
         The solution worked under load with no issues.
         But can I use it? Or is it too dangerous?
         My write-behind thread-count setting is left at default (0). The documentation states that
         "If zero, all relevant tasks are performed on the service thread."
         What does this mean? Can I re-enter the caching service if my thread-count is zero?
         Thank you,
         Denis.
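
    For context, write-behind itself is enabled in the backing map by a non-zero write-delay, as sketched below (the store class is a placeholder); the thread-count Denis mentions is a separate, service-level setting:
         <read-write-backing-map-scheme>
              <internal-cache-scheme>
                   <local-scheme/>
              </internal-cache-scheme>
              <cachestore-scheme>
                   <class-scheme>
                        <class-name>com.example.ABCacheStore</class-name>
                   </class-scheme>
              </cachestore-scheme>
              <!-- a non-zero write-delay makes the map write-behind; stores are
                   then performed asynchronously, off the caller's thread -->
              <write-delay>10s</write-delay>
         </read-write-backing-map-scheme>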

    Dimitri -
         I am not sure I fully understand your answer:
         1. "Your test worked because the write-behind backing map invokes CacheStore methods asynchronously, on a write-behind thread." In my configuration, I have the default value for thread-count, which is zero. According to the documentation, that means that CacheStore methods would be executed by the service thread and not by the write-behind thread. Do I understand this correctly?
         2. "It will fail if the CacheStore method needs to be invoked synchronously on a service thread." I am not sure what the purpose of the "service thread" is. In which scenarios would the CacheStore method need to be invoked synchronously on a service thread?
         Thank you,
         Denis.

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault-tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation,
    or is there a configuration that I can use with off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation, how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same: you specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node, which is algorithmically determined from the key itself and the distribution of partitions among nodes (neither of which depends on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings; although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition. The owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
    Best regards,
    Robert
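
    Conceptually, the two-step lookup Robert describes is (an illustration only, not Coherence's actual code; all names are made up):
         // Step 1: key -> partition, a pure function of the serialized key.
         int partition = Math.abs(binaryKey.hashCode() % partitionCount);

         // Step 2: partition -> owning member, from the service's current
         // partition assignment (this part changes as members join and leave).
         Member owner = partitionOwners[partition];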

  • Is it possible to catch ConnectionException in PublishingCacheStore

    Hi All,
    We are using 3 Coherence clusters, where data flow is always one way, i.e. the master cache writes to 2 other redundant caches (these caches are read-only; no writes are expected on them). We are using PublishingCacheStore along with the Event Distribution pattern.
    We have written our own CacheStore which extends PublishingCacheStore (of the Push Replication pattern). Everything works fine, except in the case of failover: if there is a ConnectionException to the remote cluster, we are not able to catch that exception, although we can see the exception on the system console.
    Please let me know if anyone has any idea how to do it.
    CacheStore Implementation:
    ==================
     @Override
     public void store(BinaryEntry entry) {
          if (logger.isDebugEnabled()) logger.debug("Entry >> store entry " + entry + " for channelName " + channelName + " & cacheName " + cacheName);
          try {
               super.store(entry);
          } catch (ConnectionException e) {
               logger.error("ConnectionException occurred while storing (replicating) the data to channelName " + channelName + " for cacheName " + cacheName, e);
               // Some business logic needs to be added on exception.
               // Throwing back the exception to make sure that the failed replicated data is requeued again.
               throw new RuntimeException(e);
          } catch (Exception e) {
               logger.error("Exception occurred while storing (replicating) the data to channelName " + channelName + " for cacheName " + cacheName, e);
               // Some business logic needs to be added on exception.
               // Throwing back the exception to make sure that the failed replicated data is requeued again.
               throw new RuntimeException(e);
          }
     }
    Coherence cache Config.xml
    =================
     <cache-mapping>
       <cache-name>TEST</cache-name>
       <scheme-name>CacheDB</scheme-name>
       <event:distributor>
         <event:distributor-name>{cache-name}</event:distributor-name>
         <event:distributor-external-name>{site-name}-{cluster-name}-{cache-name}</event:distributor-external-name>
         <event:distributor-scheme>
           <event:coherence-based-distributor-scheme />
         </event:distributor-scheme>
         <event:distribution-channels>
           <event:distribution-channel>
             <event:channel-name>Active Publisher 1</event:channel-name>
             <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
             <event:restart-delay>12000</event:restart-delay>
             <event:channel-scheme>
               <event:remote-cluster-channel-scheme>
                 <event:remote-invocation-service-name>remote-site1</event:remote-invocation-service-name>
                 <event:remote-channel-scheme>
                   <event:local-cache-channel-scheme>
                     <event:target-cache-name>TEST</event:target-cache-name>
                   </event:local-cache-channel-scheme>
                 </event:remote-channel-scheme>
               </event:remote-cluster-channel-scheme>
             </event:channel-scheme>
           </event:distribution-channel>
           <event:distribution-channel>
             <event:channel-name>Active Publisher 2</event:channel-name>
             <event:starting-mode system-property="channel.starting.mode">enabled</event:starting-mode>
             <event:restart-delay>12000</event:restart-delay>
             <event:channel-scheme>
               <event:remote-cluster-channel-scheme>
                 <event:remote-invocation-service-name>remote-site2</event:remote-invocation-service-name>
                 <event:remote-channel-scheme>
                   <event:local-cache-channel-scheme>
                     <event:target-cache-name>TEST</event:target-cache-name>
                   </event:local-cache-channel-scheme>
                 </event:remote-channel-scheme>
               </event:remote-cluster-channel-scheme>
             </event:channel-scheme>
           </event:distribution-channel>
         </event:distribution-channels>
       </event:distributor>
     </cache-mapping>
     </caching-scheme-mapping>
     <!--
       The following scheme is required for each remote-site when using a
       RemoteInvocationPublisher
     -->
     <remote-invocation-scheme>
       <service-name>remote-site1</service-name>
       <initiator-config>
         <tcp-initiator>
           <remote-addresses>
             <socket-address>
               <address>localhost</address>
               <port>20001</port>
             </socket-address>
           </remote-addresses>
           <connect-timeout>2s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
           <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
       </initiator-config>
     </remote-invocation-scheme>
     <remote-invocation-scheme>
       <service-name>remote-site2</service-name>
       <initiator-config>
         <tcp-initiator>
           <remote-addresses>
             <socket-address>
               <address>localhost</address>
               <port>20002</port>
             </socket-address>
           </remote-addresses>
           <connect-timeout>2s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
           <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
       </initiator-config>
     </remote-invocation-scheme>
     <distributed-scheme>
       <scheme-name>UserCacheDB</scheme-name>
       <service-name>DistributedCache</service-name>
       <serializer>
         <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
         <init-params>
           <init-param>
             <param-type>String</param-type>
             <param-value>pof-config.xml</param-value>
           </init-param>
         </init-params>
       </serializer>
       <backing-map-scheme>
         <read-write-backing-map-scheme>
           <internal-cache-scheme>
             <local-scheme/>
           </internal-cache-scheme>
           <cachestore-scheme>
             <class-scheme>
               <class-name>au.com.vha.cpg.cachestore.publishing.CPGPublishingCacheStore</class-name>
               <init-params>
                 <init-param>
                   <param-type>java.lang.String</param-type>
                   <param-value>{cache-name}</param-value>
                 </init-param>
                 <init-param>
                   <param-type>java.lang.String</param-type>
                   <param-value>TEST</param-value>
                 </init-param>
               </init-params>
             </class-scheme>
           </cachestore-scheme>
         </read-write-backing-map-scheme>
       </backing-map-scheme>
       <listener />
       <autostart>true</autostart>
     </distributed-scheme>

    Hi,
    As it's possible that many things could fail (e.g. one site may fail while the other remains working), it's best to let the PublishingCacheStore do its own recovery. If you'd like to additionally use a CacheStore with Push Replication, you can configure a separate Event Channel for the CacheStore using a CacheStore Event Channel.
    The configuration options are here:
    http://tinyurl.com/aporpcx
    The source code (for a test) is available here:
    http://tinyurl.com/axse83s
    -- Brian
    Brian Oliver | Architect | Oracle Coherence 

  • Web-service call from Apache to Glassfish

    I have written static HTML pages which run on an Apache server, and I make a web-service call to a service hosted on a Glassfish server.
    I want to calculate the number of rows present in a selected Excel file. For this, I make a web-service call through Ajax, sending a file object to the service, which then reads the file and returns the number of rows present in it.
    But I could not make an Ajax POST with a file object.
    For example, I have an HTML form containing a file object (for which I have to calculate the number of rows). I have to send this form to the web-service for processing the file.
    My form is like this:
    <form id="myForm" action="http://www.mydomain.com:8080/myApp/jersey/myClass/calculateRows" method="POST" enctype="multipart/form-data" accept-charset="utf-8" name="submitForm">
            <input id="workbook" name="workbook" type="hidden"/>
    </form>
    For this I used:
        $("#myForm").ajaxSubmit(function(noOfRows) {
            alert(noOfRows);
        });
    In the web-service, I gave:
        @Path("myClass")
        public class myClass {
          @POST
          @Path("calculateRows")
          public Response calculateNoOfRows(@Context HttpServletRequest request)
              throws IOException, ServletException {
            int noOfRows = 0;
            String wbk = request.getParameter("workbook");
            for (Part part : request.getParts()) {
              if (!part.getName().equalsIgnoreCase("workbook")) {
                // code to calculate number of rows
                noOfRows = 100; // (for example)
              }
            }
            ResponseBuilder builder = Response.ok(String.valueOf(noOfRows));
            builder.header("Access-Control-Allow-Origin", "*");
            builder.header("Access-Control-Allow-Methods", "POST");
            // note: "Allow-Control-Allow-Headers" below is likely a typo for "Access-Control-Allow-Headers"
            builder.header("Allow-Control-Allow-Headers", "Origin,Connection,Keep-Alive,Accept-Encoding,Accept-Charset,Accept,User-Agent,Host,X-Requested-With");
            return builder.build();
          }
        }
    In Firebug, the URL given above does not appear in the Console tab.
    And in the Net tab, I see this:
        OPTIONS http://www.mydomain.com:8080/myApp/jersey/myClass/calculateRows
    I am not able to resolve the problem. Can anyone help?
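
    (That OPTIONS request is most likely the browser's CORS preflight for the cross-origin POST; custom headers such as X-Requested-With trigger one. A sketch of answering it explicitly, with an assumed header list:)
        @OPTIONS
        @Path("calculateRows")
        public Response preflight() {
          return Response.ok()
              .header("Access-Control-Allow-Origin", "*")
              .header("Access-Control-Allow-Methods", "POST, OPTIONS")
              .header("Access-Control-Allow-Headers", "Origin, Accept, X-Requested-With")
              .build();
        }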

    Hi,
    I would recommend implementing the web-service callout logic in a custom cachestore/cacheloader (see the CacheLoader.load() method) and configuring your rwbm with your cachestore implementation.
    While subclassing RWBM and overriding functionality can be done, the CacheLoader interface is the intended method designed to accomplish this kind of task. RWBM internal implementations may perform other internal bookkeeping and may vary over time.
    See also:
    http://download.oracle.com/otn_hosted_doc/coherence/351/com/tangosol/net/cache/CacheLoader.html
    http://wiki.tangosol.com/display/COH35UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH35UG/Sample+CacheStores
    thanks,
    -Rob
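
    A rough sketch of the CacheLoader route Rob describes (the endpoint URL and the one-line text response are assumptions):
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import com.tangosol.net.cache.AbstractCacheLoader;

        // Read-through loader: fetches the value from a web service on a cache miss.
        public class WebServiceCacheLoader extends AbstractCacheLoader
        {
            public Object load(Object key)
            {
                try
                {
                    URL url = new URL("http://example.com/service/value?key=" + key); // assumed endpoint
                    HttpURLConnection con = (HttpURLConnection) url.openConnection();
                    try (BufferedReader in = new BufferedReader(
                            new InputStreamReader(con.getInputStream(), "UTF-8")))
                    {
                        return in.readLine(); // assumed: value returned as one line of text
                    }
                    finally
                    {
                        con.disconnect();
                    }
                }
                catch (Exception e)
                {
                    throw new RuntimeException("Web-service load failed for key " + key, e);
                }
            }
        }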
