Local storage enabled per distributed-scheme

As the cache configuration XML allows you to specify local-storage as true or false for individual distributed-schemes, it would seem possible to have a cluster of nodes where each node is storage-enabled for a different subset of caches/schemes.
I have never done this myself; I have always enabled or disabled storage at the JVM level. Would having nodes with different caches/schemes storage-enabled be a valid thing to do? I am not quite sure why I might want to, but I was asked the question by one of our dev teams (I don't think they quite know why they would want to do it either).
I suppose what I really want to know is: has anyone done this before, and if we did it, is something likely to break?
Cheers,
JK.

Hi Jonathan,
Yes, it is doable; you would usually want to specify a different override Java system property for the storage-enabled flag in each service scheme.
One example of when you would want to do this is when different services store data in different partitioned cache services (usually also with access to a database from a cache server, e.g. via a cache store), and you want to deploy the server side of each service to different nodes for access-control or provisioning reasons.
In these cases it is even possible that the server side of one service is the client side of another service, meaning that the cache configuration of both services needs to exist in some JVMs.
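For illustration, here is a minimal sketch of such a configuration. The scheme, service, and property names are made up for the example; only the <local-storage system-property="..."> override mechanism itself is standard Coherence:

<distributed-scheme>
    <scheme-name>service-a-scheme</scheme-name>
    <service-name>ServiceA</service-name>
    <!-- illustrative property name; each service gets its own storage flag -->
    <local-storage system-property="example.servicea.localstorage">true</local-storage>
    <backing-map-scheme>
        <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>
<distributed-scheme>
    <scheme-name>service-b-scheme</scheme-name>
    <service-name>ServiceB</service-name>
    <local-storage system-property="example.serviceb.localstorage">true</local-storage>
    <backing-map-scheme>
        <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
</distributed-scheme>

A JVM started with -Dexample.servicea.localstorage=true -Dexample.serviceb.localstorage=false would then hold data for ServiceA only.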
I posted some additional info and configuration in this forum thread some months ago:
Re: Partitioned cache - where to put what config files?
In particular, look at the later posts in the thread.
Best regards,
Robert
Edited by: robvarga on Dec 5, 2008 4:16 PM
Added link to forum thread.

Similar Messages

  • Adobe Flash Player 10 - Enable Local Storage Issue

    Hi all,
    Having an issue after upgrading to Flash Player 10 from 9. Some chat room applications are indicating that they cannot run since local storage is not enabled. If I log into the PC as the domain admin, everything works fine. If a regular user logs in, that message appears. What has Adobe changed in Flash Player 10 that I need to update?
    Thanks.

    Me too!
    On both IE8 and Firefox. Win 7 32 bit on IBM T42 - 1GB ram.
    Old laptop I know but the 10.1.x.x Flash Players worked just fine. Problem started with 10.2.152.26 flash player.
    Now BBC TV live and iPlayer work just fine, however YouTube does not - audio but no video. Just a black rectangle where the video should be.
    Right click to get the menu and "settings" is greyed out. However select the pop-out option and the video plays. Right click on the pop-out and "settings" is available. So deselect "enable hardware acceleration", close the pop-out and refresh the window (F5) and now YouTube videos play. Switch hardware acceleration back on and now they don't.
    This video works fine with hardware acceleration enabled. http://www.adobe.com/products/flashplayer/features/video/h264/
    These also work. http://www.adobe.com/devnet/flashplayer/stagevideo.html
    Something broken in 10.2, I think.

  • Distributed Scheme vs Local Scheme - Execution Time

    Hello,
         I am retrofitting an existing application to use Coherence. This application uses one of the JDK's implementations of the Map interface to store a large number of objects. I have modified the application to use Coherence's NamedCache.
         I am running a test case in a single node cluster (only one JVM). When I configure the cache to use a local scheme my test case executes in 105 milliseconds. When I configure the cache to use a distributed scheme my test case executes in 72000 milliseconds. I am surprised by this since there should not be any network traffic in a single node cluster.
         Would you please help me to understand this? I have included the Coherence cache configuration below.
         <cache-config>
              <caching-scheme-mapping>
                   <cache-mapping>
                        <cache-name>*</cache-name>
                        <!--scheme-name>LocalCache</scheme-name-->
                        <scheme-name>DistributedCache</scheme-name>
                   </cache-mapping>
              </caching-scheme-mapping>
              <caching-schemes>
                   <distributed-scheme>
                        <scheme-name>DistributedCache</scheme-name>
                        <service-name>DistributedCache</service-name>
                        <backing-map-scheme>
                             <local-scheme>
                                  <scheme-ref>LocalCache</scheme-ref>
                             </local-scheme>
                        </backing-map-scheme>
                        <backup-count>0</backup-count>
                        <autostart>true</autostart>
                   </distributed-scheme>
                   <local-scheme>
                        <scheme-name>LocalCache</scheme-name>
                   </local-scheme>
              </caching-schemes>
         </cache-config>
         Thank you,
         R. Vaessen

    Hi Robert,
         according to this post from Dimitri in the "question about parallel-aggregators" thread,
         there is a change request (COH-1013) about it.
         However, if you are concerned only about network transmission latency, you should be aware that it will probably not improve that.
         The cache entry is serialized on the client that puts the data into the cache. This serialized form travels to the primary cache server node (which sends it on to the backups), and the primary node stores it in this form.
         When someone requests that entry, the same serialized form is sent back to the requestor, so getting and putting objects does not involve any serialization/deserialization on the cache servers.
         If there are indexes defined on the cache, then the serialized form is also deserialized upon putting it into the backing map.
         If filters or aggregators or entry-processors are executed on the entries which do not have indexes to be leveraged, then the serialized form of candidate entries is deserialized.
         If you wanted to store entries in object form and serialize them every time they are sent to a client, you would see a performance decrease, and probably a negative effect on GC as well, compared to the current implementation, if you only put and get values to and from the cache.
         Also, entries in object form consume more memory than they do in serialized form.
         Therefore, the only advantage of storing entries in object form is seen when you execute queries, aggregators, or entry-processors over non-indexed attributes; in that case you do not have to pay the cost of deserializing each entry upon executing the evaluate() method of the filter or upon getting the value from the entry.
         If you want, you can achieve this effect even now, but at a very large memory cost: create a method which returns the object itself, index that method with a reflection extractor, and use an index-aware filter and the QueryMap.extract method to access the object in the index instead of the entry itself. In practice you should not do that, if possible, due to the additional memory cost.
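         As a rough sketch of that trick (the Trade class, its getSelf() method, and the cache name are hypothetical; only addIndex and ReflectionExtractor are standard API):

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import com.tangosol.util.extractor.ReflectionExtractor;

         public class SelfIndexExample {
              // Hypothetical value type; getSelf() exists only so that the whole
              // deserialized object can be placed in an index.
              public static class Trade implements java.io.Serializable {
                   public Trade getSelf() {
                        return this;
                   }
              }

              public static void main(String[] args) {
                   NamedCache cache = CacheFactory.getCache("trades"); // hypothetical cache name
                   // The index now holds deserialized Trade objects, so index-aware filters
                   // can evaluate them without deserializing each entry (at the cost of
                   // keeping a second, object-form copy of every value in memory).
                   cache.addIndex(new ReflectionExtractor("getSelf"), false, null);
              }
         }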
         Best regards,
         Robert

  • Local Storage cannot be enabled

    Just yesterday, I started having problems with Adobe Flash Player 10 on my Vista computer.  I use both IE and Firefox, and no matter which browser I use, I get the message that local storage has been disabled and I need to enable it.  When I go to the Global Settings page to enable it, I am not allowed to check the box to enable the storage.  I am told that Adobe is aware of this bug.  It is named FP-1914.  What can I do about this?  I have tried downloading an older version of Flash Player, but I cannot figure out which file(s) I need to open to install it.
    Sue

    Hi, if you sent a screenshot it didn't come through, because I think that feature has been disabled for a time.
    I use all of the Settings, have not had any problem with any of them, and have been able to set them. I use Windows and Windows Media Player.
    Others that I have worked with, and am currently working with, have mentioned they can't access one of the Settings panels or even the site itself. However, it was due to not having the correct Flash Player installed, along with the correct ActiveX Control, whether for IE or other browsers. Once their Flash Player was installed correctly, they had no problem with any Settings.
    I am not familiar with Pandora, but would only make this comment: Flash Player is a browser plug-in, and sometimes among a user's add-ons there are some that conflict with the browser itself, with each other, or with the Flash ActiveX Control.
    Perhaps an add-on is conflicting with Pandora, since it is Flash-enabled as you say. Just mentioning a possibility.
    Also, you may want to make sure your Flash Player is installed correctly, since sometimes with an uninstall/install one old file doesn't get removed.
    Thanks,
    eidnolb

  • Cachestore for distributed scheme not getting invoked to "load" the entries

    Hi,
    We have a distributed scheme with CacheStore as below,
    SERVER PART
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>          
         <cache-mapping>
                   <cache-name>ReferenceData-Cache</cache-name>
                   <scheme-name>RefData_Distributed_Scheme</scheme-name>
         </cache-mapping>
         <!--
         FEW OTHER DISTRIBUTED CACHE
         -- >
    </caching-scheme-mapping>
    <caching-schemes>
    <!-- definition of other cache schemes including one proxy scheme -->
    <distributed-scheme>
         <scheme-name>RefData_Distributed_Scheme</scheme-name>
         <service-name>RefData_Distributed_Service</service-name>     
         <serializer>
         <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>                    
         <init-params>
              <init-param>
                   <param-type>string</param-type>
                   <param-value>TradeEngine-POF.xml</param-value>
              </init-param>
         </init-params>     
         </serializer>                         
         <backing-map-scheme>
         <read-write-backing-map-scheme>
         <internal-cache-scheme>
         <local-scheme>
         <expiry-delay>1m</expiry-delay>
         </local-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
         <class-scheme>
         <class-name>com.csfb.fid.gtb.referencedatacache.cachestore.RefDataCacheStore</class-name>     
         <init-params>
                        <init-param>
                             <param-type>string</param-type>
                             <param-value>{cache-name}</param-value>
                        </init-param>
                        </init-params>                    
         </class-scheme>
         </cachestore-scheme>
         <read-only>true</read-only>
         <refresh-ahead-factor>.5</refresh-ahead-factor>
         </read-write-backing-map-scheme>
         </backing-map-scheme>
         <backup-count>1</backup-count>
         <autostart system-property="tangosol.coherence.distributed-service.enabled">false</autostart>           
    </distributed-scheme>
    </caching-schemes>
    </cache-config>
    The above configuration is used on the TCP extend proxy node with localstorage=false.
    There is a similar configuration on the storage node (see the launch sketch below):
    - with no proxy,
    - with the same "ReferenceData-Cache" (autostart=true),
    - and localstorage=true.
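    (Given the system-property overrides in the configuration above, the two node types are started roughly as follows; exact class path and main class omitted:)
    java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.distributed-service.enabled=false ... com.tangosol.net.DefaultCacheServer   (proxy node)
    java -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.distributed-service.enabled=true ... com.tangosol.net.DefaultCacheServer   (storage node)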
    Following is my CacheStore implementation.
    NOTE: this CacheStore is only for loading cache entries from the backing store, i.e. from some Excel file in my case; only the load() and loadAll() methods are implemented.
    NO store() or storeAll().
    package com.csfb.fid.gtb.referencedatacache.cachestore;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;
    import com.creditsuisse.fid.gtb.common.FileLogger;
    import com.csfb.fid.gtb.referencedatacache.Currency;
    import com.csfb.fid.gtb.utils.refdada.DBDetails;
    import com.csfb.fid.gtb.utils.refdada.ReferenceDataReaderUtility;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CacheStore;
    public class RefDataCacheStore implements CacheStore {
         private DBDetails dbDetails = null;
         private ReferenceDataReaderUtility utils = null;

         public RefDataCacheStore(String cacheName) {
              System.out.println("RefDataCacheStore constructor..");
              //dbDetails = DBDetails.getInstance();
              utils = new ReferenceDataReaderUtility();
         }

         // Load a single currency from the reference-data source.
         public Object load(Object key) {
              return utils.readCurrency(key);
         }

         // Read-only cache store: the store/erase operations are intentionally no-ops.
         public void store(Object oKey, Object oValue) {
         }

         public void erase(Object oKey) {
         }

         public void eraseAll(Collection colKeys) {
         }

         // Loads all currencies and builds the keys itself, regardless of which
         // keys were requested in colKeys.
         public Map loadAll(Collection colKeys) {
              System.out.println("RefDataCacheStore loadAll..");
              Map<String, Object> objectMap = new HashMap<String, Object>();
              List<Object> list = utils.readAllCurrencies();
              Iterator<Object> listItr = list.iterator();
              while (listItr.hasNext()) {
                   Object obj = listItr.next();
                   if (obj != null) {
                        String key = "CU-" + ((Currency) obj).getId();
                        objectMap.put(key, (Currency) obj);
                   }
              }
              return objectMap;
         }

         public void storeAll(Map mapEntries) {
         }
    }
    CLIENT PART
    I connect to this cache as an Extend client with the following config file,
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                        <cache-name>ReferenceData-Cache</cache-name>
                        <scheme-name>coherence-remote-scheme</scheme-name>
              </cache-mapping>     
         </caching-scheme-mapping>
         <caching-schemes>
              <remote-cache-scheme>
                   <scheme-name>coherence-remote-scheme</scheme-name>
                   <initiator-config>
                        <serializer>
                             <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>                         
                             <init-params>
                                  <init-param>
                                       <param-type>string</param-type>
                                       <param-value>TradeEngine-POF.xml</param-value>
                                  </init-param>
                             </init-params>                         
                        </serializer>
                        <tcp-initiator>
                             <remote-addresses>                              
                                  <socket-address>
                                       <address>169.39.30.182</address>
                                       <port>9001</port>
                                  </socket-address>
                             </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                             <request-timeout>3000s</request-timeout>
                        </outgoing-message-handler>                                   
                   </initiator-config>
              </remote-cache-scheme>
         </caching-schemes>
    </cache-config>
    PROBLEM
    From my test case (using the Extend client file above as configuration), when I try to get a cache handle for this cache, as
    refDataCache = CacheFactory.getCache("ReferenceData-Cache");
    I get the following error on the server side,
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2): BackingMapManager com.tangosol.net.DefaultConfigurableCacheFactory$Manager: failed to instantiate a cache: ReferenceData-Cache
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2):
    java.lang.IllegalArgumentException: No scheme for cache: "ReferenceData-Cache"
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:507)
         at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:3486)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:22)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.collections.WrapperMap.put(WrapperMap.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.put(Grid.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$StorageIdRequest.onReceived(DistributedCache.CDB:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    However, even with this error I am able to do normal put operations, and get operations against those puts.
    But when I try to get a cache entry which doesn't exist in the cache, like
    refDataCache.get("CU-1");
    I expected that the call would go to RefDataCacheStore.load(), but in my case I don't see that happening.
    I also ran my JVMs in debug mode and put a class-load breakpoint on RefDataCacheStore, but even that is not hit. I have RefDataCacheStore in my server classpath.
    Hope to see a reply on this soon.
    Thanks
    Manish

    Hi Manish,
    <<previous advice deleted>>
    user13113666 wrote:
    Hi,
    I have my server picking up the correct configuration files for "ReferenceData-Cache". In my local jconsole I can see that the service named "RefData_Distributed_Service" is started under the service MBean. I am also able to perform a put and get on "ReferenceData-Cache", and I can even see the StorageManager MBean showing my inserted entries for "RefData_Distributed_Service".
    With the local jconsole, are you monitoring the server/proxy node, or the TCP*Extend client node?
    The client can have the service with the server still not having it.
    Could you please post the startup log for the storage node on the server...
    Best regards,
    Robert
    Edited by: robvarga on May 17, 2010 12:33 PM

  • Cache Mapping & Distributed Scheme not working.

    I have a set of cache mappings and a corresponding distributed scheme defined. When I run it on its own it works fine, but when I add the same cache mapping and distributed scheme to a set of cache mappings and distributed schemes (i.e. HibernateCacheStore) which are all bound to a service-name, I get the message "com.tangosol.net.RequestPolicyException: No storage-enabled nodes exist for service .... "
    I am using a cache config in which local storage is enabled on the server side, and my application connects to it with localstorage=false. I have tried the system property option as well, but to no avail.
    I have tried it using Coherence standalone as well as running it from JBoss. The same behavior persists.
    My intention is to keep this cache mapping on a different service from the existing one (i.e. service-name).
    Is anyone aware of any conflicts when simulating this type of scenario? Looking forward to some pointers.
    I am using Coherence 3.6.1 with JBoss 5.x.
    Edited by: Kapil Naudiyal on Jan 1, 2012 8:47 PM
    A little more research, to be more specific. I tried the following combinations.
    Scenario 1 (separate processes):
    Coherence server ----------> client to initialize the cache
                     ----------> client to use the cache
    This works fine, as mentioned above.
    Scenario 2 (combined):
    Coherence server + client to initialize the cache ----------> client to use the cache
    Here the cache doesn't work. This scenario is implemented within JBoss. Is there a limitation that an embedded cache instance loads only one service?
    Thanks in advance for any insights.

    Hi,
    The message "com.tangosol.net.RequestPolicyException: No storage-enabled nodes exist for service .... " implies that your clients are unable to coonect to storage-enabled nodes so you need to ensure if clients are able to connect to the Coherence servers. The easier way to find out is to use the JMX and then look for node details or cluster size. Setting up JMX is quite simple and documentation can be found here - http://docs.oracle.com/cd/E15357_01/coh.360/e15723/manage_jmx.htm
    Hope this helps!
    Cheers,
    NJ

  • Query about local storage

    Hi,
         I have a query about local storage.
         I have a machine that hosts WebLogic and Tangosol. I have an EJB that accesses a distributed cache, i.e. NamedCache cache = CacheFactory.getCache("MyCache").
         I modified tangosol-coherence.xml, set local-storage to false (for the distributed cache), and replaced the file in coherence.jar.
         I am using an overflow scheme, and the back map uses a disk scheme.
         I also start a separate standalone instance of Tangosol, and I set the local-storage system property to true for the standalone instance.
         I start the standalone instance first and then WebLogic.
         The idea is to ensure that the Tangosol instance in WebLogic (the WebLogic JVM) does not participate in storing data (hence local storage false).
         Only the JVM for the standalone instance should store data (hence local storage true, via the system property).
         I wanted to know whether the property "local-storage" is pertinent to a member (machine) or to a JVM.
         The reason for this doubt: as I am using a disk scheme, Tangosol creates a file for the overflow (e.g. lh014402~.tp). I can see two such files, when ideally I would have wanted only one, for the standalone Tangosol instance.
         -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014402~.tp
         -rw-r--r-- 1 zephyr users 8364032 2005-06-23 17:02 lh014403~.tp
         Can you please let me know if we can configure Tangosol so that we have two separate instances running, with local storage false for one and true for the other?
         Awaiting your reply
         Thanks
         Vinay

    I would suggest leaving the default 'local-storage' value set to 'true' in the tangosol-coherence.xml and just use the JVM argument to control the local storage of each individual node. Then start the stand alone instance normally (I assume you are using the com.tangosol.net.DefaultCacheServer) and start the WebLogic instance with the following:
         java [...] -Dtangosol.coherence.distributed.localstorage=false [...]
         Hope this helps.
         Later,
         Rob Misek
         Tangosol, Inc.

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, i know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg), there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!

  • Local Storage on IPad and Iphone is missing IOS7

    Hi,
    I have a jQuery Mobile app working fine in iOS 6; after upgrading to iOS 7 on the iPad and iPhone, the app is crashing when returning lists of data that previously worked. Our app works exclusively with the local storage code in all modern browsers, allowing offline usage. It seems the iOS 7 upgrade is disrupting this somehow. I cannot see the Local Storage icon in the Web Developer tools, which I am expecting to see.
    I can load around 40 list items into jQuery Mobile; after that Safari crashes and the Web Developer disappears. Any thoughts welcome.
    Harvey

    This appears to be a known bug, which is REALLY annoying me. Whoever's idea it was to disable cookies by default should be shot! I'm still fumbling around trying to get this going, but it's not proving easy...
    http://stackoverflow.com/questions/20160499/enable-cookies-for-ios-7-in-phonegap-build

  • HT4191 iPhone Local Storage "My iPhone" - How do you create this folder for use by the Notes app on an iPhone or iPad, if you want to keep some notes only on your device and not in a cloud environment associated with an e-mail account?

    iPhone Local Storage "My iPhone" - How do you create this folder for use by the Notes app on a iPhone or iPad?  If I want to keep some notes only on my device and not in a cloud environment associated with an e-mail account.  I've seen reference to the  "My iPhone" local storage put no mention on how you create this folder or access this folder within the Notes app.  I realize storing information in a local storage like this provides no syncing between other iDevices but that is exactly what I'm looking for.  I'm running iOS7.0.4 on a iPhone 5S, and a iPad Air.  Any help would be greatly appreciated.

    If you go to Settings > Notes > Default Account you will see "On My iPhone" as the default account and the only choice if you have not enabled syncing Notes in Settings >iCloud or Settings > Mail, Contacts, Calendars. If you have enabled syncing you can still select "On My iPhone" as the default account. When you are in the Notes app you won't see any accounts listed if you have not enabled syncing because they are all in the On My iPhone account and that is the only place possible. It is not a folder that you create.

  • Multiple caches use same distributed scheme, what about cachestore config?

    Hi,
    If a distributed scheme is configured with a read-write-backing-map, will ALL caches using that scheme be required to define the cachestore configuration? For example, we have two caches, "ExpiredSessions" and "Customer", which use the same distributed-scheme. But only the "ExpiredSessions" cache needs a read-write-backing-map (for transaction lite) AND needs to be persisted to the DB. The "Customer" cache does NOT need a read-write-backing-map AND does NOT need to be persisted to the DB. Currently, we have the following configuration, which also has "write-delay" and "cache-store" settings for the "Customer" cache.
    Is it possible not to have the cache store configuration (write-delay, cache-store) for the "Customer" cache, even though it uses the same distributed-scheme as the "ExpiredSessions" cache (which needs the cache store configuration)? We think it could remove some overhead and improve efficiency for the "Customer" cache operations.
    Or is it required to have a separate distributed-scheme for the "Customer" cache without the cache store configuration? But would that then use separate/additional thread pools for the distributed service?
    Any suggestions?
    Thanks in advance for your help.
    <cache-mapping>
         <cache-name>ExpiredSessions</cache-name>
         <scheme-name>default-distributed</scheme-name>
         <init-params>
              <init-param>
                   <param-name>expiry-delay</param-name>
                   <param-value>2s</param-value>
              </init-param>
              <init-param>
                   <param-name>write-delay</param-name>
                   <param-value>1s</param-value>
              </init-param>
              <init-param>
                   <param-name>cache-store</param-name>
                   <param-value>xxx.xxx.DBCacheStore</param-value>
              </init-param>
         </init-params>
    </cache-mapping>
    <cache-mapping>
    <cache-name>Customer</cache-name>
    <scheme-name>default-distributed</scheme-name>
    <init-params>
    <init-param>
    <param-name>cache-store</param-name>
    <param-value>xxx.xxx.EmptyCacheStore</param-value>
    </init-param>
    <init-param>
    <param-name>write-delay</param-name>
    <param-value>24h</param-value>
    </init-param>
    </init-params>
    </cache-mapping>
    <!--
    Default Distributed pricing-distributedcaching scheme.
    -->
    <distributed-scheme>
    <scheme-name>default-distributed</scheme-name>
    <service-name>XXXDistributedCache</service-name>
    <thread-count>16</thread-count>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <scheme-ref>rw-bm</scheme-ref>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
    </distributed-scheme>

    Hi,
    Yes, you can use the same service for different caches with different configurations. You define a base scheme carrying the common service parameters, and each cache then gets its own scheme (not its own service) referring to that base configuration along with its own settings. For example,
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>ExpiredSessions</cache-name>
                   <scheme-name>default-expiry</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>2s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>write-delay</param-name>
                             <param-value>1s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>cache-store</param-name>
                             <param-value>xxx.xxx.DBCacheStore</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>Customer</cache-name>
                   <scheme-name>default-customer</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <!-- Default Distributed pricing-distributedcaching scheme. -->
         <caching-schemes>
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <service-name>XXXDistributedCache</service-name>
                   <thread-count>16</thread-count>
                   <autostart>true</autostart>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>default-customer</scheme-name>
                   <scheme-ref>default-distributed</scheme-ref>
                   <backing-map-scheme>
                        <local-scheme />
                   </backing-map-scheme>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>default-expiry</scheme-name>
                   <scheme-ref>default-distributed</scheme-ref>
                   <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <scheme-ref>rw-bm</scheme-ref>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
         </caching-schemes>
    Hope this helps!
    Cheers,
    NJ

  • "error: thread-local storage not supported for this target"

    I have a program that uses the __thread specifier, to be run on Solaris 9/UltraSparc.
    I am not able to compile it using gcc 3.4.4 or 4.0.4; both emit the message "error: thread-local storage not supported for this target".
    xz@gamera% gcc -v -Wall -D_REENTRANT -c -o func_stack.o func_stack.c
    Reading specs from /opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/specs
    Configured with: ../srcdir/configure --prefix=/opt/gcc/3.4.4 --disable-nls
    Thread model: posix
    gcc version 3.4.4
    /opt/gcc/3.4.4/libexec/gcc/sparc-sun-solaris2.8/3.4.4/cc1 -quiet -v -D_REENTRANT -DMESS func_stack.c -quiet -dumpbase func_stack.c -mcpu=v7 -auxbase-strip func_stack.o -Wall -version -o /var/tmp//cc0poHSN.s
    ignoring nonexistent directory "/usr/local/include"
    ignoring nonexistent directory "/opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/../../../../sparc-sun-solaris2.8/include"
    #include "..." search starts here:
    #include <...> search starts here:
    /opt/gcc/3.4.4/include
    /opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/include
    /usr/include
    End of search list.
    GNU C version 3.4.4 (sparc-sun-solaris2.8)
            compiled by GNU C version 3.4.4.
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    func_stack.c:16: error: thread-local storage not supported for this target
    func_stack.c:17: error: thread-local storage not supported for this target
    func_stack.c:19: error: thread-local storage not supported for this target
    xs@gamera% gcc -v -D_REENTRANT  -c -o func_stack.o func_stack.c
    Using built-in specs.
    Target: sparc-sun-solaris2.9
    Configured with: /net/clpt-v490-0/export/data/bldmstr/20070711_mars_gcc/src/configure --prefix=/usr/sfw --enable-shared --with-system-zlib --enable-checking=release --disable-libmudflap --enable-languages=c,c++ --enable-version-specific-runtime-libs --with-cpu=v9 --with-ld=/usr/ccs/bin/ld --without-gnu-ld
    Thread model: posix
    gcc version 4.0.4 (gccfss)
    /pkg/gcc/4.0.4/bin/../libexec/gcc/sparc-sun-solaris2.9/4.0.4/cc1 -quiet -v -I. -iprefix /pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/ -D__sparcv8 -D_REENTRANT -DMESS func_stack.c -quiet -dumpbase func_stack.c -mcpu=v9 -auxbase-strip func_stack.o -version -m32 -o /tmp/ccjsdswh.s -r /tmp/cc2w4ZRo.ir
    ignoring nonexistent directory "/pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/../../../../sparc-sun-solaris2.9/include"
    ignoring nonexistent directory "/usr/local/include"
    ignoring nonexistent directory "/usr/sfw/lib/gcc/sparc-sun-solaris2.9/4.0.4/include"
    ignoring nonexistent directory "/usr/sfw/lib/../sparc-sun-solaris2.9/include"
    #include "..." search starts here:
    #include <...> search starts here:
    /pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/include
    /usr/sfw/include
    /usr/include
    End of search list.
    GNU C version 4.0.4 (gccfss) (sparc-sun-solaris2.9)
            compiled by GNU C version 4.0.4 (gccfss).
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    func_stack.c:16: error: thread-local storage not supported for this target
    func_stack.c:17: error: thread-local storage not supported for this target
    func_stack.c:19: error: thread-local storage not supported for this target
    Just as a comparison, the corresponding output from compiling another file, which does not have a __thread declaration, is as follows:
    xz@gamera% gcc -v -Wall -D_REENTRANT -c -o common.o common.c
    Reading specs from /opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/specs
    Configured with: ../srcdir/configure --prefix=/opt/gcc/3.4.4 --disable-nls
    Thread model: posix
    gcc version 3.4.4
    /opt/gcc/3.4.4/libexec/gcc/sparc-sun-solaris2.8/3.4.4/cc1 -quiet -v -D_REENTRANT -DMESS common.c -quiet -dumpbase common.c -mcpu=v7 -auxbase-strip common.o -Wall -version -o /var/tmp//cc4VxrLz.s
    ignoring nonexistent directory "/usr/local/include"
    ignoring nonexistent directory "/opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/../../../../sparc-sun-solaris2.8/include"
    #include "..." search starts here:
    #include <...> search starts here:
    /opt/gcc/3.4.4/include
    /opt/gcc/3.4.4/lib/gcc/sparc-sun-solaris2.8/3.4.4/include
    /usr/include
    End of search list.
    GNU C version 3.4.4 (sparc-sun-solaris2.8)
            compiled by GNU C version 3.4.4.
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    /usr/ccs/bin/as -V -Qy -s -xarch=v8 -o common.o /var/tmp//cc4VxrLz.s
    /usr/ccs/bin/as: Sun WorkShop 6 update 2 Compiler Common 6.2 Solaris_9_CBE 2001/04/02
    Note that the last 2 lines seem to suggest that a Sun assembler is used as the back-end of gcc. I am not sure whether the failure to compile the first file (with __thread) was due to an incompatibility with this Sun assembler. In the first case, the error message was emitted before these 2 lines were printed.
    I further read a post about gcc 3.3.3's inability to compile code that has __thread in it, on HP-UX 11.11: http://forums12.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1216595175060+28353475&threadId=1148976 The conclusion seems to be that "the 2.17 GNU assembler did not support thread local storage", and gcc sees that and thus disallows TLS.
    If the assembler is the culprit, does anyone know whether this "Sun WorkShop 6 update 2" assembler in my installation can work with TLS? And how did a Sun assembler become the back-end of gcc? I read that gas (the GNU assembler) is the default back-end of gcc. (How) can one specify which assembler gcc should use?
    As an aside, I am able to compile my file on this same Solaris 9/UltraSparc platform using the Sun Studio 12 C Compiler:
    xz@gamera% cc -V -# -D_REENTRANT  -c -o func_stack.o func_stack.c
    cc: Sun C 5.9 SunOS_sparc Patch 124867-01 2007/07/12
    ### Note: NLSPATH = /pkg/SUNWspro/12/prod/bin/../lib/locale/%L/LC_MESSAGES/%N.cat:/pkg/SUNWspro/12/prod/bin/../../lib/locale/%L/LC_MESSAGES/%N.cat
    ###     command line files and options (expanded):
    ### -c -D_REENTRANT  -V func_stack.c -o func_stack.o
    /pkg/SUNWspro/12/prod/bin/acomp -xldscope=global -i func_stack.c -y-fbe -y/pkg/SUNWspro/12/prod/bin/fbe -y-xarch=generic -y-xmemalign=8i -y-o -yfunc_stack.o -y-verbose -y-xthreadvar=no%dynamic -y-comdat -xdbggen=no%stabs+dwarf2+usedonly -V -D_REENTRANT  -m32 -fparam_ir -Qy -D__SunOS_5_9 -D__SUNPRO_C=0x590 -D__SVR4 -D__sun -D__SunOS -D__unix -D__sparc -D__BUILTIN_VA_ARG_INCR -D__C99FEATURES__ -Xa -D__PRAGMA_REDEFINE_EXTNAME -Dunix -Dsun -Dsparc -D__RESTRICT -xc99=%all,no%lib -D__FLT_EVAL_METHOD__=0 -I/pkg/SUNWspro/12/prod/include/cc "-g/pkg/SUNWspro/12/prod/bin/cc -V -D_REENTRANT  -c -o func_stack.o " -fsimple=0 -D__SUN_PREFETCH -destination_ir=yabe
    acomp: Sun C 5.9 SunOS_sparc Patch 124867-01 2007/07/12
    Interestingly, the output no longer mentions the "/usr/ccs/bin/as: Sun WorkShop 6 update 2" assembler.

    Just as another comparison, I compiled a file without __thread on the Solaris 9/UltraSparc platform using gcc 4.0.4. Not surprisingly, it worked. But I no longer see mention of the Sun assembler, as in the gcc 3.4.4 case; nor did I see mention of the "GNU assembler", as in the gcc 4.0.4/Solaris 10/x86 case. Instead, I saw something called "iropt" and "cg". Does anyone know what these are?
    xz@gamera% gcc -v -Wall -D_REENTRANT -c -o common.o common.c
    Using built-in specs.
    Target: sparc-sun-solaris2.9
    Configured with: /net/clpt-v490-0/export/data/bldmstr/20070711_mars_gcc/src/configure --prefix=/usr/sfw --enable-shared --with-system-zlib --enable-checking=release --disable-libmudflap --enable-languages=c,c++ --enable-version-specific-runtime-libs --with-cpu=v9 --with-ld=/usr/ccs/bin/ld --without-gnu-ld
    Thread model: posix
    gcc version 4.0.4 (gccfss)
    /pkg/gcc/4.0.4/bin/../libexec/gcc/sparc-sun-solaris2.9/4.0.4/cc1 -quiet -v -iprefix /pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/ -D__sparcv8 -D_REENTRANT -DMESS common.c -quiet -dumpbase common.c -mcpu=v9 -auxbase-strip common.o -Wall -version -m32 -o /tmp/ccSGJIDD.s -r /tmp/ccKuJz76.ir
    ignoring nonexistent directory "/pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/../../../../sparc-sun-solaris2.9/include"
    ignoring nonexistent directory "/usr/local/include"
    ignoring nonexistent directory "/usr/sfw/lib/gcc/sparc-sun-solaris2.9/4.0.4/include"
    ignoring nonexistent directory "/usr/sfw/lib/../sparc-sun-solaris2.9/include"
    #include "..." search starts here:
    #include <...> search starts here:
    /pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4/include
    /usr/sfw/include
    /usr/include
    End of search list.
    GNU C version 4.0.4 (gccfss) (sparc-sun-solaris2.9)
            compiled by GNU C version 4.0.4 (gccfss).
    GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
    /pkg/gcc/4.0.4/SUNW0scgfss/4.0.4/prod/bin/iropt -F -xarch=v8plus -xchip=generic -O1 -xvector=no -xbuiltin=%none -xcache=generic -Qy -h_gcc -o /tmp/ccUl4mVM.ircg /tmp/ccKuJz76.ir -N/dev/null -is /tmp/ccSGJIDD.s
    /pkg/gcc/4.0.4/SUNW0scgfss/4.0.4/prod/bin/cg -Qy -xarch=v8plus -xchip=generic -OO0 -T3 -Qiselect-C0 -Qrm:newregman:coalescing=0 -xcode=abs32 -xcache=generic -xmemalign=8i -il /pkg/gcc/4.0.4/bin/../lib/gcc/sparc-sun-solaris2.9/4.0.4//gccbuiltins.il -xvector=no -xthreadvar=no%dynamic -xbuiltin=%none -Qassembler-ounrefsym=0 -Qiselect-T0 -Qassembler-I -Qassembler-U -comdat -h_gcc -is /tmp/ccSGJIDD.s -ir /tmp/ccUl4mVM.ircg -oo common.o

  • Live migration to HA failed leaving VHD on local storage and VM in cluster = Unsupported Cluster Configuration

    Hi all
    Fun one here, I've been moving non-HA VMs to a HA and everything has been working perfectly until now.  All this is being performed on Hyper-V 2012R2, Windows Server 2012R2 and VMM 2012R2.
    For some reason one of the VMs failed the migration with error 10608 "Cannot create or update a highly available virtual machine because Virtual Machine Manager could not locate or access Drive:\Folder". The odd thing is that the drive\folder is a local-storage one, and I selected a CSV in the migration wizard.
    The net result is that the VM is half configured into the cluster but the VHD is still on local storage.  Hence the "unsupported cluster configuration" error.
    The question is: how do I roll back? I either need to get the VM out of the cluster and back into a non-HA state, or move the VHD onto the CSV. Not sure if the latter is really an option.
    I've foolishly clicked "Ignore" on the repair so now I can't use the "undo" option (brain fade moment on my part).
    Any help gratefully received as I'm a bit stuck with this.
    Thanks
    Rob

    Hi Simar
    Thanks for the advice, I've now got the VM back in a stable state and running HA.
    Just to finish off the thread for future I did the following
    - Shutdown the VM
    - Remove the VM from the Failover Cluster Manager (as you say this did leave the VM configuration intact)
    - I was unable to import the VM as per your instructions so I copied the VHD to another folder on the local storage and took a note of the VM configuration.
    - Deleted the VM from VMM so this removed all the configuration details/old VHD.
    - Built a new VM using the details I saved from the point above
    - Copied the VHD into the new VMs folder and attached it to the VM.
    - Started it up and reconfigured networking
    - Use VMM to make the VM HA.
    I believe I have found the reason for the initial error: it appears there was an empty folder in the Snapshot folder, probably from an old checkpoint that hadn't been cleaned up properly when it was deleted.
    The system is up and running now so thanks again for the advice.
    Rob

  • Adobe Flash FAIL:  Adobe Flash Player local storage settings incorrect.  Module 'Resume' feature may not work on this computer.

    Using a Windows 2012 RDS environment, we have users connecting to a CPD website, and as part of the CPD they need to run a systems checker. When they run the systems checker they get the following error message: "Adobe Flash FAIL: Adobe Flash Player local storage settings incorrect. Module 'Resume' feature may not work on this computer." All users connect to this environment with Windows CE clients. I have checked the settings in Adobe Flash and they seem correct, but as each user has their own profile in the RDS session, is there something that I should be setting for each user? I have added the website to the trusted sites and it has made no difference. Any ideas?

    It sounds like what's happening is that Flash Player can't write or read from the local shared objects in the user's redirected home directory because we disallow traversing junctions in the broker process.  This behavior was disabled to address a vulnerability identified in some of John Forshaw's research into the IE broker last year.
    You can enable this behavior by adding the following setting to mms.cfg:
    EnableInsecureJunctionBehavior=1
    That said, you can probably gather from the name of the flag that we don't really recommend this approach, and disable this attack surface by default.  There's some risk that a network attacker could craft content that abuses fundamental issues with how Windows handles Junctions to write to arbitrary locations.
    Unfortunately, there's not a simple or easy workaround that I'm aware of (but it's been ages since I've administered a Windows domain) for this kind of NAS/SAN-backed terminal server environment where Flash is not able to access \Users\<user>\AppData\Roaming\Macromedia\Flash Player\ without traversing a junction.

  • EM for Coherence - Cannot automatically start departed storage enabled nodes

    Hi Guru,
    I have a cluster with 4 storage-enabled nodes. I want EM to monitor those 4 nodes and automatically bring up any that go down. So I set the "Nodes Replenish and Entity Discovery Alert Metric ->
    Cluster Size Change (To Replenish Nodes)" as follows:
             Warning Threshold: Not Defined
             Critical Threshold: 3
             Comparison Operator: <
             Occurrences Before Alert: 1
    I manually killed 2 storage nodes, hoping EM would automatically bring them back up. But unfortunately, this never happens. I cannot even see the correct "Severity Message" on the GUI; it always shows "0 nodes departed Coherence cluster".
    Did anyone have a similar problem? Any hints are appreciated!
    Thanks
    Hysun

    Hi,
    It looks like your cache servers have not used the correct cache configuration file, so they do not have a service with the name DistributedSessions. You can see this in your log output here:
    Services
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=5}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=5}
      PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
      ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=3, Version=3.0, OldestMemberId=2}
      Optimistic{Name=OptimisticCache, State=(SERVICE_STARTED), Id=4, Version=3.0, OldestMemberId=2}
      InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=5, Version=3.1, OldestMemberId=2}
    You said in the original post that you used the following JVM arguments:
    -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    ...but none of those specify the cache configuration to use (in your case session-cache-config.xml), so the node will use the default configuration file; the default name is coherence-cache-config.xml, which is inside the Coherence jar file.
    You need to add the following JVM argument:
    -Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    JK
