Partition Cache

Hi all,
Can anyone help me set up a partitioned cache for more than 2 nodes, with all the information on how to set up the server-side cache-config.xml and the client-side cache-config.xml?
The purpose is load balancing.
Thanks in advance,
Vinod Yadav

Hi Vinod,
The Best Practices document below can be useful for what you are trying to achieve:
http://coherence.oracle.com/display/COH35UG/Best+Practices
Thanks,
Cris
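
For a concrete starting point, here is a minimal sketch of what the server-side cache-config.xml might look like (scheme and service names are illustrative, not from this thread). Every storage-enabled node started with this file joins the same partitioned service, and Coherence spreads the partitions - and hence the load - across however many nodes join:

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>

Cluster-member clients can use the same file with storage disabled (-Dtangosol.coherence.distributed.localstorage=false); an Extend client would instead use a <remote-cache-scheme> pointing at a <proxy-scheme> running on the servers.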

Similar Messages

  • Using a partitioned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitioned cache (with the primary data on the heap) that uses off-heap storage for the backup data?
    I think it could be worthwhile to do so, as backup data is associated with a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data?
    In particular, what are the impacts on performance?
    Thanks.
    Regards,
    Dominique

    Hi,
    It seems that using a scheme for the backup-store is broken in the latest version of Coherence; I got an exception using your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in the debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackupMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229
    P.S. Using an asynchronous wrapper around disk-based backup storage should reduce the performance impact.
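    For reference, the backup store is selected with the <backup-storage> element of the distributed scheme. A minimal sketch (scheme name and sizes illustrative) using the stock off-heap type, rather than the scheme-based configuration that triggered the exception above:
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backup-storage>
        <!-- keep backup copies in NIO off-heap buffers instead of on the heap -->
        <type>off-heap</type>
        <initial-size>1MB</initial-size>
        <maximum-size>100MB</maximum-size>
      </backup-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>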

  • NullPointerException invoking Processor on Partitioned Cache

    Hi all,
    We are getting a NullPointerException when trying to invoke an EntryProcessor on Service B from within an EntryProcessor of Service A on a partitioned cache.
    The strange thing is that it works fine when there is only one node, but fails when there are two or more nodes. I guess that makes some sense, in that there won't be any serialization if there is only one node.
    Can anyone suggest what may be causing the problem from the stack trace?
    16:20:19,715 2012-05-02 16:20:19.715/3.272 Oracle Coherence GE 3.7.1.3 <D6> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor, member=3): Opened: Channel(Id=1136375156, Open=true, Connection=0x000001370E233D590A660314E7BFF3B54A24F82E01F98987CE652E1F3C5DD30E)
    16:20:29,913 2012-05-02 16:20:29.910/13.467 Oracle Coherence GE 3.7.1.3 <D5> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptorWorker:1, member=3): An exception occurred while processing a InvokeRequest for Service=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed request execution for TradesPartitionedCache service on Member(Id=3, Timestamp=2012-05-02 16:20:18.135, Address=10.102.3.20:8088, MachineId=63987, Location=site:,machine:LONW00067144,process:9356, Role=cache)) Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
         at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Unknown Source)
    Caused by: Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
         at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
         ... 2 more
    Caused by: Portable(java.lang.NullPointerException)
         at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
         at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
         at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
         ... 12 more
    thanks
    Edited by: bish on 02-May-2012 17:42

    Hi,
    Have you tested that your EntryProcessors both serialize and deserialize properly? The last time I saw a stack trace similar to that (yesterday, in fact) it was an error deserializing an EntryProcessor. If your code works with a single node then that points to a serialization issue.
    Unit testing all your POF classes is always a good idea...
     // Round-trip the processor through POF; fromBinary returns Object, so cast back:
     InvocableMap.EntryProcessor original = /* create the object you want to test */ null;
     ConfigurablePofContext pofContext = new ConfigurablePofContext("... name of your POF config file ...");
     Binary binary = ExternalizableHelper.toBinary(original, pofContext);
     InvocableMap.EntryProcessor result =
             (InvocableMap.EntryProcessor) ExternalizableHelper.fromBinary(binary, pofContext);
     // ... do some assertions to make sure the "result" matches the "original" ...
    On another note - calling a second EntryProcessor from inside the first EntryProcessor could lead to problems unless you are very careful to avoid thread starvation and cache/service re-entrancy.
    JK

  • What happens to lock on expired item in partitioned cache?

    With a partitioned cache, what happens if an object with a lock on it expires?
    In other words, if I put it in with an expiry, something locks it, and it expires while the lock is present, what happens?

    Hi mesocyclone,
    The lock/unlock API is completely orthogonal to the data-related API (get, put, invoke, etc.). Presence or absence of the data has no effect on the lock.
    Regards,
    Gene

  • Data processing in multiple caches in the same node

    We are using a partitioned cache to load data (multiple types of data) into multiple named caches. We plan to have all related data in one partition, and we have achieved this using key association.
    Now I want to do processing within that node and do some reconciliation of the data from the various sources. We tried an entry processor, but we want to consolidate data from multiple named caches on the node. In a very naive view, I think of each named cache as a table, and I am looking for a way to have a processor that does some processing on the related data.
    I see we could use a combination of an Invocable object, the Invocation service and entry processors, but I have been unable to implement it successfully.
    Can you please point me to any reference implementation where I can do the processing at the data node without transferring data back to the client?
    Also, any reference implementation of map-reduce (on the server side) in Coherence would be helpful.
    Regards
    Ganesan

    Hi Ganesan
    A common approach to performing processing in the grid is to execute background threads in response to backing map listener events. The processing is co-located with the data because the listener is called in the JVM that owns the data. The thread can then make Coherence calls to access caches just like any other Coherence client, as in the sketch following this reply.
    The Coherence Incubator has numerous examples of this at http://coherence.oracle.com/display/INCUBATOR/Home. The Incubator Common component includes an event processing package that simplifies the handling of events. See the messaging pattern for an example: Message.java and MessageEventManager.java.
    I am not sure I answered your question but I hope the information helps.
    Paul
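    To make the shape of that approach concrete, here is a minimal sketch (cache names and the reconciliation step are hypothetical; the conversion of backing-map event keys from internal binary form via the BackingMapManagerContext is elided for brevity). The listener fires on the JVM that owns the entry and hands the work to a background pool rather than doing it on the service thread:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ReconciliationListener extends MultiplexingMapListener {
        // Background pool so no cache calls are made on the service thread.
        private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

        @Override
        protected void onMapEvent(MapEvent evt) {
            if (evt.getId() == MapEvent.ENTRY_INSERTED) {
                final Object key = evt.getKey(); // internal form in a real backing-map listener
                POOL.submit(() -> {
                    // Key association keeps the related entries on this member,
                    // so these reads are local to the node.
                    Object trade = CacheFactory.getCache("trades").get(key);
                    Object position = CacheFactory.getCache("positions").get(key);
                    // ... reconcile trade against position ...
                });
            }
        }
    }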

  • Getting All Entries from a cache

    Hi Folks,
         Just a small but interesting observation. In an attempt to get back all the data from my partitioned cache I tried the following approaches:
         //EntrySet
         NamedCache cache = CacheFactory.getCache("MyCache");
         Iterator<Entry<MyKeyObj, MyObj>> iter = cache.entrySet().iterator();
         //iterate over the entries and get the values
         //KeySet & getAll
         NamedCache cache = CacheFactory.getCache("MyCache");
         Map results = cache.getAll(cache.keySet());
         Iterator<Entry<MyKeyObj, MyObj>> iter = results.entrySet().iterator();
         //iterate over the entries and get the values
         Retrieving ~47k objects from 4 nodes takes 21 seconds using the entrySet approach and 10 seconds using the keySet/getAll approach.
         Does that sound right to you? It implies that the entrySet iterator is lazily loaded, using get(key) for each entry.
         Regards,
         Max

    Hi Gene,
         I actually posted the question because we are currently performance-tuning our application, and there are scenarios where (due to a large amount of badly organized legacy code whose bottom layers were ported to Coherence) there are lots of invocations getting all the entries from some caches, sometimes even hundreds of times during the processing of a single HTTP request.
         In some cases (typically with caches having a low cache-size) we found that the entrySet-AlwaysFilter solution was way faster than the keySet-getAll solution, which was about as fast as the solution iterating over the cache (new HashMap(cache)).
         I just wanted to ask if there are some rules of thumb on up to what cache size it is efficient to use the AlwaysFilter on distributed caches, and where it starts to be better to use the keySet-getAll approach (from a naive test case, keySet-getAll seemed to be better upwards of a couple of thousand entries).
         Also, we are considering moving some of the caches (static data mostly, usually with less than 1000 entries, sometimes as few as a dozen entries in a named cache, and in a very few cases as many as 40000 entries) to a replicated topology, which is why I asked about the effect of using replicated caches.
         I expect the entrySet-AlwaysFilter approach to be slower than the iterating solution, since it effectively does the same and also has some additional filter evaluation to do.
         The keySet-getAll approach will be something similar to the iterating solution, I guess.
         What is to be known about the implementation of the values() method?
         Can it be worth using in some cases? Does it give us an instant snapshot in the case of replicated caches? Is it faster than entrySet(void) in the case of replicated caches?
         Thanks and best regards,
         Robert
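    For reference, the two bulk-retrieval idioms being compared look like this (cache name illustrative). The filter form runs as one parallel request against the storage members; the keySet/getAll form first ships all keys to the client and then fetches the values:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.filter.AlwaysFilter;
    import java.util.Map;
    import java.util.Set;

    NamedCache cache = CacheFactory.getCache("MyCache");
    // One parallel query; entries are returned in bulk by each storage member.
    Set entries = cache.entrySet(AlwaysFilter.INSTANCE);
    // Two round trips: pull all keys to the client, then fetch the values in bulk.
    Map values = cache.getAll(cache.keySet());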

  • Any way to get more statistics and response-time info from cache processes?

    I'm currently involved in an evaluation of Coherence for use as a global in-core cache/DB for data in the GB range. We'll try to run multiple cache processes per cache in order to stay below 512 MB of heap space per cache process.
    In order to tune cache access times and understand the dynamic behaviour of the app, we'd like to see some statistics for cache access, e.g.
    - requests processed per cache (process)
    - request answer time (per request: min, max, avg, median etc.)
    - request answer size (number of cache entries returned, size of entries etc.)
    - whatever else helps ...
    We are in a position to get this kind of statistics on the application side (when we call the Coherence APIs) but we need to see the statistics on a technical level per cache process.
    We know about JMX/MBeans, but that does not let us produce log files for statistical analysis, and there is no way to correlate requests from the application with any statistics information (what request caused what effect).
    Is there any way to tell the cache processes to provide this kind of (logging) info (measurements per request) and some summary?

    David,
    In addition to the statistics that you can collect at the client tier (measuring the latency of individual cache calls), you could use a number of attributes exposed by the ServiceMBean.
    If you look at a ServiceMBean for your partitioned cache service, the RequestAverageDuration and RequestMaxDuration attributes describe the latency of the underlying communication tier as seen from the client, and do not include the serialization/deserialization cost.
    The TaskAverageDuration, TaskMaxBacklog and ThreadAverageActiveCount attributes expose the latency and expenses as seen by the server side, excluding the network tier expenses.
    Also, there are a number of statistics exposed by the CacheMBean related to the underlying backing map efficiency, such as AverageGetMillis, AverageHitMillis, AverageMissMillis and AveragePutMillis.
    You can also create your own subclasses of the backing map implementation and implement custom MBeans that expose any additional information you wish to collect.
    Regards,
    Gene
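    A minimal sketch of polling those attributes programmatically (the service name and nodeId in the ObjectName are illustrative, and the node must be started with JMX management enabled, e.g. -Dtangosol.coherence.management=all):
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Look up the Coherence ServiceMBean on the local platform MBeanServer.
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name =
            new ObjectName("Coherence:type=Service,name=DistributedCache,nodeId=1");
    Object avg = server.getAttribute(name, "RequestAverageDuration");
    Object max = server.getAttribute(name, "RequestMaxDuration");
    System.out.println("RequestAverageDuration=" + avg + " RequestMaxDuration=" + max);
    Written to a file on a timer, this gives a per-process statistics log; correlating individual application requests with these aggregates still has to happen on the application side, as noted above.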

  • Client automatically cache the data got from cache server?

    Hi expert,
    I have 2 questions about the client local cache. Would you please help to give me some suggestion?
    1. Will the client automatically cache locally the data it gets from the cache server the first time, and automatically update the data in the local cache when it gets the same data from the cache server again? I went through the API reference but cannot find any API to query the data currently cached in the local cache.
    2. If the client does automatically cache the data it gets from the cache server, is there any way for a client to get the data events that happen to its local cache, such as an entry being created in, deleted from or updated in the local cache? In my opinion, when getting an entry from the cache server the first time, the MapListener's entry-created event should be triggered; when getting the same entry again, the entry-updated event should be triggered.
    However, I have tried a client with a replicated cache, a client with a partitioned cache, an extend client with a remote cache and a client with a local cache (the front-cache part of a near cache), and the client (with a MapListener set on the NamedCache object) does not get any event notification after getting data from the cache server. By the way, my listener is OK, since the entry-created and entry-updated events are triggered when putting data.
    Your suggestion is very appreciated. :)

    Hi
    If I were you I would read this: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/toc.htm
    and particularly the section about Near Caching here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/nearcache.htm#CDEFEAJG
    which is what you are asking about in your question.
    Near caching is how Coherence stores data locally, which is the answer to your first question. How near caching works is explained in the documentation.
    Events, which you ask about in your second question, are explained here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/delivereventsjava.htm#CBBIIEFA
    It might be that ContinuousQueryCache is closer to what you want. This is explained here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14510/queryabledatafabric.htm#sthref38 A ContinuousQueryCache is like having a subset of the underlying cache on the local client, which you can then listen to, etc.
    JK
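    A minimal sketch of the ContinuousQueryCache approach (cache name illustrative): the CQC keeps a local, continuously synchronized view of the remote cache, and a listener attached to it sees the insert/update/delete events the poster was expecting:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import com.tangosol.util.filter.AlwaysFilter;

    NamedCache remote = CacheFactory.getCache("example-cache");
    // Local view of everything matching the filter, kept in sync by events.
    ContinuousQueryCache local =
            new ContinuousQueryCache(remote, AlwaysFilter.INSTANCE);
    local.addMapListener(new MultiplexingMapListener() {
        @Override
        protected void onMapEvent(MapEvent evt) {
            System.out.println("local view changed: " + evt);
        }
    });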

  • How to put two cache nodes in one cluster..

    Please provide one sample file showing how to configure a cache cluster with two cache nodes, and tell me where I should keep that XML file. I have been struggling with this for the last two days, so any help is much appreciated.
    I am using Coherence for .NET as a client.
    Thanks in advance; I am unable to understand what is given in the documents.
    Thanks,
    krishna
    Edited by: krishna_ndlp on Jun 16, 2011 2:55 AM

    Krishna -
    As Steve pointed out, starting a Coherence cache server on multiple machines (or in multiple terminal windows on one machine) will result in a multiple-node cluster.
    The output from a cache server will show the cluster member set:
    Output from the first cache server (Id=1) starting:
    MasterMemberSet
    ThisMember=Member(Id=1, Timestamp=2011-06-16 10:53:46.231, Address=xxx.xxx.xxx.xxx:8088, MachineId=******,
    Location=site:acme.com, machine:acme-pc,process:5580, Role=CoherenceServer)
    OldestMember=Member(Id=1, Timestamp=2011-06-16 10:53:46.231, Address=xxx.xxx.xxx.xxx:8088, MachineId=******,
    Location=acme.com, machine:acme-pc, process:5580, Role=CoherenceServer)
    ActualMemberSet=MemberSet(Size=1, BitSetCount=2
    Member(Id=1, Timestamp=2011-06-16 10:53:46.231, Address=xxx.xxx.xxx.xxx:8088, MachineId=29922,
         Location=site:acme.com, machine:acme-pc,process:5580, Role=CoherenceServer)
    Output from the second cache server (Id=2) starting:
    MasterMemberSet
    ThisMember=Member(Id=2, Timestamp=2011-06-16 10:54:36.434, Address=xxx.xxx.xxx.xxx:8090, MachineId=******,
    Location=site:acme.com,machine:acme-pc, process:7952, Role=CoherenceServer)
    OldestMember=Member(Id=1, Timestamp=2011-06-16 10:53:46.231, Address=xxx.xxx.xxx.xxx:8088, MachineId=******,
    Location=site:acme.com, machine:acme-pc,process:5580, Role=CoherenceServer)
    ActualMemberSet=MemberSet(Size=2, BitSetCount=2
    Member(Id=1, Timestamp=2011-06-16 10:53:46.231, Address=xxx.xxx.xxx.xxx:8088, MachineId=******,
         Location=site:acme.com, machine:acme-pc,process:5580, Role=CoherenceServer)
    Member(Id=2, Timestamp=2011-06-16 10:54:36.434, Address=xxx.xxx.xxx.xxx:8090, MachineId=******,
         Location=site:acme.com, machine:acme-pc,process:7952, Role=CoherenceServer)
    Cache server output will also show the cache configuration file used when starting (for example):
    Oracle Coherence GE 3.6.0.1 <Info> (thread=main, member=n/a):
    Loaded cache configuration from "jar:file:/C:/coherence-3.6.0.1/coherence/lib/coherence.jar!/coherence-cache-config.xml"
    "coherence-cache-config.xml" is located in the coherence.jar. It contains prefixed wild-card cache-mappings for 'dist-*' partitioned cache, 'repl-*' replicated cache, etc. A generic wild-card cache-mapping of '*' defaults other caches to partitioned.
    Specific caches need to be configured within the cache configuration xml file specified (-Dtangosol.coherence.cacheconfig=<config-file-spec>) when the cache server is started. These caches are then accessible to all server started with the same cacheconfig.
    /Mark J

  • How to configure the levels of backup for a partitioned cache?

    hi all,
    As Coherence's documentation says, "Partitioned caches can be configured with as many levels of backup as desired, or zero if desired. Most installations use one backup copy (two copies total).", could you please tell me how? Great thanks!
    thanks,
    michael

    Hi Michael,
    there are several reasons why it is usually recommended to have at least 3 nodes:
    1. When a single node fails, Coherence after a short time (after the rebalance of the cluster completes) gets itself back into a state which is again capable of surviving a single failure (data is still backed up at least once). If there are only two nodes and only one of them remains, this is obviously not possible.
    2. Some of Coherence's protocols for determining what should happen after a cluster node becomes inaccessible behave differently when there are at least 3 nodes (majority wins), whereas when 2 nodes can't communicate with each other, neither of them can determine which one is faulty.
    As for the backup count (see the configuration sketch after this reply):
    Total memory consumption in the cluster, without overhead, is roughly (1 + backup-count) * dataset-size + 1 * index-size.
    I believe a backup count larger than the number of nodes does not have any additional effect; each node will store only a single copy of the data.
    BR,
    Robert
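    For the "how" itself: the number of backups is configured per service with the <backup-count> element of the distributed scheme (the default is 1). A minimal sketch with illustrative names:
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- two backup copies, i.e. three copies of the data in total -->
      <backup-count>2</backup-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>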

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one (see the skeleton after this reply). Configuration is the same either way: you specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    In a partitioned cache, each cache key has an owner node, which is determined algorithmically from the key itself and from the distribution of partitions among nodes (neither of which depends on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition-affinity-related settings; although if you did the latter, it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition: the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
    Best regards,
    Robert
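    If you do write your own, the shape is small. A hypothetical skeleton (the class name and the disk persistence are illustrative) whose class name would go in the <cachestore-scheme> element:
    import com.tangosol.net.cache.CacheStore;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    public class FileCacheStore implements CacheStore {
        // CacheLoader half: invoked on read-through misses.
        public Object load(Object key) { return null; /* read one entry from disk */ }
        public Map loadAll(Collection keys) {
            Map result = new HashMap();
            for (Object key : keys) {
                Object value = load(key);
                if (value != null) result.put(key, value);
            }
            return result;
        }
        // CacheStore half: invoked on write-through (or write-behind) updates.
        public void store(Object key, Object value) { /* persist one entry */ }
        public void storeAll(Map entries) { /* persist a batch */ }
        public void erase(Object key) { /* delete one entry */ }
        public void eraseAll(Collection keys) { /* delete a batch */ }
    }
    As described above, Coherence calls these methods only on the node that owns the key, so each entry is persisted by exactly one member.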

  • How to Test coherence cache configuration

    Hi,
    I have configured Coherence using the two config XMLs below. I started out by trying to configure a distributed cache scheme, but I am not sure if it has come up correctly. This configuration works fine from a caching point of view, and it even does the clustering, but my one doubt is: how can I test whether it is actually a distributed cache or a replicated cache?
    coherence-cache-config.xml
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>dist-ABCCache</cache-name>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
               <!-- Distributed caching scheme. -->
              <distributed-scheme>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
                   <lease-granularity>member</lease-granularity>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server1</address>
                                  <port>####</port>
                             </local-address>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    tangosol-coherence-override.xml
    <coherence>
         <cluster-config>
              <member-identity>
                   <cluster-name>MyCluster</cluster-name>
              </member-identity>
              <unicast-listener>
                   <well-known-addresses>
                        <socket-address id="1">
                             <address>server1</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>
                        <socket-address id="2">
                             <address>server2</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>                    
                   </well-known-addresses>
              </unicast-listener>
              <multicast-listener>
                   <time-to-live system-property="tangosol.coherence.ttl">4</time-to-live>
                   <join-timeout-milliseconds>3000</join-timeout-milliseconds>
              </multicast-listener>
              <packet-publisher>
                   <packet-delivery>
                        <timeout-milliseconds>30000</timeout-milliseconds>
                   </packet-delivery>
              </packet-publisher>
              <service-guardian>
                   <timeout-milliseconds system-property="tangosol.coherence.guard.timeout">35000
                   </timeout-milliseconds>
              </service-guardian>
         </cluster-config>
         <logging-config>
              <severity-level system-property="tangosol.coherence.log.level">9</severity-level>
              <character-limit system-property="tangosol.coherence.log.limit">0</character-limit>
         </logging-config>
    </coherence>

    user1945969 wrote:
    Thanks for your answer, but I also wanted to know: is there any way I can verify that by the data in the cluster?
    You can start up the [command line application|http://coherence.oracle.com/pages/viewpage.action?pageId=16684] or write a quick class to display the information for that particular cache.
    user1945969 wrote:
    I mean, can I check what data is present in each cluster member?
    I would suggest taking a look via JMX. In this case, you would want to look at the ServiceMBean, CacheMBean and StorageManagerMBean MBeans (take a look at the Registry for more information).
    user1945969 wrote:
    Another reason why I am not so confident whether this scheme is distributed or not is that in my config XML I do not have any backing map scheme configured, so how is Coherence going to do the backups in this case?
    <backing-map-scheme>
         <local-scheme/>
    </backing-map-scheme>
    You do have a "backing map" configured; it will just use the defaults.
    Coherence always manages the backups automatically, transparently and dynamically for you. When using the partitioned cache (i.e. "distributed-scheme"), Coherence will place the backup in a storage-enabled node on a separate physical machine from the primary.
    Rob
    :Coherence Team:
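    A quick programmatic check is also possible: the cache service of a distributed scheme is a PartitionedService and reports its service type as "DistributedCache", whereas a replicated cache's service reports "ReplicatedCache". A minimal sketch:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.PartitionedService;

    NamedCache cache = CacheFactory.getCache("dist-ABCCache");
    // "DistributedCache" for a distributed scheme, "ReplicatedCache" for replicated.
    System.out.println("service type: "
            + cache.getCacheService().getInfo().getServiceType());
    System.out.println("partitioned? "
            + (cache.getCacheService() instanceof PartitionedService));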

  • How to pre - load all database rows into cache

    Hi All,
    The below is my cache configuration, I would like to know how to load all the database rows/specified number of rows into the cache.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TableEmp</cache-name>
          <scheme-name>distributed-hibernate</scheme-name>
          <init-params>
            <init-param>
              <param-name>entityname</param-name>
              <param-value>com.tangosol.examples.explore.Emp</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-hibernate</scheme-name>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme></local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>
                    com.tangosol.coherence.hibernate.HibernateCacheStore
                  </class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{entityname}</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
    Please kindly provide a solution.
    Regards
    S

    Hi Rich,
    Imagine I have just downloaded Coherence and have run a server with the default config. From what you said to S, Coherence can pull the data from the database itself WITHOUT me having to push it to Coherence? If so, can you please explain how this is done, or point me at a guide?
    You might start with [Read-Through Caching|http://coherence.oracle.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+and+Refresh-Ahead+Caching#Read-Through%2CWrite-Through%2CWrite-BehindandRefresh-AheadCaching-ReadThroughCache] to understand how Coherence can pull data. It is the implementation of a CacheLoader that enables the Coherence cache to pull the data.
    The cache configuration that S provided specifies a read-write-backing-map-scheme indicating that the HibernateCacheStore class should be used by Coherence, and is similar to the configuration discussed at [Using Hibernate as a CacheStore for Coherence|http://wiki.tangosol.com/display/COH34UG/Using+Hibernate+as+a+CacheStore+for+Coherence]. In responding to the original question, I was assuming that the data source being queried to load the cache is the same as the data source fronted by the Hibernate configuration.
    Secondly, with respect to the answer to my question: if I don't care about versioning ... do I need an EvolvablePortableObject?
    If you really don't want to version your serialized representations, you can implement the PortableObject interface instead, but the additional cost of implementing EvolvablePortableObject is small and the potential benefit is great.
    So my question is: can Coherence pull the data from the database using a preload request and serialize it into POF format without me having to push the data to Coherence via a separate app? And if so, could you please explain how, or direct me to some documentation?
    You do not need to push data to Coherence via a separate app. Coherence can pull the data from the database (see the sketch after this reply). Coherence can also preload the cache using an EntryProcessor. You can configure Coherence to use POF; you will need to implement the POF serialization methods for your cache objects.
    The [Partitioned cache with a serializer|http://coherence.oracle.com/display/COH34UG/Sample+Cache+Configurations#SampleCacheConfigurations-Partitionedcacheofadatabase] example and the links it provides should provide sufficient documentation for configuring and using POF.
    Whether you decide to use the HibernateCacheStore, the TopLinkCacheStore or implement your own CacheStore or CacheLoader class to access your data in your database is your decision. You should be able to find sufficient documentation and examples to help you decide how you would like to use Coherence at the [Coherence Knowledge Base|http://wiki.tangosol.com/display/COH/Oracle+Coherence+Knowledge+Base+Home]. I would recommend starting with the [User Guide|http://wiki.tangosol.com/display/COH34UG/Coherence+3.4+Home] if you would like to get a better grasp of the overall architecture.
    Regards,
    Harv
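    A minimal sketch of the pull-based preload discussed above, assuming the TableEmp cache from the earlier configuration and a hypothetical helper that selects the primary keys from the database; getAll() on those keys makes the read-write backing map load the missing rows in bulk through the configured HibernateCacheStore:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Collection;

    NamedCache cache = CacheFactory.getCache("TableEmp");
    Collection keys = queryEmpIdsFromDatabase(); // hypothetical: SELECT the PKs via JDBC
    cache.getAll(keys);                          // read-through loads the rows in bulk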

  • Best way to determine insertion order of items in cache for FIFO?

    I want to implement a FIFO queue. I plan on one producer placing unprocessed Orders into a cache. Then multiple consumers will each invoke an EntryProcessor which gets the oldest unprocessed order, sets processed=true on it and returns it. What's the best way to determine the oldest object based on insertion order? Should I timestamp the objects with a trigger when they're added to the cache and then index by that value (see the sketch after this question)? Or is there a better way - maybe something Coherence automatically saves when objects are inserted? Also, it's not critical that the processing order be precisely FIFO; close is good enough.
    Also, since the consumer won't know the key value for the object it will receive, how could the consumer call something like this without violating the constraints on re-entrant calls? http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Thanks,
    Andrew
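    On the timestamp idea in the question, a trigger of that shape is straightforward. A hypothetical sketch (the Order class and its accessors are illustrative), which stamps the insertion time as the entry enters the cache:
    import com.tangosol.util.MapTrigger;

    public class InsertionTimestampTrigger implements MapTrigger {
        public void process(MapTrigger.Entry entry) {
            // Stamp genuine inserts only, not updates to existing entries.
            if (!entry.isOriginalPresent()) {
                Order order = (Order) entry.getValue();
                order.setInsertedAt(System.currentTimeMillis());
                entry.setValue(order);
            }
        }
    }
    It would be registered with something like cache.addMapListener(new MapTriggerListener(new InsertionTimestampTrigger())), and an index on getInsertedAt() would then support oldest-first queries.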

    Ok, I think I can see where you are coming from now...
    By using a queue for each FIX session you will experience some latency as data is pushed around inside the cluster between the 'owning node' for the order and the location of the queue; but if this is acceptable then great. The number of hops within the cluster, and hence the latency, will depend on where and how you detect changes to your orders. The advantage of assigning specific orders to each queue is that this will not change should the cluster rebalance; however, you should consider what happens if the node controlling a specific FIX session is lost - do you recover from the FIX log? If so, where is that log kept? Remember to consider what happens if your cluster splits such that the node with the FIX session is still alive but is separated from the rest of the cluster. In examining these failure cases you may decide that it is easier to use Coherence's in-built partitioning to assign orders to sessions rather than an attribute of the order object.
    snidely_whiplash wrote:
    Only changes to orders which result in a new order or a replace needing to be sent cause an action by the FIX session.
    There are several different mechanisms you could use to detect changes to your orders, and hence to decide whether they need to be enqueued:
    1. Use a post trigger that is fired on order insert/update and performs the filtering of changes and if necessary adds the item to the FIX queue
    2. Use a cache store that does the same as (1)
    3. Use an entry processor to perform updates to the order object (as I believe you previously mentioned) and performs logic in (1)
    4. Use a CQC on the order cache
    5. A map listener on the order cache
    The big difference between 1-3 and 4-5 is that the CQC is i) a SPOF, ii) not likely located in the same place as your order object or the queue (assuming the queue is in fact an object in another cache), and iii) asynchronously fired, hence introducing latency. Also note that the CQC will store your order objects locally, whereas a map listener will not.
    (1) and (3) will give you access to both old and new values should that be necessary for your filtering logic.
    Note you must be careful not to make any re-entrant calls with any of 1-3. That means if you are adding something to a FIX queue object in another cache (say using an entry processor) then it should be on a different cache service.
    snidely_whiplash wrote:
    If I move to a CacheStore-based setup instead of the CQC-based one, then any change to an order, including changes made when executions or rejects return on the FIX session, will result in the store() method being called, which means it will be called unnecessarily a lot. It would be nice if I could specify that the CacheStore only store() certain types of changes, i.e. those that would result in sending a FIX message. Is anything like that possible?
    There is negligible overhead in Coherence calling your store() method; assuming that your code can decide whether anything FIX-related needs to be done based only on the new value of the order object, this should be very fast indeed.
    snidely_whiplash wrote:
    What's a partitioned "token cache"?
    This is a technique I have used in the past for running services. You create a new partitioned cache into which you place 'tokens' representing a user-defined service that needs to be run. The insertion/deletion of a token in the backing map fires a backing map listener to start/stop a service (note there are two causes of insert/delete in a backing map: i) a user, ii) cluster repartitioning). In this case that service might be a FIX session. If you need to designate a specific member on which a service needs to run then you could add the member id to the token object; however, be aware that unless you write your own partitioning strategy, the token will likely not live on the cache member the token indicates, in which case you would want a full map listener or CQC to listen for tokens rather than a backing map listener.
    I hope that's useful rather than confusing!
    Paul

  • Partition redistribution on node join

    When a new node joins a distributed cache, I imagine that the partition assignment strategy will alter for the cluster in that some partitions must get assigned to the new node.
         This must mean that the primary location for some of the keys in the cache alters. What is the process for the data redistribution in this case for a distributed cache, i.e. when a new node joins?
         I have run a simple test, and it would appear that no redistribution occurs until some data is written / read. Is this always the case, or is there some background process that will migrate keys according to the new partition assignment?
         Is there some way to listen for re-assignment events, other than listening for individual get/remove actions on the keys in the backing maps as they get redistributed?
         Finally, how does the backup count for a distributed cache affect this? If it is set to 0 (no backups) is the process different?
         Any help gratefully received :)
         Regards
         Matt Searle

    Matt,
         You are absolutely correct: when a new storage-enabled node joins a partitioned cache service, the partition re-distribution protocol gets initiated and the new node will take on a part of the overall load. The distribution algorithm is dynamic and adaptive, so as a matter of preference the transfer affects cache parts that are not currently in active use.
         You can easily watch the process of cache data distribution by listening to the backing maps of the partitioned cache. Attached please find an example of cache configuration descriptor and a custom backing map listener class that would log all corresponding backing map events.
         Regards,
         Gene
         (Attachments: dist-listen.xml, BackingMapListener.java)
