Data distribution in distributed caching scheme

When using the distributed (partitioned) scheme in Coherence, how does data distribution happen among the nodes of the data grid? Is there an API to control it, or are there configuration settings that control it?

Hi 832093
A distributed scheme works by allocating the data to partitions (by default there are 257 of these, but you can configure more for large clusters). The partitions are then allocated as evenly as possible to the nodes of the cluster, so each node owns a number of partitions. Partitions belong to a cache service, so you might have a cache service that is responsible for a number of caches, and a particular node will own the same partitions for all of those caches. If you have a backup count > 0, then a backup of each partition is allocated to another node (on another machine, if you have more than one). When you put a value into the cache, Coherence performs a hash function on your key, which allocates the key to a partition and therefore to the node that owns that partition. In effect, a distributed cache works like a Java HashMap, which hashes keys and allocates them to buckets.
You can have some control over which partition a key goes to if you use key association to co-locate entries in the same partition. You would normally do this to put related values in the same location, to make processing them on the server side more efficient in use cases where you need to alter or query a number of related items. For example, in financial systems you might have a cache for Trades and a cache for TradeValuations in the same cache service. You can then use key association to allocate all the Valuations for a Trade to the same partition as the parent Trade. So if a Trade was mapped to partition 190 in the Trade cache, then all of the Valuations for that Trade would map to partition 190 in the TradeValuations cache and hence be on the same node (in the same JVM process).
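As an illustration, here is a minimal sketch of the application-side approach using the com.tangosol.net.cache.KeyAssociation interface; the class and field names (ValuationKey, tradeId) are hypothetical:

    import com.tangosol.net.cache.KeyAssociation;
    import java.io.Serializable;

    // Key for a TradeValuation entry that co-locates with its parent Trade.
    public class ValuationKey implements KeyAssociation, Serializable {
        private final String valuationId;
        private final String tradeId; // key of the parent Trade

        public ValuationKey(String valuationId, String tradeId) {
            this.valuationId = valuationId;
            this.tradeId = tradeId;
        }

        // Coherence partitions by the associated key (the trade id) rather
        // than by this key, so the valuation lands in the same partition
        // as its parent Trade.
        public Object getAssociatedKey() {
            return tradeId;
        }

        public boolean equals(Object o) {
            if (!(o instanceof ValuationKey)) {
                return false;
            }
            ValuationKey that = (ValuationKey) o;
            return valuationId.equals(that.valuationId)
                    && tradeId.equals(that.tradeId);
        }

        public int hashCode() {
            return valuationId.hashCode() * 31 + tradeId.hashCode();
        }
    }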
You do not really want control over which nodes partitions are allocated to, as this could impair Coherence's ability to distribute partitions evenly and allocate backups properly.
JK

Similar Messages

  • Distributed Cache queries

    Hi,
    In a distributed cache scheme (across multiple servers/JVMs):
    1. How do I know which server is hosting which data (cache store), and which server holds the backup of that data?
    2. Can this distribution be controlled? For example, can an 'xyz' cache store be pinned to a specified '123' server only, with the backup of the 'xyz' cache store required to be on the '234' server?
    Thanks,
    ~Ravi Shanker

    Hi,
    In a redundancy system only one server is serving while the secondary sits idle. I just want to ensure that these idle systems are also used instead of lying idle. Hence the question: can we control the distribution logic, so that the least-used data is moved onto these idle systems and usage of that data is redirected to them?

    In a Coherence cluster, all the servers hold both primary and backup data; every node is serving requests and holding backups as well, so there are no idle systems.
    But I have a few things that need clarification.
    While running the sample programs as per the documentation, we need to start a DefaultCacheServer, and the Java programs then join the cluster alongside the cache server. But I have seen that joining the cluster works even if the DefaultCacheServer is shut down. Can you provide any info (links) or clarification on how the cache server and cluster mechanism work? I have gone through the documentation, but none of it gives a clear picture of this.

    That is a wrong assumption: every storage-enabled node can become a cluster member. DefaultCacheServer is just one of the implementations used to run a Coherence server.
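    For reference, a storage-enabled cache server is typically started along these lines (the heap size and config file name here are illustrative):

    java -server -Xmx512m -Dtangosol.coherence.cacheconfig=cache-config.xml -cp coherence.jar com.tangosol.net.DefaultCacheServer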
    HTH
    Cheers,
    _NJ

  • Need Help regarding initial configuration for distributed cache

    Hi,
    I am new to Tangosol and trying to set up a basic partitioned distributed cache, but I have not been able to do so.
    Here is my scenario:
    My application's DataServer creates the instance of the Tangosol cache.
    I have this config.xml set on the machine where my application starts:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <!-- Caches with any name will be created as default distributed. -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Default distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!-- Default backing map scheme definition used by all the caches
             that do not require any eviction policies. -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params/>
        </class-scheme>
      </caching-schemes>
    </cache-config>
    Now, on the same machine, I start a different client using the command:

    java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=near-cache-config.xml -classpath "C:/calypso/software/release/build" -jar ../lib/coherence.jar
    The problems I am facing are:
    1) Even if I do not start the client, my application server caches the data. My config.xml is set to distributed, so under no circumstances should it cache the data locally.
    2) I want to bind different caches to different processes on different machines, e.g.:
    machine1 should cache cache1 objects
    machine2 should cache cache2 objects
    and so on... but I could not find any documentation which explains how to configure this. Can someone give me an example of how to do it?
    3) I want to know the details of the caches stored on any particular node, e.g. that machine1 contains such-and-such caches and their corresponding object values, etc.
    Regards
    Mahesh

    Hi, thanks for the answer.
    After digging into the wiki a lot, I found something related to KeyAssociation. I think what I need is an implementation of KeyAssociation that stores a particular cache's objects on a particular node or group of nodes.
    Say, for example, I want this kind of setup:
    Cache1 --> node1, node2 (as I forecast this would take a lot of memory, I assign these JVMs something like 10 GB)
    Cache2 --> node3, assigned a small amount of memory (like 2 GB)
    and so on...
    From the wiki documentation I see:
    Key Association
    By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside in the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
    Does someone have an example explaining how this is done in the simplest way?
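    To illustrate the key-associator route described in that quote, here is a minimal sketch; the class name, and the assumption that keys are strings of the form "<tradeId>:<valuationId>", are made up for the example:

    import com.tangosol.net.PartitionedService;
    import com.tangosol.net.partition.KeyAssociator;

    // Maps every composite key to its parent id, so that related entries
    // share a partition (and hence a node).
    public class ParentKeyAssociator implements KeyAssociator {
        public void init(PartitionedService service) {
            // no initialization needed for this example
        }

        public Object getAssociatedKey(Object oKey) {
            String sKey = (String) oKey;
            int ix = sKey.indexOf(':');
            return ix < 0 ? sKey : sKey.substring(0, ix);
        }
    }

    The associator would then be declared on the service in the cache configuration, inside the <distributed-scheme> element, e.g. <key-associator><class-name>ParentKeyAssociator</class-name></key-associator>.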

  • Configure cache policy of a distributed cache

    How do I configure the cache policy of a distributed cache, e.g. eviction policy, high units, expiry delay, etc.? Should I use com.tangosol.util.Cache instead of com.tangosol.util.SafeHashMap as the backing map of the distributed cache?

    Hi Jin,
    I have attached an example of the descriptor used to set up a distributed caching scheme. This example shows how to set up both a 'vanilla' distributed cache and a 'size-limited/auto-expiry' cache.
    The 'HYBRID' local-scheme will automatically use the com.tangosol.net.cache.LocalCache implementation (a subclass of com.tangosol.util.Cache).
    Later,
    Rob Misek
    Tangosol, Inc.
    Coherence: Cluster your Work. Work your Cluster.
    Attachment: distributed-cache-config.xml (to use this attachment, rename 22.bin to distributed-cache-config.xml after the download completes)
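    For reference, a size-limited/auto-expiry distributed scheme along the lines Rob describes would look roughly like this (the scheme name, limits, and expiry value are illustrative):

    <distributed-scheme>
      <scheme-name>size-limited-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <!-- LocalCache-backed map: hybrid LRU/LFU eviction, capped at
               10000 units, entries expire one hour after insertion -->
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>10000</high-units>
          <expiry-delay>1h</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>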

  • Very simple example of distributed cache

    Hi,
    I followed the Getting Started guide and I have a config XML that lists a VirtualCache. Then I wrote a very simple main class, TestCache1, that does this:

    private static NamedCache cache = CacheFactory.getCache("VirtualCache");
    public static void main(String[] args) {
        cache.put("hello", "hello");
    }

    and then a TestCache2 that gets the value:

    private static NamedCache cache = CacheFactory.getCache("VirtualCache");
    public static void main(String[] args) {
        Object hello = cache.get("hello");
        System.err.println(hello);
    }

    So I run TestCache1 and then I run TestCache2, but the result is "null". I have a very basic problem where I want one Java class to add stuff to a cache that another Java process can access. What do I need to do?
    Thanks, Jason

    Hi,
    Yes, I started up both test classes using the same configuration file, but it's just not working. Here's the config file:
    <cache-config>
      <caching-scheme-mapping>
        <!-- Caches with any name will be created as default replicated. -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-replicated</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>VirtualCache</cache-name>
          <scheme-name>default-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Default replicated caching scheme. -->
        <replicated-scheme>
          <scheme-name>default-replicated</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
        </replicated-scheme>
        <!-- Default distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
        </distributed-scheme>
        <!-- Default backing map scheme definition used by all the caches
             that do not require any eviction policies. -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
        </class-scheme>
      </caching-schemes>
    </cache-config>
    Thanks again, Jason

  • Distributed Cache: Performance issue; takes a long time to get data

    Hi there,
    I have set up a cluster on a single Linux machine with 11 nodes (min and max heap = 1 GB). The nodes are connected through a multicast address/port. I have the distributed cache service running on all the nodes, and 2 nodes with ExtendTCPService. I loaded a dataset of 13 million entries into the cache (approximately 5 GB), where the key is a String and the value is an Integer.
    I run a Java process from another Linux machine on the same network that makes use of this cache. The process fetches around 200,000 items from the cache, and it takes around 180 seconds JUST to fetch the data.
    I had a look at Performance Tuning > Coherence Network Tuning and checked the publisher and receiver success rates; both were nearly 0.998 on all the nodes.
    It is a bit hard to believe that it takes so long, so maybe I'm missing something. I would appreciate any advice.
    More info:
    a) All nodes are running on Java 5 update 7
    b) The Java process is running on JDK 1.4 update 8
    c) The -server option is enabled on all the nodes and on the Java process
    d) I'm using Tangosol Coherence 3.2.2b371
    e) cache-config.xml:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>dist-default</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>dist-default</scheme-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <lease-granularity>member</lease-granularity>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
         Thanks,
         Amit Chhajed

    Hi Amit,
    Is the Java test process single-threaded, i.e. did you perform 200,000 consecutive cache.get() operations? If so, then this would go a long way towards explaining the results, as most of the time in all processes would be spent waiting on the network, and your numbers come out to just over 1 ms per operation. Please be sure to run with multiple test threads, and also make use of the cache.getAll() call where possible, so that a single thread can fetch multiple items in parallel.
    Also, you may need to do some tuning on the cache server side. In general, on a 1 GB heap you should only use roughly 750 MB of that space for cache storage. Taking backups into consideration, this means 375 MB of primary data per JVM. So with 11 nodes, this gives a cache capacity of about 4 GB. At 5 GB of data, each cache server will be running quite low on free memory, resulting in frequent GCs which will hurt performance. Based on my calculations you should use 14 cache servers to hold your 5 GB of data. Be sure to run with -verbose:gc to monitor your GC activity.
    You must also watch your machine to make sure that your cache servers aren't getting swapped out. This means that your server machine needs enough RAM to keep all the cache servers in memory. Using "top" you will see that a 1 GB JVM actually takes about 1.2 GB of RAM, so 14 JVMs would need ~17 GB of RAM. You also need to leave some RAM for the OS and other standard processes, so this box would need around 18 GB of RAM. You can use "top" and "vmstat" to verify that you are not making active use of swap space. If you don't have enough RAM, the easiest fix is to split your cache servers across two machines.
         See http://wiki.tangosol.com/display/COH32UG/Evaluating+Performance+and+Scalability for more information on things to consider when performance testing Coherence.
         thanks,
         Mark
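    A minimal sketch of the batched approach Mark suggests (the cache name, key format and batch size are illustrative):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class BulkFetch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-test");
            // Build the 200,000 keys to fetch.
            List keys = new ArrayList();
            for (int i = 0; i < 200000; i++) {
                keys.add("key-" + i);
            }
            // One getAll() per batch replaces one network round trip per key.
            Map results = new HashMap();
            int batchSize = 1000;
            for (int i = 0; i < keys.size(); i += batchSize) {
                List batch = keys.subList(i, Math.min(i + batchSize, keys.size()));
                results.putAll(cache.getAll(batch));
            }
            System.out.println("Fetched " + results.size() + " entries");
        }
    }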

  • Data distribution scheme and database fragmentation

    Hi all,
    I'm working on a scenario (a university assignment) involving the fragmentation of a central database. A company has regional offices (England, Wales, Scotland), and each regional office has a different combination of business areas. They currently have one central database in their head office, and my task is to "design a data distribution scheme". By scheme, does this mean something like horizontal/vertical fragmentation? Also, can somebody point me to an Oracle-specific example of creating a fragmented table? I've searched online and found the "partition by" keyword, but not much else except for database links, and I'm thinking those are more concerned with querying than with actually creating the fragments.
    Many thanks for your time

    > Partitioning is what the tutor meant by "fragmentation". So if there is a current central database and I have created new databases for each regional office, I could run something like the statement below on the regional databases to create a bespoke version of the employee table, filtered by the data relevant to them? This is all theoretical and we don't have to develop the database; I just want to get the syntax correct. Thanks!

    There you go talking about 'new databases' again. You said your original task was this:
    > my task is to "design a data distribution scheme".
    Is the task to give the regions access to their own data in the ONE central DB? Or to actually create a new DB for each region that contains ONLY that region's data?
    So are we talking about ACCESS to a central DB by region? Or are we talking about replication of the entire central DB to multiple regions?
    Your example table is partitioned by region. But if each region has its own DB, why would you put data for other regions in it?
    If you want each region to have access to their own data in the central DB, then you could partition the central DB tables like your example:
    CREATE TABLE employees (
      id INT NOT NULL,
      fname VARCHAR(30),
      lname VARCHAR(30),
      hired DATE NOT NULL DEFAULT '1970-01-01',
      separated DATE NOT NULL DEFAULT '9999-12-31',
      job_code INT,
      store_id INT,
      region_id INT  -- partition key column, missing from the original listing
    )
    PARTITION BY LIST (region_id) (
      PARTITION Wales VALUES IN (2)
    );

    But if you are creating a regional DB that includes data only for that region, there is no need to partition it.

  • Full Master Data Distribution in ALE - Vendor Master CREMAS (Full Distribution)

    Hi Experts,
    I have to do a full master data distribution in ALE for the vendor master, CREMAS (full distribution), using BD21.
    I have already read Michal's blog and incorporated the code in CHANGE_POINTER_READ.
    It is still not working. Should I include the code at the end of the function module? If yes, how do I add code at the very end, given that I can only include my code at the SAP-suggested Enhancement and End Enhancement points?
    Is there any way I can create an implicit enhancement at the end of the function module?
    I would appreciate any help to make the full distribution work.
    Thanks,
    Mich

    Hi,
    Here is the link:
    /people/michal.krawczyk2/blog/2009/06/04/distribution-of-full-master-data-objects-from-change-pointers
    My requirement is to send the whole vendor master in an IDoc even if, for example, only one field in the address of the vendor master changes: full master data distribution. But program BD21 creates IDocs containing only the segments where changes were made, such as the address tab.
    Is there any enhancement or other way to force the whole master data (CREMAS) to be distributed?
    I will really appreciate any helpful answer.
    Thanks,
    Mich

  • Data-node affinity in a distributed cache

    My apologies if this is addressed elsewhere...
    I have a few questions regarding the association of a cached object to a cluster node in a distributed cache:
    1. What factor(s) determine which node is the primary cluster node for a given object in a distributed cache?
    2. Similarly, assuming that at least one backup node is configured, what determines which node will be the backup node for a given object?
    Thanks.

    Hi,
    There is not yet the ability to specify node ownership (through the DistributedCacheService). The basic issue is that a significant chunk of our technology is involved in managing node ownership without introducing non-scalable state or data vulnerability. Allowing users to control this would shift that responsibility onto application code. That is a very difficult task to manage in a manner that is scalable, performant and fault-tolerant (any two of those are fairly easy to accomplish).
    In practice, this is not much of an issue as we have patterns to work around this (including data-driven load balancing and our near-cache technology) without impacting any of those three requirements.
    I believe there are plans to add this ability to a public Coherence API in a future release, but this would be (as discussed above) a very advanced feature.
    Jon Purdy
    Tangosol, Inc.

  • Query from Distributed Cache

    Hi
    I am a newbie to Oracle Coherence and trying to get hands-on experience by running an example (coherence-example-distributedload.zip, Coherence GE 3.6.1). I am running two instances of the server. After this I ran "load.cmd" to distribute data across the two server nodes, and I can see that the data is partitioned across the server instances.
    Now I run another instance (in another JVM) of a program which joins the distributed cache and queries the data loaded on the server instances. I see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?

    NamedCache nNamedCache = CacheFactory.getCache("example-distributed");
    Filter eEqualsFilter = new GreaterFilter("getLocId", "1000");
    Set keySet = nNamedCache.keySet(eEqualsFilter);

    I see here that keySet has no records. Can you please help?
    Thanks
    sunder

    I got this problem sorted out: the problem was in cache-config.xml. The correct one looks as below.
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache1</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>DBCacheLoaderScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>DBCache-eviction</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.test.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>locations</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <cachestore-timeout>6000</cachestore-timeout>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <thread-count>10</thread-count>
      <autostart>true</autostart>
    </distributed-scheme>
    <invocation-scheme>
      <scheme-name>example-invocation</scheme-name>
      <service-name>InvocationService1</service-name>
      <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
    </invocation-scheme>
    I had missed the <class-scheme> element inside <cachestore-scheme> of <read-write-backing-map-scheme>.
    Thanks
    sunder

  • Different distributed caches within the cluster

    Hi,
    I have three machines, n1, n2 and n3, that host Tangosol. Two of them act as the primary distributed cache, and the third acts as the secondary cache. I also have WebLogic running on n1 which, based on requests, pumps data onto the distributed cache on n1 and n2. I have a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All three nodes are within the same cluster.
    I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. For example, suppose I do not start an instance of Tangosol on node n3, and an object gets pruned from either n1 or n2: ideally I should get a storage-not-configured exception, which does not happen.
    The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol populates the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and to have a similar config file on n3 with local storage for Dist:n3 set to true and local storage for the primary cache set to false.
    Can I configure local-storage specific to a cache rather than to a node?
    I also have an EJB deployed on WebLogic that handles getData requests, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    > i've three machines n1, n2 and n3 respectively that host tangosol. 2 of them act as the primary distributed cache and the third one acts as the secondary cache.
    First, I am curious as to the requirements that drive this configuration setup.
    > can i configure local-storage specific to a cache rather than to a node?
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service-by-service basis (i.e. distributed-scheme/local-storage), as sketched after this post.
    Later,
    Rob Misek
    Tangosol, Inc.
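    A rough sketch of that suggestion (the scheme/service names and system-property names here are illustrative): define the primary and secondary caches on different services, and control storage per service on each node.

    <distributed-scheme>
      <scheme-name>primary-distributed</scheme-name>
      <service-name>PrimaryDistributedCache</service-name>
      <local-storage system-property="example.primary.localstorage">true</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <distributed-scheme>
      <scheme-name>secondary-distributed</scheme-name>
      <service-name>SecondaryDistributedCache</service-name>
      <local-storage system-property="example.secondary.localstorage">false</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    Nodes n1 and n2 would then start with -Dexample.primary.localstorage=true -Dexample.secondary.localstorage=false, and n3 with the opposite settings.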

  • Setup failover for a distributed cache

    Hello,
    For our production setup we will have 4 app servers, with one clone per app server, so there will be 4 clones in the cluster. And we will have 2 JVMs for our distributed cache, one being a failover; both of them will be in the cluster.
    How would I configure failover for the distributed cache?
    Thanks

    user644269 wrote:
    > Right - so each of the near cache schemes defined would need to have the back map high-units set to where it could take on 100% of data.
    Specifically the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the Cache Configuration Elements page: http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements ).
    There are two options:
    1) No expiry. In this case you would have to size the storage-enabled JVMs so that an individual JVM could store all of the data.
    or
    2) Expiry. In this case you would set high-units to a value that you determine. If you want it to store all the data, then it needs to be set higher than the total number of objects that you will store in the cache at any given time; or you can set it lower, with the understanding that once the high-units limit is reached, Coherence will evict some data from the cluster (i.e. remove it from the "cluster memory").
    user644269 wrote:
    > Other than that - there is no configuration needed to ensure that these JVMs act as a failover in the event one goes down.
    Correct, data fault tolerance is on by default (set to one level of redundancy).
    :Rob:
    Coherence Team

  • Local Cache containing all Distributed Cache entries

    Hello all,
    I am seeing what appears to be some sort of problem. I have 2 JVMs running, one for the application and the other serving as a Coherence cache JVM (near-cache scheme).
    When I stop the cache JVM, the local JVM displays all 1200 entries, even though the <high-units> for that cache is set to 300.
    Does the local JVM keep a copy of the distributed data?
    Can anyone explain this?
    Thanks

    Hi,
    I have configured a near-cache with a front scheme and a back scheme. In the front scheme I have used a local cache, and in the back scheme I have used the distributed cache; my idea is to have the distributed cache on the Coherence servers.
    I have one JVM which hosts the WebLogic app server, and a second JVM which hosts 4 Coherence servers, all forming the cluster.
    Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers?
    Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling that the data is also stored locally on the first JVM, where the WebLogic server is running.
    Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
    Can somebody help me with these questions? I appreciate your time.
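    For reference, a near-scheme of the kind described above looks roughly like this (names and sizes are illustrative); the front local-scheme lives in the client JVM (here, the WebLogic one), while the back distributed-scheme data is held by the storage-enabled Coherence servers:

    <near-scheme>
      <scheme-name>example-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>300</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>auto</invalidation-strategy>
      <autostart>true</autostart>
    </near-scheme>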

  • Replicated cache scheme with cache store

    Hi All,
    I have the following configuration for the UserCacheDB cache in coherence-cache-config.xml.
    I have a cache store class which inserts data into the database, and this data will be loaded from the database on application startup.
    I need to make this cache replicated so that the other application will have this data. Can anyone please guide me on what my configuration should be to make this cache replicated with a cache store class?
    <distributed-scheme>
      <scheme-name>UserCacheDB</scheme-name>
      <service-name>DistributedCache</service-name>
      <serializer>
        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
        <init-params>
          <init-param>
            <param-type>String</param-type>
            <param-value>pof-config.xml</param-value>
          </init-param>
        </init-params>
      </serializer>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <class-scheme>
              <class-name>com.tangosol.util.ObservableHashMap</class-name>
            </class-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>test.UserCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>PC_USER</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <read-only>false</read-only>
          <!-- To make this a write-through cache, set the value below to 0 (zero). -->
          <write-delay-seconds>0</write-delay-seconds>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <listener/>
      <autostart>true</autostart>
    </distributed-scheme>
    Thanks in Advance.

    Hi,
    You should be able to use a cachestore with a local-scheme.
          <replicated-scheme>
            <scheme-name>UserCacheDB</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>coherence-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
            <backing-map-scheme>
              <local-scheme>
                <scheme-name>UserCacheDBLocal</scheme-name>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>test.UserCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>PC_USER</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
              </local-scheme>
            </backing-map-scheme>
            <listener/>
            <autostart>true</autostart>
          </replicated-scheme>

  • Set request timeout for distributed cache

    Hi,
    Coherence provides 3 parameters we can tune for the distributed cache:
    tangosol.coherence.distributed.request.timeout: the default client request timeout for distributed cache services
    tangosol.coherence.distributed.task.timeout: the default server execution timeout for distributed cache services
    tangosol.coherence.distributed.task.hung: the default time before a thread is reported as hung by distributed cache services
    It seems these timeout values are used for both system activities (node discovery, data rebalancing, etc.) and user activities (get, put). We would like to set the request timeout for get/put, but a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to set the timeout values separately? Or is it even possible to set a timeout on individual calls (like get(key, timeout))?
    -thanks

    Hi,
    Not necessarily for the get and put methods, but for queries, entry processors, entry aggregators and invocable agents, you can make the filter, aggregator, entry processor or agent that you send implement PriorityTask, which allows you to make QoS expectations known to Coherence. Most or all stock aggregators and entry processors implement PriorityTask, if I remember correctly.
    For more info, look at the documentation of PriorityTask.
    Best regards,
    Robert
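    As an illustration of the PriorityTask approach Robert describes, here is a minimal sketch (the processor class itself is hypothetical; PriorityTask and its constants live in com.tangosol.net):

    import com.tangosol.net.PriorityTask;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // An entry processor that carries its own per-request timeout.
    public class TimedReadProcessor extends AbstractProcessor implements PriorityTask {
        public Object process(InvocableMap.Entry entry) {
            return entry.isPresent() ? entry.getValue() : null;
        }

        public int getSchedulingPriority() {
            return PriorityTask.SCHEDULE_STANDARD;
        }

        public long getExecutionTimeoutMillis() {
            return PriorityTask.TIMEOUT_DEFAULT;
        }

        // Only this call waits at most 10 ms; the service-wide
        // tangosol.coherence.distributed.request.timeout is untouched.
        public long getRequestTimeoutMillis() {
            return 10L;
        }

        public void runCanceled(boolean fAbandoned) {
            // nothing to clean up in this sketch
        }
    }

    It would be invoked as, e.g., cache.invoke(key, new TimedReadProcessor()) in place of a plain get(key).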
