Query regarding Replicated Caches that Persist on Disk

I need to build a simple fault-tolerant system that will replicate cache
entries across a small set of systems. I want the cache contents to persist
even if all cluster members are brought down.
Is this something that requires writing a custom CacheStore implementation
or is there a configuration that I can use that uses off-the-shelf pluggable
caches? The documentation was somewhat vague about this.
If I need or want to write my own CacheStore: when there is a cache write-through
operation, how does Coherence figure out which member of the cluster will do
the actual work and persist a particular object?

Hi rhanckel,
Write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same: specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
You can look at the documentation for it at the following URLs:
http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
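For reference, a distributed scheme with a cache store wired in might look roughly like the sketch below. The scheme name and CacheStore class name are illustrative, not taken from the thread or the documentation:

```xml
<distributed-scheme>
  <scheme-name>example-db-backed-scheme</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <!-- hypothetical CacheStore implementation -->
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```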
As for how Coherence figures out which member needs to write:
In a partitioned cache, each cache key has an owner node, which is determined algorithmically from the key itself and from the distribution of partitions among the nodes (neither of these depends on the actual data in the cache). More specifically, any key always maps to the same partition (provided you did not change the partition-count or the partition-affinity-related settings; although if you changed the latter, it is arguably not the same key anymore). Therefore Coherence only needs to know who owns a certain partition: the owner of that partition is the owner of the key, and that node carries out every operation related to that key.
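The ownership rule above can be illustrated with a toy sketch (this is not Coherence's actual partitioning algorithm, just the idea): the partition id is computed purely from the key, so every member agrees on it, and a partition-to-member assignment table then names the owner.

```java
// Toy illustration of key ownership: the partition depends only on the
// key, never on the cached value or on which member asks.
public class OwnershipDemo {
    static int partitionFor(String key, int partitionCount) {
        // Deterministic: the same key always lands in the same partition.
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        int partitionCount = 257; // illustrative partition count
        int p = partitionFor("order-42", partitionCount);
        // A cluster-maintained partition-to-member table then turns this
        // partition id into the owning member for the write-through.
        System.out.println("key 'order-42' always lands in partition " + p);
    }
}
```

Because every member computes the same partition id for a key, any member can route an operation straight to the owner without consulting the data itself.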
Best regards,
Robert

Similar Messages

  • A query regarding synchronised functions, using shared object

    Hi all.
    I have a small query regarding functions that are synchronized based on acquiring the lock of the object being used for synchronization.
    I will clarify with the following example:
    class First {
        int a;
        static int b;

        public void func_one() {
            synchronized ((Integer) a) {
                // function logic
            }
        } // End of func_one

        public void func_two() {
            synchronized ((Integer) b) {
                // function logic
            }
        } // End of func_two

        public static void func_three() {
            synchronized ((Integer) a) {
                // function logic
            }
        } // End of func_three, WHICH IS ACTUALLY NOT ALLOWED,
          // just written here for completeness.

        public static void func_four() {
            synchronized ((Integer) b) {
                // function logic
            }
        } // End of func_four
    }

    First obj1 = new First();
    First obj2 = new First();
    Note that the four functions are different on the following criteria :
    a) Whether the function is static or non-static.
    b) Whether the object on which synchronization is based is a static, or a non-static member of the class.
    Now, first my thoughts; kindly correct me if I am wrong:
    a) In case 1, we have a non-static function, synchronized on a non-static object. Thus, effectively, there is no synchronisation, since if obj1 and obj2 happen to call func_one at the same time, obj1 will obtain the lock for obj1.a and obj2 will obtain the lock for obj2.a, and both can enter the supposed-to-be-synchronized-function-but-actually-is-not merrily.
    Kindly correct me if I am wrong anywhere in the above.
    b) In case 2, we have a non-static function, synchronized on a static object. Here, again, if obj1 and obj2 happen to call the function at the same time, obj1 will try to obtain the lock for obj1.a while obj2 will try to obtain the lock for obj2.a. However, since obj1.a and obj2.a are the same, we will indeed obtain synchronisation.
    Kindly correct me if I am wrong anywhere in the above.
    c) In case 3, we have a static function, synchronized on a non-static object. However, Java does not allow functions of this type, so we may safely move forward.
    d) In case 4, we have a static function, synchronized on a static object.
    Here, again, if obj1 and obj2 happen to call the function at the same time, obj1 will try to obtain the lock for obj1.a while obj2 will try to obtain the lock for obj2.a. However, since obj1.a and obj2.a are the same, we will indeed obtain synchronisation. But we are only partly done for this case.
    First, kindly correct me if I am wrong anywhere in the above.
    Now, I have a query: what happens if the call is made in a classically static manner, i.e. using the statement "First.func_four;"?
    Another query: so far we have been assuming that the only objects contending for the synchronized function are obj1 and obj2, in a single thread. Now, consider this: suppose we have the same reference obj1 in two threads, and the call "obj1.func_four;" happens to occur at the same time from each of these threads. Thus, we have obj1 trying to obtain the lock for obj1.a, and again obj1 trying to obtain the lock for obj1.a, which are the same locks. So, if the first thread obtains the lock, then it will enter the function no doubt, but the call from the second thread will also succeed. Thus, effectively, our synchronisation is broken.
    Or am I being dumb?
    Looking forward to replies..
    Ashutosh

    "a) In case 1, we have a non-static function, synchronized on a non-static object. Thus, effectively, there is no-synchronisation"
    There is no synchronization between distinct First objects, but that's what you specified. Apart from the coding bug noted below, there would be synchronization between different threads using the same instance of First.
    "b) In case 2, we have a non-static function, synchronized on a static object. Here, again if obj1, and obj2 happen to call the function at the same time, obj1 will try to obtain lock for obj1.a; while obj2 will try to obtain lock for obj2.a."
    obj1/2 don't call methods or try to obtain locks. The two different threads do that. And you mean First.b, not obj1.b and obj2.b, but see also below.
    "d) In case 4, we have a static function, synchronized on a static object. Here, again if obj1, and obj2 happen to call the function at the same time, obj1 will try to obtain lock for obj1.a; while obj2 will try to obtain lock for obj2.a."
    Again, obj1/2 don't call methods or try to obtain locks. The two different threads do that. And again, you mean First.b. obj1.b and obj2.b are the same as First.b. Does that make it clearer?
    "Now, I have a query : what happens if the call is made in a classically static manner, i.e. using the statement "First.func_four;"."
    That's what happens in any case, whether you write obj1.func_four(), obj2.func_four(), or First.func_four(). All of these are identical when func_four() is static.
    "Now, consider this, suppose we have the same reference obj1, in two threads, and the call "obj1.func_four;" happens to occur at the same time from each of these threads. Thus, we have obj1 trying to obtain lock for obj1.a"
    No we don't; we have a thread trying to obtain the lock on First.b.
    "and again obj1 trying to obtain lock for obj1.a"
    You mean obj2 and First.b, but obj2 doesn't obtain the lock, the thread does.
    "which are the same locks. So, if obj1.a of the first thread obtains the lock, then it will enter the function no-doubt, but the call from the second thread will also succeed."
    Of course it won't; your reasoning here makes no sense. Once First.b is locked, it is locked. End of story.
    "Thus, effectively, our synchronisation is broken."
    No it isn't. The second thread will wait on the same First.b object that the first thread has locked.
    However, in any case you have a much bigger problem here. You're autoboxing your local 'int' variable to a possibly brand-new Integer object on every call, so there may be no synchronization at all.
    You need:
    Object a = new Object();
    static Object b = new Object();
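A minimal sketch of that corrected shape, using dedicated lock objects instead of autoboxed Integers. The counter is added here only to make the mutual exclusion observable; it is not part of the original example:

```java
// Dedicated lock objects, as the reply suggests: each First instance gets
// its own lock 'a', and the class as a whole shares a single lock 'b'.
class First {
    final Object a = new Object();        // per-instance lock
    static final Object b = new Object(); // class-wide lock
    static int counter = 0;               // added to observe the locking

    static void funcFour() {
        synchronized (b) {                // every caller serializes here
            counter++;
        }
    }
}

public class LockDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                First.funcFour();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Both threads serialized on the single First.b lock.
        System.out.println(First.counter); // prints 20000
    }
}
```

Had funcFour synchronized on an autoboxed Integer instead, two threads could hold different lock objects and the count could come up short.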

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
    <scheme-name>local-repl-scheme</scheme-name>
    <backing-map-scheme>
    <local-scheme>
    <scheme-ref>base-local-scheme</scheme-ref>
    </local-scheme>
    </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
    <scheme-name>base-local-scheme</scheme-name>
    <eviction-policy>LRU</eviction-policy>
    <high-units>50</high-units>
    <low-units>20</low-units>
    <expiry-delay/>
    <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick-in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
    Message was edited by: planeski

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachment: coherence-cache-config.xml (*To use this attachment you will need to rename 545.bin to coherence-cache-config.xml after the download is complete.)
    Attachment: LruTest.java (*To use this attachment you will need to rename 546.bin to LruTest.java after the download is complete.)

  • How to guarantee that all events regarding Data Cache are dispatched when application is terminating? Urgent

    Hello,
    we have the phenomenon that when an application commits a transaction and then terminates,
    often not all events regarding the Data Cache are dispatched by the TCP kodo.RemoteCommitProvider.
    It seems that the JVM on termination does not wait until the RemoteCommitProvider has dispatched all Data
    Cache events. In this way we sometimes lose some cache synchronization, and some of our customers run into serious problems.
    Is there a way to guarantee that all Data Cache events are dispatched before the application terminates
    (maybe by implementing a shutdown hook?).
    best regards
    Guenther Demetz
    Wuerth-Phoenix SRL

    Hi,
    as nobody answered my question, I will try to put it more simply:
    Are the TCP kodo.RemoteCommitProvider threads acting as user threads, or as threads of type 'daemon'?
    I hope that someone can answer soon.
    best regards
    Guenther Demetz
    Wuerth-Phoenix SRL
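A shutdown hook, as the original question suggests, at least gives the application a well-defined point to block until pending events are flushed. The sketch below is generic JVM plumbing; flushPendingEvents() is a placeholder for whatever "wait until all remote-commit events are dispatched" operation the caching layer actually offers (the real Kodo/RemoteCommitProvider call is not shown in this thread):

```java
// Generic JVM shutdown-hook sketch. flushPendingEvents() stands in for a
// real "drain the remote-commit event queue" call (hypothetical here).
public class ShutdownFlushDemo {
    static final StringBuilder log = new StringBuilder();

    static void flushPendingEvents() {
        // In a real application: block here until the event queue drains.
        log.append("flushed");
    }

    public static void main(String[] args) {
        // Shutdown hooks run on normal JVM termination (System.exit or the
        // last user thread ending) -- exactly the moment when daemon
        // threads would otherwise be killed mid-dispatch.
        Runtime.getRuntime().addShutdownHook(
                new Thread(ShutdownFlushDemo::flushPendingEvents));
    }
}
```

Note that hooks do not run on a hard kill (SIGKILL / power loss), so this shrinks the window rather than eliminating it.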

  • Hi guys can someone help with a query regarding the 'podcast app' why do they not have all the episodes that relate to one show available why only half or a selected amount

    Hi guys, can someone help with a query regarding the 'podcast app'? Why do they not have all the episodes that relate to one show available? Why only half, or a selected amount?

    Thanks... but some days they have all the episodes, right back to the very first show. I've downloaded a few, but they are only available every now and then, which makes no sense. Why not have them available the whole time?

  • Entity beans caching non-persistent data between transactions

    Some of the properties in our entity bean implementation classes are not declared
    in our descriptor files, and therefore, are non-persistent (we are using container-managed
    persistence); I will refer to these properties as "non-persistent properties".
    In WebLogic 5.1, we've noticed that the non-persistent properties are cached in
    between transactions. For instance, I ask for a particular Person (Person(James)),
    and I set one of the non-persistent properties (Property(X)) inside Transaction(A).
    In Transaction(B) (which is exclusive of Transaction(A)), I access Property(X)
    and find that it is the same value as I had set in Transaction(A)- this gives
    the appearance that non-persistent entity properties are being cached in between
    transactions.
    The same appears to hold true in WebLogic 7 SP1, however, we must use the "Exclusive"
    concurrency-strategy to maintain this consistency.
    I am worried that this assumption we are making of non-persistent properties is
    not valid in all cases, and the documentation does not promise anything in the
    way of such an assumption. I am worried that the container could kill the Person(James)
    entity implementation instance in the pool after Transaction(A), and create a
    new Person(James) instance to serve Transaction(B)- once that happens our assumption
    fails.
    "Database" concurrency strategy seems to fail our assumption on a regular basis,
    but that makes sense, since the documentation states that the "database will maintain
    the cache", and the container seems more willing to kill instances when they are
    finished with, or create new instances for new transactions.
    So my question is this: What is exactly guaranteed by the "Exclusive" concurrency-strategy?
    Will the assumption that we've made above ever fail under this strategy?
    Thanks in advance for any help.
    Regards,
    James

    It simply means that there is only one entity bean instance per PK in the
    server, and a transaction which uses it locks it exclusively.
    James DeFelice <[email protected]> wrote:
    Thank you for the suggestion. I have considered taking this path, but before I
    make a final decision, I was hoping to get a clear answer to the question that
    I stated below:
    What EXACTLY is guaranteed by the "Exclusive" concurrency-strategy? Maybe someone
    from BEA knows?
    "Cameron Purdy" <[email protected]> wrote:
    To be safe: You should clear those values before the ejb load or set
    them
    after (or both).
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "James DeFelice" <[email protected]> wrote in message
    news:[email protected]...
    Some of the properties in our entity bean implementation classes arenot
    declared
    in our descriptor files, and therefore, are non-persistent (we areusing
    container-managed
    persistence); I will refer to these properties as "non-persistentproperties".
    In WebLogic 5.1, we've noticed that the non-persistent properties arecached in
    between transactions. For instance, I ask for a particular Person(Person(James)),
    and I set one of the non-persistent properties (Property(X)) insideTransaction(A).
    In Transaction(B) (which is exclusive of Transaction(A)), I accessProperty(X)
    and find that it is the same value as I had set in Transaction(A)-this
    gives
    the appearance that non-persistent entity properties are being cachedin
    between
    transactions.
    The same appears to hold true in WebLogic 7 SP1, however, we must usethe
    "Exclusive"
    concurrency-strategy to maintain this consistency.
    I am worried that this assumption we are making of non-persistentproperties is
    not valid in all cases, and the documentation does not promise anythingin
    the
    way of such an assumption. I am worried that the container could killthe
    Person(James)
    entity implementation instance in the pool after Transaction(A), andcreate a
    new Person(James) instance to serve Transaction(B)- once that happensour
    assumption
    fails.
    "Database" concurrency strategy seems to fail our assumption on a regularbasis,
    but that makes sense, since the documentation states that the "databasewill maintain
    the cache", and the container seems more willing to kill instanceswhen
    they are
    finished with, or create new instances for new transactions.
    So my question is this: What is exactly guaranteed by the "Exclusive"concurrency-strategy?
    Will the assumption that we've made above ever fail under this strategy?
    Thanks in advance for any help.
    Regards,
    James
    Dimitri
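Cameron's suggestion above (clear the non-persistent values before the ejb load, or set them after) can be sketched in plain Java as follows. The class and field names are illustrative, taken loosely from the question; in a real CMP bean the reset would live in ejbLoad():

```java
// Clear non-persistent state on every load, so a recycled pooled instance
// can never leak a value set in a previous transaction.
public class PersonBean {
    String name;            // container-managed (declared in the descriptor)
    String nonPersistentX;  // not declared, i.e. "Property(X)"

    // Stand-in for the EJB 2.x ejbLoad() callback.
    void load(String persistentName) {
        this.name = persistentName;
        this.nonPersistentX = null; // reset before serving a new transaction
    }
}
```

With this shape, whether the container reuses the same pooled instance or creates a fresh one no longer matters: Transaction(B) always starts from a cleared Property(X).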

  • One concern for replicated cache

    For replicated caches, I think changes are replicated asynchronously. Then how should I understand that update operations achieve "bad" performance when many nodes exist? Could you kindly explain the costs incurred there? Thanks!
    BR
    michael

    user8024986 wrote:
    Hi Robert,
    That sounds reasonable: unicast and multicast messages are sent out without having to wait for responses, and multicast reduces the network traffic at the same time.
    Then my concern is still this: for a replicated cache, any changes are sent as messages to the other nodes in an asynchronous mode (no need to wait for the response, whether unicast or multicast), so the cost is mainly the sending of the changes, which is determined by the network. If the network capacity is high enough, and since the messages are sent asynchronously, performance will only be affected to a limited extent, right?
    thanks,
    Michael

    Michael,
    it may not have been clear, but what Aleks said is still true. The interleaving means that messages are sent out to the recipient nodes without waiting for a response before sending to the next node, but the cache.put() call returns only after the positive responses from all cache nodes have arrived confirming that the update was incorporated into their own copy of the data (or after the death of a recipient node was detected, in which case its response is no longer waited for).
    So the overall cost on the network is both sends and responses, and since in general responses go to a single node (the sender of the message the response replies to) therefore even for a multicast update there will be several unicast responses.
    But yes, the more cluster nodes there are, the larger the load this puts on the network.
    There are several measures in Coherence for trying to decrease this effect on the network, e.g. bundling together messages or ACKs to the same destination, which allows them to be sent in fewer packets than if they were sent alone (this is particularly effective in the case of small messages and ACKs). But this really pays off when there are many threads on each node, each doing cache operations, as this increases the likelihood of multiple messages/ACKs being sent to the same node at roughly the same time.
    But in general, if you have frequent writes to a replicated cache you can't really scale it after a point (a certain number of cluster nodes) due to saturating the network, and you should consider switching to partitioned caches (distributed cache). Even near and continuous query caches are not really effective in case of write-heavy caches (more writes than reads).
    Even if the network is able to keep up, more messages would still increase the length of the queues of messages in a node to respond to and therefore more messages would probably mean longer response times.
    Best regards,
    Robert

  • Query Dimension 1-Cache Data

    Hi,
    I am running an MDX query, and when I checked in Profiler it shows a long list of the Query Dimension event class with 1-Cache Data. What does it mean?
    I think it is not hitting the storage engine but rather pulling from cache, but why so much caching? What does this event class mean?
    Please help!

    Hi Pinu123,
    Create Cache for Analysis Services (AS) was introduced in SP2 of SQL Server 2005. It can be used to make one or more queries run faster by populating the OLAP storage engine cache first. The query results are cached in memory for re-use.
    In your scenario, you said that the results are not hitting the storage engine but rather are pulled from cache. In this case, it seems these results had already been queried by other users and cached in memory. For more information about cache data, please refer to the links
    below.
    How to warm up the Analysis Services data cache using Create Cache statement
    Examining MDX Query performance using Block Computation
    Regards,
    Charlie Liao
    TechNet Community Support

  • Replicated Cache Data Visibility

    Hello
    When is the cache entry visible when doing a put(..) to a Replicated cache?
    I've been reading the documentation for the Replicated Cache Topology, and it's not clear when data that has been put(..) is visible to other nodes.
    For example, if I have a cluster comprised of 26 nodes (A through Z), and I invoke replicatedCache.put("foo", "bar") from member A, at what point is the Map.Entry("foo", "bar") present and queryable on member B? Is it as soon as it has been put into the local storage on B? Or is it only just before the call to put(..) on member A returns successfully? While the put(..) from member A is "in flight", is it possible for two simultaneous reads on members F and Y to return different results because the put(..) hasn't yet completed on one of the nodes?
    Regards
    Pete

    Hi Pete,
    As the data replication is done asynchronously (you may refer to this post: Re: Performance of replicated cache vs. distributed cache), you may read a different result on different nodes.
    Furthermore, may I know your use case on replicated cache?
    Regards,
    Rock
    Oracle ACS

  • Replicated cache with cache store configuration

    Hi All,
    I have two different applications. One is an Admin kind of module from which data will be inserted/updated, and the other application will read data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store for the DB operations.
    I have the following Coherence configuration. It works fine, and the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it throws the exception, the application works as expected. Another thing: I am pre-loading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>
    <cache-mapping>
    <cache-name>TestCache</cache-name>
    <scheme-name>TestCacheDB</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <replicated-scheme>
    <scheme-name>TestCacheDB</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
    <local-scheme>
    <scheme-name>TestDBLocal</scheme-name>
    <cachestore-scheme>
    <class-scheme>
    <class-name>test.TestCacheStore</class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>TEST_SUPPORT</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </local-scheme>
    </backing-map-scheme>
    <listener/>
    <autostart>true</autostart>
    </replicated-scheme>
    <!--
    Proxy Service scheme that allows remote clients to connect to the
    cluster over TCP/IP.
    -->
    <proxy-scheme>
    <scheme-name>proxy</scheme-name>
    <service-name>ProxyService</service-name>
    <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
    <acceptor-config>
    <tcp-acceptor>
    <local-address>
    <address system-property="tangosol.coherence.extend.address">localhost</address>
    <port system-property="tangosol.coherence.extend.port">7001</port>
    <reusable>true</reusable>
    </local-address>
    </tcp-acceptor>
    <serializer>
    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
    <init-params>
    <init-param>
    <param-type>String</param-type>
    <param-value>pof-config.xml</param-value>
    </init-param>
    </init-params>
    </serializer>
    </acceptor-config>
    <autostart system-property="tangosol.coherence.extend.enabled">false</autostart> </proxy-scheme>
    </caching-schemes>
    </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.Clas
    sCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
    at test.TestCacheStore.store(TestCacheStore.java:137)
    at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
    at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
    at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
    at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
    at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
    at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceiv
    ed(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedC
    ache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the di
    stribution due to 128 pending configuration updates
    TestBean.java
    public class TestBean implements PortableObject, Serializable {
         private static final long serialVersionUID = 1L;
         private String name;
         private String number;
         private String taskType;

         public String getName() {
              return name;
         }
         public void setName(String name) {
              this.name = name;
         }
         public String getNumber() {
              return number;
         }
         public void setNumber(String number) {
              this.number = number;
         }
         public String getTaskType() {
              return taskType;
         }
         public void setTaskType(String taskType) {
              this.taskType = taskType;
         }

         @Override
         public void readExternal(PofReader reader) throws IOException {
              name = reader.readString(0);
              number = reader.readString(1);
              taskType = reader.readString(2);
         }

         @Override
         public void writeExternal(PofWriter writer) throws IOException {
              writer.writeString(0, name);
              writer.writeString(1, number);
              writer.writeString(2, taskType);
         }
    }
    TestCacheStore.java
    public class TestCacheStore extends Base implements CacheStore {
         @Override
         public void store(Object oKey, Object oValue) {
              if (logger.isInfoEnabled())
                   logger.info("store :" + oKey);
              TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
              // Doing some processing here over testBean
              ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
              // Get the Connection
              Connection con = connectionFactory.getConnection();
              if (con != null) {
                   // Code to insert into the database
              } else {
                   logger.error("Connection is NULL");
              }
         }
    }
    Edited by: user8279242 on Aug 30, 2010 11:44 PM

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave

  • Query Performance and Cache

    Hi,
    I am trying to analyze query performance in ST03N.
    I run a query with Property = Cache Inactive.
    I get a runtime of, say, 180 sec.
    When I run the same query again, the runtime decreases; now it is, say, 150 sec.
    Please explain the reason for the lower runtime when no cache is used.

    There could be two things occurring. When you say the cache is inactive, that probably refers to the OLAP global cache. The user session still has a local cache, so if the user executes the same navigation in the same session, results are retrieved from the local cache.
    Also, as the others have mentioned, if the same DB query really is executed on the DB a second time, not only does the SQL statement parsing/optimization not need to occur again, but almost certainly some or all of the data still remains in the DB's buffer, so it only needs to be retrieved from memory rather than physically read from disk again.

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, a global cache group is dynamic.
    A cache group can be dynamically loaded by primary key or foreign key, as I understand it.
    There are three records in oracle cache table, and one record is loaded in node A, and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in Node A or Node B, the result is 1 and 2 respectively.
    The questions are:
    How can I get the real count of 3?
    Is it reasonable to run this kind of query on a global cache group table?
    One idea is to create another read-only node for aggregation queries, but it seems awkward.
    Thanks very much.
    Regards,
    Nesta
    Edited by: user12240056 on Dec 2, 2009 12:54 AM

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the keys in advance, then you could adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
    I would not try to use JTA to update rows in multiple grid nodes in one transaction; it will be slow, and you would have to know which rows are located in which nodes...
    Chris

  • Using CacheLoader for replicated cache in Coherence 3.6

    Hi,
    Is it possible to configure a CacheLoader for a replicated cache in Coherence 3.6? The backing map will be a local scheme for this cache.
    Regards,
    CodeSlave

    We have a "start of day" process that just runs up a Java client (full cluster member, but storage-disabled node) that clears and then repopulates a number of "reference data" replicated caches we use in our application. Use the hints-n-tips in the Coherence Developer's Guide (bulk operations, etc.) to get decent performance. We load the data from an Oracle database. Again, tune the extract side (JDBC/JPA batching, etc.) to get that side of things performing well.
    For ad-hoc, intra-day updates to the replicated caches (and you should look to minimise these), we use a "listener" that attaches to an Oracle database DCN (data change notification) stream.
    Cheers,
    Steve
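
    The "bulk operations" hint Steve mentions can be sketched as pushing entries in fixed-size putAll batches rather than one put() per entry. This is a minimal stand-alone sketch: a Coherence NamedCache implements java.util.Map, so the same pattern applies to it, but a plain Map stands in here and the class and method names are illustrative, not from the thread.

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BulkLoader {
        // Load a source data set into a cache in fixed-size batches.
        // Each putAll() call is one bulk operation (one network round-trip
        // against a real NamedCache) instead of one call per entry.
        public static int loadInBatches(Map<String, String> cache,
                                        Map<String, String> source,
                                        int batchSize) {
            int batches = 0;
            Map<String, String> buffer = new LinkedHashMap<>();
            for (Map.Entry<String, String> e : source.entrySet()) {
                buffer.put(e.getKey(), e.getValue());
                if (buffer.size() == batchSize) {
                    cache.putAll(buffer); // flush a full batch
                    buffer.clear();
                    batches++;
                }
            }
            if (!buffer.isEmpty()) {      // flush the final partial batch
                cache.putAll(buffer);
                batches++;
            }
            return batches;
        }
    }
    ```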

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    The documentation says that entry processors on replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of an owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK
    Hi Jonathan,
    in a replicated cache there is still a notion of ownership of an entry; in Coherence terms it is called a lease. An entry is always owned by the last node to have carried out a successful modification on it, where a modification may be a put/remove but can also be a lock operation. Lease granularity is per entry.
    Practically, the lock operation in the code Dimitri pasted serves two purposes. First, it ensures no other node can lock the entry; second, it brings the lease to the locking node, so it can correctly execute the entry processor locally on the entry.
    Best regards,
    Robert
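
    Robert's lock-then-execute description can be illustrated with a plain-JVM analogy (this is java.util.concurrent, not the Coherence API): only one actor at a time wins exclusive access to a key, so mutations of the same entry run serially even under contention, which is what the per-entry lease buys a replicated cache. The class and method names below are illustrative assumptions.

    ```java
    import java.util.concurrent.ConcurrentHashMap;

    public class LeaseAnalogy {
        // Analogy only: ConcurrentHashMap.compute() is atomic per key, so
        // concurrent mutations of the same entry are serialized, much like
        // a lease makes one node the entry's owner while an entry
        // processor runs against it. No updates are lost under contention.
        public static int incrementTimes(ConcurrentHashMap<String, Integer> map,
                                         String key, int threads, int perThread) {
            Thread[] workers = new Thread[threads];
            for (int t = 0; t < threads; t++) {
                workers[t] = new Thread(() -> {
                    for (int i = 0; i < perThread; i++) {
                        // runs atomically for this key
                        map.compute(key, (k, v) -> v == null ? 1 : v + 1);
                    }
                });
                workers[t].start();
            }
            try {
                for (Thread w : workers) w.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return map.get(key);
        }
    }
    ```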

  • I loaded in Lion - but my Time Capsule will not back up. I get a message: couldn't complete backup due to a network problem. Also it says "make sure your computer and backup disk are on the same network, and that the backup disk is turned on."

    I installed Lion on my Mac Pro laptop. Regarding Time Capsule, I get a message as follows: couldn't complete the backup due to a network problem. Make sure your computer and backup disk are on the same network and that the backup disk is turned on. Then try again to back up. I have Time Capsule turned on. Bill

    I have exactly the same problem with my MBP and MBA after upgrading to Lion. I've tried to fix this issue by checking keychain issues and the network setup, even formatting the HDD and upgrading the Time Capsule firmware (ver. 7.6.1). Nothing helps. It is very annoying.
