Local Cache with write-behind backing map

Hi there,
I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
So, the short form of the question: can I back a local cache with a write-behind JPA map?
Cheers,
Ron

Hi Ron,
The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
If you use a local-scheme then the data will only be local to that WLS node and not shared.
I can think of a possible way to do what you want but it depends on the answer to the above question.
JK

Similar Messages

  • Error removing object from cache with write behind

    We have a cache with a DB for a backing store. The cache has a write-behind delay of about 10 seconds.
    We see an error when we:
    - Write new object to the cache
    - Remove object from cache before it gets written to cachestore (because we're still within the 10 secs and the object has not made it to the db yet).
    At first I was thinking "Coherence should know if the object is in the db or not, and do the right thing", but I guess that's not the case?

  • Can a db slowdown with write-behind cause a slowdown in cache operations?

    If we have a Coherence cluster, and one cache configured with write-behind is having trouble writing to the DB (i.e., it's slow), and we keep adding objects to the cache faster than the DB can consume them, will flow-control kick in and cause the writes to the cache to block/slow down? I.e., the classic producer-consumer problem, where we are adding objects to the cache faster than the cachestore can consume them.
    What happens in this case? Will flow-control kick in and block writes to the cache? Will an internal buffer just keep growing? Are there any knobs to tweak this behavior (e.g., in the case of spikes, where temporarily the producer is producing faster than the consumer can consume for a brief period of time, but then things go back to normal)?

    user9222505 wrote:
    I believe we discovered that the same thread pool is used for all requests to the cache, including gets, puts and calls into the cachestore. So if the writes are slow within the cachestore, then it uses up all of the threads and slows everything down.
    Hi,
    This is not really correct.
    If a cache in a service is configured to use write-behind then a separate thread for that service is started, which deals with write-behind store and storeAll operations.
    The remove operations need to be handled synchronously to avoid corruption of the data-set in the scenario of reading an entry from the cache immediately after removing it (if it were not synchronously deleted from the backing storage, then reading it back could give an incorrect non-null value). Therefore remove operations are handled synchronously on the service / worker thread, and not delayed on the write-behind thread.
    Gets are also synchronously handled, so they again are served on the service / worker thread.
    So if the puts are slow and take too long, that may delay other puts but should not block other threads. If the puts are computationally intensive, then obviously they hinder other threads by consuming the same CPU resource, and not simply because they execute.
    Best regards,
    Robert

  • Preventing write-coalescing with write-behind

    Dear all,
    I am very interested in the write-behind feature but I would like to disable the write-coalescing optimization so that I see each individual change.
    Is there any way to do this ?
    I suspect that because CacheStore.storeAll(Map entries) works on cache-key/cache-value pairs, it is impossible to pass several cache values for the same key :-( Not coalescing the consecutive changes on a given entry would require a method like CacheStore.storeAll(List<Change>), with Change holding the type of modification (insert/update/delete), the key and the value.
    Thanks,
    Cyrille
    Cyrille Le Clerc
    [email protected]
    http://blog.xebia.fr
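    For illustration, the kind of non-coalescing contract Cyrille is describing might look something like the sketch below; the ChangeAwareStore interface, the Change class and storeAll(List<Change>) are purely hypothetical and are not part of the Coherence CacheStore API.
        // Purely hypothetical, non-coalescing store contract (not a Coherence API):
        // every mutation becomes its own Change, so two updates to the same key are
        // delivered as two entries instead of one coalesced value.
        public interface ChangeAwareStore {

            enum Type { INSERT, UPDATE, DELETE }

            final class Change {
                private final Type   type;
                private final Object key;
                private final Object value; // null for DELETE

                public Change(Type type, Object key, Object value) {
                    this.type  = type;
                    this.key   = key;
                    this.value = value;
                }

                public Type   getType()  { return type; }
                public Object getKey()   { return key; }
                public Object getValue() { return value; }
            }

            // Would receive every change, in order, without per-key coalescing.
            void storeAll(java.util.List<Change> changes);
        }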

    Thanks for the feedback Robert,
    I implemented this "batch processing without coalescing" thanks to a "command queue" colocated with my data.
    Example: perform async processing on each modification, without coalescing, of MyEntity (identified by MyEntityKey and stored in "my-entity-cache").
    In my agent/entry-processor, I simultaneously modify my data "MyEntity" and put an entry MyEntityCommand (stored in "my-entity-command-cache"); MyEntityCommand holds enough information to do my async processing. This processing is done asynchronously by MyEntityCommandCacheStore.
    MyEntityCommand is associated with a key MyEntityCommandKey, which is composed of MyEntityKey plus a sequence number; MyEntityCommandKey has a KeyAssociation with MyEntityKey to ensure colocation.
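    As a rough illustration of the command key just described (only the class names come from this post; the fields and method bodies below are my own sketch, assuming the com.tangosol.net.cache.KeyAssociation interface):
        import com.tangosol.net.cache.KeyAssociation;
        import java.io.Serializable;

        // Sketch: a command key made of the entity key plus a unique sequence number.
        // The sequence number defeats write-coalescing (every command is a distinct key),
        // while KeyAssociation keeps the command in the same partition as its entity.
        public class MyEntityCommandKey implements KeyAssociation, Serializable {

            private final Object entityKey;      // the MyEntityKey this command belongs to
            private final long   sequenceNumber; // unique per command

            public MyEntityCommandKey(Object entityKey, long sequenceNumber) {
                this.entityKey      = entityKey;
                this.sequenceNumber = sequenceNumber;
            }

            // Coherence uses this to colocate the command with its entity.
            public Object getAssociatedKey() {
                return entityKey;
            }

            public boolean equals(Object o) {
                if (!(o instanceof MyEntityCommandKey)) {
                    return false;
                }
                MyEntityCommandKey that = (MyEntityCommandKey) o;
                return sequenceNumber == that.sequenceNumber
                    && entityKey.equals(that.entityKey);
            }

            public int hashCode() {
                return 31 * entityKey.hashCode() + (int) (sequenceNumber ^ (sequenceNumber >>> 32));
            }
        }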
    Benefits :
    * There is no coalescence because MyEntityCommandKey contains a unique sequence number.
    * The overhead of this "command queue" is limited because the command object only contains the small piece of data I need for my async processing (a foreign key on the entity plus a few things), and the queue is self-purging thanks to an <expiry-delay> on "my-entity-command-cache".
    Here is a configuration extract
    <distributed-scheme>
      <scheme-name>entity-partitionned</scheme-name>
      <service-name>EntityDistributedCache</service-name>
      <serializer>
      </serializer>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>unlimited-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <thread-count>10</thread-count>
      <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
      <scheme-name>entity-command-partitionned</scheme-name>
      <service-name>EntityDistributedCache</service-name>
      <serializer>
      </serializer>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>... EntityCommandStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <expiry-delay>30s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <write-delay>10s</write-delay>
          <write-batch-factor>0.5</write-batch-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <thread-count>10</thread-count>
      <autostart>true</autostart>
    </distributed-scheme>
    Please note that I had to play with <write-batch-factor> to increase the batching factor (i.e. the number of entries in each CacheStore.store()/storeAll() invocation). Under very high write load, the <write-batch-factor> default value of 0 gave me an average of 1.2 entries per CacheStore.store()/storeAll() invocation. My first try of a 0.5 <write-batch-factor> greatly increased this average, probably to hundreds (my stats are done on an underlying layer, I don't have the exact average).
    Cyrille
    Cyrille Le Clerc
    [email protected]
    http://blog.xebia.fr

  • Write-Behind, Expiration, and SQL Exceptions.

    Hi Chaps,
    If a cache with write-behind enabled has problems writing to the DB I understand that Coherence will re-queue the objects and write them when the DB is available.
    The problem I have is that (after a DB failure) I don't see them being written - I can see these items in the cache but not in the DB, even several hours after the outage. (Items that were added to the cache after the outage are being written).
    Is there anything the cachestore methods (specifically store()) need to do with regards to exceptions to ensure that these items are re-queued?
    Next question is: I was also wondering how is this managed with regards to expiry?
    We have our own expiry routine which removes items from the cache that are older than 24 hours (this was from before we could expire objects by specifying the timeout in the put() method call, which I am intending to switch to).
    If an item has not been written to the DB due to an outage and is then expired (by our own routine or by Coherence), is it then lost forever, or will it remain in the queue? (Seeing as the queue holds references I am guessing not, but thought I'd check.)
    Thanks,
    Randal.
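    One commonly cited pattern (hedged sketch below, with upsert() standing in for the real JDBC/JPA write) is simply to let store() throw, wrapping checked exceptions in a RuntimeException, so that the write-behind thread requeues the entry; if the exception is swallowed, Coherence assumes the write succeeded and drops it from the queue.
        import com.tangosol.net.cache.AbstractCacheStore;

        // Hedged sketch: let store() failures propagate so write-behind can requeue
        // the entry (subject to <write-requeue-threshold>); upsert() is a placeholder.
        public class RequeueFriendlyStore extends AbstractCacheStore {

            public Object load(Object key) {
                return null; // read path omitted in this sketch
            }

            public void store(Object key, Object value) {
                try {
                    upsert(key, value);
                } catch (Exception e) {
                    // Swallowing the exception would make Coherence treat the write as
                    // successful; rethrowing lets the ReadWriteBackingMap requeue the
                    // entry for a later retry.
                    throw new RuntimeException("store failed for key " + key, e);
                }
            }

            private void upsert(Object key, Object value) throws Exception {
                // placeholder for the actual database write
            }
        }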

    Jon,
    I have a question related to this... If you remember a few weeks back, I stumbled upon the problem of the "version-persistent" map for the versioned-backing-map-scheme not accepting putAll operations. The workaround until you guys implement it was to override the putAll method of the cacheStore and throw an unsupported operation exception (to force individual puts).
    Well, although this workaround works, I am getting tons and tons of:
    2006-04-06 17:18:27.347 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): The CacheStore "MyCacheStore@46b9979b" does not support storeAll().
    2006-04-06 17:18:27.348 Tangosol Coherence 3.1/339 <Error> (thread=WriteBehindThread:MyCacheStore, member=1): Failed to store keys="[16, 18, 21, 26, 5, 13, 14, 25, 17, 15, 23, 19, 2, 6, 9, 7]":
    java.lang.UnsupportedOperationException
    at ...MyCacheStore.storeAll(MyCacheStore.java:126)
    at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeAll(ReadWriteBackingMap.java:3820)
    at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:3538)
    at com.tangosol.util.Daemon$1.run(Daemon.java:63)
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="16"
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="18"
    2006-04-06 17:18:27.350 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="21"
    2006-04-06 17:18:27.351 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="26"
    The first UnsupportedOperationException is expected, but I'm not sure what the requeued warnings are all about. These are not DB failures... it is something else. (Mind you, this happens when trying to load a lot of data into the map.)
    1- Is this requeuing related or the same as in failed DB stores?
    2- Is it possible to "lose" stores if I don't configure the write-requeue-threshold with very, very high values? I must ensure I don't lose anything.
    On a related note, in some circumstances I need to ensure that the "write queue" is flushed or cleared. For example, I may want to force a flush of all pending stores (and wait/block until that's done).
    I have looked into it and I don't seem to know how to do it. I can read the write-queue length, but I believe that this is not very accurate, since my tests seem to indicate that the write-behind thread may take the entries to store off the write-queue and then deal with them in parallel (which means that there are still entries although the write-queue size is 0). Also, there are some calls from the cache store that, at first, seem to give some access to the write thread (potentially allowing me to contact the thread to tell it to flush or discard any pending stores)... but I believe that all of those functions are protected... but there may be other ways.
    I guess my second batch of questions is:
    1- How can I effectively force a flush (or clear) of the pending stores, such that not a single store is pending in any queue (visible or invisible to the programmer)?
    2- What is the role of re-queuing in these situations? Where does the queue sit: the thread? The cache store? Who is responsible for retrying, and when? I would like to flush those entries too.
    A quick explanation of the operation of the write thread would also be very appreciated.
    Thanks!
    Josep M.

  • Write behind cache, DB down, when should the system stop taking new data in

    Hello:
    We are trying to use Coherence for our custom ESB, which is brokering payloads of various sizes between consumer and provider applications.
    Before Coherence, stopping our DB meant organization-wide outage for critically important business services.
    Since we have at least 40G of RAM in the production environment, we believe that our app
    can use the Coherence write-behind option to tolerate at least several hours' worth of DB outage.
    We are currently using a near cache backed by distributed cache in write-behind mode.
    9 business service JVMs (storage enabled=false) use 30 storage enabled JVMs.
    IMPORTANT: We need to create an automated alerting facility that determines when the
    amount of unsaved data reaches a critical level after the DB goes down. This alert should help us decide when our application should stop accepting inbound traffic.
    It is hard to use the QueueSize parameter for that because our payload memory footprint can vary from 1KB to 3MB.
    We do not expire any entries, in order to support queries against the cache during a DB outage.
    Our experiments with various flavors of overflow-scheme resulted in OutOfMemoryError, therefore
    we decided to implement a RAM-only cache as a first step.
    <near-scheme>
      <scheme-name>message_payload_scheme</scheme-name>
      <front-scheme>
        <local-scheme>
          <scheme-ref>limited_entities_front_scheme</scheme-ref>
          <high-units>100</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme>
                  <scheme-ref>limited_bytes_scheme</scheme-ref>
                  <high-units>199229440</high-units>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>com.comp.MessagePayloadStore</class-name>
                </class-scheme>
              </cachestore-scheme>
              <read-only>false</read-only>
              <write-delay-seconds>3</write-delay-seconds>
              <write-requeue-threshold>2147483646</write-requeue-threshold>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>
    <local-scheme>
      <scheme-name>limited_entities_front_scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <unit-calculator>FIXED</unit-calculator>
    </local-scheme>
    <local-scheme>
      <scheme-name>limited_bytes_scheme</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>

    Good info ... I feel like I need to restate my original question along with a couple of new questions caused by the discussion above.
    Q1. Does Coherence evict 'dirty', or 'queued', or 'unsaved' objects for the cache configuration provided above?
    The answer should be 'NO', otherwise Coherence is unsafe to use as a system of record,
    it should not just drop unsaved information on the floor.
    Q2. What happens to the front tier of the near+partitioned write-behind cache described above when the amount of unsaved data exceeds the max cache capacity defined via high-units?
    I would expect that map.put starts throwing exceptions: cache storage is full, so it should not accept more data.
    Q3. How can I determine the moment when the amount of dirty data in bytes(!), not in objects, hits 85% of the
    max allowed cache capacity configured in bytes (using the high-units param and BINARY calculator)?
    'DirtyUnits' counter can probably be built with some lower-level Coherence API. Can we use
    this API?
    Please understand that we purchased Coherence for reliability, for making our
    system independent from short DB outages, for keeping our business services up
    and running when DBAs need some time for admin operations like rebuilding an index.
    Performance benefits are secondary and are not as obvious for our system which
    uses primary keys only and has a well-tuned co-located Oracle back-end.
    We simply cannot put Coherence to production unless we prove that Coherence
    can reliably hold the data and give us information about an approaching crisis
    (the cache full of unsaved data).
    If possible, forward this message to Cameron Purdy,
    who was presenting Coherence to our team several months ago.
    Thanks,
    Vasili Smaliak
    Applications Architect, Enterprise App Integration
    GMAC ResCap
    [email protected]
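    A hedged sketch of the kind of alerting poll discussed above: it sums the back-tier QueueSize attribute of the standard Coherence Cache MBeans across storage nodes. It assumes Coherence JMX management is enabled and that the code runs in the JVM hosting the management MBeanServer; note that QueueSize counts entries, not bytes, which is exactly the limitation mentioned above, so a size-in-bytes alert would still need a custom calculation.
        import java.lang.management.ManagementFactory;
        import java.util.Set;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        // Hedged sketch: sum the write-behind backlog (QueueSize) for one cache
        // across all back-tier Cache MBeans registered with this MBeanServer.
        public class WriteBehindBacklogMonitor {

            public static long backlogEntries(String cacheName) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                ObjectName pattern =
                    new ObjectName("Coherence:type=Cache,name=" + cacheName + ",tier=back,*");

                long total = 0;
                Set<ObjectName> names = server.queryNames(pattern, null);
                for (ObjectName name : names) {
                    // QueueSize = entries still waiting to be written by the CacheStore
                    total += ((Number) server.getAttribute(name, "QueueSize")).longValue();
                }
                return total;
            }
        }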

  • Write-Behind Caching and Re-entrant Calls

    Support Team -
         The Coherence User Guide states that:
         "The CacheStore implementation must not call back into the hosting cache service. This includes OR/M solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call will "consume" a cache service thread and could result in deadlock if a cache service threadpool is exhausted)."
         I have load-tested a use case wherein I have two caches: ABCache and BACache. ABCache is accessed by the application for write operations, BACache is accessed by the application for read operations. ABCache is a write-behind cache whose CacheStore populates BACache by reversing the key and value of each cache entry stored in ABCache.
         The solution worked under load with no issues.
         But can I use it? Or is it too dangerous?
         My write-behind thread-count setting is left at default (0). The documentation states that
         "If zero, all relevant tasks are performed on the service thread."
         What does this mean? Can I re-enter the caching service if my thread-count is zero?
         Thank you,
         Denis.
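         For what it's worth, the pattern described above might look roughly like the sketch below (class and cache names are illustrative). Per the quoted guide text, this is only allowed when BACache lives on a different cache service than the one hosting ABCache; a CacheStore must never call back into its own service.
             import com.tangosol.net.CacheFactory;
             import com.tangosol.net.NamedCache;
             import com.tangosol.net.cache.AbstractCacheStore;

             // Sketch of the described pattern: ABCache's write-behind store "persists"
             // each entry by putting the reversed pair into BACache. BACache must be
             // hosted by a different cache service than ABCache.
             public class ReversingCacheStore extends AbstractCacheStore {

                 public Object load(Object key) {
                     return null; // not used in this write-only sketch
                 }

                 public void store(Object key, Object value) {
                     NamedCache baCache = CacheFactory.getCache("BACache");
                     baCache.put(value, key); // reverse key and value
                 }
             }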

    Dimitri -
         I am not sure I fully understand your answer:
         1. "Your test worked because write-behing backing map invokes CacheStore methods asynchronously, on a write-behind thread." In my configuration, I have default value for thread-count, which is zero. According to the documentation, that means that CacheStore methods would be executed by the service thread and not by the write-behind thread. Do I understand this correctly?
         2. "If will fail if CacheStore method will need to be invoked synchronously on a service thread." I am not sure what is the purpose of the "service thread". In which scenarios the "CacheStore method will need to be invoked synchronously on a service thread"?
         Thank you,
         Denis.

  • Cache write-behind complete check

    Is there a surefire way to check that a cache, which has a store persisting objects to the database and write-behind set to 2 seconds, has persisted all objects put into it?
    We have tried using JMX, querying the Cache's QueueSize and waiting until it reaches 0. It turns out that when putting objects into the write-behind cache, the write-behind queue is not necessarily non-zero immediately after the put(s). e.g. QueueSize may be 0, even if objects still need to be persisted.
    For our nightly integration tests we need to clear out the cache, but want to make sure we do not call NamedCache.clear() on a cache that still has objects that need to be persisted.
    Any ideas?

    Hi Rob,
    The problem may actually be the timeliness of updates to the QueueSize JMX attribute, as we're using the MBeanConnector to obtain information on our cache members (and providers). Assuming that objects actually make it to the write-behind queue during the cache put call, and given that certain tests need to be sure these objects are persisted, instead of the accounting approach discussed previously I found a forum thread on ReadWriteBackingMap flush calls.
    To get access to ReadWriteBackingMap.flush() I created a small test today using a subclass of ReadWriteBackingMap that registers the backing map with our CacheStore implementation:
        protected void configureCacheStore(CacheStore store, boolean readOnly) {
            super.configureCacheStore(store, readOnly);
            if (store instanceof CacheLoaderWriterProvider) {
                ((CacheLoaderWriterProvider) store).registerBackingMap(this);
            }
        }
    Our cache store (CacheLoaderWriterProvider) in turn exposes a call to the registered map's flush method as a JMX operation.
    Whenever we need to be sure the write-behind queue is empty during our tests, we'll call this JMX operation.
    Best Regards,
    Marcel.
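    To complete the picture, the cache-store side of this approach might look roughly like the sketch below; only the CacheLoaderWriterProvider / registerBackingMap names come from the post above, and the rest (including how flush() ends up exposed over JMX) is illustrative.
        import com.tangosol.net.cache.AbstractCacheStore;
        import com.tangosol.net.cache.ReadWriteBackingMap;

        // CacheLoaderWriterProvider is the poster's own interface; a minimal version:
        interface CacheLoaderWriterProvider {
            void registerBackingMap(ReadWriteBackingMap map);
        }

        // Sketch: the cache store keeps the ReadWriteBackingMap registered by the
        // subclass above and exposes a flush operation that drains the write-behind
        // queue on demand (registered as a JMX operation elsewhere).
        public class FlushableCacheStore extends AbstractCacheStore implements CacheLoaderWriterProvider {

            private volatile ReadWriteBackingMap backingMap;

            public void registerBackingMap(ReadWriteBackingMap map) {
                this.backingMap = map;
            }

            // Blocks until the pending write-behind entries have been written.
            public void flush() {
                ReadWriteBackingMap map = backingMap;
                if (map != null) {
                    map.flush();
                }
            }

            public Object load(Object key) {
                return null; // loading omitted in this sketch
            }

            public void store(Object key, Object value) {
                // actual database write omitted in this sketch
            }
        }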

  • Help needed to sync cache with the database using CacheStore

    Hi,
    I need to have my cache in sync with the database. I followed the tutorial to create a DBCacheStore class which will act as a CacheStore between the cache and my Oracle database. My cache config file contains:
    <backing-map-scheme>
      <read-write-backing-map-scheme>
        <internal-cache-scheme>
          <local-scheme></local-scheme>
        </internal-cache-scheme>
        <cachestore-scheme>
          <class-scheme>
            <class-name>com.coherence.cacheUtil.DBCacheStore</class-name>
            <init-params>
              <init-param>
                <param-type>java.lang.String</param-type>
                <param-value>{cache-name}</param-value>
              </init-param>
            </init-params>
          </class-scheme>
        </cachestore-scheme>
      </read-write-backing-map-scheme>
    </backing-map-scheme>
    Now, what I am trying to achieve is,
    1. Load the cache with values when the cache server is up, without any need to run a separate program
    2. Integrate all read/write cache operations with the database
    I am aware that the CacheStore class implements the store, storeAll, erase, eraseAll, load, and loadAll methods from the underlying interface, and that I need to write the corresponding code to add/remove/update records in the database.
    But, I am really not sure how and when to include the logic to load the cache with the data from the database when the server is up.
    Thanks for the help. Much appreciated!

    Could you try something like this to load the cache?
    // cache is the NamedCache to warm up; conn is an open JDBC Connection
    HashSet buffer = new HashSet();
    int count = 0;
    Statement s = conn.createStatement();
    ResultSet rs = s.executeQuery("select key from table");
    while (rs.next()) {
        String key = rs.getString(1);
        buffer.add(key);
        // this loads 1000 items at a time into the cache (getAll goes through the CacheStore)
        if ((count++ % 1000) == 0) {
            cache.getAll(buffer);
            buffer.clear();
        }
    }
    if (!buffer.isEmpty()) {
        cache.getAll(buffer);
    }
    This web page also gives a good example:
    http://www.oracle.com/technology/pub/articles/vohra-coherence.html
    -Luk

  • Switching from write through to write behind automatically

    Hi,
    We are considering a Coherence solution to protect a customer facing application from outages due to database failures. This is for a financial company and the monetary value of each transaction is large and we want to provide 100% guarantee against data loss while not incurring any outages. We want to provide a write-through persistence to the database through Coherence which can switch to a write-behind automatically at runtime if the database persistence fails. Is this doable automatically and would it solve the problem I am trying to solve without losing any inflight transactions? Are there any real customer cases that were successful in achieving this using Coherence?
    Thanks
    Sairam

    SKR wrote:
    Jonathan.Knight wrote:
    Hi Sairam
    I know you can change the write-delay in JMX for a cache using write-behind, but I'm pretty certain you cannot make a write-through cache suddenly become a write-behind cache.
    I'm not sure why you think changing from write-through to write-behind will allow you to guarantee 100% no data loss - do you mean no loss of updates to the DB or no loss of data in the cache cluster? There are certainly scenarios where you can lose data from either the cluster or the DB that write-through or write-behind will not save you from. Presumably you want to use write-behind to allow for the DB to go down, although you will still need to configure Coherence to properly retry failed write-behind calls (see "CacheStore behaviour on failure"). What happens to your data if you are using write-behind and you lose a partition from your cluster (i.e. you lose a physical machine, or two or more JVMs in a short space of time)? You have data loss - you cannot guarantee against this, you can only mitigate it and have a recovery policy/procedure.
    JK
    JK,
    Thanks for your reply. I should have explained the scenario better. What we are trying to do is to have our transactions commit to the database synchronously using write-through, so that during normal operation the data will be committed, persisted and durable in the database. But our RW database becomes a single point of failure, and if a problem occurs with the database during peak load time, we run the risk of an outage till we fix the database problem or fail over to the standby (we don't have a RAC architecture or automatic failover, and the manual switchover takes about 10 - 15 mins minimum). We want to avoid this by providing a cache-only operation mode during such a failure, where the customers can continue to transact and the writes will get queued in the cache. I do understand that losing both the database and the cache, or losing the primary and the backup in the cache, would result in data loss. But I am assuming such a dual failure is rare.
    We do not want to run write-behind all the time but only during the database failure window. From what you mentioned, it seems the runtime switching from write-through to write-behind is not available as an option.
    Sairam
    Hi Sairam,
    I would suggest that you configure write-behind to have a fairly short write-delay, and you only return a confirmation to the client
    - either after the write-behind succeeded (you can use a backing map listener to listen for the removal of the decoration which marks the entry as dirty - see the sketch below)
    - or if the database went down (noticeable from the failure), then it is up to you whether you send a confirmation which also mentions that it is not persisted to disk yet, or not at all
    Best regards,
    Robert
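    A very rough sketch of what such a backing map listener could look like is below. It assumes the 3.x ExternalizableHelper decoration helpers and the DECO_STORE decoration that write-behind uses to mark not-yet-persisted entries; treat it as a starting point and verify the details against your Coherence version.
        import com.tangosol.util.AbstractMapListener;
        import com.tangosol.util.Binary;
        import com.tangosol.util.ExternalizableHelper;
        import com.tangosol.util.MapEvent;

        // Sketch: a backing map listener that reacts when an entry loses its
        // "dirty" (DECO_STORE) decoration, i.e. when the CacheStore has written it.
        public class StoreConfirmationListener extends AbstractMapListener {

            public void entryUpdated(MapEvent evt) {
                Binary binOld = (Binary) evt.getOldValue();
                Binary binNew = (Binary) evt.getNewValue();

                boolean wasDirty = binOld != null && ExternalizableHelper
                        .getDecoration(binOld, ExternalizableHelper.DECO_STORE) != null;
                boolean isDirty  = binNew != null && ExternalizableHelper
                        .getDecoration(binNew, ExternalizableHelper.DECO_STORE) != null;

                if (wasDirty && !isDirty) {
                    // The write-behind thread has just persisted this entry;
                    // signal whoever is waiting to send the client confirmation.
                }
            }
        }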

  • Coherence Extends and Local Cache

    I am trying to use Coherence*Extend to do some work with a cache.
    Is that possible with a local cache? I keep getting a null pointer exception,
    as if the data is not being stored in the cache.
         <cache-mapping>
              <cache-name>local-pds2-*</cache-name>
              <scheme-name>local-cache</scheme-name>
         </cache-mapping>
         <local-scheme>
              <scheme-name>local-cache</scheme-name>
                   <eviction-policy>LRU</eviction-policy>
                   <high-units>32000</high-units>
                   <low-units>10</low-units>
                   <unit-calculator>FIXED</unit-calculator>
                   <expiry-delay>10ms</expiry-delay>
                   <flush-delay>1000ms</flush-delay>
         </local-scheme>
    Is there something wrong in my configuration?

    This is the config I use for the client:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>local-pds2-*</cache-name>
          <scheme-name>local-cache</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>dist-pds2-*</cache-name>
          <scheme-name>extend-dist</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <local-scheme>
          <scheme-name>local-cache</scheme-name>
          <eviction-policy>LRU</eviction-policy>
          <high-units>32000</high-units>
          <low-units>10</low-units>
          <unit-calculator>FIXED</unit-calculator>
          <expiry-delay>10ms</expiry-delay>
          <flush-delay>1000ms</flush-delay>
        </local-scheme>
        <remote-cache-scheme>
          <scheme-name>extend-dist</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>172.16.2.229</address>
                  <address>localhost</address>
                  <port>5354</port>
                </socket-address>
              </remote-addresses>
              <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
              <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
          </initiator-config>
        </remote-cache-scheme>
      </caching-schemes>
    </cache-config>
    And this is the config for the server:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <defaults>
              <serializer system-property="tangosol.coherence.serializer"/>
              <socket-provider system-property="tangosol.coherence.socketprovider"/>
         </defaults>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>dist-pds2-*</cache-name>
                   <scheme-name>dist-default</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <cache-mapping>
              <cache-name>dist-*</cache-name>
              <scheme-name>distributed</scheme-name>
              <init-params>
                   <init-param>
                        <param-name>back-size-limit</param-name>
                        <param-value>8MB</param-value>
                   </init-param>
              </init-params>
         </cache-mapping>
         <distributed-scheme>
              <scheme-name>distributed</scheme-name>
              <service-name>DistributedCache</service-name>
              <backing-map-scheme>
                   <local-scheme>
                        <scheme-ref>binary-backing-map</scheme-ref>
                   </local-scheme>
              </backing-map-scheme>
              <autostart>true</autostart>
         </distributed-scheme>
         <local-scheme>
              <scheme-name>binary-backing-map</scheme-name>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>{back-size-limit 0}</high-units>
              <unit-calculator>BINARY</unit-calculator>
              <expiry-delay>{back-expiry 1h}</expiry-delay>
              <flush-delay>1m</flush-delay>
              <cachestore-scheme></cachestore-scheme>
         </local-scheme>
         <caching-schemes>
              <distributed-scheme>
                   <scheme-name>dist-default</scheme-name>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address >localhost</address>
                                  <port >5354</port>
                             </local-address>
                        </tcp-acceptor>
                   </acceptor-config>
                   <proxy-config>
                        <cache-service-proxy>
                             <enabled>true</enabled>
                        </cache-service-proxy>
                        <invocation-service-proxy>
                             <enabled>true</enabled>
                        </invocation-service-proxy>
                   </proxy-config>
                   <autostart >true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>

  • Local Cache containing all Distributed Cache entries

    Hello all,
    I am seeing what appears to be some sort of problem. I have 2 JVMs running, one for the application and the other serving as a Coherence cache JVM (near-cache scheme).
    When I stop the cache JVM, the local JVM displays all 1200 entries even though the <high-units> for that cache is set to 300.
    Does the local JVM keep a copy of the distributed data?
    Can anyone explain this?
    Thanks

    Hi,
    I have configured a near cache with a front scheme and a back scheme. In the front scheme I have used a local cache and in the back scheme I have used the distributed cache. My idea is to have a distributed cache on the Coherence servers.
    I have JVM 01, which has the WebLogic app server, while JVM 02 has 4 Coherence servers, all forming the cluster.
    Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers (SSI)?
    Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling that the data is also stored locally on JVM 01, which has the WebLogic server running.
    Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
    Can somebody help me with these questions? I appreciate your time.

  • Querying cache with data that is large enough to fit into available memory

    A query on a cache applies only to the data that fits into the cache storage, as per the link http://coherence.oracle.com/display/COH35UG/Query+the+Cache. However, there is also a mention of restricting the cache content along a specific dimension, so that the query actually knows what to expect from the cache.
    Generally, any enterprise application generates a huge amount of data, and practically it would be difficult to fix the limit along a specific dimension. Fixing the limit means deciding what else has to be taken from the persistent store.
    Please let me know if there are best practices defined in this regard?

    Hi,
    Thanks for the clarification.
    Okay. So in a nutshell it sounds like you have data spanning a memory cache and living in a database and
    you want to be able to query the entire dataset, so you need some way to figure out what
    is in the cache and what is in the database?
    In general queries in Coherence only work against the cached data, and because of the LRU characteristics
    of memory caches its going to be really difficult to know at a point in time what set of information is living
    in memory and which has been evicted and persisted in the underlying database. If the dataset that spills
    over into a database is fringe (i.e. a small percentage of your total data), then the simplest solution is to
    add new cache servers to get you out of this problem (i.e. avoid databases). If the cache is just the tip
    of the iceberg of all your data then you probably want to have either a write-through or write-behind
    configuration and query the database directly (write-through will get you everything you put into a cache,
    while write-behind will get everything except for the most recent puts which are working their way to being written in batch).
    My guess is that retrieval of keys out of the database, and then retrieval of data through the cache using
    the keys would not be a good idea. For one thing if the database is substantially larger than the cache,
    the entire cache will get evicted by the query activity, which generally is a bad idea. More importantly the
    code path to get the data will be much longer and would result in N number of invocations of SQL as opposed
    to a single invocation of SQL with a cursor.
    Regards,
    Bob

  • Off-heap backing maps seem to generate lots of garbage at insert..!?

    I have been doing a lot of benchmarks of distributed caches with different backing maps. The results were partly positive (I hoped that a partitioned (splitting) off-heap backing map would be almost as fast as a non-splitting on-heap backing map). For reads and various types of queries this turned out to be mostly true (some queries were slightly slower - probably because they were performed per partition).
    For inserts it does however sadly seem to be another story - already when using a non-splitting NIO backing map, inserts seemed to generate a lot of garbage, slowing the benchmark down significantly, and when switching to a splitting NIO backing map this effect became so extreme that full GC occurred more or less constantly on the cache nodes, slowing execution down to almost a standstill :-(
    Has anybody else tried this and seen the same results, or do any of the Coherence developers have a theory?
    To me it would seem like network I/O to off-heap storage (using storage buffers allocated with NIO, just like the communication buffers!) should be at least as easy to perform without generating excessive garbage as I/O to heap objects, but since I don't know the internals of Coherence I can't say for sure whether something breaks this theory.
    For me the main expected advantage of using off-heap rather than on-heap would have been REDUCED GC activity and shorter pauses, but instead it seems like the result is the opposite - at least when doing inserts...
    My example does not use (or need!) any secondary indexes (it only performs get/put/lock/unlock), but each entry is locked before it is inserted and unlocked after (this is needed for the algorithm I am using as a benchmark) - as I have pointed out in another thread it is a pity that no lockAll/unlockAll method calls exist (my benchmark is suffering a lot from all the lock/unlock remote calls) - the overhead for this is however nothing compared to the performance hit that comes from all the GC...
    I have tried to tune the GC in several ways but this has only to a very limited extent reduced the GC pause length or the frequency of full GC - it just seems like a LOT of garbage is generated for some reason...
    The settings that have so far resulted in the least GC overhead (still awfully bad though!) are -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy. I am using Coherence 3.5 GE and Sun JRE 1.6.0_14.
    /Magnus

    Thanks for the info - I was indeed using different initial and max sizes in this experiment, and setting them the same eased the problem (now I mostly get incremental rather than full GC messages). Inserts do however still generate more GC activity than reads (which seem to be more or less totally free from Java heap allocation/deallocation, which is VERY good since reads are so common!). Perhaps there is some more tweaking of the heap allocation/deallocation that can be done at the same time as you work on that bug you mentioned - it would really be nice to have a NIO backing map with close to zero Java heap usage for all primitive operations (read, insert, delete)!
    /Magnus

  • What is backing map in Java?

    What is a backing map in Java?
    Thank you

    A Backing Map is a map which is used to implement another map.
    E.g.:
    - A thread-safe map can be a wrapper around a backing map which actually holds the entries.
    - A cache usually has a backing map, and the cache is also a map, but a smarter one.
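    A minimal Java illustration of the first point (plain JDK classes, nothing Coherence-specific): the outer map adds behaviour (here, synchronization) while a plain backing map actually holds the entries.
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;

        public class BackingMapExample {
            public static void main(String[] args) {
                Map<String, String> backingMap = new HashMap<String, String>(); // actually holds the entries
                Map<String, String> threadSafe = Collections.synchronizedMap(backingMap);

                threadSafe.put("key", "value");            // goes through the wrapper...
                System.out.println(backingMap.get("key")); // ...but is stored in the backing map
            }
        }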
