Write-Behind Caching and Limited Internal Cache Size

Let's say I have a write-behind cache and configure its internal cache to be of a fixed limited size, e.g. 10000 units. What would happen if more than 10000 units are added to the write-behind cache within the write-delay period? Would my CacheStore's storeAll() get all of the added values or would some of the values be missed because of the internal cache size limitation?

> Hi Denis,
>
> If an entry is removed while it is still in the
> write-behind queue, it will be removed from the queue
> and CacheStore.store(oKey, oValue) will be invoked
> immediately.
>
> Regards,
> Dimitri

Dimitri,
Just to confirm that I understand it right: if there is a queued update to a key which is then remove()-ed from the cache, the following happens:
First, CacheStore.store(key, queuedUpdateValue) is invoked.
Afterwards, CacheStore.erase(key) is invoked.
Both happen synchronously to the remove() call.
I expected that only erase would be invoked.
BR,
Robert
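
To make the ordering easy to observe, one can plug a throwaway logging CacheStore into a write-behind test configuration, put a value, remove() the key within the write-delay window, and check whether store() is logged before erase(). A minimal sketch (the class name and setup are hypothetical, not from this thread):

    import com.tangosol.net.cache.AbstractCacheStore;

    // Configure as the cachestore-scheme of a write-behind cache to log the
    // order in which Coherence calls back into the store.
    public class LoggingCacheStore extends AbstractCacheStore {
        // load() is abstract in AbstractCacheLoader; the backing store is empty here
        public Object load(Object oKey) {
            System.out.println("load(" + oKey + ")");
            return null;
        }
        public void store(Object oKey, Object oValue) {
            System.out.println("store(" + oKey + ", " + oValue + ")");
        }
        public void erase(Object oKey) {
            System.out.println("erase(" + oKey + ")");
        }
    }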

Similar Messages

  • Can a db slowdown with write-behind cause a slowdown in cache operations?

    If we have a Coherence cluster, and one cache configured with write-behind is having trouble writing to the DB (i.e., it's slow), and we keep adding objects to the cache faster than the DB can consume them, will flow control kick in and cause the writes to the cache to block/slow down? I.e., the classic producer-consumer problem, where we are adding objects to the cache faster than the cachestore can consume them.
    What happens in this case? Will flow control kick in and block writes to the cache? Will an internal buffer just keep growing? Are there any knobs to tweak this behavior (e.g., in the case of spikes, where the producer temporarily produces faster than the consumer can consume for a brief period of time, but then things go back to normal)?

    user9222505 wrote:
    I believe we discovered that the same thread pool is used for all requests to the cache, including gets, puts and calls into the cachestore. So if the writes are slow within the cachestore, then it uses up all of the threads and slows everything down.
    Hi,
    This is not really correct.
    If a cache in a service is configured to use write-behind then a separate thread for that service is started, which deals with write-behind store and storeAll operations.
    The remove operations need to be handled synchronously to avoid corruption of the data set in the scenario of reading an entry from the cache immediately after removing it (if it were not synchronously deleted from the backing storage, reading it back could give an incorrect non-null value). Therefore remove operations are handled synchronously on the service / worker thread and are not delayed on the write-behind thread.
    Gets are also synchronously handled, so they again are served on the service / worker thread.
    So if the puts are slow and wait too long, that may delay other puts but should not block other threads. If the puts are computationally intensive, then obviously they hinder other threads by consuming the same CPU resources, not simply because they execute.
    Best regards,
    Robert

  • LOCAL OLAP CACHE AND GLOBAL OLAP CACHE

    What is a local OLAP cache and what is a global OLAP cache?
    What is the difference between them? Can you explain a scenario, please?
    Will reward with points.
    Thanks in advance

    Hello Guru,
    Local cache is specific to a user. Before BW 3.0 only local cache was available: if a user runs a query, the data comes into the cache from the InfoProvider, and the next time the same query will not go to the database but will instead fetch the data from cache memory. This cache is used only for that particular user; if some other user tries the same query, it will not pick up data from the cache.
    From BW 3.0 onward we have a global cache, which means several users can access the same cache for the same query or for related data in the cache.
    Thanks
    Tripple k

  • Difference between Presentation Server Cache and BI Server Cache

    Hello Experts,
    What is the difference between Presentation Server Cache and BI Server Cache?
    Thanks,
    S Gouda

    Hello,
    Okay, so what do you want to do about caching at the BI Server and the Presentation Server?
    An nQSXXXX.tmp file is a temporary cache file maintained by the BI Server for a user's analysis request, and is a kind of shared data between the OBI Server and the OBI Presentation Server. This is referred to as the 'Cursor Cache', which can be managed by going through Administration > Manage Sessions > Clear Cursor Cache. These .tmp files are not related to the BI Server cache.
    By default, the BI Server cache is stored in [middleware_home]/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/cache and stored as NQSxxxxx.tbl files.
    Caching occurs by default at the subrequest level, which results in multiple cache entries for some SQL statements. Caching subrequests improves performance and the cache hit ratio, especially for queries that combine real-time and historical data.
    Below are some useful links for cache management in OBIEE 11g.
    http://oraclebisolutions.blogspot.com/2013/02/obiee-11g-obi-server-and-presentation.html
    http://drazda.blogspot.com/2012/10/obiee-11g-cache-management.html
    http://allaboutobiee.blogspot.in/2012/03/cache-management-purging-cache.html
    Please mark if this helps. Otherwise, post the exact questions you have about this.
    Thanks,
    SVS

  • Write-Behind, Expiration, and SQL Exceptions.

    Hi Chaps,
    If a cache with write-behind enabled has problems writing to the DB, I understand that Coherence will re-queue the objects and write them when the DB is available.
    The problem I have is that (after a DB failure) I don't see them being written - I can see these items in the cache but not in the DB, even several hours after the outage. (Items that were added to the cache after the outage are being written.)
    Is there anything the CacheStore methods (specifically store()) need to do with regard to exceptions to ensure that these items are re-queued?
    Next question: I was also wondering how this is managed with regard to expiry?
    We have our own expiry routine which removes items from the cache that are older than 24 hours (this predates being able to expire objects by specifying the timeout in the put() method call, which I am intending to switch to).
    If an item has not been written to the DB due to an outage and is then expired (by our own routine or by Coherence), is it then lost forever, or will it remain in the queue? (Seeing as the queue holds references I am guessing not, but thought I'd check.)
    Thanks,
    Randal.
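
    On the exception question: as far as I know, letting store() propagate an unchecked exception is exactly what signals Coherence to re-queue the entry; swallowing the SQLException would make the write look successful. Since a re-queued entry will be replayed, the write itself should be idempotent. A rough sketch under those assumptions (table, columns, and the DataSource wiring are invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import javax.sql.DataSource;
        import com.tangosol.net.cache.AbstractCacheStore;

        public class RequeueingCacheStore extends AbstractCacheStore {
            private final DataSource dataSource; // assumed to be wired up externally

            public RequeueingCacheStore(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public Object load(Object oKey) {
                return null; // loading is not the focus of this sketch
            }

            public void store(Object oKey, Object oValue) {
                // MERGE keeps the write idempotent, so replaying a re-queued entry is harmless
                String sql = "MERGE INTO payload_tbl t USING (SELECT ? id, ? val FROM dual) s"
                           + " ON (t.id = s.id)"
                           + " WHEN MATCHED THEN UPDATE SET t.val = s.val"
                           + " WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)";
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setObject(1, oKey);
                    ps.setObject(2, oValue);
                    ps.executeUpdate();
                } catch (SQLException e) {
                    // Do NOT swallow this: propagating an unchecked exception is what
                    // tells Coherence the write failed, so the entry is re-queued
                    // (subject to write-requeue-threshold).
                    throw new RuntimeException("store failed for key " + oKey, e);
                }
            }
        }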

    Jon,
    I have a question related to this... If you remember, a few weeks back I stumbled upon the problem that the "version-persistent" map for the versioned-backing-map-scheme does not accept putAll operations. The workaround, until you guys implement it, was to override the putAll method of the cache store and throw an UnsupportedOperationException (to force individual puts).
    Well, although this workaround works, I am getting tons and tons of:
    2006-04-06 17:18:27.347 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): The CacheStore "MyCacheStore@46b9979b" does not support storeAll().
    2006-04-06 17:18:27.348 Tangosol Coherence 3.1/339 <Error> (thread=WriteBehindThread:MyCacheStore, member=1): Failed to store keys="[16, 18, 21, 26, 5, 13, 14, 25, 17, 15, 23, 19, 2, 6, 9, 7]":
    java.lang.UnsupportedOperationException
    at ...MyCacheStore.storeAll(MyCacheStore.java:126)
    at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeAll(ReadWriteBackingMap.java:3820)
    at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:3538)
    at com.tangosol.util.Daemon$1.run(Daemon.java:63)
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="16"
    2006-04-06 17:18:27.349 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="18"
    2006-04-06 17:18:27.350 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="21"
    2006-04-06 17:18:27.351 Tangosol Coherence 3.1/339 <Warning> (thread=WriteBehindThread:MyCacheStore, member=1): Requeued store for key="26"
    The first UnsupportedOperationException is expected, but I'm not sure what the requeued warnings are all about. These are not failures to the DB... it is something else. (Mind you, this happens when trying to load a lot of data into the map.)
    1- Is this requeuing related or the same as in failed DB stores?
    2- Is it possible to "lose" stores if I don't configure the write-requeue-threshold with very, very high values? I must ensure I don't lose anything.
    On a related note: in some circumstances, I need to ensure that the "write queue" is flushed or cleared. For example, I may want to force a flush of all pending stores (and wait/block until that's done).
    I have looked into it and I don't seem to know how to do it. I can read the write-queue length, but I believe that this is not very accurate... since my tests seem to indicate that the write-behind thread may take the entries to store off the write queue and then deal with them in parallel (which means that there are still pending entries although the write-queue size is 0). Also, there are some calls from the cache store that, at first, seem to give some access to the write thread (potentially allowing me to contact the thread to tell it to flush or discard any pending stores)... but I believe that all of those functions are protected... though there may be other ways.
    I guess my second batch of questions are:
    1- How can I effectively force a flush (or clear) of the pending stores, such that not a single store is pending in any queue (visible or invisible to the programmer)?
    2- What is the role of re-queuing in these situations? Where does the queue sit - the thread? the cache store? Who is responsible for retrying, and when? I would like to flush those entries too.
    A quick explanation of the operation of the write thread would also be much appreciated.
    Thanks!
    Josep M.
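
    On Josep's flush question: I am not aware of a supported public API for this, but on a storage-enabled member the map behind a read-write-backing-map-scheme can be reached through the BackingMapManagerContext, and ReadWriteBackingMap exposes a flush() in at least some releases. Treat the sketch below as an assumption to verify against your Coherence version, not a documented recipe ("payloads" is a hypothetical cache name):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.ReadWriteBackingMap;

        public class WriteBehindFlusher {
            // Must run on a storage-enabled member, and only drains the partitions
            // owned by that member - it is not a cluster-wide flush.
            public static void flushPendingStores() {
                NamedCache cache = CacheFactory.getCache("payloads");
                Object map = cache.getCacheService()
                                  .getBackingMapManager()
                                  .getContext()
                                  .getBackingMap("payloads");
                if (map instanceof ReadWriteBackingMap) {
                    ((ReadWriteBackingMap) map).flush(); // drain the write-behind queue
                }
            }
        }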

  • Write behind exception and recovery

    Hi all,
    I am working on the write-behind part of an equity trading system. I know that a cache store operation will eventually be thrown away if the number of retries exceeds write-requeue-threshold. However, this is not acceptable, as the DB must be in sync with the caches at least at day end. For some more complicated caches we use a custom cache store implementation, and Hibernate for simple caches. I am thinking to capture the SQL statements that failed during the day and finally, at day end, manually fix the issues (e.g. DB issues or others) and then have them executed.
    Questions:
    1. Is this a good approach for handling the scenario? If yes, is there any way I can capture the statements and write them to a file for running in, for example, SQL*Plus in the case of Hibernate?
    2. Is there any out-of-the-box mechanism in Coherence for recovering write-behind queues in case the WHOLE cluster fails (not just a node)?
    Henry

    922963 wrote:
    Hi all,
    I am working on the write-behind part of an equity trading system. I know that a cache store operation will eventually be thrown away if the number of retries exceeds write-requeue-threshold. However, this is not acceptable, as the DB must be in sync with the caches at least at day end. For some more complicated caches we use a custom cache store implementation, and Hibernate for simple caches. I am thinking to capture the SQL statements that failed during the day and finally, at day end, manually fix the issues (e.g. DB issues or others) and then have them executed.
    Questions:
    1. Is this a good approach for handling the scenario? If yes, is there any way I can capture the statements and write them to a file for running in, for example, SQL*Plus in the case of Hibernate?
    Hi Henry,
    There are a few caveats you need to take care of, but in general it is possible.
    Rather than SQL statements, serialized entries would probably be simpler to work with when you try to restore them.
    Also, you have to be aware that Coherence may fail to write an entry to the DB, but on retry it may write a newer value. If that succeeds, you have to be able to figure out that the earlier failed write must not be re-executed.
    In effect, you should have per-entry versioning in the database and you should check versions of the entity in the database upon writing both from the cache store and also from your end-of-day retry logic.
    2. Is there any out-of-the-box mechanism in Coherence for recovering write-behind queues in case the WHOLE cluster fails (not just a node)?
    No, nothing like that comes out-of-the-box; if you lose a partition, you lose your write-behind-enqueued entries, too. You could log your failed writes to disk, though, as you indicated above.
    Best regards,
    Robert
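
    To make the versioning point concrete, here is a minimal sketch of a version-checked write (table, columns, and types are invented). Both the CacheStore and the end-of-day replay would funnel through something like this, so a stale retry can never overwrite a newer row:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public final class VersionedWriter {
            private VersionedWriter() {}

            // Returns false when the row already holds this version or a newer one,
            // i.e. the earlier "failed" write actually succeeded or was superseded.
            // (The insert path for brand-new rows is omitted for brevity.)
            public static boolean writeIfNewer(Connection con, long id, long version, String payload)
                    throws SQLException {
                String sql = "UPDATE trade SET payload = ?, version = ?"
                           + " WHERE id = ? AND version < ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, payload);
                    ps.setLong(2, version);
                    ps.setLong(3, id);
                    ps.setLong(4, version);
                    return ps.executeUpdate() > 0;
                }
            }
        }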

  • Hibernate's dual-layer cache and TopLink's caching strategy

    Dear members,
    I understand that caching in Hibernate and TopLink is implemented (or utilized) differently. Hibernate seems to have 'dual-layer caching' (which may imply it has two layers of cache), whereas TopLink has a session cache and a shared cache. The way I see it, they seem to be aiming for the same thing. What are the differences between those two caching architectures (obviously there are some, only I do not know them), and how different are they?
    Howard

    Yes there are differences :) For details check out
    TopLink vs Hibernate... revisited... again :)
    and
    Indirection - how are references resolved after session has been closed?

  • Oracle cache and File System cache

    At a checkpoint, the Oracle buffer cache is written to disk. But if an Oracle database keeps its datafiles on a file system, it is likely that the data still sits in the file system cache. I don't know how Oracle can keep the data consistent.

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or Unix we can, if required, set "direct to disk" at the OS level, but on Windows "direct to disk" is the default and we do not need to set it manually?
    And I have a further question: if a database is stored on a SAN disk, say a volume from a disk array, and the disk array can take a snapshot of a disk at the block level, we need to implement an online backup of the database. The steps are: alter tablespace begin backup, alter system suspend, then take a snapshot of the volume which stores all the database files, including datafiles, redo logs, archived redo logs, control file, server parameter file, network parameter files and password file. Do you think this backup has integrity or not? Please note that we do not flush the FS cache before these steps. Let's assume the SAN cache is flushed automatically. Can I assume it has integrity because the redo writes are synchronous?

  • Bridge Freezing, Not Displaying Thumbnails, Says Disk Space is Low and to Purge Cache

    Hello,
    I'm working in Bridge CS5 on Mac OS X version 10.9.2. I have 12 GB of RAM and the server has 240 TB capacity, 76.69 TB available.
    Recently I've received the following error message while processing large photo shoots with both DNG and JPEG files: Bridge is running low on memory. It is recommended that Bridge is restarted.
    I have also been manually caching large photo shoots and my cache preferences were set to Keep 100% Preview In Cache and Automatically Export Cache To Folder When Possible. My cache size was set to 500,000.  Yesterday, Bridge started to freeze while attempting to open folders. The wheel would spin at the bottom with the message Compiling Criteria. The only way to close Bridge was a Force Quit. This happened several times.
    Today, image thumbnails will no longer display, only the DNG/JPEG icons. I received the following error message:
    I have tried restarting Bridge while holding the alt/Option key to reset the preferences and checking all three boxes - no luck. I have done the same thing with the boxes unchecked - no luck.  I have tried manually purging the cache from its location in the library - no luck. I have tried resetting the cache preferences from the toolbar - no luck. I have tried deleting the cache files from the folders - nothing changes. I have tried purging the cache for each folder from the tool bar - nothing happens.
    I need to restore the thumbnails and rebuild the cache for each folder. Does anybody have an idea what is wrong or how to help?
    Many Thanks!

    Okay, so I have a solution. If, after purging the central cache on the server, Bridge still does not perform normally, try emptying the trash. I did this and it worked.

  • Global Performance Cache and GPU

    Nvidia's website says: The Global Performance Cache feature makes After Effects faster and more responsive than ever before by taking full advantage of the power of your computer's hardware. NVIDIA GPUs allow you to accelerate previews when drawing images to the screen for a highly interactive experience. 
    Does the Global Performance Cache automatically use the GPU, or do you have to enable it? And is this a good reason to invest in a good GPU, or not really?

    The Global Performance Cache has nothing whatsoever to do with the GPU or Nvidia. See this for details of the cache features:
    https://www.video2brain.com/en/lessons/global-performance-cache-and-persistent-disk-cache
    After Effects uses the GPU for almost nothing. Details are here:
    http://www.adobe.ly/AE_CUDA_OpenGL_GPU

  • Write-behind max speed?

    Hi,
    We are trying to test the speed of the write behind mechanism and we would be interested to know how other coherence users handle, for example, writing 1 million rows into the database.
    At the moment, using JDBC batch inserts, we can write approximately 30000 rows per minute, which means it would take about 30 minutes to save 1 million rows. Are there any other methods that other Coherence users use that can improve on this?
    Many thanks,

    user738616 wrote:
    Hi,
    This has nothing to do with Coherence, as the implementation of the CacheStore is outside of Coherence. Apart from JDBC batching, you should try using PL/SQL bulk binds for such numbers.
    Hope this helps!
    Cheers,
    NJ
    Hi NJ,
    we actually measured PL/SQL bulk binds against plain SQL (both via JDBC)... for anything which can be translated to plain inserts/updates, plain SQL is way faster (more than 10x).
    You only win with bulk binds when the statement you send down actually performs more complex logic across multiple statements, so that you also win by optimizing away the round-trips.
    Best regards,
    Robert
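
    For reference, a bare-bones JDBC batch storeAll() of the kind being measured here might look like the sketch below (table, columns, and DataSource wiring are illustrative). One batch and one commit per storeAll() call, combined with a write-delay long enough for rows to accumulate per call, is usually where most of the throughput comes from:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.Map;
        import javax.sql.DataSource;
        import com.tangosol.net.cache.AbstractCacheStore;

        public class BatchingCacheStore extends AbstractCacheStore {
            private final DataSource dataSource;

            public BatchingCacheStore(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public Object load(Object oKey) {
                return null; // not the focus of this sketch
            }

            public void storeAll(Map mapEntries) {
                String sql = "INSERT INTO rows_tbl (id, val) VALUES (?, ?)";
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    con.setAutoCommit(false);
                    for (Object o : mapEntries.entrySet()) {
                        Map.Entry entry = (Map.Entry) o;
                        ps.setObject(1, entry.getKey());
                        ps.setObject(2, entry.getValue());
                        ps.addBatch(); // one round-trip for the whole batch
                    }
                    ps.executeBatch();
                    con.commit();
                } catch (SQLException e) {
                    throw new RuntimeException("batch store failed", e); // triggers requeue
                }
            }
        }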

  • Write-Behind Caching and Re-entrant Calls

    Support Team -
         The Coherence User Guide states that:
         "The CacheStore implementation must not call back into the hosting cache service. This includes OR/M solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call will "consume" a cache service thread and could result in deadlock if a cache service threadpool is exhausted)."
     I have load-tested a use case wherein I have two caches: ABCache and BACache. ABCache is accessed by the application for write operations, BACache is accessed by the application for read operations. ABCache is a write-behind cache whose CacheStore populates BACache by reversing the key and value of each cache entry stored in ABCache.
         The solution worked under load with no issues.
         But can I use it? Or is it too dangerous?
         My write-behind thread-count setting is left at default (0). The documentation states that
         "If zero, all relevant tasks are performed on the service thread."
         What does this mean? Can I re-enter the caching service if my thread-count is zero?
         Thank you,
         Denis.

    Dimitri -
         I am not sure I fully understand your answer:
         1. "Your test worked because write-behing backing map invokes CacheStore methods asynchronously, on a write-behind thread." In my configuration, I have default value for thread-count, which is zero. According to the documentation, that means that CacheStore methods would be executed by the service thread and not by the write-behind thread. Do I understand this correctly?
         2. "If will fail if CacheStore method will need to be invoked synchronously on a service thread." I am not sure what is the purpose of the "service thread". In which scenarios the "CacheStore method will need to be invoked synchronously on a service thread"?
         Thank you,
         Denis.

  • Write-Behind Caching and Old Values

    Is there a way to access the old value cached in the write-behind cache for the same key from the CacheStore's store() or storeAll() method?

    > I have a business POJO with three parts: partA,
    > partB, partC inside. Each of these three parts is
    > persisted by a separate SQL. So, every time I persist
    > my POJO, up to 3 SQLs may be executed.
    I understand.
    > When a change happens in my POJO, it goes onto the
    > write-behind queue. In my CacheStore.store() or
    > CacheStore.storeAll() I would like to be able to make
    > an intelligent decision about which of the three
    > parts: partA, partB or partC has actually changed and
    > only run the SQL updates for the changed parts. This
    > would allow me to avoid massive amounts of
    > unnecessary SQL updates for the parts that did not
    > change.
    Right. Keep in mind that there are two conditions that you must be aware of:
    1) Multiple updates could have occurred to the object, meaning that the database update would have to "roll up" the results of multiple changes to the object.
    2) Some or all of the updates could have already occurred to the database. This may be a little trickier to understand, but it reflects the possible machine failure conditions that occurred while a write-behind was in progress.
    Although the latter are unlikely, they should be accounted for, and of course they are harder to test for with certainty. As a result, the updates to the information (the CacheStore implementation) must be built in an "idempotent" manner, i.e. allowing them to be executed more than once with no additional side effects.
    > If I had access to the POJO stored under the same key
    > before the new value was put in cache, I could use
    > equals() on each of the three parts to find out
    > exactly which one of them changed.
    While this is true, you would need to compare against the "known previous database state" version, not just the "old" version.
    > Of course, if this functionality is not available, I
    > would have to create dirty flags for each of the
    > three POJO parts. But I can't really clear my POJO's
    > flags and recache the POJO from within the store() or
    > storeAll(), right?
    Yes, but remember that those flags are "could be dirty" flags, because of the failure modes I described above.
    Peace,
    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
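
    A tiny sketch of the "could be dirty" flag approach (class, parts, and method names are invented). The flags travel inside the cached object, and per Cameron's caveat they are never cleared from within store()/storeAll(); each per-part UPDATE must still be written idempotently, because a flagged part may in fact already be in the database:

        import java.io.Serializable;

        public class ThreePartPojo implements Serializable {
            private String partA, partB, partC;      // each part persisted by its own SQL
            private boolean aDirty, bDirty, cDirty;  // "could be dirty" markers, serialized with the object

            public void setPartA(String v) { partA = v; aDirty = true; }
            public void setPartB(String v) { partB = v; bDirty = true; }
            public void setPartC(String v) { partC = v; cDirty = true; }

            public boolean partACouldBeDirty() { return aDirty; }
            public boolean partBCouldBeDirty() { return bDirty; }
            public boolean partCCouldBeDirty() { return cDirty; }
        }

        // In CacheStore.store() (fragment): issue only the UPDATEs for flagged parts.
        //     if (pojo.partACouldBeDirty()) { /* idempotent UPDATE for partA */ }
        //     if (pojo.partBCouldBeDirty()) { /* idempotent UPDATE for partB */ }
        //     if (pojo.partCCouldBeDirty()) { /* idempotent UPDATE for partC */ }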

  • Write-Behind Caching and Multiple Puts

    What happens when two consecutive puts are performed on the write-behind cache for the same key? Will CacheStore's store() or storeAll() be invoked once for every put() or only once for the last put() (the one which overrode the previous cached values)?

    Hi Denis,
     If you use write-behind, there will be no unnecessary database updates - only the last put() will result in a database update.
         Regards,
         Dimitri
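
    This coalescing is easy to verify with a counting store, run storage-enabled in a single JVM so the counter is shared (the cache name, write-delay, and class are hypothetical):

        import java.util.concurrent.atomic.AtomicInteger;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.AbstractCacheStore;

        public class CoalescingCheck {
            static final AtomicInteger STORES = new AtomicInteger();

            // Configure this class as the cachestore-scheme of "wb-test",
            // e.g. with <write-delay-seconds>5</write-delay-seconds>.
            public static class CountingStore extends AbstractCacheStore {
                public Object load(Object oKey) { return null; }
                public void store(Object oKey, Object oValue) {
                    STORES.incrementAndGet();
                    System.out.println("store(" + oKey + ", " + oValue + ")");
                }
            }

            public static void main(String[] args) throws InterruptedException {
                NamedCache cache = CacheFactory.getCache("wb-test");
                cache.put("k", "v1");
                cache.put("k", "v2");  // second put lands within the write-delay window
                Thread.sleep(10000);   // wait past the write-delay
                System.out.println("stores: " + STORES.get()); // expect 1, with value "v2"
            }
        }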

  • Write behind cache, DB down, when should the system stop taking new data in

    Hello:
    We are trying to use Coherence for our custom ESB, which is brokering payloads of various size between consumer and provider applications.
    Before Coherence, stopping our DB meant organization-wide outage for critically important business services.
    Since we have at least 40 GB of RAM in the production environment, we believe that our app
    can use the Coherence write-behind option to tolerate at least several hours' worth of DB outage.
    We are currently using a near cache backed by distributed cache in write-behind mode.
    9 business service JVMs (storage enabled=false) use 30 storage enabled JVMs.
    IMPORTANT: We need to create an automated alerting facility determining when the
    amount of unsaved data reaches a critical level after the DB goes down. This alert should help us decide when our application stops accepting inbound traffic.
    It is hard to use the QueueSize parameter for that because our payload memory footprint can vary from 1KB to 3MB.
    We do not expire any entries, in order to enable support queries against the cache during a DB outage.
    Our experiments with various flavors of overflow-scheme resulted in OutOfMemoryError, therefore
    we decided to implement a RAM-only cache as a first step.
    <near-scheme>
      <scheme-name>message_payload_scheme</scheme-name>
      <front-scheme>
        <local-scheme>
          <scheme-ref>limited_entities_front_scheme</scheme-ref>
          <high-units>100</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme>
                  <scheme-ref>limited_bytes_scheme</scheme-ref>
                  <high-units>199229440</high-units>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>com.comp.MessagePayloadStore</class-name>
                </class-scheme>
              </cachestore-scheme>
              <read-only>false</read-only>
              <write-delay-seconds>3</write-delay-seconds>
              <write-requeue-threshold>2147483646</write-requeue-threshold>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>

    <local-scheme>
      <scheme-name>limited_entities_front_scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <unit-calculator>FIXED</unit-calculator>
    </local-scheme>

    <local-scheme>
      <scheme-name>limited_bytes_scheme</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>

    Good info ... I feel like I need to restate my original question along with a couple of new questions caused by the discussion above.
    Q1. Does Coherence evict 'dirty', or 'queued', or 'unsaved' objects for the cache configuration provided above?
    The answer should be 'NO', otherwise Coherence is unsafe to use as a system of record:
    it should not just drop unsaved information on the floor.
    Q2. What happens to the front tier of the near + partitioned write-behind cache described above when the amount of unsaved data exceeds the maximum cache capacity defined via high-units?
    I would expect map.put to start throwing exceptions: the cache storage is full, so it should not accept more data.
    Q3. How can I determine the moment when the amount of dirty data in bytes(!), not in objects, hits 85% of the
    maximum allowed cache capacity configured in bytes (using the high-units param and the BINARY calculator)?
    A 'DirtyUnits' counter can probably be built with some lower-level Coherence API. Can we use
    this API?
    Please understand that we purchased Coherence for reliability, for making our
    system independent from short DB outages, and for keeping our business services up
    and running when the DBAs need some time for admin operations like rebuilding an index.
    Performance benefits are secondary and are not as obvious for our system, which
    uses primary keys only and has a well-tuned co-located Oracle back-end.
    We simply cannot put Coherence into production unless we prove that Coherence
    can reliably hold the data and give us information about an approaching crisis
    (the cache getting full of unsaved data).
    If possible, forward this message to Cameron Purdy,
    who was presenting Coherence to our team several months ago.
    Thanks,
    Vasili Smaliak
    Applications Architect, Enterprise App Integration
    GMAC ResCap
    [email protected]
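
    For the alerting side (Q3), one possible hook is JMX: with management enabled (-Dtangosol.coherence.management=all), Coherence registers a Cache MBean per storage node whose QueueSize attribute reports the write-behind queue length. It counts entries, not bytes, so for a 1KB-3MB payload spread you would have to multiply by a measured average payload size or track sizes yourself. A hedged sketch (the ObjectName pattern may differ between Coherence versions, so verify it):

        import java.util.Set;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import com.tangosol.net.management.MBeanHelper;

        public class WriteBehindQueueMonitor {
            // Sums the write-behind queue length over all storage nodes; alert when
            // this (times the average payload size) nears the high-units budget.
            public static long totalQueuedEntries(String serviceName, String cacheName)
                    throws Exception {
                MBeanServer server = MBeanHelper.findMBeanServer();
                ObjectName pattern = new ObjectName(
                    "Coherence:type=Cache,service=" + serviceName
                    + ",name=" + cacheName + ",tier=back,*");
                long total = 0;
                for (ObjectName name : server.queryNames(pattern, null)) {
                    total += ((Number) server.getAttribute(name, "QueueSize")).longValue();
                }
                return total;
            }
        }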
