Write-through limitation and putAll

Please find the quote below from the developer guide, in particular this sentence: "In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail."
If putAll is called on a cache, will it result in one CacheStore.storeAll call, or in many storeAll calls triggered from different Coherence nodes/servers? (Assume a distributed topology, Coherence 3.7.1.)
Will a store failure cause the putAll call to fail?
Are there any patterns that show how this works in Coherence with typical databases?
14.7.2 Write-Through Limitations
Coherence does not support two-phase CacheStore operations across multiple CacheStore instances. In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).

gs100 wrote:
Thanks for the input, I have further questions based on these suggestions.
1. Let us say one of the putAll calls fails: we would know that it has failed due to one or more underlying store/storeAll calls. And even if we roll back the Coherence transaction, the store/storeAll calls that succeeded would not be rolled back automatically, is that correct? If true, this means it would leave the underlying DB/store in a state inconsistent with the in-memory cache?

I guess that is one of the reasons why the transaction framework does not support cache stores... Also, write-behind would coalesce updates, which would have funny consequences with regard to the transactional context...

2. How do we get the custom implementation of putAll that you suggested, to handle specific errors? Any pointers on this would be helpful.

I guess it is not going to be posted; the Coherence team may or may not add something which is a bit more deterministic with regard to errors.
A few aspects of Coherence behaviour (a.k.a pitfalls) which you need to be aware of to be able to implement your own solution:
Exceptions propagating back to the client can happen in:
- entry-processor (not for putAll specifically)
- result serialization code (not for putAll specifically, but for processAll/aggregate for example)
- deserialization code (indexes/filter-based backing map listeners/cache stores lead to deserialization even for putAll)
- triggers (intentionally, too)
- cache stores
There is no place inside the NamedCache call where you could catch these exceptions, so they propagate out to the caller.
Coherence may execute the operation on one thread per partition or one thread per multiple partitions, but never on multiple threads per partition. This means there may be multiple exceptions even from a single storage node, but at most one exception is generated per partition (starting with 3.5).
If you send operations for multiple partitions in the same NamedCache call, you can lose exceptions: you cannot tell whether a given partition would or would not have thrown an exception had it been sent alone instead of together with another partition on the same node.
As you need to be able to return all exceptions from your method call, you have to produce, catch, and collect all of them; otherwise you would lose all but one. To produce and catch all exceptions, you have to produce them independently, i.e. different partitions must be operated on independently.
To send an operation to a single partition only, separate the operations by partition: split the key set per partition for key-based operations, or apply a PartitionedFilter for filter-based operations.
It is up to you where and how you iterate through the partitions. You can do it on the caller, or you can do it on a storage node from an Invocable sent via an InvocationService (in which case you can either be optimistic about ownership or chase the partition).
3. We assume the putAll that Coherence implements is the most optimized (parallelism). I am not sure how a custom implementation can be as optimal (I hope we don't end up calling entries one by one).

You cannot implement it as optimally as Coherence itself does, as Coherence interleaves operations (Messages) to independent partitions/nodes from a single thread (it does not have to wait for the return message), without waiting for the responses from individual nodes/partitions.
You can either parallelize the operations across multiple threads, or do the iteration on a single thread at the cost of higher latency.
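To make the above concrete, here is a minimal sketch of what such a per-partition putAll wrapper could look like on the caller (this is not an official API; class and method names are illustrative, and it assumes a partitioned/distributed cache service, e.g. Coherence 3.7.x):

import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;
import com.tangosol.net.partition.KeyPartitioningStrategy;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionedPutAll {

    // Split the entries by owning partition, then call putAll once per
    // partition so that at most one exception can be raised per partition
    // and every failure can be collected and returned to the caller.
    public static List<Exception> putAllByPartition(NamedCache cache, Map entries) {
        PartitionedService service = (PartitionedService) cache.getCacheService();
        KeyPartitioningStrategy strategy = service.getKeyPartitioningStrategy();

        // group the entries by the partition that owns each key
        Map<Integer, Map> entriesByPartition = new HashMap<Integer, Map>();
        for (Object o : entries.entrySet()) {
            Map.Entry entry = (Map.Entry) o;
            Integer partition = Integer.valueOf(strategy.getKeyPartition(entry.getKey()));
            Map batch = entriesByPartition.get(partition);
            if (batch == null) {
                batch = new HashMap();
                entriesByPartition.put(partition, batch);
            }
            batch.put(entry.getKey(), entry.getValue());
        }

        // one putAll per partition; collect failures instead of losing all but one
        List<Exception> failures = new ArrayList<Exception>();
        for (Map batch : entriesByPartition.values()) {
            try {
                cache.putAll(batch);
            } catch (Exception e) {
                failures.add(e);
            }
        }
        return failures;
    }
}

Each per-partition call above could just as well be submitted to a thread pool, or pushed to the storage side with an Invocable, to regain some of the parallelism of the native putAll at the price of the extra plumbing.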
Best regards,
Robert

Similar Messages

  • Write through and CacheStore

    Hi,
         I'm running a near cache implementation, with the front being a local cache and the back being a distributed cache. The distributed cache has a local cache and a read-write-backing-map-scheme for persisting each cache to disk every t minutes (for backup purposes - we still use a cache in memory).
         I have a few questions about the Write through capabilities and CacheStore so as to better understand what's going on here:
         1. We only need to store the data to disk for backup in case of complete cluster failure (for example, all n machines in our cluster go down). Right now my implementation of the CacheStore has one line which reads "return null" for the following methods:
         load(..)
         loadAll(..)
         Is there a more efficient/effective way to write to disk and ignore reads (when an item does not exist in the distributed map) than implementing CacheStore and returning null for all the methods noted above?
         My reading and writing to disk occurs using the ExternalizableHelper class, thx for this nice work.
         2. How are CacheStores instantiated initially? Since we want each cache (say we have two different caches here for simplicity) backed up to file every t minutes, do we have to have a separate CacheStore object for each cache? What is the best practice for attaching a CacheStore to a particular cache?
         For example, I start two Tangosol instances, one on machineA and one on machineB, both storing data as per my configuration. The 2 caches being used are "cacheA" and "cacheB". So when I start the two Tangosol instances, I have to instantiate CacheStore twice so that I can separately write "cacheA" and "cacheB" to their own separate files.
         3. When backup to disk occurs, is there any removing of items from the distributed cache?
         4. I'm not completely sure on the write delay here. What if an item in the cache is just added once, and no updates occur on it (ie. just one put, and 0+ gets). After a specified amount of time, will this be written to disk, or does an update on this object in the cache have to occur before this item can be added to the write queue and eventually written to disk? Once an item is added for the first time, does this trigger the update time for this object to be the first write time?
         Thanks,
         - Noah

    Hi Noah,
         1. No, load() and loadAll() returning null is the most effective way of implementing this.
         2. You can pass the cache name as a constructor parameter - see Parameter Macros in the Coherence User Guide.
         3. No, nothing is removed from the cache
         4. Writes are only triggered by put()'s.
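         For illustration, a minimal "write-only" CacheStore along those lines could look like the sketch below (the class name, target directory and serialization details are placeholders, not code from this thread; the cache name is taken as a constructor argument, which the cache configuration can supply through the {cache-name} parameter macro so that each cache gets its own store instance and its own files):

         import com.tangosol.net.cache.AbstractCacheStore;

         import java.io.File;

         public class FileBackupStore extends AbstractCacheStore {

             private final File targetDir;

             // one store instance (and one target directory) per cache
             public FileBackupStore(String cacheName) {
                 targetDir = new File("/var/backup/" + cacheName);   // illustrative path
                 targetDir.mkdirs();
             }

             public Object load(Object key) {
                 return null;   // never read back from disk; the cache itself is authoritative
             }

             public void store(Object key, Object value) {
                 // persist the entry, e.g. with ExternalizableHelper as discussed above
             }

             public void erase(Object key) {
                 // nothing to clean up for a pure backup store
             }
         }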
         For more information please take a look at this forum post: What is Read-Through/Write-Through/Write-Behind Caching? (http://www.tangosol.net/forums/thread.jspa?threadID=445&tstart=0)
         Regards,
         Dimitri

  • Write-through cache error kills coherence

    Hi, we have a write through cache, and when there's an error writing to the cache store, the cache dies. The first error we see is:
    3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.918/34.768 Oracle Coherence GE 3.5.3/465 <Info> (thread=DistributedWriteThroughWorker:3, member=1): (Wrapped: Failed to store key="1342708115777") java.lang.RuntimeException: Failed to store hibernate entity : This is normal, since there's a legitimate reason why this couldn't get stored, but after the stack trace, we see the below. So the 1st thing is the "Terminating DistributedCache", that just kills the node and makes it unusable, then there's the "unknown user type" message, as if it's trying to send something over the wire, but it can't; although this "WrapperException" is not one of our classes. Clearly we don't want the cache dying, we want to continue to use it, but any subsequent request to the cache fails. Any ideas?
         [java] ERROR | 07-19-2012 07:28:36.116 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalArgumentException
         [java] ERROR | 07-19-2012 07:28:36.117 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1):
         [java] java.lang.IllegalArgumentException: unknown user type: com.tangosol.util.WrapperException
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:400)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:389)
         [java]      at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1432)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.writeObject(Service.CDB:4)
         [java]      at com.tangosol.coherence.component.net.Message.writeObject(Message.CDB:1)
         [java]      at com.tangosol.coherence.component.net.message.DistributedCacheResponse.write(DistributedCacheResponse.CDB:2)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.packetizeMessage(PacketPublisher.CDB:137)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher$InQueue.add(PacketPublisher.CDB:8)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.dispatchMessage(Grid.CDB:50)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.post(Grid.CDB:53)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:146)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
         [java]      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         [java]      at java.lang.Thread.run(Thread.java:662)

    Ok, figured it out. I failed to include the standard "coherence-pof-config.xml" when loading in our pof config file, and that's what caused the troubles. Including it solved the problem of the dying cache.

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
         For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

     > The transaction support in Coherence is for Local
          > Transactions. Basically, what this means is that the
         > first phase of the commit ("prepare") acquires locks
         > and ensures that there are no conflicts. The second
         > phase ("commit") does nothing but push data out to
         > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
         This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
         For the write-thru case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, then as soon as one atomic write-thru put succeeds, the database is already updated for that specific put. There is no way to roll back that database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, I can roll back entirely and avoid partial commits? Since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?

  • Write-Behind Caching and Limited Internal Cache Size

    Let's say I have a write-behind cache and configure its internal cache to be of a fixed limited size, e.g. 10000 units. What would happen if more than 10000 units are added to the write-behind cache within the write-delay period? Would my CacheStore's storeAll() get all of the added values or would some of the values be missed because of the internal cache size limitation?

     Hi Denis,
         > If an entry is removed while it is still in the
         > write-behind queue, it will be removed from the queue
         > and CacheStore.store(oKey, oValue) will be invoked
         > immediately.
         >
         > Regards,
         > Dimitri
         Dimitri,
         Just to confirm that I understand it right: if there is a queued update to a key which is then remove()-ed from the cache, the following happens:
         First, CacheStore.store(key, queuedUpdateValue) is invoked.
         Afterwards, CacheStore.erase(key) is invoked.
         Both synchronously with the remove() call.
         I expected that only erase would be invoked.
         BR,
         Robert

  • Map listeners and write-through strategy.

    Hi.
    The write-through strategy seems to perform synchronous operations, and if the write fails, no data should appear in the cache. Logically this means that no events will be produced if the persisting fails (which is exactly what we need). But I could not find any mention of this in the documentation. Can anyone verify this?
    Thanks, Anton.

    If you are talking about throwing an exception in your CacheStore code,
    it will happen before anything occurs in the internal cache managed by
    Coherence, and no events will be generated (events that would otherwise
    have been generated had the operation succeeded).
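    As a small illustration of that behaviour (this is not from the original thread; the cache and class names are made up), a client-side check could look like this:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    public class WriteThroughEventCheck {

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("write-through-cache"); // illustrative name

            cache.addMapListener(new AbstractMapListener() {
                public void entryInserted(MapEvent evt) {
                    System.out.println("inserted: " + evt.getKey());
                }
            });

            try {
                // assume the configured CacheStore throws from store() for this entry
                cache.put("bad-key", "some value");
            } catch (RuntimeException e) {
                // the store failure surfaces here; the entry never reaches the
                // backing map, so the listener above is never called
                System.out.println("store failed, no event generated: " + e);
            }
        }
    }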

  • I HAVE ADOBE PREMIERE ELEMENTS 10.  I used to have many options on the "Disc Menu" to set up my DVD.. and now it is very limited..almost no templates. I try to sign in to my account through Premiere and it will never sign on.  Anyone have any thoughts??

    I HAVE ADOBE PREMIERE ELEMENTS 10.
    Problem #1   I used to have many options on the "Disc Menu" to set up my DVD.. and now it is very limited..almost no templates.
    Problem #2   I try to sign in to my account through Premiere and it will never sign on...just keeps acting like its trying to but never connects.  Anyone have any thoughts??

    JEN-E-FER-FER
    With versions earlier than 11, there is no more Sign In. If you get any message to that effect, ignore and cancel out of them. You should be able to move forward to your import, editing, and exports using the program nonetheless.
    As for the disc menus, those marked photoshop.com plus members only are no longer available to you in versions of Premiere Elements earlier than 11. You may see the thumbnails for those members only items, but they are no longer available to you in Premiere Elements earlier than 10.
    Interestingly, many of them are found in Premiere Elements 11, 12, and 13 as part of the program with no need to purchase them or have any special membership in anything. Adobe has promoted Adobe Revel as a replacement for photoshop.com, but that has no impact on this disc menus issue.
    This all has to do with Adobe discontinuing photoshop.com and photoshop.com plus et al.
    If you need to download regular disc menus for Premiere Elements 10, please review the following
    ATR Premiere Elements Troubleshooting: PE 7, 8, 9,10: Content Download
    Bottom lines:
    1. No more Sign In for Premiere Elements 10
    2. No access to photoshop.com plus (members only) disc menus
    Any questions or need clarification on anything written, please do not hesitate to ask.
    Thank you.
    ATR

  • I used to be able to download files from the Harddrive of my Sony Handycam, but now it won't let me import them while going through "log and transfer."  I have tried to update my settings, but this doesn't seem to work.  Any help would be much appreciated

    I used to be able to download files from the Harddrive of my Sony Handycam, but now it won't let me import them while going through "log and transfer."  I have tried to update my settings, but this doesn't seem to work.  Any help would be much appreciated

    Hard Drive:  The Hard Drive is on my camera (Sony Handycam)
    Just needed some info on what is going on with your particular system.
    Knowing the exact camera model will also assist us.
    If this camera is standard AVCHD you should be able to connect with USB to Mac with FCE open and use Log and Transfer without all the fiddling around converting etc.
    What FCE does during ingest is convert files to AIC. (Apple Intermediate Codec)
    FCE cannot read the individual files in the BDMV folder, that's why all the converting is required when you use that method.
    BTW: regarding formatting drives/cards/memory on cameras, it is wise to use the actual device to format rather than a computer. This is a prime cause of read/write goof-ups with solid state media. This applies to still and video cameras.
    Keeping the COMPLETE card/memory structure intact is vital for successful transfers.
    Try here for some troubleshooting tips:
    https://discussions.apple.com/message/12682263#12682263
    Al

  • Am not able to use facebook on my Iphone 4, softwareversion:iOS7.1, also tried to access through safari and chrome that i have installed it gives an error message saying: "safari could not open the page because server stopped responding",

    Am not able to use Facebook on my iPhone 4 (software version iOS 7.1). I also tried to access it through Safari and Chrome, which I have installed; it gives an error message saying "Safari could not open the page because the server stopped responding". I tried a network reset, reset the whole device, rebooting, and changing airplane mode and rebooting; nothing fixed the issue. But I can access other sites and Google. I am using Vodafone as my carrier with a 2G network; when at home I am able to access facebook.com through Wi-Fi in Safari. Requesting assistance, thank you.

    If you can access Facebook while on Wi-Fi at home, but you are unable to access it while away running on your carrier's 2G network, I would phone your carrier. You have already completed the Cellular Data troubleshooting for the iPhone, so any limitation keeping you from connecting to Facebook over cellular will have to be answered by your carrier.

  • Should OS/FileSystem caching be write-through?

    I have a question. I use Ubuntu. Should I mount my filesystem (which holds BDB's content) with "-o sync" option? That is, should my file system cache be write-through?
    I have this question because, if I turn on the logging feature in Berkeley DB but let the file system cache be write-back, I don't exactly know if the log is properly flushed to the disk or not.

    Thanks George. I agree that mature applications would be better off mounting their filesystem with "-o sync" option.
    But here is the thing: I ran an example test case where I inserted 10 million key-value pairs with logging enabled and saw that the average response time per insertion was 10 milliseconds, and I did the same experiment with logging disabled and saw that it, too, took 10 milliseconds per insertion on average.
    For the experiment with logging enabled, I create the environment with DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I guess this way of doing insertions is called autocommit. I am hoping I am doing this experiment right.
    Thanks for the pointers about set_flags() and DB_TXN_NOSYNC, I am going to look them up.

  • Can any tell me the limitations and bugs in Taleo learn

    Hi All,
    I know Taleo Recruiting but am new to Taleo Learn, and I wanted to know about the limitations and bugs currently present in Taleo Learn. As experts who have implemented Taleo Learn, it would be greatly appreciated if you could share some information about those known bugs and limitations. That will really help me assess the product and make it easier to explain to the client when covering RFP functionalities technically.
    Always everyone suggestions, Thoughts are welcome
    Have a great day!!!!
    Gaurav

    "Learn Java in a week"?
    What do you mean by "learn"? If you mean be able to write Hello World and understand the bare bones basics of what goes into a simple Java program, maybe.
    If you mean understand the language and API thoroughly enough to produce real, quality software, that's pretty unreasonable.

  • Need code to search through folder and subfolders

    Hi.....
    Am trying to write a program to search for a file through a folder and all of its subfolders.
    I tried my best, but I was only able to search through a folder, not through its subfolders.
    So, anyone could help me out with the code to do this operation?
    Regards
    codingquest

    codingquest wrote:
    Thank you very much for your reply.
    I tried to use the recursive call but i could not do it without any errors.
    So it would be helpful to me if you post the exact coding or the key part of the recursive call here.

    You'd get a better response when you post your code and the error message(s). I'm sure someone will be able to point you in the right direction.
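    That said, a minimal sketch of the usual recursive approach (plain java.io; the class and variable names are illustrative) looks like this:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class FileFinder {

        // collect every file under dir (and its subfolders) whose name matches
        public static List<File> find(File dir, String name, List<File> matches) {
            File[] files = dir.listFiles();
            if (files == null) {
                return matches;                // not a directory, or not readable
            }
            for (File file : files) {
                if (file.isDirectory()) {
                    find(file, name, matches); // recurse into the subfolder
                } else if (file.getName().equals(name)) {
                    matches.add(file);
                }
            }
            return matches;
        }

        public static void main(String[] args) {
            List<File> matches = find(new File(args[0]), args[1], new ArrayList<File>());
            for (File file : matches) {
                System.out.println(file.getAbsolutePath());
            }
        }
    }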

  • Execute RFC Write through VB6.

    Hello,
    I have installed the SAP Netweaver trial version (Installation Number "Initial") .
    I can connect to SAP through vb6 and execute an RFC to download and view data (RFC_READ_TABLE).
    I am having a hard time trying to execute a "write" RFC to write data into SAP. Obviously I may be doing something wrong, but, my question is, can you confirm I should be able to "write" to the trial version? If so, can you point me to some vb6 examples for this? Would I be better trying to switch to the developer version?
    Thanks,
    Steve

    Thanks for replying.
    I used the default log in that the installation instructed me to use (SAP*), but then found in the help a BCUSER log in which I think should supply me with appropriate permissions.
    I suspect it is more likely that my coding is at fault. Could you point me to vb6 RFC write examples? Can I execute a command to view a list of suitable write RFC's to experiment with in SAP?
    Thanks,
    Steve

  • Write-through Cache behavior during Transactional Operation

     If a put is called on a write-through cache during a transaction (with Optimistic Read-Committed settings) that involves multiple caches, some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, com.tangosol.net.cache.ReadWriteBackingMap) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.

  • Pre-load a write-through cache

    This probably is a common problem.
    I want to use a write-through cache to keep the cache and the DB in sync. But at application start-up, I'd like to pre-load the cache. Is there any way I can disable the write-through behavior during pre-loading, and enable it after pre-loading is complete?

    Wouldn't you just be better to load the data into the database first, and then just query it to get it into the cache? That way, the cache and the db are 'in sync', and no write operations have occurred; also, you don't need to 'flip' any cache settings. Subsequent updates to cache items would then be written to the database as normal.
    Surely if the data wasn't in the db first, you'd end up with a large 'write' operation to place it there once you've loaded the cache and 'flipped' it over, which could be quite a lengthy process. I can't see the advantage of this over putting the data in the db first?
    I'd be interested to know more about your specific use case, as I'm just about to embark on a similar 'loader' program. In my case, I was planning on loading the db first. I'd be interested in any alternative approaches and the reasoning behind them (or, likewise, if you can't actually do it the way I was planning! :)).
    Steve
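    To illustrate the load-the-database-first approach suggested above (the names are made up, and this assumes the cache is configured for read-through via its CacheStore), pre-loading could simply warm the cache with getAll() calls:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    import java.util.Collection;
    import java.util.List;

    public class CachePreloader {

        // Warm the cache by reading keys that are already present in the database.
        // getAll() goes through CacheStore.load()/loadAll(), never store(), so the
        // write-through path is not exercised and nothing is written back.
        public static void preload(String cacheName, List<Collection> keyBatches) {
            NamedCache cache = CacheFactory.getCache(cacheName);
            for (Collection keys : keyBatches) {
                cache.getAll(keys);   // read-through populates the cache
            }
        }
    }

    The key batches would come from a cheap query against the database (for example a SELECT of the primary keys), so no cache setting has to be 'flipped' before or after the load.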

Maybe you are looking for

  • Info record Tables

    Hi all, I am in need of creating a report for info records. Can someone help me with the tables for the same? Also let me know the table where I can get the pricing conditions; it's really hard to find the conditions. Thanks, S D.

  • Lost Sequence in FCP7 - both in project and in Autosave!!!

    Hey, I've encountered a strange problem with fcp7. I was working on something last night, saved it and when I opened the project this morning, that particular sequence is empty but for one clip. Everything else in the project is intact, including all

  • Regarding Maintenance view

    Hi guys. we have created Maintenance view for addon table in 46C. so was Created Function group ZMXXXXX having below structure. ZMXXXXX -Func module     Table frame ZMXXX     TableProc ZMXXX -Dictionary objects -Dynpro -Include   LSVIXXXX   LSVIXF01

  • IDVD or Toast 7

    I will be burning my first project to DVD. It is a 1.5 hr project done in FCE 2.03 I have a third party internal superdrive and will be using a dual layer DVD. I also have both IDVD and Toast 7 available. Does anybody know which would give me the bes

  • I bought an iPhone in Italy. It says "Contient use se Vodafone". Will it work in Russia? Maybe I need to do something?

    I bought an iPhone in Italy. It says "Contient use se Vodafone". Will it work in Russia? Maybe I need to do something?