Pre-load a write-through cache

This is probably a common problem.
I want to use a write-through cache to keep the cache and the db in sync. But at application start-up, I'd like to pre-load the cache. Is there any way I can disable the write-through behavior during pre-loading, and enable it after pre-loading is complete?

Wouldn't you be better off loading the data into the database first, and then just querying it to get it into the cache? That way, the cache and the db are 'in sync', no write operations have occurred, and you don't need to 'flip' any cache settings. Subsequent updates to cache items would then be written to the database as normal.
Surely if the data wasn't in the db first, you'd end up with a large 'write' operation to place it there once you've loaded the cache and 'flipped' it over, which could be quite a lengthy process. I can't see the advantage of that over putting the data in the db first.
I'd be interested to know more about your specific use-case, as I'm just about to embark on a similar 'loader' program. In my case, I was planning on loading the db first. I'd be interested in any alternative approaches and the reasoning behind them (or, likewise, if you can't actually do it the way I was planning! :)).
Steve
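
A minimal sketch of that warm-up approach, assuming the cache is backed by a read-through CacheLoader/CacheStore (the cache name and the source of the key collection below are illustrative, not from the original post). Once the rows are in the database, reading the keys through the cache pulls them in via read-through without triggering any store() calls, so there is nothing to 'flip':

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Collection;

    public class CacheWarmer {
        // Warm the cache by reading keys through it: read-through loads each entry
        // from the database, and no write-through store() calls are made.
        public static void warm(Collection keys) {
            NamedCache cache = CacheFactory.getCache("PARTY_CACHE"); // illustrative cache name
            cache.getAll(keys); // bulk read-through load
        }
    }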

Similar Messages

  • Write-through cache error kills coherence

    Hi, we have a write through cache, and when there's an error writing to the cache store, the cache dies. The first error we see is:
    3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.918/34.768 Oracle Coherence GE 3.5.3/465 <Info> (thread=DistributedWriteThroughWorker:3, member=1): (Wrapped: Failed to store key="1342708115777") java.lang.RuntimeException: Failed to store hibernate entity :
    This is normal, since there's a legitimate reason why this couldn't get stored, but after the stack trace we see the below. So the first thing is the "Terminating DistributedCache", which just kills the node and makes it unusable; then there's the "unknown user type" message, as if it's trying to send something over the wire but can't, although this "WrapperException" is not one of our classes. Clearly we don't want the cache dying, we want to continue to use it, but any subsequent request to the cache fails. Any ideas?
         [java] ERROR | 07-19-2012 07:28:36.116 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalArgumentException
         [java] ERROR | 07-19-2012 07:28:36.117 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1):
         [java] java.lang.IllegalArgumentException: unknown user type: com.tangosol.util.WrapperException
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:400)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:389)
         [java]      at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1432)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.writeObject(Service.CDB:4)
         [java]      at com.tangosol.coherence.component.net.Message.writeObject(Message.CDB:1)
         [java]      at com.tangosol.coherence.component.net.message.DistributedCacheResponse.write(DistributedCacheResponse.CDB:2)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.packetizeMessage(PacketPublisher.CDB:137)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher$InQueue.add(PacketPublisher.CDB:8)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.dispatchMessage(Grid.CDB:50)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.post(Grid.CDB:53)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:146)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
         [java]      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         [java]      at java.lang.Thread.run(Thread.java:662)

    Ok, figured it out. I failed to include the standard "coherence-pof-config.xml" when loading our POF config file, and that's what caused the trouble. Including it solved the problem of the dying cache.
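
    For reference, a POF configuration that pulls in the standard file typically looks something like this (the user type id and class name below are placeholders, not taken from the post above):

         <pof-config>
           <user-type-list>
             <!-- include Coherence's own POF types first -->
             <include>coherence-pof-config.xml</include>
             <user-type>
               <type-id>1001</type-id>
               <class-name>com.example.MyEntity</class-name>
             </user-type>
           </user-type-list>
         </pof-config>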

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction (with Optimistic Read-Committed settings) that involves multiple caches, some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, com.tangosol.net.cache.ReadWriteBackingMap) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.
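
         As a rough illustration of where that element sits (the scheme contents and the store class name are placeholders):

              <read-write-backing-map-scheme>
                <internal-cache-scheme>
                  <local-scheme/>
                </internal-cache-scheme>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>com.example.MyCacheStore</class-name>
                  </class-scheme>
                </cachestore-scheme>
                <!-- propagate CacheStore exceptions back to the caller instead of swallowing them -->
                <rollback-cachestore-failures>true</rollback-cachestore-failures>
              </read-write-backing-map-scheme>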

  • Re: Write-through caching in Forte

    Hello Mark,
    Just one point more. Maybe you can add an Event Notifier to the lock
    manager to send the new instance of Obj1 to the clients (here client2)
    who use it in their cache.
    Hope this helps.
    Daniel Nguyen.
    Mark S. Potts wrote:
    >
    Andrew
    This is a mixture of a cache strategy and object locking. If I
    understand what you have said I have some suggestions:
    The cache should hold copies of the object and the object should be
    returned to the client. The object that is returned to the client should
    be version stamped ( optimistic locking ).
    A) Client1 request Obj1
    B) Obj1 is instantiated from the persistent store
    C) Obj1 is version stamped via a lock manager service.
    D) Obj1 is placed in the cache and copy returned to Client1
    Client1 can now work on Obj1
    When Client2 selects Obj2 - the cache size being 1 - the Obj1 is
    replaced with Obj2.
    Obj2 is selected stamped and returned to the client as per the steps
    above.
    When Client 2 now selects Obj1, no longer in the cache, the same steps
    need to be completed as above.
    The cache now contains the same version of Obj1 as given to Client1.
    Now the important part, because this is an optimistic locking strategy -
    two clients can have different versions of the same object, it is only
    when the object is saved - returned to the persistent store, that the
    version stamp needs to be checked. Let's say Client2 saves before Client1:
    A) Client2 initiates a save on Obj1
    B) Obj1 checks the lock manager to see if anyone has saved a new version
    of Obj1 since it was selected.
    C) If there have been no saves of Obj1 since Obj1 was selected i.e. the
    version of Obj1 selected does not conflict with the last version saved -
    then save Obj1
    D) Update the version stamp for Obj1 via the Lock manager
    E) Update Obj1 in the cache.
    When Client1 now tries to save the version of Obj1 a conflict will
    result and an exception needs to be raised - and if necessary the new
    version of Obj1, from the cache, returned to Client1.
    The version control can be done more easily if you are prepared to do
    the locking in the database - I do not recommend this for a number of
    well documented reasons.
    However if you choose this alternative instead of using a separate Lock
    manager you could simply time stamp the row in the database either on
    that table or a separate lock table and when saving the Obj1 check the
    time stamp on the object against the time stamp on the row. If they are
    the same save the object and update the time stamp to the current time (
    granularity of time stamp determined by number of concurrent users and
    usage patterns ). The time stamp on the row acts as the version stamp
    for the object and is selected into the object as a private attribute at
    time of selection.
    Hope this is of some help.
    Mark Potts
    SCAFFOLDS Product Manager
    Sage IT Partners
    A) Client1 requests Obj1.
    B) Obj1 is instantiated from a persistent store and placed in the cache
    and a reference to Obj1 is
    returned to Client1.
    C) As part of the instantiation of Obj1 the object is version stamped
    through a lock manager service.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2 the
    reference to the Obj1 that Client1 has.
    Faibishenko, Andrew wrote:
    Has anyone out there been successful at implementing a cache which
    maintains updateable objects.
    Due to financial considerations, we cannot buy an off-the-shelf
    framework.
    What we are trying to build is some kind of object persistence
    mechanism
    and the cache would be a layer in that service.
    Our big issue is maintaining consistency within the cache, for
    multiple
    clients performing updates to an object.
    Example:
    A) Client1 requests Obj1.
    B) Obj1 is de-serialized, placed in the cache and a reference to Obj1
    is
    returned to Client1.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and
    a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we
    either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2
    the
    reference to the Obj1 that Client1 has.
    Is this something we should ask Forte Consulting about?
    -Andy
    ============================================
    Andy Faibishenko (312)251-3267
    Senior Consultant (800)462-6301
    Metamor Technologies, Inc. [email protected]
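
    A compact sketch of the save-time version check Mark describes (this is illustrative Java, not Forte code; the in-memory maps stand in for the lock manager, persistent store, and cache):

         import java.util.ConcurrentModificationException;
         import java.util.concurrent.ConcurrentHashMap;

         public class OptimisticSaver {
             private final ConcurrentHashMap<Long, Long> versions = new ConcurrentHashMap<Long, Long>();   // "lock manager"
             private final ConcurrentHashMap<Long, Object> store = new ConcurrentHashMap<Long, Object>();  // "persistent store"
             private final ConcurrentHashMap<Long, Object> cache = new ConcurrentHashMap<Long, Object>();  // shared cache

             public void save(long id, Object obj, long versionReadAtSelect) {
                 long latest = versions.getOrDefault(id, 0L);
                 if (latest != versionReadAtSelect) {
                     // someone saved a newer version since this client selected the object
                     throw new ConcurrentModificationException("Obj" + id + " was saved by another client");
                 }
                 store.put(id, obj);           // return the object to the persistent store
                 versions.put(id, latest + 1); // update the version stamp
                 cache.put(id, obj);           // refresh the cached copy
             }
         }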

  • Write-through caching in Forte

    Has anyone out there been successful at implementing a cache which
    maintains updateable objects.
    Due to financial considerations, we cannot buy an off-the-shelf
    framework.
    What we are trying to build is some kind of object persistence mechanism
    and the cache would be a layer in that service.
    Our big issue is maintaining consistency within the cache, for multiple
    clients performing updates to an object.
    Example:
    A) Client1 requests Obj1.
    B) Obj1 is de-serialized, placed in the cache and a reference to Obj1 is
    returned to Client1.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2 the
    reference to the Obj1 that Client1 has.
    Is this something we should ask Forte Consulting about?
    -Andy
    ============================================
    Andy Faibishenko (312)251-3267
    Senior Consultant (800)462-6301
    Metamor Technologies, Inc. [email protected]

    Andrew
    This is a mixture of a cache strategy and object locking. If I
    understand what you have said I have some suggestions:
    The cache should hold copies of the object and the object should be
    returned to the client. The object that is returned to the client should
    be version stamped ( optimistic locking ).
    A) Client1 request Obj1
    B) Obj1 is instantiated from the persistent store
    C) Obj1 is version stamped via a lock manager service.
    D) Obj1 is placed in the cache and copy returned to Client1
    Client1 can now work on Obj1
    When Client2 selects Obj2 - the cache size being 1 - the Obj1 is
    replaced with Obj2.
    Obj2 is selected stamped and returned to the client as per the steps
    above.
    When Client 2 now selects Obj1, no longer in the cache, the same steps
    need to be completed as above.
    The cache now contains the same version of Obj1 as given to Client1.
    Now the important part, because this is an optimistic locking strategy -
    two clients can have different versions of the same object, it is only
    when the object is saved - returned to the persistent store, that the
    version stamp needs to be checked. Let's say Client2 saves before Client1:
    A) Client2 initiates a save on Obj1
    B) Obj1 checks the lock manager to see if anyone has saved a new version
    of Obj1 since it was selected.
    C) If there have been no saves of Obj1 since Obj1 was selected i.e. the
    version of Obj1 selected does not conflict with the last version saved -
    then save Obj1
    D) Update the version stamp for Obj1 via the Lock manager
    E) Update Obj1 in the cache.
    When Client1 now tries to save the version of Obj1 a conflict will
    result and an exception needs to be raised - and if necessary the new
    version of Obj1, from the cache, returned to Client1.
    The version control can be done more easily if you are prepared to do
    the locking in the database - I do not recommend this for a number of
    well documented reasons.
    However if you choose this alternative instead of using a separate Lock
    manager you could simply time stamp the row in the database either on
    that table or a separate lock table and when saving the Obj1 check the
    time stamp on the object against the time stamp on the row. If they are
    the same save the object and update the time stamp to the current time (
    granularity of time stamp determined by number of concurrent users and
    usage patterns ). The time stamp on the row acts as the version stamp
    for the object and is selected into the object as a private attribute at
    time of selection.
    Hope this is of some help.
    Mark Potts
    SCAFFOLDS Product Manager
    Sage IT Partners
    A) Client1 requests Obj1.
    B) Obj1 is instantiated from a persistent store and placed in the cache
    and a reference to Obj1 is
    returned to Client1.
    C) As part of the instantiation of Obj1 the object is version stamped
    through a lock manager service.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2 the
    reference to the Obj1 that Client1 has.
    Faibishenko, Andrew wrote:
    Has anyone out there been successful at implementing a cache which
    maintains updateable objects.
    Due to financial considerations, we cannot buy an off-the-shelf
    framework.
    What we are trying to build is some kind of object persistence
    mechanism
    and the cache would be a layer in that service.
    Our big issue is maintaining consistency within the cache, for
    multiple
    clients performing updates to an object.
    Example:
    A) Client1 requests Obj1.
    B) Obj1 is de-serialized, placed in the cache and a reference to Obj1
    is
    returned to Client1.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and
    a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we
    either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2
    the
    reference to the Obj1 that Client1 has.
    Is this something we should ask Forte Consulting about?
    -Andy
    ============================================
    Andy Faibishenko (312)251-3267
    Senior Consultant (800)462-6301
    Metamor Technologies, Inc. [email protected]

  • What prevents a package from pre-loading into the AppV cache (using SCCM)

    This could be an App-V issue or an SCCM issue, not entirely sure...
    When SCCM deploys AppV apps to clients we always set them to download and run locally. With this set, SCCM will always cache the entire package in the AppV cache immediately so it can be used offline.
    I've got a package where this just isn't working.  It is set to download and run locally but after it is deployed the PercentLoaded value shows 0.  If a user then runs it you are looking at a delay whilst it then unpacks from the SCCM cache to
    the AppV cache.  This package includes a script which executes when published so could this be the culprit?

    Mmm the mount seems to be failing...
    AppVManageClient5X::MountAppVPackage() failed for package [530ecc6a-26db-487f-9ab8-5bd3c427a9d1]. (0x87d0128f)
    Looking in the App-V Event viewer we have events like:
    Microsoft AppV Streaming Manager LoadAll failed with status The system cannot find the file specified., Package Version {b83b67b9-56a8-4862-aaa0-16d27c2036ab}
    and
    Could not read the streaming information from file \ProgramData\App-V\0CD3D585-5851-4242-B9E5-A8F49ABC04B5\b83b67b9-56a8-4862-aaa0-16d27c2036ab\Root\commons-logging-1.0.4.jar. Failure status 0xc0000275. - VERBOSE MESSAGE NOT AN ERROR
    0xc0000275 means nothing to me, so a quick Google shows "The file or directory does not have a reparse
    point."
    Perhaps an issue with the package then?  It is still residing inside the SCCM cache and if a user runs the software it starts mounting more of it...

  • Switching from write through to write behind automatically

    Hi,
    We are considering a Coherence solution to protect a customer facing application from outages due to database failures. This is for a financial company, the monetary value of each transaction is large, and we want to provide a 100% guarantee against data loss while not incurring any outages. We want to provide write-through persistence to the database through Coherence which can switch to write-behind automatically at runtime if the database persistence fails. Is this doable automatically, and would it solve the problem I am trying to solve without losing any in-flight transactions? Are there any real customer cases that were successful in achieving this using Coherence?
    Thanks
    Sairam
    Edited by: SKR on Feb 16, 2012 3:14 PM
    Edited by: SKR on Feb 16, 2012 3:15 PM

    SKR wrote:
    Jonathan.Knight wrote:
    Hi Sairam
    I know you can change the write-delay in JMX for a cache using write-behind, but I'm pretty certain you cannot make a write-through cache suddenly become a write-behind cache.
    I'm not sure why you think changing from write-through to write-behind will allow you to guarantee 100% no data loss - do you mean no loss of updates to the DB or no loss of data in the cache cluster? There are certainly scenarios that can occur where you can lose data from either the cluster or the DB that write-through or write-behind will not save you from. Presumably you want to use write-behind to allow for the DB to go down, although you will still need to configure Coherence to properly retry failed write-behind calls (see CacheStore behaviour on failure). What happens to your data if you are using write-behind and you lose a partition from your cluster (i.e. you lose a physical machine or two or more JVMs in a short space of time)? You have data loss - you cannot guarantee against this, you can only mitigate it and have a recovery policy/procedure.
    JK
    JK,
    Thanks for your reply. I should have explained the scenario better. What we are trying to do is to have our transactions commit to the database synchronously using write-through, so that during normal operation the data will be committed, persisted and durable in the database. But our RW database becomes a single point of failure, and if some problem occurs in the database during peak load time, we run the risk of an outage till we fix the database problem or fail over to the standby (we don't have a RAC architecture or automatic failover, and the manual switchover takes about 10 - 15 mins minimum). We want to avoid this by providing a cache-only operation mode during such a failure, where the customers can continue to transact and the writes will get queued in the cache. I do understand that losing both the database and the cache, or losing the primary and the backup in the cache, would result in data loss. But I am assuming such a dual failure is rare.
    We do not want to run write-behind all the time, but only during the database failure window. From what you mentioned, it seems that runtime switching from write-through to write-behind is not available as an option.
    Sairam
    Hi Sairam,
    I would suggest that you configure write-behind to have a fairly short write-delay, and you only return a confirmation to the client
    - either after the write-behind succeeded (you can use a backing map listener to listen for the removal of the decoration which marks the entry as dirty)
    - or if the database went down (noticeable from the failure), then it is up to you whether you send a confirmation which also mentions that it is not persisted to disk yet, or not at all
    Best regards,
    Robert
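
    For what it's worth, the write-delay and retry knobs mentioned above live in the read-write backing map configuration; a rough sketch (element values and the store class name are illustrative, not a recommendation):

         <read-write-backing-map-scheme>
           <internal-cache-scheme>
             <local-scheme/>
           </internal-cache-scheme>
           <cachestore-scheme>
             <class-scheme>
               <class-name>com.example.DbCacheStore</class-name>
             </class-scheme>
           </cachestore-scheme>
           <!-- a fairly short write-delay, as suggested above -->
           <write-delay>1s</write-delay>
           <!-- requeue failed writes rather than dropping them -->
           <write-requeue-threshold>1000</write-requeue-threshold>
         </read-write-backing-map-scheme>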

  • Migrating from 3.1 to 3.7 - write through for a custom cache store issues

    We're migrating from 3.1 to 3.7. So far the migration and testing has been fairly uneventful, but there is one issue that came up yesterday that seems like it is going to be tricky to debug.
    We have a set of storage-enabled nodes that use a custom CacheStore to read from and write behind to a mongo database. On another node connected to that caching service, read throughs work just fine. (I can set breakpoints on the CacheStore load method and see the load calls coming through just fine) - but what's not working is when the other node does a Cache.put - the Store method on the CacheStore is never called and so far I don't see anything in the logs indicating there is a problem on either side (I'm going to make sure that the coherence logging is up to the highest level on both the nodes today when I'm doing more testing)
    I can see the cache put start to dive into the coherence jar, but I don't have source jars for coherence so it's fairly opaque what might be going wrong after the Cache.put(object, object) call. I can see that it dives into various coherence methods, but
    Any ideas on where to start debugging this?
    This setup worked fine on 3.1, and as best we can tell all the API calls were converted over to their proper coherence 3.7 versions, and the coherence.xml files were migrated to use the new xsd etc.

    it seems that the issue might be related to this:
    2012-08-15 14:19:34.086 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.cache.MongoCacheStore):Foo.com-CMS, member=13): Failed to store key="assetId=DEFAULT;assetStyle=DEFAULT;initial=c;siteId=foosite;"
    2012-08-15 14:19:34.087 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.configrepo.cache.MongoCacheStore):Foo.com-CMS, member=13): (Wrapped) java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterFromBinary.convert(PartitionedCache.CDB:4)
         at com.tangosol.net.cache.BackingMapBinaryEntry.getValue(BackingMapBinaryEntry.java:124)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:5731)
         at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:4814)
         at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:4217)
         at com.tangosol.util.Daemon$DaemonWorker.run(Daemon.java:803)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2303)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2746)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
         ... 7 more
    Looks like it is an issue with the serialization? We're primarily using XmlBean, not POF, for serialization.
    Any tips on troubleshooting this?
    Edited by: RyanGardner on Aug 15, 2012 7:37 AM
    Edited by: RyanGardner on Aug 15, 2012 7:38 AM

  • Pre-loading the cache

    I'm attempting to pre-load the cache with data and have implemented controllable caches as per this document (http://wiki.tangosol.com/display/COH35UG/Sample+CacheStores). My cache stores are configured as write-behind with a 2s delay:
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>PARTY_CACHE</cache-name>
              <scheme-name>party_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <distributed-scheme>
                <scheme-name>party_cache</scheme-name>
                <service-name>partyCacheService</service-name>
                <thread-count>5</thread-count>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <write-delay>2s</write-delay>
                        <internal-cache-scheme>
                            <local-scheme/>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>spring-bean:partyCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
         </caching-schemes>
    </cache-config>
    public static void enable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.TRUE);
    }

    public static void disable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.FALSE);
    }

    public static boolean isEnabled(String storeName) {
        return ((Boolean) CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).get(storeName)).booleanValue();
    }

    public void store(Object key, Object value) {
        if (isEnabled(getStoreName())) {
            throw new UnsupportedOperationException("Store method not currently supported");
        }
    }

    The problem I have is that what seems to be happening is:
    1) bulk loading process calls disable() on the cache store
    2) cache is loaded with data
    3) bulk loading process calls enable() on the cache store ready for normal operation
    4) the service thread starts to attempt to store the data as the check to see if the store is enabled returns true because we set it to true in step 3
    So is there a way of temporarily disabling the write-delay, or changing it programmatically, so that step 4 doesn't happen?

    Adding Thread.sleep(10000); after loading the data seems to solve the problem, but this seems dirty. Any better solutions?
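
    For context, the controllable-store pattern from the linked sample boils down to something like the sketch below (illustrative names; this variant silently skips writes while the store is disabled rather than throwing):

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import com.tangosol.net.cache.AbstractCacheStore;

         public class ControllablePartyCacheStore extends AbstractCacheStore {
             private static final String STORE_NAME = "partyCacheStore";   // illustrative store name
             private static final String CONTROL_CACHE = "CONTROL_CACHE";  // illustrative control cache

             public Object load(Object key) {
                 // delegate to the real DAO / database lookup here
                 return null;
             }

             public void store(Object key, Object value) {
                 if (!isEnabled()) {
                     return; // pre-loading in progress: skip the database write
                 }
                 // delegate to the real database write here
             }

             private boolean isEnabled() {
                 NamedCache control = CacheFactory.getCache(CONTROL_CACHE);
                 Boolean enabled = (Boolean) control.get(STORE_NAME);
                 return enabled != null && enabled.booleanValue();
             }
         }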

  • How to pre-load all database rows into cache

    Hi All,
    Below is my cache configuration. I would like to know how to load all of the database rows, or a specified number of rows, into the cache.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TableEmp</cache-name>
          <scheme-name>distributed-hibernate</scheme-name>
          <init-params>
            <init-param>
              <param-name>entityname</param-name>
              <param-value>com.tangosol.examples.explore.Emp</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributed-hibernate</scheme-name>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme></local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>
                    com.tangosol.coherence.hibernate.HibernateCacheStore
                  </class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>{entityname}</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
    Please kindly provide a solution.
    Regards
    S

    Hi Rich,
    Imagine I have just downloaded coherence, I have run a server with the default config. From what you said to S, coherence can pull the data from the database itself WITHOUT me having to push it to coherence? If so can you please explain how this is done, or point me at a guide?
    You might start with [Read-Through Caching|http://coherence.oracle.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+and+Refresh-Ahead+Caching#Read-Through%2CWrite-Through%2CWrite-BehindandRefresh-AheadCaching-ReadThroughCache] to understand how Coherence can pull data. It is the implementation of a CacheLoader that enables the Coherence cache to pull the data.
    The cache configuration that S provided specifies a read-write-backing-map-scheme indicating that HibernateCacheStore class should be used by Coherence and is similar to the configuration discussed at [Using Hibernate as a CacheStore for Coherence|http://wiki.tangosol.com/display/COH34UG/Using+Hibernate+as+a+CacheStore+for+Coherence]. In responding to the original question, I was assuming that the data source being queried to be loaded into the cache is the same as the data source fronted by the Hibernate configuration.
    Secondly, with respect to the answer to my question: if I don't care about versioning ... do I need an EvolvablePortableObject?
    If you really don't want to version your serialized representations, you can implement the PortableObject interface instead, but the additional cost of implementing EvolvablePortableObject is small and the potential benefit is great.
    So my question is, can coherence pull the data from the database using a preload request and serialize into a POF format without me having to push the data to coherence via a separate app? And if so could you please explain how? Or direct me at some documentation?
    You do not need to push data to Coherence via a separate app. Coherence can pull the data from the database. Coherence can also preload the cache using an EntryProcessor. You can configure Coherence to use POF and will need to implement POF serialization methods for your cache objects.
    The [Partitioned cache with a serializer|http://coherence.oracle.com/display/COH34UG/Sample+Cache+Configurations#SampleCacheConfigurations-Partitionedcacheofadatabase] example and the links it provides should provide sufficient documentation for configuring and using POF.
    Whether you decide to use the HibernateCacheStore, the TopLinkCacheStore or implement your own CacheStore or CacheLoader class to access your data in your database is your decision. You should be able to find sufficient documentation and examples to help you decide how you would like to use Coherence at the [Coherence Knowledge Base|http://wiki.tangosol.com/display/COH/Oracle+Coherence+Knowledge+Base+Home]. I would recommend starting with the [User Guide|http://wiki.tangosol.com/display/COH34UG/Coherence+3.4+Home] if you would like to get a better grasp of the overall architecture.
    Regards,
    Harv

  • Pre-loading the Cache from Database during application start-up

    We are using Spring, Hibernate, Oracle Coherence 3.5.2, and Weblogic Web Services.
    Our requirement is to pre-load the cache during application start-up, most probably when the Authentication/Authorization Service is invoked.
    We plan to load the data for other services from the database into the Coherence cache so that whenever a user accesses that particular service he ends up hitting the cache instead of the database.
    We would greatly appreciate sample code snippets on how to write a CacheInitializerBean with a marker to demonstrate the state of the cache.

    Hi Rob,
    Thanks for pointing to the article: Pre-Loading the Cache
    In fact I already looked at that article before posting. It just mentions how to load the data from the database into the cache.
    What I am looking for is how to make this happen during application start-up. This is my first hurdle.
    The second one is as mentioned in the article http://coherence.oracle.com/display/COH35UG/Pre-Loading+the+Cache
    I wrote the following code, but the cache never gets populated. Not sure what's going wrong, even though I see the Hibernate loadAll() method loading all the objects in the console.
    public void populateCache() throws SQLException {
        Map<Long, Object> buffer = new HashMap<Long, Object>();
        int count = 0;
        List<Contract> contractList = this.getHibernateTemplate().loadAll(Contract.class);
        log.debug("contractList size=" + contractList.size());
        for (Contract contract : contractList) {
            Long key = new Long(contract.getId());
            Object value = contract;
            buffer.put(key, value);
            // this loads 1000 items at a time into the cache
            if ((count++ % 1000) == 0) {
                contractCache.putAll(buffer);
                buffer.clear();
            }
        }
        if (!buffer.isEmpty()) {
            contractCache.putAll(buffer);
        }
    }

    We would greatly appreciate your time in helping us resolve these two hurdles.

  • Should OS/FileSystem caching be write-through?

    I have a question. I use Ubuntu. Should I mount my filesystem (which holds BDB's content) with the "-o sync" option? That is, should my file system cache be write-through?
    I have this question because, if I turn on the logging feature in Berkeley DB but let the file system cache be write-back, I don't exactly know if the log is properly flushed to the disk or not.

    Thanks George. I agree that mature applications would be better off mounting their filesystem with the "-o sync" option.
    But here is the thing: I ran an example test case where I inserted 10 million key-value pairs with logging enabled, and saw that the average response time per insertion was 10 milliseconds; I did the same experiment with logging disabled and saw that it too took 10 milliseconds per insertion on average.
    For the experiment with logging enabled, I create the environment with the DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I guess this way of doing insertions is called autocommit. I am hoping I am doing this experiment right.
    Thanks for the pointers about set_flags() and DB_TXN_NOSYNC, I am going to look them up.
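
    For reference, a write-through mount is just the sync option on the filesystem holding the BDB environment; an illustrative /etc/fstab entry (device and mount point are placeholders):

         # mount the BDB data partition with synchronous writes
         /dev/sdb1  /var/lib/bdb  ext4  sync  0  2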

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
         For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

          > The transaction support in Coherence is for Local
          > Transactions. Basically, what this means is that the
          > first phase of the commit ("prepare") acquires locks
          > and ensures that there are no conflicts. The second
          > phase ("commit") does nothing but push data out to
          > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
          This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
          For the write-thru case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, then once one atomic write-thru put succeeds that means database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up in partial commits in such situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, I can roll back entirely and avoid partial commits? Since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?

  • Pre-load the Cache during Application-Start Up

    Our requirement is to pre-load the cache during application start-up, most probably when the Authentication/Authorization Service is invoked.
    We plan to load the data for other services from the database into the Coherence cache so that when a user accesses that particular service he ends up hitting the cache instead of the database.
    Any pointers/suggestions on how to pre-load the cache during application start-up would be greatly appreciated. We are using Spring, Hibernate, and Weblogic Web Services.
    Regards,
    Bansi

    Hi Bansi,
    I was using the following approach.
    First, we never use CacheFactory.getCache() in application code; instead, all named cache instances were injected.
    On the server side, I have a CacheInitializerBean which starts the cache preloading process (in a separate thread). After preloading, a special marker entry is put into the cache, indicating that the data in the cache is consistent.
    When injecting a named cache instance, we use a factory. This factory uses CacheFactory.getCache() internally, but it checks for the presence of the marker object in the cache and blocks until the marker object appears.
    Well, in practice things are a little more complicated, but this is the basic idea.
    Preload the cache asynchronously and use a marker to indicate completion of the loading process.
    Hope this will help.
    Regards,
    Alexey
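
    A bare-bones sketch of that factory (the marker key and polling interval are illustrative):

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;

         public class ReadyCacheFactory {
             private static final String MARKER_KEY = "__PRELOAD_COMPLETE__"; // illustrative marker entry

             // Blocks until the preloader has put the marker entry into the cache.
             public static NamedCache getReadyCache(String cacheName) throws InterruptedException {
                 NamedCache cache = CacheFactory.getCache(cacheName);
                 while (!cache.containsKey(MARKER_KEY)) {
                     Thread.sleep(500); // poll until preloading completes
                 }
                 return cache;
             }
         }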

  • How to pre-load Coherence Caches used within an OEP Application

    Hi OEP/Coherence guys,
    I'm currently developing an OEP application that was consuming database inputs in CQL queries.
    I've replaced database direct access by Coherence caches access. My Coherence Local caches use a cache loader to fetch rows (by key) when there is a cache miss. This is working well, and the caches get filled in during the execution of my OEP application.
    The problem is that if CQL queries are made on some attributes (not the key) of not-yet-cached data, the load method of my cache loader is not invoked and there is no result to my CQL query.
    I'm wondering how to pre-load my data in Coherence caches, from the database, when the OEP application starts, to avoid this kind of problem...
    Thx for any advice.
    Renato

    Hi.
    Could you please describe the way to "set-up a cache-loader to load data into your cache when the OEP application starts" ?
    I have a cache-loader configured with my cache. My cache-loader implements the "com.tangosol.net.cache.CacheLoader" interface.
    This interface only defines 2 methods:
    load(java.lang.Object oKey) ==> Return the value associated with the specified key, or null if the key does not have an associated value in the underlying store.
    loadAll(java.util.Collection colKeys) ==> Return the values associated with each the specified keys in the passed collection.
    None of these methods allows me to pre-load my data (and BTW it looks like "loadAll" is never called by OEP)
    Thx
    RP
