Transactional Caches and Write Through

I've been trying to implement the use of multiple caches, each with write-through, all within a transaction.
     The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
     If the second transactionMap throws exceptions, the caches do not appear to roll back correctly.
     I can wrap the whole operation in a JDBC transaction that rolls back the database correctly, but the caches are not all rolled back, presumably because they are committed one by one.
     For example, I write to two transaction maps, each one created from a separate cache. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and does not roll back.
     Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
     I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
     (SMARTPC being my pc name)
     Has anybody else had this problem? Bonus points for describing how to fix it!
     Mike

> The transaction support in Coherence is for Local
     > Transactions. Basically, what this means is that the
     > first phase of the commit ("prepare") acquires locks
     > and ensures that there are no conflicts. The second
     > phase ("commit") does nothing but push data out to
     > the caches.
     This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
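     To make the prepare/commit flow concrete, here is a minimal sketch of the local-transaction API being discussed (TransactionMap and CacheFactory.commitTransactionCollection are the Coherence 3.x calls the original poster mentions; the cache names, keys and values are illustrative only):
     NamedCache cacheA = CacheFactory.getCache("cacheA");
     NamedCache cacheB = CacheFactory.getCache("cacheB");

     TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
     TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);
     txA.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
     txA.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
     txB.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
     txB.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
     txA.begin();
     txB.begin();

     txA.put("key-1", "value-1");
     txB.put("key-2", "value-2");

     // commitTransactionCollection() prepares (locks) all maps first, then commits
     // (pushes data to the caches). With write-through the CacheStore runs during
     // that commit phase, which is why a database failure there cannot be fully
     // rolled back across the caches.
     Collection txns = new ArrayList();
     txns.add(txA);
     txns.add(txB);
     if (!CacheFactory.commitTransactionCollection(txns, 1))
         {
         CacheFactory.rollbackTransactionCollection(txns);
         }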
     > The problem is that when you are using a
     > CacheStore module, the exception is occurring during
     > the second phase.
     If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
     >
     > For this reason, write-through and cache transactions
     > are not a supported combination.
     This is not true for a cache transaction that updates a single cache entry, right?
     >
     > For single-cache-entry updates, CacheStore operations
     > are fully fault-tolerant in that the cache and
     > database are guaranteed to be consistent during any
     > server failure (including failures during partial
     > updates). While the mechanisms for fault-tolerance
     > vary, this is true for both write-through and
     > write-behind caches.
     For the write-through case, I believe the database and cache are atomically updated.
     > Coherence does not support two-phase CacheStore
     > operations across multiple CacheStore instances. In
     > other words, if two cache entries are updated,
     > triggering calls to CacheStore modules sitting on
     > separate servers, it is possible for one database
     > update to succeed and for the other to fail.
     But once we have multiple CacheStore modules, as soon as one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back that database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
     If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? In essence, write-behind cache stores would then be no different from local transactions... Is my understanding correct?

Similar Messages

  • Should OS/FileSystem caching be write-through?

    I have a question. I use Ubuntu. Should I mount my filesystem (which holds BDB's content) with "-o sync" option? That is, should my file system cache be write-through?
    I have this question because, if I turn on the logging feature in Berkeley DB but let the file system cache be write-back, I don't exactly know if the log is properly flushed to the disk or not.

    Thanks George. I agree that mature applications would be better off mounting their filesystem with "-o sync" option.
    But here is the thing: I ran an example test case where I inserted 10 million key-value pairs with logging enabled, and saw that the average response time per insertion was 10 milliseconds; I did the same experiment with logging disabled and saw that it too took 10 milliseconds per insertion on average.
    For the experiment with logging enabled, I create the environment with DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I guess this way of doing insertions is called autocommit. I am hoping I am doing this experiment right.
    Thanks for the pointers about set_flags() and DB_TXN_NOSYNC, I am going to look them up.

  • Map listeners and write-through strategy.

    Hi.
    The write-through strategy appears to be a synchronous operation, and if it fails, no data should appear in the cache. Logically this means that no events will be produced if persisting fails (that's exactly what we need). But I could not find any mention of this in the documentation. Can anyone verify this?
    Thanks, Anton.

    If you are talking about throwing an exception in your CacheStore code,
    it will happen before anything occurs in the internal cache managed by
    Coherence, and no events will be generated (events that would otherwise
    have been generated had the operation succeeded).
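    For illustration, a write-through store along these lines (a sketch only; it extends com.tangosol.net.cache.AbstractCacheStore, and the "database" call is a placeholder) makes the put fail before the value ever reaches the backing map, so no MapListener events fire for that entry:
    public class OrderCacheStore extends AbstractCacheStore
        {
        public Object load(Object oKey)
            {
            return null; // no read-through in this sketch
            }

        public void store(Object oKey, Object oValue)
            {
            try
                {
                writeToDatabase(oKey, oValue);
                }
            catch (Exception e)
                {
                // Propagating the exception makes the cache put fail as well:
                // the entry never reaches the backing map, so no event is raised.
                throw new RuntimeException("Database write failed for " + oKey, e);
                }
            }

        private void writeToDatabase(Object oKey, Object oValue) throws Exception
            {
            // Placeholder for the real persistence call (JDBC, JPA, ...).
            throw new Exception("simulated database failure");
            }
        }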

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we encounter an abnormal behaviour. The code executes without any exception or warnings but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In the same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put succeeds in any scenario (both key and value are visible using the coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
     public static void main(String[] args) {
          Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
          con.setAutoCommit(false);
          try {
               OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.insert(tID, tC);
               con.commit();
          } catch (Exception e) {
               e.printStackTrace();
               con.rollback();
          } finally {
               con.close();
          }
     }
    Code that succeeds (but without transaction APIs):
     public static void main(String[] args) {
          try {
               NamedCache cache = CacheFactory.getCache("stt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.put(tID, tC);
          } catch (Exception e) {
               e.printStackTrace();
          } finally {
          }
     }
    And here is what we list using coherence console if we use transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestion, please?

    Cristian,
    After looking at your configuration, I noticed that it is incorrect: for a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
     </backing-map-scheme>
     To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
     </transactional-scheme>
     http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John
    Edited by: jspeidel on Sep 16, 2010 10:44 AM
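     For reference, applying John's suggestion to the configuration posted above would look roughly like this (a sketch only: the backing-map-scheme is removed and high-units moves directly under the transactional-scheme; the serializer block is unchanged):
     <transactional-scheme>
          <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
          <service-name>storage.transactionalcache.cnt</service-name>
          <thread-count>10</thread-count>
          <serializer>
               <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
               <init-params>
                    <init-param>
                         <param-type>String</param-type>
                         <param-value>cnt-pof-config.xml</param-value>
                    </init-param>
               </init-params>
          </serializer>
          <autostart>true</autostart>
          <high-units>250M</high-units>
     </transactional-scheme>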

  • JDO transaction, caching and stored procedures

    I currently retrieve an existing object, update the object, and then call
    a stored procedure - all in one transaction. The issue I am running into
    is that the stored proc must have the modified data available when it
    selects the row (modified object) from the table. This is currently not
    happening. For example, if the initial field was null, then it is updated
    and the proc called, the proc only sees null. How do I specify that the
    changes should be processed within the transaction (not committed) and
    available for other processes reading the data in the same tx?
    Note: everything works perfectly if I manually execute the update statement
    through a Statement on the connection for the stored proc. I have tried
    flush().
    Any ideas would be greatly appreciated!

    Here is a dumbed down version... I also tried doing a Query with JDO and
    that returns the modified value. Basically, the code below finds the
    existing object, modifies a field, and then calls a stored proc to do some
    processing. Unfortunately, the stored proc needs to be able to "see" the
    modified data in the transaction.
     public void modifyFoo( UserToken userToken,
                            Foo newFoo,
                            Integer applicationId )
     {
          PersistenceManager pm = null;
          Transaction tx = null;
          try
          {
               pm = JDOFactory.getPersistenceManager( userToken );
               tx = pm.currentTransaction();
               tx.begin();
               FooBoId appId = new FooBoId();
               appId.fooBolId = applicationId;
               FooBo appDbVo = (FooBo) pm.getObjectById( appId, true );
               appDbVo.setEffectiveDate( newFoo.getEffectiveDate() );
               String fooNumber = executeFooBar( userToken, applicationId );
               appDbVo.setFooNumber( fooNumber );
               tx.commit();
          }
          finally
          {
               if ( pm != null )
               {
                    pm.close();
               }
          }
     }

     private String executeFooBar( UserToken userToken, Integer applicationId )
          throws PulsarSystemException, ValidationException
     {
          String licenseNumber = null;
          try
          {
               // Get a connection from the persistence manager
               Connection connection = JDOFactory.getConnection( userToken );
               connection.setAutoCommit( false );
               CallableStatement cs = connection.prepareCall(
                    "{call sys_appl_pkg.fooBar(?,?,?,?,?,?)}" );
               // Register the type of the return values
               cs.registerOutParameter( 4, Types.NUMERIC );
               cs.registerOutParameter( 5, Types.NUMERIC );
               cs.registerOutParameter( 6, Types.VARCHAR );
               // Set parameters:
               cs.setString( 1, "Test" );
               cs.setInt( 2, applicationId.intValue() );
               cs.setInt( 3, 0 );
               cs.execute();
          }
          catch( SQLException ex )
          {
               ex.printStackTrace();
          }
          return licenseNumber;
     }
    Patrick Linskey wrote:
    Can you post the code that you're executing?
    -Patrick
    Patrick Linskey
    SolarMetric Inc.
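    The thread does not record a resolution, but the usual cause of this symptom is that the CallableStatement runs on a different JDBC connection (and therefore a different database transaction) than the one the JDO changes are flushed to. A hedged sketch of the commonly suggested fix - flush the JDO transaction and run the procedure on the connection associated with the same persistence manager (this uses the JDO 2.0 getDataStoreConnection() API; the Kodo 2.x equivalent at the time was vendor-specific) - might look like this:
    // Sketch only: make the stored procedure see the un-committed JDO changes
    // by flushing and reusing the persistence manager's own connection.
    tx.begin();
    FooBo appDbVo = (FooBo) pm.getObjectById(appId, true);
    appDbVo.setEffectiveDate(newFoo.getEffectiveDate());

    pm.flush();                                          // push the UPDATE to the database, still un-committed

    JDOConnection jdoCon = pm.getDataStoreConnection();  // javax.jdo.datastore.JDOConnection (JDO 2.0)
    try
        {
        Connection con = (Connection) jdoCon.getNativeConnection();
        CallableStatement cs = con.prepareCall("{call sys_appl_pkg.fooBar(?,?,?,?,?,?)}");
        // ... bind parameters as in the code above, then:
        cs.execute();
        cs.close();
        }
    finally
        {
        jdoCon.close();                                  // hand the connection back to the PM
        }

    tx.commit();                                         // the JDO update and the proc's work commit together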

  • Permisions to read and write on a external hard drive

    Currently I am working with two Mac Air. One of them with Mac OS X 10.9.5 and the other one with Mac OS10.7.5.
    Also I have most of my files in an external hard drive.
    When I try to move files from the Mac Air with Mac OS 10.7.5 to the hard drive, I have no problem.
    When I try to move files from the Mac Air with Mac OS X 10.9.5 to the same hard drive, I am unable to do so
    because the hard drive appears as read only (drwxr-xr-x).
    Does anyone know how to change the settings to allow writing to the hard drive (being able to save files from my
    computer to the hard drive)?
    Thank you for your help.
    By the way, the hard drive is not Apple.

    Dear Kappy
    That's the problem. In the information window there is no little lock anywhere, as there is in other windows
    such as Security & Privacy or Parental Controls.
    Perhaps the lock to unlock this window is somewhere else in the preference settings.
    I can try to see if I can change these settings with the Mac Air running Mac OS 10.7.5,
    but so far I haven't been able to find a way to change read and write permissions
    through the information window.
    Thank you for your collaboration
    Ramon

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction(with Optimistic Read-Committed settings) that involves multiple caches some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, <tt>com.tangosol.net.cache.ReadWriteBackingMap</tt>) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.
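          For reference, the element Jon mentions sits under the read-write-backing-map-scheme; a minimal sketch (the scheme and store class names are illustrative) looks like this:
          <read-write-backing-map-scheme>
               <scheme-name>example-write-through-scheme</scheme-name>
               <internal-cache-scheme>
                    <local-scheme/>
               </internal-cache-scheme>
               <cachestore-scheme>
                    <class-scheme>
                         <class-name>com.example.ExampleCacheStore</class-name>
                    </class-scheme>
               </cachestore-scheme>
               <!-- true: store exceptions are rethrown to the caller and the cache update is not applied -->
               <rollback-cachestore-failures>true</rollback-cachestore-failures>
          </read-write-backing-map-scheme>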

  • Write-through limitation and putAll

    Please find the quote below from the developer guide, particularly this part: "In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail."
    If a putAll is called on a cache, will it result in one CacheStore.storeAll or many storeAll calls triggered from different Coherence nodes/servers? (assume a distributed topology, Coherence 3.7.1)
    Will the store transaction failure lead to putAll transaction failure?
    Are there any patterns that shows how this coherence works with typical databases?
    14.7.2 Write-Through Limitations
    Coherence does not support two-phase CacheStore operations across multiple CacheStore instances. In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).

    gs100 wrote:
    Thanks for the input, I have further questions based on these suggestions.
    1. Let us say one of the putAll calls fails; we would know that it has failed due to one or more underlying store/storeAll calls. And even if we roll back the Coherence transaction, the store/storeAll that succeeded would not be rolled back automatically, is that correct? If true, this means that it would leave the underlying DB/store in a state inconsistent with the in-memory cache?
    I guess that is one of the reasons why the transaction framework does not support cache stores... also, write-behind would coalesce updates, which would have funny consequences with regards to the transactional context...
    2. How do we get the custom implementation of putAll that you suggested, to handle specific errors? Any pointers on this would be helpful.
    I guess it is not going to be posted; the Coherence team may or may not add something which is a bit more deterministic with regard to errors.
    A few aspects of Coherence behaviour (a.k.a pitfalls) which you need to be aware of to be able to implement your own solution:
    Exceptions propagating back to the client can happen in:
    - entry-processor (not for putAll specifically)
    - result serialization code (not for putAll specifically, but for processAll/aggregate for example)
    - deserialization code (indexes/filter-based backing map listeners/cache stores lead to deserialization even for putAll)
    - triggers (intentionally, too)
    - cache stores
    There is no place where you could catch any exceptions from inside the NamedCache call, so they will come out.
    Coherence may execute the operation on one thread per partition or one thread per multiple partitions, but never on multiple threads per partition. This means there may be multiple exceptions even from a single storage node, but only at most one exception would be generated per partition (starting with 3.5).
    If you send multiple partitions with the same NamedCache call, you can lose exceptions as you wouldn't know if an exception would have or wouldn't have happened with a partition if it was sent alone instead of together with another on the same node.
    As you need to be able to return all exceptions from your method call, you have to produce and catch all of them and collect them otherwise you would lose all but one. To produce and catch all exceptions you have to produce all exceptions independently, i.e. different partitions must be operated on independently.
    To send an operation to a single partition only, you can separate the operations to different partitions by separating the keysets for different partitions with key-based operations, or applying a PartitionedFilter for filter-based operations.
    It is up to you where and how you iterate through the partitions. You can do it on the caller, you can do it on storage node from an Invocable sent via an InvocationService (in this case you can be either optimistic with ownership or chase a partition).
    3. Because we are thinking the putAll that Coherence implemented is the most optimized (parallelism), I am not sure how a custom implementation can be as optimal (hope we don't end up calling one by one).
    You cannot implement it as optimally as Coherence itself does, as it interleaves operations (Messages) to independent partitions/nodes (it does not have to wait for the return message) from a single thread, without waiting for the responses from individual nodes/partitions.
    You can either parallelize operations to multiple threads, or do the iteration on the single thread at the cost of higher latency.
    Best regards,
    Robert
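    A rough sketch of the per-partition approach Robert describes above - splitting the keys of a putAll by owning partition so that at most one exception per partition can occur and none are lost. PartitionedService and KeyPartitioningStrategy (com.tangosol.net / com.tangosol.net.partition) are the real service APIs; everything else is illustrative:
    public static Map<Integer, Exception> putAllPerPartition(NamedCache cache, Map<Object, Object> entries)
        {
        PartitionedService service = (PartitionedService) cache.getCacheService();
        KeyPartitioningStrategy strategy = service.getKeyPartitioningStrategy();

        // Group the entries by owning partition.
        Map<Integer, Map<Object, Object>> byPartition = new HashMap<Integer, Map<Object, Object>>();
        for (Map.Entry<Object, Object> entry : entries.entrySet())
            {
            int nPartition = strategy.getKeyPartition(entry.getKey());
            Map<Object, Object> part = byPartition.get(nPartition);
            if (part == null)
                {
                part = new HashMap<Object, Object>();
                byPartition.put(nPartition, part);
                }
            part.put(entry.getKey(), entry.getValue());
            }

        // One putAll per partition: exceptions are collected instead of lost.
        Map<Integer, Exception> failures = new HashMap<Integer, Exception>();
        for (Map.Entry<Integer, Map<Object, Object>> part : byPartition.entrySet())
            {
            try
                {
                cache.putAll(part.getValue());
                }
            catch (Exception e)
                {
                failures.put(part.getKey(), e);
                }
            }
        return failures;
        }
    As Robert notes, this trades the interleaving Coherence does internally for deterministic error reporting, so expect higher latency unless you parallelize the per-partition calls yourself.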

  • Transactions involving multiple caches and a database

    Hi all,
    I'm curious if the following is possible with the transaction support in Coherence.
    I have the need to write data to two caches and a database from within my Tomcat container, and the whole operation must be atomic.
    Example:
    Write to database (this is NOT via a CacheStore)
    Write to cache 1 (uses write-through to database)
    Write to cache 2 (uses write-through to database)
    If any operation fails, the whole transaction needs to rollback. This feels like an XA transaction, but it looks like it won't work like I expect because the cache must be the last resource, but I have two caches. The ordering of operations is not important.
    Thanks,
    Rob

    MagnusE wrote:
    When using write-through there is (as far as I know) no way to get a fully transactional behaviour (assuming you have more than one cache node) since each node is responsible for persisting its own data items (they will each use a separate connection to the database).
    If you on the other hand use a "cache aside" pattern, this can be made transactional using XA. As long as both caches belong to the same cache service they count as a single "last resource"...
    /Magnus
    You also need to use the same transaction isolation and concurrency setting for it to count as the same last resource. Practically, you should only have a single CacheAdapter instance enrolled in the same transaction.
    Best regards,
    Robert

  • Migrating from 3.1 to 3.7 - write through for a custom cache store issues

    We're migrating from 3.1 to 3.7. So far the migration and testing has been fairly uneventful, but there is one issue that came up yesterday that seems like it is going to be tricky to debug.
    We have a set of storage-enabled nodes that use a custom CacheStore to read from and write behind to a mongo database. On another node connected to that caching service, read throughs work just fine. (I can set breakpoints on the CacheStore load method and see the load calls coming through just fine) - but what's not working is when the other node does a Cache.put - the Store method on the CacheStore is never called and so far I don't see anything in the logs indicating there is a problem on either side (I'm going to make sure that the coherence logging is up to the highest level on both the nodes today when I'm doing more testing)
    I can see the cache put start to dive into the coherence jar, but I don't have source jars for coherence so it's fairly opaque what might be going wrong after the Cache.put(object, object) call. I can see that it dives into various coherence methods, but
    Any ideas on where to start debugging this?
    This setup worked fine on 3.1, and as best we can tell all the API calls were converted over to their proper coherence 3.7 versions, and the coherence.xml files were migrated to use the new xsd etc.

    it seems that the issue might be related to this:
    2012-08-15 14:19:34.086 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.cache.MongoCacheStore):Foo.com-CMS, member=13): Failed to store key="assetId=DEFAULT;assetStyle=DEFAULT;initial=c;siteId=foosite;"
    2012-08-15 14:19:34.087 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.configrepo.cache.MongoCacheStore):Foo.com-CMS, member=13): (Wrapped) java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterFromBinary.convert(PartitionedCache.CDB:4)
         at com.tangosol.net.cache.BackingMapBinaryEntry.getValue(BackingMapBinaryEntry.java:124)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:5731)
         at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:4814)
         at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:4217)
         at com.tangosol.util.Daemon$DaemonWorker.run(Daemon.java:803)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2303)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2746)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
          ... 7 more
    Looks like it is an issue with the serialization? We're primarily using XmlBean, not POF for serialization.
    Any tips on troubleshooting this?
    Edited by: RyanGardner on Aug 15, 2012 7:37 AM
    Edited by: RyanGardner on Aug 15, 2012 7:38 AM

  • Write through and CacheStore

    Hi,
         I'm running a near cache implementation, with the front being a local cache and the back being a distributed cache. The distributed cache has a local cache and a read-write-backing-map-scheme for persisting each cache to disk every t minutes (for backup purposes - we still use a cache in memory).
         I have a few questions about the Write through capabilities and CacheStore so as to better understand what's going on here:
         1. We only need to store the data to disk for backup in case of complete cluster failure (for example, all n machines in our cluster go down). Right now my implementation of the CacheStore has one line which reads "return null" for the following methods:
         load(..)
         loadAll(..)
          Is there a more efficient/effective way to write to disk and ignore reads if an item does not exist in the distributed map, rather than extending CacheStore and returning null for all the methods noted above?
         My reading and writing to disk occurs using the ExternalizableHelper class, thx for this nice work.
          2. How are CacheStores instantiated initially? Since we want each cache (say we have two different caches here for simplicity) backed up to file every t minutes, do we have to have a separate CacheStore object for each cache? What is the best practice for attaching a CacheStore to a particular cache?
         For example, I start two Tangosol instances, one on machineA and one on machineB, both storing data as per my configuration. The 2 caches being used are "cacheA" and "cacheB". So when I start the two Tangosol instances, I have to instantiate CacheStore twice so that I can separately write "cacheA" and "cacheB" to their own separate files.
         3. When backup to disk occurs, is there any removing of items from the distributed cache?
         4. I'm not completely sure on the write delay here. What if an item in the cache is just added once, and no updates occur on it (ie. just one put, and 0+ gets). After a specified amount of time, will this be written to disk, or does an update on this object in the cache have to occur before this item can be added to the write queue and eventually written to disk? Once an item is added for the first time, does this trigger the update time for this object to be the first write time?
         Thanks,
         - Noah

    Hi Noah,
         1. No, load() and loadAll() returning null is the most effective way of implementing this.
         2. You can pass the cache name as a constructor parameter - see Parameter Macros in the Coherence User Guide.
         3. No, nothing is removed from the cache
         4. Writes are only triggered by put()'s.
         For more information please take a look at this forum post: <a href = "http://www.tangosol.net/forums/thread.jspa?threadID=445&tstart=0">What is Read-Through/Write-Through/Write-Behind Caching? </a>
         Regards,
         Dimitri
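          Regarding point 2 above, the usual way to hand the cache name to the CacheStore is the {cache-name} parameter macro in the cachestore-scheme. A sketch (the store class name and write-delay value are illustrative):
          <read-write-backing-map-scheme>
               <internal-cache-scheme>
                    <local-scheme/>
               </internal-cache-scheme>
               <cachestore-scheme>
                    <class-scheme>
                         <class-name>com.example.FileBackupCacheStore</class-name>
                         <init-params>
                              <init-param>
                                   <param-type>java.lang.String</param-type>
                                   <param-value>{cache-name}</param-value>
                              </init-param>
                         </init-params>
                    </class-scheme>
               </cachestore-scheme>
               <write-delay>10m</write-delay>
          </read-write-backing-map-scheme>
          The CacheStore then declares a constructor taking the cache name (for example, public FileBackupCacheStore(String cacheName)), so the instance created for "cacheA" and the one created for "cacheB" each write to their own file.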

  • Write-through cache error kills coherence

    Hi, we have a write through cache, and when there's an error writing to the cache store, the cache dies. The first error we see is:
    3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.918/34.768 Oracle Coherence GE 3.5.3/465 <Info> (thread=DistributedWriteThroughWorker:3, member=1): (Wrapped: Failed to store key="1342708115777") java.lang.RuntimeException: Failed to store hibernate entity : This is normal, since there's a legitimate reason why this couldn't get stored, but after the stack trace, we see the below. So the 1st thing is the "Terminating DistributedCache", that just kills the node and makes it unusable, then there's the "unknown user type" message, as if it's trying to send something over the wire, but it can't; although this "WrapperException" is not one of our classes. Clearly we don't want the cache dying, we want to continue to use it, but any subsequent request to the cache fails. Any ideas?
         [java] ERROR | 07-19-2012 07:28:36.116 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalArgumentException
         [java] ERROR | 07-19-2012 07:28:36.117 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1):
         [java] java.lang.IllegalArgumentException: unknown user type: com.tangosol.util.WrapperException
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:400)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:389)
         [java]      at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1432)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.writeObject(Service.CDB:4)
         [java]      at com.tangosol.coherence.component.net.Message.writeObject(Message.CDB:1)
         [java]      at com.tangosol.coherence.component.net.message.DistributedCacheResponse.write(DistributedCacheResponse.CDB:2)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.packetizeMessage(PacketPublisher.CDB:137)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher$InQueue.add(PacketPublisher.CDB:8)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.dispatchMessage(Grid.CDB:50)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.post(Grid.CDB:53)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:146)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
         [java]      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         [java]      at java.lang.Thread.run(Thread.java:662)

    Ok, figured it out. I failed to include the standard "coherence-pof-config.xml" when loading in our pof config file, and that's what caused the troubles. Including it solved the problem of the dying cache.
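    For anyone hitting the same thing, the fix amounts to including the standard Coherence types in the custom POF configuration, along these lines (the user type id and class name are illustrative):
    <pof-config>
         <user-type-list>
              <!-- pull in Coherence's own POF types (WrapperException, etc.) -->
              <include>coherence-pof-config.xml</include>
              <user-type>
                   <type-id>1001</type-id>
                   <class-name>com.example.MyEntity</class-name>
              </user-type>
         </user-type-list>
    </pof-config>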

  • Pre-load a write-through cache

    This probably is a common problem.
    I want to use write-through cache to keep the cache and the db in sync. But at application start-up, I'd like to pre-load the cache. Is there anyway I can disable the write-through behavior during pre-loading, and enable it after pre-loading is complete?

    Wouldn't you just be better off loading the data into the database first, and then just querying it to get it into the cache? That way, the cache and the db are 'in sync', and no write operations have occurred; also, you don't need to 'flip' any cache settings. Subsequent updates to cache items would then be written to the database as normal.
    Surely if the data wasn't in the db first, you'd end up with a large 'write' operation to place it there once you've loaded the cache and 'flipped' it over, which could be quite a lengthy process. I can't see the advantage of this over putting the data in the db first?
    I'd be interested to know more about your specific use-case, as I'm just about to embark on a similar 'loader' program. In my case, I was planning on loading the db first. I'd be interested in any alternative approaches and the reasoning behind them (or, likewise, if you can't actually do it the way I was planning! :)).
    Steve
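    A minimal sketch of the approach Steve describes - put the data into the database first, then warm the cache through normal read-through (the cache name and the key-loading helper are illustrative assumptions):
    // Sketch: warm a write-through/read-through cache from data already in the database.
    NamedCache cache = CacheFactory.getCache("orders");

    // 1. Bulk-load the data into the database by whatever mechanism you prefer.
    // 2. Fetch the keys (e.g. via JDBC) and let read-through populate the cache;
    //    no store() calls are triggered because nothing is being written.
    Collection keys = loadKeysFromDatabase();   // hypothetical helper returning the key set
    cache.getAll(keys);                         // bulk read-through warms the cache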

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read only local cache group and are finding the amount of space taken by transaction logs during the initial cache load operation exceeds the amount of disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory, however, the filesystem available is 273GB less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire group as a single 'transaction') and you cannot use the 'parallel' clause.
    If this fails you may have to manually delete any rows that were loaded since TT cannot rollback.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load using logging. Of course, as I mentioned, the error you are getting right now is nothing to do with log disk space...
    Chris

  • Re: Write-through caching in Forte

    Hello Mark,
    Just one more point. Maybe you can add an Event Notifier to the lock
    manager to send the new instance of Obj1 to the clients (here client2)
    who use it in their cache.
    Hope this helps.
    Daniel Nguyen.
    Mark S. Potts wrote:
    >
    Andrew
    This is a mixture of a cache strategy and object locking. If I
    understand what you have said, I have some suggestions:
    The cache should hold copies of the object and the object should be
    returned to the client. The object that is returned to the client should
    be version stamped (optimistic locking).
    A) Client1 request Obj1
    B) Obj1 is instantiated from the persistent store
    C) Obj1 is version stamped via a lock manager service.
    D) Obj1 is placed in the cache and copy returned to Client1
    Client1 can now work on Obj1
    When Client2 selects Obj2 - the cache size being 1 - the Obj1 is
    replaced with Obj2.
    Obj2 is selected, stamped, and returned to the client as per the steps
    above.
    When Client 2 now selects Obj1, no longer in the cache, the same steps
    need to be completed as above.
    The cache now contains the same version of Obj1 as given to Client1.
    Now the important part: because this is an optimistic locking strategy -
    two clients can have different versions of the same object - it is only
    when the object is saved (returned to the persistent store) that the
    version stamp needs to be checked. Let's say Client2 saves before Client1:
    A) Client2 initiates a save on Obj1
    B) Obj1 checks the lock manager to see if anyone has saved a new version
    of Obj1 since it was selected.
    C) If there have been no saves of Obj1 since Obj1 was selected, i.e. the
    version of Obj1 selected does not conflict with the last version saved,
    then save Obj1
    D) Update the version stamp for Obj1 via the Lock manager
    E) Update Obj1 in the cache.
    When Client1 now tries to save the version of Obj1 a conflict will
    result and an exception needs to be raised - and if necessary the new
    version of Obj1, from the cache, returned to Client1.
    The version control can be done more easily if you are prepared to do
    the locking in the database - I do not recommend this for a number of
    well documented reasons.
    However, if you choose this alternative, instead of using a separate lock
    manager you could simply time stamp the row in the database, either on
    that table or a separate lock table, and when saving Obj1 check the
    time stamp on the object against the time stamp on the row. If they are
    the same, save the object and update the time stamp to the current time
    (granularity of the time stamp determined by the number of concurrent users and
    usage patterns). The time stamp on the row acts as the version stamp
    for the object and is selected into the object as a private attribute at
    time of selection.
    Hope this is of some help.
    Mark Potts
    SCAFFOLDS Product Manager
    Sage IT Partners
    A) Client1 requests Obj1.
    B) Obj1 is instantiated from a persistent store and placed in the cache
    and a reference to Obj1 is
    returned to Client1.
    C) As part of the instantiation of Obj1 the object is version stamped
    through a lock manager service.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2 the
    reference to the Obj1 that Client1 has.
    Faibishenko, Andrew wrote:
    Has anyone out there been successful at implementing a cache which
    maintains updateable objects.
    Due to financial considerations, we cannot buy an off-the-shelf
    framework.
    What we are trying to build is some kind of object persistence
    mechanism
    and the cache would be a layer in that service.
    Our big issue is maintaining consistency within the cache, for
    multiple
    clients performing updates to an object.
    Example:
    A) Client1 requests Obj1.
    B) Obj1 is de-serialized, placed in the cache and a reference to Obj1
    is
    returned to Client1.
    C) Client1 modifies the state of Obj1 through its reference.
    D) Client2 requests Obj2.
    E) Obj2 is de-serialized, placed in the cache, knocking out Obj1, and
    a
    reference to Obj2 is returned to Client2.
    F) Client2 requests Obj1. Since it is no longer in the cache, we
    either
    need to de-serialize Obj1 from some persistent store, in which case we
    now have two out of sync copies of Obj1, or we need to give Client2
    the
    reference to the Obj1 that Client1 has.
    Is this something we should ask Forte Consulting about?
    -Andy
    ============================================
    Andy Faibishenko (312)251-3267
    Senior Consultant (800)462-6301
    Metamor Technologies, Inc. [email protected]
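    Mark's save-time version check can be sketched in a few lines (a generic illustration, not Forte code; the class and identifiers are made up):
    // Sketch: optimistic version check at save time, as described above.
    public final class LockManager
        {
        private final Map<String, Long> versions = new HashMap<String, Long>();

        // Save succeeds only if the caller still holds the latest version.
        public synchronized long save(String objectId, long versionRead)
            {
            long current = versions.containsKey(objectId) ? versions.get(objectId) : 0L;
            if (versionRead != current)
                {
                throw new IllegalStateException(
                    "Stale update of " + objectId + ": read v" + versionRead + ", latest is v" + current);
                }
            long next = current + 1;             // D) update the version stamp
            versions.put(objectId, next);
            return next;                         // caller then refreshes the cache (E)
            }
        }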

