Should OS/FileSystem caching be write-through?

I have a question. I use Ubuntu. Should I mount the filesystem that holds BDB's content with the "-o sync" option? That is, should my file system cache be write-through?
I ask because, if I turn on the logging feature in Berkeley DB but leave the file system cache as write-back, I don't know for certain whether the log is actually flushed to disk or not.

Thanks George. I agree that mature applications would be better off mounting their filesystem with the "-o sync" option.
But here is the thing: I ran a test case inserting 10 million key-value pairs with logging enabled and measured an average response time of 10 milliseconds per insertion; I then ran the same experiment with logging disabled and it also averaged 10 milliseconds per insertion.
For the experiment with logging enabled, I create the environment with the DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I believe this way of doing insertions is called autocommit. I hope I am running this experiment correctly.
Thanks for the pointers about set_flags() and DB_TXN_NOSYNC; I am going to look them up.
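For what it's worth, here is a minimal sketch of the autocommit setup described above, written against the Berkeley DB Java binding (com.sleepycat.db) rather than the C/C++ API; the environment path and the key/value strings are placeholders. Passing a null transaction to put() in a transactional environment gives the autocommit behaviour, and setTxnNoSync(true) is the Java-side equivalent of set_flags(DB_TXN_NOSYNC) - it trades durability of the most recent commits for speed by skipping the log flush on commit.

    import java.io.File;

    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseConfig;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseType;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;

    public class AutoCommitInsertDemo {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setInitializeCache(true);    // DB_INIT_MPOOL
            envConfig.setInitializeLocking(true);  // DB_INIT_LOCK
            envConfig.setInitializeLogging(true);  // DB_INIT_LOG
            envConfig.setTransactional(true);      // DB_INIT_TXN
            // envConfig.setTxnNoSync(true);       // DB_TXN_NOSYNC: skip the log fsync on commit

            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            dbConfig.setType(DatabaseType.BTREE);
            Database db = env.openDatabase(null, "test.db", null, dbConfig);

            // A null transaction in a transactional environment means each
            // put() runs as its own auto-commit transaction.
            db.put(null,
                   new DatabaseEntry("key-1".getBytes("UTF-8")),
                   new DatabaseEntry("value-1".getBytes("UTF-8")));

            db.close();
            env.close();
        }
    }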

Similar Messages

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
          I can wrap the whole operation in a JDBC transaction that rolls back the database correctly, but the caches are not all rolled back because they are committed one by one.
          For example, I write to two transaction maps, each one created from a separate cache. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

          > The transaction support in Coherence is for Local
          > Transactions. Basically, what this means is that the
         > first phase of the commit ("prepare") acquires locks
         > and ensures that there are no conflicts. The second
         > phase ("commit") does nothing but push data out to
         > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
          If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
          This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
          For the write-through case, I believe the database and cache are updated atomically.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
          But once we have multiple CacheStore modules, as soon as one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back that database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
          If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? In that case, write-behind cache stores would be no different from local transactions... Is my understanding correct?
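     For reference, here is a minimal sketch of the classic local-transaction pattern this thread is discussing, using two TransactionMaps and CacheFactory.commitTransactionCollection(); the cache names and the retry count are placeholders, and - as discussed above - this does not make write-through CacheStore calls part of the transaction.

     import java.util.Arrays;
     import java.util.Collection;

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.TransactionMap;

     public class TwoCacheLocalTx {
         public static void main(String[] args) {
             NamedCache cacheA = CacheFactory.getCache("cacheA");
             NamedCache cacheB = CacheFactory.getCache("cacheB");

             TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
             TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);
             for (TransactionMap tx : new TransactionMap[] {txA, txB}) {
                 tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                 tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                 tx.begin();
             }

             txA.put("k1", "v1");
             txB.put("k2", "v2");

             // commitTransactionCollection() prepares (locks) every map first and
             // only then commits; if it reports failure, roll everything back.
             Collection txs = Arrays.asList(txA, txB);
             if (!CacheFactory.commitTransactionCollection(txs, 1)) {
                 CacheFactory.rollbackTransactionCollection(txs);
             }
         }
     }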

  • Migrating from 3.1 to 3.7 - write through for a custom cache store issues

    We're migrating from 3.1 to 3.7. So far the migration and testing has been fairly uneventful, but there is one issue that came up yesterday that seems like it is going to be tricky to debug.
    We have a set of storage-enabled nodes that use a custom CacheStore to read from and write behind to a Mongo database. On another node connected to that caching service, read-throughs work just fine (I can set breakpoints on the CacheStore load method and see the load calls coming through). What's not working is when the other node does a Cache.put - the store method on the CacheStore is never called, and so far I don't see anything in the logs indicating there is a problem on either side (I'm going to make sure that the Coherence logging is at the highest level on both nodes today when I do more testing).
    I can see the cache put start to dive into the coherence jar, but I don't have source jars for coherence so it's fairly opaque what might be going wrong after the Cache.put(object, object) call. I can see that it dives into various Coherence methods, but I can't follow it much further than that.
    Any ideas on where to start debugging this?
    This setup worked fine on 3.1, and as best we can tell all the API calls were converted over to their proper coherence 3.7 versions, and the coherence.xml files were migrated to use the new xsd etc.

    it seems that the issue might be related to this:
    2012-08-15 14:19:34.086 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.cache.MongoCacheStore):Foo.com-CMS, member=13): Failed to store key="assetId=DEFAULT;assetStyle=DEFAULT;initial=c;siteId=foosite;"
    2012-08-15 14:19:34.087 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.configrepo.cache.MongoCacheStore):Foo.com-CMS, member=13): (Wrapped) java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterFromBinary.convert(PartitionedCache.CDB:4)
         at com.tangosol.net.cache.BackingMapBinaryEntry.getValue(BackingMapBinaryEntry.java:124)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:5731)
         at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:4814)
         at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:4217)
         at com.tangosol.util.Daemon$DaemonWorker.run(Daemon.java:803)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2303)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2746)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
         ... 7 more
    Looks like it is an issue with the serialization? We're primarily using XmlBean, not POF, for serialization.
    Any tips on troubleshooting this?

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction (with Optimistic Read-Committed settings) that involves multiple caches, some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
         The backing map (in this case, <tt>com.tangosol.net.cache.ReadWriteBackingMap</tt>) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
         In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see this FAQ Item on CacheStore Exceptions.
         Jon Purdy
         Tangosol, Inc.

  • Write-through cache error kills coherence

    Hi, we have a write through cache, and when there's an error writing to the cache store, the cache dies. The first error we see is:
    3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.918/34.768 Oracle Coherence GE 3.5.3/465 <Info> (thread=DistributedWriteThroughWorker:3, member=1): (Wrapped: Failed to store key="1342708115777") java.lang.RuntimeException: Failed to store hibernate entity
    This is normal, since there's a legitimate reason why this couldn't get stored, but after the stack trace we see the output below. The first thing is the "Terminating DistributedCache", which just kills the node and makes it unusable; then there's the "unknown user type" message, as if it's trying to send something over the wire but can't, although this "WrapperException" is not one of our classes. Clearly we don't want the cache dying, we want to continue to use it, but any subsequent request to the cache fails. Any ideas?
         [java] ERROR | 07-19-2012 07:28:36.116 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalArgumentException
         [java] ERROR | 07-19-2012 07:28:36.117 | Logger@9230760 3.5.3/465 | Coherence(3) - 2012-07-19 07:28:35.919/34.769 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedWriteThroughWorker:3, member=1):
         [java] java.lang.IllegalArgumentException: unknown user type: com.tangosol.util.WrapperException
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:400)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:389)
         [java]      at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1432)
         [java]      at com.tangosol.io.pof.ConfigurablePofContext.serialize(ConfigurablePofContext.java:338)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.writeObject(Service.CDB:4)
         [java]      at com.tangosol.coherence.component.net.Message.writeObject(Message.CDB:1)
         [java]      at com.tangosol.coherence.component.net.message.DistributedCacheResponse.write(DistributedCacheResponse.CDB:2)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.packetizeMessage(PacketPublisher.CDB:137)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher$InQueue.add(PacketPublisher.CDB:8)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.dispatchMessage(Grid.CDB:50)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.post(Grid.CDB:53)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onPutRequest(DistributedCache.CDB:146)
         [java]      at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$PutRequest.run(DistributedCache.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         [java]      at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
         [java]      at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         [java]      at java.lang.Thread.run(Thread.java:662)

    Ok, figured it out. I failed to include the standard "coherence-pof-config.xml" when loading our POF config file, and that's what caused the trouble. Including it solved the problem of the dying cache.

  • Pre-load a write-through cache

    This probably is a common problem.
    I want to use a write-through cache to keep the cache and the db in sync. But at application start-up, I'd like to pre-load the cache. Is there any way I can disable the write-through behavior during pre-loading, and enable it after pre-loading is complete?

    Wouldn't you be better off loading the data into the database first, and then just querying it to get it into the cache? That way, the cache and the db are 'in sync', and no write operations have occurred; also, you don't need to 'flip' any cache settings. Subsequent updates to cache items would then be written to the database as normal.
    Surely if the data wasn't in the db first, you'd end up with a large 'write' operation to place it there once you've loaded the cache and 'flipped' it over, which could be quite a lengthy process. I can't see the advantage of this over putting the data in the db first.
    I'd be interested to know more about your specific use-case, as I'm just about to embark on a similar 'loader' program. In my case, I was planning on loading the db first. I'd be interested in any alternative approaches and the reasoning behind them (or, likewise, if you can't actually do it the way I was planning! :)).
    Steve
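     Following Steve's suggestion, a minimal warm-up sketch (the cache name and keys are hypothetical): reading keys through the cache calls CacheStore.load()/loadAll() against the database, so the cache fills without a single store() being issued and nothing needs to be 'flipped'.

     import java.util.Arrays;
     import java.util.Collection;

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     public class CacheWarmer {
         // Bulk read-through: misses are loaded from the database,
         // and no write-through store() calls are triggered.
         public static void warm(String cacheName, Collection keys) {
             NamedCache cache = CacheFactory.getCache(cacheName);
             cache.getAll(keys);
         }

         public static void main(String[] args) {
             // In practice the key list would come from the database itself.
             warm("orders", Arrays.asList("order-1", "order-2", "order-3"));
         }
     }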

  • Map listeners and write-through strategy.

    Hi.
    Write-through seems to be a synchronous operation, and if it fails, no data should appear in the cache. Logically this means that no events will be produced if the persisting fails (which is exactly what we need). But I could not find any mention of this in the documentation. Can anyone verify this?
    Thanks, Anton.

    If you are talking about throwing an exception in your CacheStore code,
    it will happen before anything occurs in the internal cache managed by
    Coherence, and no events will be generated (events that would have been
    generated in the normal case where the operation succeeded).
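     To make that guarantee concrete, here is a small sketch (the class and cache names are hypothetical, and it assumes the cache is configured for synchronous write-through with CacheStore failures propagated back to the caller): the store() exception is raised before the backing map is updated, so the listener never receives an event.

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.net.cache.AbstractCacheStore;
     import com.tangosol.util.MapEvent;
     import com.tangosol.util.MultiplexingMapListener;

     public class NoEventOnStoreFailure {

         // Hypothetical store that always fails, simulating a database outage.
         // It is assumed to be wired to "wt-cache" via a read-write-backing-map-scheme.
         public static class FailingStore extends AbstractCacheStore {
             public Object load(Object key) {
                 return null;
             }
             public void store(Object key, Object value) {
                 throw new RuntimeException("simulated database failure");
             }
         }

         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("wt-cache");
             cache.addMapListener(new MultiplexingMapListener() {
                 protected void onMapEvent(MapEvent evt) {
                     System.out.println("event: " + evt);   // not expected to fire
                 }
             });
             try {
                 cache.put("k", "v");   // store() throws, so the put fails
             } catch (RuntimeException e) {
                 System.out.println("put failed; no MapEvent was delivered");
             }
         }
     }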

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK

  • Error removing object from cache with write behind

    We have a cache with a DB for a backing store. The cache has a write-behind delay of about 10 seconds.
    We see an error when we:
    - Write new object to the cache
    - Remove object from cache before it gets written to cachestore (because we're still within the 10 secs and the object has not made it to the db yet).
    At first I was thinking "Coherence should know whether the object is in the db or not, and do the right thing", but I guess that's not the case?


  • Write-through limitation and putAll

    Please find the quote below from the developer guide, particularly this sentence: "In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail."
    If a putAll is called on a cache, will it result in one CacheStore.storeAll or in many storeAll calls triggered from different Coherence nodes/servers? (Assume a distributed topology, Coherence 3.7.1.)
    Will the store transaction failure lead to putAll transaction failure?
    Are there any patterns that shows how this coherence works with typical databases?
    14.7.2 Write-Through Limitations
    Coherence does not support two-phase CacheStore operations across multiple CacheStore instances. In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).

    gs100 wrote:
    Thanks for the input, I have further questions based on these suggestions.
    1. Let us say one of the putAll calls fails; we would know that it has failed due to one or more underlying store/storeAll calls. And even if we roll back the Coherence transaction, the store/storeAll calls that succeeded would not be rolled back automatically, is that correct? If true, this means it would leave the underlying DB/store in a state inconsistent with the in-memory cache?
    I guess that is one of the reasons why the transaction framework does not support cache stores... also, write-behind would coalesce updates, which would have funny consequences with regards to the transactional context...
    2. How do we get the custom implementation of putAll that you suggested to handle specific errors? Any pointers on this would be helpful.
    I guess it is not going to be posted; the Coherence team may or may not add something which is a bit more deterministic with regards to errors.
    A few aspects of Coherence behaviour (a.k.a pitfalls) which you need to be aware of to be able to implement your own solution:
    Exceptions propagating back to the client can happen in:
    - entry-processor (not for putAll specifically)
    - result serialization code (not for putAll specifically, but for processAll/aggregate for example)
    - deserialization code (indexes/filter-based backing map listeners/cache stores lead to deserialization even for putAll)
    - triggers (intentionally, too)
    - cache stores
    There is no place where you could catch any exceptions from inside the NamedCache call, so they will come out.
    Coherence may execute the operation on one thread per partition or one thread per multiple partitions, but never on multiple threads per partition. This means there may be multiple exceptions even from a single storage node, but only at most one exception would be generated per partition (starting with 3.5).
    If you send multiple partitions with the same NamedCache call, you can lose exceptions as you wouldn't know if an exception would have or wouldn't have happened with a partition if it was sent alone instead of together with another on the same node.
    As you need to be able to return all exceptions from your method call, you have to produce and catch all of them and collect them otherwise you would lose all but one. To produce and catch all exceptions you have to produce all exceptions independently, i.e. different partitions must be operated on independently.
    To send an operation to a single partition only, you can separate the operations to different partitions by separating the keysets for different partitions with key-based operations, or applying a PartitionedFilter for filter-based operations.
    It is up to you where and how you iterate through the partitions. You can do it on the caller, you can do it on storage node from an Invocable sent via an InvocationService (in this case you can be either optimistic with ownership or chase a partition).
    3. We are thinking that the putAll Coherence implements is the most optimized (parallelism). I am not sure how a custom implementation can be as optimal (hope we don't end up calling one by one).
    You cannot implement it as optimally as Coherence itself does, as Coherence interleaves operations (Messages) to independent partitions/nodes from a single thread without waiting for the responses from individual nodes/partitions.
    You can either parallelize operations to multiple threads, or do the iteration on the single thread at the cost of higher latency.
    Best regards,
    Robert
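     For what it's worth, here is a minimal sketch of the key-splitting approach Robert describes (the class and method names are mine, and getKeyPartition() is assumed to accept the plain, unserialized key, which holds for the default partitioning strategy): group the entries by owning partition, issue one putAll per partition, and collect the failures instead of losing them.

     import java.util.HashMap;
     import java.util.Iterator;
     import java.util.Map;

     import com.tangosol.net.NamedCache;
     import com.tangosol.net.PartitionedService;
     import com.tangosol.net.partition.KeyPartitioningStrategy;

     public class PartitionAwarePutAll {

         // Split a putAll into one call per partition so at most one exception
         // can be raised per partition and none of them are lost.
         // Returns the exceptions that occurred, keyed by partition id.
         public static Map putAllByPartition(NamedCache cache, Map entries) {
             PartitionedService service = (PartitionedService) cache.getCacheService();
             KeyPartitioningStrategy strategy = service.getKeyPartitioningStrategy();

             // Group the entries by owning partition.
             Map mapByPartition = new HashMap();   // Integer -> Map
             for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                 Map.Entry entry = (Map.Entry) it.next();
                 Integer part = Integer.valueOf(strategy.getKeyPartition(entry.getKey()));
                 Map chunk = (Map) mapByPartition.get(part);
                 if (chunk == null) {
                     mapByPartition.put(part, chunk = new HashMap());
                 }
                 chunk.put(entry.getKey(), entry.getValue());
             }

             // One putAll per partition; collect failures instead of losing them.
             Map failures = new HashMap();         // Integer -> RuntimeException
             for (Iterator it = mapByPartition.entrySet().iterator(); it.hasNext(); ) {
                 Map.Entry entry = (Map.Entry) it.next();
                 try {
                     cache.putAll((Map) entry.getValue());
                 } catch (RuntimeException e) {
                     failures.put(entry.getKey(), e);
                 }
             }
             return failures;
         }
     }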

  • Write through and CacheStore

    Hi,
         I'm running a near cache implementation, with the front being a local cache and the back being a distributed cache. The distributed cache has a local cache and a read-write-backing-map-scheme for persisting each cache to disk every t minutes (for backup purposes - we still use a cache in memory).
         I have a few questions about the Write through capabilities and CacheStore so as to better understand what's going on here:
         1. We only need to store the data to disk for backup in case of complete cluster failure (for example, all n machines in our cluster go down). Right now my implementation of the CacheStore has one line which reads "return null" for the following methods:
         load(..)
         loadAll(..)
          Is there a more efficient/effective way to write to disk and ignore reads if an item does not exist in the distributed map, rather than extending CacheStore and returning null for all the methods noted above?
          My reading and writing to disk is done using the ExternalizableHelper class - thanks for this nice work.
          2. How are CacheStores instantiated initially? Since we want each cache (say we have two different caches here for simplicity) backed up to file every t minutes, do we have to have a separate CacheStore object for each cache? What is the best practice for attaching a CacheStore to a particular cache?
         For example, I start two Tangosol instances, one on machineA and one on machineB, both storing data as per my configuration. The 2 caches being used are "cacheA" and "cacheB". So when I start the two Tangosol instances, I have to instantiate CacheStore twice so that I can separately write "cacheA" and "cacheB" to their own separate files.
         3. When backup to disk occurs, is there any removing of items from the distributed cache?
         4. I'm not completely sure on the write delay here. What if an item in the cache is just added once, and no updates occur on it (ie. just one put, and 0+ gets). After a specified amount of time, will this be written to disk, or does an update on this object in the cache have to occur before this item can be added to the write queue and eventually written to disk? Once an item is added for the first time, does this trigger the update time for this object to be the first write time?
         Thanks,
         - Noah

    Hi Noah,
         1. No, load() and loadAll() returning null is the most effective way of implementing this.
         2. You can pass the cache name as a constructor parameter - see Parameter Macros in the Coherence User Guide.
         3. No, nothing is removed from the cache
         4. Writes are only triggered by put()'s.
          For more information please take a look at this forum post: What is Read-Through/Write-Through/Write-Behind Caching? (http://www.tangosol.net/forums/thread.jspa?threadID=445&tstart=0)
         Regards,
         Dimitri
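     Putting Dimitri's points together, a sketch of such a write-only store (the class name and the on-disk format are hypothetical; the cache name is injected through the constructor, e.g. via the {cache-name} parameter macro in the cache configuration, so "cacheA" and "cacheB" each get their own file):

     import java.util.Collection;
     import java.util.Iterator;
     import java.util.Map;

     import com.tangosol.net.cache.CacheStore;

     // Write-only store: reads always miss (load returns null), writes go to
     // a per-cache backup file named after the injected cache name.
     public class BackupFileStore implements CacheStore {

         private final String cacheName;

         public BackupFileStore(String cacheName) {
             this.cacheName = cacheName;
         }

         public Object load(Object key) {
             return null;               // never read back from disk
         }

         public Map loadAll(Collection keys) {
             return null;
         }

         public void store(Object key, Object value) {
             writeToDisk(key, value);
         }

         public void storeAll(Map entries) {
             for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                 Map.Entry e = (Map.Entry) it.next();
                 writeToDisk(e.getKey(), e.getValue());
             }
         }

         public void erase(Object key) {
             // no-op for a backup-only store
         }

         public void eraseAll(Collection keys) {
             // no-op for a backup-only store
         }

         private void writeToDisk(Object key, Object value) {
             // Placeholder: serialize to a file derived from cacheName,
             // e.g. using ExternalizableHelper as mentioned above.
         }
     }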

  • What triggers a write-through/write-behind of entry processor changes?

    What triggers a write-through/write-behind when a cache entry is modified by a custom entry processor (subclass of AbstractProcessor)? Is it simply the call to Entry.setValue() that triggers this or are entry modifications detected in some other way?
    The reason I am asking is that in our Coherence cluster it looks like some entry modifications through entry processors have not triggered a write-behind, and I see no logical pattern as to which ones have and which ones haven't except that some specific entries are just never written to our database. We see from our logs that our implementation of the CacheStore.store() method is not called in these cases, and we also see that the cache entry has been modified successfully.
    We are using Coherence 3.3 on a three machine cluster with 8 nodes on each machine, accessed from clients through a TCP extend proxy.
    Regards,
    Mikael Carlstedt
    mBlox Inc

    Hi Mikael
    Calling setValue() will result in a call to the CacheStore.store() method unless the value you are setting is the same as the existing entry value. If you are using write-behind then storeAll() will be called instead of store() if there are multiple entries waiting to be stored. Write-behind will also coalesce entries so that only the last entry for a given key will be stored.
    What patch level are you using?
    Paul
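     As an illustration of the behaviour Paul describes (a sketch only; the class name and value handling are hypothetical): an entry processor only marks the entry dirty, and therefore only queues a write-through/write-behind store, when setValue() is called with a value that differs from the existing one.

     import java.io.Serializable;

     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     public class TouchProcessor extends AbstractProcessor implements Serializable {

         public Object process(InvocableMap.Entry entry) {
             String value = (String) entry.getValue();
             String updated = value + "-touched";
             if (!updated.equals(value)) {
                 // setValue() is what marks the entry as modified; mutating the
                 // value object in place without calling it will not trigger store().
                 entry.setValue(updated);
             }
             return updated;
         }
     }

     It would be invoked as cache.invoke("someKey", new TouchProcessor()); if POF is used instead of Java serialization, the processor would also need to be registered in the POF configuration.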

  • Switching from write through to write behind automatically

    Hi,
    We are considering a Coherence solution to protect a customer-facing application from outages due to database failures. This is for a financial company where the monetary value of each transaction is large, and we want to provide a 100% guarantee against data loss while not incurring any outages. We want to provide write-through persistence to the database through Coherence that can switch to write-behind automatically at runtime if database persistence fails. Is this doable automatically, and would it solve the problem I am trying to solve without losing any in-flight transactions? Are there any real customer cases that were successful in achieving this using Coherence?
    Thanks
    Sairam

    SKR wrote:
    Jonathan.Knight wrote:
    Hi Sairam
    I know you can change the write-delay in JMX for a cache using write-behind, but I am pretty certain you cannot make a write-through cache suddenly become a write-behind cache.
    I'm not sure why you think changing from write-through to write-behind will allow you to guarantee 100% no data loss - do you mean no loss of updates to the DB, or no loss of data in the cache cluster? There are certainly scenarios where you can lose data from either the cluster or the DB that write-through or write-behind will not save you from. Presumably you want to use write-behind to allow for the DB to go down, although you will still need to configure Coherence to properly retry failed write-behind calls (see CacheStore behaviour on failure). What happens to your data if you are using write-behind and you lose a partition from your cluster (i.e. you lose a physical machine, or two or more JVMs in a short space of time)? You have data loss - you cannot guarantee against this, you can only mitigate it and have a recovery policy/procedure.
    JK
    JK,
    Thanks for your reply. I should have explained the scenario better. What we are trying to do is to have our transactions commit to the database synchronously using write-through, so that during normal operation the data is committed, persisted and durable in the database. But our RW database becomes a single point of failure, and if some problem occurs to the database during peak load time, we run the risk of an outage until we fix the database problem or fail over to the standby (we don't have a RAC architecture or automatic failover, and the manual switchover takes about 10-15 mins minimum). We want to avoid this by providing a cache-only operation mode during such a failure, where the customers can continue to transact and the writes will get queued in the cache. I do understand that losing both the database and the cache, or losing both the primary and the backup in the cache, would result in data loss. But I am assuming such a dual failure is rare.
    We do not want to run write-behind all the time but only during the database failure window. From what you mentioned, it seems that runtime switching from write-through to write-behind is not available as an option.
    Sairam
    Hi Sairam,
    I would suggest that you configure write-behind to have a fairly short write-delay, and you only return a confirmation to the client
    - either after the write-behind succeeded (you can use a backing map listener to listen for the removal of the decoration which meant that the entry was dirty)
    - or if the database went down (noticeable from the failure), then it is up to you whether you send a confirmation which also mentions that it is not persisted to disk yet, or not at all
    Best regards,
    Robert

  • Tuning filesystem cache usage to benefit Oracle db workloads

    We have several OEL 6 servers running Oracle 11g workloads. Typical output from a "free" command on a system with 8 Gig RAM shows 5 to 6 Gig cache usage.
    Are there VMM tuning parameters that can be set to bias the system to use less filesystem cache?
    In EL 5 and earlier there was a kernel tunable called vm.pagecache that I believe was used to tune cache behavior, but it is not present in OEL 6.
    Thanks

    IBM provides excellent documentation.
    Unfortunately, finding such information for Linux requires getting bits and pieces from here and there. The available documentation often has too much room for interpretation if you already have a technical background, or is too detailed, making it difficult to comprehend. However, I guess this problem pretty much applies to all products developed during the last decade.
    I have honestly no experience with AIX. My experience with commercial Unix systems is limited to Tru64, which is all RIP now. IT is a strange business, often favoring trivial solutions over superior ones, but eventually coming around in the end. Linux has certainly come a long way. However, it is under constant development and can implement drastic changes, for instance in raw disk support.
    By the way, OEL was renamed to Oracle Linux. OL uses the Oracle UEK kernel by default, which is optimized for Oracle products. Product patches and errata can be downloaded for free from Oracle Public Yum. To simplify the initial installation, you can install the "oracle-rdbms-server-11gR2-preinstall" package. It sets up kernel parameters, oracle groups and accounts, and triggers the installation of prerequisite software for Oracle DB installation.
    For performance reasons, you might want to set up the system to use kernel hugepages, which is apparently the number one performance issue related to Oracle under Linux. Hugepages provide a significant reduction in server resource utilization for memory and processing. The following links should be helpful:
    /dev/shm on Oracle Linux 6.x to run Oracle 11g R2 - manual configuration?
    Re: Understanding kernel hugepages and Oracle 11g AMM compatibility

  • Should I run ATV digital signal through Pioneer elite tuner DAC

    I currently have
    2 ATV-2
    2 Airport Expresses
    1 SC27 Pioneer Elite AV Tuner (with a really nice DAC and fiber inputs) - this is mainly dedicated to my home theater but has 2 additional zones
    1 Denon 5.1 tuner with fiber inputs ( this is curently powering the 4 ceiling speakers but may be repurposed for another area)
    1 Marantz analog amp (5004) that I would like to use to power the in ceiling speakers from either an ATV or AE.
    Question,
    1. Should I run the AE directly to the Marantz using the AE DAC?
    2. Should I run the AE digital through the Pioneer DAC and then analog to the Marantz?
    3. Should I hook up the ATV digital to the Pioneer DAC and then analog to the Marantz?
    4. Should I hook up all of them and switch back and forth to find the best fit? I think I have enough cables for that.
    Thanks

    ScooterB -
    NTSC (Analog TV) ended in the USA/Canada in 2009.
    Over The Air (OTA) Television uses ATSC (Digital TV)
    Cable Systems (CATV) often use QAM, or other digital modulations.
    Satellite TV uses other methods, BUT may provide Composite Video (Yellow RCA/S-Video) connector OR an old-style RF modulator for NTSC (channel 3/4).
    Interconnects vary depending on your Service Provider and the TV Card in your computer.
    http://www.hdtvprimer.com/issues/what_is_atsc.html
    HP Pavilion Media Center TV m7664x Desktop PC Support
    http://h10025.www1.hp.com/ewfrf/wc/product?cc=us&dlc=en&docname=c00757531&lc=en&prod...
    Your HP computer case (Code Name: Grand Canyon) is designed for microATX motherboards.
    HP used the Hauppauge Computer Works internal WinTV cards, in this time period.
    http://www.hauppauge.com/site/products/prods_hvr_internal.html
    SOME of these OEM cards, such as the HVR-1260 model, were specially built for HP.
    In looking at the Windows XP Drivers, for this computer, it shipped in Fall 2006 with:
    1.) Conexant Falcon II TV tuner solution OR
    2.) Hauppauge WinTV PVR PCI II 23xxx, 25xxx and 26xxx TV tuner solutions.
    Open up your computer -- the TV Tuner is an ADD-IN Expansion Card.
    You can contact Hauppauge for additional support information.
    Hauppauge Computer Works -- WinTV products
    http://www.hauppauge.com/index.htm
    Ceton Infinitv products
    http://cetoncorp.com/products/infinitv/
