Concur_optimistic, concur_pessimistic

Hi,
     I am trying to understand concurrency controls under Tangosol Coherence.
     In the database context, pessimistic locking is typically implemented as a persistent read lock on the row in question at read time, which is subsequently escalated to a write lock on update.
     Optimistic locking does not employ persistent read locks at read time; however, some kind of version check is performed at write time (this is typically application-specific).
     Does Coherence work the same way? I understand CONCUR_PESSIMISTIC locks the underlying cache entry upon read. What about CONCUR_OPTIMISTIC? Is there some kind of version check done in this case?
     Thanks,
     CR

<blockquote>concur_optimistic under REPEATABLE_READ and SERIALIZABLE behaves similarly to Oracle's implementation of the SERIALIZABLE isolation level, in that an exception is thrown if the underlying row/cache entry has been updated by a different transaction AFTER the row/cache entry was read by the current transaction.</blockquote>
     Yes. That is exactly the intent -- to "validate" the transaction during the prepare processing to determine if the constraints (i.e. versions haven't changed) still hold true.
     Coherence even allows you to "custom validate" so you can automatically resolve certain kinds of changes that you don't want to cause a transaction rollback (i.e. you can give permission for the transaction to commit, even if some of the data was changed by another transaction, by fixing up the data yourself in a "transaction validator" -- but this is a very advanced feature primarily used for very long running transactions).
     Peace,
     Cameron Purdy
     Tangosol, Inc.
     Coherence: Shared Memories for J2EE Clusters
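The validation idea described above can be sketched in plain Java. This is a hypothetical illustration of version checking at prepare/commit time, not Coherence's actual internals; all names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class OptimisticValidationSketch {
    // Shared "cache" of key -> version (values omitted for brevity).
    static final Map<String, Integer> versions = new HashMap<>();

    // Reading an entry records the version the transaction saw.
    static int read(String key) {
        return versions.getOrDefault(key, 0);
    }

    // At prepare time the remembered version must still match; if another
    // transaction committed in between, validation fails and we roll back.
    static boolean prepareAndCommit(String key, int versionSeen) {
        if (versions.getOrDefault(key, 0) != versionSeen) {
            return false; // constraint violated -> rollback
        }
        versions.put(key, versionSeen + 1); // committing bumps the version
        return true;
    }

    public static void main(String[] args) {
        int seenByTx1 = read("entry");
        int seenByTx2 = read("entry");
        System.out.println(prepareAndCommit("entry", seenByTx1)); // true
        System.out.println(prepareAndCommit("entry", seenByTx2)); // false: stale read
    }
}
```

A "custom validator" in this picture would be a hook that, instead of returning false, could repair the conflicting data and allow the commit to proceed.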

Similar Messages

  • PutAll() with Transactions is slow

    Hi Guys,
    I am looking into using transactions, but they seem to be really slow - perhaps I am doing something silly. I've written a simple test to measure the time taken by putAll() with and without transactions and putAll() with transactions takes nearly 4-5 times longer than without transactions.
    Results:
    Average time taken to insert without transactions 210ms
    Average time taken to insert WITH transactions 1210ms
    Test code:
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.TransactionMap;

    public class TestCoherenceTransactions {
         private static final int MAP_SIZE = 5000;
         private static final int LIST_SIZE = 15;
         private static final int NO_OF_ITERATIONS = 50;

         public static void main(String[] args) {
              NamedCache cache = CacheFactory.getCache("dist-cache");
              Map dataToInsert = new HashMap();
              for (int i = 0; i < MAP_SIZE; i++) {
                   List value = new ArrayList();
                   for (int j = 0; j < LIST_SIZE; j++) {
                        value.add(j + "QWERTYU");
                   }
                   dataToInsert.put(i, value);
              }
              long timeTaken = 0;
              long startTime = 0;
              for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                   cache.clear();
                   startTime = System.currentTimeMillis();
                   cache.putAll(dataToInsert);
                   timeTaken += System.currentTimeMillis() - startTime;
              }
              System.out.println("Average time taken to insert without transactions " + timeTaken / NO_OF_ITERATIONS);
              timeTaken = 0;
              for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                   cache.clear();
                   startTime = System.currentTimeMillis();
                   TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
                   mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                   mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                   Collection txnCollection = Collections.singleton(mapTx);
                   mapTx.begin();
                   mapTx.putAll(dataToInsert);
                   CacheFactory.commitTransactionCollection(txnCollection, 1);
                   timeTaken += System.currentTimeMillis() - startTime;
              }
              System.out.println("Average time taken to insert WITH transactions " + timeTaken / NO_OF_ITERATIONS);
              System.out.println("cache size " + cache.size());
         }
    }
    Am I missing something obvious? I can't understand why transactions are so slow. Any pointers would be very much appreciated.
    Thanks
    K

    Hi,
    The TransactionMap is this slow because, when you are using CONCUR_PESSIMISTIC or CONCUR_OPTIMISTIC, it locks and unlocks every entry you modify (or even read, if you use at least the REPEATABLE_READ isolation level) during the lifetime of the transaction, one by one, as there is no lock-many functionality. Since each lock and unlock operation needs a network call (and that call needs to be backed up from the primary node to the backup node in a distributed cache), the more entries you need to lock and unlock, the more latency is introduced.
    Best regards,
    Robert
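The per-entry locking cost described above can be put into back-of-envelope arithmetic. The latency figures below are hypothetical, chosen only to show the shape of the cost, not measured values:

```java
public class LockCostSketch {
    // Hypothetical per-call latencies in microseconds; each lock and unlock is
    // a network round trip, and in a distributed cache the lock is also backed
    // up from the primary node to the backup node.
    static final double LOCK_MICROS = 100.0;
    static final double UNLOCK_MICROS = 100.0;

    // Extra milliseconds spent purely on per-entry lock/unlock traffic.
    static double lockingOverheadMillis(int entries) {
        return entries * (LOCK_MICROS + UNLOCK_MICROS) / 1000.0;
    }

    public static void main(String[] args) {
        // With MAP_SIZE = 5000 entries, as in the test above:
        System.out.println(lockingOverheadMillis(5000) + " ms"); // 1000.0 ms
    }
}
```

Even at an assumed 0.1 ms per lock and per unlock, 5000 entries add about a second per putAll, which is roughly the gap measured in the test above (1210 ms vs 210 ms).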

  • TransactionMap.put() -  getting stuck!

    Hi,
         In one of my components, code execution just hangs at the last line of this
         code for some reason, i.e. at the line where I perform a "put" operation!
         Since it's not even throwing an error, I'm not able to deduce the exact cause
         of the problem. Please advise!
         Code Snippet
         // initiate a transaction map - wrapped around current instance of Cache region.
         TransactionMap tranMap = CacheFactory.getLocalTransaction(this.fxpbCache.getCoherenceCache());
         // repeatable-read isolation
         tranMap.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
         // lock data while reading
         tranMap.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
         // begin transaction
         tranMap.begin();
         //upload data
         tranMap.put(12345, <Object>);
         regards
         Mike

    Sandeep,
         The only reasons the "put" operation can get blocked for the TransactionMap are that (1) you are using pessimistic concurrency control and, as a result, the TransactionMap attempts to lock the specified key; and (2) that key is already locked by another transaction.
         It looks like you have a deadlock situation, where one transaction updates key1 and then key2, while another updates key2 and then key1.
         If you change your concurrency to TransactionMap.CONCUR_OPTIMISTIC, you will see that one of the inter-locked transactions will be rolled back.
         Regards,
         Gene
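The classic remedy for the cross-order scenario described above is to acquire locks in a single global key order. A minimal sketch using plain java.util.concurrent locks (not the Coherence API; all names here are invented for the example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockingSketch {
    static final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Acquire every key's lock in sorted key order. Two transactions that both
    // need key1 and key2 then always contend on key1 first, so neither can
    // hold key2 while waiting for key1 -- the circular wait never forms.
    static List<ReentrantLock> lockAll(Collection<String> keys) {
        List<String> sorted = new ArrayList<>(keys);
        Collections.sort(sorted);
        List<ReentrantLock> held = new ArrayList<>();
        for (String key : sorted) {
            ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
            lock.lock();
            held.add(lock);
        }
        return held;
    }

    // Release in reverse order of acquisition.
    static void unlockAll(List<ReentrantLock> held) {
        for (int i = held.size() - 1; i >= 0; i--) held.get(i).unlock();
    }

    public static void main(String[] args) {
        // Callers may name their keys in any order; acquisition is still sorted.
        List<ReentrantLock> held = lockAll(Arrays.asList("key2", "key1"));
        System.out.println("acquired " + held.size() + " locks in sorted order");
        unlockAll(held);
    }
}
```

The same discipline applies to TransactionMap usage: if every transaction touches common keys in the same order, the key1/key2 cross-order deadlock cannot occur.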

  • TransactionMap deadlock

    I'm trying to populate a partitioned cache on multiple nodes: I keep reading data from an external source and calling putAll to add the entries to the cache, with all the operations wrapped inside a transaction, and commit the transaction after all the data is loaded.
    Here are the problems I run into:
    1. If I use CONCUR_OPTIMISTIC, then the result isn't correct, and many entries are lost.
    2. If I use CONCUR_PESSIMISTIC, then I keep running into deadlock issues when more than one node is populating the cache, even for different portions of the data.
    TRANSACTION_REPEATABLE_GET is used for both cases.
    Is there any logical explanation for such behavior? (The javadoc on TransactionMap does not seem to suggest such results.)

    Hi,
    First of all, why are you using a TransactionMap at all? It will just make things slower, and the loading process will have a much higher memory footprint (as it has to store the old values from the cache in the TransactionMap).
    Second, I would check the serialization code of your objects; it sounds like you may have a problem there.
    For CONCUR_OPTIMISTIC: can you post your cached value classes, and also the TransactionValidator which you use if you use one?
    For CONCUR_PESSIMISTIC: are you sure that you are populating different keys from the different threads on each node? You can get deadlocks if you do your TransactionMap puts or gets on common keys in conflicting orders, as with REPEATABLE_GET the pessimistic mode locks the respective entries on all key-based operations (even if the cached value does not exist!). Can you post the code which does the loading?
    Best regards,
    Robert

  • Resource Adapter errors in WebLogic 10.3

    Hi,
    I'm getting ClassCastException from coherence 3.5.3 resource adapter in WebLogic 10.3.
    I've already tried installing the resource adapter both as a separate deployment and inside the ear file, but I get the same errors.
    Has anybody already managed to use this adapter along with weblogic 10.3 ? The error occurs when I try to get a NamedCache object. The distributed cache named "cache" is up and running since I manage to get and put objects into cache by using NamedCache from CacheFactory (CacheFactory.getCache("cache")).
                   ctx = new InitialContext();
                   // the transaction manager from container
                   tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
                   tx.begin();
                   adapter = new CacheAdapter(ctx, "tangosol.coherenceTx", CacheAdapter.CONCUR_PESSIMISTIC ,CacheAdapter.TRANSACTION_REPEATABLE_GET, 300);
                   NamedCache cache = adapter.getNamedCache("cache", getClass().getClassLoader());
                   cache.put(1, 11);
                   Integer estoqueGet = (Integer)cache.get(1);
    2010-06-05 20:24:02.859/96.703 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2010-06-05 20:24:02.969/96.813 Oracle Coherence GE 3.5.3/465 <Info> (thread=Cluster, member=n/a): Failed to satisfy the variance: allowed=16, actual=47
    2010-06-05 20:24:02.969/96.813 Oracle Coherence GE 3.5.3/465 <Info> (thread=Cluster, member=n/a): Increasing allowable variance to 19
    2010-06-05 20:24:03.344/97.188 Oracle Coherence GE 3.5.3/465 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2010-06-05 20:24:03.078, Address=10.10.10.10:8089, MachineId=2570, Location=machine:ACCENTUR-1FAF0A,process:8036, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "cluster:0xDDEB" with senior Member(Id=1, Timestamp=2010-06-05 20:23:06.562, Address=10.10.10.10:8088, MachineId=2570, Location=machine:ACCENTUR-1FAF0A,process:6300, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2010-06-05 20:24:03.438/97.282 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2010-06-05 20:24:03.438/97.282 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2010-06-05 20:24:03.656/97.500 Oracle Coherence GE 3.5.3/465 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2010-06-05 20:24:04.125/97.985 Oracle Coherence GE 3.5.3/465 <D5> (thread=TcpRingListener, member=2): TcpRing: connecting to member 1 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/10.10.10.10,port=3748,localport=8089]}
    2010-06-05 20:24:04.141/97.985 Oracle Coherence GE 3.5.3/465 <Info> (thread=[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)', member=2): Loaded cache configuration from "zip:D:/Oracle/Middleware/user_projects/domains/base_domain/servers/AdminServer/tmp/_WL_user/coherenceapp/v3byxq/war/WEB-INF/lib/coherence.jar!/coherence-cache-config.xml"
    2010-06-05 20:24:04.438/98.282 Oracle Coherence GE 3.5.3/465 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    2010-06-05 20:24:04.453/98.297 Oracle Coherence GE 3.5.3/465 <D5> (thread=DistributedCache, member=2): Service DistributedCache: received ServiceConfigSync containing 258 entries
    2010-06-05 20:24:04.547/98.391 Oracle Coherence GE 3.5.3/465 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions
    2010-06-05 20:24:04.828/98.672 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2): An exception (java.io.IOException) occurred reading Message Response Type=21 for Service=DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=0, BackupPartitions=0}
    2010-06-05 20:24:04.828/98.672 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2): Terminating DistributedCache due to unhandled exception: java.io.IOException
    2010-06-05 20:24:04.828/98.672 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2):
    java.io.IOException: Class initialization failed: java.lang.ClassCastException: com.tangosol.run.xml.SimpleElement
    at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1946)
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.DistributedCacheResponse.read(DistributedCacheResponse.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Class: com.tangosol.run.xml.SimpleElement
    ClassLoader: weblogic.utils.classloaders.ChangeAwareClassLoader@15fe77a finder: weblogic.utils.classloaders.CodeGenClassFinder@2a865b8 annotation: coherenceapp@WebAppCoherence
    ContextClassLoader: weblogic.utils.classloaders.ChangeAwareClassLoader@15fe77a finder: weblogic.utils.classloaders.CodeGenClassFinder@2a865b8 annotation: coherenceapp@WebAppCoherence
    at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1961)
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.DistributedCacheResponse.read(DistributedCacheResponse.CDB:2)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    2010-06-05 20:24:04.844/98.688 Oracle Coherence GE 3.5.3/465 <D5> (thread=DistributedCache, member=2): Service DistributedCache left the cluster
    java.lang.RuntimeException: Failed to start Service "DistributedCache" (ServiceState=SERVICE_STOPPED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:8)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.ensureCache(DistributedCache.CDB:29)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.ensureCache(DistributedCache.CDB:39)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:46)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:878)
    at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:1088)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:304)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:735)
    at com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$Connection$Interaction.execute(CacheAdapter.CDB:35)
    at com.tangosol.run.jca.CacheAdapter.getNamedCache(CacheAdapter.java:329)
    at com.tangosol.run.jca.CacheAdapter.getNamedCache(CacheAdapter.java:271)
    at ServletCoherence.doGet(ServletCoherence.java:51)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3594)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Exception: java.lang.RuntimeException: Failed to start Service "DistributedCache" (ServiceState=SERVICE_STOPPED)

    The problem may be related to having more than one coherence.jar and tangosol.jar in the classpath. Can you verify that you only have one of those in the classpath?
    /Christer

  • Modify, put and get : operations under transaction

    Hello,
    I use Coherence 3.0 with CONCUR_PESSIMISTIC.
    When threads t1 and t2 simultaneously get the object obj1, they obtain it without waiting for any lock. That's OK.
    However, I've created a "modify" operation which is :
    - get a transaction
    - get the object obj1
    - "modify it in my app" (for example obj = obj+1)
    - put the new value
    - commit transaction
    This works, but in the following scenario I get an incoherent result, since get operations don't block.
    Threads t1 and t2 modify the object obj1 at the same time. So, for example, obj1 == 0 at the beginning. Then:
    - t1 gets a transaction
    - t2 gets a transaction
    - t1 gets the old obj1 value
    - t2 gets the old obj1 value
    - t1 puts the new value (obj1 == 1) in the cache
    - t2 puts the new value (obj1 == 1) in the cache
    So at the end, obj1 == 1 in the cache, instead of 2!
    obj1 = 0 + 1 (thread 1)
    obj1 = 1 + 1 (thread 2, with the old value == 1, thanks to thread 1)
    I hope I'm clear enough...
    So, my question is : what should I do?
    Note that I can't say anything on the get/write operations ratio.
    Should I use a "put" operation to get the old value in the modify operation (instead of a get)? Won't that be heavy (in terms of load)?
    Note that when I use TRANSACTION_GET_COMMITTED, the get in the "modify" operation doesn't block; but when I use TRANSACTION_REPEATABLE_GET, it is the basic get operation (the one not embedded in the modify operation) which blocks.
    I'd like the basic get operations to be non-blocking and the "modify" operation to be coherent. Is that possible?
    Isn't it too heavy (in terms of load) to block the whole "modify" operation? What about versioning, to detect a change between a get (of the old value) and a put (of the new value)? Is there a code example?
    Thanks for your advice,
    Vincent
    Message was edited by: Vincent

    Hi Vincent,
    It sounds like you can simply use locking in this scenario (cache locking semantics are similar to those of the Java synchronized keyword: if a thread uses lock() it will block if there is contention, and if it doesn't use lock() it will not block at all):
    read:
    <code>
    value = cache.get(key); // does not block
    </code>
    write:
    <code>
    try {
        cache.lock(key, timeout); // will block if there is contention
        value = cache.get(key);
        cache.put(key, value + 1);
    } finally {
        cache.unlock(key);
    }
    </code>
    Regards,
    Dimitri
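For the versioning alternative Vincent asked about, here is a minimal compare-and-set sketch in plain Java (hypothetical, not the Coherence API; all names are invented for the example). Instead of blocking, the "modify" detects a concurrent change between its get and its put and retries:

```java
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class VersionedModifySketch {
    // Value plus the version it was read at; equals() makes replace() compare both.
    static final class Versioned {
        final int value; final long version;
        Versioned(int value, long version) { this.value = value; this.version = version; }
        @Override public boolean equals(Object o) {
            return o instanceof Versioned && ((Versioned) o).value == value
                    && ((Versioned) o).version == version;
        }
        @Override public int hashCode() { return Objects.hash(value, version); }
    }

    static final ConcurrentMap<String, Versioned> cache = new ConcurrentHashMap<>();

    // "Modify" without locking: read the old value, compute, and put back only
    // if the version has not moved in the meantime; otherwise retry.
    static int increment(String key) {
        while (true) {
            Versioned old = cache.get(key);
            Versioned updated = (old == null)
                    ? new Versioned(1, 1)
                    : new Versioned(old.value + 1, old.version + 1);
            boolean ok = (old == null)
                    ? cache.putIfAbsent(key, updated) == null
                    : cache.replace(key, old, updated); // fails if changed concurrently
            if (ok) return updated.value;
            // Another writer won the race; loop re-reads and tries again.
        }
    }

    public static void main(String[] args) {
        increment("obj1"); // t1's modify
        increment("obj1"); // t2's modify: sees version 1, so no lost update
        System.out.println(cache.get("obj1").value); // 2
    }
}
```

Plain gets stay non-blocking under this scheme; the cost moves to occasional retries on the writers, which is the usual optimistic trade-off when the get/write ratio is unknown.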

  • Clarification on Transaction Isolation Levels

    Hi,
         I have a query regarding the cache behaviour for CONCUR_OPTIMISTIC->TRANSACTION_COMMITTED isolation level.
         For CONCUR_OPTIMISTIC->TRANSACTION_REPEATABLE_GET, the TransactionMap javadoc states -> "get" operations move the retrieved values into the local map.
         How does the get operation behaviour differ in the case of the CONCUR_OPTIMISTIC->TRANSACTION_COMMITTED isolation level? Is the value put into the local map on a get operation? If not, is there a possibility of the near cache data getting corrupted if the user makes changes to the object returned by the near cache and abends the transaction before calling persist on that object?
         Thanks,
         Abhay

    Gene,
         So as to understand this behavior a bit more: what happens when the user calls the "get" operation with the CONCUR_OPTIMISTIC->TRANSACTION_COMMITTED settings, makes changes to the object returned by the near cache, and for some reason does not call the put or remove operation on the cache? Will the changes be seen by "get" operations in other transactions?
         In essence, does the "get" operation give the reference to the actual object in the near cache when the isolation settings are CONCUR_OPTIMISTIC->TRANSACTION_COMMITTED?
         We are experiencing the near cache in one of the members in the cache cluster getting out of synch with the data in other cluster members.
         Thanks,
         Abhay

  • Coherence XA in C++

    Hi all,
    I have 2 questions regarding user transactions in Coherence:
    (1) I understand from the following documentation that Coherence can participate in user transactions using the JCA Adapter or as the "last participant" in XA:
    http://wiki.tangosol.com/display/COH35UG/Implement+Transactions%2C+Locks%2C+and+Concurrency#ImplementTransactions%2CLocks%2CandConcurrency-ContainerIntegration
    Are these approaches (JCA Adapter and XA last participant) still applicable in using Coherence in C++ environment?
    (2) I also understand from this posting (Re: XA transaction performance) that the performance impact is substantial. I would like to know whether anybody else has implemented this using the JCA Adapter and experienced a big performance impact? If there is a performance impact, how big is it?
    Thanks before, guys.
    Regards,
    Edison

    Hi Edison,
    user10423508 wrote:
    Hi all,
    I have 2 questions regarding user transactions in Coherence:
    (1) I understand from the following documentation that Coherence can participate in user transactions using the JCA Adapter or as the "last participant" in XA:
    http://wiki.tangosol.com/display/COH35UG/Implement+Transactions%2C+Locks%2C+and+Concurrency#ImplementTransactions%2CLocks%2CandConcurrency-ContainerIntegration
    Are these approaches (JCA Adapter and XA last participant) still applicable in using Coherence in C++ environment?
    The JCA adapter functionality works only inside the TCMP cluster (due to requirements on thread-based locking) and needs a JCA container (for transaction integration), both depending on Java being the runtime. The Coherence*Extend clients themselves are not able to leverage this functionality, only code running on TCMP cluster nodes.
    (2) I also understand from this posting (Re: XA transaction performance) that the performance impact is substantial. I would like to know whether anybody else has implemented this using the JCA Adapter and experienced a big performance impact? If there is a performance impact, how big is it?
    The most significant part of the performance impact is the need to lock and unlock entries, which is done on a per-key basis. Suppose you use the GET_COMMITTED isolation level with CONCUR_OPTIMISTIC concurrency (this configuration is suitable, e.g., with Hibernate using optimistic version control for database access), you always write corresponding data changes to the database via an XA JDBC driver as well as to the transactional cache, and you write to the database first and to the transactional cache afterwards with the same primary keys. Then the inherent locking in the database effectively maintains locks on identical keys for a strictly broader time period than any locking Coherence would carry out. In this situation Coherence's locking would always succeed and would add nothing to what the database already did, so it is in fact unnecessary for Coherence to do any locking. Therefore, changing to CONCUR_EXTERNAL in this case would have no negative effect, but would yield a significant performance increase by not doing any locking at all.
    Depending on the concurrency strategy and the isolation level you choose, another kind of impact can appear due to having to copy the original entry value into the transaction map when using optimistic/external concurrency with an isolation level of REPEATABLE_GET or higher.
    Best regards,
    Robert

  • XA transaction performance

    Hi,
    I'm using Weblogic for JMS (using MDB to process messages) and Coherence as a Caching solution. My MDBs are under transaction and I want to perform operations on Coherence using the same transactions. to do such a thing I installed Coherence JCA connector and use the Adapter.
    This seems to be working; however, I see a big performance impact (my throughput is divided by 4 or 5). Is this normal behavior?
    In my code I may be making a mistake: I create a cache for each message:
    adapter = new CacheAdapter(ctx, "tangosol.coherenceTx",
    CacheAdapter.CONCUR_OPTIMISTIC,
    CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
    NamedCache cache = adapter.getNamedCache("dist-test",
    getClass().getClassLoader());
    Can I create the adapter in the init method of the MDB? I don't think so, because of the transaction for the InitialContext...
    Luc

    Hi Luc,
    sorry, I misunderstood you: I thought that using no database also meant no JMS either, because if the JMS queue uses a JDBC message store (which we normally used, so that we could get resource joining by using the same connection pool/datasource as our data-access code), then you are in fact using a database.
    If you use only a JMS queue and Coherence and if it is possible to recognize redelivery of messages from the data in Coherence caches and data in the message, then I would still suggest managing your own Coherence transaction with TransactionMap-s (and throw a RuntimeException/EJBException if commit to Coherence failed) and upon processing the message you should do the redelivery check. This would allow Weblogic to optimize away using XA for the JMS delivery transaction as the only transactional resource in that transaction would be the JMS queue itself.
    If it is impossible to recognize redelivery, then you should probably use the CacheAdapter.
    I only wish that the JNDI lookup in the CacheAdapter constructor could be replaced with something else; according to our profiling in an earlier project, it took the largest part of the CacheAdapter construction and initialization. The connect() call can probably not be avoided, but it took only a third of the time the JNDI lookup took.
    Best regards,
    Robert

  • EJB CMT + Cohererence Transaction

    Hi,
    I am trying to setup coherence in our application. Our app has MDB/EJB(weblogic 8.1) and all transactions are container managed. XA driver is used for db.
    I am looking for the following: if the transaction rolls back, then all cache updates get rolled back; and a commit on the db is a commit in the cache. The cache & DB should always be in sync with each other. Is it possible to have a setup like this?
    Thanks,
    - Anand

    Hi,
    Managed to get some preliminary setup working, but encountered a problem.
    I am using EJB CMT for our application - The way I am setting the Cache in the EJB is something like below -
    ==================
    Context cacheCtx = new InitialContext();
    CacheAdapter adapter = new CacheAdapter(cacheCtx, "tangosol.coherenceTx",
    CacheAdapter.CONCUR_OPTIMISTIC, CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
    NamedCache map = adapter.getDistributedCache("VirtualCache", getClass().getClassLoader());
    ==================
    I call ejbCtx.setRollbackOnly() to roll back the transaction, but on doing this I get the warning below in WebLogic. Do I have to explicitly call a rollback through the CacheFactory, or is it understood since it is within the transaction?
    ==================
    2006-03-27 14:40:26.097 Tangosol Coherence 3.1/339 <Warning> (thread=ExecuteThread: '12' for queue: 'weblogic.kernel.Default', member=1): CoherenceRA: LocalTransaction is not completed: Component: com.tangosol.coherence.ra.component.connector.resourceAdapter.cciAdapter.CacheAdapter$ManagedConnection$LocalTransaction(LocalTransaction)@16387060
    ==================
    Can you tell me what the problem could be? It looks like the cache isn't updated either: I tested the cache through an independent program and it was not updated by the code above.
    Thanks,
    - A

  • Connecting to multiple cache services with the cache adapter

    Is it possible to connect to multiple cache services using the same CacheAdapter within the same thread?
    Example:
    InitialContext ctx = new InitialContext();
    CacheAdapter adapter = new CacheAdapter(ctx, "tangosol.coherenceTx", CacheAdapter.CONCUR_OPTIMISTIC, CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
    adapter.connect( "MyService", "", "");
    NamedCache cache1 = adapter.getDistributedCache( "X" );
    cache1.put( "a", "b" );
    adapter.connect( "MyService2", "", "");
    NamedCache cache2 = adapter.getDistributedCache( "Y" );
    cache2.put( "a", "b" );
    Do we have to look up a separate cache adapter for each service? Do we need to close the adapter between connections?
    Thanks
    Rick

    Rick,
    Calling adapter.connect() basically makes the adapter aware of the transactional boundaries imposed by an application server. It is somewhat analogous to getting a DB connection. An adapter can be connected to one and only one CacheService. Connection occurs during the connect(String, String, String) call, and any subsequent call to connect() will cause an IllegalStateException. To connect to multiple CacheServices you just need to instantiate separate adapters.
    Using a [connected] adapter you can retrieve a transactional NamedCache for any number of NamedCaches of the CacheService that the adapter is connected to by using:
        adapter.getNamedCache(sCacheName, loader);
    Method getDistributedCache(String, ClassLoader) is just a simple helper that ensures the adapter is connected to the default DistributedCache service and in turn calls getNamedCache(String, ClassLoader).
    Gene

  • SimpleValidator only validates versions on updates with repeatable read?!

    I was testing the SimpleValidator, and my example seemed to indicate that it ONLY checks that the enlisted (old) version is the same as the locked (current) version for UPDATED objects if the isolation level is repeatable read (or presumably higher)! I would have expected this check to be done no matter what the isolation level was... I thought it was ONLY reads that were left unverified at the read-committed isolation level compared to repeatable read...
    It would also be nice to know if one can change how versions are calculated from cache objects simply by overriding the calculateVersion method (my tests indicate that this is possible, but I would like to get it confirmed!). After introducing POF (using separate serializers) I was very happy to avoid having my cached business objects implement Coherence classes or interfaces, and I would not like to break this again by using Versionable...
    /Magnus
    Edited by: MagnusE on Jan 17, 2010 4:09 PM
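    For reference, the kind of version check a validator performs at prepare time can be illustrated with a plain-Java sketch. This is a conceptual simulation only, not Coherence's actual SimpleValidator code; the method and map names are illustrative:

```java
import java.util.ConcurrentModificationException;
import java.util.Map;

public class VersionCheck {
    /**
     * Throws if any entry read by the transaction has been updated since.
     * enlistedVersions holds the versions as seen at read time;
     * currentVersions holds the versions in the cache at prepare time.
     */
    public static void validate(Map<Object, Integer> enlistedVersions,
                                Map<Object, Integer> currentVersions) {
        for (Map.Entry<Object, Integer> e : enlistedVersions.entrySet()) {
            Integer current = currentVersions.get(e.getKey());
            if (!e.getValue().equals(current)) {
                // Another transaction changed the entry between read and prepare:
                // reject the commit rather than silently overwrite.
                throw new ConcurrentModificationException(
                    "Entry " + e.getKey() + " changed: read version " + e.getValue()
                    + ", current version " + current);
            }
        }
    }
}
```

    Magnus's observation is that, in his tests, this check only covered updated entries unless the isolation level was raised to repeatable read.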

    I also rewrote the original program using only transaction maps (my first version assumed that I could create detectable conflicts using dirty reads/writes outside of a transaction map just as well as with complete and fully committed transaction maps), but this did not change anything either:
    package txtest;

    import com.tangosol.util.TransactionMap;
    import com.tangosol.util.Versionable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.io.Serializable;

    public class Test1_ver2 {
        public static final class Person implements Versionable, Serializable {
            private int version;
            private final String name;
            private final int age;

            public Person(String name, int age, int version) {
                this.age = age;
                this.name = name;
                this.version = version;
            }

            public int getAge() {
                return age;
            }

            public String getName() {
                return name;
            }

            public Comparable getVersionIndicator() {
                return version;
            }

            public void incrementVersion() {
                version++;
            }

            public String toString() {
                return name + ", version = " + version;
            }
        }

        static final String CACHE_NAME = "dist-test";

        public static void main(String[] args) {
            try {
                // "Create" cache
                NamedCache cache = CacheFactory.getCache(CACHE_NAME);
                // Initialize cache
                cache.put(1, new Person("Foo", 23, 1));
                // Create transaction map 1 and select isolation level
                TransactionMap tx1 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx1.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                // If I use TRANSACTION_GET_COMMITTED no exception is thrown, but if TRANSACTION_REPEATABLE_GET
                // is used the validation throws an exception as expected...
                tx1.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                //tx1.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator1 = new com.tangosol.run.jca.SimpleValidator();
                validator1.setNextValidator(tx1.getValidator());
                tx1.setValidator(validator1);
                // Start transaction
                tx1.begin();
                // Read an object from tx1...
                Person p1 = (Person) tx1.get(1);
                TransactionMap tx2 = CacheFactory.getLocalTransaction(CacheFactory.getCache(CACHE_NAME));
                tx2.setConcurrency(TransactionMap.CONCUR_OPTIMISTIC);
                tx2.setTransactionIsolation(TransactionMap.TRANSACTION_GET_COMMITTED);
                TransactionMap.Validator validator2 = new com.tangosol.run.jca.SimpleValidator();
                validator2.setNextValidator(tx2.getValidator());
                tx2.setValidator(validator2);
                tx2.begin();
                // Read same object using tx2, update it and write it back
                Person p2 = (Person) tx2.get(1);
                tx2.put(1, new Person(p2.getName(), p2.getAge() + 1, ((Integer) p2.getVersionIndicator()) + 1));
                tx2.prepare();
                tx2.commit();
                tx1.put(1, new Person("Fum", p1.getAge(), ((Integer) p1.getVersionIndicator()) + 1));
                // Prepare and commit
                tx1.prepare();
                tx1.commit();
            } catch (Throwable t) {
                t.printStackTrace();
            }
        }
    }
    Edited by: MagnusE on Jan 18, 2010 10:41 AM

  • Put() with expiration parameter not supported during transaction?

    Hello,
    I am facing an "UnsupportedOperationException" when trying to put an object into a cache with an expiration parameter. The cache is obtained from a CacheAdapter inside a transaction. If I use the cache without the CacheAdapter, it works fine. Does this mean that caches used inside a transaction don't support the expiration parameter, or am I doing something wrong?
    Here is the code that I'm using to get the cache from the CacheAdapter:
    adapter = new CacheAdapter(env.getInitialContext(),
    TANGOSOL_COHERENCE_RA,
    CacheAdapter.CONCUR_OPTIMISTIC,
    CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
    NamedCache c = adapter.getNamedCache(TANGOSOL_CACHE_NAME, getClass().getClassLoader());
    c.put("foo", myObject, 100);
    Here are the last lines of the exception trace:
    java.lang.UnsupportedOperationException
         at com.tangosol.coherence.component.util.TransactionCache.put(TransactionCache.CDB:7)
         at com.tangosol.coherence.ra.component.connector.NamedCacheRecord.put(NamedCacheRecord.CDB:1)
    Thanks for any tips.
    Jean

    Hi Jean,
    You are correct; caches used inside a transaction don't support the expiration parameter. There were a number of reasons we made that decision. In general, the concept of automatic eviction does not co-exist well with transactional guarantees. Consider the following scenario:
    You start a transaction and put a new item with an expiry of one second. One second later you are about to commit, and the item is due to expire. What should the cache behavior be? Should we roll back the transaction? Or should we proceed and pretend that the item never existed?
    Similar questions arise with size-limited caches and the REPEATABLE_GET isolation level. What should happen if, due to a size limit, your commit forces out entries that were read and relied on by your transaction?
    Sorry if I had more questions than answers :-)
    Regards,
    Gene
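    The dilemma Gene describes can be made concrete with a small plain-Java simulation. This is purely illustrative; the buffered-put model and all names are hypothetical stand-ins, not Coherence internals:

```java
import java.util.Map;

public class ExpiryVsTransaction {
    /** A buffered write recorded inside a transaction (hypothetical model). */
    public static final class PendingPut {
        final String key;
        final String value;
        final long expiresAtMillis;

        public PendingPut(String key, String value, long expiresAtMillis) {
            this.key = key;
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    /**
     * At commit time, decide what to do with a buffered put whose expiry may
     * already have passed. Returns true if the entry was actually stored.
     */
    public static boolean commit(Map<String, String> cache, PendingPut put, long nowMillis) {
        if (nowMillis >= put.expiresAtMillis) {
            // The item expired before the transaction committed; here we choose
            // to pretend it never existed rather than roll back. Either choice
            // is surprising, which is why the transactional API disallows expiry.
            return false;
        }
        cache.put(put.key, put.value);
        return true;
    }
}
```

    Whichever branch you pick, the transaction's outcome depends on wall-clock timing rather than on the data, which breaks the usual transactional guarantees.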

  • TransactionMap behavior

    I'm trying to understand the behavior of TransactionMap. I wrapped a remove call inside a transaction, and tried to read the cache from another node.
    On node 1:
    NamedCache countries = CacheFactory.getCache("countries");
    // first, we need to put some countries into the cache
    countries.put("USA", new Country("USA", "United States", "Washington", "USD", "Dollar"));
    countries.put("GBR", new Country("GBR", "United Kingdom", "London", "GBP", "Pound"));
    countries.put("RUS", new Country("RUS", "Russia", "Moscow", "RUB", "Ruble"));
    countries.put("CHN", new Country("CHN", "China", "Beijing", "CNY", "Yuan"));
    countries.put("JPN", new Country("JPN", "Japan", "Tokyo", "JPY", "Yen"));
    countries.put("DEU", new Country("DEU", "Germany", "Berlin", "EUR", "Euro"));
    countries.put("FRA", new Country("FRA", "France", "Paris", "EUR", "Euro"));
    countries.put("ITA", new Country("ITA", "Italy", "Rome", "EUR", "Euro"));
    countries.put("SRB", new Country("SRB", "Serbia", "Belgrade", "RSD", "Dinar"));
    // list all cache entries
    Set<Map.Entry> entries = countries.entrySet(null, null);
    for (Map.Entry entry : entries) {
        System.out.println(entry.getKey() + " = " + entry.getValue());
    }
    TransactionMap mapTx = CacheFactory.getLocalTransaction(countries);
    mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
    mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
    mapTx.begin();
    countries.remove("JPN");
    // do not commit
    System.in.read();
    On node 2:
    NamedCache countries = CacheFactory.getCache("countries");
    TransactionMap mapTx = CacheFactory.getLocalTransaction(countries);
    mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
    mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
    Set<Map.Entry> entries = mapTx.entrySet();
    for (Map.Entry entry : entries) {
        System.out.println(entry.getKey() + " = " + entry.getValue());
    }
    To my surprise, the changes made on node 1 were reflected in the read on node 2, even though the transaction was not yet committed. This is indicated by the output from node 2:
    DEU = Country(Code = DEU, Name = Germany, Capital = Berlin, CurrencySymbol = EUR, CurrencyName = Euro)
    USA = Country(Code = USA, Name = United States, Capital = Washington, CurrencySymbol = USD, CurrencyName = Dollar)
    FRA = Country(Code = FRA, Name = France, Capital = Paris, CurrencySymbol = EUR, CurrencyName = Euro)
    GBR = Country(Code = GBR, Name = United Kingdom, Capital = London, CurrencySymbol = GBP, CurrencyName = Pound)
    CHN = Country(Code = CHN, Name = China, Capital = Beijing, CurrencySymbol = CNY, CurrencyName = Yuan)
    RUS = Country(Code = RUS, Name = Russia, Capital = Moscow, CurrencySymbol = RUB, CurrencyName = Ruble)
    SRB = Country(Code = SRB, Name = Serbia, Capital = Belgrade, CurrencySymbol = RSD, CurrencyName = Dinar)
    Is this the expected behavior? Judging by relational DB transaction isolation levels, this seems to be a dirty read. Or did I miss something?

    Assuming that I've read your example correctly, if you change:
    countries.remove("JPN")
    to:
    mapTx.remove("JPN");
    you should see the behavior you expect.
    Jon Purdy
    Oracle
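    The distinction Jon points out — writes must go through the TransactionMap, not the underlying cache, to stay isolated until commit — can be illustrated with a minimal write-buffering wrapper in plain Java. This is a conceptual sketch only, not Coherence's TransactionMap implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Minimal write-buffering "transaction map" over a base map (illustrative only). */
public class MiniTxMap {
    private final Map<String, String> base;
    private final Map<String, String> pendingPuts = new HashMap<>();
    private final Set<String> pendingRemoves = new HashSet<>();

    public MiniTxMap(Map<String, String> base) {
        this.base = base;
    }

    public void put(String key, String value) {
        pendingRemoves.remove(key);
        pendingPuts.put(key, value);   // buffered, not visible to other readers
    }

    public void remove(String key) {
        pendingPuts.remove(key);
        pendingRemoves.add(key);       // buffered, not visible to other readers
    }

    public String get(String key) {
        if (pendingRemoves.contains(key)) return null;
        if (pendingPuts.containsKey(key)) return pendingPuts.get(key);
        return base.get(key);
    }

    /** Apply the buffered changes to the base map. */
    public void commit() {
        for (String key : pendingRemoves) base.remove(key);
        base.putAll(pendingPuts);
        pendingRemoves.clear();
        pendingPuts.clear();
    }
}
```

    Removing directly through the base map (the equivalent of countries.remove("JPN") in the question) is immediately visible to every reader; removing through the wrapper stays private to the transaction until commit().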
