Distributed cache with a backing-map as another distributed cache

Hi All,
Is it possible to create a distributed cache whose backing-map-scheme is another distributed cache with local-storage disabled?
Please let me know how to configure this type of cache.
regards
S

Hi Cameron,
I am trying to create a distributed-scheme with a backing-map-scheme. Is it possible to configure another distributed cache as the backing-map scheme for a cache?
<distributed-scheme>
    <scheme-name>MyDistCache-2</scheme-name>
    <service-name>MyDistCacheService-2</service-name>
    <backing-map-scheme>
        <external-scheme>
            <scheme-name>MyDistCache-3</scheme-name>
        </external-scheme>
    </backing-map-scheme>
</distributed-scheme>
<distributed-scheme>
    <scheme-name>MyDistCache-3</scheme-name>
    <service-name>MyDistBackCacheService-3</service-name>
    <local-storage>false</local-storage>
</distributed-scheme>
Please correct my understanding.
Regards
Srini

Similar Messages

  • How to get cache statistics of back map in a near cache

    Hi-
    I have a near cache configured.
    A local cache forms the front cache and a distributed cache forms the back cache.
    I am able to get statistics of the front cache using the Coherence API, as below.
    NamedCache coherenceCache = CacheUtils.getNamedCache(CacheUtils.getCacheRegionPrefix() + region.getRegionName());
    if (coherenceCache instanceof NearCache) {
        NearCache nearCache = (NearCache) coherenceCache;
        CachingMap cachingMap = (CachingMap) nearCache;
        CacheStatistics frontStatistics = ((LocalCache) nearCache.getFrontMap()).getCacheStatistics();
    }
    How do I get the statistics of the back map?
    If I do the following, it gives statistics that combine data for both front and back (that is what I infer from my observations - it increases the get count in both front and back even if the get is satisfied by the front cache):
        CacheStatistics backStatistics = cachingMap.getCacheStatistics();
    So is there any way to get statistics for the back cache alone with the Coherence API for a near cache?
    Any help on this would be much appreciated.
    thanks

    Hi,
    NamedCache coherenceCache = CacheUtils.getNamedCache(CacheUtils.getCacheRegionPrefix() + region.getRegionName());
    if (coherenceCache instanceof NearCache) {
        NearCache nearCache = (NearCache) coherenceCache;
        CachingMap cachingMap = (CachingMap) nearCache;
        Map backingMap = cachingMap.getBackMap();
        // or, equivalently:
        // Map backingMap = nearCache.getBackMap();
    }
    getFrontMap() and getBackMap() are methods on CachingMap.
    Thanks & Regards,
    Murali.
    ============
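    A minimal standalone sketch pulling both tiers' numbers, assuming the front map is a LocalCache (Coherence 3.x class names; "example" is a hypothetical cache name, since CacheUtils above is the poster's own helper class). The back tier's counters are kept in the backing maps on the storage members, so locally you mostly just get a handle to the back map:

        import java.util.Map;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.CacheStatistics;
        import com.tangosol.net.cache.LocalCache;
        import com.tangosol.net.cache.NearCache;

        public class NearCacheStats {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("example"); // hypothetical cache name
                if (cache instanceof NearCache) {
                    NearCache nearCache = (NearCache) cache;
                    // Front tier is a LocalCache, so its statistics are available directly.
                    CacheStatistics frontStats =
                            ((LocalCache) nearCache.getFrontMap()).getCacheStatistics();
                    System.out.println("front gets=" + frontStats.getTotalGets()
                            + " hits=" + frontStats.getCacheHits());
                    // The back tier is normally the distributed NamedCache itself; its
                    // counters live in the backing maps on the storage members, so they
                    // are usually read there (e.g. via the Coherence Cache MBeans with
                    // tier=back) rather than through a local CacheStatistics object.
                    Map backMap = nearCache.getBackMap();
                    System.out.println("back map is " + backMap.getClass().getName());
                }
                CacheFactory.shutdown();
            }
        }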

  • Using TimeMachine to Restore a Mac with a Back Up of another Mac

    I have a 24" iMac in my home office; I use Time Machine to back it up. I have a 24" iMac at a client site whose TM back-ups were deleted... ...to save myself the stress of having to reinstall all my software, I tried to restore from the TM back-up of my home office machine. A great number of things failed. Adobe CS3 failed (which I am currently reinstalling, something I was trying to avoid), and MobileMe failed for the longest time and required a great deal of fiddling...
    ...anyone have any idea why this all went so darn wrong? I've got most problems solved, but I'd like to know how to avoid this issue in the future. For example, why did all of my software go bad and require new installs?
    Message was edited by: arsenalarmada

    V.K. is correct.
    What do you mean by "all my software"?
    Unless you specifically excluded your Applications folder, all your Apple apps, and almost all 3rd-party apps, if that's where you put them, should have been saved by TM, and restored. (A few 3rd-party apps are coded by their authors so TM can't save them.)
    If you put them anywhere else, they still should have been saved and restored, unless you specifically excluded those places.

  • Off-heap backing maps seem to generate lots of garbage at insert..!?

    I have been doing a lot of benchmarks of distributed caches with different backing maps. The results were partly positive (I hoped that a partitioned (splitting) off-heap backing map would be almost as fast as a non-splitting on-heap backing map). For reads and various types of queries this turned out to be mostly true (some queries were slightly slower - probably because they were performed per partition).
    For inserts it does, however, sadly seem to be another story - already when using a non-splitting NIO backing map, inserts seemed to generate a lot of garbage, slowing the benchmark down significantly, and when switching to a splitting NIO backing map this effect became so extreme that full GCs occurred more or less constantly on the cache nodes, slowing execution down almost to a standstill :-(
    Has anybody else tried this and seen the same results or do any of the Coherence developers have some theory?
    To me it would seem like network I/O to off-heap storage (using storage buffers allocated with NIO, just like the communication buffers!) should be at least as easy to perform without generating excessive garbage as I/O to heap objects, but since I don't know the internals of Coherence I can't say for sure whether something breaks this theory.
    For me the main expected advantage of using off-heap rather than on-heap would have been REDUCED GC activity and shorter pauses, but instead it seems like the result is the opposite - at least when doing inserts...
    My example does not use (or need!) any secondary indexes (it only performs get/put/lock/unlock), but each entry is locked before it is inserted and unlocked after (this is needed for the algorithm I am using as a benchmark) - as I have pointed out in another thread, it is a pity that no lockAll/unlockAll method calls exist (my benchmark suffers a lot from all the lock/unlock remote calls) - the overhead for this is however nothing compared to the performance hit that comes from all the GC...
    I have tried to tune the GC in several ways, but this has only to a very limited extent reduced the GC pause lengths or the frequency of full GCs - it just seems like a LOT of garbage is generated for some reason...
    The settings that have so far resulted in the least GC overhead (still awfully bad though!) are -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy. I am using Coherence 3.5 GE and Sun JRE 1.6.0_14.
    /Magnus
    Edited by: MagnusE on Aug 10, 2009 3:01 PM

    Thanks for the info - I was indeed using different initial and max heap sizes in this experiment, and setting them the same eased the problem (now I mostly get incremental rather than full GC messages). Inserts do, however, still generate more GC activity than reads (which seem to be more or less totally free from Java heap allocation/deallocation - VERY good, since reads are so common!). Perhaps there is some more tweaking of the heap allocation/deallocation that can be done at the same time as you work on that bug you mentioned - it would really be nice to have a NIO backing map with close to zero Java heap usage for all primitive operations (read, insert, delete)!
    /Magnus
    Edited by: MagnusE on Aug 11, 2009 7:33 AM
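    For reference, a cache-server launch line reflecting the settings discussed above - equal initial and max heap plus the quoted GC flags (the 4g heap size is illustrative, not from the thread):

        java -server -Xms4g -Xmx4g -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy \
             -cp coherence.jar com.tangosol.net.DefaultCacheServer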

  • Read-through Scheme in Replicated Cache with Berkeley DB

    Hi, I have 20 GB of data, and while restarting the server I need to populate all of it into the Coherence cache. If I create a pre-load Java class it will take 30 minutes to 1 hour to load the data into the cache; while it is loading, how can I respond to requests that come in? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to implement read-through + replicated cache + Berkeley DB? If yes, please post sample code with full references. Thanks in advance.
    Edited by: 875786 on Dec 5, 2011 8:10 PM

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see...
    http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE:
    To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.
    So it would appear that you cannot do read-through with a replicated cache - which makes sense really when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data - or do you, in which case you must have tuned GC very well? You say you are bothered about NFRs and network latency, yet you are building a system that will require either very big heaps (and hence, at some point, long GCs) or, because you cannot hold all the data in memory, configured expiry and then the latency of reading the data from the DB.
    If you are using read-through then presumably your use case does not require all the data to be in the cache - i.e. all your data access is gets by key and you do not do any filter queries. If this is the case then use a distributed cache, where you can store all the data or use read-through. If all your access is key-based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs hold the data and configure near caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
    JK
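    A minimal sketch of the distributed-scheme + read-through combination JK describes, following the same pattern used elsewhere in this thread collection (com.example.BdbCacheStore is a hypothetical Berkeley DB-backed CacheStore, not something from this thread):

        <distributed-scheme>
            <scheme-name>read-through-scheme</scheme-name>
            <service-name>DistributedReadThrough</service-name>
            <backing-map-scheme>
                <read-write-backing-map-scheme>
                    <internal-cache-scheme>
                        <local-scheme/>
                    </internal-cache-scheme>
                    <cachestore-scheme>
                        <class-scheme>
                            <!-- hypothetical CacheStore implementation over Berkeley DB -->
                            <class-name>com.example.BdbCacheStore</class-name>
                        </class-scheme>
                    </cachestore-scheme>
                </read-write-backing-map-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>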

  • Shouldn't 'put with expiry' throw with read-write backing map?

    Good morning all,
    If I run this client code:
    cache.put(1, 1, CacheMap.EXPIRY_NEVER);
    I'd expect this entry to never expire. Yet with a read-write backing map it does - immediately - which led me to dig a bit more...
    According to the [java docs|http://download.oracle.com/otn_hosted_doc/coherence/330/com/tangosol/net/NamedCache.html#put%28java.lang.Object,%20java.lang.Object,%20long%29] support for this call is patchy:
    >
    Note: Though NamedCache interface extends CacheMap, not all implementations currently support this functionality.
    For example, if a cache is configured to be a replicated, optimistic or distributed cache then its backing map must be configured as a local cache. If a cache is configured to be a near cache then the front map must be configured as a local cache and the back map must support this feature as well, typically by being a distributed cache backed by a local cache (as above.)
    >
    OK, so the docs even say this won't work. But shouldn't it throw an unsupported op exception? Is this a bug or my mistake?
    rw-scheme config:
    <backing-map-scheme>
        <read-write-backing-map-scheme>
            <internal-cache-scheme>
                <local-scheme/>
            </internal-cache-scheme>
            <cachestore-scheme>
            </cachestore-scheme>
            <write-delay>1ms</write-delay>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    Edited by: BigAndy on 04-Dec-2012 04:28

    Quick update on this - I've raised an SR and Oracle have confirmed this is a bug and are looking into a fix.
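    For contrast, a sketch of the same call against a topology where per-entry expiry is supported (a distributed cache whose backing map is a plain local-scheme; the cache name is illustrative):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.CacheMap;

        public class ExpiryExample {
            public static void main(String[] args) {
                // Assumes "test" maps to a distributed-scheme backed by a local-scheme.
                NamedCache cache = CacheFactory.getCache("test");
                cache.put(1, 1, CacheMap.EXPIRY_NEVER); // honored on this topology
                cache.put(2, 2, 10000L);                // expires ~10s after the put
                CacheFactory.shutdown();
            }
        }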

  • Finding exception with the read-write-backing-map-scheme configuration.

    Finding an exception with the <read-write-backing-map-scheme> configuration, which is set up against a simple database cache store implementation. The class SimpleCacheEventStoreImpl implements the CacheStore interface.
    Exception in thread "main" java.lang.UnsupportedOperationException: configureCache: read-write-backing-map-scheme
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:995)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:277)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:689)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:667)
         at Sample.SimpleEventStoreConsumer.main(SimpleEventStoreConsumer.java:10)
    The cache store is wired to the program SimpleEventStoreConsumer (where I have a put and a get operation) through the following cache configuration descriptor. On running SimpleEventStoreConsumer, the exception happens when trying to get the named cache from the cache factory:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>Evt*</cache-name>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <read-write-backing-map-scheme>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
                   <internal-cache-scheme>
                        <local-scheme>
                             <scheme-ref>SampleMemoryScheme</scheme-ref>
                        </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.emc.srm.cachestore.SimpleCacheEventStoreImpl</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
              <local-scheme>
                   <scheme-name>SampleMemoryScheme</scheme-name>
              </local-scheme>
         </caching-schemes>
    </cache-config>

    You are missing <backing-map-scheme>. Do it like the following:
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>distributed-scheme</scheme-name>
            <service-name>DistributedQueryCache</service-name>
            <backing-map-scheme>
                <read-write-backing-map-scheme>
                    <scheme-ref>rw-bm</scheme-ref>
                </read-write-backing-map-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
        <read-write-backing-map-scheme>
            <scheme-name>rw-bm</scheme-name>
            <internal-cache-scheme>
                <local-scheme/>
            </internal-cache-scheme>
        </read-write-backing-map-scheme>
    </caching-schemes>
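    A usage sketch, assuming the Evt* cache mapping from the question is repointed at the wrapping distributed scheme ("EvtStore" is an illustrative name that matches the Evt* pattern):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class SimpleEventStoreConsumer {
            public static void main(String[] args) {
                // Resolves via the Evt* mapping; puts flow through the configured CacheStore.
                NamedCache cache = CacheFactory.getCache("EvtStore");
                cache.put("key-1", "value-1");
                System.out.println(cache.get("key-1"));
                CacheFactory.shutdown();
            }
        }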

  • Why can't a backing-map-scheme be specified in caching-schemes?

    Most scheme types, except backing-map-scheme, can be specified in the caching-schemes section of the cache configuration XML file and then reused in other scheme definitions. What is the motivation for excluding backing-map-scheme?
    /Magnus

    Hi Magnus,
    you can specify an "abstract" service-type scheme (e.g. distributed-scheme) containing the backing map scheme instead of the backing map scheme itself.
    I know it is not as flexible as having a backing map scheme separately, but it is almost as good.
    Best regards,
    Robert
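    A sketch of Robert's suggestion: define a service-type scheme that exists mainly to carry the backing map, then pull it in with scheme-ref (scheme and service names are illustrative):

        <!-- "abstract" scheme: exists to carry the backing map definition -->
        <distributed-scheme>
            <scheme-name>base-distributed</scheme-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
        </distributed-scheme>

        <!-- concrete scheme: reuses the above, overriding what it needs -->
        <distributed-scheme>
            <scheme-name>my-distributed</scheme-name>
            <scheme-ref>base-distributed</scheme-ref>
            <service-name>MyService</service-name>
            <autostart>true</autostart>
        </distributed-scheme>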

  • Backing Map Access : Cross-cache joins

    Hi,
    I have been experimenting with cross-cache joins using Entry Processors in Coherence 3.7.1.
    (I have already sent a query to Dave Felcey regarding this - I will post any response from him here - but I just wondered if anyone else has had the same problem)
    Scenario
    A simplified version of the problem is:
    We have two NamedCaches, one called "PARENT_CACHE" and one called "CHILD_CACHE"
    Each cache stores the following types of object respectively...
    class Parent implements PortableObject {
        long id;
        String description;
    }

    class Child implements PortableObject {
        long id;
        long parentId;
        String description;
    }
    I want an entry processor that I can send to an instance of "Parent" that will return the "Parent" object together with the associated "Child" objects. The reason I want this is that I do not want to do two out-of-process calls (one to get the parent and one to get its children - in the real world we will need to get several different child types, and we do not want to do one call for each type), for example...
    class ExampleEntryProcessor extends AbstractProcessor implements PortableObject {
        public ParentAndChildrenResult process(Entry entry) { ... }
    }
    I wrote an implementation of this based on Ben Stopfold's blog (see here - particularly the post by "Jonathan Knight")
    So I thought I needed something like this...
    public ParentAndChildrenResult process(Entry entry) {
        ParentAndChildrenResult result = new ParentAndChildrenResult();
        Parent parent = (Parent) entry.getValue();
        result.setParent(parent);
        Filter parentIdFilter = new EqualsFilter(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), parent.getId());
        BinaryEntry binaryEntry = (BinaryEntry) entry;
        Set<java.util.Map.Entry> childEntrySet = queryBackingMap("CHILD_CACHE", parentIdFilter, binaryEntry);
        Converter valueUpConverter = binaryEntry.getContext().getValueFromInternalConverter();
        for (java.util.Map.Entry childEntry : childEntrySet) {
            result.addChild((Child) valueUpConverter.convert(childEntry.getValue()));
        }
        return result;
    }

    public Set<Map.Entry> queryBackingMap(String nameOfCacheToSearch, Filter filter, BinaryEntry entry) {
        BackingMapContext backingMapContext = entry.getContext().getBackingMapContext(nameOfCacheToSearch);
        Map indexMap = backingMapContext.getIndexMap();
        return InvocableMapHelper.query(backingMapContext.getBackingMap(), indexMap, filter, true, false, null);
    }
    I set up key association so I can ensure that the child objects are on the same node as the parent; the keys for each cache look like this...
    class Key implements KeyAssociation, PortableObject {
        private long id;
        private long associatedId;

        public Key() {
        }

        public Key(Parent parent) {
            this.id = parent.getId();
            this.associatedId = parent.getId();
        }

        public Key(Child child) {
            this.id = child.getId();
            this.associatedId = child.getParentId();
        }

        public Object getAssociatedKey() {
            return associatedId;
        }
    }
    When I send this entry processor to a parent object, I am getting the following exception when the "InvocableMapHelper.query" method is called...
    +"Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry"+
    I was not expecting this as our cluster is POF enabled and I thought that the backing maps always contained BinaryEntries.
    Has anyone else had a similar problem? Has anyone found any simple examples of how to do this anywhere on the web that work?
    Once I figure out how to get this to work, I want to post the solution somewhere (probably here) because there are bound to be other people who want to do something similar to this.
    Thanks in advance,
    -Bret

    Forgot the output...
    STORAGE NODE OUTPUT
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:38:10.016/0.922 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:10.344/1.250 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:38:23.610/14.516 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8088 using SystemSocketProvider
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "LOCAL" with Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) UID=0xAC17001A00000134D2FFC167532F1F98
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:54.454|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:38:54.501/45.407 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:54.516/45.422 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:38:54.579/45.485 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-01-12 18:38:54.876/45.782 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2012-01-12 18:38:54.891/45.797 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:InvocationService, member=1): Service InvocationService joined the cluster with senior service member 1
    2012-01-12 18:38:54.907/45.813 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
    Services
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
    InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=3, Version=3.1, OldestMemberId=1}
    Started DefaultCacheServer...
    2012-01-12 18:39:03.438/54.344 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) joined Cluster with senior member 1
    2012-01-12 18:39:03.610/54.516 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service Management with senior member 1
    2012-01-12 18:39:03.907/54.813 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): TcpRing disconnected from Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) due to a peer departure; removing the member.
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:04.032, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) left Cluster with senior member 1
    PROCESS NODE OUTPUT
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:39:02.407/0.469 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:02.501/0.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:39:03.063/1.125 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8090 using SystemSocketProvider
    2012-01-12 18:39:03.438/1.500 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "LOCAL" with senior Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=2
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:23.719|JOINED,
    2|3.7.1|2012-01-12 18:39:03.477|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[1]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:39:03.501/1.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:03.532/1.594 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:39:03.594/1.656 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2012-01-12 18:39:03.891/1.953 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCache service on Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)) PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    Caused by: Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.extractor.PofExtractor.extractInternal(PofExtractor.java:175)
    at com.tangosol.util.extractor.PofExtractor.extractFromEntry(PofExtractor.java:146)
    at com.tangosol.util.InvocableMapHelper.extractFromEntry(InvocableMapHelper.java:315)
    at com.tangosol.util.SimpleMapEntry.extract(SimpleMapEntry.java:168)
    at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:93)
    at com.tangosol.util.InvocableMapHelper.evaluateEntry(InvocableMapHelper.java:262)
    at com.tangosol.util.InvocableMapHelper.query(InvocableMapHelper.java:452)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.queryBackingMap(GetParentAndChildrenEntryProcessor.java:60)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:47)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    2012-01-12 18:39:04.016/2.078 Oracle Coherence GE 3.7.1.0 <D4> (thread=ShutdownHook, member=2): ShutdownHook: stopping cluster node
    Ta,
    -Bret
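    An observation not from the thread itself, offered as a hedged guess: the storage-side trace shows InvocableMapHelper.query falling back to per-entry evaluation (SimpleMapEntry.extract), which is the path taken when no index exists for the extractor, and PofExtractor can only extract from BinaryEntry objects or from an index. If that reading is right, adding an index on the child cache for the same extractor should let the query be answered from the index map instead:

        // On the child cache, index parentId so backing-map queries that use
        // PofExtractor are answered from the index map rather than by extracting
        // from non-Binary SimpleMapEntry objects.
        NamedCache childCache = CacheFactory.getCache("CHILD_CACHE");
        childCache.addIndex(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), false, null);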

  • How do I combine the Coherence 3.5 partitioned backing map with overflow?

    I would like to set up a near cache where the back cache uses an overflow map with a partitioned backing map as its front and a file-based (or Berkeley DB-based) back. I would like the storage for both primary and backup to use the same configuration. I tried the following cache config (I am not even sure this says anything about how the backup storage should be configured, except that I say it should be off-heap):
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>near-small</cache-name>
                <scheme-name>near-schema</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <near-scheme>
                <scheme-name>near-schema</scheme-name>
                <front-scheme>
                    <local-scheme>
                        <eviction-policy>HYBRID</eviction-policy>
                        <high-units>10000</high-units>
                    </local-scheme>
                </front-scheme>
                <back-scheme>
                    <distributed-scheme>
                        <scheme-name>near-distributed-scheme</scheme-name>
                        <service-name>PartitionedOffHeap</service-name>
                        <backup-count>1</backup-count>
                        <thread-count>4</thread-count>
                        <serializer>
                            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        </serializer>
                        <backing-map-scheme>
                            <overflow-scheme>
                                <scheme-name>OverflowScheme</scheme-name>
                                <front-scheme>
                                    <external-scheme>
                                        <nio-memory-manager/>
                                        <unit-calculator>BINARY</unit-calculator>
                                        <high-units>256</high-units>
                                        <unit-factor>1048576</unit-factor>
                                    </external-scheme>
                                </front-scheme>
                                <back-scheme>
                                    <external-scheme>
                                        <scheme-name>DiskScheme</scheme-name>
                                        <lh-file-manager>
                                            <directory>./</directory>
                                        </lh-file-manager>
                                    </external-scheme>
                                </back-scheme>
                            </overflow-scheme>
                            <partitioned>true</partitioned>
                        </backing-map-scheme>
                        <backup-storage>
                            <type>off-heap</type>
                        </backup-storage>
                        <autostart>true</autostart>
                    </distributed-scheme>
                </back-scheme>
                <invalidation-strategy>present</invalidation-strategy>
                <autostart>true</autostart>
            </near-scheme>
            <!--
            Invocation Service scheme.
            -->
            <invocation-scheme>
                <scheme-name>example-invocation</scheme-name>
                <service-name>InvocationService</service-name>
                <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
            </invocation-scheme>
        </caching-schemes>
    </cache-config>
    This all goes well when I start the cache node(s), but when I start an application that tries to use the cache I get the error message:
    2009-04-24 08:20:24.925/17.877 Oracle Coherence GE 3.5/453 (Pre-release) <Error> (thread=DistributedCache:PartitionedOffHeap, member=1): java.lang.IllegalStateException: Partition backing map com.tangosol.net.cache.OverflowMap does not implement ConfigurableCacheMap
         at com.tangosol.net.partition.ObservableSplittingBackingCache.createPartition(ObservableSplittingBackingCache.java:100)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.initializePartitions(DistributedCache.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:63)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
    How should I change my cache config to make this work?
    Best Regards
    Magnus

    Magnus,
    The optimizations related to efficiently supporting overflow-style caching are not included in Coherence 3.5. I created COH-2338 and COH-2339 to track the progress of the related issues.
    There are four different implementations of the PartitionAwareBackingMap for Coherence 3.5:
    * PartitionSplittingBackingMap is the simplest implementation that simply partitions data across a number of backing maps; it is not observable.
    * ObservableSplittingBackingMap is the observable implementation; it extends WrapperObservableMap and delegates to (wraps) a PartitionSplittingBackingMap.
    * ObservableSplittingBackingCache is an extension to the ObservableSplittingBackingMap that knows how to manage ConfigurableCacheMap instances as the underlying per-partition backing maps; in other words, it can spread out and coalesce a configured amount of memory (etc.) across all the actual backing maps.
    * ReadWriteSplittingBackingMap is an extension of the ReadWriteBackingMap that is partition-aware.
    The DefaultConfigurableCacheFactory currently only uses the ObservableSplittingBackingCache and the ReadWriteSplittingBackingMap; COH-2338 relates to the request for improvement to add support for the other two implementations as well. Additionally, optimizations to load balancing (where overflow caching tends to get bogged down by many small I/O operations) will be important; those are tracked by COH-2339.
    Peace,
    Cameron Purdy
    Oracle Coherence

  • Could you explain how the read-write-backing-map-scheme is configured in...

    Could you explain how the read-write-backing-map-scheme is configured in the following example?
    <backing-map-scheme>
        <read-write-backing-map-scheme>
         <internal-cache-scheme>
          <class-scheme>
           <class-name>com.tangosol.util.ObservableHashMap</class-name>
          </class-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
          <class-scheme>
           <class-name>coherence.DBCacheStore</class-name>
           <init-params>
            <init-param>
             <param-type>java.lang.String</param-type>
             <param-value>CATALOG</param-value>
            </init-param>
           </init-params>
          </class-scheme>
         </cachestore-scheme>
         <read-only>false</read-only>
         <write-delay-seconds>0</write-delay-seconds>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    Edited by: qkc on 30-Nov-2009 10:48

    Thank you very much for the reply.
    In the following example, the cachestore element is not specified in the <read-write-backing-map-scheme> section. Instead, a class-name, ControllerBackingMap, is designated. What is the result?
    If ControllerBackingMap is a persistence entity, is the result the same as with a cachestore-scheme?
    <distributed-scheme>
        <scheme-name>with-rw-bm</scheme-name>
        <service-name>unlimited-partitioned</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <scheme-ref>base-rw-bm</scheme-ref>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    <read-write-backing-map-scheme>
        <scheme-name>base-rw-bm</scheme-name>
        <class-name>ControllerBackingMap</class-name>
        <internal-cache-scheme>
            <local-scheme/>
        </internal-cache-scheme>
    </read-write-backing-map-scheme>

  • Backing map v.s. NamedCache

    Hello all:
    How does a backing map relate to a named cache conceptually? As quoted: "When a local storage implementation is used by Coherence to store replicated or distributed data, it is called a backing map because Coherence is actually backed by that local storage implementation." I thought the named cache is the one that stores the data.
    thanks,
    John

    Hi,
    Johnny_hunter wrote:
    1. The BackingMap is the server side data structure that holds actual data. I noticed this comment in the tutorial too. What does it mean by server? I thought it's a de-centralized topology; cache nodes are members of a cluster, and they are all equal citizens. Of course I know that if local-storage=false for some nodes in a distributed topology, the data will be stored elsewhere, say in another instance of DefaultCacheServer. Is that the server you meant?
    By server it means the storage enabled cluster members (i.e. JVMs). In some ways cluster members are equal citizens but in other ways they are not. For example, in a partitioned cache they each store a sub-set of the data. Each storage enabled JVM in the cluster will have a backing map for each cache. These backing maps hold the data that is owned by that JVM.
    Johnny_hunter wrote:
    2. NamedCache.get() call naturally causes a Map.get() call on a corresponding BackingMap; NamedCache.invoke() call may cause a sequence of Map.get() followed by the Map.put(), NamedCache.keySet(filter) call may cause an Map.entrySet().iterator() loop, and so on.
    It looks like a delegate pattern to me. AFAIK, NamedCache is a Map implementation itself; if it delegates those ops to the corresponding ops on a backing map, what about itself as a map?
    The delegate pattern is probably as good a way as any of looking at it. In client code you have an instance of a NamedCache. When you make calls to that, the call is delegated to the cache and backing map in the JVM in the cluster that owns the data you are manipulating.
    JK

  • Backing Map Insert Error

    Hi.
    I'm trying to make an insert into the backing map of another cache inside the EntryProcessor, but get the following error: An entry was inserted into the backing map for the partitioned cache "another-cache" that is not owned by this member; the entry will be removed.
    So, EP is invoked against one cache, but the insert inside it occurs to a different cache.
    Here is the code for it; please let me know what can fix the error above:
    public Object process(InvocableMap.Entry entry) {
        BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
        BackingMapManagerContext ctx2 = ctx.getBackingMapContext("another-cache").getManagerContext();
        Map childCache = ctx.getBackingMapContext("another-cache").getBackingMap();
        // KeyObject / ValueObject are placeholders for the actual key and value
        childCache.put(ctx2.getKeyToInternalConverter().convert(KeyObject),
                       ctx2.getValueToInternalConverter().convert(ValueObject));
        return null;
    }

    Hi,
    The reason for the error is that the owner of the key "KeyObject" is not the same node as the owner of the entry "entry". If you want to implement this then you need to define the data affinity between the objects of both your caches so they are collocated on the same node.
    HTH
    Cheers,
    _NJ
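    A sketch of the key-association approach NJ describes, along the lines of the Key class in the cross-cache join thread above (class and field names here are illustrative, and a real key class would also need equals/hashCode and serialization support for your wire format):

        import java.io.Serializable;
        import com.tangosol.net.cache.KeyAssociation;

        // Key for entries in "another-cache". Associating it with the parent key
        // keeps both entries in the same partition, so the backing-map put in the
        // entry processor only touches data owned by the executing member.
        public class ChildKey implements KeyAssociation, Serializable {
            private final long id;
            private final long parentId;

            public ChildKey(long id, long parentId) {
                this.id = id;
                this.parentId = parentId;
            }

            // Entries whose associated keys are equal are co-located.
            public Object getAssociatedKey() {
                return parentId;
            }
        }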

  • Get an error when accessing the entry from the Backing Map directly

    We are using some sample code from Oracle to access Objects associated via KeyAssociation directly from the Backing Map.
    Occasionally we get the error posted below. Can someone shed light on what this error means?
    I'm doing a Get on the Backing Map directly.
    Thanks,
    J
    An entry was inserted into the backing map for the partitioned cache "Customerl" that is not owned by this member; the entry will be removed.
    ReadWriteBackingMap$5{ReadWriteBackingMap inserted: key=Binary(length=75, value=0x---binary key data removed ----), value=Binary(length=691, value=0x---binary value data removed---)), synthetic}
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.onBackingMapEvent(DistributedCache.CDB:152)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage$PrimaryListener.entryInserted(DistributedCache.CDB:1)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.dispatch(ReadWriteBackingMap.java:2064)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.entryInserted(ReadWriteBackingMap.java:1903)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1718)
         at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1786)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:253)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:221)
         at com.tangosol.net.cache.ReadWriteBackingMap.get(ReadWriteBackingMap.java:721)
         at

    Here is the sample we adapted (we have adapted the code below to our specific cache). I have marked the line that throws the exception; it doesn't occur all the time - I saw it about 10 times yesterday and 2 times today.
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.CacheService;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Binary;
    import com.tangosol.util.ClassHelper;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;
    import java.io.Serializable;
    import java.util.Map;

    // dimitri
    public class Main extends AbstractProcessor {
        public static class Foo implements Serializable {
            String m_sFoo;

            public String getFoo() {
                return m_sFoo;
            }

            public void setFoo(String sFoo) {
                m_sFoo = sFoo;
            }

            public String toString() {
                return "Foo[foo=" + m_sFoo + "]";
            }
        }

        public static class Bar implements Serializable {
            String m_sBar;

            public String getBar() {
                return m_sBar;
            }

            public void setBar(String sBar) {
                m_sBar = sBar;
            }

            public String toString() {
                return "Bar[bar=" + m_sBar + "]";
            }
        }

        public Object process(InvocableMap.Entry entry) {
            try {
                // We are invoked on foo - update it.
                Foo foo = (Foo) entry.getValue();
                foo.setFoo(foo.getFoo() + " updated");
                entry.setValue(foo);
                // Now update Bar
                Object oStorage = ClassHelper.invoke(entry, "getStorage", null);
                CacheService service = (CacheService) ClassHelper.invoke(oStorage, "getService", null);
                DefaultConfigurableCacheFactory.Manager bmm =
                        (DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager();
                BackingMapManagerContext ctx = bmm.getContext();
                Map mapBack = bmm.getBackingMap("bar");
                // Assume that the key is still the same - "test"
                Binary binKey = (Binary) ctx.getKeyToInternalConverter().convert(entry.getKey());
                Binary binValue = (Binary) mapBack.get(binKey); // <-- the line that throws
                // convert value from internal and update
                Bar bar = (Bar) ctx.getValueFromInternalConverter().convert(binValue);
                bar.setBar(bar.getBar() + " updated");
                // update backing map
                binValue = (Binary) ctx.getValueToInternalConverter().convert(bar);
                mapBack.put(binKey, binValue);
            } catch (Throwable oops) {
                throw ensureRuntimeException(oops);
            }
            return null;
        }

        public static void main(String[] asArg) {
            try {
                NamedCache cacheFoo = CacheFactory.getCache("foo");
                NamedCache cacheBar = CacheFactory.getCache("bar");
                Foo foo = new Foo();
                foo.setFoo("initial foo");
                cacheFoo.put("test", foo);
                Bar bar = new Bar();
                bar.setBar("initial bar");
                cacheBar.put("test", bar);
                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));
                cacheFoo.invoke("test", new Main());
                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));
            } catch (Throwable oops) {
                err(oops);
            } finally {
                CacheFactory.shutdown();
            }
        }
    }

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK
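    A sketch of the write-through local cache JK describes (the JPA store class name is a placeholder; note the absence of a write-delay element, since <local-scheme> only supports write-through):

        <local-scheme>
            <scheme-name>local-write-through</scheme-name>
            <cachestore-scheme>
                <class-scheme>
                    <!-- placeholder for a JPA-based CacheStore implementation -->
                    <class-name>com.example.JpaCacheStore</class-name>
                </class-scheme>
            </cachestore-scheme>
        </local-scheme>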
