Backing Map Access: Cross-cache joins

Hi,
I have been experimenting with cross-cache joins using Entry Processors in Coherence 3.7.1.
(I have already sent a query to Dave Felcey regarding this - I will post any response from him here - but I just wondered if anyone else has had the same problem)
h3. Scenario
A simplified version of the problem is:
We have two NamedCaches, one called "PARENT_CACHE" and one called "CHILD_CACHE"
Each cache stores the following types of object respectively...

     class Parent implements PortableObject {
          long id;
          String description;
     }

     class Child implements PortableObject {
          long id;
          long parentId;
          String description;
     }
I want an entry processor that I can send to an instance of "Parent" and that will return the "Parent" object together with its associated "Child" objects. The reason I want this is that I do not want to make two out-of-process calls (one to get the parent and one to get its children; in the real world we will need to fetch several different child types, and we do not want one call per type). For example...
     class ExampleEntryProcessor extends AbstractProcessor implements PortableObject {
          public ParentAndChildrenResult process(Entry entry) {
               ...
          }
     }
I wrote an implementation of this based on Ben Stopford's blog (see here - particularly the post by "Jonathan Knight").
So I thought I needed something like this...
     public ParentAndChildrenResult process(Entry entry) {
          ParentAndChildrenResult result = new ParentAndChildrenResult();
          Parent parent = (Parent) entry.getValue();
          result.setParent(parent);
          Filter parentIdFilter = new EqualsFilter(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), parent.getId());
          BinaryEntry binaryEntry = (BinaryEntry) entry;
          Set<java.util.Map.Entry> childEntrySet = queryBackingMap("CHILD_CACHE", parentIdFilter, binaryEntry);
          Converter valueUpConverter = binaryEntry.getContext().getValueFromInternalConverter();
          for (java.util.Map.Entry childEntry : childEntrySet) {
               result.addChild((Child) valueUpConverter.convert(childEntry.getValue()));
          }
          return result;
     }
     public Set<Map.Entry> queryBackingMap(String nameOfCacheToSearch, Filter filter, BinaryEntry entry) {
          BackingMapContext backingMapContext = entry.getContext().getBackingMapContext(nameOfCacheToSearch);
          Map indexMap = backingMapContext.getIndexMap();
          return InvocableMapHelper.query(backingMapContext.getBackingMap(), indexMap, filter, true, false, null);
     }
I set up key association so I can ensure that the child objects are on the same node as the parent; the keys for each cache look like this...
     class Key implements KeyAssociation, PortableObject {
          private long id;
          private long associatedId;

          public Key() {
          }

          public Key(Parent parent) {
               this.id = parent.getId();
               this.associatedId = parent.getId();
          }

          public Key(Child child) {
               this.id = child.getId();
               this.associatedId = child.getParentId();
          }

          public Object getAssociatedKey() {
               return associatedId;
          }
     }
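To make the co-location concrete, here is a minimal plain-Java sketch (no Coherence classes; `partitionOf` is a made-up stand-in for the real partition assignment a KeyAssociation-aware service performs) showing why a child key that associates to its parentId lands in the same partition as its parent's key:

```java
import java.util.Objects;

public class AffinitySketch {
    static final int PARTITION_COUNT = 257;

    // Mirrors the Key class above: the entry's own id plus the id used for association.
    static class Key {
        final long id;
        final long associatedId;
        Key(long id, long associatedId) { this.id = id; this.associatedId = associatedId; }
        Object getAssociatedKey() { return associatedId; }
    }

    // Hypothetical stand-in for partition assignment: the partition is derived
    // from the associated key only, never from the full key.
    static int partitionOf(Key key) {
        return Math.floorMod(Objects.hashCode(key.getAssociatedKey()), PARTITION_COUNT);
    }

    public static void main(String[] args) {
        Key parentKey = new Key(42L, 42L); // parent associates to its own id
        Key childKey  = new Key(7L, 42L);  // child associates to its parentId
        // Both resolve to the same partition, hence the same storage member.
        System.out.println(partitionOf(parentKey) == partitionOf(childKey)); // prints "true"
    }
}
```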
When I send this entry processor to a parent object, I am getting the following exception when the "InvocableMapHelper.query" method is called...
"Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry"
I was not expecting this, as our cluster is POF-enabled and I thought that the backing maps always contained BinaryEntries.
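For what it's worth, the workaround usually suggested for exactly this symptom (a sketch, assuming the standard NamedCache.addIndex API; not verified against this test case): give the child cache an index on the same PofExtractor. With a matching index in place, InvocableMapHelper.query can satisfy the EqualsFilter from the index map via applyIndex, so PofExtractor.extractFromEntry is never invoked against the non-BinaryEntry wrappers that trigger the exception.

```java
// Sketch: create the index once (e.g. from the client at startup), so the
// queryBackingMap call above can be answered from the index map alone.
NamedCache childCache = CacheFactory.getCache("CHILD_CACHE");
childCache.addIndex(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), /*fOrdered*/ false, /*comparator*/ null);
```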
Has anyone else had a similar problem? Has anyone found any simple examples of how to do this anywhere on the web that work?
Once I figure out how to get this to work, I want to post the solution somewhere (probably here) because there are bound to be other people who want to do something similar to this.
Thanks in advance,
-Bret

Forgot the output...
STORAGE NODE OUTPUT
2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
Oracle Coherence Version 3.7.1.0 Build 27797
Grid Edition: Development mode
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
2012-01-12 18:38:10.016/0.922 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
2012-01-12 18:38:10.344/1.250 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
2012-01-12 18:38:23.610/14.516 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8088 using SystemSocketProvider
2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "LOCAL" with Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) UID=0xAC17001A00000134D2FFC167532F1F98
2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
WellKnownAddressList(Size=1,
WKA{Address=172.23.0.26, Port=8088}
MasterMemberSet(
ThisMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
ActualMemberSet=MemberSet(Size=1
Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
MemberId|ServiceVersion|ServiceJoined|MemberState
1|3.7.1|2012-01-12 18:38:54.454|JOINED
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0
TcpRing{Connections=[]}
IpMonitor{AddressListSize=0}
2012-01-12 18:38:54.501/45.407 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
2012-01-12 18:38:54.516/45.422 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
2012-01-12 18:38:54.579/45.485 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
2012-01-12 18:38:54.876/45.782 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
2012-01-12 18:38:54.891/45.797 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:InvocationService, member=1): Service InvocationService joined the cluster with senior service member 1
2012-01-12 18:38:54.907/45.813 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
Services
ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=3, Version=3.1, OldestMemberId=1}
Started DefaultCacheServer...
2012-01-12 18:39:03.438/54.344 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) joined Cluster with senior member 1
2012-01-12 18:39:03.610/54.516 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service Management with senior member 1
2012-01-12 18:39:03.907/54.813 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCache with senior member 1
2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): TcpRing disconnected from Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) due to a peer departure; removing the member.
2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:04.032, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) left Cluster with senior member 1
PROCESS NODE OUTPUT
2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
Oracle Coherence Version 3.7.1.0 Build 27797
Grid Edition: Development mode
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
2012-01-12 18:39:02.407/0.469 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
2012-01-12 18:39:02.501/0.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
2012-01-12 18:39:03.063/1.125 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8090 using SystemSocketProvider
2012-01-12 18:39:03.438/1.500 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "LOCAL" with senior Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
WellKnownAddressList(Size=1,
WKA{Address=172.23.0.26, Port=8088}
MasterMemberSet(
ThisMember=Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
ActualMemberSet=MemberSet(Size=2
Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
MemberId|ServiceVersion|ServiceJoined|MemberState
1|3.7.1|2012-01-12 18:38:23.719|JOINED,
2|3.7.1|2012-01-12 18:39:03.477|JOINED
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0
TcpRing{Connections=[1]}
IpMonitor{AddressListSize=0}
2012-01-12 18:39:03.501/1.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
2012-01-12 18:39:03.532/1.594 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
2012-01-12 18:39:03.594/1.656 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
2012-01-12 18:39:03.891/1.953 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCache service on Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)) PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
at <process boundary>
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
Caused by: Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
at com.tangosol.util.extractor.PofExtractor.extractInternal(PofExtractor.java:175)
at com.tangosol.util.extractor.PofExtractor.extractFromEntry(PofExtractor.java:146)
at com.tangosol.util.InvocableMapHelper.extractFromEntry(InvocableMapHelper.java:315)
at com.tangosol.util.SimpleMapEntry.extract(SimpleMapEntry.java:168)
at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:93)
at com.tangosol.util.InvocableMapHelper.evaluateEntry(InvocableMapHelper.java:262)
at com.tangosol.util.InvocableMapHelper.query(InvocableMapHelper.java:452)
at test.backingmap.main.GetParentAndChildrenEntryProcessor.queryBackingMap(GetParentAndChildrenEntryProcessor.java:60)
at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:47)
at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:1)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:10)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
at <process boundary>
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
2012-01-12 18:39:04.016/2.078 Oracle Coherence GE 3.7.1.0 <D4> (thread=ShutdownHook, member=2): ShutdownHook: stopping cluster node
Ta,
-Bret

Similar Messages

  • Accessing index directly from backing map

    Hi Guys,
    I have an interesting problem and would greatly appreciate some input.
    I am trying to find the minimum or maximum values of a specific field from data held
    in the backing map for a cache.
    I have seen examples of PofValue being used to extract individual entries - or being used in
    Iterators, working through the entries in the backing map.
    However, I was wondering if there is a faster way to do this, given that the field I'm interested in
    has an ordered index created against it.
    My thoughts were along the lines of trying to extract the MapIndex entries for the field, and then just
    taking the first value (which, I'm assuming, would be the smallest since the index is ordered).
    Does this make sense, or do I need to abandon the idea of using the pre-existing index and just iterate through
    the PofValues? Is it even possible to access the index entries for values directly from the backing map?
    Many Thanks,
    Adrian.

    Hi Adrian,
    you should be able to write an index-aware filter that makes this really fast, provided you have a sorted index on the particular attribute, ordered (or reverse-ordered) according to how you want to find the least or greatest value. The following Filter implementation filters for the entries with the least extracted value:
     import java.util.Map;
     import java.util.Set;
     import java.util.SortedMap;
     import com.tangosol.util.Filter;
     import com.tangosol.util.MapIndex;
     import com.tangosol.util.ValueExtractor;
     import com.tangosol.util.filter.ExtractorFilter;
     import com.tangosol.util.filter.IndexAwareFilter;

     public class LeastExtractedValueFilter extends ExtractorFilter implements IndexAwareFilter {
        public LeastExtractedValueFilter() {
        }

        public LeastExtractedValueFilter(ValueExtractor extractor) {
           super(extractor);
        }

        @Override
        protected boolean evaluateExtracted(Object extractedValue) {
           throw new UnsupportedOperationException("This filter is only usable with a sorted index");
        }

        @Override
        public Filter applyIndex(Map mapIndexes, Set candidateKeys) {
           MapIndex mapIndex = (MapIndex) mapIndexes.get(m_extractor);
           if (mapIndex != null && mapIndex.isOrdered()) {
              SortedMap reverseMap = (SortedMap) mapIndex.getIndexContents();
              if (reverseMap.isEmpty()) {
                 candidateKeys.clear();
              } else {
                 candidateKeys.retainAll((Set) reverseMap.values().iterator().next());
              }
              return null;
           }
           throw new UnsupportedOperationException("This filter is only usable with a sorted index");
        }

        @Override
        public int calculateEffectiveness(Map mapIndexes, Set candidateKeys) {
           MapIndex mapIndex = (MapIndex) mapIndexes.get(m_extractor);
           if (mapIndex != null && mapIndex.isOrdered()) {
              return 1;
           }
           throw new UnsupportedOperationException("This filter is only usable with a sorted index");
        }
     }
    You can then try to extract whatever you want from the candidates in a parallel aggregator, e.g.:
     import java.io.IOException;
     import java.io.Serializable;
     import java.util.Collection;
     import java.util.Set;
     import com.tangosol.io.pof.PofReader;
     import com.tangosol.io.pof.PofWriter;
     import com.tangosol.io.pof.PortableObject;
     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.ValueExtractor;
     import com.tangosol.util.InvocableMap.EntryAggregator;
     import com.tangosol.util.InvocableMap.ParallelAwareAggregator;

     public class FirstEntryExtractorAggregator implements ParallelAwareAggregator {
        public static class Parallel extends FirstEntryExtractorAggregator implements PortableObject, Serializable {
           public Parallel() {
              super(null);
           }

           public Parallel(ValueExtractor extractor) {
              super(extractor);
           }

           @Override
           public void readExternal(PofReader in) throws IOException {
              extractor = (ValueExtractor) in.readObject(0);
           }

           @Override
           public void writeExternal(PofWriter out) throws IOException {
              out.writeObject(0, extractor);
           }
        }

        protected ValueExtractor extractor;

        public FirstEntryExtractorAggregator(ValueExtractor extractor) {
           super();
           this.extractor = extractor;
        }

        @Override
        public Object aggregate(Set entries) {
           if (!entries.isEmpty()) {
              InvocableMap.Entry entry = (InvocableMap.Entry) entries.iterator().next();
              return entry.extract(extractor);
           }
           return null;
        }

        @Override
        public Object aggregateResults(Collection collResults) {
           Comparable least = null;
           for (Object object : collResults) {
              least = leastOf(least, object);
           }
           return least;
        }

        private Comparable leastOf(Comparable least, Object other) {
           if (other == null) return least;
           Comparable compOther = (Comparable) other;
           if (least == null) return compOther;
           return compOther.compareTo(least) > 0 ? least : compOther;
        }

        @Override
        public EntryAggregator getParallelAggregator() {
           return new Parallel(extractor);
        }
     }
    You would then run this as:
         Object leastExtractedValue = cache.aggregate(new LeastExtractedValueFilter(extractor), new FirstEntryExtractorAggregator(extractor));
    Best regards,
    Robert
    Edited by: robvarga on Sep 21, 2010 3:00 PM
    Edited by: robvarga on Sep 21, 2010 7:44 PM

  • Shoudn't 'put with expiry' throw with read-write backing map?

    Good morning all,
    If I run this client code:
         cache.put(1, 1, CacheMap.EXPIRY_NEVER);
    I'd expect this entry to never expire. Yet with a read-write backing map it expires immediately, which led me to dig a bit more...
    According to the [java docs|http://download.oracle.com/otn_hosted_doc/coherence/330/com/tangosol/net/NamedCache.html#put%28java.lang.Object,%20java.lang.Object,%20long%29] support for this call is patchy:
    >
    Note: Though NamedCache interface extends CacheMap, not all implementations currently support this functionality.
    For example, if a cache is configured to be a replicated, optimistic or distributed cache then its backing map must be configured as a local cache. If a cache is configured to be a near cache then the front map must to be configured as a local cache and the back map must support this feature as well, typically by being a distributed cache backed by a local cache (as above.)
    >
    OK, so the docs even say this won't work. But shouldn't it throw an unsupported op exception? Is this a bug or my mistake?
    rw-scheme config:
    <backing-map-scheme>
      <read-write-backing-map-scheme>
         <internal-cache-scheme>
            <local-scheme/>
         </internal-cache-scheme>
         <cachestore-scheme>
        </cachestore-scheme>
        <write-delay>1ms</write-delay>
      </read-write-backing-map-scheme>
    </backing-map-scheme>
    Edited by: BigAndy on 04-Dec-2012 04:28

    Quick update on this - I've raised an SR and Oracle have confirmed this is a bug and are looking into a fix.

  • What is backing map in Java?

    What is backing map in Java?
    Thank you

    A backing map is a map that is used to implement another map.
    e.g.
    - A thread-safe map can be a wrapper around a backing map which actually holds the entries.
    - A cache usually has a backing map; the cache itself is also a map, but smarter.
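    A minimal plain-Java illustration of the idea (nothing Coherence-specific): the synchronized wrapper is the map you hand out, while the HashMap behind it is the backing map that actually holds the entries.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class BackingMapDemo {
    // Builds a thread-safe view over the given backing map.
    static Map<String, Integer> wrap(Map<String, Integer> backingMap) {
        return Collections.synchronizedMap(backingMap);
    }

    public static void main(String[] args) {
        Map<String, Integer> backingMap = new HashMap<>();  // actually holds the entries
        Map<String, Integer> wrapper = wrap(backingMap);    // the "smarter" map you use

        wrapper.put("a", 1);

        // The entry written through the wrapper really lives in the backing map.
        System.out.println(backingMap.get("a")); // prints "1"
    }
}
```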

  • Could you explain how the read-write-backing-map-scheme is configured in...

    Could you explain how the read-write-backing-map-scheme is configured in the following example?
    <backing-map-scheme>
        <read-write-backing-map-scheme>
         <internal-cache-scheme>
          <class-scheme>
           <class-name>com.tangosol.util.ObservableHashMap</class-name>
          </class-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
          <class-scheme>
           <class-name>coherence.DBCacheStore</class-name>
           <init-params>
            <init-param>
             <param-type>java.lang.String</param-type>
             <param-value>CATALOG</param-value>
            </init-param>
           </init-params>
          </class-scheme>
         </cachestore-scheme>
         <read-only>false</read-only>
         <write-delay-seconds>0</write-delay-seconds>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    Edited by: qkc on 30-Nov-2009 10:48

    Thank you very much for the reply.
    In the following example, the cachestore-scheme element is not specified in the <read-write-backing-map-scheme> section. Instead, a class-name of ControllerBackingMap is designated. What is the result?
    If ControllerBackingMap is a persistence entity, is the result the same as with a cachestore-scheme?
    <distributed-scheme>
                <scheme-name>with-rw-bm</scheme-name>
                <service-name>unlimited-partitioned</service-name>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <scheme-ref>base-rw-bm</scheme-ref>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <read-write-backing-map-scheme>
                <scheme-name>base-rw-bm</scheme-name>
                <class-name>ControllerBackingMap</class-name>
                <internal-cache-scheme>
                    <local-scheme/>
                </internal-cache-scheme>
            </read-write-backing-map-scheme>

  • Behaviour of read-write-backing-map-scheme in combination with local-scheme

    Hi,
    I have the following questions related to the distributed cache defined below:
    1. If I start putting unlimited number of entries to this cache, will I run out of memory or the local-scheme has some default size limit?
    2. What is default eviction policy for the local-scheme?
    <distributed-scheme>
    <scheme-name>A</scheme-name>
    <service-name>simple_service</service-name>
    <backing-map-scheme>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <local-scheme></local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>SomeCacheStore</class-name>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    </distributed-scheme>
    Best regards,
    Jarek

    Hi,
    The default value for expiry-delay is zero which implies no expiry.
    http://wiki.tangosol.com/display/COH34UG/local-scheme
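    As I understand it, <high-units> also defaults to zero, which means no size limit, so an unbounded stream of puts can indeed exhaust memory unless you set explicit limits. A sketch with illustrative (non-default) values:

    ```xml
    <local-scheme>
        <scheme-name>limited-local</scheme-name>
        <!-- HYBRID (a blend of LRU and LFU) is the default policy;
             it only kicks in once high-units is reached -->
        <eviction-policy>HYBRID</eviction-policy>
        <!-- cap the backing map at 10000 entries (0 = unlimited) -->
        <high-units>10000</high-units>
        <!-- expire entries an hour after insertion (0 = never) -->
        <expiry-delay>1h</expiry-delay>
    </local-scheme>
    ```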
    Thanks,
    Tom

  • Backing Map Insert Error

    Hi.
    I'm trying to make an insert into the backing map of another cache inside the EntryProcessor, but get the following error: An entry was inserted into the backing map for the partitioned cache "another-cache" that is not owned by this member; the entry will be removed.
    So the EP is invoked against one cache, but the insert inside it goes to a different cache.
    Here is the code for it, please let me know what can fix the error above:
         public Object process(InvocableMap.Entry entry) {
             BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
             BackingMapManagerContext ctx2 = ctx.getBackingMapContext("another-cache").getManagerContext();
             Map childCache = ctx.getBackingMapContext("another-cache").getBackingMap();
             childCache.put(ctx2.getKeyToInternalConverter().convert(KeyObject),
                            ctx2.getValueToInternalConverter().convert(ValueObject));
         }

    Hi,
    I'm trying to make an insert into the backing map of another cache inside the EntryProcessor, but get the following error: An entry was inserted into the backing map for the partitioned cache "another-cache" that is not owned by this member; the entry will be removed.
    So, EP is invoked against one cache, but the insert inside it occurs to a different cache.
    Here is the code for it, please let me know what can fix the error above:
         public Object process(InvocableMap.Entry entry) {
             BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
             BackingMapManagerContext ctx2 = ctx.getBackingMapContext("another-cache").getManagerContext();
             Map childCache = ctx.getBackingMapContext("another-cache").getBackingMap();
             childCache.put(ctx2.getKeyToInternalConverter().convert(KeyObject),
                            ctx2.getValueToInternalConverter().convert(ValueObject));
         }
    The reason for the error is that the owner of the key "KeyObject" is not the same node as the owner of the entry "entry". If you want to implement this then you need to define data affinity between the objects of both your caches so that they are collocated on the same node.
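    To make the collocation requirement concrete, here is a minimal, self-contained Java sketch of how key association keeps a child entry in the same partition as its parent. The KeyAssociation interface below mirrors Coherence's com.tangosol.net.cache.KeyAssociation, but the partition function is purely illustrative, not Coherence's actual algorithm:

    ```java
    public class AffinityDemo {
        // Mirrors the contract of Coherence's KeyAssociation interface.
        public interface KeyAssociation {
            Object getAssociatedKey();
        }

        // A child key associates itself with its parent's id, so the
        // partitioning service hashes the parent id, not the child key.
        public static class ChildKey implements KeyAssociation {
            final long childId;
            final long parentId;

            public ChildKey(long childId, long parentId) {
                this.childId = childId;
                this.parentId = parentId;
            }

            public Object getAssociatedKey() {
                return parentId;
            }
        }

        public static final int PARTITION_COUNT = 257; // Coherence's default

        // Illustrative partition assignment: when a key declares an
        // association, hash the associated key instead of the key itself.
        public static int partitionOf(Object key) {
            Object effective = (key instanceof KeyAssociation)
                    ? ((KeyAssociation) key).getAssociatedKey()
                    : key;
            return Math.floorMod(effective.hashCode(), PARTITION_COUNT);
        }

        public static void main(String[] args) {
            Long parentKey = 42L;
            ChildKey childKey = new ChildKey(1001L, 42L);
            // Both keys resolve to the parent's partition, so an entry
            // processor invoked on the parent also owns the child entry.
            System.out.println(partitionOf(parentKey) == partitionOf(childKey)); // prints "true"
        }
    }
    ```

    With real Coherence you would implement com.tangosol.net.cache.KeyAssociation on the cache key class (or configure a KeyAssociator on the service) so that the partitioned service applies exactly this kind of redirection.
    
    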
    HTH
    Cheers,
    _NJ

  • Backing map v.s. NamedCache

    Hello all:
    How does a backing map relate to a named cache conceptually? As quoted: "When a local storage implementation is used by Coherence to store replicated or distributed data, it is called a backing map because Coherence is actually backed by that local storage implementation." I thought the named cache is the one that stores the data.
    thanks,
    John

    Hi,
    Johnny_hunter wrote:
    1. The BackingMap is the server side data structure that holds the actual data. I noticed this comment in the tutorial too. What does it mean by server? I thought it's a de-centralized topology. Cache nodes are members of a cluster; they are all equal citizens. Of course I know that if local-storage=false for some nodes in a distributed topology, the data will be stored elsewhere in another instance, say, of DefaultCacheServer. Is that the server you meant?
    By server it means the storage enabled cluster members (i.e. JVMs). In some ways cluster members are equal citizens but in other ways they are not. For example, in a partitioned cache they each store a sub-set of the data. Each storage enabled JVM in the cluster will have a backing map for each cache. These backing maps hold the data that is owned by that JVM.
    Johnny_hunter wrote:
    2. NamedCache.get() call naturally causes a Map.get() call on a corresponding BackingMap; NamedCache.invoke() call may cause a sequence of Map.get() followed by the Map.put(), NamedCache.keySet(filter) call may cause an Map.entrySet().iterator() loop, and so on.
    It looks like a delegate pattern to me. AFAIK, NamedCache instances are Map implementations themselves; if they delegate those ops to the corresponding ops on a backing map, what about themselves as maps?
    The delegate pattern is probably as good a way as any of looking at it. In client code you have an instance of a NamedCache. When you make calls to that, the call is delegated to the cache and backing map in the JVM in the cluster that owns the data you are manipulating.
    JK

  • OBIEE 11g caching question - cross database joins

    Hi, I'm seeing something strange (a.k.a. not wanted) in OBIEE (11g, not sure that version matters).
    I have a simple data mart that contains spend information. The supplier dimension contains keys that can be used to join it to detailed supplier information and supplier address information in our ERP system (that sits in a different database / on a different box). In the OBIEE physical layer I've created a cross database join between the supplier dimension table and the ERP tables that contain the address info.
    Here's the odd behavior I'm seeing. If I write an answers request to select the supplier, some address info, and total spend for fiscal year 2010, I'm seeing OBIEE fire off two queries (this I expect):
    A) Select supplier, address key, and total spend for fiscal year = 2010 against the spend mart
    B) select address_key and associated address info against the ERP system (no limit on this query, it pulls back all rows from the address table)
    OBIEE then does an internal join itself and serves up the results, everything is correct. But here's what's "wrong" - if I then run the exact same answers request, but change the fiscal year to 2009, I again see OBIEE firing off the two queries. What I expected and/or want to see is that, since the entire result set from query #B doesn't change at all, that it wouldn't have to rerun this query. However, it seems to be.
    Is there any way to get #B to cache so that, for any subsequent query that contains supplier address info, OBIEE can pull it from cache instead of rerunning the query (which is pretty slow)? I really thought it would do that, but it doesn't seem to be.
    Thanks!
    Scott

    Hi,
    Could you give a bit more context for this case? Is the table in SQL Server a dimension, and is the one in the Oracle DB a fact? I am guessing you have set up a driving table here. Have you tried taking it off and letting the BI Server do the filtering in memory?
    -Dhar

  • Get an error when accessing the entry from the Backing Map directly

    We are using some sample code from Oracle to access Objects associated via KeyAssociation directly from the Backing Map.
    Occasionally we get the error posted below. Can someone shed light on what this error means ?
    I'm doing a Get on the Backing Map directly.
    Thanks,
    J
    An entry was inserted into the backing map for the partitioned cache "Customerl" that is not owned by this member; the entry will be removed.
    ReadWriteBackingMap$5{ReadWriteBackingMap inserted: key=Binary(length=75, value=0x---binary key data removed ----), value=Binary(length=691, value=0x---binary value data removed---)), synthetic}
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.onBackingMapEvent(DistributedCache.CDB:152)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage$PrimaryListener.entryInserted(DistributedCache.CDB:1)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.dispatch(ReadWriteBackingMap.java:2064)
         at com.tangosol.net.cache.ReadWriteBackingMap$InternalMapListener.entryInserted(ReadWriteBackingMap.java:1903)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:164)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1718)
         at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1786)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:253)
         at com.tangosol.net.cache.OldCache.put(OldCache.java:221)
         at com.tangosol.net.cache.ReadWriteBackingMap.get(ReadWriteBackingMap.java:721)
         at

    Here is the sample we adapted. We have adapted the code below to our specific cache. I have highlighted the line that throws the exception. This exception doesn't occur all the time; we saw it about 10 times yesterday and 2 times today.
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.CacheService;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Binary;
    import com.tangosol.util.ClassHelper;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;
    import java.io.Serializable;
    import java.util.Map;

    /**
     * @author dimitri
     */
    public class Main extends AbstractProcessor {
        public static class Foo implements Serializable {
            String m_sFoo;
            public String getFoo()          { return m_sFoo; }
            public void setFoo(String sFoo) { m_sFoo = sFoo; }
            public String toString()        { return "Foo[foo=" + m_sFoo + "]"; }
        }

        public static class Bar implements Serializable {
            String m_sBar;
            public String getBar()          { return m_sBar; }
            public void setBar(String sBar) { m_sBar = sBar; }
            public String toString()        { return "Bar[bar=" + m_sBar + "]"; }
        }

        public Object process(InvocableMap.Entry entry) {
            try {
                // We are invoked on foo - update it.
                Foo foo = (Foo) entry.getValue();
                foo.setFoo(foo.getFoo() + " updated");
                entry.setValue(foo);

                // Now update Bar
                Object oStorage = ClassHelper.invoke(entry, "getStorage", null);
                CacheService service = (CacheService) ClassHelper.invoke(oStorage, "getService", null);
                DefaultConfigurableCacheFactory.Manager bmm =
                        (DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager();
                BackingMapManagerContext ctx = bmm.getContext();
                Map mapBack = bmm.getBackingMap("bar");

                // Assume that the key is still the same - "test"
                Binary binKey = (Binary) ctx.getKeyToInternalConverter().convert(entry.getKey());
                Binary binValue = (Binary) mapBack.get(binKey); // <-- the line that throws the exception

                // convert value from internal and update
                Bar bar = (Bar) ctx.getValueFromInternalConverter().convert(binValue);
                bar.setBar(bar.getBar() + " updated");

                // update backing map
                binValue = (Binary) ctx.getValueToInternalConverter().convert(bar);
                mapBack.put(binKey, binValue);
            } catch (Throwable oops) {
                throw ensureRuntimeException(oops);
            }
            return null;
        }

        public static void main(String[] asArg) {
            try {
                NamedCache cacheFoo = CacheFactory.getCache("foo");
                NamedCache cacheBar = CacheFactory.getCache("bar");

                Foo foo = new Foo();
                foo.setFoo("initial foo");
                cacheFoo.put("test", foo);

                Bar bar = new Bar();
                bar.setBar("initial bar");
                cacheBar.put("test", bar);

                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));

                cacheFoo.invoke("test", new Main());

                System.out.println(cacheFoo.get("test"));
                System.out.println(cacheBar.get("test"));
            } catch (Throwable oops) {
                err(oops);
            } finally {
                CacheFactory.shutdown();
            }
        }
    }

  • How to get cache statistics of back map in a near cache

    Hi-
    I have a near cache configured.
    Local cache makes the front cache and Distributed cache makes the back cache.
    I am able to get statistics of front cache using coherence API, like as below.
    NamedCache coherenceCache = CacheUtils.getNamedCache(CacheUtils.getCacheRegionPrefix() + region.getRegionName());
    if (coherenceCache instanceof NearCache) {
        NearCache nearCache = (NearCache) coherenceCache;
        CachingMap cachingMap = (CachingMap) nearCache;
        CacheStatistics frontStatistics = ((LocalCache) nearCache.getFrontMap()).getCacheStatistics();
    }
    How do I get the statistics of the back map?
    If I do the following, it gives statistics that combine data for both front and back (that is what I infer based on my understanding - it increments the get count in both front and back even when the get is served from the front cache):
                   CacheStatistics backStatistics = cachingMap.getCacheStatistics();
    So is there any way to get statistics for the back cache alone with the Coherence API for a near cache?
    Any help on this would be much appreciated.
    thanks

    Hi,
    NamedCache coherenceCache = CacheUtils.getNamedCache(CacheUtils.getCacheRegionPrefix() + region.getRegionName());
    if (coherenceCache instanceof NearCache) {
        NearCache nearCache = (NearCache) coherenceCache;
        CachingMap cachingMap = (CachingMap) nearCache;
        Map backingMap = cachingMap.getBackMap();
        // or, equivalently:
        // Map backingMap = nearCache.getBackMap();
    }
    getFrontMap() and getBackMap() are methods from CachingMap.
    Thanks & Regards,
    Murali.

  • Distributed cache with a backing-map as another distributed cache

    Hi All,
    Is it possible to create a distributed cache with a backing-map-scheme that uses another distributed cache with local-storage disabled?
    Please let me know how to configure this type of cache.
    regards
    S

    Hi Cameron,
    I am trying to create a distributed-scheme with a backing-map scheme. Is it possible to configure another distributed cache as a backing map scheme for a cache?
    <distributed-scheme>
        <scheme-name>MyDistCache-2</scheme-name>
        <service-name>MyDistCacheService-2</service-name>
        <backing-map-scheme>
            <external-scheme>
                <scheme-name>MyDistCache-3</scheme-name>
            </external-scheme>
        </backing-map-scheme>
    </distributed-scheme>
    <distributed-scheme>
        <scheme-name>MyDistCache-3</scheme-name>
        <service-name>MyDistBackCacheService-3</service-name>
        <local-storage>false</local-storage>
    </distributed-scheme>
    Please correct my understanding.
    Regards
    Srini

  • Why can't a backing-map-scheme be specified in caching-schemes?

    Most other types of schemes except backing-map-scheme can be specified in the caching-schemes section of the cache configuration XML file and after that be reused in other scheme definitions. What is the motivation for excluding the backing-map-scheme?
    /Magnus

    Hi Magnus,
    you can specify an "abstract" service-type scheme (e.g. distributed-scheme) containing the backing map scheme instead of the backing map scheme itself.
    I know it is not as flexible as having a backing map scheme separately, but it is almost as good.
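    For example (all names here are illustrative), the "abstract" parent scheme carries the backing map definition, and concrete schemes reference it via scheme-ref:

    ```xml
    <!-- reusable parent scheme carrying the backing map definition -->
    <distributed-scheme>
        <scheme-name>base-distributed</scheme-name>
        <backing-map-scheme>
            <local-scheme/>
        </backing-map-scheme>
    </distributed-scheme>

    <!-- concrete scheme reusing the parent, overriding what it needs -->
    <distributed-scheme>
        <scheme-name>my-cache-scheme</scheme-name>
        <scheme-ref>base-distributed</scheme-ref>
        <service-name>MyService</service-name>
        <autostart>true</autostart>
    </distributed-scheme>
    ```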
    Best regards,
    Robert

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
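    A sketch of what that looks like (the class name is a placeholder; note there are no write-delay settings because write-behind is not available for <local-scheme>):

    ```xml
    <local-scheme>
        <scheme-name>local-with-store</scheme-name>
        <cachestore-scheme>
            <class-scheme>
                <!-- hypothetical CacheStore implementation -->
                <class-name>com.example.JpaCacheStore</class-name>
            </class-scheme>
        </cachestore-scheme>
    </local-scheme>
    ```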
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK

  • Backing Map implementations

    We are planning to use both distributed and replicated caches, and both will be behind a single EJB call (meaning I call an EJB method, which in turn calls the two cache methods and returns data from two different tables), and the computation has to be done based on the values we have in the objects. My question is: can I use a read-write backing map for both caches, or what are the best approaches?

    Hi Santharam,
    Every replicated and distributed cache has to have a backing map, because that is where it actually will keep (maintain) its data. You will use the read/write backing maps if the data needs to be loaded from the database when it isn't in the cache and someone asks for the data; in this case, the cache will load it from the database automatically and pretend that it was in the cache all along (we call it "lazy loading".) Also, if the data can be modified and put back into the cache, the cache will write those changes back to the database if you choose.
    So the short answer is to use the read/write backing map if you need to pull data from the database.
    The long answer is that the read/write backing map is most effective with the distributed cache service, because each entry in the distributed cache has one owner in the cluster, and thus is very effectively managed vis-a-vis the database. To use the distributed cache in a way that performs like the replicated cache, I suggest using a near cache in front of the distributed cache. That way any server can access the data locally (up to the size limit of their near cache) and the data will stay in sync as well using cache events (this is automatic -- called Seppuku).
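    A near cache of that shape might be configured along these lines (a sketch with illustrative names and sizes):

    ```xml
    <near-scheme>
        <scheme-name>near-example</scheme-name>
        <!-- small local front map on each client JVM -->
        <front-scheme>
            <local-scheme>
                <high-units>1000</high-units>
            </local-scheme>
        </front-scheme>
        <!-- partitioned back tier with a read/write backing map -->
        <back-scheme>
            <distributed-scheme>
                <scheme-name>distributed-example</scheme-name>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <internal-cache-scheme>
                            <local-scheme/>
                        </internal-cache-scheme>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
            </distributed-scheme>
        </back-scheme>
        <!-- keep the front map in sync via cache events -->
        <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
    ```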
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
