Cache joins

I have two caches, each containing data from a separate database table.
Is there a way to join the caches the way one would join SQL tables?
i.e. select * from table1 inner join table2 on table1.a = table2.a
.... etc.
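There is no SQL-style join primitive across caches, but in plain Java terms the inner join above is a hash join over two maps: iterate one side and probe the other on the join key. A minimal sketch with hypothetical row types (two `Map`s standing in for the two caches):

```java
import java.util.*;

public class CacheJoin {
    // Inner-joins rows of "table1" and "table2" on attribute a,
    // returning concatenated row descriptions (hypothetical row model).
    public static List<String> innerJoinOnA(Map<String, String> table1ByA,
                                            Map<String, String> table2ByA) {
        List<String> joined = new ArrayList<>();
        for (Map.Entry<String, String> row1 : table1ByA.entrySet()) {
            // probe the second map on the join key; a miss means no joined row
            String row2 = table2ByA.get(row1.getKey());
            if (row2 != null) {
                joined.add(row1.getValue() + "|" + row2);
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> t1 = Map.of("a1", "left1", "a2", "left2");
        Map<String, String> t2 = Map.of("a2", "right2", "a3", "right3");
        System.out.println(innerJoinOnA(t1, t2)); // [left2|right2]
    }
}
```

In a distributed cache this only works efficiently if the rows being joined are colocated (e.g. via key association), which is what the posts below explore.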

Hi,
Can you tell me if this is still the case? Your last post is quite old and I was wondering if the situation may have changed in subsequent releases.
Thanks

Similar Messages

  • Backing Map Access : Cross-cache joins

    Hi,
    I have been experimenting with cross-cache joins using Entry Processors in Coherence 3.7.1.
    (I have already sent a query to Dave Felcey regarding this - I will post any response from him here - but I just wondered if anyone else has had the same problem)
    Scenario
    A simplified version of the problem is:
    We have two NamedCaches, one called "PARENT_CACHE" and one called "CHILD_CACHE"
    Each cache stores the following types of object respectively...

        class Parent implements PortableObject {
            long id;
            String description;
        }

        class Child implements PortableObject {
            long id;
            long parentId;
            String description;
        }
    I want an entry processor that I can send to an instance of "Parent" that will return the "Parent" object together with its associated "Child" objects. The reason is that I do not want to make two out-of-process calls (one to get the parent and one to get its children); in the real world we will need to get several different child types and do not want to do one call for each type. For example...

        class ExampleEntryProcessor extends AbstractProcessor implements PortableObject {
            public ParentAndChildrenResult process(Entry entry) {
                ...
            }
        }
    I wrote an implementation of this based on Ben Stopford's blog (see here - particularly the post by "Jonathan Knight")
    So I thought I needed something like this...
        public ParentAndChildrenResult process(Entry entry) {
            ParentAndChildrenResult result = new ParentAndChildrenResult();
            Parent parent = (Parent) entry.getValue();
            result.setParent(parent);
            Filter parentIdFilter = new EqualsFilter(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), parent.getId());
            BinaryEntry binaryEntry = (BinaryEntry) entry;
            Set<java.util.Map.Entry> childEntrySet = queryBackingMap("CHILD_CACHE", parentIdFilter, binaryEntry);
            Converter valueUpConverter = binaryEntry.getContext().getValueFromInternalConverter();
            for (java.util.Map.Entry childEntry : childEntrySet) {
                result.addChild((Child) valueUpConverter.convert(childEntry.getValue()));
            }
            return result;
        }

        public Set<java.util.Map.Entry> queryBackingMap(String nameOfCacheToSearch, Filter filter, BinaryEntry entry) {
            BackingMapContext backingMapContext = entry.getContext().getBackingMapContext(nameOfCacheToSearch);
            Map indexMap = backingMapContext.getIndexMap();
            return InvocableMapHelper.query(backingMapContext.getBackingMap(), indexMap, filter, true, false, null);
        }
    I set up key association so I can ensure that the child objects are stored on the same node as the parent; the keys for each cache look like this...

        class Key implements KeyAssociation, PortableObject {
            private long id;
            private long associatedId;

            public Key() {
            }

            public Key(Parent parent) {
                this.id = parent.getId();
                this.associatedId = parent.getId();
            }

            public Key(Child child) {
                this.id = child.getId();
                this.associatedId = child.getParentId();
            }

            public Object getAssociatedKey() {
                return associatedId;
            }
        }
    When I send this entry processor to a parent object, I am getting the following exception when the "InvocableMapHelper.query" method is called...
    "Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry"
    I was not expecting this as our cluster is POF enabled and I thought that the backing maps always contained BinaryEntries.
    Has anyone else had a similar problem? Has anyone found any simple examples of how to do this anywhere on the web that work?
    Once I figure out how to get this to work, I want to post the solution somewhere (probably here) because there are bound to be other people who want to do something similar to this.
    Thanks in advance,
    -Bret

    Forgot the output...
    STORAGE NODE OUTPUT
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:38:10.016/0.922 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:10.344/1.250 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:38:23.610/14.516 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8088 using SystemSocketProvider
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "LOCAL" with Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) UID=0xAC17001A00000134D2FFC167532F1F98
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:54.454|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:38:54.501/45.407 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:54.516/45.422 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:38:54.579/45.485 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-01-12 18:38:54.876/45.782 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2012-01-12 18:38:54.891/45.797 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:InvocationService, member=1): Service InvocationService joined the cluster with senior service member 1
    2012-01-12 18:38:54.907/45.813 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
    Services
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
    InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=3, Version=3.1, OldestMemberId=1}
    Started DefaultCacheServer...
    2012-01-12 18:39:03.438/54.344 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) joined Cluster with senior member 1
    2012-01-12 18:39:03.610/54.516 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service Management with senior member 1
    2012-01-12 18:39:03.907/54.813 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): TcpRing disconnected from Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) due to a peer departure; removing the member.
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:04.032, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) left Cluster with senior member 1
    PROCESS NODE OUTPUT
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:39:02.407/0.469 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:02.501/0.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:39:03.063/1.125 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8090 using SystemSocketProvider
    2012-01-12 18:39:03.438/1.500 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "LOCAL" with senior Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=2
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:23.719|JOINED,
    2|3.7.1|2012-01-12 18:39:03.477|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[1]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:39:03.501/1.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:03.532/1.594 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:39:03.594/1.656 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2012-01-12 18:39:03.891/1.953 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCache service on Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)) PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    Caused by: Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.extractor.PofExtractor.extractInternal(PofExtractor.java:175)
    at com.tangosol.util.extractor.PofExtractor.extractFromEntry(PofExtractor.java:146)
    at com.tangosol.util.InvocableMapHelper.extractFromEntry(InvocableMapHelper.java:315)
    at com.tangosol.util.SimpleMapEntry.extract(SimpleMapEntry.java:168)
    at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:93)
    at com.tangosol.util.InvocableMapHelper.evaluateEntry(InvocableMapHelper.java:262)
    at com.tangosol.util.InvocableMapHelper.query(InvocableMapHelper.java:452)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.queryBackingMap(GetParentAndChildrenEntryProcessor.java:60)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:47)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    2012-01-12 18:39:04.016/2.078 Oracle Coherence GE 3.7.1.0 <D4> (thread=ShutdownHook, member=2): ShutdownHook: stopping cluster node
    Ta,
    -Bret

  • OSB result caching with Coherence Out of process

    Existing setup:
    Oracle Fusion Middleware SOA 11g domain with
    1 WebLogic cluster
    1 OSB cluster
    We have an out-of-process Coherence cluster, with caches already defined, which is working fine in production.
    The requirement is that the development team would like to use the OSB result caching feature, and we are having a hard time configuring the OSB result cache to join our existing cluster.
    Any suggestions are appreciated.

    Hi,
    You would need to override the operational configuration on the OSB server so that it joins the cluster spawned by the dedicated Coherence servers. Also, set the flag -Dtangosol.coherence.distributed.storage=false in the ServerStart of your OSB servers, which disables data storage on the OSB servers.
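    A sketch of the kind of operational override involved (the cluster name, address, and port below are placeholders; they must match the values used by the dedicated Coherence servers in your environment):

    ```xml
    <!-- tangosol-coherence-override.xml on the OSB servers (placeholder values) -->
    <coherence>
      <cluster-config>
        <member-identity>
          <!-- must match the cluster name of the out-of-process Coherence cluster -->
          <cluster-name>ProdCluster</cluster-name>
        </member-identity>
        <unicast-listener>
          <well-known-addresses>
            <socket-address id="1">
              <address>10.0.0.1</address>
              <port>8088</port>
            </socket-address>
          </well-known-addresses>
        </unicast-listener>
      </cluster-config>
    </coherence>
    ```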
    HTH
    Cheers,
    _NJ

  • How to extract a field from Binary object

    Hi Guys,
    I have a Binary object in a custom aggregator and I would like to extract a field from it without de-serializing the whole Binary object. I know this can be done but can't figure out how. Appreciate your help.
    Have tried this
    PofValue value = PofValueParser.parse(binary, (PofContext) binaryEntry.getSerializer());
    PofValue child = value.getChild(1);
    Object childValue = child.getValue();
    but no luck.
    Thanks
    D

    Hi JK,
    Thanks.. I have a Binary and tried
    PofValue pofValue = PofValueParser.parse(binary, pofContext);
    PofNavigator path = new SimplePofPath(1);
    PofValue target = path.navigate(pofValue);
    Object value = target != null ? target.getValue() : null;
    but no luck.. I must be missing something.
    Basically, I'm trying to do cache joins using an aggregator and data affinity. Sample code snippet:

    @Override
    public Object aggregate(Set values) {
        List extractedValues = new ArrayList();
        for (Map.Entry entry : (Set<Map.Entry>) values) {
            try {
                // these are BinaryEntries in TESTCACHE1
                BinaryEntry binaryEntry = (BinaryEntry) entry;
                Long compId = (Long) binaryEntry.extract(new KeyExtractor("getComponentId"));
                Long contId = (Long) binaryEntry.extract(new KeyExtractor("getPartyId"));
                BackingMapContext backingMapContext = binaryEntry.getContext().getBackingMapContext("TESTCACHE2");
                ObservableMap backingMap = backingMapContext.getBackingMap();
                MapIndex mapIndex = (MapIndex) backingMapContext.getIndexMap().get(new KeyExtractor("getComponentId"));
                com.tangosol.util.InflatableSet keySet = (com.tangosol.util.InflatableSet) mapIndex.getIndexContents().get(compId);
                PofNavigator path = new SimplePofPath(2);
                for (Object key : keySet) {
                    Binary object = (Binary) backingMap.get(key);
                    PofValue value = PofValueParser.parse(object, (PofContext) binaryEntry.getSerializer());
                    PofValue target = path.navigate(value);
                    Object extractedValue = target != null ? target.getValue() : null;
                    extractedValues.add(extractedValue);
                }
            } catch (Throwable t) {
                log.error("Exception in aggregate method ", t);
                throw new RuntimeException("Exception in aggregate method", t);
            }
        }
        return extractedValues;
    }
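    The index lookup above is essentially an inverted-index join: a map from the joined attribute to the set of keys whose entries hold that value, probed once per parent. A minimal plain-Java sketch of the same idea (all types and data here are hypothetical, standing in for the Coherence index contents and backing map):

    ```java
    import java.util.*;

    public class InvertedIndexJoin {
        // Looks up all child values whose parentId matches, via an inverted index.
        public static List<String> childrenOf(
                long parentId,
                Map<Long, Set<Long>> indexByParentId,  // joined attribute -> set of child keys
                Map<Long, String> childBackingMap) {   // child key -> child value
            List<String> results = new ArrayList<>();
            for (Long key : indexByParentId.getOrDefault(parentId, Collections.emptySet())) {
                results.add(childBackingMap.get(key));
            }
            return results;
        }

        public static void main(String[] args) {
            Map<Long, Set<Long>> index = Map.of(1L, Set.of(10L, 11L), 2L, Set.of(12L));
            Map<Long, String> children = Map.of(10L, "childA", 11L, "childB", 12L, "childC");
            List<String> hits = childrenOf(1L, index, children);
            Collections.sort(hits);
            System.out.println(hits); // [childA, childB]
        }
    }
    ```

    The point of the index is that the join costs one map probe per parent instead of a scan of the whole child cache.
    
    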

  • Java IRC bot.

    Hi, recently I started making an IRC bot in Java. I was testing it on my friend's server and everything seemed to be working: it was reading from the server and responding to the one command. I wanted to show it to another friend, so I changed the variables to match his IRC server, and the bot couldn't connect. I was wondering whether the problem is in my friend's server or in my code. (I'll include what the server sends when I connect at the bottom.)
    package ircbot;

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.net.Socket;
    import java.net.UnknownHostException;

    public class bot {
        public static String message;
        /* variables */
        Socket sock;
        OutputStream out;
        BufferedWriter writer;
        InputStream in;
        BufferedReader reader;
        static doaction action = new doaction();
        String nick = "Gary";
        String user = "account 8 * :oHai";
        public static String channel = "#malvager";
        String ip = "127.0.0.1"; // <-- not the actual ip...
        int port = 11235;
        /* end variable and object declarations */

        public static void main(String[] args) {
            bot superbot = new bot();
            superbot.connect();
        }

        public void connect() {
            try {
                sock = new Socket(ip, port);
                out = sock.getOutputStream();
                writer = new BufferedWriter(new OutputStreamWriter(out));
                in = sock.getInputStream();
                reader = new BufferedReader(new InputStreamReader(in));
                join(nick, channel, user, writer, reader);
            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException i) {
                i.printStackTrace();
            } // end try/catch
        }

        public static void join(String nick, String channel, String user, BufferedWriter writer, BufferedReader reader) {
            try {
                writer.write("NICK " + nick);
                System.out.println("set nick");
                writer.newLine();
                writer.flush();
                System.out.println(reader.readLine());
                writer.write("USER " + user);
                System.out.println("set user");
                writer.newLine();
                writer.flush();
                System.out.println(reader.readLine());
                writer.write("JOIN " + channel);
                System.out.println("joined channel");
                writer.newLine();
                writer.flush();
                System.out.println(reader.readLine());
                read(reader, writer);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        public static void read(BufferedReader reader, BufferedWriter writer) {
            String said;
            while (true) {
                try {
                    said = reader.readLine();
                    System.out.println(said);
                    if (check(said)) {
                        print(writer);
                    }
                } catch (IOException f) {
                    f.printStackTrace();
                }
            }
        }

        public static boolean check(String input) {
            message = action.setinput(input);
            return message.equals("I'm awesome");
        }

        public static void print(BufferedWriter writer) {
            try {
                writer.write("PRIVMSG " + channel + " " + message);
                writer.newLine();
                writer.flush();
            } catch (IOException r) {
                r.printStackTrace();
            }
        }
    } // end class

    package ircbot;

    public class doaction {
        public String setinput(String input) {
            // guard against null lines (end of stream) and lines shorter than the command
            if (input == null || input.length() < 5) {
                return "bad";
            }
            int length = input.length();
            String message = input.substring(length - 5, length); // last 5 characters
            if (message.equals("!Gary")) {
                return "I'm awesome";
            } else {
                return "bad";
            }
        }
    }

    When I connect to my second friend's server (the one that doesn't work) I get:
    set nick
    :lolnepp.no-ip.biz NOTICE AUTH :*** Looking up your hostname...
    set user
    :lolnepp.no-ip.biz NOTICE AUTH :*** Found your hostname (cached)
    joined channel
    PING :51AA6A7F
    :lolnepp.no-ip.biz 451 JOIN :You have not registered
    When I connect to the friend's server that works I get:
    set nick
    :Malvager.hub NOTICE AUTH :*** Looking up your hostname...
    set user
    :Malvager.hub NOTICE AUTH :*** Found your hostname
    joined channel
    :Malvager.hub 001 Gary :Welcome to the Malvager-IRC-Network IRC Network [email protected]
    :Malvager.hub 002 Gary :Your host is Malvager.hub, running version Unreal3.2.7
    :Malvager.hub 003 Gary :This server was created Sun Oct 26 2008 at 00:01:26 PDT
    :Malvager.hub 004 Gary Malvager.hub Unreal3.2.7 iowghraAsORTVSxNCWqBzvdHtGp
    Okay yeah, can anyone tell me if the problem is in my code or the server? And if the problem is in the code, could you please tell me what to change or point me in the right direction? Thanks.
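    For what it's worth, the failing trace shows the server sending `PING :51AA6A7F` before registration completes, and IRC servers generally refuse clients that do not answer a PING with a matching PONG. A minimal sketch of such a reply (a hypothetical helper, not part of the original bot):

    ```java
    public class PingReply {
        // Given a raw IRC line, return the PONG reply if the line is a PING, else null.
        public static String pongFor(String line) {
            if (line != null && line.startsWith("PING")) {
                // echo back everything after "PING", e.g. "PING :51AA6A7F" -> "PONG :51AA6A7F"
                return "PONG" + line.substring(4);
            }
            return null;
        }

        public static void main(String[] args) {
            System.out.println(pongFor("PING :51AA6A7F")); // PONG :51AA6A7F
        }
    }
    ```

    In the bot's read loop, writing this reply (followed by newLine() and flush()) whenever a PING arrives would let registration complete on servers that ping early.
    
    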

    kajbj wrote:
    youngdeveloper wrote:
    What do you mean it doesn't look like a java program..? Sorry, I was watching tv while I was typing. It should say "Java problem" and not "Java program" :)
    And yea, I realize it says that, but the steps to register are you enter the nick and the user which both come before that, so I was wondering if anyone with knowledge of how irc works could tell me if there's some other way of registering, or if the problem's in my code.
    Some IRC servers require that an admin in the channel creates the user account.
    Kaj
    Okay, glad to know it was supposed to say program :).
    I'm pretty sure that it's not because the admin has to set up the acc. because this server was set up with the sole purpose of showing my friend my bot. However, it might be just that the server is retarded. I'm gonna try it on some other servers and see how it works there.
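    The 451 "You have not registered" reply above suggests the bot sends JOIN before the server has finished registration. A minimal sketch of the idea (the class and method names here are invented for illustration, not part of the original bot): answer PING immediately, and only send JOIN once the server's 001 welcome numeric arrives.

    ```java
    public class IrcRegistration {

        // Given one raw line from the server, return the reply the bot should
        // send, or null if no reply is needed yet.
        public static String replyFor(String line, String channel) {
            if (line.startsWith("PING ")) {
                // Servers drop clients that do not answer PING during registration.
                return "PONG " + line.substring(5);
            }
            String[] parts = line.split(" ");
            // parts[1] is the server numeric: 001 means registered, safe to JOIN.
            if (parts.length > 1 && parts[1].equals("001")) {
                return "JOIN " + channel;
            }
            return null;
        }
    }
    ```

    With this shape, the read loop decides the next outgoing message from each incoming line instead of sending NICK, USER, and JOIN back-to-back.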

  • NotSerializable not recognized

    We're upgrading from v3.2.3 to v3.5.3 and one of our unit tests is failing. The test attempts to insert a key/value pair into a cache, but the Class of the value object is not declared to implement java.io.Serializable. The test expects an exception to be thrown.
    Under v3.2.3, the attempt to insert failed, throwing an exception. Under 3.5.3, the call to cache.put( key, val ) returns without exception, suggesting that the insert worked. However, when the cache is released, the call to factory.releaseCache() fails with an exception. The exception thrown by releaseCache in v3.5.3 is the same as that thrown by cache.put in v3.2.3. It's an IllegalArgumentException with message indicating that the resource is not serializable.
    This behavior is restricted to the replicated cache; it does not occur when using a distributed cache. It also does not occur in a cluster with more than one node; in that case the cache.put() call throws the exception.
    Thanks,
    Rod Burgett

    Hi Rod,
    user9082576 wrote:
    After a bit more investigation into results when a second node joins the cluster, I think this is a bug.
    My test case has one member that inserts two pairs into the cache; one has String key and serializable value, one has String key and non-serializable value.
    The cache accepts both and will return both. But when a second member joins the cache, the original cache service dies and all the data is lost.
    Ok, now that explains things.
    As I mentioned, when there is only one member in the cluster, the replicated cache does not serialize the cached values, as there is absolutely no need for it, since they are not expected to be sent anywhere. However, when the second cache joins the replicated cache service, the cache content is immediately serialized (or at least attempted to) and sent to the newly joined member. This is when the exception is happening, and since there is no caller, there is no one to report the erroneous condition to, but the cache cannot function according to the contract (which among other mandates that all service members see the same content in the caches, which is prevented by putting a non-serializable value into the cache) therefore the only logical outcome is that the service is terminated.
    Best regards,
    Robert
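    One way to surface the problem at put time rather than when a second member joins is to check that a value actually serializes before handing it to the cache. This is a defensive sketch in plain Java (it is not a Coherence API; a real deployment would rely on POF or Serializable declarations instead):

    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;

    public class SerializationCheck {
        // Returns true only if the value can be Java-serialized, which is the
        // condition a replicated cache needs once a second member joins.
        public static boolean isSerializable(Object value) {
            try (ObjectOutputStream out =
                     new ObjectOutputStream(new ByteArrayOutputStream())) {
                out.writeObject(value);
                return true;
            } catch (Exception e) {
                return false;
            }
        }
    }
    ```

    A unit test can then assert `isSerializable(value)` before calling `cache.put(key, value)`, reproducing the old v3.2.3 fail-fast behavior.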

  • Load balance question

    Hi,guys
    Suppose I start an empty cache first, then start another cache that loads lots of data and puts it into the cache. If the two caches join the same cluster, they will balance the load automatically. It is transparent to us.
    My question is,
    Can I put the data into the empty cache until it is full, then put the data into the other one?
    Thanks,
    Bin

    Hi Bin,
    For partitioned caches (distributed and near-cache topologies), if you have multiple storage-enabled cache nodes within the same partitioned cache service, each of them will serve as the primary node for a share of the data. The identity of the cache node which is primary for a particular data entry depends on the key of the entry and the key association algorithm chosen (see the Wiki and the forum posts about this). By the default algorithm, the amount of data stored by each node as primary is about equal to each other (depends on the distribution of the hashCode algorithms).
    Also, if you have a backup count greater than zero, each node will also hold a backup copy of data for which other nodes are the primary nodes. The amount of the backup data in a node is roughly the amount of primary data multiplied by the configured backup-count (can be provided in the cache configuration, and by default it is 1).
    So it is almost impossible to achieve a distribution of data in which you fill up the memory on one cache node before starting to consume memory on another.
    First of all, you would have to turn off backups, otherwise each added entry is stored on more than one server. By turning off backups, you lose the chance to retain all your data if a node dies.
    Second, since the place of a data entry (the storage-enabled node which holds its primary copy) is distributed about evenly around the cluster, and not by the order of placing the entries into the cache, you would not be able to direct record-by-record which node holds the primary (and without backups, the only) copy of your just-inserted data.
    Anyway, doing what you proposed (filling up caches one by one) reduces the advantages of load balancing and decreases performance in other ways, too, as there are quite a few operations which are symmetrical across all storage-enabled nodes (e.g. queries, entry processing and aggregation, etc.), all of which operate on local data in parallel (provided you use an edition of Coherence which supports parallel execution). Storing data on fewer nodes reduces the total processing power available to the parallel tasks, as some CPUs will not have any data to process, and the remaining nodes will have to process all the data.
    As for replicated caches, it is outright impossible to fill up cache nodes one-by-one, as all nodes in a replicated cache service by their very nature store the same data related to that cache service.
    Just my 2 cents, of course.
    Best regards,
    Robert
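    The key-based placement Robert describes can be illustrated with a toy sketch (this is not Coherence's actual partitioning algorithm, just the general hash-modulo idea): each key's hash decides its partition, so entries spread across nodes instead of filling one node first.

    ```java
    public class PartitionSketch {
        // Illustrative only: map a key to one of partitionCount partitions.
        // Math.floorMod keeps the result non-negative for negative hash codes.
        public static int partitionFor(Object key, int partitionCount) {
            return Math.floorMod(key.hashCode(), partitionCount);
        }
    }
    ```

    Because placement is a pure function of the key, consecutive inserts land on different nodes, which is exactly why filling caches one by one is not achievable.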

  • OBIEE 11g caching question - cross database joins

    Hi, I'm seeing something strange (a.k.a. not wanted) in OBIEE (11g, not sure that version matters).
    I have a simple data mart that contains spend information. The supplier dimension contains keys that can be used to join it to detailed supplier information and supplier address information in our ERP system (that sits in a different database / on a different box). In the OBIEE physical layer I've created a cross database join between the supplier dimension table and the ERP tables that contain the address info.
    Here's the odd behavior I'm seeing. If I write an answers request to select the supplier, some address info, and total spend for fiscal year 2010, I'm seeing OBIEE fire off two queries (this I expect):
    A) Select supplier, address key, and total spend for fiscal year = 2010 against the spend mart
    B) select address_key and associated address info against the ERP system (no limit on this query, it pulls back all rows from the address table)
    OBIEE then does an internal join itself and serves up the results, everything is correct. But here's what's "wrong" - if I then run the exact same answers request, but change the fiscal year to 2009, I again see OBIEE firing off the two queries. What I expected and/or want to see is that, since the entire result set from query #B doesn't change at all, that it wouldn't have to rerun this query. However, it seems to be.
    Is there any way to get #B to cache so that, for any subsequent query that contains supplier address info, OBIEE can pull it from cache instead of rerunning the query (which is pretty slow)? I really thought it would do that, but it doesn't seem to be.
    Thanks!
    Scott

    Hi,
    Could you give a bit more context for this case? Is the table in SQL Server a dimension, and the one in the Oracle DB a fact? I am guessing you have set up the driving table here. Have you tried taking it off and letting the BI Server do the filter in memory?
    -Dhar

  • Caches resynch upon join and rejoin

    I have a few questions regarding cache state and events under a few circumstances. Will try to give clear descriptions for each case. :)
    1) Join to an existing cluster (optimistic cache)
    Let's say that we have one node running with its optimistic caches already populated. I start a second node that runs CacheFactory.ensureCluster() and then obtains an invocation service to add a member listener. Will the caches of the new node start to be populated before I finish adding my member listener? BTW, my member listener is adding map listeners so my concern is that I'm going to lose map events. Or is the cache population in that case not even going to trigger insert events?
    2) Is there a difference in case #1 if we are using distributed caches?
    3) Split brain scenario and node rejoins (optimistic cache)
    Let's assume that we have 2 nodes, each one working alone (i.e. 2 islands with 1 node each). Both nodes will keep working, thus updating their caches. Now both islands meet again and one of the nodes has its cluster service restarted. Will the restarted node receive insert map events for the content that is repopulated in its optimistic caches?
    4) Is there a difference in case #3 if we are using distributed caches?
    Thanks,
    -- Gato

    1) Join to an existing cluster (optimistic cache)
    Let's say that we have one node running with its optimistic caches already populated. I start a second node that runs CacheFactory.ensureCluster() and then obtains an invocation service to add a member listener. Will the caches of the new node start to be populated before I finish adding my member listener?
    No. They are not populated until you start that particular cache service, e.g. by calling getCache().
    BTW, my member listener is adding map listeners so my concern is that I'm going to lose map events. Or is the cache population in that case not even going to trigger insert events?
    Well, if the map is a cache, then by the time you get it to add a listener to it, it will be populated.
    2) Is there a difference in case #1 if we are using distributed caches?
    No.
    Peace,
    Cameron Purdy
    Oracle Coherence: The Java Data Grid
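    The ordering guarantee Cameron describes (register the listener before population, and no insert is missed) can be shown with a toy observable map in plain Java. This is not the Coherence MapListener API, just an illustration of the principle:

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ListenerOrdering {
        public interface InsertListener { void inserted(Object key); }

        private final Map<Object, Object> map = new HashMap<>();
        private final List<InsertListener> listeners = new ArrayList<>();

        public void addListener(InsertListener l) { listeners.add(l); }

        // Every put notifies all listeners registered so far, so a listener
        // added before population observes every insert.
        public void put(Object key, Object value) {
            map.put(key, value);
            for (InsertListener l : listeners) l.inserted(key);
        }
    }
    ```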

  • Near caches disappearing on cluster leave then join

    In the situation where a node gets dropped from the cluster, either because of GC pauses, network problems or whatever, normally it will manage to join back into the cluster and all of the distributed caches come back. However it seems that near caches are not recreated when the node rejoins the cluster. This is very problematic as although the system seems to be healthy, its performance is dramatically reduced.
    Why are these caches not being recreated? Is there some configuration to ensure this happens?
    This is using Coherence 3.4. I haven't tested the 3.5 behavior yet.

    Hi CormacB,
    I guess you are referring to the fact that during the "disconnect" the content of the front tier of the near cache is cleared. This is done to prevent the client from using stale data. After the re-connect, the front map will start "refilling" from the back tier as the application gets the data, which is no different from what happens upon near cache initialization.
    Regards,
    Gene
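    For reference, a near cache of the kind discussed here is typically declared along these lines in the cache configuration (a sketch with made-up scheme names, not the poster's actual config); the front tier is the part that is cleared on disconnect and refilled afterwards:

    ```xml
    <near-scheme>
      <scheme-name>example-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>example-distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>present</invalidation-strategy>
    </near-scheme>
    ```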

  • CQL Join on Coherence Cache with Composite Key

    I have a Coherence Cache with a composite key and I want to join a channel to records in that cache with a CQL processor. When I deploy the package containing the processor, I get the following error:
    14:32:35,938 | alter query SimpleQuery start | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
    14:32:35,938 | alter query >>SimpleQuery<< start
    specified predicate requires full scan of external source which is not supported. please modify the join predicate | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
    I think that I'm using the entire key. If I change the key to a single field, it deploys OK. I found a similar issue when I defined a Java class to represent the composite key. Is it possible to join in this way on a composite key cache?
    I could define another field which is a concatenation of the fields in the composite key but this is a little messy.
    My config is as below:
    <wlevs:caching-system id="MyCache" provider="coherence" />
    <wlevs:event-type-repository>
        <wlevs:event-type type-name="SimpleEvent">
            <wlevs:properties>
                <wlevs:property name="field1" type="char" />
                <wlevs:property name="field2" type="char" />
            </wlevs:properties>
        </wlevs:event-type>
    </wlevs:event-type-repository>
    <wlevs:channel id="InChannel" event-type="SimpleEvent">
        <wlevs:listener ref="SimpleProcessor" />
    </wlevs:channel>
    <wlevs:processor id="SimpleProcessor">
        <wlevs:listener ref="OutChannel" />
        <wlevs:cache-source ref="SimpleCache" />
    </wlevs:processor>
    <wlevs:channel id="OutChannel" event-type="SimpleEvent">
    </wlevs:channel>
    <wlevs:cache id="SimpleCache" value-type="SimpleEvent"
        key-properties="field1,field2"
        caching-system="MyCache">
    </wlevs:cache>
    and the processor CQL is as follows:
    <processor>
    <name>SimpleProcessor</name>
    <rules>
    <query id="SimpleQuery">
    <![CDATA[
        select I.field1, I.field2 from InChannel [now] as I,
            SimpleCache as S
        where I.field1 = S.field1
          and I.field2 = S.field2
    ]]> </query>
    </rules>
    </processor>
    Thanks
    Mike

    Unfortunately, joining on composite keys in Coherence is not supported in the released versions. This will be supported in the 12g release.
    As you mention, defining another field as key, which is the concatenation of the original keys is the workaround.
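    The concatenation workaround can be sketched as follows. The helper class is invented for illustration; the one thing to verify for your data is that the separator character cannot occur inside either field, otherwise two different composite keys could collide:

    ```java
    public class CompositeKeyWorkaround {
        // Derive the single key field from the composite parts. The "|"
        // separator is an assumption -- pick one that never appears in the data.
        public static String compositeKey(String field1, String field2) {
            return field1 + "|" + field2;
        }
    }
    ```

    The derived value would then be stored as an extra property on SimpleEvent and used as the single `key-properties` entry in the cache declaration.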

  • Cache groups and table join

    Hi,
    Is there any limitation regarding an SQL query doing a JOIN of tables from multiple cache groups?
    Thanks

    No limitations. From a query/DML perspective, cache group tables are just like any other table.
    Chris

  • Maintaining data of view objects in cache memory for repeated usage

    Hi,
    We are developing an application which has around 800 view objects that will be used as LOVs in different screens. Therefore, it was decided to create a separate project for all such LOV view objects and keep them in shared scope so that the data can be made available across the application.
    The application also communicates with different database schemas based on the logged-in county. For a particular user, LOV view object LovView1 should get the data fetched from Schema1, whereas for user2, the same LovView1 should get the data from Schema2.
    For this, we have created a number of ApplicationModules like AM1, AM2 etc. in the project, each one connected to a different database. A base application module has also been created, and all the county-specific AMs extend this base AM. All the LOV view object instances are included in this base AM so that they will be available in the county-specific AMs as well. The entire project is made into an ADF Library jar and this base AM is utilized by other projects for mapping the LOVs by attaching the library.
    At runtime, whenever a particular view object is accessed, the overridden findViewObject() method of the base AM gets the logged-in user's county code from a session variable and, based on the county, the corresponding AM is used and the view object is returned.
    The view objects of the LOV project are used as LOVs as well as for some backend processes. In such cases, the view object is obtained, the necessary filter conditions are appended to the view criteria, and it is executed to get the filtered rowset.
    Now, my questions are,
    1. Is it enough to create the jar for the LOVProject and access the view objects from the same baseAM across the application?
    2. I wish to keep all the data in cache memory to avoid repeated DB hits for all the LOV view objects. How can this be achieved? To be more precise, consider two users, user1 and user2, logging into the application with different counties. When user1 accesses an LOV view object for the first time, the data needs to be fetched from the DB, kept in application-scoped cache memory, and the rowset returned. On subsequent calls to the view object, the data needs to be retrieved from the cache and not from the DB. When user2 also accesses the same LOV view object, the same logic as explained for user1 should apply. How can I achieve this? Actually my doubt is: when user2 accesses it, will the data pertaining to user1 remain available in cache? If not, how can I make it retain the data in cache?
    3. I also wish to append a particular where condition to a viewobject irrespective of other considerations like logged in county, existing view criteria etc.. How can I do this? A separate thread for this requirement has been posted in the forum Including additional where clause conditions to view criteria dynamically
    Kindly give me your suggessions.
    Thanks in advance.
    Regards.

    Hi Vijay,
    regarding your questions:
    1. What is the difference between "TimesTen In-Memory Database" and
    "In-Memory Database Cache" in terms of features and licensing model?
    ==> Product has just been renamed and integrated better with the Oracle database - Times-Ten == In-Memory-Cache-Database
    2. Is "In-Memory Database Cache" option integrated with Oracle 10g
    installable or a separate installable (i.e. TimesTen installable with only
    cache feature)?
    ==> Separate installation
    3. Is "In-Memory Database Cache" option same as that of "TimesTen Cache
    Connect to Oracle" option in TimesTen In-Memory Database?
    ==> Please have a look here: http://www.oracle.com/technology/products/timesten/quickstart/cc_qs_index.html
    This explains the differences.
    4. After integrating "In-Memory Database Cache" option with Oracle 10g, data
    access will happen only through Oracle sqlplus or OCI calls. Am I right here
    in making this statement?
    ==> Please see above mentioned papers
    5. Is it possible to cache the result set of a join query in "In-Memory
    Database Cache"?
    ==> Again ... ;-)
    Kind regards
    Mike
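    Question 2 above (keeping LOV data in an application-scoped cache, keyed per county) can be sketched in plain Java. This is not the ADF API, just the caching pattern: a concurrent map keyed by (county, view name), so each county's rows are fetched once and then shared by every user of that county.

    ```java
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    public class LovCache {
        private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

        // computeIfAbsent calls the DB loader only on the first access per
        // (county, view) key; later calls, by any user, hit the cache.
        public List<String> rowsFor(String county, String viewName,
                                    Supplier<List<String>> fetchFromDb) {
            return cache.computeIfAbsent(county + ":" + viewName,
                                         k -> fetchFromDb.get());
        }
    }
    ```

    Because the key includes the county, user1's Schema1 rows and user2's Schema2 rows live side by side in the same application-scoped cache without evicting each other.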

  • Get all keys from Cache

    Hi,
    I have a scenario where I have a backing store attached to the cache and running the server in a fault tolerant mode. Because of fault tolerance when a new node joins the cluster it is required to recover the data from the cache. When using NamedCache.keySet() I get only the keys which are in the cache and not the ones persisted in the Backing store.
    How do I go about getting the whole set of keys from the cache and backing store using the Tangosol API ?

    Here is my cache-config file. The CacheStore implementation is quite big and I need to check with my manager to share it on the forum. Maybe this will continue through the regular support channel.
    Message was edited by: pkdhar
    Attachment: coherence-cache-config.xml (To use this attachment you will need to rename 465.bin to coherence-cache-config.xml after the download is complete.)
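    Since NamedCache.keySet() only returns in-memory keys, the full key set has to be the union of the cache's keys and whatever the backing store can enumerate. A small sketch of that union (the store-side key source is hypothetical; a standard CacheStore has no method to list its keys, so you must expose one yourself, e.g. a SELECT of the key column on the backing table):

    ```java
    import java.util.HashSet;
    import java.util.Set;

    public class AllKeys {
        // Union of the keys currently in the cache and the keys the backing
        // store reports; duplicates collapse because the result is a Set.
        public static <K> Set<K> allKeys(Set<K> cacheKeys, Set<K> storeKeys) {
            Set<K> all = new HashSet<>(cacheKeys);
            all.addAll(storeKeys);
            return all;
        }
    }
    ```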

  • Managed server not able to join the cluster

    Hi
    I have two storage-enabled Coherence servers on two different machines. These two are able to form the cluster without any problem. I also have two Managed servers. When I start one, it joins the cluster without any issue, but when I start the fourth node, it does not join the cluster. Only one Managed server joins the cluster. I am getting the following error.
    2011-12-22 15:39:26.940/356.798 Oracle Coherence GE 3.6.0.4 <Info> (thread=[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "file:/u02/oracle/admin/atddomain/atdcluster/ATD/config/atd-client-cache-config.xml"
    2011-12-22 15:39:26.943/356.801 Oracle Coherence GE 3.6.0.4 <D4> (thread=[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): TCMP bound to /172.23.34.91:8190 using SystemSocketProvider
    2011-12-22 15:39:57.909/387.767 Oracle Coherence GE 3.6.0.4 <Warning> (thread=Cluster, member=n/a): This Member(Id=0, Timestamp=2011-12-22 15:39:26.944, Address=172.23.34.91:8190, MachineId=39242, Location=site:dev.icd,machine:appsoad2-web2,process:24613, Role=WeblogicServer) has been attempting to join the cluster at address 231.1.1.50:7777 with TTL 4 for 30 seconds without success; this could indicate a mis-configured TTL value, or it may simply be the result of a busy cluster or active failover.
    2011-12-22 15:39:57.909/387.767 Oracle Coherence GE 3.6.0.4 <Warning> (thread=Cluster, member=n/a): Received a discovery message that indicates the presence of an existing cluster:
    Message "NewMemberAnnounceWait"
    FromMember=Member(Id=2, Timestamp=2011-12-22 15:22:56.607, Address=172.23.34.74:8090, MachineId=39242, Location=site:dev.icd,machine:appsoad4,process:23937,member:CoherenceServer2, Role=WeblogicWeblogicCacheServer)
    FromMessageId=0
    Internal=false
    MessagePartCount=1
    PendingCount=0
    MessageType=9
    ToPollId=0
    Poll=null
    Packets
    [000]=Broadcast{PacketType=0x0DDF00D2, ToId=0, FromId=2, Direction=Incoming, ReceivedMillis=15:39:57.909, MessageType=9, ServiceId=0, MessagePartCount=1, MessagePartIndex=0, Body=0}
    Service=ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_ANNOUNCE), Id=0, Version=3.6}
    ToMemberSet=null
    NotifySent=false
    ToMember=Member(Id=0, Timestamp=2011-12-22 15:39:26.944, Address=172.23.34.91:8190, MachineId=39242, Location=site:dev.icd,machine:appsoad2-web2,process:24613, Role=WeblogicServer)
    SeniorMember=Member(Id=1, Timestamp=2011-12-22 15:22:53.032, Address=172.23.34.73:8090, MachineId=39241, Location=site:dev.icd,machine:appsoad3,process:19339,member:CoherenceServer1, Role=WeblogicWeblogicCacheServer)
    2011-12-22 15:40:02.915/392.773 Oracle Coherence GE 3.6.0.4 <Warning> (thread=Cluster, member=n/a): Received a discovery message that indicates the presence of an existing cluster:
    Message "NewMemberAnnounceWait"
    FromMember=Member(Id=2, Timestamp=2011-12-22 15:22:56.607, Address=172.23.34.74:8090, MachineId=39242, Location=site:dev.icd,machine:appsoad4,process:23937,member:CoherenceServer2, Role=WeblogicWeblogicCacheServer)
    FromMessageId=0
    Internal=false
    MessagePartCount=1
    PendingCount=0
    MessageType=9
    ToPollId=0
    Poll=null
    Packets

    Hi,
    By default Coherence uses a multicast protocol to discover other nodes when forming a cluster. Since you are having difficulties establishing a cluster via multicast, can you please perform a multicast test and see if multicast is configured properly?
    http://wiki.tangosol.com/display/COH32UG/Multicast+Test
    Make sure you are using the same configuration files across the cluster members; all members of the cluster must specify the same cluster name in order to be allowed to join the cluster.
    <cluster-name system-property="tangosol.coherence.cluster">xxx</cluster-name>
    I would suggest, try using the unicast-listener with the well-known-addresses instead of muticast-listener.
    http://wiki.tangosol.com/display/COH32UG/well-known-addresses
    Add similar entries like below in your tangosol override xml..
    <well-known-addresses>
        <socket-address id="1">
            <address>172.23.34.91</address>
            <port>8190</port>
        </socket-address>
        <socket-address id="2">
            <address>172.23.34.74</address>
            <port>8090</port>
        </socket-address>
    </well-known-addresses>
    This list is used by all other nodes to find their way into the cluster without the use of multicast, thus at least one well known node must be running for other nodes to be able to join.
    Hope this helps!!
    Thanks,
    Ashok.
