OutOfMemoryError in TCP Extend nodes?

Hi,
We are facing a strange issue in which a storage-disabled TCP Extend node consistently occupies around 450 MB of memory.
Whenever we populate the cache, even with less than 40 MB of data, the TCP Extend nodes start consuming memory for no obvious reason, although the JMX console shows no data residing in these storage-disabled nodes.
Ideally, how much memory does a storage-disabled TCP Extend node require to run? Is it proportional to the maximum amount of data loaded as a single unit? I would expect the Extend node to hold data only for a short duration and then free it.
We are running a DGE cluster on 2 Linux machines with 11 cache nodes of 1 GB each. Each machine also runs 2 storage-disabled Extend nodes of 500 MB each, through which my clients connect.
Thanks
-Amit

Is there a standard way to decide on the memory required by TCP Extend nodes (storage disabled)?
In my scenario, we are trying to get a 42 MB map object from the cache. The TCP Extend node the client connects to is throwing an OutOfMemoryError. We are running 4 Extend nodes of 600 MB each.
Looking at the JMX console, AvailableMB is consistently below 200 MB on the storage-disabled TCP Extend node.
Why is this memory not released after the data transfer is done?
Thanks
-Amit
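There is no single formula in the thread for sizing a proxy, but the proxy JVM needs enough heap to hold every request/response it is serializing concurrently, plus direct (off-heap) memory for the TcpAcceptor buffer pools. As a purely illustrative starting point (the sizes below are assumptions, not Coherence guidance), a storage-disabled proxy node might be launched like this:

```shell
# Hypothetical launch flags for a storage-disabled TCP Extend proxy node.
# The heap and direct-memory sizes are illustrative only; tune them
# against observed usage under load.
java -server \
  -Xms600m -Xmx600m \
  -XX:MaxDirectMemorySize=256m \
  -Dtangosol.coherence.distributed.localstorage=false \
  -Dtangosol.coherence.cacheconfig=proxy-cache-config.xml \
  com.tangosol.net.DefaultCacheServer
```

Note that a single 42 MB value can briefly cost several times its size on the proxy (the binary form, the buffers it is copied into, and the socket write buffers), so headroom well above the largest single transfer is prudent.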

Similar Messages

  • OutOfMemoryError in cluster after TCP-extend client suspends processing.

    Can anybody explain why the following exception might occur? It seems to happen when a client connected through TCP-extend is suspended during continuous query processing.
    2007-12-20 21:19:49.009 Oracle Coherence GE 3.3.1/389 <Error> (thread=DistributedCache, member=1): Error sending MapEvent to Channel(Id=374515075, Connection=0x00000116F96C67F4A97BDC3A739C40D11DA1C36E3C50F9FBC8BB5AD8DCF4E16E, Open=true): java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:633)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:95)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
    at com.tangosol.coherence.component.comm.connectionManager.acceptor.TcpAcceptor$ByteBufferPool.instantiateResource(TcpAcceptor.CDB:7)

    Hi,
    We are also getting the same exception, "java.lang.OutOfMemoryError: Direct buffer memory". I have set -XX:MaxDirectMemorySize=256m but still get the same exception. I tried to investigate by raising the log level to 7 but could not figure out the exact issue. We are running Coherence version 3.3.1/389p4. The only thing I noticed is that at times netstat -a shows many connections in the CLOSE_WAIT state.
    Please Help!
    Regards,
    -Amit
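    The CLOSE_WAIT observation is worth quantifying: sockets stuck in CLOSE_WAIT on the proxy usually mean clients went away without the proxy releasing the connection, and each such connection can pin outgoing buffers. A quick check on the proxy host might look like this (the port 11200 is a placeholder for your acceptor port):

    ```shell
    # Count sockets in CLOSE_WAIT on the (placeholder) proxy port 11200.
    netstat -an | grep ':11200' | grep -c CLOSE_WAIT
    ```

    A count that grows over time, rather than hovering near zero, suggests the proxy is accumulating dead connections and their buffers.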
    We are getting this exception on the TCP Extend proxy nodes.
    2008-06-30 12:54:41.478 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266526720 bytes total
    2008-06-30 12:54:41.478 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266536960 bytes total
    2008-06-30 12:54:41.478 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266547200 bytes total
    2008-06-30 12:54:41.478 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266557440 bytes total
    2008-06-30 12:54:41.479 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266567680 bytes total
    2008-06-30 12:54:41.479 Oracle Coherence GE 3.3.1/389p4 <D6> (thread=DistributedCache, member=26): Outgoing ByteBufferPool increased to 266577920 bytes total
    2008-06-30 12:54:41.789 Oracle Coherence GE 3.3.1/389p4 <Error> (thread=DistributedCache, member=26): Error sending MapEvent to Channel(Id=1140491821, Connection=0x0000011AD950F44BAC1A65A1FB03F9B2AFAE8B5F9FF39688C81572023DF9F53B, Open=true): java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:632)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:95)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
    at com.tangosol.coherence.component.comm.connectionManager.acceptor.TcpAcceptor$ByteBufferPool.instantiateResource(TcpAcceptor.CDB:7)
    at com.tangosol.coherence.component.comm.connectionManager.acceptor.TcpAcceptor$ByteBufferPool.acquire(TcpAcceptor.CDB:26)
    at com.tangosol.coherence.component.comm.connectionManager.acceptor.TcpAcceptor$ByteBufferPool.allocate(TcpAcceptor.CDB:4)
    at com.tangosol.io.MultiBufferWriteBuffer.advance(MultiBufferWriteBuffer.java:870)
    at com.tangosol.io.MultiBufferWriteBuffer.<init>(MultiBufferWriteBuffer.java:32)
    at com.tangosol.coherence.component.comm.connectionManager.acceptor.TcpAcceptor$TcpConnection.allocateWriteBuffer(TcpAcceptor.CDB:3)
    at com.tangosol.coherence.component.comm.Connection.send(Connection.CDB:16)
    at com.tangosol.coherence.component.comm.Channel.doSend(Channel.CDB:4)
    at com.tangosol.coherence.component.comm.Channel.send(Channel.CDB:38)
    at com.tangosol.coherence.component.net.extend.proxy.MapListenerProxy.onMapEvent(MapListenerProxy.CDB:9)
    at com.tangosol.coherence.component.net.extend.proxy.MapListenerProxy.entryInserted(MapListenerProxy.CDB:1)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.dispatch(DistributedCache.CDB:22)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap$ProxyListener.entryInserted(DistributedCache.CDB:1)
    at com.tangosol.util.MapListenerSupport$WrapperSynchronousListener.entryInserted(MapListenerSupport.java:856)
    at com.tangosol.util.MapEvent.dispatch(MapEvent.java:191)
    at com.tangosol.coherence.component.util.CacheEvent.dispatchSafe(CacheEvent.CDB:14)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.dispatch(DistributedCache.CDB:86)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onMapEvent(DistributedCache.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$MapEvent.onReceived(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:130)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onNotify(DistributedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
    at java.lang.Thread.run(Thread.java:595)
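    The JVM in this trace (Java 5, judging by the Thread.java:595 frame) predates it, but on Java 7 and later the direct-buffer pool that backs these allocations can be watched via the platform BufferPoolMXBean, which makes growth like the "Outgoing ByteBufferPool increased" lines visible before the OutOfMemoryError fires. A minimal sketch:

    ```java
    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;

    public class DirectBufferWatch {
        public static void main(String[] args) {
            // The platform exposes one BufferPoolMXBean per NIO buffer pool
            // ("direct" and "mapped"); TcpAcceptor's pools draw from "direct",
            // which is the pool capped by -XX:MaxDirectMemorySize.
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            for (BufferPoolMXBean pool : pools) {
                System.out.println(pool.getName()
                        + ": count=" + pool.getCount()
                        + " used=" + pool.getMemoryUsed()
                        + " capacity=" + pool.getTotalCapacity());
            }
        }
    }
    ```

    Polling this (or the equivalent JMX attribute from a console) shows whether direct usage keeps climbing toward the configured cap while events back up to a slow or suspended client.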

  • Lock Client-Side TCP-Extends

    Hi,
    I am running 2 processes on the same server, both connecting to one cluster node.
    I need to ensure only one process performs a write operation at a given time (i.e., the first one to obtain the lock).
    I am finding that both processes are acquiring the lock, and I cannot work out why.
    We are using TCP-extend, with the following cluster configuration:
    <caching-scheme-mapping>
      <cache-mapping>
        <cache-name>cache.cluster.*</cache-name>
        <scheme-name>scheme.cluster.system</scheme-name>
      </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
      <proxy-scheme>
        <scheme-name>scheme.cluster.proxy</scheme-name>
        <service-name>service.cluster.proxy</service-name>
        <thread-count>4</thread-count>
        <acceptor-config>
          <tcp-acceptor>
            <local-address>
              <address system-property="datacloud.node.tcp-extend.bind-address">localhost</address>
              <port system-property="datacloud.node.tcp-extend.port">11200</port>
            </local-address>
            <keep-alive-enabled>true</keep-alive-enabled>
          </tcp-acceptor>
        </acceptor-config>
        <proxy-config>
          <cache-service-proxy>
            <lock-enabled>true</lock-enabled>
            <!-- <read-only>true</read-only> -->
          </cache-service-proxy>
        </proxy-config>
        <autostart>true</autostart>
      </proxy-scheme>
      <replicated-scheme>
        <scheme-name>scheme.cluster.system</scheme-name>
        <service-name>ReplicatedCache</service-name>
        <lease-granularity>member</lease-granularity>
        <member-listener>
          <class-name>datacloud.cluster.listeners.ClusterMemberListener</class-name>
        </member-listener>
        <backing-map-scheme>
          <local-scheme />
        </backing-map-scheme>
        <autostart>true</autostart>
      </replicated-scheme>
    </caching-schemes>
    The client maps to this scheme by using the following:
    <cache-mapping>
      <cache-name>cache.cluster.lock</cache-name>
      <scheme-name>scheme.remote</scheme-name>
    </cache-mapping>
    The lock essentially does:
    NamedCache cache = CacheFactory.getCache("cache.cluster.lock");
    boolean isLockAcquired = cache.lock("KEY", 5000);
    try {
        if (isLockAcquired) {
            // ... write operation (takes 2 seconds to complete)
        }
    } finally {
        cache.unlock("KEY");
    }
    Why can 2 processes acquire the same lock when they ask for it at the same time?

    Hi,
    This is my explanation of the behaviour you are seeing. I think it is correct, but I am sure someone will jump in if not.
    1. Your cache config has <lease-granularity>member</lease-granularity>, which means that a lock taken out by any thread on a member can be released by that same member. It also means that a member owns the lock, so if I call cache.lock() for a key from code running on a member, and then call cache.lock() again for the same key on the same member, both calls succeed, because the member owns the lock.
    2. Now, when your first process calls <tt>boolean isLockAcquired = cache.lock("KEY", 5000)</tt>, it gets the lock, but it is not your client process that owns the lock: it is the Extend proxy your client is connected to that owns it.
    3. Consequently, when process two, connected to the same Extend proxy, asks for the lock, it gets it too, for the same reason.
    4. Worse, when process one finishes and releases the lock, it really is released, even though your code in process two still thinks it is running inside the lock.
    That is my understanding of it, and of why locks do not really work from Extend clients. In fact, there are very few occasions where I would bother with explicit locks in Coherence (hence my unfamiliarity with the exact workings), as there are usually other, more reliable ways to achieve the same requirements.
    JK
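    JK's points 1-3 can be modelled in a few lines of plain Java (this is an illustration of the member-ownership rule, not Coherence code): if the lock table records the proxy member as the owner, then a second client arriving through the same proxy "acquires" a lock that is already held.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class MemberLeaseDemo {
        // key -> owning member id, mimicking <lease-granularity>member</lease-granularity>
        private final Map<String, Integer> owners = new HashMap<>();

        /** Lock succeeds if the key is free or already owned by this member. */
        public boolean lock(String key, int memberId) {
            Integer owner = owners.putIfAbsent(key, memberId);
            return owner == null || owner == memberId;
        }

        /** Any thread on the owning member may release the lock. */
        public void unlock(String key, int memberId) {
            owners.remove(key, memberId);
        }

        public static void main(String[] args) {
            MemberLeaseDemo cluster = new MemberLeaseDemo();
            int proxyMember = 1; // both extend clients are fronted by the same proxy
            boolean clientA = cluster.lock("KEY", proxyMember);
            boolean clientB = cluster.lock("KEY", proxyMember); // succeeds too!
            System.out.println("A=" + clientA + " B=" + clientB); // A=true B=true
            cluster.unlock("KEY", proxyMember); // A's unlock releases it for B as well
        }
    }
    ```

    Because both clients' lock requests arrive at the cluster attributed to the same proxy member, both return true, which is exactly the symptom described above.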

  • TCP Extend Server - Failed to start Service - Oracle Coherence GE 3.5.2/463

    Hello,
    We are about to go to production, and I am seeing "Failed to start Service" on a TCP Extend server (storage-disabled node).
    Regards
    /AG
    My Configuration look like the following
    <?xml version="1.0" encoding="windows-1252" ?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>distributedCache</scheme-name>
          <service-name>distributedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>500</high-units>
              <low-units>375</low-units>
              <unit-calculator>BINARY</unit-calculator>
              <unit-factor>1048576</unit-factor>
            </local-scheme>
          </backing-map-scheme>
        </distributed-scheme>
        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count>15</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="proxy.listen.address">....</address>
                <port system-property="proxy.listen.port">....</port>
              </local-address>
            </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    And the log looks like the following:
    2009-12-04 16:21:54.056/25821.278 Oracle Coherence GE 3.5.2/463 <D6> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=12): Closed: Channel(Id=1931590686, Open=false)
    2009-12-04 16:21:54.058/25821.280 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:4, member=12): Repeating SizeRequest due to the re-distribution of PartitionSet{220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256}
    2009-12-04 16:21:54.058/25821.280 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:12, member=12): Repeating SizeRequest due to the re-distribution of PartitionSet{220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256}
    2009-12-04 16:21:54.058/25821.280 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:11, member=12): Repeating SizeRequest due to the re-distribution of PartitionSet{220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256}
    2009-12-04 16:21:54.058/25821.280 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:7, member=12): Repeating SizeRequest due to the re-distribution of PartitionSet{220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256}
    2009-12-04 16:21:54.175/25821.397 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:10, member=12): An exception occurred while processing a SizeRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped) java.lang.InterruptedException
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:269)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:107)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.size(DistributedCache.CDB:13)
    at com.tangosol.util.ConverterCollections$ConverterMap.size(ConverterCollections.java:1470)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.size(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.size(SafeNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.collections.WrapperMap.size(WrapperMap.CDB:1)
    at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$SizeRequest.onRun(NamedCacheFactory.CDB:7)
    at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
    at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
    at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    ... 18 more
    2009-12-04 16:21:54.175/25821.397 to 16:21:54.176/25821.398: the same (Wrapped) java.lang.InterruptedException stack trace repeats for TcpAcceptorWorker:9, 13, 11, 2, and 6.
    2009-12-04 16:21:54.259/25821.481 Oracle Coherence GE 3.5.2/463 <D4> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:8, member=12): Daemon caught an unhandled exception (com.tangosol.net.messaging.ConnectionException: channel is closed) while exiting.
    2009-12-04 16:21:54.264/25821.486 Oracle Coherence GE 3.5.2/463 <D4> (thread=Proxy:ExtendTcpProxyService:TcpAcceptorWorker:3, member=12): Daemon caught an unhandled exception (com.tangosol.net.messaging.ConnectionException: channel is closed) while exiting.
    2009-12-04 16:21:54.330/25821.552 Oracle Coherence GE 3.5.2/463 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=12): Stopped: TcpAcceptor{Name=Proxy:ExtendTcpProxyService:TcpAcceptor, State=(SERVICE_STOPPED), ThreadCount=0, Codec=Codec(Format=POF), PingInterval=0, PingTimeout=0, RequestTimeout=0, LocalAddress=[nybc94lxb01/10.12.101.81:21005], LocalAddressReusable=false, KeepAliveEnabled=true, TcpDelayEnabled=false, ReceiveBufferSize=0, SendBufferSize=0, ListenBacklog=0, LingerTimeout=-1, BufferPoolIn=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited), BufferPoolOut=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited)}
    Exception in thread "Thread-2" java.lang.RuntimeException: Failed to start Service "Proxy:ExtendTcpProxyService:TcpAcceptor" (ServiceState=SERVICE_STOPPED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:10)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.closeChannel(Peer.CDB:18)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.closeChannel(Peer.CDB:1)
    at com.tangosol.coherence.component.net.extend.Channel.close(Channel.CDB:20)
    at com.tangosol.coherence.component.net.extend.Channel.close(Channel.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.run(NamedCacheProxy.CDB:30)
    at java.lang.Thread.run(Thread.java:619)
    Exception in thread "Thread-3" java.lang.RuntimeException: Failed to start Service "Proxy:ExtendTcpProxyService:TcpAcceptor" (ServiceState=SERVICE_STOPPED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:10)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.closeChannel(Peer.CDB:18)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.closeChannel(Peer.CDB:1)
    at com.tangosol.coherence.component.net.extend.Channel.close(Channel.CDB:20)
    at com.tangosol.coherence.component.net.extend.Channel.close(Channel.CDB:1)
    at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.run(NamedCacheProxy.CDB:30)
    at java.lang.Thread.run(Thread.java:619)

    David,
    Thanks for your detailed response. I will try all that you suggested and let you know my observation at the end. To give a back ground of the application I am working on.
    Background
    I am trying to replace a home-grown cache (very well optimized; it uses custom serialization with NIO ByteBuffers, etc.) with Coherence. My mandate is to replace the legacy cache with Coherence (I love the product, at least 3.4, from my previous job experience). Since all kinds of optimizations were done in the legacy application, I am converting the NIO ByteBuffer generated by the legacy serialization to a byte array and then converting the byte array into POF. Also, for various other reasons, I have to use entry processors even to get and put data from the cluster.
    Raw Data Set Size
    My data size is just 500MB (in production). I put in enough cache nodes so that the raw data is just 50MB per node (since it is compute-heavy with entry processors and the number of clients is going to be ~2500).
    What was happening when I got the exception?
    I was doing stress testing of the cache. I am not sure if that put a lot of garbage in some of the nodes; a garbage collection pause may have triggered the redistribution of the partition set. Since the stress testing was under way, it might have had a cascading effect on all the nodes.
    Datagram Test result.
    Tx summary 4 peers:
    life: 96 MB/sec, 68774 packets/sec
    now: 100 MB/sec, 71378 packets/sec, packets/burst: 1029, bursts/second: 69.41594
    Success rate ranges from 0.87 to 1.0
    Is a success rate less than 0.98 a cause for concern?
    Regarding Size (to keep the extend client always connected to the proxy)
    I do a size request every 5 seconds if no other request has gone to the cluster in the past 5 seconds.
    In one of the Oracle presentations it was said that "If you have map listeners and no other requests for a long period of time, the extend client has to do a periodic size request to make sure that the connection to the proxy is alive."
    In this regard my question is: "Can the same effect be achieved by registering a member listener on the extend client and, on disconnection, doing the size request?"
    Proxy Size guide lines
    In production, the number of extend clients is going to be ~2500, connected all the time.
    Of these, 2000 clients will use a few map listeners plus entry-processor gets (each of these clients will have a distinct and disjoint set of data on which they work and listen).
    And the remaining ~500 clients will do all kinds of requests on the entire data set.
    In these circumstances, is there any recommendation for the number of extend clients : number of extend servers : number of threads ratio?
    Regards
    /Anand

  • Multiple TCP*Extend Proxy

    Hi,
    I am developing a C++ Client for Coherence and using the TCP*Extend to connect to the Cache.
    I am able to run multiple nodes on the same machine.
    My goal is to add more machines to this setup (is a different machine called a cluster or a node?).
    I would like to have a setup as explained below:
    NamedCache: AQRCache
    Cache Type: Near Cache (Local and distributed/remote)
    Host_A:
    Tcp*extend A
    Node 1
    Node 2
    C++clientA running on Host_A talking to TCP*Extend_A.
    Host_B:
    Tcp*extend_B
    Node 3
    Node 4
    C++clientB running on Host_B talking to TCP*Extend_B.
    Host_C:
    Tcp*extend_C
    Node 5
    Node 6
    C++clientC running on Host_C talking to TCP*Extend_C.
    Other C++ Clients:
    C++clientD running on Host_D talking to TCP*Extend_A.
    C++clientE running on Host_E talking to TCP*Extend_A.
    C++clientF running on Host_F talking to TCP*Extend_A.
    Questions:
    1. To add more machines, if I run one more instance on a different machine with the same configuration file, will it work?
    2. Is it possible to run a separate TCP*Extend proxy on each host so that it will be part of the same cluster?
    3. Or should all the C++ clients talk to only one TCP*Extend proxy?
    Thanks,
    NS

    1. To add more machines, if I run one more instance on a different machine with the same configuration file, will it work?
    Yes, you can add more machines/nodes using the same configuration file.
    2. Is it possible to run a separate TCP*Extend proxy on each host so that it will be part of the same cluster?
    Yes, you can run multiple proxy nodes in a single cluster. Just about every production cluster has multiple proxy nodes.
    3. Or should all the C++ clients talk to only one TCP*Extend proxy?
    No, there is no reason to force all C++ clients to connect to a single proxy. In fact, you might consider configuring each client with the entire list of proxy servers. Each client will pick a server at random to connect to, ensuring that a single proxy isn't overloaded.
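    For example, a client-side remote cache scheme listing the full set of proxies might look like this (the scheme name, hostnames and ports below are placeholders for your environment):

```xml
<remote-cache-scheme>
  <scheme-name>extend-direct</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <!-- the client picks one of these proxies at random -->
        <socket-address>
          <address>proxy1.example.com</address>
          <port>9099</port>
        </socket-address>
        <socket-address>
          <address>proxy2.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```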
    Also take a look at this document: http://coherence.oracle.com/display/COH35UG/Best+Practices+for+Coherence+Extend
    Thanks,
    Patrick

  • TCP Extend Errors

    Hi All,
    I am getting the below errors in my TCP-Extend Proxy nodes.
    Oracle Coherence GE 3.5.3/465p3 <Error> (thread=Proxy:ExendTcpProxyService:TcpAcceptorWorker:7, member=19): Extend*TCP has determined that the connection to "Socket[addr=/11.11.56.123,port=18733,localport=17062]" must be closed to maintain system stability: This connection is 1 messages behind (112820921 bytes); the limit is 60000 messages (100000000 bytes).
    What does this mean ? Any serious issues in the cluster ?

    Hi user594809,
    Coherence will close an Extend connection if it thinks that the client is not reading messages off its queue fast enough, as this can destabilise the extend proxy. The limit is 60000 messages behind or 100 MB behind.
    In your case you have tried to return too much data in a single go (112,820,921 bytes). I have seen this is often the result of an invocation service call or something similar. What tends to happen in this case is that your client has started to read the message back, and while it is doing this Coherence closes the connection, so you get stream exceptions in the client.
    If you really need to return such a large amount of data, there are a couple of settings you can add to your <tcp-acceptor> configuration on the proxy.
    For example:
    <tcp-acceptor>
        <local-address>
            <address>localhost</address>
            <port>10000</port>
        </local-address>
        <limit-buffer-size>100000000</limit-buffer-size>
        <suspect-buffer-size>10000000</suspect-buffer-size>
    </tcp-acceptor>
    The limit-buffer-size is the maximum size of the queue in bytes before the connection gets closed. The default is 100 MB.
    The suspect-buffer-size is the size of the queue at which Coherence starts to log warnings. The default is 10 MB.
    If you increase the limit-buffer-size you should increase the suspect-buffer-size too.
    Note there is a reason that these settings are there in the first place, and changing them to allow very large results to be returned could have an adverse effect on your extend proxy node and your cluster, so it is up to you to test things properly under load.
    JK

  • Best Practice: Application runs on Extend Node or Cluster Node

    Hello,
    I am working within an organization where the standard way of using Coherence is for all applications to run on extend nodes which connect to the cluster via a proxy service. This practice is followed even if the application is a single, dedicated JVM process (perhaps a server, perhaps a data aggregator) which could easily be co-located with the cluster (i.e. on a machine on the same network segment as the cluster). The primary motivation behind this practice is to protect the cluster from a poorly designed or implemented application.
    I want to challenge this standard procedure. If performance is a critical characteristic then the "proxy hop" can be eliminated by having the application code execute on a cluster node.
    Question: Is running an application on a cluster node a bad idea or a good idea?

    Hello,
    It is common to have application servers join as cluster members as well as Coherence*Extend clients. It is true that there is a bit of extra overhead when using Coherence*Extend because of the proxy server. I don't think there's a hard and fast rule that determines which is a better option. Has the performance of said application been measured using Coherence*Extend, and has it been determined that the performance (throughput, latency) is unacceptable?
    Thanks,
    Patrick

  • Cache config for distributed cache and TCP*Extend

    Hi,
    I want to use distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme. I want to use a distributed cache along with a cache-store. The configuration I used for my scheme was
    <distributed-scheme>
      <scheme-name>MyScheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <class-scheme>
              <class-name>com.tangosol.util.ObservableHashMap</class-name>
            </class-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>MyCacheStore</class-name>
            </class-scheme>
            <remote-cache-scheme>
              <scheme-ref>default-scheme</scheme-ref>
            </remote-cache-scheme>
          </cachestore-scheme>
          <rollback-cachestore-failures>true</rollback-cachestore-failures>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <remote-cache-scheme>
      <scheme-name>default-scheme</scheme-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>XYZ</address>
              <port>9909</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
    I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly to make my distributed cache part of the same cluster that all the other caches, which use the default scheme, are joined to. Currently, this isn't happening.
    Thanks.
    RG

    Hi,
    Is it that I need to define my distributed scheme with the CacheStore in server-coherence-cache-config.xml and then, on the client side, use a remote cache scheme to connect to my distributed cache?
    Thanks,
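    Something like the following split is what I have in mind (just a sketch reusing the names from above; I have not verified this exact configuration):

```xml
<!-- server-coherence-cache-config.xml (cluster side): the distributed
     scheme and the cache store live here, inside the cluster -->
<distributed-scheme>
  <scheme-name>MyScheme</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <rollback-cachestore-failures>true</rollback-cachestore-failures>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

<!-- client-side cache config: the same cache name simply maps to a
     remote scheme pointing at the proxy -->
<remote-cache-scheme>
  <scheme-name>MyScheme</scheme-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>XYZ</address>
          <port>9909</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```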

  • Initial Deserialization on TCP*Extend Server

    Hi Guys,
    I have observed something on a TCP*Extend server which I can't quite explain. We've got some hairy custom serialization inside an Externalizable bean. I was under the impression that classes aren't deserialized inside the cluster unless they need to be, e.g. to be processed by entry processors or by value extractors on indexes. However, when I put one of these objects into the cluster from a TCP*Extend client, I see some slightly odd behaviour on the server: the deserialization method is called on an empty object, and the object I've just sent from the client is then serialized.
    Can I therefore assume there is an extra deserialization/reserialization loop on a TCP*Extend server for data obtained from the client?
    Kind Regards,
    Max

    Hi Max,
    Yes, currently the ProxyService must deserialize/serialize data sent from/to clients as part of a POF translation step in order to support non-Java clients. We will be extending POF serialization support "into" the cluster in the next Coherence release. Once this is in place, this extra deserialization/serialization step will not be necessary and will be removed from the ProxyService.
    Regards,
    Jason

  • TCP* Extend client thread pool

    Hi,
    Is there a way to configure the number of threads used by the TCP*Extend client? What is the default value?
    For some reason I am observing TCP connection being reset. Here are the logs:
    2010-05-05 04:39:02.572/15821.6> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Closed: TcpConnection(Id=0x0000012866AD9AC4AAF0E60D08143308BF16B5D3C3356683C13C90CC0213FB3C, Open=false, LocalAddress=170.240.228.192:1105, RemoteAddress=170.240.230.13:27001)
    2010-05-05 04:39:02.572/15821.5> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Stopped: TcpInitiator{Name=DistributedCacheForHDElements-NY:TcpInitiator, State=(SERVICE_STOPPED), ThreadCount=0, Codec=Codec(Format=POF), PingInterval=0, PingTimeout=10000, RequestTimeout=10000, ConnectTimeout=10000, RemoteAddresses=[/170.240.230.13:27001,/141.128.62.137:27007,/141.128.62.138:27005,/170.240.230.13:27002,/141.128.62.138:27006,/141.128.62.137:27008,/170.240.230.13:27003,/141.128.62.138:27004], KeepAliveEnabled=true, TcpDelayEnabled=false, ReceiveBufferSize=0, SendBufferSize=0, LingerTimeout=-1}
    2010-05-05 04:39:02.588/15821.5> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Started: TcpInitiator{Name=DistributedCacheForHDElements-NY:TcpInitiator, State=(SERVICE_STARTED), ThreadCount=0, Codec=Codec(Format=POF), PingInterval=0, PingTimeout=10000, RequestTimeout=10000, ConnectTimeout=10000, RemoteAddresses=[/141.128.62.138:27006,/141.128.62.138:27005,/170.240.230.13:27003,/141.128.62.138:27004,/170.240.230.13:27002,/141.128.62.137:27007,/141.128.62.137:27008,/170.240.230.13:27001], KeepAliveEnabled=true, TcpDelayEnabled=false, ReceiveBufferSize=0, SendBufferSize=0, LingerTimeout=-1}
    2010-05-05 04:39:02.588/15821.5> (thread=[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Opening Socket connection to 141.128.62.138:27006
    2010-05-05 04:39:02.588/15821.nfo> (thread=[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Connected to 141.128.62.138:27006
    2010-05-05 04:39:02.604/15821.6> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Opened: TcpConnection(Id=0x00000128679E39D68D803E8A52F48499974C7DC9B4BF127ECF23CF2771B8CB90, Open=true, LocalAddress=170.240.228.192:4742, RemoteAddress=141.128.62.138:27006)
    2010-05-05 04:39:02.604/15821.6> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Opened: Channel(Id=1628408480, Open=true, Connection=0x00000128679E39D68D803E8A52F48499974C7DC9B4BF127ECF23CF2771B8CB90)
    2010-05-05 04:39:02.619/15821.6> (thread=DistributedCacheForHDElements-NY:TcpInitiator, member=n/a): Opened: Channel(Id=734361514, Open=true, Connection=0x00000128679E39D68D803E8A52F48499974C7DC9B4BF127ECF23CF2771B8CB90)
    Regards,
    Kishore

    Hi Kishore,
    In the proxy-scheme of your cache configuration, you can use thread-count to configure the number of worker threads the proxy uses to service TCP*Extend client requests. The default value is 0. e.g.
    <proxy-scheme>
        <service-name>ExtendTcpProxyService</service-name>
        <thread-count>50</thread-count>
    </proxy-scheme>
    -Luk

  • TCP EXTEND Server SIde configuration

    One of the configuration settings required for the server proxy scheme is the address of the server.
    I can't specify localhost as the server address, and the TCP clients are going to run on two different servers.
    I have to run this TCP Extend server on 3-4 servers for failover support.
    How can I specify the address (IP ADDRESS HERE), which changes for each server, without writing a separate configuration for each server?
    Can I replace this address based on a property, system parameter, or Spring injection?
    <proxy-scheme>
        <service-name>ExtendTcpProxyService</service-name>
        <thread-count>5</thread-count>
        <acceptor-config>
            <tcp-acceptor>
                <local-address>
                    <address>IP ADDRESS OF SERVER HERE</address>
                    <port>9099</port>
                </local-address>
            </tcp-acceptor>
        </acceptor-config>
        <autostart>true</autostart>
    </proxy-scheme>
    </caching-schemes>

    Why can't you use localhost? Surely this is what you want unless you are running on a machine with multiple network cards and need to bind to a particular address.
    If you really want to be able to specify a different address for each machine you can do this
    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
            <local-address>
                <address system-property="extend.address"></address>
                <port>9099</port>
            </local-address>
          </tcp-acceptor>
        </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
    Coherence will now use the value of the extend.address system property as the value for your address. This can be passed in on the command line like this:
    java -cp coherence.jar -Dextend.address=IP_ADDRESS_HERE com.tangosol.net.DefaultCacheServer
    Obviously your command to start the cache servers would probably have more in it than that.
    JK

  • TCP Extend (DefaultCacheServer rejects connections)

    Hi guys
    I have been trying to set up TCP Extend to make a Linux box use a cache configured on a Windows box, but the DefaultCacheServer rejects TCP connections. The config files I'm using are attached. Can anyone help?
    The DefaultCacheServer comes up nicely
    SafeCluster: Name=n/a
    Group{Address=224.3.2.0, Port=32367, TTL=1}
    MasterMemberSet
    ThisMember=Member(Id=1, Timestamp=2007-03-29 16:07:16.026, Address=147.114.162.160:54321, MachineId=17312)
    OldestMember=Member(Id=1, Timestamp=2007-03-29 16:07:16.026, Address=147.114.162.160:54321, MachineId=17312)
    ActualMemberSet=MemberSet(Size=1, BitSetCount=2
    Member(Id=1, Timestamp=2007-03-29 16:07:16.026, Address=147.114.162.160:54321, MachineId=17312)
    RecycleMillis=120000
    RecycleSet=MemberSet(Size=0, BitSetCount=0
    Services
    TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=147.114.162.160:54321}, Connections=[]}
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.2, OldestMemberId=1}
    DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), Id=1, Version=3.2, OldestMemberId=1, LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    but when I run the client, I get this
    2007-03-29 16:09:42.698 Tangosol Coherence DGE 3.2/367 <D4> (thread=TcpRingListener, member=1): Rejecting connection to member 649 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/172.26.102.115,port=36952,localport=54321]}
    Attachments: cluster-side-config.xml, client-side-config.xml

    Hi pandeyv,
    You need to configure an instance of the ProxyService in your cluster-side cache configuration file. Coherence*Extend clients connect to the ProxyService over TCP/IP and not the TcpRingService. The TcpRingService is only used by cluster members for death detection.
    See the following for instructions on configuring the cluster and client-side configuration files:
    http://wiki.tangosol.com/display/COH32UG/Configuring+and+Using+Coherence*Extend
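    As a rough starting point, a minimal cluster-side proxy-scheme might look like this (the address and port are examples only; use an address your clients can reach):

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>147.114.162.160</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```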
    Additionally, I noticed that you are using an old release of Coherence 3.2. Please upgrade to the latest 3.2 service pack (3.2.2):
    http://www.tangosol.com/product-downloads.jsp
    Regards,
    Jason

  • Adding Linux extended nodes to Qmaster from Ubuntu machines

    I realize this is both a complicated question and for some might be seen as a really easy question. I'm hoping for more the latter!
    I have a Mac Mini hooked up to a network switch, and I plug in my MacBook to set up a very basic cluster through Qmaster. It's the only way I can get jobs done in Compressor; neither of these machines has the specs suggested to really run Final Cut Studio at all, but this is a church and we just appreciate what we have.
    In that same spirit, we have an old Dell PowerEdge 700 server I put Ubuntu Studio on, running 256 MB of RAM at about 2.8 GHz, a Dell Inspiron 8200 laptop at 1.8 GHz with 512 MB of RAM, and 4 Dell Optiplexes with 256 MB of RAM at 1 GHz. All old equipment, I realize, but from what I've read it sounds like one can use these in Apple Qmaster as long as they are Unix-based. So I've been installing Ubuntu on them (I'm a Linux newbie and the other distros are a pain to install on these machines).
    From what I understand, you put a UNIX system (like Ubuntu) on these, setup something like OpenSSH, and of course I have them plugged into the network switch with my macs.
    The macs see each other and can create a cluster.
    I setup a Linux box and it could see the mac through the terminal.
    But the mac doesn't see the Linux box through terminal.
    (Of course I'm new to terminal too.)
    I opened up everything on the System Preferences side for the Mac for sharing services, no firewall setup, etc. I have a feeling there are a multitude of SSH things I have to set up on Mac and Linux that I just am unaware of.
    I go to System Preferences > Qmaster and try to set up extended nodes with the Mac Mini as an intermediary node (before anyone posts it: yes, I've studied the Qmaster manual laboriously). Where you type in hostname, username, and password, I've been typing in "ubuntustudio" as my hostname because that's what I set it up as when installing. I also set that as the username. But every example I see posted lists the username with something like a domain name after it. I think maybe I'm not typing the hostname correctly or something, but I can't find any documentation on naming conventions for something like this.
    Sorry if I sound like a newbie; on some of this I am. But our PC techs don't cover the Linux machines under our contract, I'm sure Apple would charge me a ton if they were willing to help with this at all, and every support article I have found vaguely mentions that you CAN use SSH to add non-Macs to the cluster, but just takes it as a given that you'll know exactly how to input the hostname and that it will work and all your settings will be perfect.
    I just think that with 6 machines running over 1 GHz, collectively they could be a decent set of nodes, and it just seems best practice to repurpose old equipment to death before buying new.
    Thanks for your time

    Thanks so much for your response!
    I tried just typing in the ip addresses but to no avail. This was on an Airport Extreme and I'm going to try again on a network switch that has no internet connection. I'll post back here if it works. I don't think I have any firewall or anything that should be actively blocking on Ubuntu or Mac side but I wonder if there is something I should be unlocking that I haven't thought of.
    If I have any success I'll let you know!
    Oh this is latest version of Ubuntu Studio and Mac OSX btw.

  • TransactionMap and TCP*Extend

    Hi Guys,
         From what I understand, from a Real Time Client I can't make use of local cache transactions, either single or multi cache. I appreciate that, if for no other reason :), death detection isn't as capable on an RTC node, so determining rollback conditions might not be robust enough. Is it possible to use the invocation service to proxy that operation onto a node in the cache cluster?
         Kind Regards,
         Max

    Max,
         This is definitely a very good use case for utilizing the Invocation service over Coherence*Extend.
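         For example, the client could define a remote invocation scheme along these lines (a sketch; the scheme name, service name, address and port are placeholders) and submit an Invocable that runs the transactional logic on a cluster member:

```xml
<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxy.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-invocation-scheme>
```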
         Regards,
         Gene

  • JavaFX 2.0 extend Node or Group

    Hi,
    I don't want to code everything in one class, but I can't extend the Group class. So which class do I have to extend to code some animations within it? I'd like to instantiate this class in the main class, which extends Application.
    Thanks.

    Ok, I imported the wrong package...
    I can extend the Group class.
