Local Cache containing all Distributed Cache entries

Hello all,
I am seeing what appears to be some sort of problem. I have two JVMs running: one for the application and the other serving as a Coherence cache JVM (near-cache scheme).
When I stop the cache JVM, the local JVM displays all 1200 entries, even though <high-units> for that cache is set to 300.
Does the local JVM keep a copy of the distributed data?
Can anyone explain this?
Thanks

Hi,
I have configured a near cache with a front scheme and a back scheme. In the front scheme I have used a local cache and in the back scheme I have used the distributed cache. My idea is to have a distributed cache on the Coherence servers.
I have one JVM which hosts the WebLogic app server, while a second machine runs 4 Coherence servers, all forming the cluster.
Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers?
Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling that the data is also stored locally, on the first JVM where the WebLogic server is running.
Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
Can somebody help me with these questions? I appreciate your time.
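The front/back arrangement described in the two posts above looks roughly like this (a hedged sketch: the 300-entry front limit comes from the first post; the scheme and service names are illustrative):

```xml
<near-scheme>
  <scheme-name>default-near</scheme-name>
  <front-scheme>
    <!-- Front tier: a small local cache held in the application JVM. -->
    <local-scheme>
      <eviction-policy>LRU</eviction-policy>
      <high-units>300</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <!-- Back tier: the partitioned cache on the storage-enabled Coherence servers. -->
    <distributed-scheme>
      <scheme-name>default-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```

With this layout the front map lives in the application JVM (which is why recently used entries remain readable there even after the storage JVMs stop), while the back tier's data lives only on members running with local storage enabled.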

Similar Messages

  • Cache config for distributed cache and TCP*Extend

    Hi,
    I want to use distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme. I want to use a distributed cache along with a cache-store. The configuration I used for my scheme was
    <distributed-scheme>
      <scheme-name>MyScheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <class-scheme>
              <class-name>com.tangosol.util.ObservableHashMap</class-name>
            </class-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>MyCacheStore</class-name>
            </class-scheme>
            <remote-cache-scheme>
              <scheme-ref>default-scheme</scheme-ref>
            </remote-cache-scheme>
          </cachestore-scheme>
          <rollback-cachestore-failures>true</rollback-cachestore-failures>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <remote-cache-scheme>
      <scheme-name>default-scheme</scheme-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>XYZ</address>
              <port>9909</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
    I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly to make my distributed cache part of the same cluster that all the other caches, which use the default scheme, are joined to. Currently, that is not happening.
    Thanks.
    RG

    Hi,
    Is it that I need to define my distributed scheme with the CacheStore in the server-side coherence-cache-config.xml, and then on the client side use a remote cache scheme to connect to that distributed cache?
    Thanks,
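    That is the usual split for TCP*Extend. A minimal client-side sketch, reusing the address and port from the snippet above (the scheme and service names are illustrative; the distributed-scheme with the cachestore-scheme would then live only in the server-side cache configuration of the proxy/cluster):

    ```xml
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>extend-remote</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Client side: all cache operations are forwarded over Extend/TCP. -->
        <remote-cache-scheme>
          <scheme-name>extend-remote</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address>XYZ</address>
                  <port>9909</port>
                </socket-address>
              </remote-addresses>
            </tcp-initiator>
          </initiator-config>
        </remote-cache-scheme>
      </caching-schemes>
    </cache-config>
    ```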

  • Locking in replicated versus distributed caches

    Hello,
    In the User Guide for Coherence 2.5.0, section 2.3 "Cluster Services Overview" says that the replicated cache service supports pessimistic locking, yet section 2.4 "Replicated Cache Service" says that if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock. I am trying to decide whether to use a replicated cache or a distributed cache, either of which will be small, where I want the objects to be locked across the whole cluster.
    If not all of the cluster nodes have to agree on a lock in a replicated cache, doesn't this mean that a replicated cache does not support pessimistic locking?
    Could you please explain this?
    Could you please explain this?
    Thanks,
    Rohan

    Hi Rohan,
    The Replicated cache supports pessimistic locking. The User Guide is discussing the implementation details and how they relate to performance. The Replicated and Distributed cache services differ in performance and scalability characteristics, but both support cluster-wide coherence and locking.
    Jon Purdy
    Tangosol, Inc.

  • Distributed cache in custom developement.

    Why is it not recommended to use the Distributed Cache in custom development in SharePoint 2013? Is there any reason other than the concern of overwriting the cache used by SharePoint?
    If that is the concern, what precautions can we take to overcome it?

    I think that without a scope and priority defined for SharePoint's named caches, custom named caches in the SharePoint Distributed Cache cluster may evict SharePoint's named caches if their usage is higher than SharePoint's. I believe the eviction algorithm is based on two criteria: the named caches that have been accessed the fewest times, or the named caches that have not been accessed for the longest time.
    This post is my own opinion and does not necessarily reflect the opinion or view of Slalom.

  • Index Preview Panel does not include all my index entries

    I've put all my chapters in a book and created an index. All is well until I try to edit the index. InDesign's materials tell me to go to my Index Preview panel to make edits, which works fine. But the panel does not contain all my index entries. I've tried updating the previews and regenerating the index numerous times, but I still cannot get an up-to-date preview panel. Any ideas?


  • Need Help regarding initial configuration for distributed cache

    Hi ,
    I am new to Tangosol and trying to set up a basic partitioned distributed cache, but I have not been able to do so.
    Here is my scenario:
    My application data server creates the instance of the Tangosol cache.
    I have this config.xml set on the machine where my application starts:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <!-- Caches with any name will be created as default near. -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Default distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!-- Default backing map scheme definition used by all the caches that do
             not require any eviction policies. -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params></init-params>
        </class-scheme>
      </caching-schemes>
    </cache-config>
    Now on the same machine I start a different client using the command
    java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=near-cache-config.xml -classpath
    "C:/calypso/software/release/build" -jar ../lib/coherence.jar
    The problems I am facing are:
    1) Even if I do not start the client, my application server caches the data. My config.xml is set to
    distributed, so under no circumstances should it cache the data locally.
    2) I want to bind different caches to different processes on different machines,
    e.g.
    machine1 should hold the cache1 objects
    machine2 should hold the cache2 objects
    and so on. But I could not find any documentation which explains how to do this. Can someone give me an example of
    how to do it?
    3) I want to know the details of the cache stored on any particular node, i.e. how do I find out that, say, machine1 contains
    such-and-such caches and their corresponding object values, etc.?
    Regards
    Mahesh

    Hi, thanks for the answer.
    After digging into the wiki a lot, I found something related to KeyAssociation. I think what I need is an implementation of KeyAssociation that
    stores a particular cache type's objects on a particular node or group of nodes.
    Say, for example, I want this kind of setup:
    Cache1 -> node1, node2 (as I forecast this will take a lot of memory, I assign these JVMs around 10 GB)
    Cache2 -> node3, assigned less memory (around 2 GB)
    and so on.
    From the wiki documentation I see:
    Key Association
    By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
    Does someone have an example explaining how this is done in the simplest way?
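    For reference, the key-associator route mentioned in the quoted passage is wired into the distributed scheme in configuration; a hedged sketch (the associator class name is hypothetical and would need to implement com.tangosol.net.partition.KeyAssociator):

    ```xml
    <distributed-scheme>
      <scheme-name>default-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- Hypothetical class that decides which keys are related. -->
      <key-associator>
        <class-name>com.example.MyKeyAssociator</class-name>
      </key-associator>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```

    Note that key association only co-locates related entries on the same partition; it does not by itself pin a whole cache to a chosen node.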

  • LiveCycle unable to access Cache Controller. Exception message - Error accessing the cache container - Error on GET action for cache Replicated:UM_CLUSTER_INVALIDATION_CACHE

    The incident starts after all the members are removed from the DCS Stack DefaultCoreGroup view and added back again by WebSphere Application Server. The following exceptions are seen after LiveCycle was added back into the view.
    We have a clustered environment with two nodes, 2A and 2B. Server 2A crashed, and therefore all members on node 2B were removed from the DCS view. Later the new core group view was installed, but LiveCycle did not resume operations as expected.
    Errors below:
    Exception caught while dealing with cache : Action - Get, ObjectType - UM_CLUSTER_INVALIDATION_CACHE, Exception message - Error accessing the cache container - Error on GET action for cache Replicated:UM_CLUSTER_INVALIDATION_CACHE
    ServiceRegist W   Cache get failed for service EncryptionService Reason: Error accessing the cache container - Error on GET action for cache
    Error accessing the cache container - Error on GET action for cache Local:SERVICE_FACTORY_CACHE
    The following message appeared for several different services:
    Cache put failed for service [CredentialService, ReaderExtensionsService,EncryptionService, etc]
    SSOSessionCle W   Error in cleaning stale sessions from the database. These sessions would be deleted in next trigger
                                     java.lang.RuntimeException: com.adobe.livecycle.cache.CacheActionException: Error accessing the cache container - Error on ENTRYSET action for cache Local:UM_ASSERTION_CACHE
      at com.adobe.idp.um.businesslogic.authentication.AssertionCacheManager.removeExpiredResults( AssertionCacheManager.java:125)
      at com.adobe.idp.um.scheduler.SSOSessionCleanupJob.execute(SSOSessionCleanupJob.java:69)
      at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
      at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
    Caused by: com.adobe.livecycle.cache.CacheActionException: Error accessing the cache container - Error on ENTRYSET action for cache Local:UM_ASSERTION_CACHE
      at com.adobe.livecycle.cache.adapter.GemfireLocalCacheAdapter.entrySet(GemfireLocalCacheAdap ter.java:219)
      at com.adobe.idp.um.businesslogic.authentication.AssertionCacheManager.removeExpiredResults( AssertionCacheManager.java:99)
      ... 3 more
    Caused by: com.gemstone.gemfire.distributed.DistributedSystemDisconnectedException: GemFire on 2B<v1>:9057/51073 started at Sun Aug 17 08:57:23 EDT 2014: Message distribution has terminated, caused by com.gemstone.gemfire.ForcedDisconnectException: This member has been forced out of the distributed system.  Reason='this member is no longer in the view but is initiating connections'
      at com.gemstone.gemfire.distributed.internal.DistributionManager$Stopper.generateCancelledEx ception(DistributionManager.java:746)
      at com.gemstone.gemfire.distributed.internal.InternalDistributedSystem$Stopper.generateCance lledException(InternalDistributedSystem.java:846)
      at com.gemstone.gemfire.internal.cache.GemFireCacheImpl$Stopper.generateCancelledException(G emFireCacheImpl.java:1090)
      at com.gemstone.gemfire.CancelCriterion.checkCancelInProgress(CancelCriterion.java:59)
      at com.gemstone.gemfire.internal.cache.LocalRegion.checkRegionDestroyed(LocalRegion.java:669 4)
      at com.gemstone.gemfire.internal.cache.LocalRegion.checkReadiness(LocalRegion.java:2587)
      at com.gemstone.gemfire.internal.cache.LocalRegion.entries(LocalRegion.java:1815)
      at com.gemstone.gemfire.internal.cache.LocalRegion.entrySet(LocalRegion.java:7941)
      at com.adobe.livecycle.cache.adapter.GemfireLocalCacheAdapter.entrySet(GemfireLocalCacheAdap ter.java:209)
      ... 4 more
    Caused by: com.gemstone.gemfire.ForcedDisconnectException: This member has been forced out of the distributed system.  Reason='this member is no longer in the view but is initiating connections'
      at com.gemstone.org.jgroups.protocols.pbcast.ParticipantGmsImpl.handleLeaveResponse(Particip antGmsImpl.java:106)
      at com.gemstone.org.jgroups.protocols.pbcast.GMS.up(GMS.java:1289)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.VIEW_SYNC.up(VIEW_SYNC.java:202)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:276)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:625)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:187)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:504)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.FD.up(FD.java:438)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.Discovery.up(Discovery.java:258)
      at com.gemstone.org.jgroups.stack.Protocol.passUp(Protocol.java:767)
      at com.gemstone.org.jgroups.protocols.TP.handleIncomingMessage(TP.java:1110)
      at com.gemstone.org.jgroups.protocols.TP.handleIncomingPacket(TP.java:1016)
      at com.gemstone.org.jgroups.protocols.TP.receive(TP.java:923)
      at com.gemstone.org.jgroups.protocols.UDP$UcastReceiver.run(UDP.java:1320)
      at java.lang.Thread.run(Thread.java:773)
    [22/08/14 0:28:10:237 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:29:10:252 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:30:10:268 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:31:10:283 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:32:10:298 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:33:10:313 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:34:10:328 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:35:10:343 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:36:10:358 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:37:10:373 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    [22/08/14 0:38:10:389 EDT] 00000b4f WorkPolicyEva W   Exception thrown in background thread. Cause: java.lang.NullPointerException
    I have tried looking for the root cause of why LiveCycle was not able to resume normally, but didn't find anything related.

    LiveCycle uses GemFire as the distributed cache for cluster members. If you are using TCP locators (caching based on TCP instead of UDP), below are the possible situations which might lead to a "ForcedDisconnectException":
    - There is a time difference between the two nodes.
    - There are network connectivity issues.
    - High CPU usage on the member that crashed.
    -Wasil

  • Different distributed caches within the cluster

    Hi,
    I've three machines, n1, n2 and n3, that host Tangosol. Two of them act as the primary distributed cache and the third one acts as the secondary cache. I also have WebLogic running on n1, which, based on some requests, pumps data onto the distributed cache on n1 and n2. I've a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All 3 nodes are within the same cluster.
    I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. For example, suppose I do not start an instance of Tangosol on node n3, and an object gets pruned from either n1 or n2; ideally I should get a storage-not-configured exception, which does not happen.
    The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol does populate the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and have a similar config file on n3 with local storage for Dist:n3 set to true and local storage for the primary cache set to false.
    Can I configure local-storage specific to a cache rather than to a node?
    I also have an EJB deployed on WebLogic that serves a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement
    NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    "i've three machines n1, n2 and n3 respectively that host tangosol. 2 of them act as the primary distributed cache and the third one acts as the secondary cache."
    First, I am curious as to the requirements that drive this configuration setup.
    "i would like to ensure that the data directly coming from weblogic should only be distributed across n1 and n2 and NOT n3. [...] i would have the statement NamedCahe n3 = CacheFactory.getCache("n3") in the bean as well."
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service-by-service basis (i.e. distributed-scheme/local-storage).
    Later,
    Rob Misek
    Tangosol, Inc.
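    Rob's suggestion might be sketched like this (scheme and service names are illustrative; each machine would enable or disable storage per service, e.g. via the local-storage element or its system-property override):

    ```xml
    <!-- Primary cache service: storage enabled on n1/n2, disabled on n3. -->
    <distributed-scheme>
      <scheme-name>primary-scheme</scheme-name>
      <service-name>PrimaryDistributedCache</service-name>
      <local-storage system-property="primary.localstorage">true</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- Secondary cache service: storage enabled only on n3. -->
    <distributed-scheme>
      <scheme-name>secondary-scheme</scheme-name>
      <service-name>SecondaryDistributedCache</service-name>
      <local-storage system-property="secondary.localstorage">false</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```

    On n1 and n2 the primary service would run with storage enabled and the secondary disabled; on n3 the settings would be reversed.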

  • How to organise cache structure in distributed and fault tolerant way?

    Suppose I have two caches - cacheA that contains objects of type A and cacheB containing objects of type B; caches are distributed across several physical servers.
    Objects in cacheB should listen to updates of objects in cacheA.
    How should the objects be implemented? In particular, how should the subscription be organized? For example, if in JVM1 I create an ObjectB and subscribe it to an ObjectA, then launch JVM2 in the same cluster and shut down JVM1, ObjectB will be available in cacheB on JVM2 but will not receive any updates about ObjectA anymore.
    How can this problem be solved?

    That does not answer my question, unfortunately. Let's say that I have something that listens to all updates to all objects in a cache that is distributed across several physical servers. The requirements are:
    - whenever an object in the cache is changed, the change is handled somehow
    - the code that handles the update has read/write access to the cache
    - there is one and only one notification per change
    - network traffic is minimized, i.e. there is no single dedicated JVM that listens to all updates; ideally each node in the cluster serves updates for its local data
    I'm thinking about setting up a listener on the backing map, but I'm not so sure.

  • Multiple caches use same distributed scheme, what about cachestore config?

    Hi,
    If a distributed scheme is configured with a read-write-backing-map, will ALL caches using that scheme be required to define the cache store configuration? For example, we have two caches, "ExpiredSessions" and "Customer", which use the same distributed-scheme, but only the "ExpiredSessions" cache needs the read-write-backing-map (for transaction-lite behavior) AND needs to be persisted to the DB. The "Customer" cache does NOT need a read-write-backing-map and does NOT need to be persisted to the DB. Currently, we have the following configuration, which also has "write-delay" and "cache-store" settings for the "Customer" cache.
    Is it possible not to have the cache store configuration (write-delay, cache-store) for the "Customer" cache even though it uses the same distributed-scheme as the "ExpiredSessions" cache (which needs the cache store configuration)? We think it could remove some overhead and improve efficiency for the "Customer" cache operations.
    Or is a separate distributed-scheme without the cache store configuration required for the "Customer" cache? But then it will use separate/additional thread pools for the distributed service?
    Any suggestions?
    Thanks in advance for your help.
    <cache-mapping>
      <cache-name>ExpiredSessions</cache-name>
      <scheme-name>default-distributed</scheme-name>
      <init-params>
        <init-param>
          <param-name>expiry-delay</param-name>
          <param-value>2s</param-value>
        </init-param>
        <init-param>
          <param-name>write-delay</param-name>
          <param-value>1s</param-value>
        </init-param>
        <init-param>
          <param-name>cache-store</param-name>
          <param-value>xxx.xxx.DBCacheStore</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <cache-mapping>
      <cache-name>Customer</cache-name>
      <scheme-name>default-distributed</scheme-name>
      <init-params>
        <init-param>
          <param-name>cache-store</param-name>
          <param-value>xxx.xxx.EmptyCacheStore</param-value>
        </init-param>
        <init-param>
          <param-name>write-delay</param-name>
          <param-value>24h</param-value>
        </init-param>
      </init-params>
    </cache-mapping>
    <!-- Default distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>default-distributed</scheme-name>
      <service-name>XXXDistributedCache</service-name>
      <thread-count>16</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-ref>rw-bm</scheme-ref>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    Hi,
    Yes, you can use the same service for different caches with different configurations. You define a base service configuration with the common parameters, and each cache then has its own scheme (not service) referring to the base service configuration along with its own configuration. For example,
              <cache-mapping>
                   <cache-name>ExpiredSessions</cache-name>
                    <scheme-name>default-expiry</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>2s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>write-delay</param-name>
                             <param-value>1s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>cache-store</param-name>
                             <param-value>xxx.xxx.DBCacheStore</param-value>                    
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>Customer</cache-name>
                    <scheme-name>default-customer</scheme-name>
              </cache-mapping>
              </caching-scheme-mapping>
                   <!-- Default Distributed pricing-distributedcaching scheme. -->
              <caching-schemes>
                   <distributed-scheme>
                        <scheme-name>default-distributed</scheme-name>
                        <service-name>XXXDistributedCache</service-name>
                        <thread-count>16</thread-count>
                        <autostart>true</autostart>
                   </distributed-scheme>
                   <distributed-scheme>
                        <scheme-name>default-customer</scheme-name>
                        <scheme-ref>default-distributed</scheme-ref>
                        <backing-map-scheme>
                             <local-scheme />
                        </backing-map-scheme>
                   </distributed-scheme>
                   <distributed-scheme>
                        <scheme-name>default-expiry</scheme-name>
                        <scheme-ref>default-distributed</scheme-ref>
                        <backing-map-scheme>
                             <read-write-backing-map-scheme>
                                  <scheme-ref>rw-bm</scheme-ref>
                             </read-write-backing-map-scheme>
                        </backing-map-scheme>
                   </distributed-scheme>
              </caching-schemes>
    Hope this helps!
    Cheers,
    NJ

  • Set request timeout for distributed cache

    Hi,
    Coherence provides three parameters we can tune for distributed cache services:
    tangosol.coherence.distributed.request.timeout - the default client request timeout for distributed cache services
    tangosol.coherence.distributed.task.timeout - the default server execution timeout for distributed cache services
    tangosol.coherence.distributed.task.hung - the default time before a thread is reported as hung by distributed cache services
    It seems these timeout values are used for both system activities (node discovery, data re-balancing, etc.) and user activities (get, put). We would like to set the request timeout for get/put, but a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to set the timeout values separately? Or is it even possible to set a timeout on individual calls (like get(key, timeout))?
    -thanks

    Hi,
    Not necessarily for the get and put methods, but for queries, entry-processors, entry-aggregators and invocable agent sending, you can make the sent filter, aggregator, entry-processor or agent implement PriorityTask, which allows you to make QoS expectations known to Coherence. Most or all stock aggregators and entry-processors implement PriorityTask, if I remember correctly.
    For more info, look at the documentation of PriorityTask.
    Best regards,
    Robert
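    Another option, depending on the Coherence version, is to isolate the latency-sensitive caches on their own cache service and give only that service a tight request timeout, so the global system property is left alone; a hedged sketch (scheme and service names are illustrative):

    ```xml
    <distributed-scheme>
      <scheme-name>fast-distributed</scheme-name>
      <service-name>FastDistributedCache</service-name>
      <!-- Applies to client requests against this service only; other
           services keep the default timeout. -->
      <request-timeout>10ms</request-timeout>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```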

  • Setup failover for a distributed cache

    Hello,
    For our production setup we will have 4 app servers, one clone per app server, so there will be 4 clones in the cluster. And we will have 2 JVMs for our distributed cache, one being a failover; both of these will be in the cluster.
    How would I configure the failover for the distributed cache?
    Thanks

    user644269 wrote:
    Right - so each of the near cache schemes defined would need to have the back map high-units set to where it could take on 100% of data.
    Specifically the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the [Cache Configuration Elements|http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements]).
    There are two options:
    1) No Expiry -- In this case you would have to size the storage enabled JVMs to that an individual JVM could store all of the data.
    or
    2) Expiry -- In this case you would set the high-units to a value that you determine. If you want it to store all the data then it needs to be set higher than the total number of objects that you will store in the cache at any given time, or you can set it lower with the understanding that once that high-units value is reached, Coherence will evict some data from the cluster (i.e. remove it from the "cluster memory").
    user644269 wrote:
    Other than that - there is not configuration needed to ensure that these JVM's act as a failover in the event one goes down.
    Correct, data fault tolerance is on by default (set to one level of redundancy).
    :Rob:
    Coherence Team
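    The default redundancy Rob mentions corresponds to the backup-count setting of the distributed scheme; a hedged sketch making the default explicit (scheme and service names are illustrative):

    ```xml
    <distributed-scheme>
      <scheme-name>failover-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- One synchronous backup copy per partition (the default), so the
           cluster survives the loss of one storage JVM without data loss. -->
      <backup-count>1</backup-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```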

  • Distributed Cache errors

    On my application server I am getting periodic entries under the General category:
    "Unable to write SPDistributedCache call usage entry."
    The error occurs every 5 minutes exactly. It is followed by:
    Calling... SPDistributedCacheClusterCustomProvider:: BeginTransaction
    Calling... SPDistributedCacheClusterCustomProvider:: GetValue(object transactionContext, string type, string key
    Calling... SPDistributedCacheClusterCustomProvider:: GetStoreUtcTime.
    Calling... SPDistributedCacheClusterCustomProvider:: Update(object transactionContext, string type, string key, byte[] data, long oldVersion).
    Sometimes this group of calls succeeds without an error and the sequence continues, maybe for 3 iterations every 5 minutes. Then the error
    "Unable to write SPDistributedCache call usage entry."
    happens again.
    My Distributed Cache Service is running on my Application Server and on my web front end.
    All values are default.
    Any idea why this is happening intermittently?
    Love them all...regardless. - Buddha

    Hi,
    From the error message, check whether the super user accounts were set up correctly.
    Refer to the article about configuring object cache user accounts in SharePoint Server 2013:
    https://technet.microsoft.com/en-us/library/ff758656(v=office.15).aspx
    If the issue persists, please check the SharePoint ULS logs located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS to get a detailed error description.
    Best Regards,
    Lisa Chen
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • Distributed cache size limit

    Hi,
    I want to create a distributed cache with 2 nodes.
    Each node can have a maximum of 500 entries.
    The total number of entries in the cache across both nodes together should not exceed 1000.
    If a user tries to put more than 1000 elements in the cache, then some old entries (LRU, LFU) should be evicted and the new entries added.
    Can you please help me with the cache scheme for the above scenario.
    Your help will be appreciated.
    Thanks & Regards,
    Viral Gala

    Hi,
    I tried the code below, but <high-units> was not working:
    the cache size is 1010 (greater than 500).
    Java code
    package com.splwg.ccb.domain.pricing;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class CoherenceSizeTest {
         public static void main(String[] args) {
              NamedCache cache = CacheFactory.getCache("CheckSize");
              for (int i = 0; i < 1010; i++) {
                   String key = "key" + i;
                   cache.put(key, new Long(i));
              }
              System.out.println(" Cache size : " + cache.size());
         }
    }
    config file
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>CheckSize</cache-name>
                <scheme-name>default-distributed</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>default-distributed</scheme-name>
                <service-name>DistributedCache</service-name>
                <backing-map-scheme>
                    <local-scheme />
                </backing-map-scheme>
                <high-units>500</high-units>
                <autostart>true</autostart>
            </distributed-scheme>
        </caching-schemes>
    </cache-config>
    Console output
    MasterMemberSet(
      ThisMember=Member(Id=1, Timestamp=2014-11-07 16:31:21.123, Address=10.180.7.97:8088, MachineId=16932, Location=site:,machine:OFSS310723,process:4036, Role=SplwgCcbDomainCoherenceSizeTest)
      OldestMember=Member(Id=1, Timestamp=2014-11-07 16:31:21.123, Address=10.180.7.97:8088, MachineId=16932, Location=site:,machine:OFSS310723,process:4036, Role=SplwgCcbDomainCoherenceSizeTest)
      ActualMemberSet=MemberSet(Size=1
        Member(Id=1, Timestamp=2014-11-07 16:31:21.123, Address=10.180.7.97:8088, MachineId=16932, Location=site:,machine:OFSS310723,process:4036, Role=SplwgCcbDomainCoherenceSizeTest)
      MemberId|ServiceVersion|ServiceJoined|MemberState
        1|3.7.1|2014-11-07 16:31:24.375|JOINED
      RecycleMillis=1200000
      RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    Cache size : 1010
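    For what it's worth, a likely cause is that <high-units> is not valid as a direct child of <distributed-scheme>; it belongs inside the backing map's <local-scheme>, and it is enforced per storage-enabled member rather than cluster-wide. A hedged sketch of a corrected scheme for this scenario (2 storage nodes at 500 units each, roughly 1000 entries cluster-wide, with LRU eviction):

    ```xml
    <distributed-scheme>
        <scheme-name>default-distributed</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
            <local-scheme>
                <!-- per-member limit: 2 storage nodes x 500 units ~= 1000 entries total -->
                <high-units>500</high-units>
                <eviction-policy>LRU</eviction-policy>
            </local-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    ```

    Note that with a single running member (as in the console output above, ActualMemberSet Size=1), a 500-unit per-member limit would still cap the cache well below 1010 entries, so seeing all 1010 suggests the limit was never applied.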

  • Distributed cache with a backing-map as another distributed cache

    Hi All,
    Is it possible to create a distributed cache with a backing-map-scheme that is another distributed cache with local-storage disabled?
    Please let me know how to configure this type of cache.
    regards
    S

    Hi Cameron,
    I am trying to create a distributed-scheme with a backing-map scheme. Is it possible to configure another distributed cache as the backing map scheme for a cache?
    <distributed-scheme>
         <scheme-name>MyDistCache-2</scheme-name>
         <service-name> MyDistCacheService-2</service-name>
         <backing-map-scheme>
                   <external-scheme>
                        <scheme-name>MyDistCache-3</scheme-name>
                   </external-scheme>
         </backing-map-scheme>
    </distributed-scheme>
         <distributed-scheme>
              <scheme-name>MyDistCache-3</scheme-name>
              <service-name> MyDistBackCacheService-3</service-name>
              <local-storage>false</local-storage>
         </distributed-scheme>
    Please correct my understanding.
    Regards
    Srini
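    As far as I know, <backing-map-scheme> accepts local, external (disk-based), overflow, paged-external, read-write, or class schemes; an <external-scheme> cannot reference a <distributed-scheme> by name as in the snippet above. If the goal is to spill backing-map data out of the JVM heap rather than into another cache service, a sketch along these lines may be closer (sizes illustrative):

    ```xml
    <distributed-scheme>
        <scheme-name>MyDistCache-2</scheme-name>
        <service-name>MyDistCacheService-2</service-name>
        <backing-map-scheme>
            <external-scheme>
                <!-- store backing-map entries in NIO-managed files instead of another cache service -->
                <nio-file-manager>
                    <initial-size>1MB</initial-size>
                    <maximum-size>100MB</maximum-size>
                </nio-file-manager>
            </external-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    ```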
