Distributed cache and MapListener

Do MapListeners receive all events on a distributed cache when the cache is updated from a node other than the one on which the MapListener is registered?
I ask because I've been testing the following configuration:
A distributed cache started on 2 machines.
When listening for events on the cache (using getCache(String, ClassLoader) to get the cache, and addMapListener() to attach my MapListener implementation to it), nothing is received when the cache is updated from the other node.
Am I misusing MapListeners?
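The registration pattern being described is roughly the following minimal sketch (the cache name and the listener body are placeholders, not taken from the post above):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;

    public class ListenerExample {
        public static void main(String[] args) throws Exception {
            // obtain the named distributed cache
            NamedCache cache = CacheFactory.getCache("example-cache");

            // register a listener for all keys; in a correctly formed cluster the
            // events arrive no matter which member performed the update
            cache.addMapListener(new MapListener() {
                public void entryInserted(MapEvent evt) { System.out.println("inserted: " + evt); }
                public void entryUpdated(MapEvent evt)  { System.out.println("updated: " + evt); }
                public void entryDeleted(MapEvent evt)  { System.out.println("deleted: " + evt); }
            });

            // keep the JVM alive so events can be received
            Thread.sleep(Long.MAX_VALUE);
        }
    }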

Sorry, I have been testing using an inappropriate configuration (-Dtangosol.coherence.ttl=0).
Therefore the two machines could not see each other.
The MapListener works as expected.
Pedro

Similar Messages

  • Distributed cache and Windows AppFabric

    I've got some issues with the Distributed Cache.
    I followed this procedure in order to remove the service instance:
    Run Get-SPServiceInstance to find the GUID in the ID section of the Distributed Cache Service that is causing an issue.
    $s = get-spserviceinstance GUID 
    $s.delete()
    It deletes fine but when I try to add:  
    Add-SPDistributedCacheServiceInstance
    I get this: 
    Add-SPDistributedCacheServiceInstance : Could not load file or assembly 'Microsoft.ApplicationServer.Caching.Configuration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
    how do I resolve this? 

    Hi JmATK,
    Regarding this issue, we don't recommend deleting a service instance without first stopping the service gracefully, because there may be data or state that is still shared between one host and another.
    The recommendation from Stacy is good, and if the issue is a zombie process causing an unresponsive or hung service, you may need to reset it by re-attaching the database / farm.
    Best regards.
    Victoria
    TechNet Community Support

  • Cache config for distributed cache and TCP*Extend

    Hi,
    I want to use a distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme. I want to use a distributed cache along with a cache store. The configuration I used for my scheme was:
    <distributed-scheme>
      <scheme-name>MyScheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <class-scheme>
              <class-name>com.tangosol.util.ObservableHashMap</class-name>
            </class-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>MyCacheStore</class-name>
            </class-scheme>
            <remote-cache-scheme>
              <scheme-ref>default-scheme</scheme-ref>
            </remote-cache-scheme>
          </cachestore-scheme>
          <rollback-cachestore-failures>true</rollback-cachestore-failures>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
    </distributed-scheme>
    <remote-cache-scheme>
      <scheme-name>default-scheme</scheme-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>XYZ</address>
              <port>9909</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
    I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly so that my distributed cache becomes part of the same cluster that all the other caches, which use the default scheme, are joined to. Currently, this isn't happening.
    Thanks.
    RG

    Hi,
    Do I need to define my distributed scheme with the CacheStore in the server-side coherence-cache-config.xml, and then on the client side use a remote-cache-scheme to connect to that distributed cache?
    Thanks,

  • How can I configure Distributed Cache servers and front-end servers for a streamlined topology in SharePoint 2013?

    My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology, having 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows Server 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in advance!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then, validate that no other services (except for the Foundation Web service, for ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
    Trevor Seward

  • Query from Distributed Cache

    Hi
    I am a newbie to Oracle Coherence and trying to get hands-on experience by running an example (coherence-example-distributedload.zip) (Coherence GE 3.6.1). I am running two server instances. After this I ran "load.cmd" to distribute data across the two server nodes - I can see that the data is partitioned across the server instances.
    Now I run another program instance (in another JVM) which joins the distributed cache and queries the data loaded on the server instances. I see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?
         NamedCache nNamedCache = CacheFactory.getCache("example-distributed");
         Filter eEqualsFilter = new GreaterFilter("getLocId", "1000");
         Set keySet = nNamedCache.keySet(eEqualsFilter);
    I see here that keySet has no records. Can you please help?
    Thanks
    sunder

    I got this problem sorted out - the problem was in cache-config.xml. The correct one looks as below.
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache1</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>DBCacheLoaderScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>DBCache-eviction</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.test.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>locations</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <cachestore-timeout>6000</cachestore-timeout>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <thread-count>10</thread-count>
      <autostart>true</autostart>
    </distributed-scheme>
    <invocation-scheme>
      <scheme-name>example-invocation</scheme-name>
      <service-name>InvocationService1</service-name>
      <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
    </invocation-scheme>
    I had missed the <class-scheme> element inside the <cachestore-scheme> of <read-write-backing-map-scheme>.
    Thanks
    sunder

  • Object locking in Distributed cache

    Hi,
         I have gone through some of the posts on the forum regarding the locking issues.
         Thread at http://www.tangosol.net/forums/thread.jspa?messageID=3416 specifies that
         ..locks block locks, not gets or puts. If one member locks a key, it prevents another member from locking the key. It does not prevent the other member from getting/putting that key.
         What exactly do we mean by the above statement?
         I'm using a distributed cache and would like to lock an object before "put" or "remove" operations, but I want the "read" operation to be without locks. Now my questions are:
         1) In a distributed cache setup, if I try to obtain a lock before a put or remove and discover that I cannot obtain it (i.e. false is returned) because someone else has locked it, how do I discover when the other entity releases the lock, and when should I retry for the lock?
         2) Again, if I lock using the "lock(object key)" method, can I be sure that no other cluster node would be writing/reading that key until I release the lock?
         3) The first post in the thread http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588 suggests that in a distributed setup locks have no effect on other cluster nodes. But I want locks to be valid across the cluster: if an item is locked, then no one else should be able to transact with it. How do I get that?
         Regards,
         Mohnish

    Hi Mohnish,
         >> 1) In a distributed cache setup, if I try to obtain
         >> a lock before put or remove and discover that i cannot
         >> obtain it (i.e false is returned) because someone else
         >> has locked it, then how do I discover when the other
         >> entity releases lock and when should i retry for the
         >> lock?
         You may try to acquire a lock (waiting indefinitely for lock acquisition) by calling <tt>cache.lock(key, -1)</tt>; you may try for a specified time period by calling <tt>cache.lock(key, cMillis)</tt>. With either of these approaches, your thread will block until the other thread releases the lock (or the timeout is reached). In either case, if the other node releases its lock your lock request will complete immediately and your thread will resume execution.
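         As an illustration of this pattern, a minimal sketch (the cache name, key, timeout and update are placeholders):

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;

         public class LockExample {
             public static void main(String[] args) {
                 NamedCache cache = CacheFactory.getCache("example-cache");
                 Object key = "example-key";

                 // wait up to 5 seconds for the lock; lock(key, -1) waits indefinitely
                 if (cache.lock(key, 5000)) {
                     try {
                         Object value = cache.get(key);
                         cache.put(key, value);   // update while holding the lock
                     } finally {
                         cache.unlock(key);       // always release the lock
                     }
                 } else {
                     // lock not acquired within the timeout: retry or give up
                 }
             }
         }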
         >> 2) Again, if i lock USING "LOCK(OBJECT KEY)" method,
         >> can i be sure that no other cluster node would be
         >> writing/reading to that node until I release that lock?
         If you want to prevent other threads from writing/reading a cache entry, you must ensure that those other threads lock the key prior to writing/reading it. If they do not lock it prior to writing, they may have dirty writes (overwriting your write or vice versa); if they do not lock prior to reading, they may have dirty reads (having the underlying data change after they've read it).
         >> 3) The first post in the thread
         http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588
         >> suggests that in distributed setup locks are of no
         >> effect to other cluster nodes. But i want locks to be
         >> valid across cluster. If item locked, then no one else
         >> shud be able to transact with it. How to get that?
         The first post in that thread states that if the second thread doesn't lock, then it will overwrite the first thread (even if the first thread has locked). However, there is an inconsistency in that the Replicated cache topology has a stronger memory model than the Distributed/Near topologies. In the Replicated topology, locks not only block out locks from other nodes, they also prevent puts from other nodes.
         Jon Purdy
         Tangosol, Inc.

  • Different distributed caches within the cluster

    Hi,
    I've three machines, n1, n2 and n3, that host Tangosol. Two of them act as the primary distributed cache and the third one acts as the secondary cache. I also have WebLogic running on n1, which, based on some requests, pumps data onto the distributed cache on n1 and n2. I've a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All three nodes are within the same cluster.
    I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. For example, I do not start an instance of Tangosol on node n3, and an object gets pruned from either n1 or n2; ideally I should then get a "storage not configured" exception, which does not happen.
    The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol populates the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and have a similar config file on n3 with local storage for Dist:n3 set to true and local storage for the primary cache set to false.
    Can I configure local-storage specific to a cache rather than to a node?
    I also have an EJB deployed on WebLogic that also services a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement
    NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    i've three machines n1 , n2 and n3 respectively that
    host tangosol. 2 of them act as the primary
    distributed cache and the third one acts as the
    secondary cache.
    First, I am curious as to the requirements that drive this configuration setup.
    i would like to ensure that the data directly coming
    from weblogic should only be distributed across n1
    and n2 and NOT n3. for e.g. i do not start an
    instance of tangosol on node n3. and an object gets
    pruned from either n1 or n2. so ideally i should get
    a storage not configured exception which does not
    happen.
    The point is the moment is say
    CacheFactory.getCache("Dist:n3") in the cache
    listener, tangosol does populate the secondary cache
    by creating an instance of Dist:n3 on either n1 or n2
    depending from where the object has been pruned.
    from my understanding i dont think we can have a
    config file on n1 and n2 that does not have a scheme
    for n3. i tried doing that and got an illegalstate
    exception.
    my next step was to define the Dist:n3 scheme on n1
    and n2 with local storage false and have a similar
    config file on n3 with local-storage for Dist:n3 as
    true and local storage for the primary cache as
    false.
    can i configure local-storage specific to a cache
    rather than to a node.
    i also have an EJB deployed on weblogic that also
    entertains a getData request. i.e. this ejb will also
    check the primary cache and the secondary cache for
    data. i would have the statement
    NamedCahe n3 = CacheFactory.getCache("n3") in the
    bean as well.
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service by service basis (i.e. distributed-scheme/local-storage).
    Later,
    Rob Misek
    Tangosol, Inc.

  • Foundation 2013 Farm and Distributed Cache settings

    We are on a 3 tier farm - 1 WFE + 1APP + 1SQL - have had many issues with AppFab and Dist Cache; and an additional issue with noderunner/Search Services.  Memory and CPU running very high.  Read that we shouldn't be running Search
    and Dist Cache in the same server, nor using a WFE as a cache host.  I don't have the budget to add another server in my environment. 
    I found an article (IderaWP_CachingFormSharePointPerformance.pdf) saying "To make use of SharePoint's caching capabilities requires a Server version of the platform." because it requires the publishing feature, which Foundation doesn't have. 
    So, I removed Distributed Cache (using PowerShell) from my deployment and disabled AppFabric. This resolved 90% of the server errors, but performance didn't improve. Now, not only am I getting errors on Central Admin - it expects Distributed Cache
    - but I'm also seeing disk operation read times of 4000 ms.
    Questions:
    1) Should I enable AppFab and disable cache?
    2) Does Foundation support Dist Cache?  Do I need to run Distributed Cache?
    3) If so, can I run with just 1 cache host?  If I shouldn't run it on a WFE or an App server with Search, do I have to stop Search all together?  What happens with 2 tier farms out there? 
    4) Reading through the labyrinth of links on TechNet and MSDN on the subject, most of them says "Applies to SharePoint Server".
    5) Anyone out there on a Foundation 2013 production environment that could share your experience?
    Thanks in advance for any help with this!
    Monica

    That article is referring to BlobCache, not Distributed Cache. BlobCache requires Publishing, hence Server, but DistributedCache is required on all SharePoint 2013 farms, regardless of edition.
    I would leave your DistCache on the WFE, given the App Server likely runs Search. Make sure you install AppFabric CU5 and make sure you make the changes as noted in the KB for AppFabric CU3.
    You'll need to separately investigate your disk performance issues. Could be poor disk layout, under spec'ed disks, and so on. A detail into the disks that support SharePoint would be valuable (type, kind, RPM if applicable, LUNs in place, etc.).
    Trevor Seward

  • How to put data into a cache and distribute it to nodes using Oracle Coherence

    Hi Friends,
    I have some random-number data being written into a file; from that file I read the data and want to put it into a cache. How can I put the data into the cache and partition it across different nodes (machines) to calculate things like standard deviation, variance, etc.? (Or how can I implement Monte Carlo using Oracle Coherence?) If anyone knows, please suggest a flow.
    Thank you.
    regards
    chandra

    Hi Robert,
    I have some bulk data in an ArrayList or object format that I want to put into the cache.
    I am not able to put it into the cache. I am using the put method, cache.put(Object key, Object value), but it is not putting the data into the cache.
    Can you please help me? I am sending my code; please go through it and tell me where I made a mistake.
    package lab3;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.Scanner;

    public class BlockScoleData {
        /**
         * @param args
         * s = the spot market price
         * x = the exercise price of the option
         * v = instantaneous standard deviation of s
         * r = risk-free instantaneous rate of interest
         * t = time to expiration of the option
         * n = number of MC simulations
         */
        private static String outputFile = "D:/cache1/sampledata2.txt";
        private static String inputFile = "D:/cache1/sampledata2.txt";
        NearCache cache;
        List<Credit> creditList = new ArrayList<Credit>();

        public void writeToFile(int noofsamples) {
            Random rnd = new Random();
            PrintWriter writer = null;
            try {
                writer = new PrintWriter(outputFile);
                for (int i = 1; i <= noofsamples; i++) {
                    double s = rnd.nextInt(200) * rnd.nextDouble();
                    //double x = rnd.nextInt(250) * rnd.nextDouble();
                    int t = rnd.nextInt(5);
                    double v = rnd.nextDouble();
                    double r = rnd.nextDouble() / 10;
                    //int n = rnd.nextInt(90000);
                    writer.println(s + " " + t + " " + v + " " + r);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                writer.close();
                writer = null;
            }
        }

        public List<Credit> readFromFile() {
            Scanner scanner = null;
            Credit credit = null;
            // List<Credit> creditList = new ArrayList<Credit>();
            try {
                scanner = new Scanner(new File(inputFile));
                while (scanner.hasNext()) {
                    credit = new Credit(scanner.nextDouble(), scanner.nextInt(),
                            scanner.nextDouble(), scanner.nextDouble());
                    creditList.add(credit);
                    System.out.println("read the list from file:" + creditList);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                scanner.close();
                credit = null;
                scanner = null;
            }
            return creditList;
        }

        // public void putCache(String cachename, List<Credit> list) {
        //     cache = CacheFactory.getCache("VirtualCache");
        //     List<Credit> rand = new ArrayList<Credit>();
        // }

        public Object put(Object key, Object value) {
            cache = (NearCache) CacheFactory.getCache("mycache");
            String cachename = cache.getCacheName();
            List<Credit> cachelist = new ArrayList<Credit>();
            // Object key;
            //cachelist = (List<Credit>) cache.put(creditList, creditList);
            cache.put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
            return cachelist;
        }

        public static void main(String[] args) throws Exception {
            NearCache cache = (NearCache) CacheFactory.getCache("mycache");
            new BlockScoleData().writeToFile(20);
            //new BlockScoleData().putCache("Name",);
            System.out.println("New file \"myfile.csv\" has been created to the current directory");
            CacheFactory.ensureCluster();
            new BlockScoleData().readFromFile();
            System.out.println("data read from file successfully");
            List<Credit> creditList = new ArrayList<Credit>();
            new BlockScoleData().put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
            //cache = CacheFactory.getCache("mycache");
            //mycacheput("Name", new BlockScoleData());
            //System.out.println("name of cache is :" + mycache.getCacheName());
            //System.out.println("value in cache is :" + mycache.get("Name"));
            //System.out.println("cache services are :" + mycache.getCacheService());
        }
    }
    regards
    chandra
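    There is no reply in this thread, but for reference, here is a minimal sketch of how such a list is typically loaded into a Coherence cache - one entry per key instead of using the whole list as a key. The integer key scheme is an assumption for illustration, and Credit would need to be serializable (java.io.Serializable or POF) to be stored in a distributed cache:

    package lab3;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.List;

    public class LoadCredits {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("mycache");
            List<Credit> credits = new BlockScoleData().readFromFile();

            // put each Credit under its own key so the entries are
            // partitioned across the storage-enabled nodes
            for (int i = 0; i < credits.size(); i++) {
                cache.put(Integer.valueOf(i), credits.get(i));
            }
            System.out.println("cached " + cache.size() + " entries");
            CacheFactory.shutdown();
        }
    }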

  • Distributed Cache service stuck in Starting Provisioning

    Hello,
    I'm having a problem with starting/stopping the Distributed Cache service on one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled on all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts
    but one (the APP server) using the below PowerShell commands, which worked fine.
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    But later I attempted to add the service back to two hosts (WFE servers) using the below command, and unfortunately one of them got stuck in the process. When I look at Services on Server in Central Admin, the status says "Starting".
    Add-SPDistributedCacheServiceInstance
    Also, when I execute the below script, the status says "Provisioning".
    Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
    I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
    I tried the below script,
    $instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
    $serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
    $serviceInstance.Unprovision()
    $serviceInstance.Delete()
    , but it didn't work either, and I got the below error.
    "SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
    different objects and retry this operation.  The dependant objects are as follows: 
    SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
    Has anyone come across this issue? I would appreciate any help.
    Thanks!

    Hi ,
    Are you able to ping the server that is already running Distributed Cache from this server? For example:
    ping WFE01
    As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow Inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from
    the cluster which was configured to allow Inbound ICMP (ICMPv4) traffic through the firewall, you must configure the first server of the new cluster to allow Inbound ICMP (ICMPv4) traffic through the firewall.
    You can create a rule to allow the incoming port.
    For more information, you can refer to the  blog:
    http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
    Thanks,
    Eric
    Eric Tao
    TechNet Community Support

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal, the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations are taking ~45ms per bulk operation where each operation is putting 100 1K items, for cache.getAll operations we see about ~15ms per bulk operation. Additionally note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache, and the types of operations you are performing?
    thanks,
    mark
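    For reference, a rough sketch of timing one bulk putAll of 100 entries of roughly 1 KB each, similar to the operations described above (this is not the actual test harness; the cache name and sizes are placeholders):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.HashMap;
    import java.util.Map;

    public class PutAllTiming {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example-cache");

            // build one batch of 100 entries with ~1 KB values
            Map<Integer, byte[]> batch = new HashMap<Integer, byte[]>();
            for (int i = 0; i < 100; i++) {
                batch.put(Integer.valueOf(i), new byte[1024]);
            }

            long start = System.currentTimeMillis();
            cache.putAll(batch);   // one bulk operation
            System.out.println("putAll took " + (System.currentTimeMillis() - start) + " ms");

            CacheFactory.shutdown();
        }
    }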

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we encounter an abnormal behaviour. The code executes without any exception or warnings but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we try to insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put will succeed in any scenario (both key and value are visible using coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
    public static void main(String[] args) {
        Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
        con.setAutoCommit(false);
        try {
            OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
            CId tID = new CId();
            tID.setId(11111L);
            C tC = new C();
            tC.setVal(new BigDecimal("100.1"));
            cache.insert(tID, tC);
            con.commit();
        } catch (Exception e) {
            e.printStackTrace();
            con.rollback();
        } finally {
            con.close();
        }
    }
    Code that succeeds (but without transaction APIs):
    public static void main(String[] args) {
        try {
            NamedCache cache = CacheFactory.getCache("stt.t1");
            CId tID = new CId();
            tID.setId(11111L);
            C tC = new C();
            tC.setVal(new BigDecimal("100.1"));
            cache.put(tID, tC);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // (finally body omitted in the original post)
        }
    }
    And here is what we list using coherence console if we use transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestion, please?

    Cristian,
    After looking at your configuration, I noticed that it is incorrect. For a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
    </backing-map-scheme>
    To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
    </transactional-scheme>
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John

  • Error message when using a MessageListener with a distributed cache

    Hi --
    I'm getting the following error message when I attach a message listener to a distributed cache. I get the same message if I attach the listener to the NearCache in front of the DistributedCache, or to the DistributedCache itself.
    My message listener listens for a create() operation and writes the created value out to the database. Both the key and value are Java objects that are getting serialized when they're pushed into the cache. The listener is never called.
    The error spits out two messages, which look like:
    2003-04-07 21:48:05.281 Tangosol Coherence 2.1/239 <Error> (thread=DistributedCache:EventDispatcher): An exception occurred while dispatching this event:
    CacheEvent: MapEvent{com.tangosol.coherence.component.util.daemon.queueProcessor
    .service.DistributedCache$BinaryMap added: key=Binary(length=269, value=0x0005AC
    ED000573720021636F6D2E6F6C742E646174612E696E7465726E616C2E4461746162617365554944
    6FABB5383C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273
    7472616374554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A
    6176612F7574696C2F4D61703B4C0009756964537472696E677400124C6A6176612F6C616E672F53
    7472696E673B7870737200116A6176612E7574696C2E486173684D61700507DAC1C31660D1030002
    46000A6C6F6164466163746F724900097468726573686F6C6478703F400000000000087708000000
    0B0000000078740011363930395F436F6D706F6E656E74426964), value=Binary(length=1069,
    value=0x0005ACED000573720026636F6D2E6562726576696174652E61756374696F6E2E6269642
    E436F6D706F6E656E744269648EC95C4DE33A88D802000D5A0007626573744269644A000B6269645
    3657175656E6365440004636F73745A0007696E697469616C5A00066E65774269645A00067469654
    2696444000576616C75654C000B61756374696F6E4D6F64657400294C636F6D2F656272657669617
    4652F61756374696F6E2F6576656E742F41756374696F6E4D6F64653B4C000A61756374696F6E554
    9447400124C636F6D2F6F6C742F646174612F5549443B4C000A636F6D70616E7955494471007E000
    24C000C636F6D706F6E656E7455494471007E00024C000A737472696E67436F73747400124C6A617
    6612F6C616E672F537472696E673B4C000B737472696E6756616C756571007E00037872002D636F6
    D2E6562726576696174652E636F6D6D6F6E2E416273747261637450657273697374656E744F626A6
    56374497E2729A24CA5790200034C000A637265617465446174657400104C6A6176612F7574696C2
    F446174653B4C000375696471007E00024C000A7570646174654461746571007E000578707372000
    E6A6176612E7574696C2E44617465686A81014B59741903000078707708000000F46B9A286278737
    20021636F6D2E6F6C742E646174612E696E7465726E616C2E44617461626173655549446FABB5383
    C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273747261637
    4554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A6176612F7
    574696C2F4D61703B4C0009756964537472696E6771007E00037870737200116A6176612E7574696
    C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F72490009746872657
    3686F6C6478703F4000000000000877080000000B0000000078740011363930395F436F6D706F6E6
    56E744269647371007E00077708000000F46B9A286278000000000000000000402E0000000000000
    00000402E00000000000073720027636F6D2E6562726576696174652E61756374696F6E2E6576656
    E742E41756374696F6E4D6F6465BD0C9E245C328B4F02000078720029636F6D2E656272657669617
    4652E636F6D6D6F6E2E41627374726163745479706553616665456E756D506D8C41B0144DB302000
    249000A696E744C69746572616C4C000D737472696E674C69746572616C71007E000378700000000
    374000A50524F44554354494F4E7371007E00097371007E000D3F4000000000000877080000000B0
    00000007874000A38335F41756374696F6E7371007E00097371007E000D3F4000000000000877080
    000000B000000007874000A34325F436F6D70616E797371007E00097371007E000D3F40000000000
    00877080000000B00000000787400103131315F426964436F6D706F6E656E747070)}
    2003-04-07 21:48:05.687 Tangosol Coherence 2.1/239 <Warning> (thread=CoherenceLogger): Asynchronous logging character limit exceeded; discarding 3 log messages (lines=17, chars=1416)

    Kris,
    First of all, you should increase the value of the logging-config/character-limit element in tangosol-coherence.xml to see the message in its entirety. The default setting is 4096, which is not enough to see your exception text.
    When you do that I believe you will see that the actual exception is java.lang.ClassNotFoundException indicating that the node that has the listener installed doesn't know about the class that is being put into the cache and could be easily fixed as shown here: http://www.tangosol.com/faq-coherence.jsp#classnotfound
    Please let me know if that doesn't help.
    Gene

  • Distributed cache

    Hi,
    We have a server (Server 1) on which the Distributed Cache service was in the "Error Starting" state.
    While applying a service pack, due to some issue we were unable to apply the patch on Server 1, so we decided to remove the affected server from the farm and work on it. The affected server (Server 1) was removed from the farm through the Configuration Wizard.
    Even after running the Configuration Wizard we were still able to see the server (Server 1) on the SharePoint Central Administration site (Servers in Farm); when clicked, the service "Distributed Cache" was still visible with the status "Error Starting".
    We tried deleting the server from the farm and got an error message; the ULS logs displayed the below.
    A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    We are unable to perform any install/repair operation of SharePoint on the affected server (Server 1); as the server is no longer in the farm, we are unable to run any PowerShell commands.
    Questions:
    What would cause that to happen?
    Is there a way to resolve this issue? (please provide the steps)
    Satyam

    Hi
    try this:
    http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
    Hope this helps.

  • Set request timeout for distributed cache

    Hi,
    Coherence provides 3 parameters we can tune for the distributed cache:
    tangosol.coherence.distributed.request.timeout - the default client request timeout for distributed cache services
    tangosol.coherence.distributed.task.timeout - the default server execution timeout for distributed cache services
    tangosol.coherence.distributed.task.hung - the default time before a thread is reported as hung by distributed cache services
    It seems these timeout values are used for both system activities (node discovery, data re-balancing, etc.) and user activities (get, put). We would like to set the request timeout for get/put, but a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to set the timeout values separately? Or is it even possible to set a timeout on individual calls (like get(key, timeout))?
    -thanks

    Hi,
    Not necessarily for the get and put methods, but for queries, entry processors, entry aggregators and invocable agent sending, you can make the filter, aggregator, entry processor or agent you send implement PriorityTask, which allows you to make QoS expectations known to Coherence. Most or all stock aggregators and entry processors implement PriorityTask, if I remember correctly.
    For more info, look at the documentation of PriorityTask.
    Best regards,
    Robert
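    As an illustration of the PriorityTask approach described above, a minimal sketch (assuming Coherence 3.x; the processor logic and the 10 ms values are placeholders) of an entry processor that declares its own timeouts:

    import com.tangosol.net.PriorityTask;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;
    import java.io.Serializable;

    public class TimedTouchProcessor extends AbstractProcessor implements PriorityTask, Serializable {
        public Object process(InvocableMap.Entry entry) {
            // placeholder logic: simply return the current value
            return entry.getValue();
        }

        // PriorityTask methods make the QoS expectations for this task known to Coherence
        public int getSchedulingPriority()          { return PriorityTask.SCHEDULE_STANDARD; }
        public long getExecutionTimeoutMillis()     { return 10L; }  // server-side task timeout for this task only
        public long getRequestTimeoutMillis()       { return 10L; }  // client-side request timeout for this task only
        public void runCanceled(boolean fAbandoned) { /* clean up if the task is canceled */ }
    }

    It would be sent like any other entry processor, e.g. cache.invoke(key, new TimedTouchProcessor()), so the 10 ms expectation applies to that call without lowering the service-wide defaults.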
