Distributed cache performance?

Hi,
I have a question about the performance of a cluster using a distributed cache:
A distributed cache is available in the cluster, using the expiry-delay functionality. Each node first inserts new entries into the cache and then periodically updates them for as long as they are needed in the cluster (entries that are no longer periodically updated are removed by the expiry-delay).
I performed a small test using a cluster with two nodes that each inserted ~2000 entries into the distributed cache. The nodes then periodically update their entries at 5-minute intervals (using the Map.put(key, value) method). The nodes never access the same entries, so there are no synchronization issues.
The problem is that the CPU load on the machines running the nodes is very high, ~70% (and these are quite powerful machines with 4 CPUs running Linux). To find the reason for the high CPU load, I used a profiler on the application running on one of the nodes. It showed that the application spent ~70% of its time in com.tangosol.coherence.component.net.socket.UdpSocket.receive. Is this normal?
Since each node has a lot of other things to do, it is not acceptable that 70% of the CPU is used only for this purpose. Can this be a cache configuration issue, or do I have to find some other approach to perform this task?
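For reference, the update pattern described above is roughly the following (a minimal sketch; the cache name, key, and the use of a plain java.util.Timer for the refresh are hypothetical):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Timer;
    import java.util.TimerTask;

    public class StatusRefresher {
        public static void main(String[] args) {
            // hypothetical cache name, mapped to a distributed scheme whose
            // <expiry-delay> is slightly longer than the 5-minute refresh period
            final NamedCache cache = CacheFactory.getCache("status-cache");

            cache.put("node1-entry-42", "ALIVE"); // initial insert

            new Timer().schedule(new TimerTask() {
                public void run() {
                    // periodic refresh keeps the entry from expiring
                    cache.put("node1-entry-42", "ALIVE");
                }
            }, 5 * 60 * 1000L, 5 * 60 * 1000L);
        }
    }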
Regards
Andreas

Hi Andreas,
Can you provide us with some additional information? You can e-mail it to our support account.
- JProfiler snapshot of the profiling showing high CPU utilization
- multiple full thread dumps for the process, taken a few seconds apart; these should be taken when running outside of the profiler
- Your override file (tangosol-coherence-override.xml)
- Your cache configuration file (coherence-cache-config.xml)
- logs from the high-CPU event; please also include -verbose:gc in the logs, directing the output to the Coherence log file
- estimates on the sizes of the objects being updated in the cache
As this is occurring even when you are not actively adding data to the cache, can you describe what else your application is doing at that time? It would be extremely odd for Coherence to consume any noticeable amount of CPU if you are not making heavy use of the cache.
Note that when using the Map.put method the old value is returned to the caller, which for a distributed cache means extra network load. You may wish to consider switching to Map.putAll(), as it does not need to return the old value and is more efficient even if you are only operating on a single entry.
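For illustration, a minimal sketch of the difference (the cache name and key are hypothetical):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Collections;

    public class PutVsPutAll {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("status-cache"); // hypothetical name

            // put() returns the previous value, so the old value is shipped back
            // from the storage-owning node to the caller:
            Object previous = cache.put("node1-entry-42", "ALIVE");

            // putAll() does not return the old value, avoiding that extra payload,
            // even when the map holds only a single entry:
            cache.putAll(Collections.singletonMap("node1-entry-42", "ALIVE"));
        }
    }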
thanks,
Mark

Similar Messages

  • Distributed Cache : Performance issue; takes long to get data

    Hi there,
         I have set up a cluster on a Linux machine with 11 nodes (min & max heap = 1GB). The nodes are connected through a multicast address / port number. I have the Distributed Cache service running on all the nodes and 2 nodes with ExtendTCPService. I loaded a dataset of 13 million entries into the cache (approximately 5GB), where the key is a String and the value is an Integer.
         I run a java process from another Linux machine on the same network that makes use of this cache. The process fetches around 200,000 items from the cache, and it takes around 180 seconds just to fetch the data from the cache.
         I had a look at Performance Tuning > Coherence Network Tuning and checked the Publisher and Receiver success rates; both were nearly 0.998 on all the nodes.
         It is a bit hard to believe that it takes so long. Maybe I'm missing something. I would appreciate it if you could advise me on this.
         More info :
              a) All nodes are running on Java 5 update 7
              b) The java process is running on JDK1.4 Update 8
              c) -server option is enabled on all the nodes and the java process
              d) I'm using Tangosol Coherence 3.2.2b371
              e) cache-config.xml
                        <?xml version="1.0"?>
                        <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
                        <cache-config>
                          <caching-scheme-mapping>
                            <cache-mapping>
                              <cache-name>dist-*</cache-name>
                              <scheme-name>dist-default</scheme-name>
                            </cache-mapping>
                          </caching-scheme-mapping>
                          <caching-schemes>
                            <distributed-scheme>
                              <scheme-name>dist-default</scheme-name>
                              <backing-map-scheme>
                                <local-scheme/>
                              </backing-map-scheme>
                              <lease-granularity>member</lease-granularity>
                              <autostart>true</autostart>
                            </distributed-scheme>
                          </caching-schemes>
                        </cache-config>
         Thanks,
         Amit Chhajed

    Hi Amit,
         Is the java test process single-threaded, i.e. did you perform 200,000 consecutive cache.get() operations? If so, this would go a long way towards explaining the results, as most of the time in all processes would be spent waiting on the network, and your results come out to just over 1 ms per operation. Please be sure to run with multiple test threads, and also make use of the cache.getAll() call where possible so that a single thread fetches multiple items in parallel.
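         As a rough illustration of the bulk pattern (a minimal sketch; the cache name and keys are hypothetical):
              import com.tangosol.net.CacheFactory;
              import com.tangosol.net.NamedCache;
              import java.util.ArrayList;
              import java.util.List;
              import java.util.Map;

              public class BulkFetch {
                  public static void main(String[] args) {
                      NamedCache cache = CacheFactory.getCache("dist-test"); // hypothetical name

                      // Serial pattern: one network round trip per key, ~1 ms each,
                      // which adds up to minutes over 200,000 keys:
                      //     for each key: cache.get(key);

                      // Bulk pattern: fetch a batch of keys with a single getAll() call,
                      // so the requests to the storage nodes are issued in parallel.
                      List batch = new ArrayList();
                      for (int i = 0; i < 1000; i++) {
                          batch.add("key-" + i);
                      }
                      Map results = cache.getAll(batch);
                      System.out.println("fetched " + results.size() + " entries");
                  }
              }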
         Also, you may need to do some tuning on your cache server side. In general I would say that on a 1GB heap you should only use roughly 750 MB of that space for cache storage. Taking backups into consideration, this means 375 MB of primary data per JVM. So with 11 nodes, this gives a cache capacity of about 4GB. At 5GB of data each cache server will be running quite low on free memory, resulting in frequent GCs, which will hurt performance. Based on my calculations you should use 14 cache servers to hold your 5GB of data. Be sure to run with -verbose:gc to monitor your GC activity.
         You must also watch your machine to make sure that your cache servers aren't getting swapped out. This means that your server machine needs enough RAM to keep all the cache servers in memory. Using "top" you will see that a 1GB JVM actually takes about 1.2 GB of RAM, so for 14 JVMs you would need ~17GB of RAM. Obviously you need to leave some RAM for the OS and other standard processes as well, so I would say this box would need around 18GB of RAM. You can use "top" and "vmstat" to verify that you are not making active use of swap space. If you don't have enough RAM, the easiest thing to do would be to split your cache servers across two machines.
         See http://wiki.tangosol.com/display/COH32UG/Evaluating+Performance+and+Scalability for more information on things to consider when performance testing Coherence.
         thanks,
         Mark

  • Foundation 2013 Farm and Distributed Cache settings

    We are on a 3-tier farm - 1 WFE + 1 APP + 1 SQL - and have had many issues with AppFabric and Distributed Cache, plus an additional issue with noderunner/Search Services. Memory and CPU are running very high. I read that we shouldn't be running Search and Distributed Cache on the same server, nor using a WFE as a cache host. I don't have the budget to add another server to my environment.
    I found an article (IderaWP_CachingFormSharePointPerformance.pdf) saying "To make use of SharePoint's caching capabilities requires a Server version of the platform." because it requires the publishing feature, which Foundation doesn't have.
    So, I removed Distributed Cache (using PowerShell) from my deployment and disabled AppFabric. This resolved 90% of the server errors, but performance didn't improve. Now not only am I getting errors on Central Admin - it expects Distributed Cache - but I'm also seeing disk read operations of 4000 ms.
    Questions:
    1) Should I enable AppFab and disable cache?
    2) Does Foundation support Dist Cache?  Do I need to run Distributed Cache?
    3) If so, can I run with just 1 cache host? If I shouldn't run it on a WFE or an App server with Search, do I have to stop Search altogether? What happens with 2-tier farms out there?
    4) Reading through the labyrinth of links on TechNet and MSDN on the subject, most of them say "Applies to SharePoint Server".
    5) Anyone out there on a Foundation 2013 production environment that could share your experience?
    Thanks in advance for any help with this!
    Monica

    That article is referring to BlobCache, not Distributed Cache. BlobCache requires Publishing, hence Server, but DistributedCache is required on all SharePoint 2013 farms, regardless of edition.
    I would leave your DistCache on the WFE, given the App Server likely runs Search. Make sure you install AppFabric CU5 and make sure you make the changes as noted in the KB for AppFabric CU3.
    You'll need to separately investigate your disk performance issues. It could be poor disk layout, under-spec'ed disks, and so on. Details on the disks that support SharePoint would be valuable (type, kind, RPM if applicable, LUNs in place, etc.).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal, the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations are taking ~45ms per bulk operation where each operation is putting 100 1K items, for cache.getAll operations we see about ~15ms per bulk operation. Additionally note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache and the types of operations you are performing?
    thanks,
    mark

  • Distributed cache

    HI,
    We have a server (Server 1) on which the status of the Distributed Cache service was in the "Error Starting" state.
    While applying a service pack, due to some issue we were unable to apply the patch on Server 1, so we decided to remove the affected server from the farm and work on it. The affected server (Server 1) was removed from the farm through the configuration wizard.
    Even after running the configuration wizard we were still able to see the server (Server 1) on the SharePoint Central Admin site (Servers in Farm); when clicked, the service "Distributed Cache" was still visible with a status of "Error Starting".
    We tried deleting the server from the farm and got an error message; the ULS logs displayed the following:
    A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    We are unable to perform any operation (install/repair of SharePoint) on the affected server (Server 1); as the server is no longer in the farm, we are unable to run any PowerShell commands.
    Questions:-
    What would cause that to happen?
    Is there a way to resolve this issue? (please provide the steps)
    Satyam

    Hi
    try this:
    http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
    Hope this helps. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Error handling for distributed cache synchronization

    Hello,
    Can somebody explain to me how the error handling works for distributed cache synchronization?
    Say I have four nodes of a WebLogic cluster and 4 different sessions on each one of those nodes.
    On Node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, and D. But for some reason the connection between node A and node B is lost.
    In the following xml
    <cache-synchronization-manager>
    <clustering-service>...</clustering-service>
    <should-remove-connection-on-error>true</should-remove-connection-on-error>
    If I set this to true, does this mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent, and that in order to handle any errors I do not have to write any code to capture this kind of error.
    Is that correct?
    Aswin.

    This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
    For RMI, when this is set to true (which is the default), if a communication exception occurs while sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster, reconnect to this server, and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
    You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log.
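    If you do want to be notified of such errors, a handler might look roughly like this (a minimal sketch, assuming the oracle.toplink.exceptions.ExceptionHandler interface and the Session.setExceptionHandler call described above; the class name and logging are hypothetical):
         import oracle.toplink.exceptions.ExceptionHandler;

         public class SyncErrorHandler implements ExceptionHandler {
             public Object handleException(RuntimeException exception) {
                 // React to (or just record) cache synchronization failures here,
                 // then rethrow if you do not want to swallow the error.
                 System.err.println("Cache synchronization error: " + exception);
                 throw exception;
             }
         }

         // Registration, e.g. at application startup:
         //     session.setExceptionHandler(new SyncErrorHandler());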

  • Using Tangosol Coherence in conjunction with Kodo JDO for distributing caching

    JDO currently has a perception problem in terms of performance. Transparent
    persistence is perceived to have a significant performance overhead compared
    to hand-coded JDBC. That was certainly true a while ago, when the first JDO
    implementations were evaluated. They typically performed about half as well
    and with higher resource requirements. No doubt JDO vendors have closed that
    gap by caching PreparedStatements, queries, data, and by using other
    optimizations.
    Aside from the ease of programming through transparent persistence, I
    believe that using JDO in conjunction with distributed caching techniques in
    a J2EE managed environment has the opportunity to transparently give
    scalability, performance, and availability improvements that would otherwise
    be much more difficult to realize through other persistence techniques.
    In particular, it looks like Tangosol is doing a lot of good work in the
    area of distributed caching for J2EE. For example, executing parallelized
    searches in a cluster is a capability that is pretty unique and potentially
    very valuable to many applications. There would appear to me to be a lot of
    synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
    implementation of Kodo JDO's distributed cache would be a natural desire for
    enterprise applications that have J2EE clustering requirements for high
    scalability, performance, and availability.
    I'm wondering if Solarmetric has any ideas or plans for closer integration
    (e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is just my
    personal opinion, but I think a partnership between your two organizations
    to do this integration would be mutually advantageous, and it would
    potentially be very attractive to your customers.
    Ben

    Marc,
    Thanks for pointing that out. That is truly excellent!
    Ben
    "Marc Prud'hommeaux" <[email protected]> wrote in message
    news:[email protected]...
    Ben-
    We do currently have a plug-in for backing our data cache with a
    Tangosol cache.
    See: http://docs.solarmetric.com/manual.html#datastore_cache_config
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Distributed Cache gets are very time consuming!!!!!

    We have an application that uses Coherence as a distributed partitioned cache server.
    Our problem is that access to the near cache is very fast, but access to the remote servers is very CPU-consuming (maybe because of the deserialization of the retrieved objects), to the point that disabling the cache and reading the objects directly from the DBMS (Oracle) improves performance.
    We use Coherence as the second-level cache of Hibernate, so we do not have much control over how Hibernate serializes and deserializes its objects (we cannot implement custom externalization).
    Do you have any experience of performance problems with the Hibernate/Tangosol combination and distributed caches?

    Hi Giancarlo,
    It's Jon, not Cameron ;)
    The Hibernate cache API (including 3.0) does not support bulk-fetches from cache, which means that these cache fetches are serial rather than parallel. Depending on how much of your performance problem is network related, Coherence may actually perform much better than straight JDBC as the application load increases (time per request will remain similar, but won't increase much under heavier load).
    The advantage to using the Coherence API (rather than the Hibernate API) is that you have far better control over how your queries run. The drawback of course is that you have to code your own queries if they become more complex than search queries. As a result, it often makes sense to use Hibernate (or equivalent) for the bulk of your queries, then code specific queries directly against Coherence when performance really matters. A simple example of this is the ability to perform bulk gets (using NamedCache.getAll()). With queries that return hundreds or thousands of items, this can have a huge impact on performance (an order of magnitude or more depending on object complexity). You also get the ability to use XmlBean/ExternalizableLite and cluster-parallel search queries.
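    As a rough sketch of dropping down to the Coherence API for the performance-critical paths (the cache name, keys, and the getCustomerId accessor are hypothetical):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import com.tangosol.util.filter.EqualsFilter;
         import java.util.Arrays;
         import java.util.Map;
         import java.util.Set;

         public class DirectCoherenceAccess {
             public static void main(String[] args) {
                 NamedCache orders = CacheFactory.getCache("orders"); // hypothetical name

                 // Bulk get: one parallel fetch instead of hundreds of serial lookups.
                 Map hot = orders.getAll(Arrays.asList(new Object[] {"o-1", "o-2", "o-3"}));

                 // Cluster-parallel query: the filter is evaluated on each storage
                 // node in parallel (assumes the cached values expose getCustomerId()).
                 Set matches = orders.entrySet(new EqualsFilter("getCustomerId", "c-42"));

                 System.out.println(hot.size() + " cached orders, " + matches.size() + " matches");
             }
         }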
    As an aside, with the Coherence cache servers, you should avoid making your JVM heaps excessively large. As a general rule of thumb, we recommend a 512MB heap size for cache servers, using as many instances as required to reach your desired cache capacity. Also, from what I've seen, 64-bit JVMs are less efficient with memory usage than 32-bit (though performance seems a bit better on the x86-64 platforms vs the 32-bit versions).
    Jon Purdy
    Tangosol, Inc.

  • Locking in replicated versus distributed caches

    Hello,
    In the User Guide for Coherence 2.5.0, in section 2.3 Cluster Services Overview it says
    "the replicated cache service supports pessimistic locking", yet in section 2.4 Replicated Cache Service it says
    "if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock". I am trying to decide whether to use a replicated cache or a distributed cache, either of which will be small, where I want the objects to be locked across the whole cluster.
    If not all of the cluster nodes have to agree on a lock in a replicated cluster, doesn't this mean that a replicated cluster does not support pessimistic locking?
    Could you please explain this?
    Thanks,
    Rohan

    Hi Rohan,
    The Replicated cache supports pessimistic locking. The User Guide is discussing the implementation details and how they relate to performance. The Replicated and Distributed cache services differ in performance and scalability characteristics, but both support cluster-wide coherence and locking.
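    For example, explicit cluster-wide locking looks the same against either service (a minimal sketch; the cache name and key are hypothetical):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;

         public class LockingExample {
             public static void main(String[] args) {
                 NamedCache cache = CacheFactory.getCache("repl-config"); // hypothetical name

                 // Wait up to 5 seconds for the cluster-wide lock on this key.
                 if (cache.lock("config-key", 5000L)) {
                     try {
                         cache.put("config-key", "new-value");
                     } finally {
                         cache.unlock("config-key");
                     }
                 }
             }
         }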
    Jon Purdy
    Tangosol, Inc.

  • Data-node affinity in a distributed cache

    My apologies if this is addressed elsewhere...
    I have a few questions regarding the association of a cached object to a cluster node in a distributed cache:
    1. What factor(s) determine which node is the primary cluster node for a given object in a distributed cache?
    2. Similarly, assuming that at least one backup node is configured, what determines which node will be the backup node for a given object?
    Thanks.

    Hi,
    There is not yet the ability to specify node ownership (through the DistributedCacheService). The basic issue is that a significant chunk of our technology is involved in managing node ownership without introducing non-scalable state or data vulnerability. Allowing users to control this would shift that responsibility onto application code. This is a very difficult task to manage in a manner that is scalable, performant and fault-tolerant (any two of those are fairly easy to accomplish).
    In practice, this is not much of an issue as we have patterns to work around this (including data-driven load balancing and our near-cache technology) without impacting any of those three requirements.
    I believe there are plans to add this ability to a public Coherence API in a future release, but this would be (as discussed above) a very advanced feature.
    Jon Purdy
    Tangosol, Inc.

  • Does SharePoint 2013 Online have Distributed Cache?

    Does SharePoint 2013 Online have Distributed Cache?

    Yes, it is. But what do you want to know about Online & DC? Are you planning any solution for it?
    Not available to SharePoint Online customers. SharePoint Server 2013 customers can use the Distributed Cache service to cache feature functionality, which improves authentication, newsfeed, OneNote client access, security trimming, and page load performance.
    check this:
    http://technet.microsoft.com/en-us/library/sharepoint-online-it-professional-service-description.aspx
    Please remember to mark your question as answered & vote helpful if this solves/helps your problem. Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Managing the Distributed Cache

    In MS documentation I often see this (or something similar):
    "The Distributed Cache service can end up in a nonfunctioning or unrecoverable state if you do not follow the procedures that are listed in this article. In extreme scenarios, you might have to rebuild the server farm. The Distributed Cache depends on Windows Server AppFabric as a prerequisite. Do not administer the AppFabric Caching Service from the Services window in Administrative Tools in Control Panel. Do not use the applications in the folder named AppFabric for Windows Server on the Start menu."
    In many blogs, including TechNet, I always see this command used:
    Restart-Service -Name AppFabricCachingService
    I often see this when updating timeout settings.
    Are these considered the same thing?
    Here is an example - how would you perform these steps?
    Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC.requestTimeout = "3000"
    $DLTC.channelOpenTimeOut = "3000"
    $DLTC.MaxConnectionsToServer = "100"
    Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache $DLTC
    Restart-Service -Name AppFabricCachingService

    I haven't seen a clear statement about disabling the DC. It provides many essential caches for which there are otherwise no replacements. Using the restart cmdlet isn't likely to cause you to need to rebuild your farm; Microsoft just doesn't want you touching the Distributed Cache outside of SharePoint, basically.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Distributed Cache service stuck in Starting Provisioning

    Hello,
    I'm having a problem with starting/stopping the Distributed Cache service on one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled on all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts but one (the APP server) using the PowerShell commands below, which worked fine.
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    But later I attempted to add the service back to two hosts (the WFE servers) using the command below, and unfortunately one of them got stuck in the process. When I look at Services on Server from Central Admin, the status says "Starting".
    Add-SPDistributedCacheServiceInstance
    Also, when I execute the script below, the status says "Provisioning".
    Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
    I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
    I tried the script below,
    $instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
    $serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
    $serviceInstance.Unprovision()
    $serviceInstance.Delete()
    but it didn't work either, and I got the error below.
    "SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
    different objects and retry this operation.  The dependant objects are as follows: 
    SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
    Has anyone come across this issue? I would appreciate any help.
    Thanks!

    Hi,
    Are you able to ping the server that is already running Distributed Cache from this server? For example:
    ping WFE01
    As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from the cluster which was configured to allow inbound ICMP (ICMPv4) traffic through the firewall, you must configure the first server of the new cluster to allow inbound ICMP (ICMPv4) traffic through the firewall.
    You can create a rule to allow the incoming port.
    For more information, you can refer to the  blog:
    http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
    Eric Tao
    TechNet Community Support

  • How can I configure Distributed Cache servers and front-end servers for a Streamlined topology in SharePoint 2013?

    My question is regarding SharePoint 2013 farm topology. If I want to go with a Streamlined topology, with 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in Advanced!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except for the Foundation Web service, for ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Error message when using a MessageListener using a distributed cache

    Hi --
    I'm getting the following error message when I attach a message listener to a distributed cache. I get the same message if I attach the listener to the NearCache in front of the DistributedCache, or to the DistributedCache itself.
    My message listener listens for a create() operation and writes the created value out to the database. Both the key and value are Java objects that are getting "serialized" when they're pushed into the cache. The listener is never called.
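    For context, attaching such a listener looks roughly like this (a minimal sketch; the cache name and the database write are hypothetical):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import com.tangosol.util.MapEvent;
         import com.tangosol.util.MapListener;

         public class CreateListener implements MapListener {
             public void entryInserted(MapEvent event) {
                 // Both calls require the key/value classes on this node's classpath.
                 Object key = event.getKey();
                 Object value = event.getNewValue();
                 // ... write the newly created value out to the database ...
             }
             public void entryUpdated(MapEvent event) { }
             public void entryDeleted(MapEvent event) { }

             public static void main(String[] args) {
                 NamedCache cache = CacheFactory.getCache("dist-bids"); // hypothetical name
                 cache.addMapListener(new CreateListener());
             }
         }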
    The error spits out two messages, which look like:
    2003-04-07 21:48:05.281 Tangosol Coherence 2.1/239 <Error> (thread=DistributedCache:EventDispatcher): An exception occurred while dispatching this event:
    CacheEvent: MapEvent{com.tangosol.coherence.component.util.daemon.queueProcessor
    .service.DistributedCache$BinaryMap added: key=Binary(length=269, value=0x0005AC
    ED000573720021636F6D2E6F6C742E646174612E696E7465726E616C2E4461746162617365554944
    6FABB5383C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273
    7472616374554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A
    6176612F7574696C2F4D61703B4C0009756964537472696E677400124C6A6176612F6C616E672F53
    7472696E673B7870737200116A6176612E7574696C2E486173684D61700507DAC1C31660D1030002
    46000A6C6F6164466163746F724900097468726573686F6C6478703F400000000000087708000000
    0B0000000078740011363930395F436F6D706F6E656E74426964), value=Binary(length=1069,
    value=0x0005ACED000573720026636F6D2E6562726576696174652E61756374696F6E2E6269642
    E436F6D706F6E656E744269648EC95C4DE33A88D802000D5A0007626573744269644A000B6269645
    3657175656E6365440004636F73745A0007696E697469616C5A00066E65774269645A00067469654
    2696444000576616C75654C000B61756374696F6E4D6F64657400294C636F6D2F656272657669617
    4652F61756374696F6E2F6576656E742F41756374696F6E4D6F64653B4C000A61756374696F6E554
    9447400124C636F6D2F6F6C742F646174612F5549443B4C000A636F6D70616E7955494471007E000
    24C000C636F6D706F6E656E7455494471007E00024C000A737472696E67436F73747400124C6A617
    6612F6C616E672F537472696E673B4C000B737472696E6756616C756571007E00037872002D636F6
    D2E6562726576696174652E636F6D6D6F6E2E416273747261637450657273697374656E744F626A6
    56374497E2729A24CA5790200034C000A637265617465446174657400104C6A6176612F7574696C2
    F446174653B4C000375696471007E00024C000A7570646174654461746571007E000578707372000
    E6A6176612E7574696C2E44617465686A81014B59741903000078707708000000F46B9A286278737
    20021636F6D2E6F6C742E646174612E696E7465726E616C2E44617461626173655549446FABB5383
    C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273747261637
    4554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A6176612F7
    574696C2F4D61703B4C0009756964537472696E6771007E00037870737200116A6176612E7574696
    C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F72490009746872657
    3686F6C6478703F4000000000000877080000000B0000000078740011363930395F436F6D706F6E6
    56E744269647371007E00077708000000F46B9A286278000000000000000000402E0000000000000
    00000402E00000000000073720027636F6D2E6562726576696174652E61756374696F6E2E6576656
    E742E41756374696F6E4D6F6465BD0C9E245C328B4F02000078720029636F6D2E656272657669617
    4652E636F6D6D6F6E2E41627374726163745479706553616665456E756D506D8C41B0144DB302000
    249000A696E744C69746572616C4C000D737472696E674C69746572616C71007E000378700000000
    374000A50524F44554354494F4E7371007E00097371007E000D3F4000000000000877080000000B0
    00000007874000A38335F41756374696F6E7371007E00097371007E000D3F4000000000000877080
    000000B000000007874000A34325F436F6D70616E797371007E00097371007E000D3F40000000000
    00877080000000B00000000787400103131315F426964436F6D706F6E656E747070)}
    2003-04-07 21:48:05.687 Tangosol Coherence 2.1/239 <Warning> (thread=CoherenceLogger): Asynchronous logging character limit exceeded; discarding 3 log messages (lines=17, chars=1416)

    Kris,
    First of all, you should increase the value of the logging-config/character-limit element in tangosol-coherence.xml to see the message in its entirety. The default setting is 4096, which is not enough to show your exception text.
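    For example, the override might look something like this (a sketch, assuming the standard tangosol-coherence.xml layout with logging-config under the root coherence element; the 65536 value is an arbitrary larger limit):
         <coherence>
           <logging-config>
             <!-- default is 4096; raise it so long exception messages are not truncated -->
             <character-limit>65536</character-limit>
           </logging-config>
         </coherence>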
    When you do that, I believe you will see that the actual exception is a java.lang.ClassNotFoundException, indicating that the node that has the listener installed doesn't know about the class that is being put into the cache; this can be easily fixed as shown here: http://www.tangosol.com/faq-coherence.jsp#classnotfound
    Please let me know if that doesn't help.
    Gene
