Distributed Cache service stuck in Starting Provisioning

Hello,
I'm having a problem starting/stopping the Distributed Cache service on one of the servers in a SharePoint 2013 farm. Initially, Distributed Cache was enabled on all farm servers by default and ran as a cluster. I wanted to remove it from all hosts but one (the APP server) using the PowerShell commands below, which worked fine.
Stop-SPDistributedCacheServiceInstance -Graceful
Remove-SPDistributedCacheServiceInstance
But later I attempted to add the service back to two hosts (the WFE servers) using the command below, and unfortunately one of them got stuck in the process. When I look at Services on Server in Central Administration, the status says "Starting".
Add-SPDistributedCacheServiceInstance
Also, when I execute the script below, the status says "Provisioning".
Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
I get a "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
I tried the script below,
$instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
$serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
$serviceInstance.Unprovision()
$serviceInstance.Delete()
but it didn't work either; I got the error below.
"SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
different objects and retry this operation.  The dependant objects are as follows: 
SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
Has anyone come across this issue? I would appreciate any help.
Thanks!

Hi,
Are you able to ping the server that is already running Distributed Cache from this server? For example:
ping WFE01
Since you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host (the one configured to allow inbound ICMP traffic) from the cluster, you must configure the first server of the new cluster to allow inbound ICMP (ICMPv4) traffic through the firewall.
You can create a rule to allow the incoming port.
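On Windows Server 2012 and later, such a rule can be created from an elevated PowerShell prompt. A minimal sketch (the rule name here is arbitrary):

```powershell
# Allow inbound ICMPv4 echo requests (ping) through the Windows firewall.
# Run in an elevated PowerShell prompt on the first cache host.
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo (Distributed Cache)" `
    -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow
```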
For more information, you can refer to this blog:
http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
Thanks,
Eric
Forum Support
Please remember to mark the replies as answers
if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
Eric Tao
TechNet Community Support

Similar Messages

  • Error starting Oracle BAM active data cache service

    Hi,
    After installing BAM everything was working fine, but whenever I restart my system the Oracle BAM Active Data Cache service throws the following error:
    "The Oracle BAM Active Data Cache service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service."
    The database is running fine.
    The following is the error from the ADC log file:
    2007-12-07 17:19:29,640 [2928] ERROR - ActiveDataCache The Oracle BAM Active Data Cache service failed to start. Oracle.BAM.ActiveDataCache.Common.Exceptions.CacheException: ADC Server exception in Startup(). ---> System.DllNotFoundException: Unable to load DLL (OraOps10.dll).
    at Oracle.DataAccess.Client.OpsTrace.GetRegTraceInfo(UInt32& TrcLevel, UInt32& StmtCacheSize)
    at Oracle.DataAccess.Client.OraTrace.GetRegistryTraceInfo()
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleDataFactory.GetConnection()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.GetServerVersion()
    at Oracle.BAM.ActiveDataCache.Kernel.StorageEngine.Oracle.OracleStorageEngine.Startup(IDictionary oParameters)
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    --- End of inner exception stack trace ---
    at Oracle.BAM.ActiveDataCache.Kernel.Server.DataStoreServer.Startup()
    at Oracle.BAM.ActiveDataCache.Kernel.Server.Server.Startup()
    at Oracle.BAM.ActiveDataCache.Service.DataServer.Run()
    2007-12-07 17:24:45,250 [1524] ERROR - ActiveDataCache Unable to load DLL (OraOps10.dll).
    2007-12-07 17:24:45,265 [1524] WARN - ActiveDataCache Exception occurred in method Startup
    Please help me resolve this issue. I am getting it every time.
    Thanks
    BS

    Make sure the path to the ODAC used by BAM (C:\OracleBAM\ClientForBAM\bin) is the first item in the system PATH
    environment variable. Restart your computer after fixing this.
    If that doesn't fix it, please check the Troubleshooting section in the BAM Install Guide.
    Regards, Stephen

  • Service Bus Message Broker service will NOT start

    I've done quite a bit of searching on this issue and have found a reasonable amount of potential fixes, all of which I tried and none of which have worked.
    The service is simply stuck in the "Starting" status. The only way to stop it is to kill the process in Task Manager.
    I have tried reinstalling Workflow Manager and the Service Bus and starting from scratch. All of the databases get created successfully (on the same server as the rest of the SharePoint dbs) and the firewall is disabled via group policy.
    Does anyone have a consistent fix for this?
    Thank you

    Hi,
    Please follow the steps below for troubleshooting:
    1. Check services in SharePoint:
    In Central Administration, verify that the Distributed Cache service is started.
    In Run -> services.msc, verify that the AppFabric Caching Service is running.
    2. If the issue persists, run the following in the SharePoint 2013 Management Shell.
    Check the DistributedLogonTokenCache settings:
    $settings = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $settings
    Run these commands to change the settings:
    $settings.ChannelOpenTimeOut = 10000
    $settings.RequestTimeout = 10000
    $settings.MaxBufferSize = 33554432
    Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache -DistributedCacheClientSettings $settings
    Check the DistributedViewStateCache settings:
    $settingsvs = Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache
    $settingsvs
    Set the DistributedViewStateCache settings:
    $settingsvs.ChannelOpenTimeOut = 10000
    $settingsvs.RequestTimeout = 10000
    $settingsvs.MaxBufferSize = 33554432
    Set-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache -DistributedCacheClientSettings $settingsvs
    3. Restart the Distributed Cache and AppFabric Caching services and check whether the issue persists.
    4. If the issue persists, please collect the output of the following commands for further research:
    Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Distributed cache

    Hi,
    We have a server (Server 1) on which the Distributed Cache service was in an "Error Starting" state.
    While applying a service pack, due to some issue we were unable to apply the patch on Server 1, so we decided to remove the affected server from the farm and work on it. The affected server (Server 1) was removed from the farm through the Configuration Wizard.
    Even after running the Configuration Wizard, we were still able to see Server 1 on the SharePoint Central Administration site (Servers in Farm); when clicked, the Distributed Cache service was still visible with a status of "Error Starting".
    We tried deleting the server from the farm and got an error message; the ULS logs displayed the following.
    A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    We are unable to perform any install/repair operation of SharePoint on the affected server (Server 1), and since the server is no longer in the farm, we are unable to run any PowerShell commands against it.
    Questions:-
    What would cause that to happen?
    Is there a way to resolve this issue? (please provide the steps)
    Satyam

    Hi,
    Try this:
    http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
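    If the service instance itself is stuck, a commonly used follow-up (a sketch built from the cmdlets used elsewhere in this thread; verify in a test farm first) is to delete the orphaned instance on the affected server and re-provision it:

    ```powershell
    # Find the orphaned Distributed Cache service instance on this server
    $instance = Get-SPServiceInstance | ? {
        $_.Service.ToString() -eq "SPDistributedCacheService Name=AppFabricCachingService" -and
        $_.Server.Name -eq $env:COMPUTERNAME
    }
    # Delete it, then re-provision the Distributed Cache on this host
    $instance.Delete()
    Add-SPDistributedCacheServiceInstance
    ```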
    Hope this helps. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Distributed cache doesn't work. Please help.

    Hi,
    I am trying to use a distributed cache by:
    1. Using Coherence as the second-level cache for Hibernate in the application server (WebLogic 9), with the following configuration:
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>DistributedInMemoryCache</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>DistributedInMemoryCache</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <high-units>{size-limit 0}</high-units>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>
    2. Starting a standalone JVM locally to form a two-node cluster with the Coherence node above, using the same cache-config.xml shown above. I use the following command to start this cache:
    %JAVA_HOME%/bin/java -server -showversion -jar %LIB_HOME%\coherence.jar
    I enabled JMX on both JVMs. On the Coherence node inside WebLogic, JConsole shows a bunch of caches (I use the @Cache(usage = CacheConcurrencyStrategy.READ_ONLY) annotation in my code to enable caching for an entity), and a few cache services, including DistributedCache, are started. However, on the standalone JVM I don't see any caches or the DistributedCache service (the only service visible in JConsole is Management).
    My goal is to use the standalone jvm as a cache server that holds all the caches while the coherence in weblogic as the client which has no local storage but a near cache.
    However, I could not even get the distributed cache to work. Please help.
    Thanks

    Hi,
    To start the cache server, you need to use a command like this:
    %JAVA_HOME%/bin/java -server -Dtangosol.coherence.cacheconfig=/path/to/cache_configuration_descriptor -cp %LIB_HOME%\coherence.jar com.tangosol.net.DefaultCacheServer
    Regards,
    Dimitri

  • Managing the Distributed Cache

    In the MS documentation I often see this (or something similar):
    "The Distributed Cache service can end up in a nonfunctioning or unrecoverable state if you do not follow the procedures that are listed in this article. In extreme scenarios, you might have to rebuild the server farm. The Distributed Cache depends on Windows Server AppFabric as a prerequisite. Do not administer the AppFabric Caching Service from the Services window in Administrative Tools in Control Panel. Do not use the applications in the folder named AppFabric for Windows Server on the Start menu."
    In many blogs, including TechNet, I always see this command used, often when updating timeout settings:
    Restart-Service -Name AppFabricCachingService
    Are these considered the same thing? Taking the example below, how would you perform these steps?
    Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
    $DLTC.requestTimeout = "3000"
    $DLTC.channelOpenTimeOut = "3000"
    $DLTC.MaxConnectionsToServer = "100"
    Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache $DLTC
    Restart-Service -Name AppFabricCachingService

    I haven't seen a clear statement about disabling the Distributed Cache. It provides many essential caches for which there are otherwise no replacements. Using the restart cmdlet isn't likely to cause you to need to rebuild your farm; Microsoft just doesn't want you touching the Distributed Cache outside of SharePoint, basically.
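    Whichever way you restart it, you can check the cache cluster state afterwards with the AppFabric cmdlets, which load in the SharePoint 2013 Management Shell (a sketch):

    ```powershell
    # Connect to the cache cluster this server belongs to, then list the
    # hosts; each host should report a Service Status of UP
    Use-CacheCluster
    Get-CacheHost
    ```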
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • How can I configure Distributed Cache servers and front-end servers for a streamlined topology in SharePoint 2013?

    My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology with 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all the Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in advance!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except for the Foundation Web Application service, for ease of solution management) are enabled on the DC servers, and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Set request timeout for distributed cache

    Hi,
    Coherence provides three parameters we can tune for the distributed cache:
    tangosol.coherence.distributed.request.timeout - the default client request timeout for distributed cache services
    tangosol.coherence.distributed.task.timeout - the default server execution timeout for distributed cache services
    tangosol.coherence.distributed.task.hung - the default time before a thread is reported as hung by distributed cache services
    It seems these timeout values are used both for system activities (node discovery, data re-balancing, etc.) and for user activities (get, put). We would like to set the request timeout for get/put, but a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to set the timeout values separately? Or is it even possible to set a timeout on individual calls (like get(key, timeout))?
    -thanks

    Hi,
    Not necessarily for get and put methods, but for queries, entry processors, entry aggregators, and invocable agent sending, you can make the sent filter, aggregator, entry processor, or agent implement PriorityTask, which allows you to make QoS expectations known to Coherence. Most or all stock aggregators and entry processors implement PriorityTask, if I remember correctly.
    For more info, look at the documentation of PriorityTask.
    Best regards,
    Robert

  • Distributed Cache Errors in WFE

    Unexpected error occurred in method 'Put', usage 'Distributed Logon Token Cache' - Exception 'Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.TimeoutException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:02:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.IO.IOException: The read operation failed, see inner exception. ---> System.TimeoutException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:02:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.Sockets.SocketException: The I/O operation has been aborted because of either a thread exit or an application request
       at System.ServiceModel.Channels.SocketConnection.HandleReceiveAsyncCompleted()
       at System.ServiceModel.Channels.SocketConnection.OnReceiveAsync(Object sender, SocketAsyncEventArgs eventArgs)
       --- End of inner exception stack trace ---
       at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at System.ServiceModel.Channels.ConnectionStream.ReadAsyncResult.End(IAsyncResult result)
       at System.Net.FixedSizeReader.ReadCallback(IAsyncResult transportResult)
       --- End of inner exception stack trace ---
       at System.Net.Security.NegotiateStream.EndRead(IAsyncResult asyncResult)
       at System.ServiceModel.Channels.StreamConnection.EndRead()
       --- End of inner exception stack trace ---
       at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at System.ServiceModel.Channels.TransportDuplexSessionChannel.EndReceive(IAsyncResult result)
       at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result)
       --- End of inner exception stack trace ---
       at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody, RequestBody reqBody)
       at Microsoft.ApplicationServer.Caching.DataCache.InternalPut(String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[] tags, String region, IMonitoringListener listener)
       at Microsoft.ApplicationServer.Caching.DataCache.<>c__DisplayClass25.<Put>b__24()
       at Microsoft.ApplicationServer.Caching.DataCache.Put(String key, Object value, TimeSpan timeout)
       at Microsoft.SharePoint.DistributedCaching.SPDistributedCache.Put(String key, Object value)'.

    Hi Amol,
    Check these links; they contain similar issues and resolutions:
    https://habaneroconsulting.com/insights/sharepoint-2013-distributed-cache-bug#.VJAjsyuUdP0
    http://blogs.msdn.com/b/sambetts/archive/2014/05/28/troubleshooting-appfabric-timeouts-on-sharepoint.aspx
    http://sharepointchips.com/distributed-cache-errors-in-the-uls-log/
    https://www.dmcinfo.com/latest-thinking/blog/id/8657/fix-sharepoint-2013-distributed-cache-timeouts
    https://social.technet.microsoft.com/Forums/sharepoint/en-US/2fad2277-8f5f-4323-8c18-621ae4bfe11a/refresh-the-sp2013-distributed-cache-services-logon-token-cache?forum=sharepointgeneral
    Kind Regards,
    John Naguib
    Technical Consultant/Architect
    MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation
    Please remember to mark your question as answered if this solves your problem

  • Distributed cache and Windows AppFabric

    I've got some issues with the Distributed Cache.
    I followed this procedure in order to remove the service instance:
    Run Get-SPServiceInstance to find the GUID in the ID section of the Distributed Cache Service that is causing an issue.
    $s = get-spserviceinstance GUID 
    $s.delete()
    It deletes fine, but when I try to add it back with:
    Add-SPDistributedCacheServiceInstance
    I get this:
    Add-SPDistributedCacheServiceInstance : Could not load file or assembly 'Microsoft.ApplicationServer.Caching.Configuration, Version=1.0.0.0, Cul
    ture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
    How do I resolve this?

    Hi JmATK,
    Regarding this issue, we don't recommend deleting a service instance without stopping it gracefully first, because there may be data and state still attached to it.
    Stacy's recommendation is good; if the issue is a zombie process causing an unresponsive or hung state, you may need to reset it by re-attaching the database/farm.
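    For reference, the graceful removal order (taken from the commands in the original question of this thread) is:

    ```powershell
    # Drain this host's cached data to the other cache hosts, then remove
    # the service instance; only then is it safe to delete or re-add it
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance

    # Later, to add the service back on a host:
    Add-SPDistributedCacheServiceInstance
    ```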
    Best regards.
    Victoria
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected]

  • Distributed Cache errors

    On my application server, I am getting periodic entries under the General category:
    "Unable to write SPDistributedCache call usage entry."
    The error occurs every 5 minutes exactly. It is followed by:
    Calling... SPDistributedCacheClusterCustomProvider:: BeginTransaction
    Calling... SPDistributedCacheClusterCustomProvider:: GetValue(object transactionContext, string type, string key
    Calling... SPDistributedCacheClusterCustomProvider:: GetStoreUtcTime.
    Calling... SPDistributedCacheClusterCustomProvider:: Update(object transactionContext, string type, string key, byte[] data, long oldVersion).
    Sometimes this group of calls succeeds without an error, and the sequence continues for maybe 3 iterations every 5 minutes. Then the error
    "Unable to write SPDistributedCache call usage entry."
    happens again.
    My Distributed Cache Service is running on my Application Server and on my web front end.
    All values are default.
    Any idea why this is happening intermittently?
    Love them all...regardless. - Buddha

    Hi,
    From the error message, check whether the super user accounts were set up correctly.
    Refer to this article about configuring object cache user accounts in SharePoint Server 2013:
    https://technet.microsoft.com/en-us/library/ff758656(v=office.15).aspx
    If the issue persists, please check the SharePoint ULS logs located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS to get a detailed error description.
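    The object cache account setup described in that article looks roughly like this (a sketch; the web application URL and claims-encoded account names are placeholders for your own):

    ```powershell
    # Hypothetical URL and accounts - replace with your own values
    $wa = Get-SPWebApplication "http://sharepoint.contoso.com"
    $wa.Properties["portalsuperuseraccount"]   = "i:0#.w|CONTOSO\SPSuperUser"
    $wa.Properties["portalsuperreaderaccount"] = "i:0#.w|CONTOSO\SPSuperReader"
    $wa.Update()
    ```

    The two accounts also need Full Control and Full Read user policies, respectively, on the web application, as described in the linked article.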
    Best Regards,
    Lisa Chen
    TechNet Community Support

  • Distributed Cache : Performance issue; takes long to get data

    Hi there,
         I have set up a cluster on a Linux machine with 11 nodes (min & max heap = 1 GB). The nodes are connected through a multicast address/port. I have configured the Distributed Cache service to run on all the nodes, with 2 of them also running ExtendTCPService. I loaded a dataset of 13 million entries into the cache (approximately 5 GB), where the key is a String and the value is an Integer.
         I run a Java process from another Linux machine on the same network that makes use of this cache. The process fetches around 200,000 items from the cache, and it takes around 180 seconds just to fetch the data.
         I had a look at Performance Tuning > Coherence Network Tuning and checked the publisher and receiver success rates; both were nearly 0.998 on all the nodes.
         It's a bit hard to believe that it takes so long; maybe I'm missing something. I would appreciate your advice on this.
         More info :
              a) All nodes are running on Java 5 update 7
              b) The java process is running on JDK1.4 Update 8
              c) -server option is enabled on all the nodes and the java process
              d) I'm using Tangosol Coherence 3.2.2b371
              e) cache-config.xml
        <?xml version="1.0"?>
        <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
        <cache-config>
          <caching-scheme-mapping>
            <cache-mapping>
              <cache-name>dist-*</cache-name>
              <scheme-name>dist-default</scheme-name>
            </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
            <distributed-scheme>
              <scheme-name>dist-default</scheme-name>
              <backing-map-scheme>
                <local-scheme/>
              </backing-map-scheme>
              <lease-granularity>member</lease-granularity>
              <autostart>true</autostart>
            </distributed-scheme>
          </caching-schemes>
        </cache-config>
         Thanks,
         Amit Chhajed

    Hi Amit,
         Is the Java test process single-threaded, i.e. did you perform 200,000 consecutive cache.get() operations? If so, this would go a long way toward explaining the results, as most of the time in all processes would be spent waiting on the network, and your results would come out to just over 1 ms per operation. Please be sure to run with multiple test threads, and also make use of the cache.getAll() call where possible so that a single thread can fetch multiple items in parallel.
         Also, you may need to do some tuning on your cache server side. In general, on a 1 GB heap you should only use roughly 750 MB of that space for cache storage. Taking backups into consideration, this means 375 MB of primary data per JVM, so with 11 nodes this gives a cache capacity of about 4 GB. At 5 GB of data, each cache server will be running quite low on free memory, resulting in frequent GCs which will hurt performance. Based on my calculations, you should use 14 cache servers to hold your 5 GB of data. Be sure to run with -verbose:gc to monitor your GC activity.
         You must also watch your machine to make sure that your cache servers aren't getting swapped out. This means that your server machine needs to have enough RAM to keep all the cache servers in memory. Using "top" you will see that a 1GB JVM actually takes about 1.2 GB of RAM. Thus for 14 JVMs you would need ~17GB of RAM. Obviously you need to leave some RAM for the OS, and other standard processes as well, so I would say this box would need around 18GB RAM. You can use "top" and "vmstat" to verify that you are not making active use of swap space. Obviously the easiest thing to do if you don't have enough RAM, would be to split your cache servers out onto two machines.
         See http://wiki.tangosol.com/display/COH32UG/Evaluating+Performance+and+Scalability for more information on things to consider when performance testing Coherence.
         thanks,
         Mark

  • Distributed cache during solution deployment

    Hi,
    We are using the My Site newsfeed.
    What is the best practice during solution deployment so that the distributed cache is not affected?
    Last time, when we did an IIS reset, the feed was lost and we had to use the repopulation job to pull the data back. Is there a better way to follow during deployments and server upgrades?
    Thanks,
    Sudan

    Hi Sudan,
    The Distributed Cache service stores data in memory only, so executing iisreset may cause a cache flush. Please refer to the thread below to move all cached items from the local cache to another cache host in the cluster:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/6a415c75-4ca3-4c43-9110-25a68db93a54/sharepoint-2013-my-site-newsfeed-posts-disappear?forum=sharepointgeneral 
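    Before a planned iisreset or server restart, the local cache can be drained to another cache host with the graceful shutdown cmdlet, run on the server going down (a sketch):

    ```powershell
    # Moves this host's cached items to the other cache hosts in the
    # cluster before maintenance, so the newsfeed data is not lost
    Stop-SPDistributedCacheServiceInstance -Graceful
    ```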
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Locking in replicated versus distributed caches

    Hello,
    In the User Guide for Coherence 2.5.0, section 2.3 "Cluster Services Overview" says "the replicated cache service supports pessimistic locking", yet section 2.4 "Replicated Cache Service" says "if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock".
    I am trying to decide whether to use a replicated cache or a distributed cache, either of which will be small, where I want the objects to be locked across the whole cluster.
    If not all of the cluster nodes have to agree on a lock in a replicated cache, doesn't this mean that a replicated cache does not support pessimistic locking?
    Could you please explain this?
    Thanks,
    Rohan

    Hi Rohan,
    The Replicated cache supports pessimistic locking. The User Guide is discussing the implementation details and how they relate to performance. The Replicated and Distributed cache services differ in performance and scalability characteristics, but both support cluster-wide coherence and locking.
    Jon Purdy
    Tangosol, Inc.

  • Does SharePoint 2013 Online have Distributed Cache?

    Does SharePoint 2013 Online have Distributed Cache?

    What do you want to know about Online and the Distributed Cache? Are you planning a solution around it?
    According to the service description, the Distributed Cache service is not available to SharePoint Online customers. SharePoint Server 2013 customers can use the Distributed Cache service to cache feature functionality, which improves authentication, newsfeed, OneNote client access, security trimming, and page-load performance.
    check this:
    http://technet.microsoft.com/en-us/library/sharepoint-online-it-professional-service-description.aspx
    Please remember to mark your question as answered and vote helpful if this solves/helps your problem. Thanks - WS, MCITP (SharePoint 2010, 2013). Blog: http://wscheema.com/blog
