Managing the Distributed Cache
In Microsoft documentation I often see this (or something similar):
"The Distributed Cache service can end up in a nonfunctioning or unrecoverable state if you do not follow the procedures that are listed in this article. In extreme scenarios, you might have to rebuild the server farm. The Distributed Cache depends
on Windows Server AppFabric as a prerequisite. Do not administer the AppFabric Caching Service from the
Services window in Administrative Tools in
Control Panel. Do not use the applications in the folder named AppFabric for Windows Server on the
Start menu. "
In many blogs, including TechNet, I see this command routinely used:
Restart-Service -Name AppFabricCachingService
I often see this when updating timeout settings.
Are these considered the same thing?
Here is an example. How would you perform these steps?
Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$DLTC = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$DLTC.RequestTimeout = 3000
$DLTC.ChannelOpenTimeOut = 3000
$DLTC.MaxConnectionsToServer = 100
Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache $DLTC
Restart-Service -Name AppFabricCachingService
I haven't seen a clear statement about disabling the Distributed Cache. It provides many essential caches for which there are no replacements. Using the restart cmdlet isn't likely to force you to rebuild your farm; Microsoft simply doesn't want you administering the Distributed Cache outside of SharePoint, basically.
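If you want to stay within the SharePoint-supported tooling the Microsoft warning refers to, the graceful route is the SharePoint cmdlets rather than Restart-Service. A sketch, to be run from the SharePoint Management Shell on the cache host (the filter expression matches the service-instance name used elsewhere in these threads):

```powershell
# Gracefully drain cached data off this host and stop the service instance
Stop-SPDistributedCacheServiceInstance -Graceful

# Start it again by provisioning the service instance on this server
$instance = Get-SPServiceInstance | ? {
    ($_.Service.ToString()) -eq "SPDistributedCacheService Name=AppFabricCachingService" -and
    ($_.Server.Name) -eq $env:COMPUTERNAME
}
$instance.Provision()
```

That said, running Restart-Service -Name AppFabricCachingService after a client-settings change is common practice and is unlikely to harm the farm.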
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
Similar Messages
-
Recently installed SharePoint 2013 RTM; on the Newsfeed page an error is displayed, and no entries appear in the Following or Everyone tabs.
"The operation failed because the server could not access the distributed cache."
Reading through various posts, I've checked:
- Activity feeds and mentions tabs are working as expected.
- User Profile Service is operational and syncing as expected
- Search is operational and indexing as expected
- The farm was installed based on the autospinstaller scripts.
- Don't believe this to be a permissions issue; during testing, accounts were added to the admin group to verify.
Any suggestions are welcomed, thanks.
The full error message and trace logs are as follows.
SharePoint returned the following error: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException. Internal error code: 55. Contact your system administrator
for help in resolving this problem.
From the trace logs there's several messages which are triggered around the same time:
http://msdn.microsoft.com/en-AU/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx
Handling an exception. Exception details: System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in
FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault)./LM/W3SVC/2/ROOT/d71732192b0d4afdad17084e8214321e-1-129962393079894191System.ServiceModel.FaultException`1[[Microsoft.Office.Server.UserProfiles.FeedCacheFault,
Microsoft.Office.Server.UserProfiles, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]], System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089Unexpected exception in FeedCacheService.GetPublishedFeed: Object
reference not set to an instance of an object..
at Microsoft.Office.Server.UserProfiles.FeedCacheService.Microsoft.Office.Server.UserProfiles.IFeedCacheService.GetPublishedFeed(FeedCacheRetrievalEntity fcTargetEntity, FeedCacheRetrievalEntity fcViewingEntity, FeedCacheRetrievalOptions fcRetOptions)
at SyncInvokeGetPublishedFeed(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not
set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault).
SPSocialFeedManager.GetFeed: Exception: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. ) : Correlation
ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM
at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)
at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)
at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)
at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()
Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of
an object.. ) : Correlation ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM
at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)
at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)
at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)
at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)
at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()
at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)
Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Social.SPSocialException: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException.
Internal error code: 55.
at Microsoft.Office.Server.Social.SPSocialUtil.TryTranslateExceptionAndThrow(Exception exception)
at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)
at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass48`1.<S2SInvoke>b__47()
at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)
Thanks Thuan,
I've restarted the Distributed Cache service, and the error is still occurring.
The AppFabric Caching Service is running under the service apps account, and does appear operational based on:
> use-cachecluster
> get-cache
CacheName [Host]                                                      Regions
default
DistributedAccessCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedActivityFeedCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedActivityFeedLMTCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9 [SERVER:22233]   LMT(Primary)
DistributedBouncerCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedDefaultCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedLogonTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9 [SERVER:22233]   Default_Region_0538(Primary), Default_Region_0004(Primary), Default_Region_0451(Primary)
DistributedSearchCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedSecurityTrimmingCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
DistributedServerToAppServerAccessTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9 -
How Best to Manage the Media Cache?
Hi. I use a Mac Pro and have a 24TB RAID which holds most of my media. My problem is how best to manage the Media Cache, which gets far too big to hold on my OS drive (as recommended). I could store it in a separate "cache" folder on my RAID, but will this affect performance? Or is it best to save it with its media? I would rather not trash the cache files, as I often need to draw upon older projects and their media for current projects. I think this is the weakest part for FCP editors switching to Premiere.
Ideally you should have five internal 'drives':
OS/Programs
Projects
Cache/Scratch
Media
Exports
You've got the OS and Media drives worked out, now get those other three in place. -
Forum,
Our farm has two servers that are hosting and running the Distributed Cache service. How can I know if both servers/hosts belong to the exact same cluster? What is the command for that?
Hi,
You can take help of the article below; it has a list of PowerShell commands that provide details of each host inside the cluster:
http://almondlabs.com/blog/manage-the-distributed-cache/
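A quick way to check, assuming the AppFabric Caching administration module is available on one of the cache hosts (these are read-only queries, so they don't conflict with the "don't administer AppFabric directly" guidance):

```powershell
# Connect to the cluster configuration this host belongs to
Use-CacheCluster

# List every cache host registered in that cluster, with port and status
Get-CacheHost
```

If both SharePoint servers appear in the Get-CacheHost output, they are members of the same cache cluster.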
-
High-units reflect twice the amount with dual JVM's in a distributed cache
HI all,
I have a question - i have a near cache scheme defined - running 4 JVM's with my application deployed to it (localstorage=false) - and 2 JVM's for the distributed cache (localstorage=true)
The high-units is set to 2000 - but the cache is allowing 4000. Is this b/c each JVM will allow for 2000 high-units each?
I was under the impression that as long as coherence is running in the same multi-cast address and port - that the total high-units would be 2000 not 4000.
Thanks...
user644269 wrote:
HI all,
I have a question - i have a near cache scheme defined - running 4 JVM's with my application deployed to it (localstorage=false) - and 2 JVM's for the distributed cache (localstorage=true)
The high-units is set to 2000 - but the cache is allowing 4000. Is this b/c each JVM will allow for 2000 high-units each?
I was under the impression that as long as coherence is running in the same multi-cast address and port - that the total high-units would be 2000 not 4000.
Thanks...
Hi,
the high-units setting is per backing map, so in your case it means 2000 units per storage-enabled node.
From 3.5 it will become a bit more complex with the partition aware backing maps.
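As a quick sanity check of that arithmetic (an illustrative snippet; the class and method names are mine, not part of Coherence): the effective cluster-wide ceiling is the per-backing-map limit multiplied by the number of storage-enabled nodes.

```java
public class HighUnitsMath {
    // high-units is enforced per backing map, i.e. per storage-enabled JVM,
    // so the cluster-wide ceiling scales with the storage-enabled node count.
    static int clusterWideLimit(int highUnitsPerNode, int storageEnabledNodes) {
        return highUnitsPerNode * storageEnabledNodes;
    }

    public static void main(String[] args) {
        // <high-units>2000</high-units> with 2 storage-enabled JVMs
        System.out.println(clusterWideLimit(2000, 2)); // prints 4000
    }
}
```

The 4 storage-disabled application JVMs don't factor in; only the 2 storage-enabled ones hold backing maps.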
Best regards,
Robert -
Different distributed caches within the cluster
Hi,
I've three machines, n1, n2, and n3, that host Tangosol. Two of them act as the primary distributed cache and the third one acts as the secondary cache. I also have WebLogic running on n1, which based on some requests pumps data onto the distributed cache on n1 and n2. I've a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All 3 nodes are within the same cluster.
I would like to ensure that the data coming directly from WebLogic is distributed only across n1 and n2 and NOT n3. For example, if I do not start an instance of Tangosol on node n3 and an object gets pruned from either n1 or n2, then ideally I should get a storage-not-configured exception, which does not happen.
The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol does populate the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
My next step was to define the Dist:n3 scheme on n1 and n2 with local-storage false, and have a similar config file on n3 with local-storage for Dist:n3 as true and local-storage for the primary cache as false.
Can I configure local-storage specific to a cache rather than to a node?
I also have an EJB deployed on WebLogic that entertains a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement
NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.
Hi Jigar,
i've three machines n1 , n2 and n3 respectively that
host tangosol. 2 of them act as the primary
distributed cache and the third one acts as the
secondary cache.
First, I am curious as to the requirements that drive this configuration setup.
i would like to ensure that the data directly coming
from weblogic should only be distributed across n1
and n2 and NOT n3. for e.g. i do not start an
instance of tangosol on node n3. and an object gets
pruned from either n1 or n2. so ideally i should get
a storage not configured exception which does not
happen.
The point is the moment is say
CacheFactory.getCache("Dist:n3") in the cache
listener, tangosol does populate the secondary cache
by creating an instance of Dist:n3 on either n1 or n2
depending from where the object has been pruned.
from my understanding i dont think we can have a
config file on n1 and n2 that does not have a scheme
for n3. i tried doing that and got an illegalstate
exception.
my next step was to define the Dist:n3 scheme on n1
and n2 with local storage false and have a similar
config file on n3 with local-storage for Dist:n3 as
true and local storage for the primary cache as
false.
can i configure local-storage specific to a cache
rather than to a node.
i also have an EJB deployed on weblogic that also
entertains a getData request. i.e. this ejb will also
check the primary cache and the secondary cache for
data. i would have the statement
NamedCahe n3 = CacheFactory.getCache("n3") in the bean as well.
In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service by service basis (i.e. distributed-scheme/local-storage).
Later,
Rob Misek
Tangosol, Inc. -
My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology having 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
Thanks in advance!
For the Distributed Cache servers, you simply make them farm members (like any other SharePoint server) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except the Foundation Web service, for ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
Trevor Seward
-
Error handling for distributed cache synchronization
Hello,
Can somebody explain to me how the error handling works for the distributed cache synchronization ?
Say I have four nodes of a weblogic cluster and 4 different sessions on each one of those nodes.
On Node A an update happens on object B. This update is going to be propogated to all the other nodes B, C, D. But for some reason the connection between node A and node B is lost.
In the following xml
<cache-synchronization-manager>
<clustering-service>...</clustering-service>
<should-remove-connection-on-error>true</should-remove-connection-on-error>
If I set this to true does this mean that the Toplink will stop sending updates from node A to node B ? I presume all of this is transparent. In order to handle any errors I do not have to write any code to capture this kind of error .
Is that correct ?
Aswin.
This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log. -
Distributed cache doesn't work. Please help.
Hi,
I am trying to use distributed cache by:
1. use coherence as 2nd level cache for hibernate in the application server (weblogic 9). Configuration as follows:
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>DistributedInMemoryCache</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>DistributedInMemoryCache</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<high-units>{size-limit 0}</high-units>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
2. start a standalone jvm locally to form a 2 nodes cluster with the coherence node above using the same cache-config.xml as shown above. I use the following command to start this cache:
%JAVA_HOME%/bin/java -server -showversion -jar %LIB_HOME%\coherence.jar
I enabled JMX on both JVMs. The Coherence node in WebLogic shows a bunch of caches (I use the @Cache(usage = CacheConcurrencyStrategy.READ_ONLY) annotation in my code to enable caching for an entity), and a few cache services including DistributedCache are started. However, from the standalone JVM I don't see any caches or the DistributedCache service (the only service visible in JConsole is Management).
My goal is to use the standalone jvm as a cache server that holds all the caches while the coherence in weblogic as the client which has no local storage but a near cache.
However, I could not even get the distributed cache to work. Please help.
Thanks
Hi,
To start the cache server you need to use the command like this:
%JAVA_HOME%/bin/java -server -Dtangosol.coherence.cacheconfig=/path/to/cache_configuration_descriptor -cp %LIB_HOME%\coherence.jar com.tangosol.net.DefaultCacheServer
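To make the WebLogic-hosted Coherence node a pure client of that standalone cache server, the usual approach (as I understand it; paths shown are placeholders) is the local-storage system property, so both JVMs share the same cache config but only the cache server holds data:

```shell
# Storage-enabled cache server:
java -server \
  -Dtangosol.coherence.cacheconfig=/path/to/cache-config.xml \
  -Dtangosol.coherence.distributed.localstorage=true \
  -cp coherence.jar com.tangosol.net.DefaultCacheServer

# On the WebLogic JVM, add this argument so it joins the same
# services without storing data locally:
#   -Dtangosol.coherence.distributed.localstorage=false
```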
Regards,
Dimitri -
Distributed Cache service stuck in Starting Provisioning
Hello,
I'm having problem with starting/stopping Distributed Cache service in one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled in all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts
but one (APP server) using below PowerShell commands, which worked fine.
Stop-SPDistributedCacheServiceInstance -Graceful
Remove-SPDistributedCacheServiceInstance
But later I attempted to add the service back to two hosts (WFE servers) using below command and unfortunately one of them got stuck in the process. When I look at the Services on Server from Central Admin, the status says "Starting".
Add-SPDistributedCacheServiceInstance
Also, when I execute below script, the status says "Provisioning".
Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
I tried below script,
$instanceName ="SPDistributedCacheService Name=AppFabricCachingService"
$serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
$serviceInstance.Unprovision()
$serviceInstance.Delete()
,but it didn't work either, and I got below error.
"SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it. Update all of these dependants to point to null or
different objects and retry this operation. The dependant objects are as follows:
SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
Has anyone come across this issue? I would appreciate any help.
Thanks!
Hi,
Are you able to ping the server that is already running Distributed Cache on this server? For example:
ping WFE01
As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from the cluster which was configured to allow inbound ICMP (ICMPv4) traffic through the firewall, you must configure the first server of the new cluster to allow inbound ICMP (ICMPv4) traffic through the firewall.
You can create a rule to allow the incoming port.
For more information, you can refer to the blog:
http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
Thanks,
Eric Tao
TechNet Community Support -
Hi,
We have a server (Server 1) on which the Distributed Cache service was in an "Error Starting" state.
While applying a service pack, due to some issue we were unable to apply the patch on Server 1, so we decided to remove the affected server from the farm and work on it. The affected server (Server 1) was removed from the farm through the Configuration Wizard.
Even after running the Configuration Wizard, we were still able to see the server (Server 1) on the SharePoint Central Admin site (Servers in Farm); when clicked, the "Distributed Cache" service was still visible with a status of "Error Starting".
We tried deleting the server from the farm and got an error message; the ULS logs displayed the below.
A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
isGraceFulShutDown)'
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
isGraceFulShutDown) at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
We are unable to perform any install/repair operation of SharePoint on the affected server (Server 1); as the server is no longer in the farm, we are unable to run any PowerShell commands.
Questions:-
What would cause that to happen?
Is there a way to resolve this issue? (please provide the steps)
Satyam
Hi,
try this:
http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
Hope this helps. -
Set request timeout for distributed cache
Hi,
Coherence provides 3 parameters we can tune for the distributed cache
- tangosol.coherence.distributed.request.timeout - the default client request timeout for distributed cache services
- tangosol.coherence.distributed.task.timeout - the default server execution timeout for distributed cache services
- tangosol.coherence.distributed.task.hung - the default time before a thread is reported as hung by distributed cache services
It seems these timeout values are used for both system activities (node discovery, data re-balance etc.) and user activities (get, put). We would like to set the request timeout for get/put. But a low threshold like 10 ms sometimes causes the system activities to fail. Is there a way for us to separately set the timeout values? Or even is it possible to setup timeout on individual calls (like get(key, timeout))?
-thanks
Hi,
not necessarily for get and put methods, but for queries, entry-processors, entry-aggregators, and invocable-agent sending, you can make the sent filter, aggregator, entry-processor, or agent implement PriorityTask, which allows you to make QoS expectations known to Coherence. Most or all stock aggregators and entry-processors implement PriorityTask, if I remember correctly.
For more info, look at the documentation of PriorityTask.
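For illustration, an entry processor carrying its own request timeout might look roughly like this. This is a sketch from memory of the Coherence 3.x PriorityTask interface (the class name and the 10 ms value are mine); verify the signatures against the PriorityTask Javadoc before relying on it:

```java
import com.tangosol.net.PriorityTask;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Illustrative: an entry processor that tells Coherence this particular
// request should time out after 10 ms, without touching the service-wide
// tangosol.coherence.distributed.* timeout properties.
public class TimedGetProcessor extends AbstractProcessor implements PriorityTask {
    public Object process(InvocableMap.Entry entry) {
        return entry.getValue();
    }

    public int getSchedulingPriority() {
        return PriorityTask.SCHEDULE_STANDARD;
    }

    public long getExecutionTimeoutMillis() {
        return PriorityTask.TIMEOUT_DEFAULT;
    }

    public long getRequestTimeoutMillis() {
        return 10L; // per-call request timeout, in milliseconds
    }

    public void runCanceled(boolean fAbandoned) {
        // nothing to clean up in this sketch
    }
}
```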
Best regards,
Robert -
Hi
I am a newbie to Oracle Coherence, trying to get hands-on experience by running an example (coherence-example-distributedload.zip) (Coherence GE 3.6.1). I am running two instances of the server. After this I ran "load.cmd" to distribute data across the two server nodes - I can see that data is partitioned across the server instances.
Now I run another instance (in another JVM) of a program which tries to join the distributed cache and query the data loaded on the server instances. I see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?
NamedCache nNamedCache = CacheFactory.getCache("example-distributed");
Filter eEqualsFilter = new GreaterFilter("getLocId", "1000");
Set keySet = nNamedCache.keySet(eEqualsFilter);
I see here that keySet has no records. Can you please help?
Thanks
sunder
I got this problem sorted out - the problem was in cache-config.xml. The correct one looks as below.
<distributed-scheme>
<scheme-name>example-distributed</scheme-name>
<service-name>DistributedCache1</service-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<scheme-name>DBCacheLoaderScheme</scheme-name>
<internal-cache-scheme>
<local-scheme>
<scheme-ref>DBCache-eviction</scheme-ref>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>com.test.DBCacheStore</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>locations</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
<cachestore-timeout>6000</cachestore-timeout>
<refresh-ahead-factor>0.5</refresh-ahead-factor>
</read-write-backing-map-scheme>
</backing-map-scheme>
<thread-count>10</thread-count>
<autostart>true</autostart>
</distributed-scheme>
<invocation-scheme>
<scheme-name>example-invocation</scheme-name>
<service-name>InvocationService1</service-name>
<autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
</invocation-scheme>
Missed <class-scheme> element inside <cachestore-scheme> of <read-write-backing-map-scheme>.
Thanks
sunder -
Starting a distributed cache system
Hi,
Is there a way to start a distributed cache system on different boxes without actually running the command line application on those boxes?
Thanks,
Sandeep.
Hi Sandeep,
Once you call one of the following, the Distributed Cache will be started on a particular node:
CacheFactory.getDistributedCache("my-cache", getClass().getClassLoader());
or
CacheFactory.getDistributedCacheService("Distributed");
Later,
Rob Misek
Tangosol, Inc.
Coherence: Cluster your Work. Work your Cluster. -
Setup failover for a distributed cache
Hello,
For our production setup we will have 4 app servers, one clone per app server, so there will be 4 clones in the cluster. And we will have 2 JVMs for our distributed cache, one being a failover; both of those will be in the cluster.
How would I configure the failover for the distributed cache?
Thanks
user644269 wrote:
Right - so each of the near cache schemes defined would need to have the back map high-units set to where it could take on 100% of data.
Specifically, the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the [Cache Configuration Elements|http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements]).
There are two options:
1) No Expiry -- In this case you would have to size the storage enabled JVMs to that an individual JVM could store all of the data.
or
2) Expiry -- In this case you would set the high-units a value that you determine. If you want it to store all the data then it needs to be set higher than the total number of objects that you will store in the cache at any given time or you can set it lower with the understanding that once that high-units is reached Coherence will evict some data from the cluster (i.e. remove it from the "cluster memory").
user644269 wrote:
Other than that - there is not configuration needed to ensure that these JVM's act as a failover in the event one goes down.
Correct, data fault tolerance is on by default (set to one level of redundancy).
:Rob:
Coherence Team