Distributed Cache - large objects

Hi,
I have a need to store a large collection of objects associated with one key in the cache, and updates will happen to one object in the collection at a time.
How does Coherence handle synchronization between the distributed cache nodes -- will the whole collection be marked dirty and synced again, or just the parts that changed? Ours is a performance-intensive application, so I would like to know the implications.
Thanks in advance for your suggestions.
- Anand

Hi Anand,
You can use the InvocableMap API (also see the "Provide a Data Grid" wiki topic) for this - a very simple example is attached.
Regards,
Dimitri
Attachment: Main.java
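
For reference, here is a minimal sketch of the InvocableMap approach Dimitri describes, assuming Coherence 3.x; the cache name, key, and List-based value are illustrative assumptions, not taken from the attached Main.java:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;
import java.io.Serializable;
import java.util.List;

// Updates a single element of a cached collection in place on the node that
// owns the entry, so the client never ships the whole collection back and forth.
public class UpdateItemProcessor extends AbstractProcessor implements Serializable {
    private final int index;
    private final Object newValue;

    public UpdateItemProcessor(int index, Object newValue) {
        this.index = index;
        this.newValue = newValue;
    }

    public Object process(InvocableMap.Entry entry) {
        List items = (List) entry.getValue();
        items.set(index, newValue);
        entry.setValue(items); // marks the entry as modified
        return null;
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("orders");
        cache.invoke("order-42", new UpdateItemProcessor(3, "updated"));
    }
}

Note that the entry's whole value is still re-serialized on the owning node (and copied to its backup); what the processor saves is the get-modify-put round trip of the full collection through the client.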

Similar Messages

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal, the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations are taking ~45ms per bulk operation where each operation is putting 100 1K items, for cache.getAll operations we see about ~15ms per bulk operation. Additionally note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache and the types of operations you are performing?
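    As a point of reference, here is a minimal sketch of the kind of bulk-operation timing described above (the cache name and entry sizes are assumptions; the actual load-test tool is not reproduced here):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.HashMap;
    import java.util.Map;

    public class BulkTiming {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test");

            // Build one bulk operation: 100 entries of ~1KB each.
            Map batch = new HashMap();
            for (int i = 0; i < 100; i++) {
                batch.put("key-" + i, new byte[1024]);
            }

            long start = System.currentTimeMillis();
            cache.putAll(batch);                        // one bulk put
            long putMillis = System.currentTimeMillis() - start;

            start = System.currentTimeMillis();
            Map results = cache.getAll(batch.keySet()); // one bulk fetch
            long getMillis = System.currentTimeMillis() - start;

            System.out.println("putAll: " + putMillis + "ms, getAll: " + getMillis + "ms");
        }
    }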
    thanks,
    mark

  • Object locking in Distributed cache

    Hi,
          I have gone through some of the posts on the forum regarding locking issues.
          The thread at http://www.tangosol.net/forums/thread.jspa?messageID=3416 states that
          ..locks block locks, not gets or puts. If one member locks a key, it prevents another member from locking the key. It does not prevent the other member from getting/putting that key.
          What exactly do we mean by the above statement?
          I'm using a distributed cache and would like to lock an object before "put" or "remove" operations, but I want "read" operations to be lock-free. Now my questions are:
          1) In a distributed cache setup, if I try to obtain a lock before a put or remove and discover that I cannot obtain it (i.e. false is returned) because someone else has locked it, how do I discover when the other entity releases the lock, and when should I retry?
          2) Again, if I lock using the lock(Object key) method, can I be sure that no other cluster node will be writing/reading that key until I release the lock?
          3) The first post in the thread http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588 suggests that in a distributed setup locks have no effect on other cluster nodes. But I want locks to be valid across the cluster. If an item is locked, no one else should be able to transact with it. How do I get that?
         Regards,
         Mohnish

    Hi Mohnish,
          >> 1) In a distributed cache setup, if I try to obtain a lock
          >> before a put or remove and discover that I cannot obtain it
          >> (i.e. false is returned) because someone else has locked it,
          >> how do I discover when the other entity releases the lock,
          >> and when should I retry?
          You may try to acquire a lock (waiting indefinitely for lock acquisition) by calling cache.lock(key, -1); you may try for a specified time period by calling cache.lock(key, cMillis). With either of these approaches, your thread will block until the other thread releases the lock (or the timeout is reached). In either case, if the other node releases its lock your lock request will complete immediately and your thread will resume execution.
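          A minimal sketch of that blocking-lock pattern (the cache and key names are assumptions):

          import com.tangosol.net.CacheFactory;
          import com.tangosol.net.NamedCache;

          public class LockExample {
              public static void main(String[] args) {
                  NamedCache cache = CacheFactory.getCache("example");

                  // Block until the lock is granted (-1 = wait indefinitely);
                  // pass a positive timeout in millis to give up after that long.
                  if (cache.lock("some-key", -1)) {
                      try {
                          Object value = cache.get("some-key");
                          cache.put("some-key", value); // mutate under the lock
                      } finally {
                          cache.unlock("some-key");     // always release
                      }
                  }
              }
          }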
          >> 2) Again, if I lock using the lock(Object key) method, can I
          >> be sure that no other cluster node will be writing/reading
          >> that key until I release the lock?
         If you want to prevent other threads from writing/reading a cache entry, you must ensure that those other threads lock the key prior to writing/reading it. If they do not lock it prior to writing, they may have dirty writes (overwriting your write or vice versa); if they do not lock prior to reading, they may have dirty reads (having the underlying data change after they've read it).
          >> 3) The first post in the thread
          >> http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588
          >> suggests that in a distributed setup locks have no effect on
          >> other cluster nodes. But I want locks to be valid across the
          >> cluster. If an item is locked, no one else should be able to
          >> transact with it. How do I get that?
          The first post in that thread states that if the second thread doesn't lock, it will overwrite the first thread (even if the first thread has locked). However, there is an inconsistency in that the Replicated cache topology has a stronger memory model than the Distributed/Near topologies. In the Replicated topology, locks not only block locks from other nodes, they also prevent puts from other nodes.
         Jon Purdy
         Tangosol, Inc.

  • Distributed Cache gets are very time consuming!!!!!

    We have an application that uses Coherence as a distributed partitioned cache server.
    Our problem is that access to the near cache is very fast, but access to the remote servers is very CPU-consuming (maybe due to deserialization of the retrieved objects), to the point that disabling the cache and reading the objects directly from the DBMS (Oracle) improves performance.
    We use Coherence as the second-level cache of Hibernate, so we don't have much control over how Hibernate serializes and deserializes its objects (we cannot implement custom externalization).
    Do you have any experience of performance problems with the Hibernate/Tangosol combination and a distributed cache?

    Hi Giancarlo,
    It's Jon, not Cameron ;)
    The Hibernate cache API (including 3.0) does not support bulk-fetches from cache, which means that these cache fetches are serial rather than parallel. Depending on how much of your performance problem is network related, Coherence may actually perform much better than straight JDBC as the application load increases (time per request will remain similar, but won't increase much under heavier load).
    The advantage to using the Coherence API (rather than Hibernate API) is that you have far better control over how your queries run. The drawback of course is that you have to code your own queries if they become more complex than search queries. As a result, it often makes sense to use Hibernate (or equivalent) for the bulk of your queries, then code specific queries directly against Coherence when performance really matters. A simple example of this is the ability to perform bulk gets (using NamedCache.getAll()). With queries that return hundreds or thousands of items, this can have a huge impact on performance (an order of magnitude or more depending on object complexity). You also get the ability to use XmlBean/ExternalizableLite and cluster-parallel search queries.
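    To illustrate the bulk-get point, a minimal sketch using NamedCache.getAll() (the cache name and keys are assumptions):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class BulkGet {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("orders");
            Set keys = new HashSet(Arrays.asList(new String[] {"order-1", "order-2", "order-3"}));

            // One parallel bulk fetch instead of N serial cache.get() calls;
            // with hundreds or thousands of keys, this is where the
            // order-of-magnitude difference mentioned above comes from.
            Map results = cache.getAll(keys);
            System.out.println(results.size() + " entries fetched");
        }
    }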
    As an aside, with the Coherence cache servers, you should avoid making your JVM heaps excessively large. As a general rule of thumb, we recommend a 512MB heap size for cache servers, using as many instances as required to reach your desired cache capacity. Also, from what I've seen, 64-bit JVMs are less efficient with memory usage than 32-bit (though performance seems a bit better on the x86-64 platforms vs the 32-bit versions).
    Jon Purdy
    Tangosol, Inc.

  • Distributed Cache service stuck in Starting Provisioning

    Hello,
    I'm having a problem with starting/stopping the Distributed Cache service on one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled on all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts but one (the APP server) using the PowerShell commands below, which worked fine.
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    But later I attempted to add the service back to two hosts (the WFE servers) using the command below, and unfortunately one of them got stuck in the process. When I look at Services on Server in Central Admin, the status says "Starting".
    Add-SPDistributedCacheServiceInstance
    Also, when I execute the script below, the status says "Provisioning".
    Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
    I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
    I tried the script below,
    $instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
    $serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
    $serviceInstance.Unprovision()
    $serviceInstance.Delete()
    but it didn't work either, and I got the error below.
    "SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
    different objects and retry this operation.  The dependant objects are as follows: 
    SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
    Has anyone come across this issue? I would appreciate any help.
    Thanks!

    Hi ,
    Are you able to ping, from this server, the server that is already running Distributed Cache? For example:
    ping WFE01
    As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from the cluster (the one configured to allow inbound ICMP traffic), you must configure the first server of the new cluster to allow inbound ICMP (ICMPv4) traffic through the firewall.
    You can create a rule to allow the incoming port.
    For more information, you can refer to this blog:
    http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
    Thanks,
    Eric
    Forum Support
    Eric Tao
    TechNet Community Support

  • Error message when using a MessageListener using a distributed cache

    Hi --
    I'm getting the following error message when I attach a message listener to a distributed cache. I get the same message if I attach the listener to the NearCache in front of the DistributedCache, or to the DistributedCache itself.
    My message listener listens for a create() operation and writes the created value out to the database. Both the key and value are Java objects that get serialized when they're pushed into the cache. The listener is never called.
    The error spits out two messages, which look like:
    2003-04-07 21:48:05.281 Tangosol Coherence 2.1/239 <Error> (thread=DistributedCache:EventDispatcher): An exception occurred while dispatching this event:
    CacheEvent: MapEvent{com.tangosol.coherence.component.util.daemon.queueProcessor
    .service.DistributedCache$BinaryMap added: key=Binary(length=269, value=0x0005AC
    ED000573720021636F6D2E6F6C742E646174612E696E7465726E616C2E4461746162617365554944
    6FABB5383C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273
    7472616374554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A
    6176612F7574696C2F4D61703B4C0009756964537472696E677400124C6A6176612F6C616E672F53
    7472696E673B7870737200116A6176612E7574696C2E486173684D61700507DAC1C31660D1030002
    46000A6C6F6164466163746F724900097468726573686F6C6478703F400000000000087708000000
    0B0000000078740011363930395F436F6D706F6E656E74426964), value=Binary(length=1069,
    value=0x0005ACED000573720026636F6D2E6562726576696174652E61756374696F6E2E6269642
    E436F6D706F6E656E744269648EC95C4DE33A88D802000D5A0007626573744269644A000B6269645
    3657175656E6365440004636F73745A0007696E697469616C5A00066E65774269645A00067469654
    2696444000576616C75654C000B61756374696F6E4D6F64657400294C636F6D2F656272657669617
    4652F61756374696F6E2F6576656E742F41756374696F6E4D6F64653B4C000A61756374696F6E554
    9447400124C636F6D2F6F6C742F646174612F5549443B4C000A636F6D70616E7955494471007E000
    24C000C636F6D706F6E656E7455494471007E00024C000A737472696E67436F73747400124C6A617
    6612F6C616E672F537472696E673B4C000B737472696E6756616C756571007E00037872002D636F6
    D2E6562726576696174652E636F6D6D6F6E2E416273747261637450657273697374656E744F626A6
    56374497E2729A24CA5790200034C000A637265617465446174657400104C6A6176612F7574696C2
    F446174653B4C000375696471007E00024C000A7570646174654461746571007E000578707372000
    E6A6176612E7574696C2E44617465686A81014B59741903000078707708000000F46B9A286278737
    20021636F6D2E6F6C742E646174612E696E7465726E616C2E44617461626173655549446FABB5383
    C6013B402000078720021636F6D2E6F6C742E646174612E696E7465726E616C2E416273747261637
    4554944D04F591196E4DC1B0200024C000D657874656E73696F6E4461746174000F4C6A6176612F7
    574696C2F4D61703B4C0009756964537472696E6771007E00037870737200116A6176612E7574696
    C2E486173684D61700507DAC1C31660D103000246000A6C6F6164466163746F72490009746872657
    3686F6C6478703F4000000000000877080000000B0000000078740011363930395F436F6D706F6E6
    56E744269647371007E00077708000000F46B9A286278000000000000000000402E0000000000000
    00000402E00000000000073720027636F6D2E6562726576696174652E61756374696F6E2E6576656
    E742E41756374696F6E4D6F6465BD0C9E245C328B4F02000078720029636F6D2E656272657669617
    4652E636F6D6D6F6E2E41627374726163745479706553616665456E756D506D8C41B0144DB302000
    249000A696E744C69746572616C4C000D737472696E674C69746572616C71007E000378700000000
    374000A50524F44554354494F4E7371007E00097371007E000D3F4000000000000877080000000B0
    00000007874000A38335F41756374696F6E7371007E00097371007E000D3F4000000000000877080
    000000B000000007874000A34325F436F6D70616E797371007E00097371007E000D3F40000000000
    00877080000000B00000000787400103131315F426964436F6D706F6E656E747070)}
    2003-04-07 21:48:05.687 Tangosol Coherence 2.1/239 <Warning> (thread=CoherenceLogger): Asynchronous logging character limit exceeded; discarding 3 log messages (lines=17, chars=1416)

    Kris,
    First of all, you should increase the value of the logging-config/character-limit element in tangosol-coherence.xml to see the message in its entirety. The default setting is 4096, which is not enough to see your exception text.
    When you do that, I believe you will see that the actual exception is a java.lang.ClassNotFoundException, indicating that the node that has the listener installed doesn't know about the class that is being put into the cache. This can be easily fixed as shown here: http://www.tangosol.com/faq-coherence.jsp#classnotfound
    Please let me know if that doesn't help.
    Gene
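    For context, a minimal sketch of attaching the kind of listener described above, assuming a Coherence release that provides com.tangosol.util.AbstractMapListener; the listener class (and the classes of the cached keys and values) must be resolvable on every node that dispatches the event, which is the crux of Gene's answer. The cache name and handler body are assumptions:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    public class CreateListener extends AbstractMapListener {
        // Called when an entry is inserted (the "create" case).
        public void entryInserted(MapEvent evt) {
            // Deserializing the key/value here requires their classes on this
            // node's classpath; otherwise the event dispatcher throws the
            // ClassNotFoundException described above.
            System.out.println("created: " + evt.getKey() + " -> " + evt.getNewValue());
            // ... write the new value to the database ...
        }

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("bids");
            cache.addMapListener(new CreateListener());
            // ... keep the process alive to receive events ...
        }
    }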

  • Error handling for distributed cache synchronization

    Hello,
    Can somebody explain to me how the error handling works for the distributed cache synchronization ?
    Say I have four nodes of a weblogic cluster and 4 different sessions on each one of those nodes.
    On node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, and D. But for some reason the connection between node A and node B is lost.
    In the following xml
    <cache-synchronization-manager>
    <clustering-service>...</clustering-service>
    <should-remove-connection-on-error>true</should-remove-connection-on-error>
    If I set this to true, does it mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent, and that in order to handle any errors I do not have to write any code to capture this kind of error.
    Is that correct?
    Aswin.

    This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
    For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
    You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log.
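    A minimal sketch of the session ExceptionHandler approach (package and method names follow the old oracle.toplink API; treat the exact signatures as an assumption and verify against your TopLink version):

    import oracle.toplink.exceptions.ExceptionHandler;

    // Receives cache-synchronization (and other) errors raised by the session.
    public class CacheSyncExceptionHandler implements ExceptionHandler {
        public Object handleException(RuntimeException exception) {
            // Log or compensate here; rethrow if the error should still propagate.
            System.err.println("TopLink error: " + exception.getMessage());
            throw exception;
        }
    }

    // Registration during session setup (assumed API):
    // session.setExceptionHandler(new CacheSyncExceptionHandler());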

  • JPA2 - downloading large objects from database

    Hi,
    I would like to enable users to download large (up to 20MB) files stored in an Oracle database.
    I think I should use a stream, but I don't know how to get JPA to return a stream of a large object. Can you help?
    I'm using EclipseLink.

    Hello,
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/Pagination#Using_a_ScrollableCursor describes using a scrollable cursor to return entities. Since this is going through JPA (you didn't mention which API you are using to access EclipseLink), you will want to be careful to manage/clear your cache appropriately so it does not use all your available memory. You will probably want to make this query read-only and clear the cache every few iterations using em.clear().
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Queries_(ELUG)#Stream_and_Cursor_Query_Results describes and has links to using streams and cursors with the native EclipseLink read queries.
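    A minimal sketch of the scrollable-cursor approach through JPA, with hint names taken from the linked wiki pages; the entity name and batching interval are assumptions, so verify against your EclipseLink version:

    import javax.persistence.EntityManager;
    import javax.persistence.Query;
    import org.eclipse.persistence.queries.ScrollableCursor;

    public class LargeObjectStreamer {
        public static void streamDocuments(EntityManager em) {
            Query query = em.createQuery("SELECT d FROM Document d");
            query.setHint("eclipselink.cursor.scrollable", true); // return a cursor
            query.setHint("eclipselink.read-only", true);         // skip change tracking

            ScrollableCursor cursor = (ScrollableCursor) query.getSingleResult();
            int i = 0;
            while (cursor.hasNext()) {
                Object doc = cursor.next();
                // ... stream doc's LOB content to the user ...
                if (++i % 10 == 0) {
                    em.clear(); // release references so memory stays bounded
                }
            }
            cursor.close();
        }
    }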
    Best Regards,
    Chris

  • Programmatically use distributed caching in SharePoint 2013?

    As per the TechNet article below, developers are not allowed to use the AppFabric instance that comes with SharePoint 2013. The expectation is to deploy a separate AppFabric cluster for custom applications.
    http://technet.microsoft.com/en-us/library/jj219572.aspx#Important
    Distributed caching was a long-awaited, nice SharePoint feature, but it does not make sense to restrict developer access to it. We have a requirement to cache a fairly small amount of data (it must be distributed), but we cannot deploy separate cache servers for this. What are my options (other than System.Web.Caching)? Is there an API to safely cache a smaller amount of data? What is the rationale behind restricting access to the default AppFabric instance that comes with SharePoint?
    Thanks in Advance,
    Amal

    Yes, it's just a thread-safe implementation of an object cache and not a distributed cache. Probably the reason behind recommending a separate AppFabric cache cluster is that additional named caches from custom solutions and applications would interfere with the SharePoint named caches. Without a scope and priority defined, the named caches of custom solutions may start evicting SharePoint items if their usage is much higher than that of the SharePoint ones.
    AppFabric Caching and SharePoint: Concepts and Examples (Part 1)
    This post is my own opinion and does not necessarily reflect the opinion or view of Slalom.

  • Newsfeed error - The operation failed because the server could not access the distributed cache.

    Recently installed SharePoint 2013 RTM; on the newsfeed page an error is displayed, and no entries display in the Following or Everyone tabs.
    "The operation failed because the server could not access the distributed cache."
    Reading through various posts, I've checked:
    - Activity feeds and mentions tabs are working as expected.
    - User Profile Service is operational and syncing as expected
    - Search is operational and indexing as expected
    - The farm was installed using the AutoSPInstaller scripts.
    - I don't believe this to be a permissions issue; during testing I added accounts to the admin group to verify.
    Any suggestions are welcomed, thanks.
    The full error message and trace logs are as follows.
    SharePoint returned the following error: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException. Internal error code: 55. Contact your system administrator
    for help in resolving this problem.
    From the trace logs there's several messages which are triggered around the same time:
    http://msdn.microsoft.com/en-AU/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx
    Handling an exception. Exception details: System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault). /LM/W3SVC/2/ROOT/d71732192b0d4afdad17084e8214321e-1-129962393079894191 System.ServiceModel.FaultException`1[[Microsoft.Office.Server.UserProfiles.FeedCacheFault, Microsoft.Office.Server.UserProfiles, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]], System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089: Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object..
     at Microsoft.Office.Server.UserProfiles.FeedCacheService.Microsoft.Office.Server.UserProfiles.IFeedCacheService.GetPublishedFeed(FeedCacheRetrievalEntity fcTargetEntity, FeedCacheRetrievalEntity fcViewingEntity, FeedCacheRetrievalOptions fcRetOptions)
     at SyncInvokeGetPublishedFeed(Object , Object[] , Object[] )    
     at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)    
     at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)    
     at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)    
     at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)    
     at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not
    set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault).
    SPSocialFeedManager.GetFeed: Exception: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. ) : Correlation
    ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()
    Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of
    an object.. ) : Correlation ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)
    Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Social.SPSocialException: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException.
    Internal error code: 55.    
     at Microsoft.Office.Server.Social.SPSocialUtil.TryTranslateExceptionAndThrow(Exception exception)    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass48`1.<S2SInvoke>b__47()    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)

    Thanks Thuan,
    I've restarted the Distributed Cache service and the error is still occurring.
    The AppFabric Caching Service is running under the service apps account, and does appear operational based on:
    > use-cachecluster
    > get-cache
    CacheName                                                                          [Host]         Regions
    default
    DistributedAccessCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedActivityFeedCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedActivityFeedLMTCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9               [SERVER:22233] LMT(Primary)
    DistributedBouncerCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedDefaultCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedLogonTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9                    [SERVER:22233] Default_Region_0538(Primary), Default_Region_0004(Primary), Default_Region_0451(Primary)
    DistributedSearchCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedSecurityTrimmingCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedServerToAppServerAccessTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9

  • Need Help regarding initial configuration for distributed cache

    Hi ,
    I am new to Tangosol and am trying to set up a basic partitioned distributed cache, but I have not been able to do so.
    Here is my scenario:
    My application DataServer creates the instance of Tangosolcache.
    I have this config.xml set on the machine where my application starts.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <!-- Caches with any name will be created as default near. -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!-- Default Distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!-- Default backing map scheme definition used by all the caches that
             do not require any eviction policies -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params></init-params>
        </class-scheme>
      </caching-schemes>
    </cache-config>
    Now on the same machine I start a different client using the command
    java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=near-cache-config.xml -classpath
    "C:/calypso/software/release/build" -jar ../lib/coherence.jar
    The problems I am facing are:
    1) Even if I do not start the client, my application server caches the data. My config.xml is set to distributed, so under no circumstances should it cache the data locally...
    2) I want to bind different caches to different processes on different machines, e.g. machine1 should cache cache1 objects, machine2 should cache cache2 objects, and so on. But I could not find any documentation that explains how to configure this. Can someone give me an example of how to do it?
    3) I want to know the details of the cache stored on any particular node -- how do I find out, for example, which caches machine1 contains and their corresponding object values, etc.?
    Regards
    Mahesh

    Hi, thanks for the answer.
    After digging into the wiki a lot, I found something related to KeyAssociation. I think what I need is an implementation of KeyAssociation that stores objects of a particular cache type on a particular node or group of nodes.
    Say, for example, I want this kind of setup:
    Cache1 --> node1, node2 (as I forecast this would take a lot of memory, I assign these JVMs something like 10 GB)
    Cache2 --> node3, assigned a small amount of memory (like 2 GB)
    and so on ...
    From the wiki documentation i see
    Key Association
    By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries, the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
    Does someone have an example explaining how this is done in the simplest way?
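    A minimal sketch of the application-code form of key association described in the wiki excerpt: a child key implements com.tangosol.net.cache.KeyAssociation so that related entries land in the same partition, and therefore on the same node (the order/order-line names are illustrative assumptions). Note that this co-locates related entries; it does not by itself pin a whole cache to a chosen machine:

    import com.tangosol.net.cache.KeyAssociation;
    import java.io.Serializable;

    // Key for an order-line entry; associated with its parent order id, so the
    // partitioned cache service places it in the same partition as the order.
    public class OrderLineKey implements KeyAssociation, Serializable {
        private final String orderId;
        private final int lineNo;

        public OrderLineKey(String orderId, int lineNo) {
            this.orderId = orderId;
            this.lineNo = lineNo;
        }

        // All keys returning the same associated key end up on the same member.
        public Object getAssociatedKey() {
            return orderId;
        }

        public boolean equals(Object o) {
            if (!(o instanceof OrderLineKey)) return false;
            OrderLineKey that = (OrderLineKey) o;
            return lineNo == that.lineNo && orderId.equals(that.orderId);
        }

        public int hashCode() {
            return orderId.hashCode() * 31 + lineNo;
        }
    }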

  • Different distributed caches within the cluster

    Hi,
    I've three machines n1, n2 and n3 respectively that host Tangosol. Two of them act as the primary distributed cache and the third acts as the secondary cache. I also have WebLogic running on n1, which, based on some requests, pumps data onto the distributed cache on n1 and n2. I've a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All three nodes are within the same cluster.
    I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. For example, I do not start an instance of Tangosol on node n3, and an object gets pruned from either n1 or n2; ideally I should get a storage-not-configured exception, which does not happen.
    The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol does populate the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and have a similar config file on n3 with local storage for Dist:n3 as true and local storage for the primary cache as false.
    Can I configure local-storage specific to a cache rather than to a node?
    I also have an EJB deployed on WebLogic that also entertains a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    >> I've three machines n1, n2 and n3 respectively that host Tangosol. Two of them act as the primary distributed cache and the third acts as the secondary cache.
    First, I am curious as to the requirements that drive this configuration setup.
    >> I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. [...] Can I configure local-storage specific to a cache rather than to a node? [...] I would have the statement NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service by service basis (i.e. distributed-scheme/local-storage).
    Later,
    Rob Misek
    Tangosol, Inc.
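    A cache-config sketch of that recommendation, with illustrative scheme and service names (assumptions, not Rob's exact configuration). On n1/n2 the secondary scheme would set local-storage to false, while the copy of the file deployed on n3 sets it to true:

    <distributed-scheme>
      <scheme-name>primary-distributed</scheme-name>
      <service-name>PrimaryDistributedCache</service-name>
      <local-storage>true</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
      <scheme-name>secondary-distributed</scheme-name>
      <service-name>SecondaryDistributedCache</service-name>
      <!-- false on n1/n2; true in the copy deployed on n3 -->
      <local-storage>false</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>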

  • Setup failover for a distributed cache

    Hello,
    For our production setup we will have 4 app servers, one clone per app server, so there will be 4 clones in the cluster. And we will have 2 JVMs for our distributed cache, one being a failover; both of those will be in the cluster.
    How would I configure the failover for the distributed cache?
    Thanks

    user644269 wrote:
    >> Right - so each of the near cache schemes defined would need to have the back map high-units set to where it could take on 100% of data.
    Specifically the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the Cache Configuration Elements page: http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements).
    There are two options:
    1) No Expiry -- In this case you would have to size the storage-enabled JVMs so that an individual JVM could store all of the data.
    or
    2) Expiry -- In this case you would set high-units to a value that you determine. If you want it to store all the data, it needs to be set higher than the total number of objects that you will store in the cache at any given time; or you can set it lower with the understanding that once the high-units limit is reached, Coherence will evict some data from the cluster (i.e. remove it from "cluster memory").
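    For illustration, a fragment showing where that high-units value lives, following the element path named above (the scheme names and numbers are placeholder assumptions):

    <near-scheme>
      <scheme-name>example-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <backing-map-scheme>
            <local-scheme>
              <!-- size so that one storage JVM can hold 100% of the data -->
              <high-units>100000</high-units>
            </local-scheme>
          </backing-map-scheme>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>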
    user644269 wrote:
    >> Other than that - there is no configuration needed to ensure that these JVMs act as a failover in the event one goes down.
    Correct, data fault tolerance is on by default (set to one level of redundancy).
    :Rob:
    Coherence Team

  • Distributed Cache Errors in WFE

    Unexpected error occurred in method 'Put' , usage 'Distributed Logon Token Cache' - Exception 'Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0016>:SubStatus<ES0001>:The
    connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.TimeoutException: The socket was aborted because an asynchronous receive
    from the socket did not complete within the allotted timeout of 00:02:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.IO.IOException: The read operation failed, see inner exception. ---> System.TimeoutException:
    The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:02:00. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.Sockets.SocketException: The
    I/O operation has been aborted because of either a thread exit or an application request   
     at System.ServiceModel.Channels.SocketConnection.HandleReceiveAsyncCompleted()   
     at System.ServiceModel.Channels.SocketConnection.OnReceiveAsync(Object sender, SocketAsyncEventArgs eventArgs)
     --- End of inner exception stack trace ---
     at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)   
     at System.ServiceModel.Channels.ConnectionStream.ReadAsyncResult.End(IAsyncResult result)   
     at System.Net.FixedSizeReader.ReadCallback(IAsyncResult transportResult)
     --- End of inner exception stack trace ---
     at System.Net.Security.NegotiateStream.EndRead(IAsyncResult asyncResult)   
     at System.ServiceModel.Channels.StreamConnection.EndRead()
     --- End of inner exception stack trace ---
     at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)   
     at System.ServiceModel.Channels.TransportDuplexSessionChannel.EndReceive(IAsyncResult result)   
     at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result)
     --- End of inner exception stack trace ---
     at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody, RequestBody reqBody)   
     at Microsoft.ApplicationServer.Caching.DataCache.InternalPut(String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[] tags, String region,
    IMonitoringListener listener)   
     at Microsoft.ApplicationServer.Caching.DataCache.<>c__DisplayClass25.<Put>b__24()   
     at Microsoft.ApplicationServer.Caching.DataCache.Put(String key, Object value, TimeSpan timeout)   
     at Microsoft.SharePoint.DistributedCaching.SPDistributedCache.Put(String key, Object value)'.

    Hi Amol,
    Check these links; they contain similar issues and resolutions:
    https://habaneroconsulting.com/insights/sharepoint-2013-distributed-cache-bug#.VJAjsyuUdP0
    http://blogs.msdn.com/b/sambetts/archive/2014/05/28/troubleshooting-appfabric-timeouts-on-sharepoint.aspx
    http://sharepointchips.com/distributed-cache-errors-in-the-uls-log/
    https://www.dmcinfo.com/latest-thinking/blog/id/8657/fix-sharepoint-2013-distributed-cache-timeouts
    https://social.technet.microsoft.com/Forums/sharepoint/en-US/2fad2277-8f5f-4323-8c18-621ae4bfe11a/refresh-the-sp2013-distributed-cache-services-logon-token-cache?forum=sharepointgeneral
    Kind Regards,
    John Naguib
    Technical Consultant/Architect
    MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation

  • Distributed Cache errors

    On my application server I am getting periodic entries under the General category:
    "Unable to write SPDistributedCache call usage entry."
    The error is every 5 minutes exactly. It is followed by:
    Calling... SPDistributedCacheClusterCustomProvider:: BeginTransaction
    Calling... SPDistributedCacheClusterCustomProvider:: GetValue(object transactionContext, string type, string key
    Calling... SPDistributedCacheClusterCustomProvider:: GetStoreUtcTime.
    Calling... SPDistributedCacheClusterCustomProvider:: Update(object transactionContext, string type, string key, byte[] data, long oldVersion).
    Sometimes this group of calls succeeds without an error and the sequence continues maybe for 3 iterations every 5 minutes. Then the error
    "Unable to write SPDistributedCache call usage entry.""
    happens again.
    My Distributed Cache Service is running on my Application Server and on my web front end.
    All values are default.
    Any idea why this is happening intermittently?
    Love them all...regardless. - Buddha

    Hi,
    From the error message, check whether the super accounts were set up correctly.
    Refer to the article about configuring object cache user accounts in SharePoint Server 2013:
    https://technet.microsoft.com/en-us/library/ff758656(v=office.15).aspx
    If the issue persists, please check the SharePoint ULS log located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS to get a detailed error description.
    Best Regards,
    Lisa Chen
    TechNet Community Support
