Query from Distributed Cache

Hi
I am a newbie to Oracle Coherence, trying to get hands-on experience by running an example (coherence-example-distributedload.zip) (Coherence GE 3.6.1). I am running two server instances. After this I ran "load.cmd" to distribute data across the two server nodes - I can see that the data is partitioned across the server instances.
Now I run another instance of the program (on another JVM), which joins the distributed cache and queries the data loaded on the server instances. I can see that the new JVM joins the cluster, but querying for data returns no records. Can you please tell me if I am missing something?
     NamedCache cache = CacheFactory.getCache("example-distributed");
     Filter greaterFilter = new GreaterFilter("getLocId", "1000");
     Set keySet = cache.keySet(greaterFilter);
The returned keySet is empty. Can you please help?
Thanks
sunder

I got this problem sorted out - the problem was in cache-config.xml. The correct configuration looks like this:
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache1</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <scheme-name>DBCacheLoaderScheme</scheme-name>
      <internal-cache-scheme>
        <local-scheme>
          <scheme-ref>DBCache-eviction</scheme-ref>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.test.DBCacheStore</class-name>
          <init-params>
            <init-param>
              <param-type>java.lang.String</param-type>
              <param-value>locations</param-value>
            </init-param>
            <init-param>
              <param-type>java.lang.String</param-type>
              <param-value>{cache-name}</param-value>
            </init-param>
          </init-params>
        </class-scheme>
      </cachestore-scheme>
      <cachestore-timeout>6000</cachestore-timeout>
      <refresh-ahead-factor>0.5</refresh-ahead-factor>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <thread-count>10</thread-count>
  <autostart>true</autostart>
</distributed-scheme>
<invocation-scheme>
  <scheme-name>example-invocation</scheme-name>
  <service-name>InvocationService1</service-name>
  <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
</invocation-scheme>
I had missed the <class-scheme> element inside <cachestore-scheme> of the <read-write-backing-map-scheme>.
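For reference, here is a minimal sketch of what the cache store class named in the configuration above might look like. The class name and the two String constructor parameters come from the config ("locations" and the {cache-name} macro); the method bodies and any actual database access are hypothetical:

import com.tangosol.net.cache.CacheStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class DBCacheStore implements CacheStore {
    private final String tableName; // "locations", from the first init-param
    private final String cacheName; // injected via the {cache-name} macro

    public DBCacheStore(String tableName, String cacheName) {
        this.tableName = tableName;
        this.cacheName = cacheName;
    }

    public Object load(Object key) {
        // hypothetical: SELECT the value for this key from tableName
        return null;
    }

    public void store(Object key, Object value) {
        // hypothetical: INSERT or UPDATE the row for this key in tableName
    }

    public void erase(Object key) {
        // hypothetical: DELETE the row for this key from tableName
    }

    public Map loadAll(Collection keys) {
        Map results = new HashMap();
        for (Iterator iter = keys.iterator(); iter.hasNext(); ) {
            Object key = iter.next();
            Object value = load(key);
            if (value != null) {
                results.put(key, value);
            }
        }
        return results;
    }

    public void storeAll(Map entries) {
        for (Iterator iter = entries.entrySet().iterator(); iter.hasNext(); ) {
            Map.Entry entry = (Map.Entry) iter.next();
            store(entry.getKey(), entry.getValue());
        }
    }

    public void eraseAll(Collection keys) {
        for (Iterator iter = keys.iterator(); iter.hasNext(); ) {
            erase(iter.next());
        }
    }
}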
Thanks
sunder

Similar Messages

  • How to query data from grid cache group after created global AWT group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment, and I am now at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    and I created a global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    however, I cannot query it from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    this is the example from the Oracle docs; I am not sure what I missed. Thanks for your help.

    Sounds like you have not created the global AWT cache group in the second datastore? There is a multi-step process needed to roll out a cache grid, and various things must be done on each node in the correct order. Have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
    Chris

  • Why is it only possible to run queries on a Distributed cache?

    I found by experimentation that if you put a NearCache (only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it will throw a runtime exception saying that the query operations are not supported on the ReplicatedCache.
    I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations where it is useful (such as in my application) to be able to run a local query on the cache to take advantage of the index APIs, etc, for your searches.

    Kris,
    I believe the only APIs that are currently not supported for ReplicatedCache(s) are "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
    The reason the index functionality was "pushed" out of the 2.x timeframe was an assumption that a ReplicatedCache would hold a not-too-big number of entries, and since all the data is "local" to the querying JVM, the performance of a non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in our future releases.
    Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by CacheFactory.getReplicatedCache method) using the NearCache construct.
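    For example (a minimal sketch - the cache name, filter, and getter are hypothetical), the query methods can be called directly on the cache handle returned by the ReplicatedCache service:
    NamedCache cache = CacheFactory.getCache("my-replicated-cache"); // backed by a replicated scheme
    Set keys = cache.keySet(new EqualsFilter("getState", "CA"));     // keySet(Filter)
    Set entries = cache.entrySet(new EqualsFilter("getState", "CA")); // entrySet(Filter)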
    Gene

  • Distributed Cache service stuck in Starting Provisioning

    Hello,
    I'm having a problem with starting/stopping the Distributed Cache service on one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled on all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts but one (the APP server) using the PowerShell commands below, which worked fine.
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    But later I attempted to add the service back to two hosts (the WFE servers) using the command below, and unfortunately one of them got stuck in the process. When I look at Services on Server in Central Admin, the status says "Starting".
    Add-SPDistributedCacheServiceInstance
    Also, when I execute the script below, the status says "Provisioning".
    Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
    I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
    I tried the script below,
    $instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
    $serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
    $serviceInstance.Unprovision()
    $serviceInstance.Delete()
    but it didn't work either, and I got the error below.
    "SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
    different objects and retry this operation.  The dependant objects are as follows: 
    SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
    Has anyone come across this issue? I would appreciate any help.
    Thanks!

    Hi,
    Are you able to ping the server that is already running Distributed Cache from this server? For example:
    ping WFE01
    As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from the cluster (the one configured to allow inbound ICMP traffic), you must configure the first server of the new cluster to allow inbound ICMP (ICMPv4) traffic through the firewall.
    You can create a rule to allow the incoming port.
    For more information, you can refer to the  blog:
    http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
    Thanks,
    Eric
    Eric Tao
    TechNet Community Support

  • Foundation 2013 Farm and Distributed Cache settings

    We are on a 3-tier farm - 1 WFE + 1 APP + 1 SQL - and have had many issues with AppFabric and Distributed Cache, plus an additional issue with noderunner/Search Services. Memory and CPU are running very high. I read that we shouldn't be running Search and Distributed Cache on the same server, nor using a WFE as a cache host. I don't have the budget to add another server to my environment.
    I found an article (IderaWP_CachingFormSharePointPerformance.pdf) saying "To make use of SharePoint's caching capabilities requires a Server version of the platform." because it requires the publishing feature, which Foundation doesn't have.
    So I removed Distributed Cache from my deployment (using PowerShell) and disabled AppFabric. This resolved 90% of the server errors, but performance didn't improve. Now, not only am I getting errors on Central Admin - it expects Distributed Cache - but I'm also seeing disk operation read times of 4000 ms.
    Questions:
    1) Should I enable AppFab and disable cache?
    2) Does Foundation support Dist Cache?  Do I need to run Distributed Cache?
    3) If so, can I run with just 1 cache host?  If I shouldn't run it on a WFE or an App server with Search, do I have to stop Search all together?  What happens with 2 tier farms out there? 
    4) Reading through the labyrinth of links on TechNet and MSDN on the subject, most of them say "Applies to SharePoint Server".
    5) Anyone out there on a Foundation 2013 production environment that could share your experience?
    Thanks in advance for any help with this!
    Monica

    That article is referring to BlobCache, not Distributed Cache. BlobCache requires Publishing, hence Server, but DistributedCache is required on all SharePoint 2013 farms, regardless of edition.
    I would leave your DistCache on the WFE, given the App Server likely runs Search. Make sure you install AppFabric CU5 and make the changes noted in the KB article for AppFabric CU3.
    You'll need to investigate your disk performance issues separately. It could be poor disk layout, under-spec'd disks, and so on. Detail on the disks that support SharePoint would be valuable (type, kind, RPM if applicable, LUNs in place, etc.).
    Trevor Seward

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal; the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations take ~45ms per bulk operation, where each operation is putting 100 1K items; for cache.getAll operations we see about ~15ms per bulk operation. Additionally, note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache and the types of operations you are performing?
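    For reference, a minimal sketch of this kind of bulk-operation timing, mirroring the 100 x 1K batches described above (the cache name and sizes are hypothetical):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.HashMap;
    import java.util.Map;

    public class BulkLatencyCheck {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("test-cache"); // hypothetical cache name

            // build one bulk batch: 100 entries of ~1K each
            Map batch = new HashMap();
            for (int i = 0; i < 100; i++) {
                batch.put("key-" + i, new byte[1024]);
            }

            long start = System.currentTimeMillis();
            cache.putAll(batch);
            System.out.println("putAll: " + (System.currentTimeMillis() - start) + " ms");

            start = System.currentTimeMillis();
            Map results = cache.getAll(batch.keySet());
            System.out.println("getAll: " + (System.currentTimeMillis() - start) + " ms (" + results.size() + " entries)");

            CacheFactory.shutdown();
        }
    }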
    thanks,
    mark

  • Getting All Entries from a cache

    Hi Folks,
         Just a small interesting observation. In an attempt to get back all the data from my partitioned cache I tried the following approaches:
          // EntrySet approach
          NamedCache cache = CacheFactory.getCache("MyCache");
          Iterator<Entry<MyKeyObj, MyObj>> iter = cache.entrySet().iterator();
          // iterate over the entries and read values

          // KeySet & getAll approach
          Map results = cache.getAll(cache.keySet());
          Iterator<Entry<MyKeyObj, MyObj>> iter2 = results.entrySet().iterator();
          // iterate over the entries and read values
     Retrieving ~47k objects from 4 nodes takes 21 seconds using the entrySet approach and 10 seconds using the keySet/getAll approach.
     Does that sound right to you? That implies that the entrySet iterator is lazily loaded, using get(key) for each entry.
         Regards,
         Max

    Hi Gene,
     I actually posted the question because we are currently performance-tuning our application, and there are scenarios where (due to having a large amount of badly organized legacy code with the bottom layers ported to Coherence) there are lots of invocations getting all the entries from some caches, sometimes even hundreds of times during the processing of a single HTTP request.
     In some cases (typically with caches having a low cache-size) we found that the entrySet-AlwaysFilter solution was way faster than the keySet-getAll solution, which was about as fast as the solution iterating over the cache (new HashMap(cache)).
     I just wanted to ask if there are some rules of thumb on up to what size it is efficient to use the AlwaysFilter on distributed caches, and where it starts to be better to use the keySet-getAll approach (from a naive test case, keySet-getAll seemed to be better upwards from a couple of thousand entries).
     Also, we are considering moving some of the caches (static data mostly, with usually fewer than 1000 entries, sometimes even as few as a dozen entries in a named cache, and in very few cases as many as 40000 entries) to a replicated topology, which is why I asked about the effect of using replicated caches...
     I expect the entrySet-AlwaysFilter to be slower than the iterating solution, since it effectively does the same and also has some additional filter evaluation to do.
     The keySet-getAll will be something similar to the iterating solution, I guess.
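     For reference, the two approaches being compared look like this in code (a snippet, not a benchmark; the cache name is hypothetical):
     NamedCache cache = CacheFactory.getCache("static-data");
     Set entries = cache.entrySet(AlwaysFilter.INSTANCE); // entrySet-AlwaysFilter
     Map all = cache.getAll(cache.keySet());              // keySet-getAll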
         What is to be known about the implementation of the values() method?
         Can it be worth using in some cases? Does it give us an instant snapshot in case of replicated caches? Is it faster than the entrySet(void) in case of replicated caches?
         Thanks and best regards,
         Robert

  • Distributed cache

    HI,
    We have a server (Server 1), on which the status of the Distributed cache was in "Error Starting" state.
    While applying a service pack due to some issue we were unable to apply the path (Server 1) so we decided to remove the effected server from the farm and work on it. the effected server (Server 1) was removed from the farm through the configuration wizard.
    Even after running the configuration wizard we were still able to see the server (Server 1) on the SharePoint central admin site (Servers in farm) when clicked, the service "Distributed cache" was still visible with a status "Error Starting",
    tried deleting the server from the farm and got an error message, the ULS logs displayed the below.
    A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
    isGraceFulShutDown)     at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
    8130ae9c-e52e-80d7-aef7-ead5fa0bc999
    We are unable to perform any install/repair operation of SharePoint on the affected server (Server 1); as the server is no longer in the farm, we are unable to run any PowerShell commands.
    Questions:-
    What would cause that to happen?
    Is there a way to resolve this issue? (please provide the steps)
    Satyam

    Hi
    try this:
    http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
    Hope this helps.

  • Error handling for distributed cache synchronization

    Hello,
    Can somebody explain to me how the error handling works for the distributed cache synchronization ?
    Say I have four nodes of a WebLogic cluster and 4 different sessions, one on each of those nodes.
    On node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, D. But for some reason the connection between node A and node B is lost.
    In the following xml
    <cache-synchronization-manager>
    <clustering-service>...</clustering-service>
    <should-remove-connection-on-error>true</should-remove-connection-on-error>
    If I set this to true, does this mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent, and that in order to handle errors I do not have to write any code to capture this kind of error.
    Is that correct?
    Aswin.

    This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
    For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
    You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log.
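    For illustration, a minimal sketch of such an ExceptionHandler, assuming the classic oracle.toplink package names (the logging body is hypothetical):
    import oracle.toplink.exceptions.ExceptionHandler;

    // Receives exceptions (including cache synchronization errors) raised through the session.
    public class CacheSyncExceptionHandler implements ExceptionHandler {
        public Object handleException(RuntimeException exception) {
            // hypothetical handling: log the error and let processing continue
            System.err.println("Cache synchronization error: " + exception);
            return null;
        }
    }
    // registration, e.g. during session setup:
    // session.setExceptionHandler(new CacheSyncExceptionHandler());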

  • High-units reflect twice the amount with dual JVM's in a distributed cache

    Hi all,
    I have a question - I have a near cache scheme defined - running 4 JVMs with my application deployed to them (localstorage=false) - and 2 JVMs for the distributed cache (localstorage=true).
    The high-units is set to 2000, but the cache is allowing 4000. Is this because each JVM allows 2000 high-units of its own?
    I was under the impression that as long as Coherence is running on the same multicast address and port, the total high-units would be 2000, not 4000.
    Thanks...

    user644269 wrote:
    > The high-units is set to 2000, but the cache is allowing 4000. Is this because each JVM allows 2000 high-units of its own?

    Hi,
    the high-units setting is per backing map, so in your case it means 2000 units per storage-enabled node - with 2 storage-enabled JVMs that is 2 x 2000 = 4000 across the cluster.
    From 3.5 it will become a bit more complex with the partition-aware backing maps.
    Best regards,
    Robert

  • The Security Token Service is not available error on dedicated Distributed Cache server

    I have an error on a dedicated Distributed Cache server stating that the Security Token Service is not available. I was under the impression that when Distributed Cache is running on a dedicated server, the only service that should be enabled is Distributed Cache.
    The token service is working as expected on all servers but this one. Does this service need to be started, or should I just ignore this error message?
    Jennifer Knight (MCITP, MCPD)

    As per my (limited) experience with 2013, if STS is working fine on the web servers then I am sure SharePoint will be fine... Distributed Cache stores the security tokens issued by the STS. No need to worry about this error.
    DistributedLogonTokenCache (the logon token cache): this cache stores the security token issued by a Secure Token Service for use by any web server in the server farm. Any web server that receives a request for resources can access the security token from the cache, authenticate the user, and provide access to the resources requested.
    I would say check the ULS logs and get more details about the error and why it's not working on that server.
    Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault-tolerant system that will replicate cache entries across a small set of systems. I want the cache to be persistent even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation, or is there a configuration that I can use that uses off-the-shelf pluggable caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through operation how does Coherence figure out which member of the cluster will do the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same; you need to specify the cache store class name appropriately in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings, although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know, who owns a certain partition. Hence, the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
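    For illustration, a minimal sketch of asking Coherence for a key's owner (assuming the com.tangosol.net.PartitionedService API; the cache name and key are hypothetical):
    NamedCache cache = CacheFactory.getCache("example-distributed");
    PartitionedService service = (PartitionedService) cache.getCacheService();
    Member owner = service.getKeyOwner("someKey"); // the member currently responsible for this key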
    Best regards,
    Robert

  • Using Tangosol Coherence in conjunction with Kodo JDO for distributed caching

    JDO currently has a perception problem in terms of performance. Transparent
    persistence is perceived to have a significant performance overhead compared
    to hand-coded JDBC. That was certainly true a while ago, when the first JDO
    implementations were evaluated. They typically performed about half as well
    and with higher resource requirements. No doubt JDO vendors have closed that
    gap by caching PreparedStatements, queries, data, and by using other
    optimizations.
    Aside from the ease of programming through transparent persistence, I
    believe that using JDO in conjunction with distributed caching techniques in
    a J2EE managed environment has the opportunity to transparently give
    scalability, performance, and availability improvements that would otherwise
    be much more difficult to realize through other persistence techniques.
    In particular, it looks like Tangosol is doing a lot of good work in the
    area of distributed caching for J2EE. For example, executing parallelized
    searches in a cluster is a capability that is pretty unique and potentially
    very valuable to many applications. It would appear to me to be a lot of
    synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
    implementation of Kodo JDO's distributed cache would be a natural desire for
    enterprise applications that have J2EE clustering requirements for high
    scalability, performance, and availability.
    I'm wondering if Solarmetric has any ideas or plans for closer integration
    (e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is just my
    personal opinion, but I think a partnership between your two organizations
    to do this integration would be mutually advantageous, and it would
    potentially be very attractive to your customers.
    Ben

    Marc,
    Thanks for pointing that out. That is truly excellent!
    Ben
    "Marc Prud'hommeaux" <[email protected]> wrote in message
    news:[email protected]...
    Ben-
    We do currently have a plug-in for backing our data cache with a
    Tangosol cache.
    See: http://docs.solarmetric.com/manual.html#datastore_cache_config
    --
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Object locking in Distributed cache

    Hi,
         I have gone through some of the posts on the forum regarding the locking issues.
     The thread at http://www.tangosol.net/forums/thread.jspa?messageID=3416 specifies that
     "...locks block locks, not gets or puts. If one member locks a key, it prevents another member from locking the key. It does not prevent the other member from getting/putting that key."
     What exactly does this statement mean?
     I'm using a distributed cache and would like to lock an object before "put" or "remove" operations. But I want "read" operations to be without locks. Now my questions are:
     1) In a distributed cache setup, if I try to obtain a lock before put or remove and discover that I cannot obtain it (i.e. false is returned) because someone else has locked it, then how do I discover when the other entity releases the lock, and when should I retry for the lock?
     2) Again, if I lock using the "lock(Object key)" method, can I be sure that no other cluster node would be writing/reading that key until I release that lock?
     3) The first post in the thread http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588 suggests that in a distributed setup locks have no effect on other cluster nodes. But I want locks to be valid across the cluster. If an item is locked, then no one else should be able to transact with it. How do I get that?
         Regards,
         Mohnish

    Hi Mohnish,
          >> 1) In a distributed cache setup, if I try to obtain
          >> a lock before put or remove and discover that I cannot
          >> obtain it (i.e. false is returned) because someone else
          >> has locked it, then how do I discover when the other
          >> entity releases the lock, and when should I retry for the
          >> lock?
          You may try to acquire a lock (waiting indefinitely for lock acquisition) by calling cache.lock(key, -1); you may try for a specified time period by calling cache.lock(key, cMillis). With either of these approaches, your thread will block until the other thread releases the lock (or the timeout is reached). In either case, if the other node releases its lock your lock request will complete immediately and your thread will resume execution.
          >> 2) Again, if I lock using the "lock(Object key)" method,
          >> can I be sure that no other cluster node would be
          >> writing/reading that key until I release that lock?
         If you want to prevent other threads from writing/reading a cache entry, you must ensure that those other threads lock the key prior to writing/reading it. If they do not lock it prior to writing, they may have dirty writes (overwriting your write or vice versa); if they do not lock prior to reading, they may have dirty reads (having the underlying data change after they've read it).
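          For example, the usual idiom is to pair the lock with a try/finally block (a sketch; key and newValue are hypothetical):
          if (cache.lock(key, -1)) {      // wait indefinitely for the lock
              try {
                  Object current = cache.get(key);
                  cache.put(key, newValue);
              } finally {
                  cache.unlock(key);      // always release the lock
              }
          }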
          >> 3) The first post in the thread
          http://www.tangosol.net/forums/thread.jspa?forumID=37&threadID=588
          >> suggests that in a distributed setup locks have no
          >> effect on other cluster nodes. But I want locks to be
          >> valid across the cluster. If an item is locked, then no one
          >> else should be able to transact with it. How do I get that?
          The first post in that thread states that if the second thread doesn't lock, then it will overwrite the first thread (even if the first thread has locked). However, there is an inconsistency, in that the Replicated cache topology has a stronger memory model than the Distributed/Near topologies. In the Replicated topology, locks not only block out locks from other nodes, they also prevent puts from other nodes.
         Jon Purdy
         Tangosol, Inc.

  • Newsfeed error - The operation failed because the server could not access the distributed cache.

    I recently installed SharePoint 2013 RTM; on the newsfeed page an error is displayed, and no entries display in the Following or Everyone tabs.
    "The operation failed because the server could not access the distributed cache."
    Reading through various posts, I've checked:
    - Activity feeds and mentions tabs are working as expected.
    - User Profile Service is operational and syncing as expected
    - Search is operational and indexing as expected
    - The farm was installed based on the autospinstaller scripts.
    - I don't believe this to be a permissions issue; during testing I added accounts to the admin group to verify.
    Any suggestions are welcome, thanks.
    The full error message and trace logs are as follows.
    SharePoint returned the following error: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException. Internal error code: 55. Contact your system administrator
    for help in resolving this problem.
    From the trace logs there's several messages which are triggered around the same time:
    http://msdn.microsoft.com/en-AU/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx
    Handling an exception. Exception details: System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in
    FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault)./LM/W3SVC/2/ROOT/d71732192b0d4afdad17084e8214321e-1-129962393079894191System.ServiceModel.FaultException`1[[Microsoft.Office.Server.UserProfiles.FeedCacheFault,
    Microsoft.Office.Server.UserProfiles, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]], System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089Unexpected exception in FeedCacheService.GetPublishedFeed: Object
    reference not set to an instance of an object..  
     at Microsoft.Office.Server.UserProfiles.FeedCacheService.Microsoft.Office.Server.UserProfiles.IFeedCacheService.GetPublishedFeed(FeedCacheRetrievalEntity fcTargetEntity, FeedCacheRetrievalEntity fcViewingEntity, FeedCacheRetrievalOptions fcRetOptions)
     at SyncInvokeGetPublishedFeed(Object , Object[] , Object[] )    
     at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
     at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
     at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
     at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
     at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
    System.ServiceModel.FaultException`1[Microsoft.Office.Server.UserProfiles.FeedCacheFault]: Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not
    set to an instance of an object.. (Fault Detail is equal to Microsoft.Office.Server.UserProfiles.FeedCacheFault).
    SPSocialFeedManager.GetFeed: Exception: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of an object.. ) : Correlation
    ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()
    Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Microfeed.MicrofeedException: ServerErrorFetchingConsolidatedFeed : ( Unexpected exception in FeedCacheService.GetPublishedFeed: Object reference not set to an instance of
    an object.. ) : Correlation ID:db6ddc9b-8d2e-906e-db86-77e4c9fab08f : Date and Time : 31/10/2012 1:40:20 PM    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.PopulateConsolidated(SPMicrofeedRetrievalOptions retOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedThreadCollection.Populate(SPMicrofeedRetrievalOptions retrievalOptions, SPMicrofeedContext context)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonGetFeedFor(SPMicrofeedRetrievalOptions retrievalOptions)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.CommonPubFeedGetter(SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType feedType, Boolean publicView)    
     at Microsoft.Office.Server.Microfeed.SPMicrofeedManager.GetPublishedFeed(String feedOwner, SPMicrofeedRetrievalOptions feedOptions, MicrofeedPublishedFeedType typeOfPubFeed)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.Microsoft.Office.Server.Social.ISocialFeedManagerProxy.ProxyGetFeed(SPSocialFeedType type, SPSocialFeedOptions options)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass4b`1.<S2SInvoke>b__4a()    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)
    Microsoft.Office.Server.Social.SPSocialFeedManager.GetFeed: Microsoft.Office.Server.Social.SPSocialException: The operation failed because the server could not access the distributed cache. Internal type name: Microsoft.Office.Server.Microfeed.MicrofeedException.
    Internal error code: 55.    
     at Microsoft.Office.Server.Social.SPSocialUtil.TryTranslateExceptionAndThrow(Exception exception)    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)    
     at Microsoft.Office.Server.Social.SPSocialFeedManager.<>c__DisplayClass48`1.<S2SInvoke>b__47()    
     at Microsoft.Office.Server.Social.SPSocialUtil.InvokeWithExceptionTranslation[T](ISocialOperationManager target, String name, Func`1 func)

    Thanks Thuan,
    I've restarted the Distributed Cache service and the error is still occurring.
    The AppFabric Caching Service is running under the service apps account, and does appear operational based on:
    > use-cachecluster
    > get-cache
    CacheName                                                             [Host] Regions
    default
    DistributedAccessCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedActivityFeedCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedActivityFeedLMTCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9  [SERVER:22233] LMT(Primary)
    DistributedBouncerCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedDefaultCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedLogonTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9       [SERVER:22233] Default_Region_0538(Primary), Default_Region_0004(Primary), Default_Region_0451(Primary)
    DistributedSearchCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedSecurityTrimmingCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
    DistributedServerToAppServerAccessTokenCache_1e9f4999-0187-40e8-aa92-f8308d47d6e9
