Using CacheLoader for a replicated cache in Coherence 3.6

Hi,
Is it possible to configure a CacheLoader for a replicated cache in Coherence 3.6? The backing map for this cache will be a local scheme.
Regards,
CodeSlave

We have a "start of day" process that just runs up a Java client (full cluster member, but storage disabled node) that clears and then repopulates a number of "reference data" replicated caches we use in our application. Use the hints-n-tips in the Coherence Developer's Guide (bulk operations, etc.) to get decent performance. We load the data from an Oracle database. Again, tune the extract side (JDBC/JPA batching, etc.) to get that side of things performing well.
For ad-hoc, intra day updates to the replicated caches (and you should look to minimise these), we use a "listener" that attaches to an Oracle database DCN (data change notification) stream.
Cheers,
Steve
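A minimal sketch of such a start-of-day loader, assuming a storage-disabled member and bulk putAll() as Steve describes. The cache name, JDBC URL, query, and batch size are illustrative assumptions, not details from his setup; the Oracle JDBC driver is assumed to be on the classpath.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class StartOfDayLoader {
    public static void main(String[] args) throws Exception {
        // Started with -Dtangosol.coherence.distributed.localstorage=false so this
        // JVM joins as a full cluster member that stores no partitioned data.
        NamedCache cache = CacheFactory.getCache("reference-data"); // assumed cache name
        cache.clear();

        // Tune the extract side too: a large fetch size avoids a round trip per row.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password"); // assumed URL
        Statement stmt = con.createStatement();
        stmt.setFetchSize(1000);
        ResultSet rs = stmt.executeQuery("SELECT code, description FROM ref_data"); // assumed table

        // Bulk operations per the Developer's Guide: putAll() in batches, not put() per row.
        Map<String, String> batch = new HashMap<String, String>();
        while (rs.next()) {
            batch.put(rs.getString(1), rs.getString(2));
            if (batch.size() >= 1000) {
                cache.putAll(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            cache.putAll(batch);
        }
        rs.close();
        stmt.close();
        con.close();
        CacheFactory.shutdown();
    }
}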

Similar Messages

  • One concern for replicated cache

    For replicated caches, I think changes are replicated asynchronously, so how should I understand the claim that update operations perform badly when there are many nodes? Could you kindly explain the costs involved? Thanks!
    BR
    michael

    user8024986 wrote:
    Hi Robert,
    That sounds reasonable: unicast and multicast messages are sent out without having to wait for responses, and multicast reduces the amount of traffic on the network at the same time.
    Then my concern is still this: for a replicated cache, changes are sent as messages to the other nodes asynchronously (without waiting for a response, whether unicast or multicast), so the cost is mainly the sending of the changes, which depends on the state of the network. If the network capacity is high enough, and since the messages are sent asynchronously, the impact on performance should be limited, right?
    thanks,
    Michael

    Michael,
    it may not have been clear, but what Aleks said is still true. The interleaving means that messages are sent out to recipient nodes without waiting for a response before sending to the next node, but the cache.put() call returns only after positive responses have arrived from all cache nodes confirming that the update was incorporated into their own copy of the data (or after the death of a recipient node is detected, in which case its response is no longer waited for).
    So the overall cost on the network includes both the sends and the responses, and since in general responses go to a single node (the sender of the message being replied to), even a multicast update results in several unicast responses.
    But yes, the more cluster nodes there are, the larger the load this puts on the network.
    There are several measures in Coherence that try to decrease this effect on the network, e.g. bundling messages or ACKs headed to the same destination so that they can be sent in fewer packets than if they were sent alone (this is particularly effective for small messages and ACKs). This really pays off when each node has many threads doing cache operations, as that increases the likelihood of multiple messages or ACKs being sent to the same node at roughly the same time.
    But in general, if you have frequent writes to a replicated cache, you can't really scale it past a certain number of cluster nodes without saturating the network, and you should consider switching to partitioned caches (the distributed cache service). Even near and continuous query caches are not very effective for write-heavy caches (more writes than reads).
    Even if the network is able to keep up, more messages still increase the length of each node's queue of messages to respond to, so more messages probably mean longer response times.
    Best regards,
    Robert
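    If it helps with experimenting, here is a small probe along these lines; a sketch only, not anything from this thread. The cache name is an assumption and must be mapped to a replicated-scheme in your config. Run it while varying the cluster size; the average put() latency should grow with the member count, for the reasons Robert gives above:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class PutLatencyProbe {
        public static void main(String[] args) {
            // "repl-test" is an assumed cache name mapped to a replicated-scheme.
            NamedCache cache = CacheFactory.getCache("repl-test");
            int n = 10000;
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                // Each put() returns only after every member has acknowledged the update.
                cache.put(Integer.valueOf(i), "value-" + i);
            }
            long avgMicros = (System.nanoTime() - start) / n / 1000;
            System.out.println("members=" + CacheFactory.ensureCluster().getMemberSet().size()
                    + ", avg put=" + avgMicros + "us");
            CacheFactory.shutdown();
        }
    }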

  • [SOLVED] using squid for local cache and configuring firefox

    Hi, I have just installed squid following the instructions at http://wiki.archlinux.org/index.php/Squid and can run squidclient to retrieve webpages. The headers include
    X-Cache: HIT from localhost.localdomain
    X-Cache-Lookup: HIT from localhost.localdomain:3128
    which I take to mean the cache is working? Next I want Firefox to use the proxy, but changing the settings in Edit/Preferences/Advanced/Network/Settings to localhost (or 127.0.0.1) and port 3128 doesn't work. Firefox can't see the proxy for some reason. I have set
    http_access allow all
    in the conf for now. How do I get firefox to use squid?
    TIA
    Last edited by jaybee (2010-07-09 13:22:52)

    It seems that entering 'localhost' rather than 'http://localhost' was what I needed.

  • Replicated Cache Data Visibility

    Hello
    When is the cache entry visible when doing a put(..) to a Replicated cache?
    I've been reading the documentation for the Replicated Cache Topology, and it's not clear when data that has been put(..) is visible to other nodes.
    For example, if I have a cluster of 26 nodes (A through Z) and I invoke replicatedCache.put("foo", "bar") from member A, at what point is the Map.Entry("foo", "bar") present and queryable on member B? Is it as soon as it has been put into the local storage on B? Or only just before the call to put(..) on member A returns successfully? While the put(..) from member A is "in flight", is it possible for two simultaneous reads on members F and Y to return different results because the put(..) hasn't yet been applied on one of the nodes?
    Regards
    Pete

    Hi Pete,
    Data replication is done asynchronously (you may refer to this post: Re: Performance of replicated cache vs. distributed cache), so you may read different results on different nodes.
    Furthermore, may I ask what your use case for the replicated cache is?
    Regards,
    Rock
    Oracle ACS
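    One way to observe this empirically is to register a MapListener on another member; it fires as the update lands in that member's local copy, which, per Rock's reply, can happen before or after the put() on member A has returned. A sketch, with the cache name assumed:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;

    public class VisibilityProbe {
        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("repl-test"); // assumed cache name
            cache.addMapListener(new MapListener() {
                public void entryInserted(MapEvent evt) { log(evt); }
                public void entryUpdated(MapEvent evt)  { log(evt); }
                public void entryDeleted(MapEvent evt)  { log(evt); }
                void log(MapEvent evt) {
                    // Timestamp when the change became visible in this member's copy.
                    System.out.println(System.nanoTime() + " " + evt);
                }
            });
            Thread.sleep(Long.MAX_VALUE); // stay up to keep receiving events
        }
    }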

  • Problem using Binary wrappers in a Replicated cache

    I'd like to store my keys and values as Binary objects in a replicated cache so I can monitor their size using a BINARY unit-calculator. However, when I attempt to put a key and value as com.tangosol.util.Binary, I get the enclosed IOException.
    I have a very simple sample to reproduce the exception, but I don't see a way to attach it here.
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Failed to deserialize a key for cache MyMap
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): An exception (java.io.IOException) occurred reading Message LeaseUpdate Type=9 for Service=ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=2, Version=3.0, OldestMemberId=1}
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1): Terminating ReplicatedCache due to unhandled exception: java.io.IOException
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=1):
    java.io.IOException: unsupported type / corrupted stream
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2162)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:3)
         at com.tangosol.coherence.component.net.message.LeaseMessage.read(LeaseMessage.CDB:11)
         at com.tangosol.coherence.component.net.message.leaseMessage.ResourceMessage.read(ResourceMessage.CDB:5)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:110)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
         at java.lang.Thread.run(Thread.java:619)
    2008-07-07 09:21:35.061 Oracle Coherence GE 3.3.1/389 <D5> (thread=ReplicatedCache, member=1): Service ReplicatedCache left the cluster

    Hi John,
    Currently the Replicated cache service uses Binary only for its internal value representation and does not allow storing arbitrary Binary objects in replicated caches. You should have no problems using the partitioned (Distributed) cache service with this approach.
    Regards,
    Gene
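    A sketch of Gene's suggestion: the same Binary keys and values that break the replicated cache service should go through fine on a partitioned cache. The cache name here is an assumption; map it to a distributed-scheme (with a BINARY unit-calculator if you want size monitoring):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Binary;
    import com.tangosol.util.ExternalizableHelper;

    public class BinaryPut {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-binary"); // assumed name
            // Wrap ordinary serializable objects as Binary for storage.
            Binary binKey   = ExternalizableHelper.toBinary("some-key");
            Binary binValue = ExternalizableHelper.toBinary("some-value");
            cache.put(binKey, binValue);
            Binary read = (Binary) cache.get(binKey);
            System.out.println(ExternalizableHelper.fromBinary(read)); // prints "some-value"
            CacheFactory.shutdown();
        }
    }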

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does it imply that Coherence handles all the locking for you? In what situations do I have to do my own locking?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    The first is fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the Distributed cache topology (not Replicated) for transactional data; Distributed is in fact fully synchronous.
    The other reason people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact, most of the complexity of the Java Memory Model arises from the desire to avoid requiring simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.
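    In code, the lock/unlock pattern Jon describes looks roughly like the sketch below; NamedCache extends Coherence's ConcurrentMap, so explicit locks play the role that synchronized blocks play inside a single JVM. The cache name and key are made up for illustration:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LockedUpdate {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("accounts"); // assumed cache name
            String key = "acct-42";                               // assumed key
            if (cache.lock(key, -1)) { // -1 waits indefinitely for the lock
                try {
                    // A read-modify-write is only race-free while holding the lock.
                    Integer balance = (Integer) cache.get(key);
                    int current = (balance == null) ? 0 : balance.intValue();
                    cache.put(key, Integer.valueOf(current + 100));
                } finally {
                    cache.unlock(key); // always release, even on failure
                }
            }
            CacheFactory.shutdown();
        }
    }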

  • Cache and performance issue when browsing an SSAS cube using Excel for the first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time when I try to open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM), the first open takes 10 minutes; from the next run onwards it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cubes, of which 3 get a full refresh and 1 an incremental refresh. After the daily cube refresh it takes 10-odd minutes to open a cube on an end user's system; from then on it opens really fast, within 10 seconds. After a cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note the actual performance and time improvement when browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
          <Object>
            <DatabaseID>FINANCE CUBES</DatabaseID>
          </Object>
          <Type>ProcessFull</Type>
          <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
        </Process>
      </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job ("Cache Warming - Profitability Analysis") for cube cache warming, one for each cube in each cube db, like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally, after each cube refresh step, we are adding a new T-SQL step that calls these individual jobs:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • Reporting Services will not automatically use a different replica for the report server databases when a failover occurs. How to overcome this issue

    Reporting Services offers limited support for using AlwaysOn Availability Groups with report server databases. The report server databases can be configured in an AG as part of a replica; however, Reporting Services will not automatically use a different replica for the report server databases when a failover occurs. How can this issue be overcome? Is there any workaround for it?
    Rahul

    Hi.
    With the AlwaysOn listener you should have a single DNS name to connect to regardless of which cluster node is active. Are you using the listener service? If not, please refer to the link below.
    http://msdn.microsoft.com/en-us/library/hh213417.aspx#AGlisteners

  • I have not been able to use Firefox for over a year as I get a 404 error message. I have cleared caches etc., deleted & reinstalled twice, and it freezes on the Hulu site.

    I have not been able to use Firefox for over a year now, as I get a 404 error message; it is frozen on the Hulu site.

    Are you using a bookmark or does this also happen if you type the address of the main (home) page of the website?
    Bookmarked pages can become invalid, so you may have to enter the main page and then navigate to the wanted page.
    Clear the cache and remove cookies only from websites that cause problems.
    "Clear the Cache":
    *Firefox > Preferences > Advanced > Network > Cached Web Content: "Clear Now"
    "Remove Cookies" from sites causing problems:
    *Firefox > Preferences > Privacy > "Use custom settings for history" > Cookies: "Show Cookies"

  • Since using CloudFlare for my site I'm having trouble with the Firefox cache

    Hello,
    Since I'm using CloudFlare for my site, I'm having a weird problem that occurs only with Firefox (version 16.0.2 / Mac OS X 10.6.8), not with Chrome or Safari.
    When I navigate from the main page of my WordPress site to a single post, then back to the main page, then to another single post and back again to the main page, the browser displays a binary file instead of the normal page. After emptying Firefox's cache the page loads properly again.
    When I pause CloudFlare the problem disappears.
    I'm asking here because this does not happen with other browsers.
    The site's config is: WordPress 3.4.2 with the W3 Total Cache plugin and browser caching enabled.
    Disabling the plugins makes no difference.
    Thanks in advance to anyone who could help.

    Hi,
    This is not a solution, but one way you could troubleshoot this is through the [https://developer.mozilla.org/en-US/docs/Tools/Web_Console Web Console], opened via '''Tools''' ('''Alt''' + '''T''') > '''Web Developer'''. You need to have it open before loading/reloading etc. Clicking on the GET and POST links provides detailed info including Content-Type, encoding, etc. You can also right-click and select '''Log Request and Response Bodies''' to view the transferred data. Hopefully you will be able to find the cause and resolve it.
    [https://developer.mozilla.org/en-US/docs/Tools Firefox Tools]

  • [svn:fx-trunk] 5604: Ensuring qualified class names are used in type selector cache keys for Flex 4 applications.

    Revision: 5604
    Author: [email protected]
    Date: 2009-03-26 14:00:26 -0700 (Thu, 26 Mar 2009)
    Log Message:
    Ensuring qualified class names are used in type selector cache keys for Flex 4 applications.
    QE: Yes, this should address style issues for test cases that contain two different components with the same local name.
    Dev: No
    Doc: No
    Checkintests: Pass
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/framework/src/mx/styles/StyleProtoChain.as

    Remember that Arch Arm is a different distribution, but we try to bend the rules and provide limited support for them.  This may or may not be unique to Arch Arm, so you might try asking on their forums as well.

  • Does Comcast Use My Computer to Cache Data for Wifi Guests?

    Sometimes in the evening, between about 5:00 pm and 10:00 pm, I get inexplicable lag spikes while playing my favorite online game. Trace-routes in both directions don't show anything of note. (The game developer has a tool that lets players trace from their game servers back to our computers, to help troubleshoot.)
    I know that nothing unexpected is running on my system, because I have disabled virtually everything I don't need in the Startup group; things like Windows Update, the Java updater, etc. are all set to notify me of updates rather than download at random, and I have moved all of the recurring jobs in the Task Scheduler to do their work while I'm usually sleeping. (Same with my anti-virus updates and scans.) Therefore, nothing should be making my hard disk LED light up unexpectedly, nor should anything be creating connection latency (as indicated by a meter in my game) that I can't find the cause of (via my trace-routes). After a great deal of checking and eliminating possible culprits (including component overheating, et al.), about the only thing left to blame is the Comcast Xfinity Wireless Gateway.
    (1) Does anybody here know if this wifi system is caching user data on my hard drive? Comcast's customer support people sure as heck don't have a clue.
    (2) How about the bandwidth pipeline? What if a neighbor or two is accidentally connecting to my wifi instead of their own home network and downloading movies while I'm trying to become the #1 hardcore barbarian in Sanctuary? If the download pipe is only so many megabytes wide, and I'm having to share it with the 2 folks mentioned above, plus 2 or 3 more doing heaven-knows-what, doesn't that make my own pipeline thinner? Could that be causing the non-network lag that I'm seeing in the trace-routes?
    I realize that I can disable this service, but I think there's a bigger picture here. If, in fact, Comcast's Home Wifi system is impacting its home customers in the manner I seem to be impacted, they need to start disclosing that.
    Thank you in advance to anybody who can provide answers to my questions, but please don't guess if you don't know for sure. But if you're suffering the same problems I am, by all means share your experience.
    My next step will be to turn the wifi off during my gaming sessions for the next couple of nights to see if things change. I didn't even know I could turn it off until I got to these forums tonight. When I asked a Comcast rep shortly after receiving this new device, they told me "no," that the wifi could not be disabled.

    Hello,
    The limits of the copper aren't as minimal as you'd expect. Comcast usually runs fiber on the poles, then copper to the home. Copper cable can achieve 1.25 Gbps. The coaxial cable running to your modem carries 8 channels downstream and 4 channels upstream, with a maximum of 38 Mbps download and 27 Mbps upload per channel. This means the maximum speed a Comcast gateway can achieve is 304 Mbps download and 108 Mbps upload. Mind you, you can buy a Comcast-compatible modem that does 16 download channels and 4 upload channels, i.e. 608 Mbps download and 108 Mbps upload. This could be higher if signal is increased; however, these are averages. XfinityWiFi, from what I've seen, is 16 Mbps download and 3 Mbps upload. So the better question is: how many neighbors use your xfinitywifi? Remember that you have private Internet and xfinitywifi; your neighbors should have a private network and a public network too.
    To answer your other question: no, Comcast put no software on your computer for your modem, so no, it can't cache. Data is cached locally on your own computer; likewise, a neighbor using your xfinitywifi caches locally on their own machine. The gateway has no caching functionality. Your hard disk is constantly writing; Windows sends data to it to update the time and check that it is still connected, and any open programs keep running.
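    The channel arithmetic from that reply, spelled out in a tiny sketch (the per-channel figures are the reply's own, not an authoritative DOCSIS spec):

    public class ChannelMath {
        public static void main(String[] args) {
            int downPerChannel = 38; // Mbps, per the reply above
            int upPerChannel   = 27;
            System.out.println(" 8 down channels: " + (8 * downPerChannel) + " Mbps");  // 304
            System.out.println("16 down channels: " + (16 * downPerChannel) + " Mbps"); // 608
            System.out.println(" 4 up channels:   " + (4 * upPerChannel) + " Mbps");    // 108
        }
    }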

  • 'use dynamic name' and 'caching properties' options for alias tables...

    Hello everybody,
    can anybody please explain the 'use dynamic name' and 'caching properties' options for alias tables...
    Thanks...
    eagerly waiting for a response..
    Vijay

    You want to create a dynamic target table name, right?
    You can refresh a variable like #GET_SESSION:
    #GET_SESSION = SELECT <%=odiRef.getSession("SESS_NAME")%> FROM DUAL
    My tmp table name is then something like TMP_#GET_SESSION.
    Then, in your package, refresh the #GET_SESSION variable and you can use it.
    I hope this is helpful.
    Thanks

  • Issues with TopLink Cache Coordination using JMS for manual DB updates

    Hi,
    We have 2 web applications using the same database and the TopLink library, with a separate session object for each application. We use JMS for cache coordination. JMS propagates messages between the applications successfully, and we can see the same object changes in both applications. Now we are trying to refresh the cache after manual updates in the database. When we refresh a single object that was modified in the database, it refreshes in the application from which the refresh was triggered, but not in the other one (JMS publishes the topic, but only one application is updated).
    Our intention is to refresh in one application and have JMS coordinate the update to the others after manual DB updates. Please let us know any comments on this.
    Database: Oracle 11g
    TopLink version: 9.0.3

    See,
    http://www.coderanch.com/t/592919/ORM/databases/Toplink-Cache-coordination-JMS-manual

  • Error while putting an object in Replicated Cache

    Hi,
    I am running just a single node of Coherence with a replicated cache, but when I try to add an object to it I get the exception below. However, I don't get this error when doing the same thing with a distributed cache. Can someone please tell me what I could be doing wrong here?
    Caused by: java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.test.abc.pkg.RRSCachedObject
         at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
         at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
         at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:242)
         at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
         at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2145)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2276)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2673)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:257)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    ClassLoader: java.net.URLClassLoader@b5f53a
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$ConverterFromBinary.convert(CacheServiceProxy.CDB:4)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.collections.wrapperMap.WrapperNamedCache.put(WrapperNamedCache.CDB:1)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy$WrapperNamedCache.put(CacheServiceProxy.CDB:2)
         at com.tangosol.coherence.component.net.extend.messageFactory.NamedCacheFactory$PutRequest.onRun(NamedCacheFactory.CDB:6)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.NamedCacheProxy.onMessage(NamedCacheProxy.CDB:11)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:28)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer$DaemonPool$WrapperTask.run(Peer.CDB:9)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:613)
    This is my config file -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--
    Replicated caching scheme.
    -->
              <replicated-scheme>
                   <scheme-name>MY-replicated-cache-scheme</scheme-name>
                   <service-name>ReplicatedCache</service-name>
                   <backing-map-scheme>
                        <local-scheme>
                        </local-scheme>
                   </backing-map-scheme>
                   <lease-granularity>member</lease-granularity>
                   <autostart>true</autostart>
              </replicated-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server</address>
                                  <port>port</port>
                             </local-address>
                             <receive-buffer-size>768k</receive-buffer-size>
                             <send-buffer-size>768k</send-buffer-size>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: user1945969 on Jun 5, 2010 4:16 PM

    By default, the local scheme should use FIXED as its unit-calculator, but from the trace it seems your replicated cache was using BINARY as the unit-calculator.
    Could you try adding <unit-calculator>FIXED</unit-calculator> to your cache config for the replicated cache?
    Or try inserting an object (both key and value) that implements Binary.
    Check the unit-calculator part on this link
    http://wiki.tangosol.com/display/COH35UG/local-scheme
