Question about LRU in a replicated cache

Hi Tangosol,
I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
<replicated-scheme>
  <scheme-name>local-repl-scheme</scheme-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>base-local-scheme</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</replicated-scheme>

<local-scheme>
  <scheme-name>base-local-scheme</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>50</high-units>
  <low-units>20</low-units>
  <expiry-delay/>
  <flush-delay/>
</local-scheme>
My test code does the following:
1. Inserts 50 entries into the cache
2. Checks to see that the cache size is 50
3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick in)
4. Checks the cache size again, expecting it to now be 20
With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU, however, the check at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
Any thoughts?
Thanks.
Pete L.
Addendum:
As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
1. Loop 2 times:
2. Create named cache instance "TestReplCache"
3. Insert 50 cache entries
4. Verify that cache size == 50
5. Insert 1 additional entry
6. Verify that cache size == 20
7. call cache.release()
8. End Loop
With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
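For reference, the loop above boils down to something like the following (a rough sketch, not the attached LruTest.java; the assertion helper is illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ReleaseVsDestroySketch {
    public static void main(String[] args) {
        for (int pass = 0; pass < 2; pass++) {
            // Same cache name on every pass
            NamedCache cache = CacheFactory.getCache("TestReplCache");

            for (int i = 0; i < 50; i++) {
                cache.put("key-" + i, "value-" + i);
            }
            assertSize(cache, 50);           // on the second pass this sees < 50 when release() was used

            cache.put("key-50", "value-50"); // exceeds high-units, eviction prunes down to low-units
            assertSize(cache, 20);

            cache.release();                 // releases local resources only
            // cache.destroy();              // destroying instead removes the cache cluster-wide and both passes behave as expected
        }
        CacheFactory.shutdown();
    }

    private static void assertSize(NamedCache cache, int expected) {
        if (cache.size() != expected) {
            throw new AssertionError("expected " + expected + " but was " + cache.size());
        }
    }
}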

Robert,
Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye, however, was that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a replicated cache backed by a local cache).
Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a replicated cache backed by a local cache.
Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
Pete L.

Attachment: coherence-cache-config.xml (rename 545.bin to coherence-cache-config.xml after the download completes)
Attachment: LruTest.java (rename 546.bin to LruTest.java after the download completes)

Similar Messages

  • Question about MAXCPU and clearing data cache

    Hi all,
    We're running MaxDB 7.6.03.15 and have following questions. I apologize for not grouping them together with my earlier set of questions along the similar lines.
    1. What would be an appropriate value for our application for MAXCPU when running on a quad processor machine. The application consists of the MaxDB along with 4 windows services and an instance of tomcat. Can we use 4 for MAXCPU or does it always need to be something less than the total number of CPUs on the machine (less than 4 here?). Please advise.
    2. When we're doing query execution time analysis we need to be able to clear the data cache between query runs. So far we achieve this by simply bringing the database offline and then online again. The problem with this approach is that it's time consuming; if we do 100 query runs, a lot of time is spent waiting for the database to go from online to offline to online again. Please let us know if there's a faster way to achieve the same thing.
    Thanks for your time.
    Sincerely,
    Sameer

    > 1. What would be an appropriate value for our application for MAXCPU when running on a quad processor machine. The application consists of the MaxDB along with 4 windows services and an instance of tomcat. Can we use 4 for MAXCPU or does it always need to be something less than the total number of CPUs on the machine (less than 4 here?). Please advise.
    Of course you may set MAXCPU to 4, but you then risk the four user task UKTs using up all the CPU resources on your system.
    So it is usually better to be rather defensive here. If you're unsure about the CPU needs of your other services, start with MAXCPU 2, try it out, and adjust from there.
    > 2. When we're doing query execution time analysis we need to be able to clear the data cache between query runs. So far we achieve this by simply bringing the database offline and then online again. The problem with this approach is that it's time consuming; if we do 100 query runs, a lot of time is spent waiting for the database to go from online to offline to online again. Please let us know if there's a faster way to achieve the same thing.
    Well, x_cons pagecache_release should do that for you.
    But I was never really able to see the effect of it (e.g. the pages being read in again).
    If I'm not mistaken on this point - which is quite possible, because the command is rarely used and sparsely documented - it just moves the pages to the lower end of the LRU list, making them candidates for being moved out of the cache without actually removing them.
    However why don't you just use the resource monitor to figure out how many pages are touched by your statements?
    Then you don't have to bounce your instance.
    regards,
    Lars

  • I have a question about Lightroom 5... I used it last night, I go to get on it today and its will not open. I have an error msg "Lightroom encountered an error when reading from its preview cache and needs to quit" Lightroom will attempt to fix the proble

    I have a question about Lightroom 5... I used it last night; when I went to open it today, it would not open. I get the error message "Lightroom encountered an error when reading from its preview cache and needs to quit. Lightroom will attempt to fix the problem when reopened."

    https://forums.adobe.com/message/6219922#6219922
    See if the issue in the thread above helps you to solve your problem.

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How does the "On demand" cache group policy work, exactly? I know that an Online cache group stores no data in the CDB and the device makes direct requests to the backend, DCN is based on updates pushed from the backend, and Scheduled is based on a time period, but I don't understand how "On demand" works exactly, and why it has a time period too.
    2 - Is it possible to query the cache database table to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
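    If you would rather inspect those tables programmatically than through the Enterprise Explorer, something along these lines should also work using Sybase's jConnect JDBC driver (a sketch only; the driver class, URL format and especially the table name are assumptions that depend on your installation and deployed package):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CdbPeek {
        public static void main(String[] args) throws Exception {
            // jConnect driver talking TDS to the Unwired Server cache database on port 5200
            Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sybase:Tds:localhost:5200/default", "dba", "sql");
                 Statement stmt = con.createStatement();
                 // Hypothetical cache table -- follow the "D1" + package + version + MBO naming convention
                 ResultSet rs = stmt.executeQuery("SELECT * FROM D1_MyPackage_1_0_Customer")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }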

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what situations do I have to lock a node?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the use of the Distributed cache topology for transactional data (and not Replicated). Distributed is in fact fully synchronous.
    The other reasons people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact most of the complexity of the Java Memory Model arises from the desire to avoid the requirement for simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
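    For illustration, explicit key locking in application code looks roughly like this (a sketch; the cache name and key are made up, and the same pattern applies whether the cache is replicated or distributed):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LockSketch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("orders");  // cache name is illustrative
            String key = "order-42";

            // Wait up to 5 seconds for the cluster-wide lock on this key
            if (cache.lock(key, 5000)) {
                try {
                    Integer qty = (Integer) cache.get(key);
                    int next = (qty == null ? 0 : qty) + 1;
                    cache.put(key, next);   // no other member can update this key while we hold the lock
                } finally {
                    cache.unlock(key);
                }
            } else {
                System.err.println("Could not obtain lock for " + key);
            }
            CacheFactory.shutdown();
        }
    }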
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
    I'm a newbie to BIEE. These days I am taking a look at the BIEE architecture and components. The next project includes some BIEE development work based on an EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both BIEE 10g and 11g be integrated with EBS R12?
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical tables need to be created, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if there is a very large volume of data to transfer, how is completeness maintained? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, will they see the new 50% of the data in the report? Is there any transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information you can share.
    Thanks in advance.

    > 1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both BIEE 10g and 11g be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as the reporting tool together with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards, etc.
    > 2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of the source. It is OBIEE modeling: you create the RPD with all its layers. If you build from scratch you will have to create all the layers yourself; if BI Apps is used you get a pre-built RPD along with the other pre-built components.
    > 3) If the physical tables need to be created, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mainly for Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to use only ODI for further releases.
    > 4) During the data transfer phase, if there is a very large volume of data to transfer, how is completeness maintained? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, will they see the new 50% of the data in the report? Is there any transaction control in the ETL phase?
    Users will still see the old data, because it is good practice to turn the cache on and purge it after every load.
    Refer..http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on google
    Hope this helps

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the role of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which covers those roles as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
    he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is that they have a base of 2-3k XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will timeout after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.

    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence, the introduction of Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server/client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives the request from the client it then needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses for the same request. If it does, it can then send that response to the client. If there is no cached response, the OCSP Responder then checks to see if it has the CRL issued by the CA cached locally on the OCSP. If it does, it can check the revocation status locally, and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP does not have the CRL cached locally, the OCSP Responder can retrieve the CRL from the CDP locations listed in the certificate. The OCSP Responder then can parse the CRL to determine the revocation status, and send the appropriate response to the client.

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches; you need to use a partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same: you specify the cache store class name in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
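    If you do end up writing your own, it is just an implementation of com.tangosol.net.cache.CacheStore that you then name in that configuration element. A minimal skeleton (the actual persistence calls are placeholders, not a working DAO):

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    import com.tangosol.net.cache.CacheStore;

    public class SimpleDbCacheStore implements CacheStore {

        // load a single value from the backing store (return null if absent)
        public Object load(Object key) {
            return null; // e.g. SELECT value FROM my_table WHERE id = ?
        }

        // bulk load; here it simply falls back to per-key loads
        public Map loadAll(Collection keys) {
            Map result = new HashMap();
            for (Iterator it = keys.iterator(); it.hasNext(); ) {
                Object key = it.next();
                Object value = load(key);
                if (value != null) {
                    result.put(key, value);
                }
            }
            return result;
        }

        // called on write-through / write-behind
        public void store(Object key, Object value) {
            // e.g. INSERT or UPDATE the row for this key
        }

        public void storeAll(Map entries) {
            for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry entry = (Map.Entry) it.next();
                store(entry.getKey(), entry.getValue());
            }
        }

        // called when an entry is removed from the cache
        public void erase(Object key) {
            // e.g. DELETE FROM my_table WHERE id = ?
        }

        public void eraseAll(Collection keys) {
            for (Iterator it = keys.iterator(); it.hasNext(); ) {
                erase(it.next());
            }
        }
    }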
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings, although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know, who owns a certain partition. Hence, the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
    Best regards,
    Robert

  • Question about MapListener behavior

    Can anybody tell me about MapListener behavior in the distributed cache model versus the replicated cache model?
    I wrote some code implementing the MapListener interface to update another NamedCache (named 'log') when an entry changes,
    but MapListener.entryUpdated runs on every machine, so equal data gets inserted into 'log' several times.
    How can I make MapListener.entryUpdated run only once, on a single (idle) machine?
    thank you

    In addition to Jon's comment (which is a great way to approach the problem), there are two other possible approaches:
    1) implement a CacheStore and use it to do the processing (i.e. sync write-through or async write-behind processing of those updates)
    2) I don't suggest doing this, but you can elect only one machine that will be responsible for processing the events (more advanced, but allows you to funnel everything through one point if you must)
    Peace.
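    For context, the registration itself looks roughly like the sketch below (cache names are illustrative); the comment marks where the duplication described in the question comes from:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;

    public class LoggingListenerSketch {
        public static void main(String[] args) {
            NamedCache data = CacheFactory.getCache("data");
            final NamedCache log = CacheFactory.getCache("log");

            data.addMapListener(new MultiplexingMapListener() {
                protected void onMapEvent(MapEvent evt) {
                    // This runs on EVERY member that registered the listener, so with a
                    // replicated cache the same update is logged once per member.
                    // A CacheStore (or a single elected member) avoids the duplication.
                    log.put(evt.getKey() + "@" + System.currentTimeMillis(), evt.getNewValue());
                }
            });
        }
    }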

  • 2 question about GPU and Lens correction ,cs5

    Hi
    I have 2 questions about the GPU and Lens Correction in CS5.
    1) Filter -> Lens Correction -> Search online:
    I often (almost every time) get a connection timeout on the first click of Search online, and on the second click I get no online profile.
    Is that normal?
    2) The second question is about the GPU.
    It runs faster in general, but with adjustment layers such as Saturation or Vibrance
    I see a slightly slower refresh with the GPU on compared with the GPU off.
    I have cache levels set to 6 and history to 20 (I believe those are the defaults).
    When I add a Saturation layer and move the saturation slider to increase or decrease saturation,
    with the GPU off the changes are immediate: I can see the saturation increase or decrease in real time.
    With the GPU on it takes a little (very little) more time.
    Again, is that normal?
    Don't be angry; I'm about to buy CS5 and I'm unsure... the price plays a big role.
    Thanks
    thanks

    For what it's worth, I also see a timeout on the first [ Search Online ] click, after about half a minute delay.  Second click turns up results immediately.  This happens each time Lens Correction is started, even without restarting Photoshop, and in both 32 and 64 bit versions.  Also note that I started with one profile listed by default (though from the wrong camera) for my 40D with 28-135 zoom.
    I also noticed that I was seeing progress bar activity in the Lens Correction dialog while I was typing this (even though Lens Correction was NOT the active window) every time I hit the 'L' key.  Strange.
    Windows 7 x64.
    -Noel

  • Question about Finder-Load-Beans flag

    Hi all,
    I've read that the Finder-Load-Beans flag could yield some valuable performance gains, but:
    1) why is it suggested to do individual gets of methods within the same Transaction? (tx-Required)
    2) this strategy is useful only for small sets of data, isn't it? I imagine I would choose Finder-Load-Beans to false (or JDBC) for larger sets of data.
    3) A last question: is its default value true or false?
    Thanks
    Francesco

    Because if the get methods are called in different transactions, the state/data of the bean would most likely be reloaded from the database. A new transaction causes the ejbLoad method to be invoked at the beginning and ejbStore at the end. That is the usual case, but there are other ways to modify this behavior.
    Thanks
    Gaurav
    "Francesco" <[email protected]> wrote in message
    news:[email protected]...
    >
    Hi thorick,
    I have found this in the newsgroup. It's from R.Woolen answering
    a question about Finder-Load-Beans flag.
    "Consider this case:
    tx.begin();
    Collection c = findAllEmployeesNamed("Rob");
    Iterator it = c.iterator();
    while (it.hasNext()) {
    Employee e = (Employee) it.next(); System.out.println("Favorite color is:"+ e.getFavColor());
    tx.commit();
    With CMP (and finders-load-beans set to its default true value), thefindAllEmployeesNamed
    finder will load all the employees with the name of rob. The getFavColormethods
    do not hit the db because they are in the same tx, and the beans arealready loaded
    in the cache.
    It's the big CMP performance advantage."
    So I wonder why this performance gain can be achieved when the iteration is inside
    a transaction.
    Thanks
    regards
    Francesco
    thorick <[email protected]> wrote:
    > 1) why is it suggested to do individual gets of methods within the same Transaction? (tx-Required)
    I'm not sure about the context of this question (in what document or paragraph this is mentioned).
    > 2) this strategy is useful only for small sets of data, isn't it? I imagine I would choose Finder-Load-Beans to false (or JDBC) for larger sets of data.
    If you know that you will be accessing the fields of all the Beans that you get back from a finder, then you will realize a significant performance gain. If one selects 100s or more beans using a finder, but only accesses the fields for a few, then there may be some performance cost. It could depend on how large some of the fields are. I'd guess that the cost of 1 hit to the DB per bean vs. the cost of 1 + maybe 1 more hit to the DB per bean, would usually be less. A performance test using your actual app's beans would be the only way to know for sure.
    > 3) A last question: is its default value true or false?
    The default is 'True'.
    -thorick

  • POF serialization with replicated cache?

    Sorry again for the newbie question.
    Can you use POF serialized objects in a replicated cache?
    All of the examples show POF serialized objects being used with a partitioned cache.
    If you can do this, are there any caveats involved with the "replication" cache? I assume it would have to be started using the same configuration as the "master" cache.

    Thanks Rob.
    So, you just start up the Coherence instance on the (or at the) replication site, using the same configuration as the "master"? (Of course with appropriate classpaths and such set correctly)

  • Questions about Real Application Testing(RAT)

    Hi All,
    We have a production database running on 10gR3 on a server with local drives, while we have a Oracle 11gR2 DB running on a server with NFS mounts (using S7310 - AmberRoad) i.e. Faster and better storage.
    We captured the load on 10gR3 and replayed it on 11gR2. We noticed the following:
    (1) The replay is considerably slower even though the Oracle 11gR2 instance has faster storage. We suspect it may have something to do with the buffer cache / SGA, because there is nothing in the cache on the target (we didn't shut down the 10gR2 DB for capture) – what should we do then?
    (2) To make sure we could take advantage of the cache, we replayed the load a second time right after the first replay and, to our surprise, everything ran. So we are wondering how that is possible, since we did not restore the DB (we do not want to wipe out the cache - a chicken-and-egg situation). Does Oracle roll back the changes after the replay?
    (3) Do we have to restore the database on the target every time we do a replay? If we do that, we won't have anything in the SGA.
    So we need your advice, and we would also like to know how everyone else is doing this testing.
    Regards,
    RJiv.

    DB Replay's workload capture facility allows you to either start capture from a closed (mounted) database (capture starts upon opening the DB), or to begin capture mid-stream during normal activity. Starting capture on the production system from a closed database eliminates the divergence in performance resulting from a primed cache, as well as possible data divergence issues from open, partially-completed transactions at the time the capture started.
    For many customers, it will clearly not be possible to close their database during peak periods (!!)
    One way to address the cache priming issue is to start capture in production from a closed state during a low period of activity, and the allow capture to run through the peak period.
    Another approach is to start capture mid-stream with the DB open and to run capture for a long period (long enough to stabilize the cache). When performing the replay, begin a new AWR snapshot after the cache has stabilized.
    Your question about running the replay again after the first replay is done is confusing. Of course you will not get meaningful data from that, since replay must begin from the capture start SCN. If you run replay twice in a row without reverting the database to the capture start SCN, it will be applying meaningless changes to a database in a state that is unlike that of the original. You will be testing the data errors codepath instead of real performance.
    It is typical to enable database flashback on the repay database so that it can be repeatedly reverted to the capture start SCN for testing under a variety of scenarios.
    Regards,
    Jeremiah Wilton
    Blue Gecko, Inc.
    http://www.bluegecko.net

  • Question about source system copy for BW

    Dear all,
    I have a quick question about copying a source system for BW.
    We have two clients in the source ERP:
    100 Customizing
    200 Testing
    Our BW is only linked to client 200. If we now delete client 200 and make a copy from 100 to 200, will we still be able to restore the source system connection so that our extraction works? Or will it fail, since client 100 has no BW-related information (except cross-client objects like the DataSources) that could be used to restore this connection?
    What is the procedure in this case?
    Thanks,
    Andreas

    Unfortunately, you will lose your source connection and need to create a new one in R/3.  You can use the same source system name in BW, but check the source system connection after the copy (SM59).  Also, I would "Restore" the source system (in BW, right-click on Source system in RSA1, and click "Restore") to ensure the datasources get replicated from R/3 to BW.
    Depending on how much you want your BW and R/3 systems to be in synch, you will need to rebuild all your data providers (cubes/ODS's) from your R/3 system.  This means deleting all the data and running the initializations from scratch.  At the very least, you will need to rebuild the delta queues.
    Curious, if 100 is just for customizing, is there any data in it?  Are you just creating a new blank client 200?

  • Read through Scheme in Replicated Cache with berkely db

    Hi, I have about 20 GB of data. When restarting the server I need to populate all of that data into the Coherence cache. If I write a pre-load Java class it will take roughly 30 minutes to 1 hour to load the data into the cache; while it is loading, how can I answer any requests that come in? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to combine read-through + replicated cache + Berkeley DB? If yes, please post sample code with a full reference. Thanks in advance.

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
    "To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching."
    So it would appear that you cannot do read-through with a replicated cache - which makes sense really when you think about it.
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data - or do you, in which case you must have tuned GC very well. You say you are concerned about NFRs and network latency, yet you are building a system that will require either very big heaps (and hence, at some point, long GCs) or, since you cannot hold all the data in memory, configured expiry and then the latency of reading the data back from the DB.
    If you are using read-through then presumably your use-case does not require all the data to be in the cache - i.e. all you data access is using a get by key and you do not do any filter queries. If this is the case then use a distributed cache where you can store all the data or use read-through. If all you access is using key based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs to hold the data and configure near-caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
    JK
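    To make the cache-aside suggestion from the quoted documentation concrete: the check-then-load happens in application code, outside the cache service, rather than as read-through inside the backing map. A sketch (loadFromBerkeley is a placeholder for your own Berkeley DB access, and the cache name is made up):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheAsideSketch {

        // Placeholder for the application's own Berkeley DB lookup
        private static Object loadFromBerkeley(Object key) {
            return null;
        }

        public static Object get(Object key) {
            NamedCache cache = CacheFactory.getCache("reference-data");  // illustrative name
            Object value = cache.get(key);
            if (value == null) {
                // Miss: load from the external store and populate the cache from the outside
                value = loadFromBerkeley(key);
                if (value != null) {
                    cache.put(key, value);
                }
            }
            return value;
        }
    }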
