Local cache misses increase latency.

Our data set contains data that may be null (i.e. does not exist for a given key).
When we call get(key) on the near cache and there is no value, we incur a network hop to the cluster to attempt to fetch the value (even though it is null). It appears to do this on every subsequent get as well. Is there a way to configure a near cache to cache misses so we do not incur the additional network hop to the cluster? We have a latency-sensitive application where this is causing issues.
We can think of two workarounds:
1 - Cache a NullObject which we process as null - so there are no actual nulls.
2 - Extend the local cache class so that it caches the misses, and add a MapListener to listen for data updates in order to manage the cached misses.
Is there a built-in Coherence solution so we don't reinvent the wheel?

Let me explain the situation we have.
We want to achieve consistently low response time, regardless of whether there is a mapping for a given key in the cache.
We are using a near cache with a size-limited local cache at the front and a remote cache (tcp-extend) at the back.
When the requested key does exist in the cache, it is fetched from the slow back cache on the first get() and cached in the fast front cache. Any subsequent requests for the same key will hit the fast front cache (unless the entry is evicted).
However, when the key does not exist in the cache, the near cache implementation always requests it from the back cache. As a result, for any cache miss we incur a penalty of a network roundtrip to the cluster.
I wonder if there's a ready-to-use cache implementation which caches both hits and misses for fast subsequent look-ups. Ideally, this would be some sort of drop-in implementation, so that we simply reconfigure our cache schemes and don't need to touch the application code.
Of course, we can insert dummy entries into the cache on the server side, so that every key is always associated with a value (a null value for a 'missing' key). But this seems wasteful in terms of memory, and it also needs extra effort to maintain.
So although 'cache a NullObject' is an option, it's not the preferred one; we are rather hoping that Coherence has seen this 'problem' before and has a built-in solution.
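For concreteness, here is a rough sketch of workaround 1 (caching a shared 'null sentinel' so that misses are retained by the near cache). The wrapper class and the cache name "orders" are placeholders for illustration only; this shows the workaround we described, not a built-in Coherence feature:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NullTolerantCacheAccess {
    // Shared sentinel stored in place of null so the near cache can retain misses.
    private static final String NULL_SENTINEL = "__NULL__";

    private final NamedCache cache = CacheFactory.getCache("orders"); // placeholder cache name

    public Object get(Object key) {
        Object value = cache.get(key);
        if (value == null) {
            // First miss: write the sentinel through, so later gets for this key
            // can be satisfied from the near cache front map instead of the cluster.
            cache.put(key, NULL_SENTINEL);
            return null;
        }
        return NULL_SENTINEL.equals(value) ? null : value;
    }
}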

Similar Messages

  • Local Cache Visibility from the Cluster

    Hi, can you give me an explanation of the following Coherence issue, please?
    I found in the documentation that the Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM.
    On the other hand, I also found the following statement:
    “ Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes.”
    My questions are:
    If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?
    Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?
    Best Regards,
    Tomislav Milinovic

    Tomislav,
    I will answer your questions on top of your statements, OK?
    "Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM"
    Considering the partitioned (distributed) scheme, Coherence is a truly peer-to-peer technology in which data is spread across a cluster of nodes: the primary copy of an entry is stored in the local JVM of one node, and its backup is stored on another node, preferably in another site, cluster or rack.
    "Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes"
    Yes. No matter that the data is stored locally on a single node of the cluster, when you access that data through its key, Coherence automatically finds it in the cluster and brings it to you. The location of the data is transparent to the developer, but one thing is certain: you have a global view of the caches, meaning that from every single member you have access to all of the stored data. This is part of the magic that the Coherence protocol (called TCMP) does for you.
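    As a small illustration of that location transparency (the cache name and key below are just examples), the very same code runs unchanged on every member of the cache service:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class GlobalViewExample {
        public static void main(String[] args) {
            // Coherence routes each request to whichever node owns the key's partition;
            // the caller never needs to know where the primary or backup copy lives.
            NamedCache cache = CacheFactory.getCache("example-cache"); // example cache name
            cache.put("order-1", "some value");
            Object value = cache.get("order-1"); // identical call from any cluster node
            System.out.println(value);
            CacheFactory.shutdown();
        }
    }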
    "If I have local off-heap NIO memory cache or NIO File Manager cache on the one Coherence node, can it be visible from other Coherence nodes as a clustered cache  ?"
    As I said earlier, yes, you can access all of the stored data from any node of the cluster. The way in which each node stores its data (called the backing map scheme) can differ. One node can use elastic data as its backing map scheme, and another node can use the Off-Heap NIO Memory Manager as its backing map. This is just how each node stores its own data. From an architectural point of view, it is a good choice to use the same backing map scheme across multiple nodes, because each backing map scheme can behave differently when you read and/or write data: one could be faster and another slower.
    "Also, if I have NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache ?"
    There is no need for that, since data is available to all cluster nodes without any effort. Having said that, it would be a bad strategic choice. Coherence is a shared-nothing technology which uses that model to scale and give you predictable latency. If you start using a shared disk as storage for data, you will lose the benefits of the shared-nothing model and create a huge bottleneck in the data management layer, since every read/write will contend for I/O on that disk.
    Cheers,
    Ricardo Ferreira

  • Java Local Cache Outperformed C++ Local Cache in 3.6.1

    Currently I'm using the same local cache configuration to publish 10000 records of a portable object and retrieve the same item a few times from both Java and C++ clients, with Oracle Coherence 3.6.1. I'm using the Linux x86 version for both Java and C++.
    Results from Java: 3 microseconds (best case), 4-5 microseconds (average case)
    Results from C++: 7 microseconds (best case), 8-9 microseconds (average case)
    When we have a local cache for both Java and C++, the data retrieval latency should ideally be the same, but I was able to witness a 4 microsecond lag in C++. Is there any sort of C++ configuration with which I can improve the performance to reach at least 4-5 microseconds?
    My local cache configuration is as follows.
    <local-scheme>
    <scheme-name>local-example</scheme-name>
    </local-scheme>
    So underneath, the Coherence implementation uses SafeHashMap as the default (as per the documentation). Please let me know if I'm doing something wrong.

    Hi Dave,
    I have appended my C++ sample code for reference.
    -------------- Main class -------------------
    #include "coherence/lang.ns"
    #include "coherence/net/CacheFactory.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include <ace/High_Res_Timer.h>
    #include <ace/Sched_Params.h>
    #include "Order.hpp"
    #include "Tokenizer.h"
    #include <unistd.h> // for sleep()
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <fstream>
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    Order::View readOrder(String::View);
    void createCache(std::string, NamedCache::Handle&, std::string, std::string&, std::string, std::string);
    void readCache(NamedCache::Handle&, std::string, std::string&, std::string, std::string, std::string);
    static int globalOrderIndex = 1;
    int main(int argc, char** argv) {
        try {
            String::View vsCacheName;
            std::string input;
            std::ifstream infile;
            std::string comment = "#";
            infile.open("test-data.txt");
            size_t found;
            std::string result;
            while (!infile.eof()) {
                getline(infile, input);
                if (input.empty())
                    continue;
                found = input.rfind(comment);
                if (found != std::string::npos)
                    continue;
                Tokenizer str(input);
                std::vector<std::string> tokens = str.split();
                vsCacheName = tokens.at(0);
                NamedCache::Handle hCache = CacheFactory::getCache(vsCacheName);
                std::string itemCountList = tokens.at(1);
                std::string searchCount = tokens.at(2);
                std::string skipFirst = tokens.at(3);
                std::string searchValue = tokens.at(4);
                Tokenizer str1(itemCountList);
                str1.setDelimiter(",");
                std::vector<std::string> tokens1 = str1.split();
                for (int x = 0; x < tokens1.size(); x++) {
                    std::string count = tokens1.at(x);
                    std::string result;
                    createCache(count, hCache, searchCount, result, vsCacheName, skipFirst);
                    sleep(1);
                    readCache(hCache, searchCount, result, skipFirst, count, searchValue);
                    std::cout << result << std::endl;
                }
            }
            infile.close();
        } catch (const std::exception& e) {
            std::cerr << e.what() << std::endl;
        }
    }

    Order::View readOrder(String::View aotag) {
        globalOrderIndex++;
        return Order::create(aotag);
    }
    void createCache(std::string count, NamedCache::Handle& hCache, std::string searchIndex,
                     std::string& result, std::string cacheName, std::string skipValue) {
        int totalRounds = atoi(count.c_str());
        int search = atoi(searchIndex.c_str());
        int skipFirstData = atoi(skipValue.c_str());
        bool skipFirst = skipFirstData == 1 ? true : false;
        int loop_count = skipFirstData == 1 ? search + 1 : search;
        if (totalRounds == 0)
            return;
        ACE_hrtime_t average(0);
        ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
        ACE_hrtime_t nstime(0);
        for (int x = 0; x < 1; x++) {
            tm->start();
            for (int y = 0; y < totalRounds; y++) {
                std::stringstream out;
                out << globalOrderIndex;
                String::View aotag = out.str();
                Order::View order = readOrder(aotag);
                hCache->put(aotag, order);
            }
            tm->stop();
            tm->elapsed_time(nstime);
            sleep(1);
            if (x > 0 || !skipFirst) // skipping first write because it is an odd result
                average += nstime;
            tm->reset();
        }
        delete tm;
        double totalTimetoAdd = average / (1 * 1000);
        double averageOneItemAddTime = (average / (1 * totalRounds * 1000));
        std::stringstream out;
        out << totalTimetoAdd;
        std::string timeToAddAll = out.str();
        std::stringstream out1;
        out1 << averageOneItemAddTime;
        std::string timetoAddOne = out1.str();
        result.append("------------- Test ");
        result += cacheName;
        result += " with ";
        result += count;
        result += " -------------\n";
        result += "Time taken to publish data: ";
        result += (timeToAddAll);
        result += " us";
        result += "\n";
        result += "Time taken to publish one item: ";
        result += (timetoAddOne);
        result += " us\n";
    }
    void readCache(NamedCache::Handle& hCache, std::string searchCount,
                   std::string& result, std::string skipValue, std::string countVal, std::string searchValue) {
        int skipData = atoi(skipValue.c_str());
        bool skipFirst = skipData == 1 ? true : false;
        int count = atoi(countVal.c_str());
        String::View vsName = searchValue.c_str();
        ACE_hrtime_t average(0);
        int search = atoi(searchCount.c_str());
        int loop_count = skipData == 1 ? search + 1 : search;
        ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
        ACE_hrtime_t nstime(0);
        ACE_hrtime_t best_time(10000000);
        bool isSaturated = true;
        int saturatedValue = 0;
        for (int x = 0; x < loop_count; x++) {
            tm->start();
            Order::View vInfo = cast<Order::View>(hCache->get(vsName));
            tm->stop();
            tm->elapsed_time(nstime);
            if (x > 0 || !skipFirst) {
                average += nstime;
                if (nstime < best_time) {
                    best_time = nstime;
                    if (isSaturated) {
                        saturatedValue = x + 1;
                    }
                } else {
                    isSaturated = false;
                }
                std::cout << nstime << std::endl;
            }
            vInfo = NULL;
            tm->reset();
        }
        Order::View vInfo = cast<Order::View>(hCache->get(vsName));
        if (vInfo == NULL)
            std::cout << "No info available" << std::endl;
        // if (x % 1000 == 0)
        //     sleep(1);
        delete tm;
        double averageRead = (average / (search * 1000));
        double bestRead = ((best_time) / 1000);
        std::stringstream out1;
        out1 << averageRead;
        std::string timeToRead = out1.str();
        std::stringstream out2;
        out2 << bestRead;
        std::stringstream out3;
        out3 << saturatedValue;
        result += "Average readtime: ";
        result += (timeToRead);
        result += " us, best time: ";
        result += (out2.str());
        result += " us, saturated index: ";
        result += (out3.str());
        result += " \n";
    }
    ----------------- Order.hpp ---------------
    #ifndef ORDER_INFO_HPP
    #define ORDER_INFO_HPP
    #include "coherence/lang.ns"
    using namespace coherence::lang;
    class Order : public cloneable_spec<Order> {
        // ----- constructors ---------------------------------------------------
        friend class factory<Order>;

    public:
        virtual size_t hashCode() const {
            return size_t(&m_aotag);
        }

        virtual void toStream(std::ostream& out) const {
            out << "Order("
                << "Aotag=" << getAotag()
                << ')';
        }

        virtual bool equals(Object::View that) const {
            if (instanceof<Order::View>(that)) {
                Order::View vThat = cast<Order::View>(that);
                return Object::equals(getAotag(), vThat->getAotag());
            }
            return false;
        }

    protected:
        Order(String::View aotag) : m_aotag(self(), aotag) {}
        Order(const Order& that) : super(that), m_aotag(self(), that.m_aotag) {}

        // ----- accessors ------------------------------------------------------
    public:
        virtual String::View getAotag() const {
            return m_aotag;
        }

        // ----- data members ---------------------------------------------------
    private:
        const MemberView<String> m_aotag;
    };
    #endif // ORDER_INFO_HPP
    ----------- OrderSerializer.cpp -------------
    #include "coherence/lang.ns"
    #include "coherence/io/pof/PofReader.hpp"
    #include "coherence/io/pof/PofWriter.hpp"
    #include "coherence/io/pof/SystemPofContext.hpp"
    #include "coherence/io/pof/PofSerializer.hpp"
    #include "Order.hpp"
    using namespace coherence::lang;
    using coherence::io::pof::PofReader;
    using coherence::io::pof::PofWriter;
    using coherence::io::pof::PofSerializer;
    class OrderSerializer : public class_spec<OrderSerializer, extends<Object>, implements<PofSerializer> > {
        friend class factory<OrderSerializer>;

    protected:
        OrderSerializer() {
        }

    public: // PofSerializer interface
        virtual void serialize(PofWriter::Handle hOut, Object::View v) const {
            Order::View order = cast<Order::View>(v);
            hOut->writeString(0, order->getAotag());
            hOut->writeRemainder(NULL);
        }

        virtual Object::Holder deserialize(PofReader::Handle hIn) const {
            String::View aotag = hIn->readString(0);
            hIn->readRemainder();
            return Order::create(aotag);
        }
    };
    COH_REGISTER_POF_SERIALIZER(1001, TypedBarrenClass<Order>::create(), OrderSerializer::create());
    -----------------Tokenizer.h--------
    #ifndef TOKENIZER_H
    #define TOKENIZER_H
    #include <string>
    #include <vector>
    // default delimiter string (space, tab, newline, carriage return, form feed)
    const std::string DEFAULT_DELIMITER = " \t\v\n\r\f";
    class Tokenizer
    {
    public:
        // ctor/dtor
        Tokenizer();
        Tokenizer(const std::string& str, const std::string& delimiter=DEFAULT_DELIMITER);
        ~Tokenizer();

        // set string and delimiter
        void set(const std::string& str, const std::string& delimiter=DEFAULT_DELIMITER);
        void setString(const std::string& str);          // set source string only
        void setDelimiter(const std::string& delimiter); // set delimiter string only

        std::string next();               // return the next token, return "" if it ends
        std::vector<std::string> split(); // return array of tokens from current cursor

    protected:

    private:
        void skipDelimiter();     // ignore leading delimiters
        bool isDelimiter(char c); // check if the current char is delimiter

        std::string buffer;       // input string
        std::string token;        // output string
        std::string delimiter;    // delimiter string
        std::string::const_iterator currPos; // string iterator pointing the current position
    };
    #endif // TOKENIZER_H
    --------------- Tokenizer.cpp -------------
    #include "Tokenizer.h"
    Tokenizer::Tokenizer() : buffer(""), token(""), delimiter(DEFAULT_DELIMITER)
    {
        currPos = buffer.begin();
    }

    Tokenizer::Tokenizer(const std::string& str, const std::string& delimiter) : buffer(str), token(""), delimiter(delimiter)
    {
        currPos = buffer.begin();
    }

    Tokenizer::~Tokenizer()
    {
    }

    void Tokenizer::set(const std::string& str, const std::string& delimiter)
    {
        this->buffer = str;
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setString(const std::string& str)
    {
        this->buffer = str;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setDelimiter(const std::string& delimiter)
    {
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    std::string Tokenizer::next()
    {
        if (buffer.size() <= 0) return ""; // skip if buffer is empty
        token.clear();                     // reset token string
        this->skipDelimiter();             // skip leading delimiters
        // append each char to token string until it meets delimiter
        while (currPos != buffer.end() && !isDelimiter(*currPos))
        {
            token += *currPos;
            ++currPos;
        }
        return token;
    }

    void Tokenizer::skipDelimiter()
    {
        while (currPos != buffer.end() && isDelimiter(*currPos))
            ++currPos;
    }

    bool Tokenizer::isDelimiter(char c)
    {
        return (delimiter.find(c) != std::string::npos);
    }

    std::vector<std::string> Tokenizer::split()
    {
        std::vector<std::string> tokens;
        std::string token;
        while ((token = this->next()) != "")
            tokens.push_back(token);
        return tokens;
    }
    I'm really concerned about the performance; even 1 microsecond is very valuable to me. If you could help me reduce it to 5 microseconds it would be a great help. I'm building the above code with the following release arguments:
    "g++ -Wall -ansi -m32 -O3"
    The following file is my test script:
    ------------ test-data.txt ---------------
    #cache type - data load - read attempts - skip first - read value
    local-orders 10000 5 1 1
    # dist-extend 1,100,10000 5 1 1
    # repl-extend 1,100,10000 5 1 1
    You can uncomment them one by one and test different caches with different loads.
    Thanks for the reply
    sura
    Edited by: sura on 23-Jun-2011 18:49
    Edited by: sura on 23-Jun-2011 19:35
    Edited by: sura on 23-Jun-2011 19:53

  • WAAS Speed from local cache

    I have a WAAS demo set up in a test lab with a simulated T1 span connecting two networks. When I transfer a file using CIFS or the web initially, I see the traffic flow through the WAN. When I do the transfer a second time, I know it is getting the data from cache as there is no WAN traffic, but I am not getting it at wire speed; it is only coming to the client at about double the T1 speed. I expected almost line-speed access when getting data from the local cache. Is there a setting I missed, or is this expected behaviour?

    Zach,
    Yes, I understand some things could break if you aren't careful with those commands, but doing some simple "show" commands (or using the GUI) shouldn't hurt when troubleshooting without TAC. I've used some expert show commands to see which files are actively being accelerated through the edge device.
    A little off-topic, but how are the drives partitioned? Our WAE-512 edge has 2 250GB drives mirrored, yet the "Maximum Cache Disk Size" only shows 93GB. I understand there are probably OS, swap, and log partitions, but it would be nice to know more precisely how it is split up, to explain to a customer why they don't get the "full" size of the drive.
    Thanks,
    Kevin

  • Initiating Upstream Proxy request on Cache-Miss (ACNS)

    1. Client has two (2) tier Content Routing network architecture:
    Client<->Tier_2_CE <-> Tier_1_CE <-> Origin_Server_01
    2. Client does not have EXPLICIT PROXY configured on browser pointing to Tier_2_CE
    3. Network uses Content Routing to redirect client requests to appropriate Tier_2_CE (proximity search using CZF, channel assignment)
    4. SCENARIO_PRE-POSITIONED: (WORKS FINE)
    4.1 Client requests PRE-POSITIONED content from CDN
    4.2 Client request redirected to Tier_2_CE
    4.3 Tier_2_CE services the request from PRE-POSITIONED content (i.e. cache hit)
    5. SCENARIO_NON-PRE-POSITIONED:
    5.1 Tier_2_CE configured with outgoing HTTP proxy pointing to Tier_1_CE:
    "http proxy outgoing host X.Y.Z.W 8080 primary"
    5.2 Tier_1_CE configured with inbound HTTP proxy
    "http proxy incoming 8080"
    5.3 Tier_1_CE configured with outgoing HTTP proxy pointing to Origin Server:
    "http proxy outgoing host A.B.C.D 80 primary"
    5.4 ISSUE
    5.4.1 Client request for NON-PRE-POSITIONED content reaches the appropriate Tier_2_CE
    5.4.2 Tier_2_CE does not make an upstream request for the content on a Cache_MISS
    6. QUESTIONS
    6.1 Can you point me to reference documentation on proxy configuration for environments not using (a) WCCP or (b) explicit proxy configuration?
    6.2 Can you provide some configuration guidance/samples for this scenario?
    regards,

    Thanks Mary,
    It appears as if the configurations outlined on that page either
    (1) assume that the subscriber/client has EXPLICITLY configured a proxy server in their browser that points to their Tier-2/edge CE farm, or
    (2) assume that the access infrastructure into the CDN (essentially edge routers) is configured with WCCP.
    Let me know if this is not the case, as I may be mis-reading & mis-configuring.
    In my setup, the Tier-2/edge CE's are configured to
    (3) listen for inbound requests on (80, 8080) and
    (4) point to an upstream proxy A.B.C.D:8080
    ISSUE
    (5) In normal operating conditions (i.e. content IS pre-positioned), client requests are fully serviced from CDNFS on the Tier-2/edge CE's
    (6) When content is not available in CDNFS (non-pre-positioned), a cache miss occurs on the Tier-2/edge CEs.
    (7) What is the best way to configure my Tier-2/edge CE's so that they ALWAYS query upstream proxy for content in the event of a CACHE-MISS on local CDNFS?
    (8) How do I enable this ((7) above) without having to use rules on the Tier-2/edge CE's?
    Thanks again,
    regards,

  • How to avoid Cache misses?

    Hi,
    Before I explain the problem here's my current setup.
    - Distributed/partitioned cache
    - Annotated JPA classes
    - Backing map linked to an oracle database
    - Objects are stored in POF format
    - C++ extend client
    When I request an item that does not exist in the cache, the JPA magic forms a query, assembles the object and stores it in the cache.
    However, if the query returns no results then Coherence sends back a cache miss. Our existing object hierarchy can request items that don't exist (this infrastructure is vast and entrenched, and changing it is not an option). This blows any near cache performance out of the water.
    What I want to do is intercept a cache miss and store a null object in the cache under that key (being null, it will be 4 bytes in length). The client code can interpret the null object as a cache miss and everything will work as usual; however, the null object will be stored in the near cache and performance will return.
    My problem is that, as annotated JPA does all the 'magic', I don't get to intercept the case where the query returns an empty set. I've tried both map triggers and listeners, but as expected they don't get called since no result set is generated.
    Does anyone know of an entry point where I can return an object to Coherence in the event of a query returning an empty set? I'd also like the ability to configure this behaviour on a per-cache basis.
    Any help gratefully received.
    Thanks
    Rich
    Edited by: Rich Carless on Jan 6, 2011 1:56 PM

    Hi,
    If you are using 3.6 you can do this by writing a sub-class of JpaCacheStore that implements BinaryEntryStore. A more generic way (which would suit other people who have asked similar questions recently) would be to write an implementation of BinaryEntryStore that wraps another cache store.
    Here is one I knocked up recently...
    package org.gridman.coherence.cachestore;
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.cache.BinaryEntryStore;
    import com.tangosol.net.cache.CacheStore;
    import com.tangosol.run.xml.XmlElement;
    import com.tangosol.util.Binary;
    import com.tangosol.util.BinaryEntry;
    import java.util.Set;
    public class WrapperBinaryCacheStore implements BinaryEntryStore {

        private BackingMapManagerContext context;
        private CacheStore wrapped;

        public WrapperBinaryCacheStore(BackingMapManagerContext context, ClassLoader loader, String cacheName, XmlElement cacheStoreConfig) {
            this.context = context;
            DefaultConfigurableCacheFactory cacheFactory = (DefaultConfigurableCacheFactory) CacheFactory.getConfigurableCacheFactory();
            DefaultConfigurableCacheFactory.CacheInfo info = cacheFactory.findSchemeMapping(cacheName);
            XmlElement xmlConfig = cacheStoreConfig.getSafeElement("class-scheme");
            wrapped = (CacheStore) cacheFactory.instantiateAny(info, xmlConfig, context, loader);
        }

        @Override
        public void erase(BinaryEntry binaryEntry) {
            wrapped.erase(binaryEntry.getKey());
        }

        @SuppressWarnings({"unchecked"})
        @Override
        public void eraseAll(Set entries) {
            for (BinaryEntry entry : (Set<BinaryEntry>) entries) {
                erase(entry);
            }
        }

        @Override
        public void load(BinaryEntry binaryEntry) {
            Object value = wrapped.load(binaryEntry.getKey());
            binaryEntry.updateBinaryValue((Binary) context.getValueToInternalConverter().convert(value));
        }

        @SuppressWarnings({"unchecked"})
        @Override
        public void loadAll(Set entries) {
            for (BinaryEntry entry : (Set<BinaryEntry>) entries) {
                load(entry);
            }
        }

        @Override
        public void store(BinaryEntry binaryEntry) {
            wrapped.store(binaryEntry.getKey(), binaryEntry.getValue());
        }

        @SuppressWarnings({"unchecked"})
        @Override
        public void storeAll(Set entries) {
            for (BinaryEntry entry : (Set<BinaryEntry>) entries) {
                store(entry);
            }
        }
    }

    Using the JPA example from the Coherence 3.6 Tutorial you would configure it like this...
    <distributed-scheme>
        <scheme-name>jpa-distributed</scheme-name>
        <service-name>JpaDistributedCache</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <internal-cache-scheme>
                    <local-scheme/>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <class-name>org.gridman.coherence.cachestore.WrapperBinaryCacheStore</class-name>
                        <init-params>
                            <init-param>
                                <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                                <param-value>{manager-context}</param-value>
                            </init-param>
                            <init-param>
                                <param-type>java.lang.ClassLoader</param-type>
                                <param-value>{class-loader}</param-value>
                            </init-param>
                            <init-param>
                                <param-type>java.lang.String</param-type>
                                <param-value>{cache-name}</param-value>
                            </init-param>
                            <init-param>
                                <param-type>com.tangosol.run.xml.XmlElement</param-type>
                                <param-value>
                                    <class-scheme>
                                        <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                        <init-params>
                                            <init-param>
                                                <param-type>java.lang.String</param-type>
                                                <param-value>{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                <param-type>java.lang.String</param-type>
                                                <param-value>com.oracle.handson.{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                <param-type>java.lang.String</param-type>
                                                <param-value>JPA</param-value>
                                            </init-param>
                                        </init-params>
                                    </class-scheme>
                                </param-value>
                            </init-param>
                        </init-params>
                    </class-scheme>
                </cachestore-scheme>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>

    As you can see, the WrapperBinaryCacheStore takes four constructor parameters (set up in the init-params):
    - First is the Backing Map Context
    - Second is the ClassLoader
    - Third is the cache name
    - Fourth is the XML configuration for the cache store you want to wrap
    If the load method of the wrapped cache store returns null (i.e. nothing in the DB matches the key) then, instead of returning null, the BinaryEntry is updated with a Binary representing null. Because the corresponding key is now in the cache with a value of null, the cache store will not be called again for the same key.
    Note: if you do this and your DB is subsequently updated with values for keys that were previously null (by something other than Coherence), then Coherence will not load them, as it is never going to call load for those keys again.
    I have given the code above a quick test and it seems to work fine.
    If you are using 3.5 then you can still do this, but you need to use the Coherence Incubator Commons library, which has a version of BinaryCacheStore. The code and config will be similar but not identical.
    JK
    Edited by: Jonathan.Knight on Jan 6, 2011 3:50 PM

  • Expire all local cache entries at specific time of day

    Hi,
    We have a need to expire all local cache entries at specific time(s) of the day (every day, like a crontab).
    Is it possible through Coherence configuration?
    Thanx,

    Hi,
    AFAIK there is no out-of-the-box solution, but you can certainly use the Coherence API along with Quartz to develop a simple class that can be triggered to remove all the entries from the cache at a certain time. You can also define your own custom cache factory configuration; an example is available here: http://sites.google.com/site/miscellaneouscomponents/Home/time-service-for-oracle-coherence
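    For example, here is a minimal sketch of such a class using Quartz (the job class, the "cacheName" job-data key and the cron expression are illustrative assumptions, not an out-of-the-box Coherence feature):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;

    // Quartz job that removes all entries from a named cache; schedule it with a cron
    // trigger such as "0 0 2 * * ?" to fire at 02:00 every day.
    public class CacheClearJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            String cacheName = context.getJobDetail().getJobDataMap().getString("cacheName");
            NamedCache cache = CacheFactory.getCache(cacheName);
            cache.clear(); // removes all entries for this cache
        }
    }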
    Hope this helps!
    Cheers,
    NJ

  • Air Runtime Error when querying local Cache

    Hi,
    I am running into trouble when attempting to fill a datagrid from a local SQLite cache when the cache has been emptied, either because it was never filled with any data or because the files have been deleted. One would think there would be a mechanism in Flex to check the cache for proper structure without it resulting in a runtime error. Reading the DataService documentation, there appears to be no way to inspect the cache without getting the runtime error.
    Basically, I have an online/offline application synchronizing data with a MySQL server via LiveCycle Data Services. Everything works fine online, and also offline as long as the cache files have data in them.
    The problem is that if the program has just been installed (and the server is not connectible) and the user hasn't connected to the server to retrieve any data yet, the local cache is empty and will result in the runtime error; likewise, if for some reason the cache files get deleted, AIR will throw the runtime error.
    The resulting error:
    Error: Unable to initialize destinations on server:
    Thanks
    RM

    Hi, I'm having the same problem =(
    I went to log on to my MySpace and I get this message:
    Server Error in '/' Application.
    Runtime Error
    Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
    Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".
    <!-- Web.Config Configuration File -->
    <configuration>
    <system.web>
    <customErrors mode="Off"/>
    </system.web>
    </configuration>
    Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL.
    <!-- Web.Config Configuration File -->
    <configuration>
    <system.web>
    <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
    </system.web>
    </configuration>
    ---- and I don't know what to do.
    Can anyone help me please, because this sucks =(
    thanks for any help and replies @};-

  • How to clear Local-Cache Entries for a Query in BW?

    Hi There,
    I'm a student and I need your help for my thesis!
    I execute the same query many times in BEx Web Analyzer and note the query response time under ST03N, each time using a different read mode, with cache mode inactive (Query Monitor RSRT).
    The first time I execute the query it also reads from the database; the second time it uses the local cache, and that's okay!
    My problem is:
    When I change the read mode and execute the query again, the first run still uses the old entries from the cache, so I get a wrong response time for the first run!
    I know that while cache mode is inactive the local cache will still be used, so how can I delete the local cache entries each time I change the read mode and execute the query? In the cache monitor (RSRCACHE) I find only entries for the global cache, etc.
    I've already tried closing the session and logging in to the system again, but it doesn't solve the problem!
    I don't have permission (access rights) to switch off the complete cache (local and global).
    Any ideas, please?
    Thanks and best regards,
    Rachidoo
    P.S.: sorry for my bad English! I have to refresh it soon :)

    Hi Praba,
    the entries stored in RSRCACHE are for the global cache; there is no entry for my query in the cache monitor!
    I execute the query in RSRT using the Java web button with cache mode inactive, so the results will be stored in the local cache.
    This is what I want to do for my performance tests in my thesis:
    1. Run a query for the first time with cache inactive and note its runtime
    2. Run the query again with cache inactive and note its runtime
    3. Clear the local cache (I don't know how to do this??)
    4. Change the read mode of the query in RSRT, then run the same query for the first time and note its runtime
    5. Run the query again and note its runtime.
    I'm doing the same procedure for each read mode.
    The problem is in step 4: the OLAP processor gets the old results from the cache, so I get a wrong runtime for my tests.
    Generating the report doesn't help. Any ideas, please?

  • ESSO delete local cache in Citrix Server

    Hi all,
    I would like to know, when configuring ESSO on the Citrix server, why do I need to enable "Delete local cache"? Is there any problem if I do not allow deleting the local cache on the Citrix server?
    Thanks

    It has to do with the Citrix server being a shared system. You should also be enabling the option to store the local cache in memory only. I'm not sure of the exact reason, but I do know it doesn't seem to function properly when those settings are not set as recommended.

  • Multiple AIR apps with the same local cache?

    Hi guys,
    Is it possible to create multiple AIR apps (for mobile & desktop) that use the same local cache?
    For example: two apps for iPad would use the same data store (local cache). If we synchronize (with LCDS) and get all the data in one application, can we access that data set from the other application when we open it?
    Thx!

    Hi Vikram,
    Even though I think it is technically not possible, even if it were I would not recommend doing this. I think this is asking for problems, and you can wait for the day that somebody messes up your production system, thinking it is DEV.
    I would use names like DEV_Oracle_BI_DW_Base and PRD_Oracle_BI_DW_Base to clearly distinguish between the environments. But then again, I think Informatica forces you to use different names.
    Regards,
    Toin.
    ~Corrected typo.
    Edited by: Toin on Oct 16, 2008 4:02 PM

  • Clear Windows local cache

    Hi,
    After a 10MB file transfer across a WAN from the DC to a branch office, with WAEs in inline interception mode, I noticed subsequent transfers were extremely fast even without the WAAS appliances' interception. It appears the Windows OS was also doing some local caching. I have checked and cleared the Temp folder and its contents, but there is no change.
    Are there any other Windows Cache locations? How do I solve this?

    Obiora,
    The Windows redirector uses some caching operations for read and write requests, but it does not keep a cache of the file itself.
    Are you sure the WAEs were not handling the traffic?
    Zach

  • HTMLLoader - is it possible to catch/redirect page content, like a local cache?

    Here's the scenario: I have a kiosk app I'm working on, and I am loading HTML pages within it using the HTMLLoader class. I'm curious whether it is possible to catch requests, mainly for video and images, from the HTML page and redirect the request.
    Essentially what I want is a way to set up a local cache of images, video, and possibly data, and have the parent AIR app manage it. For example, the content is managed via an online CMS, and when the kiosk runs, I'd like it to cache all the images/videos it needs locally for playback, and add any new images/content as it changes.
    I have complete control over both ends, so if access/permissions/crossdomain files need to happen, that's no problem.
    Thanks in advance!

    Here is a nice piece of code that might get you started:
    http://cookbooks.adobe.com/post_Caching_Images_to_disk_after_first_time_they_are_l-10784.html

  • Safari is not opening some webpage showing the error:  /usr/local/cache/files/block.html;400

    Hi
    Safari is not opening some webpages, showing the error: /usr/local/cache/files/block.html;400
    Please help me.
    thanks

    Hey blissfull71,
    If you are having issues loading certain webpages in Safari, you may find the information and troubleshooting steps outlined in the following article helpful:
    Safari 6/7 (Mavericks): If Safari can’t open a website
    Cheers,
    - Brenden

  • Folder redirection with Offline files enabled - Can I change the location of the locally cached files?

    I have a 2012 r2 server setup with folder redirection and offline files enabled. All this works perfectly.
    As you probably know, the local cache for offline files is stored at c:\windows\csc\v2.0.6\namespace\servername\
    The problem I have run into is that one user (who cannot be told to delete files, cough CEO cough) has a very large Documents folder and a small SSD for his C drive, so the offline files are filling up his SSD. He wants all his files to be synced, so decreasing the maximum disk usage is also not an option.
    What I would like to do is move the offline files to his D drive, which is a large drive; however, I have been unable to find any official method for doing this. Is there any provision to change this?
    If not, would it work to move the entire \servername\ path to the D drive and then create a junction at c:\windows\csc\v2.0.6\namespace\servername\ that points to d:\servername\?
    Thanks,
    Travis

    Hi,
    The following article is for Windows Vista, but it should work in Windows 7 as well.
    How to change the location of the CSC folder by configuring the CacheLocation registry value in Windows Vista
    http://support.microsoft.com/kb/937475
    Meanwhile creating a symbolic link should also work like:
    mklink /d "C:\Windows\CSC" "D:\CSC"
    Note: It will create the d:\csc folder, so you do not need to create it manually.
    Note 2: As mentioned above, you may need to re-sync the offline files. Personally, I also think robocopy will not work.
