Question on local cache

I use a near cache mechanism, with the front cache as a local cache and the back cache as a distributed cache.
I have a WebLogic admin server that manages 2 clusters, and every cluster has 4 managed servers. I have coherence.jar in the server classpath, so I haven't included it in my application-specific WAR file. If I deploy a couple of applications to these clusters and start the managed servers, how many local caches will there be on the WebLogic JVMs?
Will there be only one local cache on the WebLogic admin JVM, or one copy of the local cache on each managed server?
Thanks much

There will be a local cache in each JVM in which the application is running. Based on your description, there are 8 managed servers running the application, so there will be 8 local caches. I am assuming you are using a near cache and that the cache configuration is available to each WLS instance.
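
For reference, a minimal near-cache scheme of the kind discussed here might look like the sketch below. This is an illustration only; the scheme names, the front-cache size and the invalidation strategy are placeholder assumptions rather than the original poster's configuration. Each application JVM that loads such a configuration holds its own front (local) cache, while the back cache is the cluster-wide distributed cache.

<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
  <invalidation-strategy>present</invalidation-strategy>
</near-scheme>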

Similar Messages

  • Local Cache question

    Hi,
    I want to create a partitioned cache in a cluster. I also want entries in the partitioned cache to be removed automatically after a specified amount of time, to be able to reduce the size of the distributed data set.
    As I understand it, this can be done by configuring the local-scheme on each node with the expiry-delay and flush-delay settings. Is that correct? Will expired entries be removed immediately (freeing memory) each time the cache is periodically flushed?
    I do not want to restrict the size of the local caches in any other way, so I don't want to use an eviction policy at all. If I set high-units to 0, will this completely disable eviction in the local cache?
    Regards
    Andreas

    Hi Andreas,
    This is correct. Eviction and expiry are independent concepts. Expiry is a logical concept, so the data may remain in memory until the flush is performed. The flush is triggered by the next cache access (after the minimum flush period) rather than by a "timer" event.
    For more information please refer to the local scheme configuration element documentation.
    Jon Purdy
    Tangosol, Inc.
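    As a rough sketch of what is described above (the values are arbitrary placeholders, not a recommendation), a partitioned cache whose backing map expires entries without any size-based eviction could be configured along these lines:
    <distributed-scheme>
      <scheme-name>example-expiring</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <!-- 0 means unlimited size, so no eviction occurs -->
          <high-units>0</high-units>
          <!-- entries become logically expired 15 minutes after insert/update -->
          <expiry-delay>15m</expiry-delay>
          <!-- expired entries are physically removed on the next cache access, at most once per minute -->
          <flush-delay>1m</flush-delay>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>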

  • Local Cache Visibility from the Cluster

    Hi, can you give me an explanation for the following Coherence issue, please?
    I found in the documentation that the Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM.
    On the other hand, I also found the following statement:
    “Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes.”
    My questions are:
    If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?
    Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?
    Best Regards,
    Tomislav Milinovic

    Tomislav,
    I will answer your questions on top of your statements, OK?
    "Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM"
    Considering the partitioned (distributed) scheme, Coherence is a truly peer-to-peer technology in which data is spread across a cluster of nodes, the primary data is stored in a local JVM of one node, and its backup is stored in another node, preferably in another site, cluster or rack.
    "Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes"
    Yes. It does not matter that the data is stored locally on a single node of the cluster: when you access that data through its key, Coherence automatically finds it in the cluster and brings it to you. The location of the data is transparent to the developer, but one thing is certain: you have a global view of the caches, meaning that from every single member you have access to all of the stored data. This is part of the magic that the Coherence protocol (TCMP) does for you.
    "If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?"
    As I said earlier, yes, you can access all of the stored data from any node of the cluster. The way in which each node stores its data (known as the backing map scheme) can differ: one node can use Elastic Data as its backing map scheme while another uses the off-heap NIO Memory Manager. This is simply how each node stores its own portion of the data. From an architectural point of view, it is a good idea to use the same backing map scheme across all nodes, because different backing map schemes can behave differently when you read and/or write data; one could be faster and another slower.
    "Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?"
    There is no need for that, since the data is already available to all cluster nodes without any extra effort. Having said that, it would also be a bad strategy: Coherence is a shared-nothing technology, and it uses that model to scale and to give you predictable latency. If you start using a shared disk as storage for the data, you lose the essence of the shared-nothing benefits and create a huge bottleneck in the data management layer, since every read/write will contend for the same I/O.
    Cheers,
    Ricardo Ferreira
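    To make the backing-map point concrete, the following is a hedged sketch (scheme names and sizes are illustrative assumptions) of a distributed scheme whose storage-enabled members keep their share of the data in off-heap NIO memory; members on other nodes still see the same clustered cache regardless of this local storage choice:
    <distributed-scheme>
      <scheme-name>example-nio-backed</scheme-name>
      <backing-map-scheme>
        <external-scheme>
          <nio-memory-manager>
            <initial-size>1MB</initial-size>
            <maximum-size>100MB</maximum-size>
          </nio-memory-manager>
        </external-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>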

  • Local Cache containing all Distributed Cache entries

    Hello all,
    I am seeing what appears to be some sort of problem. I have 2 JVMs running, one for the application and the other serving as a Coherence cache JVM (near-cache scheme).
    When I stop the cache JVM, the local JVM displays all 1200 entries, even though <high-units> for that cache is set to 300.
    Does the local JVM keep a copy of the distributed data?
    Can anyone explain this?
    Thanks

    Hi,
    I have configured a near cache with a front scheme and a back scheme. In the front scheme I have used a local cache, and in the back scheme I have used the distributed cache. My idea is to have the distributed cache on the Coherence servers.
    I have one JVM running the WebLogic app server, and a second host running 4 Coherence servers, all forming the cluster.
    Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers (SSI)?
    Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling the data is also stored locally on the JVM that runs the WebLogic server.
    Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
    Can somebody help me with these questions? I appreciate your time.

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK
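    As a minimal sketch of the write-through arrangement described above (the store class name is hypothetical, standing in for whatever CacheStore implementation is used, e.g. a JPA-based one):
    <local-scheme>
      <scheme-name>example-local-store</scheme-name>
      <cachestore-scheme>
        <class-scheme>
          <!-- hypothetical write-through store implementation -->
          <class-name>com.example.MyJpaCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
    </local-scheme>
    With a local-scheme, writes go through the store synchronously, and the cached data remains visible only to the JVM that owns the cache.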

  • How long is the locally cached token valid?

    Dear Forum,
    we are currently planning to show Microsoft RMS to potential clients with a live demo. In our preparations we noticed that users can still be authenticated even though there is no connection to the cloud service. We also figured out that some kind of cache (more specifically, parts of the certificates) is stored in C:\Users\<username>\AppData\Local\Microsoft\MSIPC, but it is not transparent how this cache is used and for how long the locally cached token is valid for a specific RMS-protected file. In order to be prepared for such questions after the demo, I kindly ask for your help on this matter.
    Thanks and Regards
    Fabio

    Hi Fabio,
    When a user first authenticates against the certification URL of an RMS server, the user is then issued a RAC (rights account certificate) or GIC certificate (those are what you see in the MSIPC folder). The RAC/GIC is issued after the user authenticates to the domain and is used in further communication between the user and the RMS server to confirm the user's identity. For AD RMS (RMS on premises) the default RAC lifetime is 365 days (it can be manually changed). I would assume that the same amount of time is set for Azure RMS.
    You can get a better view of the certificates from this great post by Dan Plastina:
    http://blogs.technet.com/b/rms/archive/2012/04/16/licenses-and-certificates-and-how-ad-rms-protects-and-consumes-documents.aspx

  • Expire all local cache entries at specific time of day

    Hi,
    We have a need for expiring all local cache entries at specific time(s) of the day (all days, like a crontab).
    Is this possible through Coherence configuration?
    Thanx,

    Hi,
    AFAIK there is no out-of-the-box solution, but you can certainly use the Coherence API along with Quartz to develop a simple class that is triggered to remove all the entries from the cache at a certain time. You can also define your own custom cache factory configuration; an example is available here: http://sites.google.com/site/miscellaneouscomponents/Home/time-service-for-oracle-coherence
    Hope this helps!
    Cheers,
    NJ
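    For example, here is a small sketch along the lines suggested above, assuming Quartz 2.x and the Coherence Java API are on the classpath; the cache name "local-example" and the 02:00 schedule are placeholders:
    import org.quartz.CronScheduleBuilder;
    import org.quartz.Job;
    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;
    import org.quartz.impl.StdSchedulerFactory;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class DailyCacheFlush implements Job {
        // Quartz calls this whenever the cron trigger fires.
        public void execute(JobExecutionContext context) throws JobExecutionException {
            NamedCache cache = CacheFactory.getCache("local-example"); // placeholder cache name
            cache.clear();                                             // remove all entries from the cache
        }

        public static void main(String[] args) throws SchedulerException {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(DailyCacheFlush.class).withIdentity("dailyCacheFlush").build();
            // Fire every day at 02:00 local time.
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }
    Note that for a purely local cache, which is contained within a single JVM, this clear only affects the JVM in which the job runs.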

  • Air Runtime Error when querying local Cache

    Hi,
    I am running into trouble when attempting to fill a datagrid from a local SQLite cache when the cache has been emptied, either because it was never filled with any data or because the files have been deleted. One would think there would be a mechanism in Flex to check the cache for proper structure without causing a runtime error. Reading the DataService documentation, there appears to be no way to inspect the cache without getting the runtime error.
    Basically, I have an online/offline application synchronizing data with a MySQL server via LiveCycle Data Services. Everything works fine online, and also offline as long as the cache files have data in them.
    The problem is that if the program has just been installed (and the server is not reachable) and the user hasn't connected to the server to retrieve any data yet, the local cache is empty and the runtime error occurs; likewise, if for some reason the cache files get deleted, AIR will throw the runtime error.
    The resulting error:
    Error: Unable to initialize destinations on server:
    Thanks
    RM

    Hi, I'm having the same problem =(
    I went to log on to my MySpace and I got this message:
    Server Error in '/' Application.
    Runtime Error
    Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
    Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".
    <!-- Web.Config Configuration File -->
    <configuration>
    <system.web>
    <customErrors mode="Off"/>
    </system.web>
    </configuration>
    Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL.
    <!-- Web.Config Configuration File -->
    <configuration>
    <system.web>
    <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
    </system.web>
    </configuration>
    ----and I don't know what to do.
    Can anyone please help me, because this is really frustrating =(
    thanks for any help and replies @};-

  • How to clear Local-Cache Entries for a Query in BW?

    Hi There,
    I'm a student and I need your help for my thesis!
    I execute the same query many times in BEx Web Analyzer and note the query response time in ST03N, each time using a different read mode while the cache mode is inactive (Query Monitor RSRT).
    The first time I execute the query it reads from the database; the second time it uses the local cache, and that's okay!
    My problem is:
    When I change the read mode and execute the query again, the first run still uses the old entries from the local cache, so I get a wrong response time for that first run!
    I know that even while the cache mode is inactive, the local cache is still used. So how can I delete the local cache entries each time I change the read mode and execute the query? In the cache monitor (RSRCACHE) I only find entries for the global cache.
    I've already tried closing the session and logging in to the system again, but it doesn't solve the problem!
    I don't have permission (access rights) to switch off the cache completely (local and global).
    Any ideas, please?
    Thanks and best regards,
    Rachidoo
    P.S.: Sorry for my bad English! I have to refresh it soon :)

    Hi Praba,
    The entries stored in RSRCACHE are for the global cache; there is no entry for my query in the cache monitor!
    I execute the query in RSRT using the Java Web button with the cache mode inactive, so the results are stored in the local cache.
    This is what I want to do for the performance tests in my thesis:
    1. Run a query for the first time with the cache inactive and note its runtime.
    2. Run the query again with the cache inactive and note its runtime.
    3. Clear the local cache (I don't know how to do this).
    4. Change the read mode of the query in RSRT, then run the same query for the first time and note its runtime.
    5. Run the query again and note its runtime.
    I'm following the same procedure for each read mode.
    The problem is in step 4: the OLAP processor gets the old results from the cache, so I get a wrong runtime for my tests.
    Regenerating the report doesn't help. Any ideas, please?

  • ESSO delete local cache in Citrix Server

    Hi all,
    I would like to know why, when configuring ESSO on the Citrix server, I need to enable the "Delete local cache" option. Is there any problem if I do not allow the local cache to be deleted on the Citrix server?
    Thanks

    It has to do with the Citrix server being a shared system. You should also enable the option to store the local cache in memory only. I'm not sure of the exact reason, but I do know it doesn't seem to function properly when those settings are not set as recommended.

  • Java Local Cache Outperformed C++ Local Cache in 3.6.1

    Currently I'm using the same local cache configuration to publish 10000 records of a portable object and retrieve the same item a few times from both a Java and a C++ client with Oracle Coherence 3.6.1. I'm using the Linux x86 version for both Java and C++.
    Results from Java: 3 microseconds (best case), 4-5 microseconds (average case)
    Results from C++: 7 microseconds (best case), 8-9 microseconds (average case)
    With a local cache on both Java and C++, the data retrieval latency should ideally be the same, but I am seeing C++ lag by about 4 microseconds. Is there any C++ configuration with which I can improve the performance to reach at least 4-5 microseconds?
    My local cache configuration is as follows:
    <local-scheme>
      <scheme-name>local-example</scheme-name>
    </local-scheme>
    So underneath, the Coherence implementation uses SafeHashMap as the default (per the documentation). Please let me know if I'm doing something wrong.

    Hi Dave,
    I have appended my C++ sample code for reference.
    -------------- Main class -------------------
    #include "coherence/lang.ns"
    #include "coherence/net/CacheFactory.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include <ace/High_Res_Timer.h>
    #include <ace/Sched_Params.h>
    #include "Order.hpp"
    #include "Tokenizer.h"
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <fstream>
    #include <unistd.h> // for sleep()
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    Order::View readOrder(String::View);
    void createCache(std::string, NamedCache::Handle&, std::string, std::string&, std::string, std::string);
    void readCache(NamedCache::Handle&, std::string, std::string&, std::string, std::string, std::string);
    static int globalOrderIndex = 1;
    int main(int argc, char** argv) {
        try {
            String::View vsCacheName;
            std::string input;
            std::ifstream infile;
            std::string comment = "#";
            infile.open("test-data.txt");
            size_t found;
            std::string result;
            while (!infile.eof()) {
                getline(infile, input);
                if (input.empty())
                    continue;
                // skip comment lines
                found = input.rfind(comment);
                if (found != std::string::npos)
                    continue;
                Tokenizer str(input);
                std::vector<std::string> tokens = str.split();
                vsCacheName = tokens.at(0);
                NamedCache::Handle hCache = CacheFactory::getCache(vsCacheName);
                std::string itemCountList = tokens.at(1);
                std::string searchCount = tokens.at(2);
                std::string skipFirst = tokens.at(3);
                std::string searchValue = tokens.at(4);
                Tokenizer str1(itemCountList);
                str1.setDelimiter(",");
                std::vector<std::string> tokens1 = str1.split();
                for (size_t x = 0; x < tokens1.size(); x++) {
                    std::string count = tokens1.at(x);
                    std::string result;
                    createCache(count, hCache, searchCount, result, vsCacheName, skipFirst);
                    sleep(1);
                    readCache(hCache, searchCount, result, skipFirst, count, searchValue);
                    std::cout << result << std::endl;
                }
            }
            infile.close();
        } catch (const std::exception& e) {
            std::cerr << e.what() << std::endl;
        }
        return 0;
    }

    Order::View readOrder(String::View aotag) {
        globalOrderIndex++;
        return Order::create(aotag);
    }
    void createCache(std::string count, NamedCache::Handle& hCache, std::string searchIndex,
                     std::string& result, std::string cacheName, std::string skipValue) {
        int totalRounds = atoi(count.c_str());
        int search = atoi(searchIndex.c_str());
        int skipFirstData = atoi(skipValue.c_str());
        bool skipFirst = skipFirstData == 1 ? true : false;
        int loop_count = skipFirstData == 1 ? search + 1 : search;
        if (totalRounds == 0)
            return;
        ACE_hrtime_t average(0);
        ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
        ACE_hrtime_t nstime(0);
        for (int x = 0; x < 1; x++) {
            tm->start();
            for (int y = 0; y < totalRounds; y++) {
                std::stringstream out;
                out << globalOrderIndex;
                String::View aotag = out.str();
                Order::View order = readOrder(aotag);
                hCache->put(aotag, order);
            }
            tm->stop();
            tm->elapsed_time(nstime);
            sleep(1);
            if (x > 0 || !skipFirst) // skipping first write because it is an odd result
                average += nstime;
            tm->reset();
        }
        delete tm;
        double totalTimetoAdd = average / (1 * 1000);
        double averageOneItemAddTime = (average / (1 * totalRounds * 1000));
        std::stringstream out;
        out << totalTimetoAdd;
        std::string timeToAddAll = out.str();
        std::stringstream out1;
        out1 << averageOneItemAddTime;
        std::string timetoAddOne = out1.str();
        result.append("------------- Test ");
        result += cacheName;
        result += " with ";
        result += count;
        result += " -------------\n";
        result += "Time taken to publish data: ";
        result += (timeToAddAll);
        result += " us";
        result += "\n";
        result += "Time taken to publish one item: ";
        result += (timetoAddOne);
        result += " us\n";
    }
    void readCache(NamedCache::Handle& hCache, std::string searchCount,
                   std::string& result, std::string skipValue, std::string countVal, std::string searchValue) {
        int skipData = atoi(skipValue.c_str());
        bool skipFirst = skipData == 1 ? true : false;
        int count = atoi(countVal.c_str());
        String::View vsName = searchValue.c_str();
        ACE_hrtime_t average(0);
        int search = atoi(searchCount.c_str());
        int loop_count = skipData == 1 ? search + 1 : search;
        ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
        ACE_hrtime_t nstime(0);
        ACE_hrtime_t best_time(10000000);
        bool isSaturated = true;
        int saturatedValue = 0;
        for (int x = 0; x < loop_count; x++) {
            tm->start();
            Order::View vInfo = cast<Order::View>(hCache->get(vsName));
            tm->stop();
            tm->elapsed_time(nstime);
            if (x > 0 || !skipFirst) {
                average += nstime;
                if (nstime < best_time) {
                    best_time = nstime;
                    if (isSaturated) {
                        saturatedValue = x + 1;
                    }
                } else {
                    isSaturated = false;
                }
                std::cout << nstime << std::endl;
            }
            vInfo = NULL;
            tm->reset();
            // if (x % 1000 == 0)
            //     sleep(1);
        }
        // final read outside the timed loop to confirm the entry is still present
        Order::View vInfo = cast<Order::View>(hCache->get(vsName));
        if (vInfo == NULL)
            std::cout << "No info available" << std::endl;
        delete tm;
        double averageRead = (average / (search * 1000));
        double bestRead = ((best_time) / 1000);
        std::stringstream out1;
        out1 << averageRead;
        std::string timeToRead = out1.str();
        std::stringstream out2;
        out2 << bestRead;
        std::stringstream out3;
        out3 << saturatedValue;
        result += "Average readtime: ";
        result += (timeToRead);
        result += " us, best time: ";
        result += (out2.str());
        result += " us, saturated index: ";
        result += (out3.str());
        result += " \n";
    }
    ----------------- Order.hpp ---------------
    #ifndef ORDER_INFO_HPP
    #define ORDER_INFO_HPP
    #include "coherence/lang.ns"
    using namespace coherence::lang;
    class Order : public cloneable_spec<Order> {
        // ----- constructors ---------------------------------------------------
        friend class factory<Order>;

    public:
        virtual size_t hashCode() const {
            return size_t(&m_aotag);
        }

        virtual void toStream(std::ostream& out) const {
            out << "Order("
                << "Aotag=" << getAotag()
                << ')';
        }

        virtual bool equals(Object::View that) const {
            if (instanceof<Order::View>(that)) {
                Order::View vThat = cast<Order::View>(that);
                return Object::equals(getAotag(), vThat->getAotag());
            }
            return false;
        }

    protected:
        Order(String::View aotag) : m_aotag(self(), aotag) {}
        Order(const Order& that) : super(that), m_aotag(self(), that.m_aotag) {}

        // ----- accessors ------------------------------------------------------
    public:
        virtual String::View getAotag() const {
            return m_aotag;
        }

        // ----- data members ---------------------------------------------------
    private:
        const MemberView<String> m_aotag;
    };
    #endif // ORDER_INFO_HPP
    ----------- OrderSerializer.cpp -------------
    #include "coherence/lang.ns"
    #include "coherence/io/pof/PofReader.hpp"
    #include "coherence/io/pof/PofWriter.hpp"
    #include "coherence/io/pof/SystemPofContext.hpp"
    #include "coherence/io/pof/PofSerializer.hpp"
    #include "Order.hpp"
    using namespace coherence::lang;
    using coherence::io::pof::PofReader;
    using coherence::io::pof::PofWriter;
    using coherence::io::pof::PofSerializer;
    class OrderSerializer : public class_spec<OrderSerializer, extends<Object>, implements<PofSerializer> > {
        friend class factory<OrderSerializer>;

    protected:
        OrderSerializer() {}

    public: // PofSerializer interface
        virtual void serialize(PofWriter::Handle hOut, Object::View v) const {
            Order::View order = cast<Order::View>(v);
            hOut->writeString(0, order->getAotag());
            hOut->writeRemainder(NULL);
        }

        virtual Object::Holder deserialize(PofReader::Handle hIn) const {
            String::View aotag = hIn->readString(0);
            hIn->readRemainder();
            return Order::create(aotag);
        }
    };
    COH_REGISTER_POF_SERIALIZER(1001, TypedBarrenClass<Order>::create(), OrderSerializer::create());
    -----------------Tokenizer.h--------
    #ifndef TOKENIZER_H
    #define TOKENIZER_H
    #include <string>
    #include <vector>
    // default delimiter string (space, tab, vertical tab, newline, carriage return, form feed)
    const std::string DEFAULT_DELIMITER = " \t\v\n\r\f";

    class Tokenizer
    {
    public:
        // ctor/dtor
        Tokenizer();
        Tokenizer(const std::string& str, const std::string& delimiter = DEFAULT_DELIMITER);
        ~Tokenizer();

        // set string and delimiter
        void set(const std::string& str, const std::string& delimiter = DEFAULT_DELIMITER);
        void setString(const std::string& str);          // set source string only
        void setDelimiter(const std::string& delimiter); // set delimiter string only

        std::string next();                              // return the next token, return "" if it ends
        std::vector<std::string> split();                // return array of tokens from current cursor

    protected:
    private:
        void skipDelimiter();                            // ignore leading delimiters
        bool isDelimiter(char c);                        // check if the current char is a delimiter

        std::string buffer;                              // input string
        std::string token;                               // output string
        std::string delimiter;                           // delimiter string
        std::string::const_iterator currPos;             // iterator pointing at the current position
    };
    #endif // TOKENIZER_H
    --------------- Tokenizer.cpp -------------
    #include "Tokenizer.h"
    Tokenizer::Tokenizer() : buffer(""), token(""), delimiter(DEFAULT_DELIMITER)
    {
        currPos = buffer.begin();
    }

    Tokenizer::Tokenizer(const std::string& str, const std::string& delimiter) : buffer(str), token(""), delimiter(delimiter)
    {
        currPos = buffer.begin();
    }

    Tokenizer::~Tokenizer()
    {
    }

    void Tokenizer::set(const std::string& str, const std::string& delimiter)
    {
        this->buffer = str;
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setString(const std::string& str)
    {
        this->buffer = str;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setDelimiter(const std::string& delimiter)
    {
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    std::string Tokenizer::next()
    {
        if (buffer.size() <= 0) return "";   // skip if buffer is empty
        token.clear();                       // reset token string
        this->skipDelimiter();               // skip leading delimiters
        // append each char to token string until it meets a delimiter
        while (currPos != buffer.end() && !isDelimiter(*currPos))
        {
            token += *currPos;
            ++currPos;
        }
        return token;
    }

    void Tokenizer::skipDelimiter()
    {
        while (currPos != buffer.end() && isDelimiter(*currPos))
            ++currPos;
    }

    bool Tokenizer::isDelimiter(char c)
    {
        return (delimiter.find(c) != std::string::npos);
    }

    std::vector<std::string> Tokenizer::split()
    {
        std::vector<std::string> tokens;
        std::string token;
        while ((token = this->next()) != "")
            tokens.push_back(token);
        return tokens;
    }
    I'm really concerned about the performance; 1 microsecond is very valuable to me, so if the C++ time could be reduced to 5 microseconds that would be a great help. I'm building the above code with the following release arguments:
    "g++ -Wall -ansi -m32 -O3"
    The following file is my test script:
    ------------ test-data.txt ---------------
    #cache type - data load - read attempts - skip first - read value
    local-orders 10000 5 1 1
    # dist-extend 1,100,10000 5 1 1
    # repl-extend 1,100,10000 5 1 1
    You can uncomment the entries one by one to test different caches with different loads.
    Thanks for the reply
    sura

  • Multiple AIR apps with the same local cache?

    Hi guys,
    Is it possible to create multiple AIR apps (for mobile & desktop) that can use the same local cache?
    For example: 2 apps for iPad will use the same data store (local cache). If we synchronize (with LCDS) and get all the data for 1 application, if we open the second application, can we access the data set from the other application?
    Thx!

    Hi Vikram,
    Even though I think it is technically not possible, even if it were I would not recommend doing this. I think this is asking for problems, and you can wait for the day that somebody messes up your production system thinking it is DEV.
    I would use names like DEV_Oracle_BI_DW_Base and PRD_Oracle_BI_DW_Base to clearly distinguish between the environments. But then again, I think Informatica forces you to use different names.
    Regards,
    Toin.

  • Clear Windows local cache

    Hi,
    After a 10MB file transfer across a WAN from the DC to a branch office with WAEs in inline interception mode, I noticed subsequent transfers were extremely fast even without the WAAS appliances' interception. It appears the Windows OS was also doing some local caching. I have checked and cleared the Temp folder and its contents, but there is no change.
    Are there any other Windows Cache locations? How do I solve this?

    Obiora,
    The Windows redirector uses some caching operations for read and write requests, but there isn't a cache of the file that is kept.
    Are you sure the WAEs were not handling the traffic?
    Zach

  • HtmlLoader - is it possible to catch/redirect page content? like a Local cache?

    Here's the scenario: I have a kiosk app I'm working on, and I am loading HTML pages within it using the HTMLLoader class. I'm curious whether it is possible to catch requests from the HTML page, mainly for video and images, and redirect them.
    Essentially what I want is a way to set up a local cache of images, video, and possibly data, and have the parent AIR app manage it. For example, the content is managed via an online CMS, and when the kiosk runs I'd like it to cache all the images/videos it needs locally for playback, and add any new images/content as the content changes.
    I have complete control over both ends, so if access/permissions/crossdomain files need to happen, that's no problem.
    Thanks in advance!

    Here is a nice piece of code that might get you started:
    http://cookbooks.adobe.com/post_Caching_Images_to_disk_after_first_time_they_are_l-10784.html

  • Safari is not opening some webpage showing the error:  /usr/local/cache/files/block.html;400

    Hi
    Safari is not opening some webpages, showing the error: /usr/local/cache/files/block.html;400
    Please help me.
    thanks

    Hey blissfull71,
    If you are having issues loading certain webpages in Safari, you may find the information and troubleshooting steps outlined in the following article helpful:
    Safari 6/7 (Mavericks): If Safari can’t open a website
    Cheers,
    - Brenden
