Coherence Extend and Local Cache

I am trying to use Coherence Extend to do some work with a cache. Is that possible with a local cache?
I keep getting a NullPointerException, as if the data is not being stored in the cache.
     <cache-mapping>
          <cache-name>local-pds2-*</cache-name>
          <scheme-name>local-cache</scheme-name>
     </cache-mapping>
     <local-scheme>
          <scheme-name>local-cache</scheme-name>
               <eviction-policy>LRU</eviction-policy>
               <high-units>32000</high-units>
               <low-units>10</low-units>
               <unit-calculator>FIXED</unit-calculator>
               <expiry-delay>10ms</expiry-delay>
               <flush-delay>1000ms</flush-delay>
     </local-scheme>
Is there something wrong in my configuration? (A minimal verification sketch follows the server config below.)

This is the config I use for the client:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
     <cache-mapping>
          <cache-name>local-pds2-*</cache-name>
          <scheme-name>local-cache</scheme-name>
     </cache-mapping>
<cache-mapping>
<cache-name>dist-pds2-*</cache-name>
<scheme-name>extend-dist</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
     <local-scheme>
          <scheme-name>local-cache</scheme-name>
               <eviction-policy>LRU</eviction-policy>
               <high-units>32000</high-units>
               <low-units>10</low-units>
               <unit-calculator>FIXED</unit-calculator>
               <expiry-delay>10ms</expiry-delay>
               <flush-delay>1000ms</flush-delay>
     </local-scheme>
<remote-cache-scheme>
<scheme-name>extend-dist</scheme-name>
<service-name>ExtendTcpCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>172.16.2.229</address>
<address>localhost</address>
<port>5354</port>
</socket-address>
</remote-addresses>
<connect-timeout>10s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-cache-scheme>
</caching-schemes>
</cache-config>
and this is the config for the server:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
     <defaults>
          <serializer system-property="tangosol.coherence.serializer"/>
          <socket-provider system-property="tangosol.coherence.socketprovider"/>
     </defaults>
     <caching-scheme-mapping>
          <cache-mapping>
               <cache-name>dist-pds2-*</cache-name>
               <scheme-name>dist-default</scheme-name>
          </cache-mapping>
     </caching-scheme-mapping>
     <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>distributed</scheme-name>
          <init-params>
               <init-param>
                    <param-name>back-size-limit</param-name>
                    <param-value>8MB</param-value>
               </init-param>
          </init-params>
     </cache-mapping>
     <distributed-scheme>
          <scheme-name>distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
               <local-scheme>
                    <scheme-ref>binary-backing-map</scheme-ref>
               </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
     </distributed-scheme>
     <local-scheme>
          <scheme-name>binary-backing-map</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{back-size-limit 0}</high-units>
          <unit-calculator>BINARY</unit-calculator>
          <expiry-delay>{back-expiry 1h}</expiry-delay>
          <flush-delay>1m</flush-delay>
          <cachestore-scheme></cachestore-scheme>
     </local-scheme>
     <caching-schemes>
          <distributed-scheme>
               <scheme-name>dist-default</scheme-name>
               <backing-map-scheme>
                    <local-scheme/>
               </backing-map-scheme>
               <autostart>true</autostart>
          </distributed-scheme>
          <proxy-scheme>
               <service-name>ExtendTcpProxyService</service-name>
               <acceptor-config>
                    <tcp-acceptor>
                         <local-address>
                              <address >localhost</address>
                              <port >5354</port>
                         </local-address>
                    </tcp-acceptor>
               </acceptor-config>
               <proxy-config>
                    <cache-service-proxy>
                         <enabled>true</enabled>
                    </cache-service-proxy>
                    <invocation-service-proxy>
                         <enabled>true</enabled>
                    </invocation-service-proxy>
               </proxy-config>
               <autostart >true</autostart>
          </proxy-scheme>
     </caching-schemes>
</cache-config>
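For reference, here is a minimal sketch (not part of the original post) of how one might check whether the local-pds2-* mapping stores anything at all, independent of Extend. It assumes the client configuration above is the one in use; the cache name local-pds2-test is just an example that matches the mapping.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LocalCacheCheck {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("local-pds2-test");
            cache.put("key-1", "value-1");
            // With <expiry-delay>10ms</expiry-delay> the entry may already have
            // expired by the time it is read back, which would look exactly like
            // "data is not being stored in the cache".
            Object value = cache.get("key-1");
            System.out.println("read back: " + value);
            CacheFactory.shutdown();
        }
    }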

Similar Messages

  • Coherence-Extend and Continuous Query performance

    Hi,
    I am trying to evaluate the performance impact of continuous queries when using Coherence Extend (TCP). The idea is that desktop clients will be running continuous queries against a cluster, and other processes will be updating the data in that cluster. The clients themselves take a purely read-only view of the data.
    In my tests, I find that the updater process takes about 250ms to update 5000 values in the cache (using a putAll operation). When I have a continuous query running against a remote cache, linked with coherence extend, the update time increases to about 1500ms. This is not CPU bound.
    Is this what people would expect?
    If so, this raises questions for me about:
    1) slow subscribers - what if one of my clients is very badly behaved? Can I detect this and/or take action?
    2) conflation of updates - can Coherence do conflation?
    3) can I get control to send object deltas over the wire rather than entire objects?
    Is this a use case for which Coherence Extend and continuous queries were designed?
    Robert

    Yes, it is certainly possible, although depending on your requirements it may be more or less additional coding. You have a few choices. For example, since you have a CQC on the cache, you could conceivably aggregate locally (on any event). In other words, since all the data are local, there is no need to do the parallel aggregation (unless it is CPU limited). Depending on the aggregation, you may only have to recalculate part of it.
    You can access the internal data structure (Map) within the CQC as follows:
    Map map = cqc.getInternalCache();
    // now we can do aggregation
    NamedCache cache = new WrapperNamedCache(map);
    cache.aggregate(..);
    More complex approaches would only recalculate portions based on the event, or (depending on the function) might use the event to adjust the aggregated results.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/
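    As an illustration of the local-aggregation idea Cameron describes, here is a hedged, self-contained sketch; the cache name dist-pds2-trades and the Count aggregator are examples only, not from the thread.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.net.cache.WrapperNamedCache;
    import com.tangosol.util.aggregator.Count;
    import com.tangosol.util.filter.AlwaysFilter;

    import java.util.Map;

    public class LocalAggregationExample {
        public static void main(String[] args) {
            NamedCache remote = CacheFactory.getCache("dist-pds2-trades");
            // fCacheValues = true keeps the values locally, not just the keys
            ContinuousQueryCache cqc =
                    new ContinuousQueryCache(remote, new AlwaysFilter(), true);

            // The CQC's internal map already holds the (filtered) data locally,
            // so the aggregation below never goes over the wire.
            Map local = cqc.getInternalCache();
            NamedCache localView = new WrapperNamedCache(local, "local-view");
            Object result = localView.aggregate(new AlwaysFilter(), new Count());
            System.out.println("count = " + result);
        }
    }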

  • Local Cache Misses increases latency.

    Our data set contains data that may be null (does not exist for a given key).
    When we call get(key) in the local cache and the value is null, we seem to incur a network hop to the cluster to attempt to get the value (although this is null). It seems to do this on subsequent gets as well. Is there a way to configure a near cache to cache misses so we do not incur the additional network hop to the cluster? We have a latency-sensitive application where this is causing issues.
    We can think of two workarounds:
    1 - Cache a NullObject which we process as null - so there are no actual nulls.
    2 - Extend the local cache class and we cache the misses and add a MapListener to listen for data updates to manage the cache misses.
    Is there a built-in Coherence solution so we don't reinvent the wheel?

    Let me explain the situation we have.
    We want to achieve consistently low response time, regardless of whether there is a mapping for a given key in the cache.
    We are using near cache with a size-limited local cache at the front and a remote cache (tcp-extend) at the back.
    When the requested key does exist in the cache, it is fetched from the slow back cache on the first get() and cached in the fast front cache. Any subsequent requests for the same key will hit the fast front cache (unless the entry is evicted).
    However, when the key does not exist in the cache, the near cache implementation always requests it from the back cache. As a result, for any cache miss we incur a penalty of a network roundtrip to the cluster.
    I wonder if there's a ready-to-use implementation of a cache which caches both hits and misses for fast subsequent look-ups. Ideally, this would be some sort of drop-in implementation, so that we simply reconfigure our cache schemes and don't need to touch the application code.
    Of course, we can insert dummy entries to the cache on the server side, so that every key is always associated with a value (null value for the 'missing' key). But this seems to be wasteful in terms of memory, and also needs extra effort to maintain.
    So although 'Cache NullObject' is an option, it's not the preferred one; I'm more hoping that Coherence has seen this 'problem' before and has a built-in solution.
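    For what it's worth, here is a hedged sketch of workaround 1 (a shared sentinel for "no value", so the near-cache front map also caches misses); the class and helper names are hypothetical, and the sentinel would still need to be serializable for your configured serializer.

    import com.tangosol.net.NamedCache;
    import java.io.Serializable;

    // Shared sentinel meaning "this key is known to have no value".
    public final class NullValue implements Serializable {
        public static final NullValue INSTANCE = new NullValue();
        private NullValue() {}
    }

    class MissCachingLookup {
        private final NamedCache cache;

        MissCachingLookup(NamedCache cache) {
            this.cache = cache;
        }

        // Returns null for "known missing" keys; once the sentinel has been read,
        // it sits in the near-cache front map and later misses stay local.
        Object get(Object key) {
            Object value = cache.get(key);
            if (value == null) {
                cache.put(key, NullValue.INSTANCE); // remember the miss in the cluster
                return null;
            }
            return (value instanceof NullValue) ? null : value;
        }
    }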

  • Tiny doc error? Configuring+and+Using+Coherence+Extend

    This page:
    http://coherence.oracle.com/display/COH35UG/Configuring+and+Using+Coherence+Extend
    mentions the string "dist-extend-direct" only once. Is that intended just as an example of something that would match against
    <cache-name>dist-*</cache-name>
    or was that an error? It seems like it should be mentioned explicitly in the XML config file examples on that page.
    -Andrew

    Hi Andrew,
    It is the former. It is intended as an example of a cache name that would match the "<cache-name>dist-*</cache-name>" cache mapping in the cluster cache config.
    Patrick

  • What is the definition of cookies, cache, and local storage?

    Hello all, my first post but I've been Mac'd for about two years now. 
    My question is, how do cookies, cache, and local storage function, and other than where they're stored, what makes them different.

    Thanks Sig, actually I brushed up on the definitions prior to posting the question. I was hoping for more of a dialog on the subject, as it seems there is some confusion; at least my conversation with Support would suggest so. So I'm curious what other members of the Apple community have to say.
    Local Storage is flash cookies, yet I was told not to be concerned as I am blocking cookies just fine. So are we talking hot dogs or franks?...

  • Mail lost from .mac and locally, can i restore from cache???

    Hi,
    I have 2 years' worth of messages (about 3000) on .mac and have set up Tiger Mail to view these via IMAP and cache all messages and attachments locally.
    A couple of days ago I got a "you are running out of space" warning from .mac, so I deleted (via the webmail interface) the oldest 500 messages. I also deleted them from trash (again via the web interface).
    When I then opened Mail it showed 0 messages. I then logged back into .mac webmail and all messages had been deleted!!!
    I emailed .mac support about restoring my account but am not getting
    any sensible help.
    I have located in my mail/imap folder the cache folder which contains most messages (no emlx extension) up to 17 June 2006, it is about 600Mb.
    However the message folder is empty (contains one recent message).
    I have tried setting up a dummy IMAP account, moving the data to it, and attempting to import the mailbox back in, but no joy. Most help I can find is targeted at people who still have the Messages folder intact.
    Any help greatly appreciated!!!!!!!!
    imac g5 1.8mhz   Mac OS X (10.4.7)  

    Hi Toby.
    Sounds like you upgraded to Tiger Mail around 17 June 2006, right? And the cache folder you're talking about is called CachedMessages, right? If that's the case, then what seems to have happened is the following.
    For some reason, your mail was deleted from the server (don't ask me why), and Mail deleted the cached copies from your disk as well after synchronizing with the server. BUT, the format used by Mail 2.x to store messages on disk is different from the format used by Mail 1.x. The CachedMessages folder is where Mail 1.x stored cached messages. Mail 2.x uses Messages folders and *.emlx files instead. Mail 2.x leaves the old files used by Mail 1.x there and just ignores them after the conversion. That's why CachedMessages has survived the disaster, whereas Messages hasn't. And that's also why you cannot see the cached messages in Mail now.
    The only way I know of to recover those messages is using Emailchemy to convert them to mbox format first, then import them back into Mail doing File > Import Mailboxes and choosing Other as the data format. I haven't tried Emailchemy myself, so I don't know how it works.

  • Java Local Cache Outperformed C++ Local Cache in 3.6.1

    Currently I'm using the same local cache configuration to publish 10000 records of a portable object and retrieve the same item a few times from both Java and C++ clients with Oracle Coherence 3.6.1. I'm using the Linux x86 version for both Java and C++.
    Results from Java: 3 microseconds (best case), 4-5 microseconds (average case)
    Results from C++: 7 microseconds, 8-9 microseconds (average case)
    When we have a local cache for both Java and C++, data retrieval latency should ideally be the same. But I witnessed a 4 microsecond lag in C++. Is there any sort of C++ configuration with which I can improve the performance to reach at least 4-5 microseconds?
    My local cache configuration is as follows.
    <local-scheme>
    <scheme-name>local-example</scheme-name>
    </local-scheme>
    So underneath, the Coherence implementation uses SafeHashMap as the default (as per the documentation). Please let me know if I'm doing something wrong.

    Hi Dave,
    I have appended my C++ sample code for reference.
    -------------- Main class -------------------
    #include "coherence/lang.ns"
    #include "coherence/net/CacheFactory.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include <ace/High_Res_Timer.h>
    #include <ace/Sched_Params.h>
    #include "Order.hpp"
    #include "Tokenizer.h"
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <fstream>
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    Order::View readOrder(String::View);
    void createCache(std::string, NamedCache::Handle&, std::string, std::string&, std::string, std::string);
    void readCache(NamedCache::Handle&, std::string, std::string&, std::string, std::string, std::string);
    static int globalOrderIndex = 1;
    int main(int argc, char** argv) {
    try {
    String::View vsCacheName;
    std::string input;
    std::ifstream infile;
    std::string comment = "#";
    infile.open("test-data.txt");
    size_t found;
    std::string result;
    while (!infile.eof()) {
    getline(infile, input);
    if (input.empty())
    continue;
    found = input.rfind(comment);
    if (found != std::string::npos)
    continue;
    Tokenizer str(input);
    std::vector<std::string> tokens = str.split();
    vsCacheName = tokens.at(0);
    NamedCache::Handle hCache = CacheFactory::getCache(vsCacheName);
    std::string itemCountList = tokens.at(1);
    std::string searchCount = tokens.at(2);
    std::string skipFirst = tokens.at(3);
    std::string searchValue = tokens.at(4);
    Tokenizer str1(itemCountList);
    str1.setDelimiter(",");
    std::vector<std::string> tokens1 = str1.split();
    for (int x = 0; x < tokens1.size(); x++) {
    std::string count = tokens1.at(x);
    std::string result;
    createCache(count, hCache, searchCount, result, vsCacheName, skipFirst);
    sleep(1);
    readCache(hCache, searchCount, result, skipFirst, count, searchValue);
    std::cout << result << std::endl;
    infile.close();
    } catch (const std::exception& e) {
    std::cerr << e.what() << std::endl;
    Order::View readOrder(String::View aotag) { 
    globalOrderIndex++;
    return Order::create(aotag);
    void createCache(std::string count, NamedCache::Handle& hCache, std::string searchIndex,
    std::string& result, std::string cacheName, std::string skipValue) {
    int totalRounds = atoi(count.c_str());
    int search = atoi(searchIndex.c_str());
    int skipFirstData = atoi(skipValue.c_str());
    bool skipFirst = skipFirstData == 1 ? true : false;
    int loop_count = skipFirstData == 1 ? search + 1 : search;
    if (totalRounds == 0)
    return;
    ACE_hrtime_t average(0);
    ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
    ACE_hrtime_t nstime(0);
    for (int x = 0; x <1; x++) {
    tm->start();
    for (int y = 0; y < totalRounds; y++) {
    std::stringstream out;
    out << globalOrderIndex;
    String::View aotag = out.str();
    Order::View order = readOrder(aotag);
    hCache->put(aotag, order);
    tm->stop();
    tm->elapsed_time(nstime);
    sleep(1);
    if (x > 0 || !skipFirst) // skipping first write because it is an odd result
    average += nstime;
    tm->reset();
    delete tm;
    double totalTimetoAdd = average / (1 * 1000);
    double averageOneItemAddTime = (average / (1 * totalRounds * 1000));
    std::stringstream out;
    out << totalTimetoAdd;
    std::string timeToAddAll = out.str();
    std::stringstream out1;
    out1 << averageOneItemAddTime;
    std::string timetoAddOne = out1.str();
    result.append("------------- Test ");
    result += cacheName;
    result += " with ";
    result += count;
    result += " -------------\n";
    result += "Time taken to publish data: ";
    result += (timeToAddAll);
    result += " us";
    result += "\n";
    result += "Time taken to publish one item: ";
    result += (timetoAddOne);
    result += " us\n";
    void readCache(NamedCache::Handle& hCache, std::string searchCount,
    std::string& result, std::string skipValue, std::string countVal, std::string searchValue) {
    int skipData = atoi(skipValue.c_str());
    bool skipFirst = skipData == 1 ? true : false;
    int count = atoi(countVal.c_str());
    String::View vsName = searchValue.c_str();
    ACE_hrtime_t average(0);
    int search = atoi(searchCount.c_str());
    int loop_count = skipData == 1 ? search + 1 : search;
    ACE_High_Res_Timer* tm = new ACE_High_Res_Timer();
    ACE_hrtime_t nstime(0);
    ACE_hrtime_t best_time(10000000);
    bool isSaturated = true;
    int saturatedValue = 0;
    for (int x = 0; x < loop_count; x++) {
    tm->start();
    Order::View vInfo = cast<Order::View>(hCache->get(vsName));
    tm->stop();
    tm->elapsed_time(nstime);
    if (x>0 || !skipFirst){
    average += nstime;
    if(nstime < best_time) {           
    best_time = nstime;
    if(isSaturated){
    saturatedValue = x+1;
    } else {
    isSaturated = false;
    std::cout << nstime << std::endl;
    vInfo = NULL;
    tm->reset();
    Order::View vInfo = cast<Order::View>(hCache->get(vsName));
    if(vInfo == NULL)
    std::cout << "No info available" << std::endl;
    // if(x%1000==0)
    // sleep(1);
    delete tm;
    double averageRead = (average / (search * 1000));
    double bestRead = ((best_time)/1000);
    std::stringstream out1;
    out1 << averageRead;
    std::string timeToRead = out1.str();
    std::stringstream out2;
    out2 << bestRead;
    std::stringstream out3;
    out3 << saturatedValue;
    result += "Average readtime: ";
    result += (timeToRead);
    result += " us, best time: ";
    result += (out2.str());
    result += " us, saturated index: ";
    result += (out3.str());
    result += " \n";
    ----------------- Order.hpp ---------------
    #ifndef ORDER_INFO_HPP
    #define ORDER_INFO_HPP

    #include "coherence/lang.ns"

    using namespace coherence::lang;

    class Order : public cloneable_spec<Order> {
        // ----- constructors ---------------------------------------------------
        friend class factory<Order>;

    public:
        virtual size_t hashCode() const {
            return size_t(&m_aotag);
        }

        virtual void toStream(std::ostream& out) const {
            out << "Order("
                << "Aotag=" << getAotag()
                << ')';
        }

        virtual bool equals(Object::View that) const {
            if (instanceof<Order::View>(that)) {
                Order::View vThat = cast<Order::View>(that);
                return Object::equals(getAotag(), vThat->getAotag());
            }
            return false;
        }

    protected:
        Order(String::View aotag) : m_aotag(self(), aotag) {}
        Order(const Order& that) : super(that), m_aotag(self(), that.m_aotag) {}

        // ----- accessors ------------------------------------------------------
    public:
        virtual String::View getAotag() const {
            return m_aotag;
        }

        // ----- data members ---------------------------------------------------
    private:
        const MemberView<String> m_aotag;
    };

    #endif // ORDER_INFO_HPP
    ----------- OrderSerializer.cpp -------------
    #include "coherence/lang.ns"
    #include "coherence/io/pof/PofReader.hpp"
    #include "coherence/io/pof/PofWriter.hpp"
    #include "coherence/io/pof/SystemPofContext.hpp"
    #include "coherence/io/pof/PofSerializer.hpp"
    #include "Order.hpp"

    using namespace coherence::lang;
    using coherence::io::pof::PofReader;
    using coherence::io::pof::PofWriter;
    using coherence::io::pof::PofSerializer;

    class OrderSerializer
        : public class_spec<OrderSerializer, extends<Object>, implements<PofSerializer> > {
        friend class factory<OrderSerializer>;

    protected:
        OrderSerializer() {}

    public: // PofSerializer interface
        virtual void serialize(PofWriter::Handle hOut, Object::View v) const {
            Order::View order = cast<Order::View>(v);
            hOut->writeString(0, order->getAotag());
            hOut->writeRemainder(NULL);
        }

        virtual Object::Holder deserialize(PofReader::Handle hIn) const {
            String::View aotag = hIn->readString(0);
            hIn->readRemainder();
            return Order::create(aotag);
        }
    };

    COH_REGISTER_POF_SERIALIZER(1001, TypedBarrenClass<Order>::create(), OrderSerializer::create());
    -----------------Tokenizer.h--------
    #ifndef TOKENIZER_H
    #define TOKENIZER_H

    #include <string>
    #include <vector>

    // default delimiter string (space, tab, newline, carriage return, form feed)
    const std::string DEFAULT_DELIMITER = " \t\v\n\r\f";

    class Tokenizer
    {
    public:
        // ctor/dtor
        Tokenizer();
        Tokenizer(const std::string& str, const std::string& delimiter=DEFAULT_DELIMITER);
        ~Tokenizer();

        // set string and delimiter
        void set(const std::string& str, const std::string& delimiter=DEFAULT_DELIMITER);
        void setString(const std::string& str);          // set source string only
        void setDelimiter(const std::string& delimiter); // set delimiter string only

        std::string next();                              // return the next token, return "" if it ends
        std::vector<std::string> split();                // return array of tokens from current cursor

    protected:

    private:
        void skipDelimiter();     // ignore leading delimiters
        bool isDelimiter(char c); // check if the current char is delimiter

        std::string buffer;       // input string
        std::string token;        // output string
        std::string delimiter;    // delimiter string
        std::string::const_iterator currPos; // string iterator pointing the current position
    };

    #endif // TOKENIZER_H
    --------------- Tokenizer.cpp -------------
    #include "Tokenizer.h"

    Tokenizer::Tokenizer() : buffer(""), token(""), delimiter(DEFAULT_DELIMITER)
    {
        currPos = buffer.begin();
    }

    Tokenizer::Tokenizer(const std::string& str, const std::string& delimiter) : buffer(str), token(""), delimiter(delimiter)
    {
        currPos = buffer.begin();
    }

    Tokenizer::~Tokenizer()
    {
    }

    void Tokenizer::set(const std::string& str, const std::string& delimiter)
    {
        this->buffer = str;
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setString(const std::string& str)
    {
        this->buffer = str;
        this->currPos = buffer.begin();
    }

    void Tokenizer::setDelimiter(const std::string& delimiter)
    {
        this->delimiter = delimiter;
        this->currPos = buffer.begin();
    }

    std::string Tokenizer::next()
    {
        if(buffer.size() <= 0) return "";   // skip if buffer is empty

        token.clear();                      // reset token string
        this->skipDelimiter();              // skip leading delimiters

        // append each char to token string until it meets delimiter
        while(currPos != buffer.end() && !isDelimiter(*currPos))
        {
            token += *currPos;
            ++currPos;
        }
        return token;
    }

    void Tokenizer::skipDelimiter()
    {
        while(currPos != buffer.end() && isDelimiter(*currPos))
            ++currPos;
    }

    bool Tokenizer::isDelimiter(char c)
    {
        return (delimiter.find(c) != std::string::npos);
    }

    std::vector<std::string> Tokenizer::split()
    {
        std::vector<std::string> tokens;
        std::string token;
        while((token = this->next()) != "")
            tokens.push_back(token);
        return tokens;
    }
    I'm really concerned about the performance; even 1 microsecond is very valuable to me. If you could help reduce it to 5 microseconds it would be a great help. I'm building the above code with the following release arguments:
    "g++ -Wall -ansi -m32 -O3"
    Following file is my test script
    ------------ test-data.txt ---------------
    #cache type - data load - read attempts - skip first - read value
    local-orders 10000 5 1 1
    # dist-extend 1,100,10000 5 1 1
    # repl-extend 1,100,10000 5 1 1
    You can uncomment one by one and test different caches with different loads.
    Thanks for the reply
    sura

  • Coherence Extend Config: Client creates a new Cluster

    Hi,
    I have configured one storage-enabled Coherence node and one proxy server on port 9099 as shown in the wiki. The Coherence client is configured with the right -Dtangosol.coherence.cacheconfig, which points to the XML file with:
    <remote-cache-scheme>
         <scheme-name>extend-dist</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
              <tcp-initiator>
                   <remote-addresses>
                        <socket-address>
                             <address>Proxy_IP</address>
                             <port>9099</port>
                        </socket-address>
                   </remote-addresses>
                   <connect-timeout>10s</connect-timeout>
              </tcp-initiator>
              <outgoing-message-handler>
                   <heartbeat-interval>5s</heartbeat-interval>
                   <heartbeat-timeout>4s</heartbeat-timeout>
                   <request-timeout>50s</request-timeout>
              </outgoing-message-handler>
         </initiator-config>
    </remote-cache-scheme>
    My client log shows that it has created a new cluster and then loaded the -Dtangosol.coherence.cacheconfig XML file. Is there a way to prevent my client from starting a new cluster? Is my configuration incorrect? Any help will be greatly appreciated :)
    Client Log:
    ======
    2011-02-10 04:39:37.599/0.599 Oracle Coherence 3.6.0.1 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/usr/share/java/coherence-3.6.0.1.jar!/tangosol-coherence.xml"
    2011-02-10 04:39:37.606/0.606 Oracle Coherence 3.6.0.1 <Info> (thread=main, member=n/a): Loaded operational overrides from "jar:file:/usr/share/java/coherence-3.6.0.1.jar!/tangosol-coherence-override-dev.xml"
    2011-02-10 04:39:37.606/0.606 Oracle Coherence 3.6.0.1 <D5> (thread=main, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2011-02-10 04:39:37.615/0.615 Oracle Coherence 3.6.0.1 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.6.0.1 Build 17846
    Grid Edition: Development mode
    Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
    2011-02-10 04:39:38.112/1.112 Oracle Coherence GE 3.6.0.1 <D4> (thread=main, member=n/a): TCMP bound to /172.23.73.236:8088 using SystemSocketProvider
    2011-02-10 04:39:38.432/1.432 Oracle Coherence GE 3.6.0.1 <Info> (thread=Cluster, member=n/a): This Member(Id=3, Timestamp=2011-02-10 04:39:38.236, Address=172.23.73.236:8088, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:5631, Role=ApacheCommonsDaemonDaemonLoader, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=4) joined cluster "cluster:0xC4DB" with senior Member(Id=2, Timestamp=2011-02-10 04:33:09.003, Address=172.23.73.236:8090, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:4193, Role=ApacheCommonsDaemonDaemonLoader, Edition=Grid Edition, Mode=Development, CpuCount=4, SocketCount=4)
    2011-02-10 04:39:38.439/1.439 Oracle Coherence GE 3.6.0.1 <D5> (thread=Cluster, member=n/a): Member 2 joined Service Cluster with senior member 2
    2011-02-10 04:39:38.440/1.440 Oracle Coherence GE 3.6.0.1 <D5> (thread=Cluster, member=n/a): Member 2 joined Service Management with senior member 2
    2011-02-10 04:39:38.440/1.440 Oracle Coherence GE 3.6.0.1 <D5> (thread=Cluster, member=n/a): Member 2 joined Service DistributedCache with senior member 2
    2011-02-10 04:39:38.442/1.442 Oracle Coherence GE 3.6.0.1 <Info> (thread=main, member=n/a): Started cluster Name=cluster:0xC4DB
    Group{Address=224.3.6.0, Port=36000, TTL=4}
    MasterMemberSet
    ThisMember=Member(Id=3, Timestamp=2011-02-10 04:39:38.236, Address=172.23.73.236:8088, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:5631, Role=ApacheCommonsDaemonDaemonLoader)
    OldestMember=Member(Id=2, Timestamp=2011-02-10 04:33:09.003, Address=172.23.73.236:8090, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:4193, Role=ApacheCommonsDaemonDaemonLoader)
    ActualMemberSet=MemberSet(Size=2, BitSetCount=2
    Member(Id=2, Timestamp=2011-02-10 04:33:09.003, Address=172.23.73.236:8090, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:4193, Role=ApacheCommonsDaemonDaemonLoader)
    Member(Id=3, Timestamp=2011-02-10 04:39:38.236, Address=172.23.73.236:8088, MachineId=62188, Location=site:lss.emc.com,machine:lglor236,process:5631, Role=ApacheCommonsDaemonDaemonLoader)
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0, BitSetCount=0
    TcpRing{Connections=[2]}
    IpMonitor{AddressListSize=0}
    2011-02-10 04:39:38.477/1.477 Oracle Coherence GE 3.6.0.1 <D5> (thread=Invocation:Management, member=3): Service Management joined the cluster with senior service member 2
    Feb 10, 2011 4:39:38 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
    INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@257b40fe: display name [org.springframework.context.support.ClassPathXmlApplicationContext@257b40fe]; startup date [Thu Feb 10 04:39:38 EST 2011]; root of context hierarchy
    Feb 10, 2011 4:39:38 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
    INFO: Loading XML bean definitions from class path resource [configurationRestApplicationContext.xml]
    Feb 10, 2011 4:39:38 AM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
    INFO: Bean factory for application context [org.springframework.context.support.ClassPathXmlApplicationContext@257b40fe]: org.springframework.beans.factory.support.DefaultListableBeanFactory@4bd27069
    Feb 10, 2011 4:39:38 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
    INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@4bd27069: defining beans [component,server,router,srmconfigurationsresource,srmconfigurationtyperesource,srmconfigurationresource,coherenceStatusResource,configurationMapRepository,configurationOVFFileLoader,defaultConfigurationLoader,feedpagingLinkHandler,adminConfigTransformer,configurationMapQueryHandler,fileUploadResource,postProcessorImpl]; root of factory hierarchy
    [Fatal Error] :-1:-1: Premature end of file.
    [Fatal Error] :-1:-1: Premature end of file.
    2011-02-10 04:39:39.332/2.332 Oracle Coherence GE 3.6.0.1 <Info> (thread=main, member=3): Loaded cache configuration from "file:/etc/sysconfig/proxy_node.xml"
    2011-02-10 04:39:39.504/2.504 Oracle Coherence GE 3.6.0.1 <D5> (thread=DistributedCache, member=3): Service DistributedCache joined the cluster with senior service member 2
    Also I have verified that my storage enabled node and proxy node have formed a cluster...
    The client has been started with -Dtangosol.coherence.cacheconfig=/etc/sysconfig/proxy_node.xml
    Thanks & Regards,
    Sandeep

    Hi,
    Used -Dtangosol.coherence.tcmp.enabled=false to disable TCMP mode... ( Phew... :) )
    In my client code we have the following statements...
    Service service = CacheFactory.getService("DistributedCache");
    Set<Member> storeEnabledSet = ((DistributedCacheService) service)
              .getStorageEnabledMembers();
    CacheFactory.ensureCluster();
    Does this need to be changed for an Extend client configuration? (See the sketch at the end of this post.)
    With my current setup I am getting exceptions ...
    2011-02-13 22:36:59.075/111.151 Oracle Coherence GE 3.6.0.1 <Error> (thread=main, member=n/a): Error while starting cluster: java.lang.UnsupportedOperationException: TCMP clustering has been disabled; this configuration may only access clustered services via Extend proxies.
    at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:42)
    at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
    at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:7)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:998)
    at com.emc.srm.admin.config.rest.RestApplicationLauncher.waitForCacheServer(RestApplicationLauncher.java:155)
    at com.emc.srm.admin.config.rest.RestApplicationLauncher.main(RestApplicationLauncher.java:108)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.emc.srm.common.daemon.SrmDaemon.start(SrmDaemon.java:59)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:219)
    Any ideas on how to fix this?
    Regards,
    Sandeep
    ===========================
    Client configuration:
    <cache-mapping>
                   <cache-name>ConfigurationMapRepository</cache-name>
                   <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
         <caching-schemes>
              <near-scheme>
                   <scheme-name>extend-near</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <high-units>1000</high-units>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <remote-cache-scheme>
                             <scheme-ref>extend-dist</scheme-ref>
                        </remote-cache-scheme>
                   </back-scheme>
                   <invalidation-strategy>all</invalidation-strategy>
              </near-scheme>
              <!-- Event Repository cache scheme definition START -->
              <remote-cache-scheme>
                   <scheme-name>extend-dist</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <initiator-config>
                        <tcp-initiator>
                             <remote-addresses>
                                  <socket-address>
                                       <address>X.X.X.X</address>
                                       <port>9099</port>
                                  </socket-address>
                             </remote-addresses>
                             <connect-timeout>10s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                             <heartbeat-interval>5s</heartbeat-interval>
                             <heartbeat-timeout>4s</heartbeat-timeout>
                             <request-timeout>50s</request-timeout>
                        </outgoing-message-handler>
                   </initiator-config>
              </remote-cache-scheme>
    </caching-schemes>
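    As a hedged illustration (not from the thread): with TCMP disabled, an Extend client cannot call CacheFactory.ensureCluster() or look up storage-enabled members; it simply obtains its caches through the proxy, roughly like this (the key and value below are examples only):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ExtendClientStartup {
        public static void main(String[] args) {
            // The cache name maps to the extend-dist remote-cache-scheme above,
            // so the call goes to the proxy over TCP instead of joining a cluster.
            NamedCache cache = CacheFactory.getCache("ConfigurationMapRepository");
            cache.put("probe", "ok");
            System.out.println(cache.get("probe"));
            CacheFactory.shutdown();
        }
    }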

  • Continuous Query Cache Local caching meaning

    Hi,
    I encountered the following problem when I was working with a continuous query cache with local caching set to TRUE.
    I was able to insert data into the Coherence cache and read it as well.
    Then I stopped the process and tried to read the data in the cache for keys which I had inserted earlier.
    But I received NULL as the result.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), true);
    DerivedCQC.hpp
    /*
     * File:   DerivedCQC.hpp
     * Author: srathna1
     *
     * Created on 15 July 2011, 02:47
     */
    #ifndef DERIVEDCQC_HPP
    #define DERIVEDCQC_HPP

    #include "coherence/lang.ns"
    #include "coherence/net/cache/ContinuousQueryCache.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include "coherence/util/Filter.hpp"
    #include "coherence/util/MapListener.hpp"

    using namespace coherence::lang;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::net::NamedCache;
    using coherence::util::Filter;
    using coherence::util::MapListener;

    class DerivedCQC
        : public class_spec<DerivedCQC,
              extends<ContinuousQueryCache> >
    {
        friend class factory<DerivedCQC>;

    protected:
        DerivedCQC(NamedCache::Handle hCache,
                   Filter::View vFilter, bool fCacheValues = false, MapListener::Handle hListener = NULL)
            : super(hCache, vFilter, fCacheValues, hListener) {}

    public:
        virtual bool containsKey(Object::View vKey) const
        {
            return m_hMapLocal->containsKey(vKey);
        }
    };

    #endif /* DERIVEDCQC_HPP */
    When I switch the local caching flag to FALSE, I am able to read the data.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), false);
    Ideally, I expect that in the TRUE scenario, while I'm connected to Coherence, all keys and values are synced up and stored locally, and each update also gets synced up.
    In the FALSE scenario, it hooks into the Coherence cache and reads from there for each key, caching values from that moment onwards. Please share how it is implemented underneath.
    Thanks and regards,
    Sura

    Hi Wei,
    I found the issue: when you declare your cache as a global variable, you won't get data in the TRUE scenario, but if you declare the cache inside a method, you will retrieve data.
    Try this.......
    #include <iostream>
    #include <coherence/net/CacheFactory.hpp>
    #include "coherence/lang.ns"
    #include <coherence/net/NamedCache.hpp>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <coherence/net/cache/ContinuousQueryCache.hpp>
    #include <coherence/util/filter/AlwaysFilter.hpp>
    #include <coherence/util/filter/EntryFilter.hpp>
    #include "DerivedCQC.hpp"
    #include <fstream>
    #include <string>
    #include <sstream>
    #include <coherence/util/Set.hpp>
    #include <coherence/util/Iterator.hpp>
    #include <sys/types.h>
    #include <unistd.h>
    #include <coherence/stl/boxing_map.hpp>
    #include "EventPrinter.hpp"
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    using coherence::net::ConcurrentMap;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::util::filter::AlwaysFilter;
    using coherence::util::filter::EntryFilter;
    using coherence::util::Set;
    using coherence::util::Iterator;
    using coherence::stl::boxing_map;
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
    int main(int argc, char** argv) {
        std::cout << "size: " << hCache->size() << std::endl;
    }
    In the above example you will see that size is 0 in the TRUE case, and equal to the data size in the cache in the FALSE scenario.
    But if you declare the cache as below, you will get the expected results as per the documentation.
    int main(int argc, char** argv) {
        NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
        std::cout << "size: " << hCache->size() << std::endl;
    }
    Is this a bug, or is this the expected behaviour? According to my understanding, it is a bug.
    Thanks and regards,
    Sura

  • Coherence *Extend configuration

    Hello,
    To enable Coherence Extend, we should define a proxy-scheme in our cluster cache-config that defines a Coherence*Extend proxy service, containing a tcp-acceptor with a local address and port.
    Can only one such Extend proxy service run on the same node? Or should we redefine the Extend proxy-service configuration for each cache server that is running on the same node (to use a different port)? In both cases, we have to define at least 2 new cache-config files - one with the Extend proxy service and one without. Is this correct?
    We're now trying to run 2 cache servers on the same node with an Extend proxy service. When we try to start the 2nd cache server, we get the following exception (which is quite normal, because the Coherence proxy service tries to bind to a socket that is already in use by the other cache server instance that is running the Extend proxy service):
    2010-01-21 09:48:10.923/7.865 Oracle Coherence GE 3.5.2/463 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor:TcpProcessor, member=1): error binding ServerSocket to 10.2.12.144:9099: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
    at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.acceptor.TcpAcceptor.configureSocket(TcpAcceptor.CDB:27)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.acceptor.TcpAcceptor$TcpProcessor.onEnter(TcpAcceptor.CDB:25)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:14)
    at java.lang.Thread.run(Unknown Source)
    Is there a workaround that makes it possible to use the same cache configuration (containing the Coherence*Extend proxy service, with tcp-acceptor) on different cache servers on the same machine?
    If not, what is the best practice to configure the Extend proxy service?
    Thanks in advance

    You can inject system properties into your Coherence cache configuration as follows:
    <tcp-acceptor>
    <local-address>
    <address system-property="wouters.address">localhost</address>
    <port system-property="wouters.port">1234</port>
    </local-address>
    </tcp-acceptor>
    The above example will default to localhost:1234; however, you can override these values with system properties. This will allow you to reuse the same config but tailor it to your application.
    Cheers,
    Neville.
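    For example (a hedged sketch, not from Neville's post): the second cache server on the same machine could override just the port before the cache configuration is loaded, or equivalently be launched with -Dwouters.port=1235 on the command line.

    import com.tangosol.net.DefaultCacheServer;

    public class SecondCacheServer {
        public static void main(String[] args) {
            // Override the placeholders from the <tcp-acceptor> example above;
            // the first server keeps the default localhost:1234.
            System.setProperty("wouters.address", "localhost");
            System.setProperty("wouters.port", "1235");
            DefaultCacheServer.main(args);
        }
    }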

  • Eviction of objects based on aging time and not cache size

    Hi
    I am using a Coherence cluster (Extend) and wish to implement an eviction policy for each object inserted into the cache.
    From the docs I have read, I understand that customized eviction policies are ALL size based and not time based (meaning eviction is triggered when the cache is full and not when a cached object's aging time is reached).
    Is there a way to implement such eviction?

    Hi Reem,
    You can expire cache entries based on time by setting the expiry-delay in the cache configuration, for example
        <distributed-scheme>
          <scheme-name>SampleMemoryExpirationScheme</scheme-name>
          <backing-map-scheme>
            <local-scheme>
              <expiry-delay>10s</expiry-delay>
            </local-scheme>
          </backing-map-scheme>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          <autostart>true</autostart>
        </distributed-scheme>
    The above configuration expires entries 10 seconds after they have been put into the cache.
    You can find more information here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appcacheelements.htm#BABDHGHJ
    Regards,
    JK
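    A complementary note (not part of JK's reply): besides the scheme-level <expiry-delay>, a time-to-live can also be given per entry when the value is put, as sketched below; the cache name and TTL are examples only.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class PerEntryExpiry {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-orders");
            // CacheMap.put(key, value, ttlMillis): this entry expires after 10 seconds,
            // provided the backing map supports per-entry expiry (local-scheme does).
            cache.put("order-1", "pending", 10000L);
            CacheFactory.shutdown();
        }
    }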

  • Expire all local cache entries at specific time of day

    Hi,
    We have a need for expiring all local cache entries at specific time(s) of the day (every day, like a crontab).
    Is it possible through Coherence config?
    Thanks,

    Hi,
    AFAIK there is no out-of-the-box solution, but you can certainly use the Coherence API along with Quartz to develop a simple class that can be triggered to remove all the entries from the cache at a certain time. You can also define your custom cache factory configuration; an example is available here: http://sites.google.com/site/miscellaneouscomponents/Home/time-service-for-oracle-coherence
    Hope this helps!
    Cheers,
    NJ
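    A minimal sketch of that idea, using a plain ScheduledExecutorService instead of Quartz (the cache name and the 03:00 flush time are examples only):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.time.LocalTime;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class DailyCacheFlush {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("local-pds2-reference");
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // Compute the delay until the next 03:00, then repeat every 24 hours.
            LocalDateTime now = LocalDateTime.now();
            LocalDateTime firstRun = now.toLocalDate().atTime(LocalTime.of(3, 0));
            if (!firstRun.isAfter(now)) {
                firstRun = firstRun.plusDays(1);
            }
            long initialDelay = Duration.between(now, firstRun).toMillis();

            scheduler.scheduleAtFixedRate(cache::clear,
                    initialDelay, TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
        }
    }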

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and near cache together with expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to have an expiry-delay configured on the backing scheme and that the near cache object would be evicted when the backing object was evicted. But according to our tests, we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there will not be automatic eviction on the near cache (front scheme)?
    With this config, near cache is never cleared:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>
    With this config (added expiry-delay on front-scheme), near cache gets cleared.
            <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme>
                                 <expiry-delay>15s</expiry-delay>
                            </local-scheme>
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>

    Hi Jakkke,
    The Near Cache scheme allows you to have configurable levels of cache coherency, from the most basic expiry-based cache to an invalidation-based cache to a data-versioning cache, depending on the coherency requirements. The Near Cache is commonly used to achieve the performance of a replicated cache without losing the scalability aspects of the partitioned cache; this is achieved by having a subset of data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete set of data in the <back-scheme> of the near cache. The <back-scheme> updates can automatically trigger events to invalidate the entries in the <front-scheme> based on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want to expire the entries in the <front-scheme> and <back-scheme>, you need to specify an expiry-delay on both schemes, as you did in the last example. Now, if you are expiring the items in the <back-scheme> so that they get loaded again from the cache store but the <front-scheme> keys remain the same (only the values should be refreshed from the cache store), then you need not set the expiry-delay on the <front-scheme>; rather, set the invalidation-strategy to present. But if you want to have a different set of entries in the <front-scheme> after a specified expiry delay, then you need to specify it in the <front-scheme> configuration.
    The near cache has the capability to keep the front scheme and back scheme data in sync, but the expiry of entries is not synced. The front-scheme is always a subset of the back-scheme.
    Hope this helps!
    Cheers,
    NJ

  • Local Cache Visibility from the Cluster

    Hi, can you give me an explanation for the following Coherence issue, please ?
    I found in the documentation that the Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM.
    On the other hand, I also found the following statement:
    “ Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes.”
    My questions are:
    If I have a local off-heap NIO memory cache or an NIO File Manager cache on one Coherence node, can it be visible from other Coherence nodes as a clustered cache?
    Also, if I have an NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache?
    Best Regards,
    Tomislav Milinovic

    Tomislav,
    I will answer your questions on top of your statements, OK?
    "Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node and is accessible from a single JVM"
    Considering the partitioned (distributed) scheme, Coherence is a truly peer-to-peer technology in which data is spread across a cluster of nodes, the primary data is stored in a local JVM of one node, and its backup is stored in another node, preferably in another site, cluster or rack.
    "Clustered caches are accessible from multiple JVMs (any cluster node running the same cache service). The cache service provides the capability to access local caches from other cluster nodes"
    Yes. No matter that the data is stored locally in a single node of the cluster, when you access that data through its key, Coherence automatically finds it in the cluster and brings it to you. The location of the data is transparent to the developer, but one thing is certain: you have a global view of caches, meaning that from every single member you have access to all the data stored. This is part of the magic that the Coherence protocol (called TCMP) does for you.
    "If I have local off-heap NIO memory cache or NIO File Manager cache on the one Coherence node, can it be visible from other Coherence nodes as a clustered cache  ?"
    As I said earlier, yes, you can access all the stored data from any node of the cluster. The way in which each node stores its data (called the backing map scheme) can differ. One node can use elastic data as its backing map scheme, and another node can use the Off-Heap NIO Memory Manager as its backing map. This is just about how each node stores its data. From an architectural point of view, it is a good choice to use the same backing map scheme across multiple nodes, because each backing map scheme can have different behaviors when you read and/or write data; one could be faster and another could be slower.
    "Also, if I have NIO File Manager cache on a shared disk, is it possible to configure all nodes to work with that cache ?"
    There is no need for that, since data is available to all cluster nodes without any effort. Having said that, this would be a bad strategy choice. Coherence is a shared-nothing technology which uses that model to scale and give you predictable latency. If you start using a shared disk as storage for data, you will lose the essence of the shared-nothing benefits and create a huge bottleneck in the data management layer, since there will be contention for I/O on each read/write.
    Cheers,
    Ricardo Ferreira

  • Testing Coherence Cluster and Servers after WebLogic Console Creation

    Hello,
    I have created WLST scripts that extend a domain with Coherence clusters and servers using unicast configurations. I can start and run the Coherence servers from the WL Admin Console without errors or warnings. WL 10.3.6.
    I am looking to test the configuration with something like coherence.sh and query.sh, but I am missing instructions on how to use these tools with unicast and connect to the caches.
    Is there a command line interface that connects to a Coherence Server cache created from the WL Admin Console using unicast? Do I need to override any XML configuration to make this work?
    Examples would be helpful.
    While testing I have found the following....
    I have changed coherence.sh and enabled storage. In addition:
    JAVA_OPTS="-Xms$MEMORY -Xmx$MEMORY -Dtangosol.coherence.distributed.localstorage=$STORAGE_ENABLED $JMXPROPERTIES -Dtangosol.coherence.clusterport=7777 -Dtangosol.coherence.clusteraddress=231.1.1.1"
    The Coherence Cluster configurations were changed to match the multicast settings for port and address above.
    When this was performed all worked!!
    However, if I changed JAVA_OPTS to use unicast
    JAVA_OPTS="-Xms$MEMORY -Xmx$MEMORY -Dtangosol.coherence.distributed.localstorage=$STORAGE_ENABLED $JMXPROPERTIES -Dtangosol.coherence.localport=8088 -Dtangosol.coherence.localhost=192.168.2.69"
    It fails to connect with the Coherence Server in the cluster.

    Hi there,
    1. How did you achieve the HTTPS configuration in WebLogic? And for which server, the Admin Server or a Managed Server?
    2. Which Java keystore are you using? Are you able to see the successful entries in the <server>.out log file, which is used for startup and stop of the WebLogic server?
    Thanks
    Laksh
