Putting an object on the Coherence cache

Hi All,
Is there a better way of doing the following:
In a multi-threaded region of code I perform the following sequence of steps:
1. Use a filter to check if object foo already exists on the cache.
2. If the result set of the filter is empty, I take a lock: Cache.lock(foo.getUniqueID());
3. I put the object foo in the cache: Cache.put(foo.getUniqueID(), foo);
Basically I am trying to prevent another thread from overwriting the existing cache object.
Is there a better way to achieve this?
regards,
Ankit

Hi Ankit,
You can use a ConditionalPut EntryProcessor: http://docs.oracle.com/cd/E24290_01/coh.371/e22843/toc.htm

Filter filter = new NotFilter(PresentFilter.INSTANCE);
cache.invoke(foo.getUniqueID(), new ConditionalPut(filter, foo));

An EntryProcessor takes out an implicit lock on the key, so no other EntryProcessor can execute against the same key at the same time. The ConditionalPut applies the specified Filter to the entry (in this case to check that the entry is not present) and, if the Filter evaluates to true, sets the specified value.
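For completeness, here is a minimal self-contained sketch of the same approach (the cache name "foo-cache" and the example key and value are placeholders, not from the original post):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.NotFilter;
import com.tangosol.util.filter.PresentFilter;
import com.tangosol.util.processor.ConditionalPut;

public class PutIfAbsentExample {

    // Puts the value only if no entry exists yet for the key. The EntryProcessor
    // runs atomically on the member that owns the key, so no explicit lock is needed.
    public static void putIfAbsent(NamedCache cache, Object key, Object value) {
        cache.invoke(key, new ConditionalPut(new NotFilter(PresentFilter.INSTANCE), value));
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("foo-cache");   // placeholder cache name
        putIfAbsent(cache, "some-unique-id", "some value");      // placeholder key and value
    }
}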
JK

Similar Messages

  • Webutil_file_transfer url_to_client places objects in the Java Cache

    Hi,
    I'm using webutil_file_transfer url_to_client to download a PDF file.
    Once I have downloaded the PDF file, it is visible in the client Java Cache Viewer. When I run the download again, webutil takes the PDF from the cache instead of taking the newer PDF from the server the URL points to.
    So the URL I pass is the same each time, but the PDF file on the server is updated very often.
    I don't use the WEBCACHE port!
    My question is: can I prevent webutil from putting the PDF file in the Java Cache on the client? Or is there any utility to remove selected objects from the Java Cache?
    Does anybody have an idea how to avoid this behaviour?
    Fatih

    Hi Craig,
    with the help of Oracle Support, we found the reason and also a workaround for this issue.
    The real cause:
    By default the Java applet parameter DefaultUseCaches is true, and all files downloaded within an applet go through the Java client cache.
    WEBUTIL does not touch this parameter, so it keeps its default (true).
    Solution/workaround:
    With the method setDefaultUseCaches it is possible to disable/enable the default cache setting.
    So it's possible to disable the cache with a small Java bean running in Forms, which disables the cache before WEBUTIL_FILE_TRANSFER and re-enables it after the successful download.
    Here is an extract of the bean code:
    URL u = new URL( "http:" );
    URLConnection uc = u.openConnection();
    uc.setDefaultUseCaches(false);
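    A fuller sketch of how such a bean method might look (the class name and the URL used to obtain the URLConnection are illustrative; only setDefaultUseCaches comes from the workaround above):

    import java.net.URL;
    import java.net.URLConnection;

    // Hypothetical Forms bean sketch: toggles the JVM-wide default cache setting.
    // setDefaultUseCaches(false) changes the default used by URLConnections created afterwards.
    public class CacheToggleBean {

        public void setDefaultUseCaches(boolean useCaches) throws Exception {
            URLConnection uc = new URL("http://localhost/").openConnection();
            uc.setDefaultUseCaches(useCaches);
        }
    }

    The Forms code would call this with false before WEBUTIL_FILE_TRANSFER and with true again once the download has finished.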
    thank you for your time!
    regards,
    Fatih

  • Verify whether the session data is kept in the Coherence caches

    I have successfully combined the MapViewer application with WebLogic and Oracle Coherence*Web.
    How can I verify whether the session data of the MapViewer application is kept in the Coherence caches or not?
    All output shows that MapViewer, the WebLogic server, and Coherence are running well.
    Are all the following steps right?
    The procedure is as follows:
    1. Create a WebLogic domain: Map_domain.
    2. Start the WebLogic domain Map_domain by running startWebLogic.sh script.
    3. Install Coherence.jar as a library on WebLogic.
    4. Copy the coherence.jar in the WAR's WEB-INF/lib directory.
    5. Create a reference to the shared library by modifying the weblogic.xml in the web application's WEB-INF directory
    and adding the following contents:
    <weblogic-web-app>
         <library-ref>
              <library-name>coherence-web-spi</library-name>
              <specification-version>1.0.0.0</specification-version>
              <implementation-version>1.0.0.0</implementation-version>
              <exact-match>false</exact-match>
         </library-ref>
    </weblogic-web-app>
    6. Install coherence-web-spi.war as a WebLogic library.
    7. Install the MapViewer as a WebLogic application.
    8. Start a Coherence cache server using the cmd file web-cache-server.cmd and then start MapViewer application.
    The content of web-cache-server.cmd file:
    @echo off
    @rem This will start a cache server
    setlocal
    :config
    @rem specify the Coherence installation directory
    set coherence_home=F:\coherence
    @rem specify the JVM heap size
    set memory=256m
    :start
    if not exist "%coherence_home%\lib\coherence.jar" goto instructions
    if "%java_home%"=="" (set java_exec=java) else (set java_exec=%java_home%\bin\java)
    :launch
    set java_opts="-Xms%memory% -Xmx%memory%"
    "%java_exec%" -server -showversion "%java_opts%" -cp %coherence_home%\lib\coherence.jar;
    %coherence_home%\lib\coherence-web-spi.war
    -Dtangosol.coherence.management.remote=true
    -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml
    -Dtangosol.coherence.session.localstorage=true
    com.tangosol.net.DefaultCacheServer %1
    goto exit
    :instructions
    echo Usage:
    echo   ^<coherence_home^>\bin\cache-server.cmd
    goto exit
    :exit
    endlocal
    @echo on

    Edited by: jetq on Jan 13, 2010 9:32 AM

    Any opinions are welcome.
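    One possible check, sketched here as an assumption rather than a verified procedure: attach a Coherence console node to the same cluster and cache configuration, and look at the size of the session cache. The cache name session-storage is the usual Coherence*Web default; confirm it against session-cache-config.xml.

    "%java_exec%" -cp %coherence_home%\lib\coherence.jar;%coherence_home%\lib\coherence-web-spi.war ^
      -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml ^
      -Dtangosol.coherence.distributed.localstorage=false ^
      com.tangosol.net.CacheFactory
    Map (?): cache session-storage
    Map (session-storage): size

    If the reported size grows as the MapViewer application is used, the session data is being stored in the Coherence caches.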

  • How can I get a count of objects in the near cache? (Extend client)

    Hi,
    I'm trying to figure out how to get the count of objects in my near cache (from a C++ client). Knowing the size of the near cache is a key factor when optimizing configurations for performance.
    However, if I call size() on the cache handle I get the count of objects in the whole cache (i.e. the cluster). How can I get a count of objects in the near cache?
    Thanks
    Rich Carless

    Hi Rich,
    It may not be ideal, but I think you may be able to infer the size by using the HeapAnalyzer facility (http://download.oracle.com/docs/cd/E15357_01/coh.360/e15728/classcoherence_1_1lang_1_1_heap_analyzer.html), specifically the class-based HeapAnalyzer. Its Snapshot representation (http://download.oracle.com/docs/cd/E15357_01/coh.360/e15728/classcoherence_1_1lang_1_1_class_based_heap_analyzer_1_1_snapshot.html) provides a mapping between class name and ClassStats (http://download.oracle.com/docs/cd/E15357_01/coh.360/e15728/classcoherence_1_1lang_1_1_class_based_heap_analyzer_1_1_class_stats.html), which provides information on how many instances of a given class type are in memory. Note that the reported counts are process-wide, but if your key or value types are distinct you may be able to infer your answer. I realize this is rather complex; my only other suggestion would be to make some guesses on size and see how they affect performance.
    Mark
    Oracle Coherence

  • Not overriding the coherence-cache-config.xml but showing the error...

    Hi,
    I have created a file called tangosol-coherence-override.xml in the path "C:\Program Files\Oracle\Coherence for .NET",
    and the coherence.jar file is located in "C:\Program Files\Oracle\Coherence for .NET\lib",
    but my xml file is not overriding the default configuration:
    C:\Program Files\Oracle\Coherence for .NET\examples\ContactCache.Java>"C:\Program Files\Java\jdk1.6.0_11\bin\java" -server -showversion -Xms128m -Xmx128m -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.cacheconfig=contact-cache-config.xml -cp "config;lib\custom-types.jar;C:\Program Files\Oracle\Coherence for .NET\lib\coherence.jar" com.tangosol.net.DefaultCacheServer
    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) Server VM (build 11.0-b16, mixed mode)
    2011-06-20 15:06:57.607/2.366 Oracle Coherence 3.7.0.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/Program%20Files/Oracle/Coherence%20for%20.NET/lib/coherence.jar!/tangosol-coherence.xml"
    2011-06-20 15:06:57.989/2.748 Oracle Coherence 3.7.0.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "jar:file:/C:/Program%20Files/Oracle/Coherence%20for%20.NET/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2011-06-20 15:06:58.049/2.808 Oracle Coherence 3.7.0.0 <D5> (thread=main, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2011-06-20 15:06:58.062/2.821 Oracle Coherence 3.7.0.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.0.0 Build 23397
     Grid Edition: Development mode
    Thanks in Advance

    The CLASSPATH is not the same as the PATH. It is similar but a classpath is a set of directories or jar files that Java uses to look for executable code and resources.
    Assuming you are using the example start-cache-server.cmd file that comes with the .NET examples to start your server, you can put the tangosol-coherence-override.xml file into the C:\Program Files\Oracle\Coherence for .NET\examples\ContactCache.Java\config directory, as this is on the classpath when using that script.
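    Alternatively, if the override file has a different name, the standard tangosol.coherence.override system property can point Coherence at it. A sketch only, assuming a hypothetical my-override.xml placed in the config directory that is already on the -cp below:

    "C:\Program Files\Java\jdk1.6.0_11\bin\java" -server -Dtangosol.coherence.ttl=0 ^
      -Dtangosol.coherence.override=my-override.xml ^
      -Dtangosol.coherence.cacheconfig=contact-cache-config.xml ^
      -cp "config;lib\custom-types.jar;C:\Program Files\Oracle\Coherence for .NET\lib\coherence.jar" ^
      com.tangosol.net.DefaultCacheServer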
    JK

  • Caching objects in the data cache as a result of an extent.

    Patrick -
    I wanted to post this since it's related to a question I posted about extents and the data cache on 11/8.
    I discovered that the com.solarmetric.kodo.DefaultFetchBatchSize setting affects how many objects get put into the data cache as a result of running an extent (in 2.3.2). If I have:
    com.solarmetric.kodo.DefaultFetchBatchSize=20
    then as soon as I execute the second line below:
    Iterator anIterator = results.iterator();
    Object anObject = anIterator.next();
    I see 20 objects in my data cache. In a prior reply you indicated that you were going to check this behavior in 2.4 so I wanted to send you this additional information. This behavior isn't a problem for me.
    Les

    Les,
    This is expected behavior -- the DefaultFetchBatchSize setting instructs Kodo to retrieve objects from the scrollable ResultSet in groups of 20. So, getting the first item from the iterator will cause a page of 20 objects to be pulled from the result set.
    -Patrick
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Put an Object in the session scope

    I am writing a logon() action. Inside it, I retrieve information about the user logon from a UserForm object defined in the faces-config.xml file with request scope. I want to populate a business object called User with the UserForm's values. Then I need to put the User object in the session scope, but I don't know how to do this.
    Can someone help me ?
    Thanks.

    The session attributes are available via the "sessionMap" property of the ExternalContext for this request. So, from within your Action, you could do something like this:
    User user = ...; // set up your user object
    // Get FacesContext for this request
    FacesContext context = FacesContext.getCurrentInstance();
    // Store user object in the session map
    context.getExternalContext().getSessionMap().put("user", user);

    Craig McClanahan
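    A fuller hypothetical sketch of the logon() action described in the question (UserForm, User, their accessors, the bean name "userForm" and the outcome string are placeholders):

    // imports assumed: javax.faces.context.FacesContext, javax.faces.context.ExternalContext
    public String logon() {
        FacesContext context = FacesContext.getCurrentInstance();
        ExternalContext ext = context.getExternalContext();
        // request-scoped backing bean as declared in faces-config.xml (bean name is hypothetical)
        UserForm form = (UserForm) ext.getRequestMap().get("userForm");
        // build the business object from the form values (constructor is hypothetical)
        User user = new User(form.getUsername(), form.getPassword());
        // promote it to session scope under the key "user"
        ext.getSessionMap().put("user", user);
        return "success";
    }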

  • How to transfer the Coherence cache to a different network/environment?

    Hi,
    I have a requirement wherein I need to import/export a cache from one network into a different network/environment altogether, keeping the cache data intact. How do I achieve this from the Coherence side? I am using a distributed cache scheme.
    Regards,
    Radhika

    You could serialize the content of the cache to a file and then read it back at the other end.  The cache dump usually does not take much time, even for GB-sized caches.  The import usually takes more time.
    Here is sample code to serialize the content of a cache.  Ideally you should use POF so that the data is already compact.
    Export:
    public void exportCache(String cacheName, File file) throws Exception {
        WrapperBufferOutput wrappedBufferOutput = null;
        try {
            NamedCache cache = CacheFactory.getCache(cacheName);
            FileOutputStream fileOutputStream = new FileOutputStream(file);
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(fileOutputStream, 1024 * 1024);
            DataOutputStream dataOutputStream = new DataOutputStream(bufferedOutputStream);
            wrappedBufferOutput = new WrapperBufferOutput(dataOutputStream);
            ConfigurablePofContext pofContext = (ConfigurablePofContext) cache.getCacheService().getSerializer();
            // write only the values; the key is derived again on import via getObjectKey()
            for (Object o : cache.entrySet()) {
                pofContext.serialize(wrappedBufferOutput, ((Map.Entry) o).getValue());
            }
        } finally {
            if (wrappedBufferOutput != null) {
                wrappedBufferOutput.close();
            }
        }
    }
    Here is sample code to deserialize the content of a cache stored in a file:
    Import:
    public void importCache(String cacheName, File file) throws Exception {
        WrapperBufferInput wrappedBufferInput = null;
        try {
            NamedCache cache = CacheFactory.getCache(cacheName);
            FileInputStream fileInputStream = new FileInputStream(file);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(fileInputStream, 1024 * 1024);
            DataInputStream dataInputStream = new DataInputStream(bufferedInputStream);
            wrappedBufferInput = new WrapperBufferInput(dataInputStream);
            ConfigurablePofContext pofContext = (ConfigurablePofContext) cache.getCacheService().getSerializer();
            // read the values back one by one and re-key them via getObjectKey()
            while (wrappedBufferInput.available() > 0) {
                ImportableObject o = (ImportableObject) pofContext.deserialize(wrappedBufferInput);
                cache.put(o.getObjectKey(), o);
            }
        } finally {
            if (wrappedBufferInput != null) {
                wrappedBufferInput.close();
            }
        }
    }
    Here we assume that the cache values implement an ImportableObject interface which has a getObjectKey() method.  This makes it easy to figure out the key of the entry without knowing its real type.

  • Does coherence cache the value from the cache?

    Hi, I have a question about whether Coherence caches the value it returns from the cache. I believe it does based on my tests; I just want to get confirmation.
    I like to use an example to describe my question. For example:
    If (key1, value1) is in cache1 (value1 is an object),
    the first time cache1.get(key1) is called, Coherence will deserialize value1 and return it. But if, in the same JVM, cache1.get(key1) is invoked again, Coherence will return value1, which I believe is cached by Coherence in the current JVM, instead of deserializing it again. Is that right?
    I am asking this question because I found a problem in our project when using Coherence. As in the above example, if I use value1 = cache1.get(key1), and value1 has a set method that changes one of its internal attributes, and this method was indeed invoked after value1 was retrieved from cache1, then in another class value2 = cache1.get(key1) is called again, and I found that value2's attribute has the modified value, even though cache1.put(key1, value1) was never invoked in the first place.
    Of course, this kind of behavior matches java.util.Map. But a Coherence cache is a clustered/distributed environment. In the above example, on another data node value3 = cache1.get(key1) will get the original attribute value in value3, since the deserialized object will always have the original value, unless the new value is put in explicitly by cache1.put(key1, value1).
    In this case, would it make more sense for cache1.get(key1) to always return a cloned object?
    Thanks

    Your observation is correct. More specifically, for cache topologies which include an in-process cache, Coherence may return the same object reference for repeated get requests on the same key. I say "may" because, for any of a variety of reasons, we may also have to retrieve a fresh copy from a remote cache server. When possible we will return existing objects for performance reasons, avoiding costly things like network hops and deserialization.
    Any modifications made to an object returned from the cache will not be made automatically available to other cluster members. Additionally, if these modifications are made concurrently with another thread performing a cache.put() on the same value, the result could be a corrupt cached value if your serialization methods are not thread-safe. Best practice dictates that unless you are sure you are using a cache topology which does not include an in-process cache, you treat the values returned from the cache as immutable, and instead deep clone them before making any modifications.
    The distributed-scheme and remote-scheme are the only types of caches which do not include in-process caching, and thus always return "mutation safe" values. The most common in-process cache topology is the near-scheme, but others include the replicated-scheme, optimistic-scheme, local-scheme, and the programmatically created ContinuousQueryCache.
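    To make the "clone before modifying" guidance concrete, a minimal hypothetical sketch (MyValue, its deep clone() and setAttribute() are placeholders, not from the thread):

    // Defensive copy of a cached value before mutation.
    MyValue cached = (MyValue) cache1.get(key1);   // may be a shared in-process reference
    MyValue copy = (MyValue) cached.clone();       // assumes MyValue implements a deep clone()
    copy.setAttribute("new value");                // mutate the copy, never the cached instance
    cache1.put(key1, copy);                        // publish the change to the cluster explicitly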
    thanks,
    mark

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
    • We have some aggregative rules that needs to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
    Running the EPN on a single thread is not fast enough for the required throughput, mainly because of network latency to the JMS and the heavy task of parsing the XML. Because of that we are looking for a solution that will read the messages from the JMS in parallel (multi-threaded) but will keep the same order of events between the primary and secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
    • Both CEPs read from the queue using multiple threads (10 reading threads each, 20 threads in total) and put the events into the Coherence cache.
    • The Coherence cache is publishing the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
    Does this sound sensible to you?
    Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
    We do not parse the XML on the event bean after the Coherence. We do it on the JMS adapter on multiple threads in order to utilize all the server resources (CPUs) and then we put it in the replicated cache.
    The requirements from our application are:
    - There is an aggregative query that needs to "see" all events (this means that we need to pass all events through a single processor and cannot partition them across several processors).
    - Because this is an HA solution, the events on both CEPs (primary and secondary) need to be in the same order when reaching the HA inbound adapter and the processor.
    - A single-threaded JMS adapter does not read the messages from the JMS fast enough, mainly because it takes time to parse the XML into an event.
    - Using a multi-threaded adapter or many single-threaded adapters with message selectors will create a situation where the order of events on both CEPs is not the same at the processor inbound.
    This is why we needed a mediator, so we can read on multiple threads that parse the XMLs in parallel without worrying about the order of messages, and on the other hand publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
    We use a queue instead of a topic because if we read the messages from a topic on both CEPs they will be stored twice in the Coherence replicated cache. But if we use a queue, when server 1 reads a message and puts it in the Coherence replicated cache, server 2 will not read it because it was removed from the queue.
    If I understand correctly you are suggesting replacing the JMS adapter with an event bean that will read the messages from the JMS directly?
    Are you also suggesting that we will not use a replicated cache but instead a stand alone cache on each server? In this case how do we keep the same order of events on both CEPs (on both caches)?

  • Listener for Unique Key in coherence Cache

    Hi Experts,
    I am using Oracle Coherence in one of my projects and I am facing a performance issue. Here is my scenario.
    This is my Coherence cache structure:
    UniqueKey         <Hive data>
    D1                     Hive Data1         
    D2                     Hive Data2
    D3                     Hive Data3
    Each unique key is for a user session. My application is a single sign-on with multiple applications involved.
    The Coherence cache can be updated by any application/sub-application. Whenever there is a change in a user's hive data I need to update it. My current implementation is:
    I get all the data (Map) from the Coherence cache (NamedCache).
    Look for the specific user's key.
    Hence there is a performance issue whenever I retrieve/set the hive data.
    Is there a default listener/methodology which I can use in Oracle Coherence?

    Thanks Jonathan for your timely response; I will look into the MapEvent, but just a quick question:
    Map<Object, Map<Object, Object>> updateUserData = (HashMap) namedCache.get(uniqueKey);
    So this is how I retrieve the user data from the NamedCache. What it returns is a Map, and any change to this current daykey is what the user is worried about.
    The addMapListener() on the ObservableMap will start to listen for events on every key in the namedCache. I am looking for something like a listener/logic which listens only for this uniqueKey for this user session.
    Will look into the Map Event in detail.
    Thanks Again,
    Sarath
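    A minimal sketch of the kind of key-specific listener being described (namedCache and uniqueKey are the names from the post; the listener body and the shape of the hive data map are illustrative):

    // Listen only for events on this user's key, not on the whole cache.
    namedCache.addMapListener(new AbstractMapListener() {
        public void entryUpdated(MapEvent evt) {
            // fires only when the entry for uniqueKey changes
            Map hiveData = (Map) evt.getNewValue();
            // refresh this session's view of the hive data here
        }
    }, uniqueKey, false);   // fLite=false so the event carries the old and new values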

  • Architectural Choice for accessing Coherence Cache Server

    I am a newbie and have a coherence use-case question.
    When accessing an independent Coherence cache server from application code such as an EJB deployed in WLS, does one architecturally write an entity which is then used as a sole point of
    access to the resource (the Coherence cache server) for querying, adding and modifying entries, or are the accesses to the Coherence cache server split and spread among the application code?
    For example,
    1. I write an EJB (EJB 1) which receives the requests from other EJBs (EJB 2, EJB 3); EJB 1 runs requests from EJB 2 and EJB 3 against the Coherence cache server and acts as the sole point of
    contact to the resource.
    2. EJB 2 and EJB 3 both run requests against the Coherence cache server. No fixed entity in the architecture is responsible for interaction with the Coherence cache server.
    Which is more common ?

    stevephe wrote:
    Yes, you could treat Coherence as a "pluggable" resource, just like a database. But that, just like in the case of a database, wouldn't boil it down to a single entity/interface. You'd treat Coherence as an "integration tier" resource that you'd "plumb in" just like you would a database, thus shielding your application's "domain" objects from integration-level concerns. That's how I've tiered our application, although we aren't inside a container like WebLogic/WebSphere/etc. The domain objects specify their persistence requirements via a multiplicity of interfaces; those interfaces have a number of implementations in the integration tier, one set of which just happens to be a Coherence set. You can use a "registry" approach to pick up the appropriate implementations (we use Spring injection). Have a look at the Coherence book from Apress for more details.

    Apress? You mean Packt, don't you?
    Best regards,
    Robert

  • Adding Coherence Cache nodes

    Hi,
    I have a web application deployed on a WebLogic cluster. I have included the coherence.jar as well as the coherence-cache-config.xml and tangosol-coherence-override.xml in the ear, which is then deployed on the 2 nodes of the WebLogic cluster. I am able to put and get data in the cache.
    What I understand is that deploying the ear (containing the coherence.jar) on the 2 nodes of the WLS server has created 2 Coherence cache nodes.
    Can anybody please let me know how I can create multiple Coherence cluster nodes so that I can scale the cache?
    Cheers,

    Thanks Nick.
    I checked the link; it describes the various cache schemes that can be specified, and a sample file exists in coherence.jar.
    So, how do you specify the cache that you want? Do you override it using the override xml files?
    My apologies for my limited knowledge of Coherence.
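    For illustration, a minimal cache configuration of the kind that sample file contains might look like the sketch below. The cache name, scheme name, and the choice of a distributed scheme are placeholders; this goes in the coherence-cache-config.xml named by tangosol.coherence.cacheconfig, not in the operational override files:

    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>my-cache</cache-name>
          <scheme-name>my-distributed-scheme</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>my-distributed-scheme</scheme-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </caching-schemes>
    </cache-config>

    Each JVM that runs com.tangosol.net.DefaultCacheServer with this configuration joins the cluster as an additional storage node, which is how the cache scales out.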

  • Simple UI to Query Coherence cache ?

    Hello to all Coherence Experts,
    My name is Goverdhan and I work for RBS. I am glad to be a part of this very active and informative forum. I had one question: is there a query tool to query the Coherence cache? I mean, rather than writing a Java program for every query, do you know of any graphical user interface that would help in writing queries and seeing results? Like Toad, for example. It need not be as jazzy as Toad, but the idea is to have a simple UI and avoid writing a Java program for every query. Any quick pointers in this direction are deeply appreciated.
    Please advise. We are urgently looking for a tool. Even if it is a paid tool, please let me know. My email ids are:
    [email protected]
    [email protected]
    Thanks,
    Goverdhan

    Hi Goverdhan,
    SL Corp. are creating a Coherence Cache Viewer and Display Builder tool, which may be useful to you.
    You can send a mail to [email protected] to find out more about it.
    Best regards,
    Robert
