Warming up coherence cache for Grid-Read configuration

I am using the TopLink Grid "Grid Read" configuration for my Coherence cache implementation.
I am facing a problem retrieving data in the following scenario:
1. A search is performed based on some criteria.
2. Part of the result is in the cache and part of it is only in the database.
3. I cannot bypass the cache with
setHint(QueryHints.QUERY_REDIRECTOR, new IgnoreDefaultRedirector())
because Coherence will not return null in this case.
4. So only the data that is in the cache is displayed, instead of the complete result.
To resolve this, I need to warm up the cache at system start-up so that the data in Coherence and the database are in sync.
So my question is: how do I warm up the cache at system start-up?
Thanks
Sandeep Singh
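
One possible approach, sketched under assumptions: load everything from the database once at start-up, bypassing the grid with the same IgnoreDefaultRedirector hint, then push the results into the named cache. The Employee entity, its getId() accessor, the cache name "Employee", and the redirector's package are assumptions, and whether a plain putAll matches TopLink Grid's storage format depends on your serializer configuration, so treat this as a starting point rather than a definitive recipe.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.eclipse.persistence.config.QueryHints;
import oracle.eclipselink.coherence.integrated.querying.IgnoreDefaultRedirector;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheWarmer {
    // Call once at system start-up, before normal traffic begins.
    public static void warmUp(EntityManagerFactory emf) {
        EntityManager em = emf.createEntityManager();
        try {
            // Bypass the grid so the query sees the database, not the partial cache.
            List<Employee> all = em
                    .createQuery("SELECT e FROM Employee e", Employee.class)
                    .setHint(QueryHints.QUERY_REDIRECTOR, new IgnoreDefaultRedirector())
                    .getResultList();
            // Push the full result set into Coherence so grid reads see everything.
            Map<Object, Employee> batch = new HashMap<Object, Employee>();
            for (Employee e : all) {
                batch.put(e.getId(), e); // Employee and getId() are hypothetical
            }
            NamedCache cache = CacheFactory.getCache("Employee");
            cache.putAll(batch);
        } finally {
            em.close();
        }
    }
}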


Similar Messages

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using the BDB JE DPL package for our application.
    Our current machine configuration is:
    1) 64 GB RAM
    2) 40-50 GB Berkeley DB data size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all jdb files in the directory; a shell is needed for the glob and redirection
         p = Runtime.getRuntime().exec(new String[] { "sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks every 15 minutes whether new data is available; if so, it clears all old references and loads the new data, again cat'ing *.jdb > /dev/null.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache?
    Thanks,
    Thanks,

    We've done a lot of performance testing on how best to utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
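    For reference, that preload can be done through the JE API; a minimal sketch, assuming an already-open Environment env and a hypothetical database name "myDb":
     import com.sleepycat.je.Database;
     import com.sleepycat.je.DatabaseConfig;
     import com.sleepycat.je.Environment;
     import com.sleepycat.je.PreloadConfig;
     import com.sleepycat.je.PreloadStats;
     public class Preloader {
         // Pull the btree into the DB cache up front, values included.
         static PreloadStats preload(Environment env, String dbName) {
             Database db = env.openDatabase(null, dbName, new DatabaseConfig());
             PreloadConfig cfg = new PreloadConfig();
             cfg.setLoadLNs(true); // also load leaf nodes (the values), not just internal nodes
             return db.preload(cfg);
         }
     }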
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (the leaf node, or LN) into the cache.
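    Setting that mode is a one-line change on the database configuration; a sketch, assuming a JE version that has CacheMode (4.0+):
     import com.sleepycat.je.CacheMode;
     import com.sleepycat.je.DatabaseConfig;
     public class EvictLnConfig {
         static DatabaseConfig withEvictLn() {
             DatabaseConfig dbConfig = new DatabaseConfig();
             // Drop the leaf node (the value) from the DB cache as soon as each
             // operation completes; internal btree nodes stay cached.
             dbConfig.setCacheMode(CacheMode.EVICT_LN);
             return dbConfig;
         }
     }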
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.

  • How to Test coherence cache configuration

    Hi,
    I have configured Coherence using the two config XMLs below. I started out by trying to configure a distributed cache scheme, but I am not sure if it has come up correctly. The configuration works fine from a caching point of view, and it even does the clustering; my only doubt is how I can test whether it is actually a distributed cache or a replicated cache.
    coherence-cache-config.xml
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>dist-ABCCache</cache-name>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
          <!-- Distributed caching scheme. -->
              <distributed-scheme>
                   <scheme-name>ABC-distributed-cache-scheme</scheme-name>
                   <lease-granularity>member</lease-granularity>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <thread-count>5</thread-count>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>server1</address>
                                  <port>####</port>
                             </local-address>
                        </tcp-acceptor>
                   </acceptor-config>
                   <autostart>true</autostart>
              </proxy-scheme>
         </caching-schemes>
    </cache-config>
    tangosol-coherence-override.xml
    <coherence>
         <cluster-config>
              <member-identity>
                   <cluster-name>MyCluster</cluster-name>
              </member-identity>
              <unicast-listener>
                   <well-known-addresses>
                        <socket-address id="1">
                             <address>server1</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>
                        <socket-address id="2">
                             <address>server2</address>
                             <port>####</port>
                             <port-auto-adjust>false</port-auto-adjust>
                        </socket-address>                    
                   </well-known-addresses>
              </unicast-listener>
              <multicast-listener>
                   <time-to-live system-property="tangosol.coherence.ttl">4</time-to-live>
                   <join-timeout-milliseconds>3000</join-timeout-milliseconds>
              </multicast-listener>
              <packet-publisher>
                   <packet-delivery>
                        <timeout-milliseconds>30000</timeout-milliseconds>
                   </packet-delivery>
              </packet-publisher>
              <service-guardian>
                   <timeout-milliseconds system-property="tangosol.coherence.guard.timeout">35000
                   </timeout-milliseconds>
              </service-guardian>
         </cluster-config>
         <logging-config>
              <severity-level system-property="tangosol.coherence.log.level">9</severity-level>
              <character-limit system-property="tangosol.coherence.log.limit">0</character-limit>
         </logging-config>
    </coherence>

    user1945969 wrote:
    Thanks for your answer, but I also wanted to know if there is any way I can verify that by the data in the cluster?
    You can start up the command line application (http://coherence.oracle.com/pages/viewpage.action?pageId=16684) or write a quick class to display the information for that particular cache.
    user1945969 wrote:
    I mean, can I check what data is present in each cluster member?
    I would suggest taking a look via JMX. In this case, you would want to look at the ServiceMBean, CacheMBean and StorageManagerMBean MBeans (take a look at the Registry for more information).
    user1945969 wrote:
    Another reason why I am not so confident whether this scheme is distributed or not is that in my config xml I do not have any backing map scheme configured, so how is Coherence going to do the backups in this case?
    <backing-map-scheme>
         <local-scheme/>
    </backing-map-scheme>
    You do have a "backing map" configured; it will just use the defaults.
    Coherence always manages the backups automatically, transparently and dynamically for you. When using the partitioned cache (i.e. "distributed-scheme"), Coherence will place the backup on a storage-enabled node on a separate physical machine from the primary.
    Rob
    :Coherence Team:
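    A quick programmatic check is also possible: the service type behind the cache tells you whether the scheme is partitioned or replicated. A minimal sketch, assuming the configuration above:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     public class CheckScheme {
         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("dist-ABCCache");
             // Prints "DistributedCache" for a partitioned scheme,
             // "ReplicatedCache" for a replicated one.
             System.out.println(cache.getCacheService().getInfo().getServiceType());
         }
     }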

  • Coherence Help standalone java program put data in cache & Servlet to Read

    Hi,
    I have Coherence 3.4 and am using Oracle Application Server 10.1.3. We are in the process of developing a web application and want to use Coherence for caching the data. Coherence is installed on the same box as Oracle Application Server 10.1.3. I need some help storing data in Coherence and reading it through a servlet. We have a standalone Java program that needs to put data in the cache, and we want to read that data through a servlet and display it on the page. When running the client, the data is stored in the cache, but when reading it through the servlet it returns null. We have included both coherence.jar and tangosol.jar in the war file and also on the classpath when running the standalone Java program. I started Coherence using the command below:
    C:\oracle\coherence\lib>java -cp coherence.jar -Dtangosol.coherence.cacheconfig=C:/oracle/coherence/tests/cache-config.xml com.tangosol.net.DefaultCacheServer
    Here is the sample config file used when starting the server above:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>VirtualCache</cache-name>
                <scheme-name>default-distributed</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <!--
            Default Distributed caching scheme.
            -->
            <distributed-scheme>
                <scheme-name>default-distributed</scheme-name>
                <service-name>DistributedCache</service-name>
                <backing-map-scheme>
                    <class-scheme>
                        <scheme-ref>default-backing-map</scheme-ref>
                    </class-scheme>
                </backing-map-scheme>
            </distributed-scheme>
             <class-scheme>
                 <scheme-name>default-backing-map</scheme-name>
                 <class-name>com.tangosol.util.SafeHashMap</class-name>
             </class-scheme>
         </caching-schemes>
     </cache-config>
    And here is the standalone Java program to put the data in the cache:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
     public class PutCache {
         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("VirtualCache");
             String key = "hello";
             cache.put(key, "Hello Cache123123");
         }
     }
    And here is the servlet code to read the data, but it somehow returns null:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.*;
    import javax.servlet.http.*;
     public class Servlet1 extends HttpServlet {
         private static final String CONTENT_TYPE = "text/html; charset=windows-1252";
         public void init(ServletConfig config) throws ServletException {
             super.init(config);
         }
         public void doGet(HttpServletRequest request,
                           HttpServletResponse response) throws ServletException, IOException {
             response.setContentType(CONTENT_TYPE);
             PrintWriter out = response.getWriter();
             NamedCache cache = CacheFactory.getCache("VirtualCache");
             String value = (String) cache.get("hello");
             out.println("<html>");
             out.println("<head><title>Servlet1</title></head>");
             out.println("<body>");
             out.println("<p>The servlet has received a GET. This is the reply.</p>" + value);
             out.println("</body></html>");
             out.close();
         }
     }
    Is there any other configuration I need? Any help is really appreciated.
    Thanks

    Hi,
    I start Coherence using:
    C:\oracle\coherence\lib>java -cp coherence.jar -Dtangosol.coherence.cacheconfig=C:/oracle/coherence/tests/cache-config.xml com.tangosol.net.DefaultCacheServer
    and run the standalone Java program using the command below:
    java -Dtangosol.coherence.cacheconfig=C:/oracle/coherence/tests/cache-config.xml Populatecache
    The web application doesn't have any reference to cache-config.xml; it just uses coherence.jar and tangosol.jar.
    What steps or configuration do I need in order to connect to the same Coherence cache? Do I need to provide some host:port for Coherence to store the data in the cache? How do the Java client program and the web application know how to connect to Coherence? Currently, even if I don't start the Coherence server and just run the standalone Java program, it executes fine; I wonder where exactly it persists the cache if Coherence itself is not started, or whether just adding the jars is enough. Any help is appreciated.
    Thanks
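    One likely explanation, offered as an assumption rather than a confirmed diagnosis: CacheFactory.getCache() starts an in-process cluster node if none is running, so the standalone program "works" even without DefaultCacheServer because the data lives in that JVM alone and disappears when it exits. For the servlet to see the same data, both JVMs must join the same cluster, e.g. by passing the same -Dtangosol.coherence.cacheconfig (and cluster settings) to the application server JVM as well. A small diagnostic you can run from each JVM:
     import com.tangosol.net.CacheFactory;
     public class ClusterCheck {
         public static void main(String[] args) {
             // Each JVM prints its view of the cluster; if the standalone program,
             // the cache server and the web application are really clustered
             // together, they all appear in the same member set.
             System.out.println(CacheFactory.ensureCluster().getMemberSet());
         }
     }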

  • Specifying coherence-cache-config.xml for multiple clusters

    Hi,
    I am running two cache clusters (cluster A and cluster B, which hold different cache types). We have a web application that needs to communicate with both clusters, and we have two coherence-cache-config-g.xml files, one for each cluster.
    Where do we specify the two coherence-cache-config.xml files for these clusters in the coherence.jar that we deploy on the web app server?
    Please provide some inputs...
    thanks in advance,
    - G.

    Hi G,
    You can define a path to the cache configuration descriptor in your operational configuration override file (tangosol-coherence-override.xml) or specify it in the system property "tangosol.coherence.cacheconfig".
    Please see this Wiki page for details:
    http://wiki.tangosol.com/display/COH32UG/configurable-cache-factory-config
    Regards,
    Gene
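    For reference, a sketch of the two options Gene describes (the file name is hypothetical). Either start the JVM with:
    java -Dtangosol.coherence.cacheconfig=coherence-cache-config-clusterA.xml ...
    or point the operational override file at the descriptor:
     <coherence>
          <configurable-cache-factory-config>
               <init-params>
                    <init-param>
                         <param-type>java.lang.String</param-type>
                         <param-value>coherence-cache-config-clusterA.xml</param-value>
                    </init-param>
               </init-params>
          </configurable-cache-factory-config>
     </coherence>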

  • How to specify index for cache in coherence-cache-config.xml

    Hi All,
    We want to apply indexing on cache data.
    Suppose I have an EMPLOYEE object in the Coherence cache
    and I want to use employeeID for indexing purposes.
    Can anybody help me achieve this at the configuration level, i.e. using the XML file (coherence-cache-config.xml)?

    Hi,
    I've posted some code (http://coherence.oracle.com/download/attachments/14647422/add-index-namespace.jar) and the source (http://coherence.oracle.com/download/attachments/14647422/add-index-namespace-src.jar). It depends on coherence common version 2.3.0.39174; however, I believe it will work with 2.0.0.23649 also. The coherence common library can be downloaded from http://coherence.oracle.com/display/INC10/coherence-common
    Note: This is purely an example on how to achieve index creation via a cache configuration file and is not a part of the product thus is not covered by product support.
    Here is an example cache configuration that uses the namespace:
    <cache-config xmlns:service="class://com.oracle.coherence.environment.extensible.ServiceOperations">
        <caching-scheme-mapping>
            <service:index-add cache-name="dist-indexes">
                <extractor>
                    <class-name>ReflectionExtractor</class-name>
                    <init-params>
                        <init-param>
                            <param-type>string</param-type>
                            <param-value>getName</param-value>
                        </init-param>
                    </init-params>
                </extractor>
            </service:index-add>
            <!-- Simplified POF Config -->
            <service:index-add cache-name="dist-indexes" pof-enabled="true">
                <pof-index>8,16,32</pof-index>
            </service:index-add>
            <!-- This should not be counted based on system-property override -->
            <service:index-add cache-name="dist-indexes" pof-enabled="true" enabled="{tangosol.index.add}">
                <pof-index>8,16,31</pof-index>
            </service:index-add>
            <!-- Explicit POF Config -->
            <service:index-add cache-name="dist-indexes">
                <extractor>
                    <class-name>PofExtractor</class-name>
                    <init-params>
                        <init-param>
                            <param-type>{class}</param-type>
                            <param-value>null</param-value>
                        </init-param>
                        <init-param>
                            <param-type>{object}</param-type>
                            <param-value>
                                <class-name>com.tangosol.io.pof.reflect.SimplePofPath</class-name>
                                <init-params>
                                    <init-param>
                                        <param-type>{int[]}</param-type>
                                        <param-value>1,2,4</param-value>
                                    </init-param>
                                </init-params>                     
                            </param-value>
                        </init-param>
                    </init-params>
                </extractor>
            </service:index-add>
        </caching-scheme-mapping>
     </cache-config>
     Thanks,
    Harvey
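    For comparison, the same index on getName can be created programmatically with the standard QueryMap API (cache name taken from the example above); a sketch:
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.extractor.ReflectionExtractor;
     public class AddIndex {
         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("dist-indexes");
             // fOrdered = true builds a sorted index; the null comparator
             // falls back to the extracted values' natural ordering.
             cache.addIndex(new ReflectionExtractor("getName"), true, null);
         }
     }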

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
    • We have some aggregative rules that needs to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
    Running the EPN on a single thread is not fast enough for the required throughput mainly because network latency to the JMS and the heavy task of parsing of the XML. Because of that we are looking for a solution that will read the messages from the JMS in parallel (multi thread) but will keep the same order of events between the Primary and Secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
    • Both CEPs read from the queue using multi threading (10 reading threads – total of 20 threads) and putting it to the Coherence cache.
    • The Coherence cache is publishing the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
    Does this design sound sensible to you?
    Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
    We do not parse the XML on the event bean after Coherence. We do it on the JMS adapter on multiple threads in order to utilize all the server resources (CPUs), and then we put the result in the replicated cache.
    The requirements for our application are:
    - There is an aggregative query that needs to "see" all events (this means we need to pass all events through a single processor and cannot partition them across several processors).
    - Because this is a HA solution, the events on both CEPs (primary and secondary) need to be in the same order when reaching the HA inbound adapter and the processor.
    - A single-threaded JMS adapter cannot read the messages from the JMS fast enough, mainly because it takes time to parse the XML into an event.
    - Using a multi-threaded adapter, or many single-threaded adapters with message selectors, will create a situation in which the order of events on the two CEPs is not the same at the processor inbound.
    This is why we needed a mediator: so we can read on multiple threads that parse the XMLs in parallel without worrying about message order, and on the other hand publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
    We use a queue instead of a topic because if we read the messages from a topic on both CEPs, each message would be stored twice in the Coherence replicated cache. With a queue, when server 1 reads a message and puts it in the Coherence replicated cache, server 2 will not read it because it was removed from the queue.
    If I understand correctly, you are suggesting replacing the JMS adapter with an event bean that reads the messages from the JMS directly?
    Are you also suggesting that we not use a replicated cache but instead a standalone cache on each server? In that case, how do we keep the same order of events on both CEPs (in both caches)?

  • Using cache for read only data

    In my application I have to display some read-only data (in a number of drop-downs present on several portlets).
    This data might be driven from the database or from some XML/property file; that is not decided yet.
    For example: Country/State/City/Zip code, etc.
    Kindly let me know how I can implement this in WebLogic Portal 10.0.
    Do I have to use some third-party caching mechanism like the Hibernate cache for this,
    or
    does WebLogic Portal support caching?
    Please suggest all possible solutions for implementing this.

    Cache cache = CacheFactory.getCache("yourCache"); // any name can be passed, and you can create as many caches as you want, perhaps by functionality, perhaps by size, etc. If you want to configure the cache statically you must use this name in the XML file. If you define it in the XML file, you can administer the cache from Portal Admin.
    cache.put("key", "value");
    cache.get("key");
    There are other advanced things you can do, like time-to-live, flushing, and auto-reloading the cache, which are all described in the javadoc.
    regards
    deepak

  • Set a new expiry delay for entries added in the cache by a read through

    Hi,
    I have an application that uses an Oracle Coherence cache backed by a database. Each item that is added in the cache must have a custom expiry delay computed from its content.
    Is it possible to set a new expiry delay for the entries loaded in the cache from the database by a read through operation?
    I tried setting a new expiry delay in the load method of the BinaryEntryStore, but it is ignored and the default expiry of the corresponding cache is used instead. Also I tried using a MapTrigger, but the entries added in the cache by a read through operation are not intercepted by the MapTrigger.
    Thanks,
    Adrian

    What we do (not sure if it is the proper way, but it worked for us) is extend LocalCache and override the put method:
    public class CustomExpiryLocalCache extends LocalCache {
        @Override
        public Object put(Object oKey, Object oValue, long cMillis) {
            long cNewMillis = xxxxx; // compute the custom expiry from the entry's content
            return super.put(oKey, oValue, cNewMillis);
        }
    }

  • How do I configure multiple JBoss caches for a standalone application

    How do I configure multiple JBoss caches for a standalone application running on a single JVM? Please advise and provide sample code if any.
    Thanks
    NAgs

    http://www.jboss.org
    Locking this thread.

  • Dialog box says "Bridge encountered a problem and is unable to read the cache. ... purge central cache"

    Our Bridge has been acting very strange. It keeps giving a dialog box saying "Bridge encountered a problem and is unable to read the cache. ... purge central cache". For one, I can't find the central cache, and two, I've purged the cache in Bridge. Shouldn't that help? There's just all kinds of weird stuff going on. I suppose it does have something to do with the central cache, so maybe someone can tell me where to find it.
    When I use the burn/dodge tool, it sometimes drags and staggers and takes forever to complete whatever I'm burning/dodging. When I try to delete an image from the dock, it won't disappear, but it won't display either.
    Any help would be appreciated.

    The Central Cache is the Bridge Cache.
    It's referred to as the Central Cache to differentiate it from an individual folder's cache or even an individual image cache.

  • Configuring DDE for Adobe Reader printto.

    Hello,
    I am trying to configure DDE for Adobe Reader to use printto. Does anyone know what parameters and messages I should use to configure this?
    Regards, Ramon

    Uninstall Reader. Use Download Adobe Reader and Acrobat Cleaner Tool - Adobe Labs to remove all traces of Reader. Download a new copy from http://get.adobe.com/reader/enterprise
    Install.

  • How can I save visited internet pages in cache for off-line reading?

    Hi,
    is there any way to save visited internet pages in the cache for off-line reading until I delete them manually or the cache is completely full?
    Thanks,
    Wilfried

    Hello
    you can use this Add-on : https://addons.mozilla.org/en-US/mobile/addon/reading-list
    Enjoy browsing!

  • Listener for Unique Key in coherence Cache

    Hi Experts,
    I am using Oracle Coherence in one of my projects and I am facing a performance issue. Here is my scenario.
    This is my Coherence cache structure:
    UniqueKey         <Hive data>
    D1                Hive Data1
    D2                Hive Data2
    D3                Hive Data3
    Each unique key is for a user session. My application is single sign-on with multiple applications involved.
    The Coherence cache can be updated by any application/sub-application. Whenever there is a change in a user's hive data I need to update it. My current implementation is:
    I get all the data (a Map) from the Coherence cache (NamedCache).
    I look for the specific user's key.
    Hence there is a performance issue whenever I retrieve/set the hive data.
    Is there a default listener/methodology I can use in Oracle Coherence?

    Thanks Jonathan for your timely response; I will look into the MapEvent, but just a quick question:
    Map<Object, Map<Object, Object>> updateUserData = (HashMap) namedCache.get(uniqueKey);
    This is how I retrieve the user data from the NamedCache. What it returns is a Map, and any change under this current day key is what the user is worried about.
    addListener() on the ObservableMap will start listening for events on every key in the NamedCache. I am looking for something like a listener/logic which watches only this uniqueKey for this user session.
    I will look into the MapEvent in detail.
    Thanks Again,
    Sarath
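    For what it's worth, ObservableMap (which NamedCache extends) does support exactly this: addMapListener can be scoped to a single key, so events fire only for that key. A sketch, assuming the namedCache and uniqueKey from the post above:
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.AbstractMapListener;
     import com.tangosol.util.MapEvent;
     public class HiveDataListener {
         // Listen for changes to one session's key only, not the whole cache.
         static void listen(NamedCache namedCache, Object uniqueKey) {
             namedCache.addMapListener(new AbstractMapListener() {
                 @Override
                 public void entryUpdated(MapEvent evt) {
                     // evt.getNewValue() is this user's updated hive data
                 }
             }, uniqueKey, false); // false = deliver full old/new values, not a "lite" event
         }
     }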

  • Architectural Choice for accessing Coherence Cache Server

    I am a newbie and have a Coherence use-case question.
    When accessing an independent Coherence cache server from application code, such as an EJB deployed in WLS, does one architecturally write a single entity which is then used as the sole point of access to the resource (the Coherence cache server) for querying, adding, and modifying entries, or are the accesses to the Coherence cache server split and spread among the application code?
    For example:
    1. I write an EJB (EJB 1) which receives requests from other EJBs (EJB 2, EJB 3); EJB 1 runs requests from EJB 2 and EJB 3 against the Coherence cache server and acts as the sole point of contact to the resource.
    2. EJB 2 and EJB 3 both run requests against the Coherence cache server; no fixed entity in the architecture is responsible for interaction with the Coherence cache server.
    Which is more common?

    stevephe wrote:
    Yes, you could treat Coherence as a "pluggable" resource, just like a database. But that, just like in the case of a database, wouldn't boil it down to a single entity/interface. You'd treat Coherence as an "integration tier" resource that you'd "plumb in" just like you would a database, thus shielding your application's "domain" objects from integration-level concerns. That's how I've tiered our application, although we aren't inside a container like WebLogic/WebSphere/etc. The domain objects specify their persistence requirements via a multiplicity of interfaces; those interfaces have a number of implementations in the integration tier, one set of which just happens to be a Coherence set. You can use a "registry" approach to pick up the appropriate implementations (we use Spring injection). Have a look at the Coherence book from Apress for more details.
    Apress? You mean Packt, don't you?
    Best regards,
    Robert
