Replicated vs optimistic cache scheme

Hi,
I have two questions:
First one.
In a replicated cache, a single put operation first acquires a cluster-wide lock on the cache entry, then performs the put, then releases the lock/replicates the entry over the cluster - at least 2 network round trips. (Correct me if I'm wrong, please.)
In an optimistic cache, the put is performed on the local replica, then the entry is replicated over the cluster. Is replication done synchronously (the put does not return until all replicas confirm the change) or asynchronously?
Second one.
How is an EntryProcessor executed on an optimistic cache? Does it run locally, with the changes then broadcast to the other replicas, or is the EntryProcessor itself broadcast and executed on every replica? (That doesn't sound realistic, but I want to be sure.)
Thank you

Hi,
> In a replicated cache, a single put operation first acquires a cluster-wide lock on the cache entry, then performs the put, then releases the lock/replicates the entry over the cluster - at least 2 network round trips. (Correct me if I'm wrong, please.)

I think a better way to think about how Coherence implements the replicated cache is as a partitioned cache that is backed up on all the nodes running the specified service. Assuming the put() is performed on a member that does not own the object, a message is sent to the owning member with the update. The owning member distributes the update to the remaining members and sends a message back to the originating member. Note: there are no locks required because there is always only one owner of an object in the cluster.

> In an optimistic cache, the put is performed on the local replica, then the entry is replicated over the cluster. Is replication done synchronously (the put does not return until all replicas confirm the change) or asynchronously?

Coherence implements the optimistic cache very much like the replicated cache. The put behaves the same, i.e., the put does not return until the replicas have confirmed the change. Note: the replication is performed asynchronously. (The updates are sent to the members and their acknowledgements are reaped as they complete.) The difference between the optimistic cache and the replicated cache is that the optimistic cache does not support concurrency control. This means that you cannot use the explicit locking provided by the ConcurrentMap interface. In a replicated cache, puts are blocked on keys that are locked, but they are not blocked in a partitioned cache.

> How is an EntryProcessor executed on an optimistic cache? Does it run locally, with the changes then broadcast to the other replicas, or is the EntryProcessor itself broadcast and executed on every replica? (That doesn't sound realistic, but I want to be sure.)

In an optimistic cache (as with the replicated cache), the execution of an EntryProcessor occurs on the initiating member.
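
To make that concrete, here is a rough sketch of the calls being discussed (the cache name and the increment processor are made up for illustration):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class OptimisticExample {

        // Hypothetical processor: increments an Integer value in place.
        public static class IncrementProcessor extends AbstractProcessor {
            public Object process(InvocableMap.Entry entry) {
                Integer n = (Integer) entry.getValue();
                int next = (n == null) ? 1 : n.intValue() + 1;
                entry.setValue(Integer.valueOf(next));
                return Integer.valueOf(next);
            }
        }

        public static void main(String[] args) {
            // "opt-example" is assumed to map to an optimistic-scheme in the cache config
            NamedCache cache = CacheFactory.getCache("opt-example");

            // put() returns once the update has been confirmed by the other members
            cache.put("key-1", Integer.valueOf(1));

            // Per the answer above: the processor runs on the initiating member and
            // only the resulting change is propagated to the other replicas.
            Object result = cache.invoke("key-1", new IncrementProcessor());
            System.out.println("new value: " + result);

            CacheFactory.shutdown();
        }
    }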
Regards,
Harv

Similar Messages

  • Declaring Distributed, Replicated and Optimistic caches.

    It looks like the methods to declare specific types of caches in Java are now deprecated.
    Link: [http://download.oracle.com/otn_hosted_doc/coherence/342/com/tangosol/net/CacheFactory.html#getCache(java.lang.String,%20java.lang.ClassLoader)]
    Can you still do this with the API or must it be done in the XML config file now?
    Thanks,
    Andrew

    Hi Andrew,
    If I am reading your question right, the API now is to get the service with getService(String):
    http://download.oracle.com/otn_hosted_doc/coherence/342/com/tangosol/net/CacheFactory.html#getService(java.lang.String)
    you can then invoke ensureCache on the service.
    The service is setup in the XML config file.
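    In code that would look something like this (untested; the service and cache names must match your configuration):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.CacheService;
        import com.tangosol.net.NamedCache;

        public class EnsureCacheExample {
            public static void main(String[] args) {
                // "DistributedCache" must match a <service-name> defined in the cache config XML
                CacheService service = (CacheService) CacheFactory.getService("DistributedCache");

                // ensureCache returns the named cache backed by that service
                NamedCache cache = service.ensureCache("my-cache", null);

                cache.put("hello", "world");
                CacheFactory.shutdown();
            }
        }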
    -David

  • Replicated cache scheme with cache store

    Hi All,
    I am having following configuration for the UserCacheDB in the coherence-cache-config.xml
    I have a cachestore class which inserts data into the database, and this data will be loaded from the database on application start-up.
    I need to make this cache replicated so that the other application will have this data. Can anyone please guide me on what my configuration should be to make this cache replicated with the cache store class?
    <distributed-scheme>
                   <scheme-name>UserCacheDB</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <internal-cache-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.util.ObservableHashMap</class-name>
                                  </class-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>test.UserCacheStore</class-name>
                                       <init-params>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>PC_USER</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                             <read-only>false</read-only>
                             <!--
                                  To make this a write-through cache just change the value below to
                                  0 (zero)
                             -->
                             <write-delay-seconds>0</write-delay-seconds>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <listener />
                   <autostart>true</autostart>
              </distributed-scheme>
    Thanks in Advance.

    Hi,
    You should be able to use a cachestore with a local-scheme.
          <replicated-scheme>
            <scheme-name>UserCacheDB</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>coherence-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
            <backing-map-scheme>
              <local-scheme>
                <scheme-name>UserCacheDBLocal</scheme-name>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>test.UserCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>PC_USER</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
              </local-scheme>
            </backing-map-scheme>
            <listener/>
            <autostart>true</autostart>
          </replicated-scheme>
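
    For completeness, the test.UserCacheStore class that the class-scheme points at would be shaped roughly like this (the JDBC calls are omitted; the String constructor argument receives the PC_USER init-param, i.e. the table name):

        package test;

        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.Map;
        import com.tangosol.net.cache.CacheStore;

        public class UserCacheStore implements CacheStore {
            private final String tableName;

            // Called with the <init-param> value, e.g. "PC_USER"
            public UserCacheStore(String tableName) {
                this.tableName = tableName;
            }

            public Object load(Object key) {
                // SELECT the row for 'key' from tableName via JDBC and build the cache value
                return null; // placeholder
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Iterator it = keys.iterator(); it.hasNext(); ) {
                    Object key = it.next();
                    Object value = load(key);
                    if (value != null) {
                        result.put(key, value);
                    }
                }
                return result;
            }

            public void store(Object key, Object value) {
                // INSERT or UPDATE the row for 'key' in tableName via JDBC
            }

            public void storeAll(Map entries) {
                for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                    Map.Entry entry = (Map.Entry) it.next();
                    store(entry.getKey(), entry.getValue());
                }
            }

            public void erase(Object key) {
                // DELETE the row for 'key' from tableName via JDBC
            }

            public void eraseAll(Collection keys) {
                for (Iterator it = keys.iterator(); it.hasNext(); ) {
                    erase(it.next());
                }
            }
        }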

  • What are the exactly characteristics of a "Optimistic Cache"?

    Hey,
    can someone explain the exact characteristics of an "Optimistic Cache"?
    There is a short explanation on the Wiki page http://wiki.tangosol.com/display/COH34UG/Types+of+Caches+in+Coherence
    but the definition is not obvious to me.
    For example:
    "...two cluster members are independently pruning or purging the underlying local stores, it is possible that a cluster member may have a different store content than that held by another cluster member." I think this is clear when no concurrency control is given, but what are the key characteristics, and what are the differences compared to the other caching strategies?
    Thanks in advance!
    Pawel

  • Replicating to a different schema

    Some questions regarding Oracle 9i Lite:
    1. Using the WebToGo Application Packager (or Consolidator API), is it possible to specify the target schema in the Oracle Lite DB for a snapshot definition? Everything, no matter what the source schema (SYSTEM, SCOTT, etc.), is replicated to the SYSTEM schema on the client OLite DB.
    2. Can the Mobile Server push Java stored procedures to the client Olite DB?
    3. We created a snapshot definition for a table with no primary key. This would be a read-only table on the client DB. Using the WTG Packager, we cleared the Updatable checkbox on the snapshot definition tab. When we try to publish the application, it complains that the table cannot be updatable since it has no PK. Why is this happening? We even tried using the Consolidator API but still got the same error.
    Any help/pointers would be greatly appreciated.
    TIA

    Just tap into the existing trail if the change data is already there. There's no need to have a second extract in that case. So ADD REPLICAT and specify the EXTTRAIL the same as the EXTTRAIL for the redo log extract.
    OGG won't pick up truncates by default. If the row is gone before the update arrives at the target table then you can use INSERTMISSINGUPDATES. When using this parameter make sure that you add supplemental logging (ADD TRANDATA) for columns that you need but that may not be updated (e.g. target columns with NOT NULL or FK constraints).
    Good luck,
    -joe

  • Why can't a backing-map-scheme be specified in caching-schemes?

    Most other types of schemes except backing-map-scheme can be specified in the caching-schemes section of the cache configuration XML file and after that be reused in other scheme definitions. What is the motivation for excluding the backing-map-scheme?
    /Magnus

    Hi Magnus,
    you can specify an "abstract" service-type scheme (e.g. distributed-scheme) containing the backing map scheme instead of the backing map scheme itself.
    I know it is not as flexible as having a backing map scheme separately, but it is almost as good.
    Best regards,
    Robert

  • Storage disabled nodes and near-cache scheme

    This is probably a newbie question. I have a named cache with a near-cache scheme, with a local-scheme as the front tier. I can see how this will work in a cache-server node. But I have an application node which pushes a lot of data into the same named cache, and it is set to be storage-disabled.
    My understanding of a local cache scheme is that data is cached locally in the heap for faster access and the writes are delegated to the service for writing to backing map. If my application is storage disabled, is the local cache still used or is all data obtained from the cache-servers?

    Hello,
    Your understanding is correct. To answer your question: writes will always go through the cache servers. A put will also always go through the cache servers, but the near cache may or may not be populated at that point.
    hth,
    -Dave

  • Locking in replicated versus distributed caches

    Hello,
    In the User Guide for Coherence 2.5.0, section 2.3 Cluster Services Overview says
    "the replicated cache service supports pessimistic locking"
    yet section 2.4 Replicated Cache Service says
    "if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock".
    I am trying to decide whether to use a replicated cache or a distributed cache, either of which will be small, where I want the objects to be locked across the whole cluster.
    If not all of the cluster nodes have to agree on a lock in a replicated cluster, doesn't this mean that a replicated cluster does not support pessimistic locking?
    Could you please explain this?
    Thanks,
    Rohan

    Hi Rohan,
    The Replicated cache supports pessimistic locking. The User Guide is discussing the implementation details and how they relate to performance. The Replicated and Distributed cache services differ in performance and scalability characteristics, but both support cluster-wide coherence and locking.
    Jon Purdy
    Tangosol, Inc.

  • Questions on InitialContext and replica-aware stub caching

    Hi All,
    We have a cluster and deployed with some stateless ejb session beans. Currently we only cached the InitialContext object in the client code, and I have several questions:
    1. In the current case, if we call lookup() to get a replica-aware stub, which server will return the stub object - the same server we got the InitialContext from, or will it be load-balanced to other servers every time we call the lookup method?
    2. Should we just cache the stub? Is it thread-safe? If it is, how does the stub handle concurrent requests from the client threads - in parallel or in sequence?
    One more question: when we call new InitialContext(), it takes a long time to return a timeout exception if the servers are not reachable. How can we set a timeout for this case?

    809364 wrote:
    You can set the timeout value programatically by using the
    weblogic.jndi.Environment.setRequestTimeout()
    Refer: http://docs.oracle.com/cd/E12839_01/apirefs.1111/e13941/weblogic/jndi/Environment.html
    or
    set the REQUEST_TIMEOUT in the weblogic.jndi.WLContext
    Refer: http://docs.oracle.com/cd/E11035_01/wls100/javadocs/weblogic/jndi/WLContext.html
    Hi, I tried setting those parameters before, but they only work for the stub lookup and EJB call timeouts, not for the creation of the InitialContext. And any ideas on my 2nd question?
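    For reference, the approach being discussed looks roughly like this (host/port, timeout, and JNDI name are placeholders; method names are per the Environment javadoc linked above, and as noted it applies to lookups and EJB calls rather than to the initial connect):

        import javax.naming.Context;
        import weblogic.jndi.Environment;

        public class JndiTimeoutExample {
            public static void main(String[] args) throws Exception {
                Environment env = new Environment();
                env.setProviderUrl("t3://server1:7001,server2:7001"); // placeholder cluster address
                env.setRequestTimeout(5000);                          // request timeout in milliseconds
                Context ctx = env.getInitialContext();

                // Lookups through this context are subject to the request timeout
                Object home = ctx.lookup("MyStatelessBeanHome");      // hypothetical JNDI name
                System.out.println(home);
            }
        }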

  • Wildcards in caching-scheme-mapping

    Hi,
    I am trying to use this mapping:
    <cache-mapping>
    <cache-name>*CR</cache-name>
    <scheme-name>bitemporal-dist-CR</scheme-name>
    </cache-mapping>
    however, Coherence cannot find the scheme. If I put '*' at the end of the pattern it works, but I wonder whether the wildcard should also work in the middle or at the beginning of a string. It would simplify our naming convention for disaster-recovery cache names, because we append them with the '-CR' string.

    Hi Roman,
    According to the documentation the following cache name patterns are supported:
    * exact match, i.e. "MyCache"
    * prefix match, i.e. "My*" that matches to any cache name starting with "My"
    * any match "*", that matches to any cache name
    So it won't work at the beginning or in the middle.
    Best regards,
    Robert

  • Data distribution in distributed caching scheme

    When using the distributed (partitioned) scheme in Coherence, how does the data distribution happen among the nodes in the data grid? Is there an API to control it, or are there configurations to control it?

    Hi 832093
    A distributed scheme works by allocating the data to partitions (by default there are 257 of these, but you can configure more for large clusters). The partitions are then allocated as evenly as possible to the nodes of the cluster, so each node owns a number of partitions. Partitions belong to a cache service, so you might have a cache service that is responsible for a number of caches, and a particular node will own the same partitions for all those caches. If you have a backup count > 0 then a backup of each partition is allocated to another node (on another machine if you have more than one). When you put a value into the cache, Coherence will basically perform a hash function on your key which will allocate the key to a partition and therefore to the node that owns that partition. In effect a distributed cache works like a Java HashMap, which hashes keys and allocates them to buckets.
    You can have some control over which partition a key goes to if you use key association to co-locate entries into the same partition. You would normally do this to put related values into the same location to make processing them on the server side more efficient in use-cases where you might need to alter or query a number of related items. For example, in financial systems you might have a cache for Trades and a cache for TradeValuations in the same cache service. You can then use key association to allocate all the Valuations for a Trade to the same partition as the parent Trade. So if a Trade was mapped to partition 190 in the Trade cache then all of the Valuations for that Trade would map to partition 190 in the TradeValuations cache and hence be on the same node (in the same JVM process).
    You do not really want control over which nodes partitions are allocated to, as this could impair Coherence's ability to evenly distribute partitions and allocate backups properly.
    JK
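    A small sketch of the key association JK describes (class and field names invented for illustration; a real key class would also need proper serialization, e.g. POF):

        import java.io.Serializable;
        import com.tangosol.net.cache.KeyAssociation;

        // Key for the TradeValuations cache: it associates each valuation with its parent
        // trade, so the valuation lands in the same partition (and JVM) as the Trade entry.
        public class ValuationKey implements KeyAssociation, Serializable {
            private final String tradeId;
            private final int    version;

            public ValuationKey(String tradeId, int version) {
                this.tradeId = tradeId;
                this.version = version;
            }

            // Coherence hashes the associated key (the trade id) instead of this key
            public Object getAssociatedKey() {
                return tradeId;
            }

            public boolean equals(Object o) {
                if (!(o instanceof ValuationKey)) {
                    return false;
                }
                ValuationKey that = (ValuationKey) o;
                return version == that.version && tradeId.equals(that.tradeId);
            }

            public int hashCode() {
                return tradeId.hashCode() * 31 + version;
            }
        }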

  • 1 domain + 3 cluster - optimistic cache invalidate

    Weblogic-Release: 8.1.4.0 SP 4
    We are currently running 3 independent applications (A, B, C).
    Each is deployed in a separate domain/cluster, but they share the same DB.
    Our UserBean (BMP, CMT), which is deployed in each of the 3 applications, is configured to use the default concurrency-strategy -> "Database".
    Due to heavy traffic (20M PI per day) and massive DB access from one of the applications, we want to change our concurrency-strategy to "Optimistic" and make use of the "cache-between-transactions" feature.
    Reading the edocs, I understand that cluster-wide invalidation of cached entities depends on running a cluster and the corresponding multicast settings.
    Does this even work for one "Domain" containing multiple "Cluster"-Elements (application A, B and C) in the config.xml ?
    I mean, will the EntityBean be invalidated in each independent cluster (but same JNDIName) or do any known issues exist in this scenario ?
    Any help would be very appreciated.
    Regards,
    Jan

    According to chapter 2, Integrating Coherence Applications with Coherence*Web (
    Oracle Coherence Integration Guide for Oracle Coherence), our cache-config should be merged with session-cache-config.xml. The name of the merged cache-config should be session-cache-config.xml. What's the right approach to get a cache storing business logic? Do I need to use CacheFactory.getCache(cacheName) - DefaultCacheFactory - or SessionHelper.getFactory()?

  • [ExtremelyUrgent] How can we implement our own caching scheme in servlets???

    Hi all,
    anyone, please give me an idea about implementing our own caching scheme in servlets. Please guide me with your knowledge; I also need a running sample of source code to understand this concept. It is extremely urgent, please help me.
    Regards.

    try to use
    http://cvs.apache.org/viewcvs/jakarta-commons-sandbox/cache/
    or
    http://jakarta.apache.org/turbine/jcs/
    if you are looking for simple caching, do it programmatically.
    regards,
    Arun
    http://www.javageekz.com/
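    If you decide to do it programmatically as suggested, a very small time-based cache that a servlet can hold in a shared field might look like this (purely illustrative; no size limit or eviction beyond the timeout):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Minimal time-based cache; a servlet could keep one instance in a static field.
        public class SimpleCache {

            private static class Entry {
                final Object value;
                final long expiresAt;
                Entry(Object value, long expiresAt) {
                    this.value = value;
                    this.expiresAt = expiresAt;
                }
            }

            private final Map map = new ConcurrentHashMap();
            private final long ttlMillis;

            public SimpleCache(long ttlMillis) {
                this.ttlMillis = ttlMillis;
            }

            public void put(Object key, Object value) {
                map.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
            }

            public Object get(Object key) {
                Entry e = (Entry) map.get(key);
                if (e == null) {
                    return null;
                }
                if (e.expiresAt < System.currentTimeMillis()) {
                    map.remove(key); // expired: drop it and report a miss
                    return null;
                }
                return e.value;
            }
        }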

  • Refresh Ahead Cache with JPA

    I am trying to use refresh-ahead caching with JpaCacheStore. My backing-map config is given below. I am using the same JPA example as given in the Coherence tutorial. The cache only loads the data from the database when the server starts. When I change the data in the DB, it is not reflected in the cache. I am not sure I am doing the right thing. Need your help!!
    <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <!--Define the cache scheme-->
                             <internal-cache-scheme>
                                  <local-scheme>
                                        <expiry-delay>1m</expiry-delay>
                                  </local-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                       <init-params>
                                            <!--
                                            This param is the entity name
                                            This param is the fully qualified entity class
                                            This param should match the value of the
                                            persistence unit name in persistence.xml
                                            -->
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>com.oracle.handson.{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>JPA</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                              <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
    Thanks in advance.
    John

    I guess this is the answer
    Sorry for the dumb question :)
    Note: For use with Partitioned (Distributed) and Near cache
    topologies: Read-through/write-through caching (and variants) are
    intended for use only with the Partitioned (Distributed) cache
    topology (and by extension, Near cache). Local caches support a
    subset of this functionality. Replicated and Optimistic caches should
    not be used.

  • Item disappeared from cache...

    I inserted 4 items in my cache. I have a JTable that displays them as they show up. I saw all four come in. That app's console logging confirmed it:
    2009-10-08 14:51:50.299/78.263 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT      47V31209        1       SIM     2009281 0
    2009-10-08 14:54:52.292/260.256 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        3       SIM     2009281 0
    2009-10-08 14:55:11.208/279.172 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        5       SIM     2009281 0
    2009-10-08 14:55:36.024/303.988 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        7       SIM     2009281 0
    I tried to list the items using Coherence's provided console app:
    Map (orders): cache executions
    <distributed-scheme>
      <!--
      To use POF serialization for this partitioned service,
      uncomment the following section
      <serializer>
      <class-
      name>com.tangosol.io.pof.ConfigurablePofContext</class-
      name>
      </serializer>
      -->
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    Map (executions): list
    MSFT    47V31209        5       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:55:11 CDT 2009
    MSFT    47V31209        7       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:55:35 CDT 2009
    MSFT    47V31209        3       SIM     2009281 0 = BUY 1 MSFT @ 1.0 Thu Oct 08 14:54:52 CDT 2009
    Iterator returned 3 items
    Map (executions): size
    4
    It says the size is 4 but "Iterator returned 3 items". What??
    I refreshed my swing app's CQC and it confirmed that there are only 3 now:
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        5       SIM     2009281 0
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        3       SIM     2009281 0
    2009-10-08 15:01:25.147/653.111 Oracle Coherence GE 3.5/459 <Info> (thread=AWT-EventQueue-0, member=4): insert MSFT     47V31209        7       SIM     2009281 0
    Apparently an item disappeared from the cache somehow, right? My app with the CQC updating the JTable didn't get any entryDeleted events.
    Thanks,
    Andrew

    All I do to reproduce it is start coherence, register a trigger with that offending line of code in it and add an Execution to the Executions cache. The Execution shows up, but is immediately gone as soon as that line of code in the trigger runs. No entryDeleted event comes over the CQC. I haven't tried doing a get on the missing object's key yet.
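    For anyone else hitting this, the way the <listener>/<class-factory-name> wiring in the cache config below maps onto code is roughly the following; the body of process() is a placeholder, not the actual "offending line":

        package oms.grid;

        import com.tangosol.util.MapTrigger;
        import com.tangosol.util.MapTriggerListener;

        // Matches <class-factory-name>oms.grid.ExecutionMapTrigger</class-factory-name>
        // and <method-name>createTriggerListener</method-name> in the cache config.
        public class ExecutionMapTrigger implements MapTrigger {

            // Factory method named in the config; wraps the trigger in a MapTriggerListener
            public static MapTriggerListener createTriggerListener() {
                return new MapTriggerListener(new ExecutionMapTrigger());
            }

            // Called before a change to the cache is committed
            public void process(MapTrigger.Entry entry) {
                // Placeholder: inspect entry.getValue(), normalize it via entry.setValue(...),
                // or veto the change by throwing a RuntimeException.
            }
        }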
    This starts the application:
    setlocal
    set JAVA_HOME=c:\jdk
    set memory=256m
    set java_opts=%java_opts% -Xms%memory%
    set java_opts=%java_opts% -Xmx%memory%
    set java_opts=%java_opts% -server
    set java_opts=%java_opts% -Dtangosol.coherence.distributed.localstorage=false
    set java_opts=%java_opts% -Dtangosol.coherence.member=%username%
    set PATH=%JAVA_HOME%\bin
    set CP=.
    set CP=%CP%;%JAVA_HOME%\lib
    set CP=%CP%;S:\java\javaclasses\log4j-1.2.8.jar
    set CP=%CP%;S:\java\oms2\lib\oms2.jar
    set CP=%CP%;S:\java\mju\classes
    set CP=%CP%;s:\java\javaclasses\mysql-connector-java.jar
    set CP=%CP%;s:\java\coup\lib\coup.jar
    set CP=%CP%;s:\java\execution_viewer\lib\execution_viewer.jar
    set CP=%CP%;S:\java\stats\classes\
    set CP=%CP%;S:\java\javaclasses\quoteclient.jar
    set CP=%CP%;S:\java\javaclasses\commons-collections-3.2.1.jar
    set CP=%CP%;S:\java\javaclasses\commons-configuration-1.6.jar
    set CP=%CP%;S:\java\javaclasses\commons-lang-2.4.jar
    set CP=%CP%;S:\java\javaclasses\commons-logging-1.1.1.jar
    rem --- BORLAND LAYOUTS ---
    set CP=%CP%;s:\java\javaclasses\jbcl.jar
    rem --- BORLAND LAYOUTS ---
    rem --- COHERENCE ----
    set CP=%CP%;C:\coherence\lib\tangosol.jar
    set CP=%CP%;C:\coherence\lib\coherence.jar
    set CP=%CP%;C:\coherence\lib\coherence-messagingpattern-2.2.0.jar
    set CP=%CP%;C:\coherence\lib\coherence-work.jar
    rem --- COHERENCE ----
    IF EXIST %JAVA_HOME%\bin\java_for_execution_viewer.exe GOTO OK
    copy %JAVA_HOME%\bin\java.exe %JAVA_HOME%\bin\java_for_execution_viewer.exe
    :OK
    %JAVA_HOME%\bin\java_for_execution_viewer.exe -classpath %CP% %java_opts% execution_viewer.ExecutionViewer
    endlocal
    This starts the Coherence node:
    set java_home=c:\jdk1.6.0_14_64bit
    :config
    @rem specify the Coherence installation directory
    set coherence_home=%~dp0\..
    @rem specify the JVM heap size
    set memory=512m
    :start
    if not exist "%coherence_home%\lib\coherence.jar" goto instructions
    :launch
    set java_opts=%java_opts% -Xms%memory%
    set java_opts=%java_opts% -Xmx%memory%
    set java_opts=%java_opts% -Dtangosol.coherence.management=all
    set java_opts=%java_opts% -Dtangosol.coherence.management.remote=true
    set java_opts=%java_opts% -Dtangosol.coherence.localhost=%my_ip%
    set java_opts=%java_opts% -Djava.net.preferIPv4Stack=true
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote.authenticate=false
    set java_opts=%java_opts% -Dcom.sun.management.jmxremote.ssl=false
    set java_opts=%java_opts% -Dtangosol.coherence.cacheconfig=c:/coherence/cache-config-dev.xml
    set java_opts=%java_opts% -Dtangosol.pof.enabled=false
    set CP=%CP%;"%coherence_home%\lib\coherence.jar"
    set CP=%CP%;S:\java\javaclasses\log4j-1.2.8.jar
    set CP=%CP%;s:\java\coup\lib\coup.jar
    set CP=%CP%;S:\java\oms2\lib\oms2.jar
    set CP=%CP%;S:\java\stats\classes\
    echo %java_opts%
    REM "%java_exec%" -Dcom.sun.management.jmxremote.port=9991 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -server -showversion "%java_opts%" -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %1
    copy %java_home%\bin\java.exe %java_home%\bin\java_for_coherence.exe
    echo ****
    %java_home%/bin/java_for_coherence.exe -server -showversion %java_opts% -cp %CP% com.tangosol.net.DefaultCacheServer %1
    echo ****
    goto exit
    :instructions
    echo Usage:
    echo   ^<coherence_home^>\bin\cache-server.cmd
    goto exit
    :exit
    This is c:/coherence/cache-config-dev.xml:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
            <!-- ***********  SCHEME MAPPINGS  ***********  -->
            <caching-scheme-mapping>
                    <cache-mapping>
                            <cache-name>executions</cache-name>
                            <scheme-name>executions-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>stats.*</cache-name>
                            <scheme-name>stats-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>positions</cache-name>
                            <scheme-name>positions-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>oms</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>orders</cache-name>
                            <scheme-name>orders-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>coup.*</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
                    <cache-mapping>
                            <cache-name>legacyExecs</cache-name>
                            <scheme-name>default-scheme</scheme-name>
                    </cache-mapping>
            </caching-scheme-mapping>
            <!-- ******************************** -->
            <caching-schemes>
            <!-- <distributed-scheme> -->
                      <optimistic-scheme>
                            <scheme-name>stats-scheme</scheme-name>
                            <!-- <service-name>ReplicatedCache.Optimistic</service-name> -->
                            <service-name>ReplicatedCache.Optimistic</service-name>
                            <backing-map-scheme>
                                     <local-scheme/>
                                     <!-- <external-scheme>  -->
                                     <!-- <paged-external-scheme>  -->
                                     <!-- <overflow-scheme>  -->
                                     <!-- <class-scheme>  -->
                            </backing-map-scheme>
                            <autostart>true</autostart>
                    </optimistic-scheme>
            <!-- </distributed-scheme> -->
                    <distributed-scheme>
                            <scheme-name>executions-scheme</scheme-name>
                            <service-name>DistributedCache</service-name>
                            <!--
                            <serializer>
                            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                            </serializer>
                            -->
                            <!-- <listener>
                            <class-scheme>
                            <class-factory-name>oms.grid.ExecutionMapTrigger</class-factory-name>
                            <method-name>createTriggerListener</method-name>
                            </class-scheme>
                    </listener> -->
                    <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                    <scheme-name>ExecutionDatabaseScheme</scheme-name>
                                    <internal-cache-scheme>
                                            <local-scheme>
                                                    <!-- Any Memory Scheme Name Could Go Here, Right? -->
                                                    <scheme-name>SomeScheme1</scheme-name>
                                            </local-scheme>
                                    </internal-cache-scheme>
                                    <cachestore-scheme>
                                            <class-scheme>
                                                    <class-name>oms.grid.ExecutionCacheStore</class-name>
                                                    <class-factory-name>oms.grid.ExecutionCacheStore</class-factory-name>
                                                    <init-params>
                                                            <init-param>
                                                                    <param-name>url</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>jdbc:mysql://localhost:6033/oms2?autoReconnect=true</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>username</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>password</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                    </init-params>
                                            </class-scheme>
                                    </cachestore-scheme>
                                    <write-delay>30s</write-delay>
                                    <write-batch-factor>0.5</write-batch-factor>
                            </read-write-backing-map-scheme>
                            <!--
                            <local-scheme>
                            <scheme-ref>example-binary-backing-map</scheme-ref>
                            </local-scheme>
                            -->
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>positions-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                    <scheme-name>PositionDatabaseScheme</scheme-name>
                                    <internal-cache-scheme>
                                            <local-scheme>
                                                    <!-- Any Memory Scheme Name Could Go Here, Right? -->
                                                    <scheme-name>SomeScheme2</scheme-name>
                                            </local-scheme>
                                    </internal-cache-scheme>
                                    <cachestore-scheme>
                                            <class-scheme>
                                                    <class-name>oms.grid.PositionCacheStore</class-name>
                                                    <class-factory-name>oms.grid.PositionCacheStore</class-factory-name>
                                                    <!-- <method-name>PositionCacheStoreFactory</method-name> -->
                                                    <init-params>
                                                            <init-param>
                                                                    <param-name>url</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>jdbc:mysql://localhost:6033/oms2?autoReconnect=true</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>username</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                            <init-param>
                                                                    <param-name>password</param-name>
                                                                    <param-type>String</param-type>
                                                                    <param-value>xxx</param-value>
                                                            </init-param>
                                                    </init-params>
                                            </class-scheme>
                                    </cachestore-scheme>
                                    <write-delay>30s</write-delay>
                                    <write-batch-factor>0.5</write-batch-factor>
                            </read-write-backing-map-scheme>
                            <!--
                            <local-scheme>
                            <scheme-ref>example-binary-backing-map</scheme-ref>
                            </local-scheme>
                            -->
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>orders-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <backing-map-scheme>
                            <local-scheme/>
                    </backing-map-scheme>
                    <listener>
                            <class-scheme>
                                    <class-factory-name>oms.grid.OrderAddTrigger</class-factory-name>
                                    <method-name>createTriggerListener</method-name>
                            </class-scheme>
                    </listener>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <autostart>true</autostart>
            </distributed-scheme>
            <distributed-scheme>
                    <scheme-name>default-scheme</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <!--
                    <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    -->
                    <backing-map-scheme>
                            <local-scheme/>
                    </backing-map-scheme>
                    <autostart>true</autostart>
            </distributed-scheme>
    </caching-schemes>
    </cache-config>
    this is coherence-cache-config.xml from the coherence.jar.
    <?xml version="1.0"?>
    <!--
    Note: This XML document is an example Coherence Cache Configuration deployment
    descriptor that should be customized (or replaced) for your particular caching
    requirements. The cache mappings and schemes declared in this descriptor are
    strictly for demonstration purposes and are not required.
    For detailed information on each of the elements that can be used in this
    descriptor please see the Coherence Cache Configuration deployment descriptor
    guide included in the Coherence distribution or the "Cache Configuration
    Elements" page on the Coherence Wiki (http://wiki.tangosol.com).
    -->
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>dist-*</cache-name>
          <scheme-name>example-distributed</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>8MB</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>near-*</cache-name>
          <scheme-name>example-near</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>8MB</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>repl-*</cache-name>
          <scheme-name>example-replicated</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>opt-*</cache-name>
          <scheme-name>example-optimistic</scheme-name>
          <init-params>
            <init-param>
              <param-name>back-size-limit</param-name>
              <param-value>5000</param-value>
            </init-param>
          </init-params>
        </cache-mapping>
        <cache-mapping>
          <cache-name>local-*</cache-name>
          <scheme-name>example-object-backing-map</scheme-name>
        </cache-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>example-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!--
        Distributed caching scheme.
        -->
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <!-- To use POF serialization for this partitioned service,
               uncomment the following section -->
          <!--
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          </serializer>
          -->
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!--
        Near caching (two-tier) scheme with size limited local cache
        in the front-tier and a distributed cache in the back-tier.
        -->
        <near-scheme>
          <scheme-name>example-near</scheme-name>
          <front-scheme>
            <local-scheme>
              <eviction-policy>HYBRID</eviction-policy>
              <high-units>100</high-units>
              <expiry-delay>1m</expiry-delay>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <distributed-scheme>
              <scheme-ref>example-distributed</scheme-ref>
            </distributed-scheme>
          </back-scheme>
          <invalidation-strategy>present</invalidation-strategy>
          <autostart>true</autostart>
        </near-scheme>
        <!--
        Replicated caching scheme.
        -->
        <replicated-scheme>
          <scheme-name>example-replicated</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>unlimited-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </replicated-scheme>
        <!--
        Optimistic caching scheme.
        -->
        <optimistic-scheme>
          <scheme-name>example-optimistic</scheme-name>
          <service-name>OptimisticCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-ref>example-object-backing-map</scheme-ref>
            </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </optimistic-scheme>
        <!--
         A scheme used by backing maps that may store data in object format and
         employ size limitation and/or expiry eviction policies.
        -->
        <local-scheme>
          <scheme-name>example-object-backing-map</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{back-size-limit 0}</high-units>
    <!--      <expiry-delay>{back-expiry 1h}</expiry-delay> -->
          <flush-delay>1m</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
        <!--
         A scheme used by backing maps that store data in internal (binary) format
         and employ size limitation and/or expiry eviction policies.
        -->
        <local-scheme>
          <scheme-name>example-binary-backing-map</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{back-size-limit 0}</high-units>
          <unit-calculator>BINARY</unit-calculator>
    <!--      <expiry-delay>{back-expiry 1h}</expiry-delay> -->
          <flush-delay>1m</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
        <!--
        Backing map scheme definition used by all the caches that do
        not require any eviction policies
        -->
        <local-scheme>
          <scheme-name>unlimited-backing-map</scheme-name>
        </local-scheme>
       <!--
        ReadWriteBackingMap caching scheme.
        -->
        <read-write-backing-map-scheme>
          <scheme-name>example-read-write</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme></cachestore-scheme>
          <read-only>true</read-only>
          <write-delay>0s</write-delay>
        </read-write-backing-map-scheme>
        <!--
        Overflow caching scheme with example eviction local cache
        in the front-tier and the example LH-based cache in the back-tier.
        -->
        <overflow-scheme>
          <scheme-name>example-overflow</scheme-name>
          <front-scheme>
            <local-scheme>
              <scheme-ref>example-binary-backing-map</scheme-ref>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <external-scheme>
              <scheme-ref>example-bdb</scheme-ref>
            </external-scheme>
          </back-scheme>
        </overflow-scheme>
        <!--
        External caching scheme using Berkley DB.
        -->
        <external-scheme>
          <scheme-name>example-bdb</scheme-name>
          <bdb-store-manager>
            <directory></directory>
          </bdb-store-manager>
          <high-units>0</high-units>
        </external-scheme>
        <!--
        External caching scheme using memory-mapped files.
        -->
        <external-scheme>
          <scheme-name>example-nio</scheme-name>
          <nio-file-manager>
            <initial-size>8MB</initial-size>
            <maximum-size>512MB</maximum-size>
            <directory></directory>
          </nio-file-manager>
          <high-units>0</high-units>
        </external-scheme>
        <!--
        Invocation Service scheme.
        -->
        <invocation-scheme>
          <scheme-name>example-invocation</scheme-name>
          <service-name>InvocationService</service-name>
          <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
        </invocation-scheme>
        <!--
        Proxy Service scheme that allows remote clients to connect to the
        cluster over TCP/IP.
        -->
        <proxy-scheme>
          <scheme-name>example-proxy</scheme-name>
          <service-name>TcpProxyService</service-name>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="tangosol.coherence.extend.address">localhost</address>
                <port system-property="tangosol.coherence.extend.port">9099</port>
              </local-address>
            </tcp-acceptor>
          </acceptor-config>
          <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    Thanks,
    Andrew
