Backing Map Scheme with storage disabled

I've been playing around with Cache Servers and Cache Clients. If I have a distributed cache, the coherence-cache-config.xml for the client still requires a backing map scheme entry. If the client doesn't physically store data, why does it need a backing map scheme?
I'm interested to know if I'm understanding the concept of the cache client correctly.
Is Coherence just expecting to read in the XML but then ignoring it because I've set local storage to false, meaning it doesn't actually matter what is in the backing map scheme?
Cheers
Mike

Hi Mike,
You are right - storage-disabled nodes do not instantiate a backing map. Coherence reads the XML during initial configuration validation, but on cache clients the 'backing-map-scheme' element is not used.
In general it is cleaner, less error prone and easier to maintain a single configuration file and let Coherence worry about what to instantiate on different nodes.
Regards,
Dimitri
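
For example, a minimal sketch of this pattern (assuming the stock DefaultCacheServer and the standard local-storage system property; the client class name is hypothetical): both kinds of node share the same coherence-cache-config.xml and differ only in how the JVM is started.

java -Dtangosol.coherence.cacheconfig=coherence-cache-config.xml com.tangosol.net.DefaultCacheServer

java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=coherence-cache-config.xml com.example.MyCacheClient

The second JVM joins the same cluster and services but, being storage-disabled, never instantiates the backing map defined in the shared configuration.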

Similar Messages

  • Finding exception with the read-write-backing-map-scheme configuration.

    I am getting an exception with the <read-write-backing-map-scheme> configuration, which is set up against a simple database cache store implementation. The class SimpleCacheEventStoreImpl implements the CacheStore interface.
    Exception in thread "main" java.lang.UnsupportedOperationException: configureCache: read-write-backing-map-scheme
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:995)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:277)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:689)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:667)
         at Sample.SimpleEventStoreConsumer.main(SimpleEventStoreConsumer.java:10)
    The cache store is wired to the program SimpleEventStoreConsumer (where I have a put and a get operation) through the following cache configuration descriptor. On running SimpleEventStoreConsumer, the exception occurs when trying to get the named cache from the cache factory:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>Evt*</cache-name>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <read-write-backing-map-scheme>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
                   <internal-cache-scheme>
                        <local-scheme>
                             <scheme-ref>SampleMemoryScheme</scheme-ref>
                        </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.emc.srm.cachestore.SimpleCacheEventStoreImpl</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
              <local-scheme>
                   <scheme-name>SampleMemoryScheme</scheme-name>
              </local-scheme>
         </caching-schemes>
    </cache-config>

    You are missing <backing-map-scheme>: a read-write-backing-map-scheme cannot be used directly as a top-level cache scheme; it must be nested inside a distributed-scheme's <backing-map-scheme> element, like the following:
    <caching-schemes>
         <distributed-scheme>
              <scheme-name>distributed-scheme</scheme-name>
              <service-name>DistributedQueryCache</service-name>
              <backing-map-scheme>
                   <read-write-backing-map-scheme>
                        <scheme-ref>rw-bm</scheme-ref>
                   </read-write-backing-map-scheme>
              </backing-map-scheme>
              <autostart>true</autostart>
         </distributed-scheme>
         <read-write-backing-map-scheme>
              <scheme-name>rw-bm</scheme-name>
              <internal-cache-scheme>
                   <local-scheme/>
              </internal-cache-scheme>
         </read-write-backing-map-scheme>
    </caching-schemes>

  • Behaviour of read-write-backing-map-scheme in combination with local-scheme

    Hi,
    I have the following questions related to the distributed cache defined below:
    1. If I start putting an unlimited number of entries into this cache, will I run out of memory, or does the local-scheme have some default size limit?
    2. What is default eviction policy for the local-scheme?
    <distributed-scheme>
         <scheme-name>A</scheme-name>
         <service-name>simple_service</service-name>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <internal-cache-scheme>
                        <local-scheme/>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>SomeCacheStore</class-name>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
    </distributed-scheme>
    Best regards,
    Jarek

    Hi,
    The default value for expiry-delay is zero, which implies no expiry.
    http://wiki.tangosol.com/display/COH34UG/local-scheme
    Thanks,
    Tom
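
    For reference, a plain <local-scheme/> also has no high-units limit by default, so unbounded puts will eventually exhaust the heap unless a limit is configured. A minimal sketch of bounding the internal map of the same scheme (the size figure below is purely illustrative):

    <distributed-scheme>
         <scheme-name>A</scheme-name>
         <service-name>simple_service</service-name>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <internal-cache-scheme>
                        <local-scheme>
                             <eviction-policy>HYBRID</eviction-policy>
                             <high-units>100000</high-units>
                        </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>SomeCacheStore</class-name>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
    </distributed-scheme>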

  • Could you explain how the read-write-backing-map-scheme is configured in...

    Could you explain how the read-write-backing-map-scheme is configured in the following example?
    <backing-map-scheme>
        <read-write-backing-map-scheme>
         <internal-cache-scheme>
          <class-scheme>
           <class-name>com.tangosol.util.ObservableHashMap</class-name>
          </class-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
          <class-scheme>
           <class-name>coherence.DBCacheStore</class-name>
           <init-params>
            <init-param>
             <param-type>java.lang.String</param-type>
             <param-value>CATALOG</param-value>
            </init-param>
           </init-params>
          </class-scheme>
         </cachestore-scheme>
         <read-only>false</read-only>
         <write-delay-seconds>0</write-delay-seconds>
        </read-write-backing-map-scheme>
    </backing-map-scheme>
    ...Edited by: qkc on 30-Nov-2009 10:48

    Thank you very much for the reply.
    In the following example, the cachestore element is not specified in the <read-write-backing-map-scheme> section. Instead, a class-name ControllerBackingMap is designated. What is the result?
    If ControllerBackingMap is a persistence entity, is the result same with that of cachestore-scheme?
    <distributed-scheme>
                <scheme-name>with-rw-bm</scheme-name>
                <service-name>unlimited-partitioned</service-name>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                        <scheme-ref>base-rw-bm</scheme-ref>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <read-write-backing-map-scheme>
                <scheme-name>base-rw-bm</scheme-name>
                <class-name>ControllerBackingMap</class-name>
                <internal-cache-scheme>
                    <local-scheme/>
                </internal-cache-scheme>
            </read-write-backing-map-scheme>

  • Why can't a backing-map-scheme be specified in caching-schemes?

    Most other scheme types, except backing-map-scheme, can be specified in the caching-schemes section of the cache configuration XML file and then be reused in other scheme definitions. What is the motivation for excluding backing-map-scheme?
    /Magnus

    Hi Magnus,
    you can specify an "abstract" service-type scheme (e.g. distributed-scheme) containing the backing map scheme instead of the backing map scheme itself.
    I know it is not as flexible as having a backing map scheme separately, but it is almost as good.
    Best regards,
    Robert
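
    As an illustration of Robert's suggestion (scheme and service names below are made up), the reusable definition lives in a "parent" distributed-scheme and concrete schemes pull it in via <scheme-ref>:

    <caching-schemes>
         <!-- "abstract" scheme that exists only to carry a reusable backing map definition -->
         <distributed-scheme>
              <scheme-name>base-distributed</scheme-name>
              <backing-map-scheme>
                   <local-scheme>
                        <high-units>10000</high-units>
                   </local-scheme>
              </backing-map-scheme>
         </distributed-scheme>
         <!-- concrete scheme inherits the backing map through scheme-ref -->
         <distributed-scheme>
              <scheme-name>my-distributed</scheme-name>
              <scheme-ref>base-distributed</scheme-ref>
              <service-name>MyDistributedService</service-name>
              <autostart>true</autostart>
         </distributed-scheme>
    </caching-schemes>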

  • Distributed cache with a backing-map as another distributed cache

    Hi All,
    Is it possible to create a distributed cache whose backing-map-scheme is another distributed cache with local-storage disabled?
    Please let me know how to configure this type of cache.
    regards
    S

    Hi Cameron,
    I am trying to create a distributed-scheme with a backing-map scheme. Is it possible to configure another distributed cache as the backing map scheme for a cache?
    <distributed-scheme>
         <scheme-name>MyDistCache-2</scheme-name>
         <service-name>MyDistCacheService-2</service-name>
         <backing-map-scheme>
              <external-scheme>
                   <scheme-name>MyDistCache-3</scheme-name>
              </external-scheme>
         </backing-map-scheme>
    </distributed-scheme>
    <distributed-scheme>
         <scheme-name>MyDistCache-3</scheme-name>
         <service-name>MyDistBackCacheService-3</service-name>
         <local-storage>false</local-storage>
    </distributed-scheme>
    Please correct my understanding.
    Regards
    Srini

  • How do I combine the Coherence 3.5 partitioned backing map with overflow?

    I would like to set up a near cache where the back cache uses an overflow map, with a partitioned backing map as the front and a file-based (or Berkeley DB based) back. I would like both primary and backup storage to use the same configuration. I tried the following cache config (I am not even sure it says anything about how the backup storage should be configured, except that I say it should be off-heap):
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>near-small</cache-name>
                <scheme-name>near-schema</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <near-scheme>
                <scheme-name>near-schema</scheme-name>
                <front-scheme>
                    <local-scheme>
                        <eviction-policy>HYBRID</eviction-policy>
                        <high-units>10000</high-units>
                    </local-scheme>
                </front-scheme>
                <back-scheme>
                    <distributed-scheme>
                        <scheme-name>near-distributed-scheme</scheme-name>
                        <service-name>PartitionedOffHeap</service-name>
                        <backup-count>1</backup-count>
                        <thread-count>4</thread-count>
                        <serializer>
                            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        </serializer>
                        <backing-map-scheme>
                            <overflow-scheme>
                                <scheme-name>OverflowScheme</scheme-name>
                                <front-scheme>
                                    <external-scheme>
                                        <nio-memory-manager/>
                                        <unit-calculator>BINARY</unit-calculator>
                                        <high-units>256</high-units>
                                        <unit-factor>1048576</unit-factor>
                                    </external-scheme>
                                </front-scheme>
                                <back-scheme>
                                    <external-scheme>
                                        <scheme-name>DiskScheme</scheme-name>
                                        <lh-file-manager>
                                            <directory>./</directory>
                                        </lh-file-manager>
                                    </external-scheme>
                                </back-scheme>
                            </overflow-scheme>
                            <partitioned>true</partitioned>
                        </backing-map-scheme>
                        <backup-storage>
                            <type>off-heap</type>
                        </backup-storage>
                        <autostart>true</autostart>
                    </distributed-scheme>
                </back-scheme>
                <invalidation-strategy>present</invalidation-strategy>
                <autostart>true</autostart>
            </near-scheme>
            <!--
            Invocation Service scheme.
            -->
            <invocation-scheme>
                <scheme-name>example-invocation</scheme-name>
                <service-name>InvocationService</service-name>
                <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
            </invocation-scheme>
        </caching-schemes>
    </cache-config>
    This all goes well when I start the cache node(s), but when I start an application that tries to use the cache I get the error message:
    2009-04-24 08:20:24.925/17.877 Oracle Coherence GE 3.5/453 (Pre-release) <Error> (thread=DistributedCache:PartitionedOffHeap, member=1): java.lang.IllegalStateException: Partition backing map com.tangosol.net.cache.OverflowMap does not implement ConfigurableCacheMap
         at com.tangosol.net.partition.ObservableSplittingBackingCache.createPartition(ObservableSplittingBackingCache.java:100)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.initializePartitions(DistributedCache.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:63)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
    How should I change my cache config to make this work?
    Best Regards
    Magnus

    Magnus,
    The optimizations related to efficiently supporting overflow-style caching are not included in Coherence 3.5. I created COH-2338 and COH-2339 to track the progress of the related issues.
    There are four different implementations of the PartitionAwareBackingMap for Coherence 3.5:
    * PartitionSplittingBackingMap is the simplest implementation that simply partitions data across a number of backing maps; it is not observable.
    * ObservableSplittingBackingMap is the observable implementation; it extends WrapperObservableMap and delegates to (wraps) a PartitionSplittingBackingMap.
    * ObservableSplittingBackingCache is an extension to the ObservableSplittingBackingMap that knows how to manage ConfigurableCacheMap instances as the underlying per-partition backing maps; in other words, it can spread out and coalesce a configured amount of memory (etc.) across all the actual backing maps.
    * ReadWriteSplittingBackingMap is an extension of the ReadWriteBackingMap that is partition-aware.
    The DefaultConfigurableCacheFactory currently only uses the ObservableSplittingBackingCache and the ReadWriteSplittingBackingMap; COH-2338 relates to the request for improvement to add support for the other two implementations as well. Additionally, optimizations to load balancing (where overflow caching tends to get bogged down by many small I/O operations) will be important; those are tracked by COH-2339.
    Peace,
    Cameron Purdy
    Oracle Coherence
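
    (For reference only, since the thread above is about what is not yet supported: the kind of partitioned backing map that 3.5 does handle out of the box is a read-write-backing-map-scheme or a plain local-scheme marked as partitioned. The sketch below uses illustrative names and sizes and follows the same <partitioned> placement as the original config.)

    <backing-map-scheme>
         <read-write-backing-map-scheme>
              <internal-cache-scheme>
                   <local-scheme>
                        <unit-calculator>BINARY</unit-calculator>
                        <high-units>268435456</high-units>
                   </local-scheme>
              </internal-cache-scheme>
         </read-write-backing-map-scheme>
         <partitioned>true</partitioned>
    </backing-map-scheme>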

  • Shoudn't 'put with expiry' throw with read-write backing map?

    Good morning all,
    If I run this client code:
    cache.put(1, 1, CacheMap.EXPIRY_NEVER);
    I'd expect this entry never to expire. Yet with a read-write backing map it does - immediately, which led me to dig a bit more...
    According to the [java docs|http://download.oracle.com/otn_hosted_doc/coherence/330/com/tangosol/net/NamedCache.html#put%28java.lang.Object,%20java.lang.Object,%20long%29] support for this call is patchy:
    >
    Note: Though NamedCache interface extends CacheMap, not all implementations currently support this functionality.
    For example, if a cache is configured to be a replicated, optimistic or distributed cache then its backing map must be configured as a local cache. If a cache is configured to be a near cache then the front map must be configured as a local cache and the back map must support this feature as well, typically by being a distributed cache backed by a local cache (as above.)
    >
    OK, so the docs even say this won't work. But shouldn't it throw an unsupported op exception? Is this a bug or my mistake?
    rw-scheme config:
    <backing-map-scheme>
      <read-write-backing-map-scheme>
         <internal-cache-scheme>
            <local-scheme/>
         </internal-cache-scheme>
         <cachestore-scheme>
        </cachestore-scheme>
        <write-delay>1ms</write-delay>
      </read-write-backing-map-scheme>
    </backing-map-scheme>
    Edited by: BigAndy on 04-Dec-2012 04:28

    Quick update on this - I've raised an SR and Oracle have confirmed this is a bug and are looking into a fix.
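
    Until the fix lands, one possible workaround (an assumption on the editor's part, not something confirmed in this thread) is to configure expiry on the read-write backing map's internal local-scheme rather than relying on per-entry put(key, value, expiry); omitting <expiry-delay> entirely keeps the default of no expiry:

    <backing-map-scheme>
         <read-write-backing-map-scheme>
              <internal-cache-scheme>
                   <local-scheme>
                        <!-- scheme-level expiry instead of per-put expiry; leave out for "never expire" -->
                        <expiry-delay>1h</expiry-delay>
                   </local-scheme>
              </internal-cache-scheme>
         </read-write-backing-map-scheme>
    </backing-map-scheme>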

  • How to make a node storage disabled for a particular cache?

    I have multiple caches that are distributed across the nodes in my application. Can I disable storage (localstorage=false) for a certain cache in a node.
    Intention is to make something like this:
    CacheA distributed between node1 and node2
    CacheB distributed between node1 and node3
    Thus no node would be a completely non-storage node here. Hence I would need to specify this in the coherence-config.xml. Is the following the right configuration for node 2?
    <distributed-scheme>
         <scheme-name>CacheB</scheme-name>
         <service-name>DistributedCache</service-name>
         <local-storage>false</local-storage>
         <backing-map-scheme>
              <local-scheme>
                   <scheme-ref>backingSchemeB</scheme-ref>
              </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
         <scheme-name>backingSchemeB</scheme-name>
    </local-scheme>
    What should be the backing scheme, as my local storage is false for cacheB?

    Hi Mahesh,
    you can control the storage-enablement of distributed caches on a per-service basis.
    In your case, you have to put cache A and cache B into different services (serviceA and serviceB for the example) and run service A as storage-enabled on nodes 1 and 2, and service B as storage-enabled on nodes 1 and 3.
    For more information, look at my post from two years ago:
    Re: Partitioned cache - where to put what config files?
    Best regards,
    Robert
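
    A sketch of what Robert describes (all scheme, service and property names below are made up for illustration): give each cache its own service and expose each service's <local-storage> setting through its own system property, then override that property per JVM.

    <distributed-scheme>
         <scheme-name>SchemeA</scheme-name>
         <service-name>ServiceA</service-name>
         <local-storage system-property="example.serviceA.storage">true</local-storage>
         <backing-map-scheme>
              <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
         <scheme-name>SchemeB</scheme-name>
         <service-name>ServiceB</service-name>
         <local-storage system-property="example.serviceB.storage">true</local-storage>
         <backing-map-scheme>
              <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>

    Node 2 would then be started with -Dexample.serviceB.storage=false (storing data only for ServiceA), and node 3 with -Dexample.serviceA.storage=false.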

  • Replicated cache scheme with cache store

    Hi All,
    I have the following configuration for the UserCacheDB in the coherence-cache-config.xml.
    I have a cache store class which inserts data into the database, and this data will be loaded from the database on application start-up.
    I need to make this cache replicated so that the other application will have this data. Can anyone please guide me on what my configuration should be to make this cache replicated with a cache store class?
    <distributed-scheme>
                   <scheme-name>UserCacheDB</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <internal-cache-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.util.ObservableHashMap</class-name>
                                  </class-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>test.UserCacheStore</class-name>
                                       <init-params>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>PC_USER</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                             <read-only>false</read-only>
                             <!--
                                  To make this a write-through cache just change the value below to
                                  0 (zero)
                             -->
                             <write-delay-seconds>0</write-delay-seconds>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <listener />
                   <autostart>true</autostart>
              </distributed-scheme>
    Thanks in Advance.

    Hi,
    You should be able to use a cachestore with a local-scheme.
          <replicated-scheme>
            <scheme-name>UserCacheDB</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>coherence-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
            <backing-map-scheme>
              <local-scheme>
                <scheme-name>UserCacheDBLocal</scheme-name>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>test.UserCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>PC_USER</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
              </local-scheme>
            </backing-map-scheme>
            <listener/>
            <autostart>true</autostart>
          </replicated-scheme>

  • Sample backing map configuration needed

    Hi,
    I want to persist a substantial amount of data into a Coherence cache. The number of entity objects (all POF-enabled) that need to go into the cache is approximately 4 million. I am trying to load the whole data set into my server, which has 4 Coherence nodes, each running with 2 GB of memory, but that is falling short. I have pasted my cache configuration below.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <defaults>
              <serializer>pof</serializer>
         </defaults>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>SALES-*</cache-name>
                   <scheme-name>default-partitioned</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <distributed-scheme>
                   <scheme-name>default-partitioned</scheme-name>
                   <service-name>DefaultPartitioned</service-name>
                   <backing-map-scheme>
                        <local-scheme />
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         <!-- Invocation Service scheme. -->
              <invocation-scheme>
                   <scheme-name>examples-invocation</scheme-name>
                   <service-name>InvocationService</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
                   <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
              </invocation-scheme>
         </caching-schemes>
    </cache-config>
    I also want to configure a backing map along with this configuration, but I am not sure how to size the backing map keeping my server memory limitations in mind. Can you please help me out with this? A sample backing map configuration will suffice.
    Thanks
    Surya
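
    For what it's worth, one way to cap memory use per storage node is to size the backing map in binary units. A minimal sketch (the numbers are illustrative, not a recommendation) that limits each node's primary data for this cache to roughly 512 MB, with HYBRID eviction once the limit is hit:

    <distributed-scheme>
         <scheme-name>default-partitioned</scheme-name>
         <service-name>DefaultPartitioned</service-name>
         <backing-map-scheme>
              <local-scheme>
                   <eviction-policy>HYBRID</eviction-policy>
                   <unit-calculator>BINARY</unit-calculator>
                   <high-units>512</high-units>
                   <!-- unit-factor of 1048576 means high-units is expressed in MB -->
                   <unit-factor>1048576</unit-factor>
              </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>

    Keep in mind that with one backup copy the total footprint is roughly double the primary data, so the usable capacity is about half of whatever heap can be spared for cache data across the cluster.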

    yzanagi wrote:
    > The instruction in the FAQ note is actually straightforward if you are familiar with the xpath notation.
    > If it's not working, can you provide how you set these parameters and how your SOAP envelope exactly looks?
    Quoted from the FAQ: "Can I extract some ID value from the SOAP envelope and use it as the message ID? You can use the com.sap.aii.axis.soap.MessageIDExtractor handler to extract some ID value from the SOAP envelope and use it as the message ID. This handler can extract an ID string value (i.e., the 36-character GUID string) from the SOAP envelope and set it to the request or response message ID property in the message context. The ID string must be contained within some XML element. See a configuration for extracting the ID from WS-Addressing's MessageID element."
    Hi,
    Thanks for your reply;
    I'm familiar with xpath however the FAQ does not mention the use of xpath anywhere. Please see above an extract of the FAQ regarding the MessageIDExtractor;
    The provided example of the module configuration looks like this:
    Module Key | Parameter Name | Parameter Value
    id | handler.type | java.com.sap.aii.axis.soap.MessageIDExtractor
    xireq | handler.type | java.com.sap.aii.axis.xi.XI30InboundHandler
    sap | module.pivot | true
    xires | handler.type | java.com.sap.aii.axis.xi.XI30InboundHandler
    When you observe this configuration you notice that there's only one entry for the MessageIDExtractor. How will the handler know which XML field to pick from the incoming SOAP envelope? I think the example is missing some additional parameters where you can provide the xpath query.
    Can you clarify?
    Many thanks,
    Roberto
    Edited by: Roberto Viana on Nov 5, 2010 9:00 AM
    Edited by: Roberto Viana on Nov 5, 2010 9:06 AM

  • Remote distributed scheme with custom serializer does not work

    HI,
    I have a remote scheme invoked through TcpExtend which is a distributed-scheme. If I attach a custom serializer, the first node in the cluster comes up without a problem, but the second one onwards does not. It gives the following exception:
    2010-11-01 18:17:27.302/21.532 Oracle Coherence EE 3.6.0.1 <Error> (thread=DistributedCache:extend-service, member=2): The service "extend-service" is configured to use serializer <our package>.JavaSerialisation@56278964, which appears to be different from the serializer used by Member(Id=1, Timestamp=2010-11-01 18:14:35.066, Address=192.168.113.56:8088, MachineId=6456, Location=site:oursite.corp,machine:mg-ldn-d0002,process:540, Role=Exe4jRuntimeWinLauncher).
    java.io.IOException: invalid type: 119
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$PartitionConfig.readObject(PartitionedService.CDB:25)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$MemberWelcome.read(PartitionedService.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:42)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    Stopping the extend-service service.
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
    2010-11-01 18:17:27.302/21.532 Oracle Coherence EE 3.6.0.1 <D5> (thread=DistributedCache:extend-service, member=2): Service extend-service left the cluster

    Config file contents follow...
    The server config:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>client-request-cache</cache-name>
                   <scheme-name>extend-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>localhost</address>
                                  <port>9098</port>
                             </local-address>
                        </tcp-acceptor>
                        <serializer>
                             <class-name>test.JavaSerialisation</class-name>
                        </serializer>
                   </acceptor-config>
              </proxy-scheme>
              <distributed-scheme>
                   <scheme-name>extend-scheme</scheme-name>
                   <service-name>extend-service</service-name>
                   <serializer>
                        <class-name>test.JavaSerialisation</class-name>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>false</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    The client config is as follows:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>client-request-cache</cache-name>
                   <scheme-name>client-extend-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <remote-cache-scheme>
                   <scheme-name>client-extend-scheme</scheme-name>
                   <service-name>ExtendTcpCacheService</service-name>
                   <initiator-config>
                        <tcp-initiator>
                             <remote-addresses>
                                  <socket-address>
                                       <address>localhost</address>
                                       <port>9099</port>
                                  </socket-address>
                             </remote-addresses>
                             <connect-timeout>5s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                             <!-- If server has not responded by 100s, then will time out -->
                             <request-timeout>100s</request-timeout>
                        </outgoing-message-handler>
                        <serializer>
                             <instance>
                                  <class-name>test.JavaSerialisation</class-name>
                             </instance>
                        </serializer>
                   </initiator-config>
              </remote-cache-scheme>
         </caching-schemes>
    </cache-config>
    Edited by: 807718 on 08-Nov-2010 10:14
    Edited by: 807718 on 08-Nov-2010 10:15

  • Storage disabled nodes and near-cache scheme

    This is probably a newbie question. I have a named cache with a near-cache scheme, with a local-scheme as the front tier. I can see how this will work in a cache-server node. But I have an application node which pushes a lot of data into the same named cache, and it is set to be storage-disabled.
    My understanding of a local cache scheme is that data is cached locally in the heap for faster access and the writes are delegated to the service for writing to backing map. If my application is storage disabled, is the local cache still used or is all data obtained from the cache-servers?

    Hello,
    Your understanding is correct. To answer your question, writes will always go through the cache servers. A put will also always go through the cache servers, but the near cache may or may not be populated at that point.
    hth,
    -Dave

  • Partition Map Schemes: HFS+ and FAT32 partitions with OSX and Windows

    OK so I know this question has been practically beaten to death, but I keep finding conflicting information. I am using a 2011 MacBook Pro, on which I will set up Windows through Boot Camp. I recently purchased a 750 GB WD external hard drive to use with Time Machine for a backup on my Mac. However, I also need to be able to use part of this for Windows files. SO.. I intend to use the HFS+ partition for the Mac (500GB) and create a FAT32 partition (250GB) to use for backing up Windows files (using it solely for computer modeling, and I need to be able to transfer/share files with Mac users who use Parallels as well as copying to PC desktops). My question is what to use as the partition map scheme. I have heard that when using these two partition types, a Master Boot Record is needed (so Windows can recognize the FAT32 partition) and also that a GUID partition map is required for use with Time Machine, meaning Windows would no longer be able to read the FAT32 partition. Is there a way to reconcile this? Either using Time Machine with an HFS+ partition that is set to MBR, or using FAT32 on Windows with a GUID partition map? Also, if I were to use Parallels (with a GUID setup) instead of Boot Camp, could that be the way to save the Windows files to the FAT32 partition and avoid problems with Time Machine not working with MBR? Thanks for any expertise, as I have heard both that the setups I have mentioned will work and that they will not work. Any experience with a similar situation?

    Wow. Thanks for the extremely quick responses. Just for a few points of clarification.. I'm a complete newb at backing up strategies.
    Steve, you would recommend not backing up files from my Mac OS X and files from Windows (also on my Mac) on the same drive, correct?
    I appreciate the strategy of using it only as a backup, that makes quite a bit of sense. However, if I want to only backup my OSX files, and also store (solely as backup copies) say, a number of computer models (Rhino, Revit, etc.) that were created in Windows programs (not needing to store the entire Windows disk), would it not make sense to store these on the same drive in a different partition, creating the need for two different partition formats? And if I were to do this, maybe I should use NTFS instead of FAT32 (and reformat to GUID since that seems to be a standard for Apple and Windows 7 recognizes it..?) to keep them completely separate since the computer model files cannot be opened unless running the Windows programs.
    How do you use your drive with HFS+ and NTFS if not for backups? I will not need to access the HFS+ backup files in Windows, nor need to access files from an NTFS partition in OSX, so that seems to simplify things in that, at least at the moment, I will not need any Paragon software.
    Currently my drive is partitioned as HFS+ and FAT32 as MBR, with the HFS+ partition set up with Time Machine. It appears to be successful, I see my files in Mac HD -> "users" and all my docs, desktop items, etc. are listed. Seems that there is in fact no limit on TM's use of MBR maps, or else it is way above 160GB.
    Third, are you using CarbonCopyClone in place of Time Machine or in addition to it? If in addition would it create the bit-wise clone on the same HFS+ partition as TM is backing up to? Or a separate drive? I'd like to only have one external that I am backing up to for simplicity's sake. I've never used TM before, so this is all new to me. Also, I suppose I have been missing the distinction between storing copies of files, and making a complete backup of a disk image... just now realizing the difference. Thanks so much.

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK
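
    A rough sketch of what is possible with a local-scheme (write-through only, per the above; the cache store class name below is just a placeholder for whatever CacheStore implementation you already have):

    <local-scheme>
         <scheme-name>local-write-through</scheme-name>
         <cachestore-scheme>
              <class-scheme>
                   <!-- hypothetical JPA-backed CacheStore; writes happen synchronously on put -->
                   <class-name>com.example.MyJpaCacheStore</class-name>
              </class-scheme>
         </cachestore-scheme>
    </local-scheme>

    Write-behind (write-delay) requires the read-write-backing-map-scheme, which is why the original setup only worked with a distributed cache.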
