How do I combine the Coherence 3.5 partitioned backing map with overflow?

I would like to set up a near cache where the back cache uses an overflow map that has a partitioned backing map as its front and a file-based (or Berkeley DB) back. I would like the storage for both primary and backup to use the same configuration. I tried the following cache config (I am not even sure it says anything about how the backup storage should be configured, except that it should be off-heap):
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>near-small</cache-name>
            <scheme-name>near-schema</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <near-scheme>
            <scheme-name>near-schema</scheme-name>
            <front-scheme>
                <local-scheme>
                    <eviction-policy>HYBRID</eviction-policy>
                    <high-units>10000</high-units>
                </local-scheme>
            </front-scheme>
            <back-scheme>
                <distributed-scheme>
                    <scheme-name>near-distributed-scheme</scheme-name>
                    <service-name>PartitionedOffHeap</service-name>
                    <backup-count>1</backup-count>
                    <thread-count>4</thread-count>
                    <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    </serializer>
                    <backing-map-scheme>
                        <overflow-scheme>
                            <scheme-name>OverflowScheme</scheme-name>
                            <front-scheme>
                                <external-scheme>
                                    <nio-memory-manager/>
                                    <unit-calculator>BINARY</unit-calculator>
                                    <high-units>256</high-units>
                                    <unit-factor>1048576</unit-factor>
                                </external-scheme>
                            </front-scheme>
                            <back-scheme>
                                <external-scheme>
                                    <scheme-name>DiskScheme</scheme-name>
                                    <lh-file-manager>
                                        <directory>./</directory>
                                    </lh-file-manager>
                                </external-scheme>
                            </back-scheme>
                        </overflow-scheme>
                        <partitioned>true</partitioned>
                    </backing-map-scheme>
                    <backup-storage>
                        <type>off-heap</type>
                    </backup-storage>
                    <autostart>true</autostart>
                </distributed-scheme>
            </back-scheme>
            <invalidation-strategy>present</invalidation-strategy>
            <autostart>true</autostart>
        </near-scheme>
        <!--
        Invocation Service scheme.
        -->
        <invocation-scheme>
            <scheme-name>example-invocation</scheme-name>
            <service-name>InvocationService</service-name>
            <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
        </invocation-scheme>
    </caching-schemes>
</cache-config>
This all goes well when I start the cache node(s), but when I start an application that tries to use the cache I get the error message:
2009-04-24 08:20:24.925/17.877 Oracle Coherence GE 3.5/453 (Pre-release) <Error> (thread=DistributedCache:PartitionedOffHeap, member=1): java.lang.IllegalStateException: Partition backing map com.tangosol.net.cache.OverflowMap does not implement ConfigurableCacheMap
     at com.tangosol.net.partition.ObservableSplittingBackingCache.createPartition(ObservableSplittingBackingCache.java:100)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.initializePartitions(DistributedCache.CDB:10)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:63)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
How should I change my cache config to make this work?
Best Regards
Magnus

Magnus,
The optimizations related to efficiently supporting overflow-style caching are not included in Coherence 3.5. I created COH-2338 and COH-2339 to track the progress of the related issues.
There are four different implementations of the PartitionAwareBackingMap for Coherence 3.5:
* PartitionSplittingBackingMap is the simplest implementation; it partitions data across a number of backing maps but is not observable.
* ObservableSplittingBackingMap is the observable implementation; it extends WrapperObservableMap and delegates to (wraps) a PartitionSplittingBackingMap.
* ObservableSplittingBackingCache is an extension to the ObservableSplittingBackingMap that knows how to manage ConfigurableCacheMap instances as the underlying per-partition backing maps; in other words, it can spread out and coalesce a configured amount of memory (etc.) across all the actual backing maps (a sketch of such a configuration follows this list).
* ReadWriteSplittingBackingMap is an extension of the ReadWriteBackingMap that is partition-aware.
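For contrast, the kind of partitioned backing map that does work in 3.5 is one whose per-partition maps implement ConfigurableCacheMap, e.g. a plain external (NIO) scheme with size limits. A minimal sketch, adapted from the front-scheme in your config (the sizing values are placeholders, not a recommendation):
<backing-map-scheme>
    <external-scheme>
        <!-- a size-limited NIO-backed cache; each per-partition instance
             implements ConfigurableCacheMap, which is what
             ObservableSplittingBackingCache requires -->
        <nio-memory-manager/>
        <unit-calculator>BINARY</unit-calculator>
        <high-units>256</high-units>
        <unit-factor>1048576</unit-factor>
    </external-scheme>
    <partitioned>true</partitioned>
</backing-map-scheme>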
The DefaultConfigurableCacheFactory currently uses only the ObservableSplittingBackingCache and the ReadWriteSplittingBackingMap; COH-2338 tracks the enhancement request to support the other two implementations as well. Additionally, optimizations to load balancing (where overflow caching tends to get bogged down by many small I/O operations) will be important; those are tracked by COH-2339.
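In the meantime, one possible workaround (an untested sketch based on your configuration) is to keep the overflow scheme but drop the <partitioned>true</partitioned> element, so the overflow map is used as a conventional backing map and is never asked to act as a per-partition ConfigurableCacheMap:
<backing-map-scheme>
    <overflow-scheme>
        <scheme-name>OverflowScheme</scheme-name>
        <front-scheme>
            <external-scheme>
                <nio-memory-manager/>
                <unit-calculator>BINARY</unit-calculator>
                <high-units>256</high-units>
                <unit-factor>1048576</unit-factor>
            </external-scheme>
        </front-scheme>
        <back-scheme>
            <external-scheme>
                <scheme-name>DiskScheme</scheme-name>
                <lh-file-manager>
                    <directory>./</directory>
                </lh-file-manager>
            </external-scheme>
        </back-scheme>
    </overflow-scheme>
    <!-- note: no <partitioned> element here -->
</backing-map-scheme>
The trade-off is that you give up the per-partition optimizations (such as efficient partition transfer) that the partitioned backing map provides.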
Peace,
Cameron Purdy
Oracle Coherence
