Distributed Scheme vs Local Scheme - Execution Time

Hello,
     I am retrofitting an existing application to use Coherence. This application uses one of the JDK's implementations of the Map interface to store a large number of objects. I have modified the application to use Coherence's NamedCache.
     I am running a test case in a single node cluster (only one JVM). When I configure the cache to use a local scheme my test case executes in 105 milliseconds. When I configure the cache to use a distributed scheme my test case executes in 72000 milliseconds. I am surprised by this since there should not be any network traffic in a single node cluster.
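     For reference, the test is roughly equivalent to the sketch below (class, cache name and loop size are illustrative only, not my actual test case):
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     // Illustrative timing harness only; the real test case differs.
     public class CacheTimingTest {
         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("test-cache");
             long start = System.currentTimeMillis();
             for (int i = 0; i < 10000; i++) {
                 cache.put(Integer.valueOf(i), "value-" + i); // with a distributed scheme each put crosses into the cache service
                 cache.get(Integer.valueOf(i));
             }
             System.out.println("elapsed ms: " + (System.currentTimeMillis() - start));
             CacheFactory.shutdown();
         }
     }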
     Would you please help me to understand this? I have included the Coherence cache configuration below.
     <cache-config>
          <caching-scheme-mapping>
               <cache-mapping>
                    <cache-name>*</cache-name>
                    <!--scheme-name>LocalCache</scheme-name-->
                    <scheme-name>DistributedCache</scheme-name>
               </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
               <distributed-scheme>
                    <scheme-name>DistributedCache</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-ref>LocalCache</scheme-ref>
                         </local-scheme>
                    </backing-map-scheme>
                    <backup-count>0</backup-count>
                    <autostart>true</autostart>
               </distributed-scheme>
               <local-scheme>
                    <scheme-name>LocalCache</scheme-name>
               </local-scheme>
          </caching-schemes>
     </cache-config>
     Thank you,
     R. Vaessen

Hi Robert,
     According to this post from Dimitri in the thread about parallel aggregators, there is a change request (COH-1013) about it.
     However, if you are concerned only about network transmission latency, be aware that this will probably not improve it.
     The cache entry is serialized on the client that puts the data into the cache. This serialized form travels to the primary cache server node (which forwards it to the backups), and the primary node stores it in that form.
     When someone requests that entry, the same serialized form is sent back to the requestor, so getting and putting objects does not involve any serialization or deserialization on the cache servers.
     If there are indexes defined on the cache, then the entry is also deserialized when it is put into the backing map.
     If filters, aggregators, or entry processors are executed against entries for which no index can be leveraged, then the serialized form of each candidate entry is deserialized.
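     As a rough illustration (cache and method names below are made up), an index lets such a query be answered from the index contents instead of deserializing every entry on the storage nodes:
     import java.util.Set;
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.extractor.ReflectionExtractor;
     import com.tangosol.util.filter.EqualsFilter;

     // Sketch: with an index on the queried attribute, the filter is evaluated
     // against the index and candidate entries are not deserialized per query.
     public class IndexedQueryExample {
         public static void main(String[] args) {
             NamedCache trades = CacheFactory.getCache("trades");
             trades.addIndex(new ReflectionExtractor("getSymbol"), /*ordered*/ false, /*comparator*/ null);
             Set entries = trades.entrySet(new EqualsFilter("getSymbol", "ORCL"));
             System.out.println("matches: " + entries.size());
             CacheFactory.shutdown();
         }
     }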
     If entries were stored in object form and serialized every time they are sent to a client, you would see a performance decrease, and probably a negative effect on GC as well, compared to the current implementation, if you only put and get values.
     Also, entries in object form consume more memory than they do in serialized form.
     Therefore, the only advantage of storing entries in object form shows up when you execute queries, aggregators, or entry processors over non-indexed attributes; in that case you do not have to pay the cost of deserializing each entry when the filter's evaluate() method runs or when the value is read from the entry.
     If you want, you can achieve this effect even now, but at a very large memory cost: create a method which returns the object itself, index that method with a reflection extractor, and use an index-aware filter together with the QueryMap.extract method to access the object in the index instead of the entry itself. In practice you should avoid this, if possible, due to the additional memory cost.
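     A minimal sketch of that trick follows; all names are illustrative, and the index-aware filter that would read the objects from the index is omitted:
     import java.io.Serializable;
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.extractor.ReflectionExtractor;

     // Indexing a method that returns the object itself keeps a deserialized
     // reference to every value inside the index, at the memory cost noted above.
     public class SelfIndexedValue implements Serializable {
         private final String symbol;

         public SelfIndexedValue(String symbol) { this.symbol = symbol; }

         public String getSymbol() { return symbol; }

         // the "method which returns the object itself"
         public SelfIndexedValue getSelf() { return this; }

         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("example");
             cache.addIndex(new ReflectionExtractor("getSelf"), false, null);
             cache.put("key-1", new SelfIndexedValue("ORCL"));
             // an index-aware filter could now read the objects from the MapIndex
             // instead of deserializing each entry (filter implementation omitted).
             CacheFactory.shutdown();
         }
     }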
     Best regards,
     Robert

Similar Messages

  • Local storage enabled per distributed-scheme

    As the cache configuration XML allows you to specify local-storage true or false for individual distributed-schemes, it would seem possible to have a situation where you have a cluster of nodes in which each node is storage-enabled for a different sub-set of caches/schemes.
    I have never done this myself and have always storage enabled/disabled at the JVM level. Would having nodes with different caches/schemes storage-enabled be a valid thing to do? I am not quite sure why I might want to do this, but I was asked the question by one of our dev teams (I don't think they quite know why they would want to do it either).
    I suppose what I really want to know is: has anyone done this before, and if we did it, is something likely to break?
    Cheers,
    JK.

    Hi Jonathan,
    Yes, it is doable; you would usually want to specify a different override Java property for the storage-enabled flag in each service scheme.
    An example of when you would want to do this is when different services store data in different partitioned cache services (and usually they also have access to a database from a cache server, e.g. via a cache store) and you want to deploy the server side of the services to different nodes for access control or provisioning reasons.
    In these cases it is even possible that the server side of one service is the client side of another service, meaning that the cache configuration of both services needs to exist in some JVMs.
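    As an illustration, a node can then pick its per-service role before joining the cluster. The property names below are hypothetical; they have to match the system-property attributes you declare on the <local-storage> element of each distributed-scheme:
    import com.tangosol.net.CacheFactory;

    // Hedged sketch: per-service storage flags chosen at JVM startup.
    public class MixedStorageNode {
        public static void main(String[] args) {
            System.setProperty("example.orders.localstorage", "true");   // this JVM stores data for the Orders service
            System.setProperty("example.pricing.localstorage", "false"); // but is only a client of the Pricing service
            CacheFactory.ensureCluster();  // join the cluster with the roles chosen above
        }
    }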
    I posted some additional info and configuration in this forum thread some months ago:
    Re: Partitioned cache - where to put what config files?
    In particular, look at the later posts in the thread.
    Best regards,
    Robert
    Edited by: robvarga on Dec 5, 2008 4:16 PM
    Added link to forum thread.

  • Remote distributed scheme with custom serializer does not work

    Hi,
    I have a remote scheme invoked through TCP*Extend, which is backed by a distributed-scheme. Now, if I have a custom serializer attached, the first node in the cluster comes up without a problem but not the second one onwards. It gives the following exception...
    2010-11-01 18:17:27.302/21.532 Oracle Coherence EE 3.6.0.1 <Error> (thread=DistributedCache:extend-service, member=2): The service "extend-service" is configured to use serializer <our package>.JavaSerialisation@56278964, which appears to be different from the serializer used by Member(Id=1, Timestamp=2010-11-01 18:14:35.066, Address=192.168.113.56:8088, MachineId=6456, Location=site:oursite.corp,machine:mg-ldn-d0002,process:540, Role=Exe4jRuntimeWinLauncher).
    java.io.IOException: invalid type: 119
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$PartitionConfig.readObject(PartitionedService.CDB:25)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$MemberWelcome.read(PartitionedService.CDB:20)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:42)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
    Stopping the extend-service service.
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
    2010-11-01 18:17:27.302/21.532 Oracle Coherence EE 3.6.0.1 <D5> (thread=DistributedCache:extend-service, member=2): Service extend-service left the cluster

    Config file contents follow...
    The server config -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>client-request-cache</cache-name>
                   <scheme-name>extend-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <proxy-scheme>
                   <service-name>ExtendTcpProxyService</service-name>
                   <acceptor-config>
                        <tcp-acceptor>
                             <local-address>
                                  <address>localhost</address>
                                  <port>9098</port>
                             </local-address>
                        </tcp-acceptor>
                        <serializer>
                             <class-name>test.JavaSerialisation</class-name>
                        </serializer>
                   </acceptor-config>
              </proxy-scheme>
              <distributed-scheme>
                   <scheme-name>extend-scheme</scheme-name>
                   <service-name>extend-service</service-name>
                   <serializer>
                        <class-name>test.JavaSerialisation</class-name>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme/>
                   </backing-map-scheme>
                   <autostart>false</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Client - config is as follows
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>client-request-cache</cache-name>
                   <scheme-name>client-extend-scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <remote-cache-scheme>
                   <scheme-name>client-extend-scheme</scheme-name>
                   <service-name>ExtendTcpCacheService</service-name>
                   <initiator-config>
                        <tcp-initiator>
                             <remote-addresses>
                                  <socket-address>
                                       <address>localhost</address>
                                       <port>9099</port>
                                  </socket-address>
                             </remote-addresses>
                             <connect-timeout>5s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                             <!-- If server has not responded by 100s, then will time out -->
                             <request-timeout>100s</request-timeout>
                        </outgoing-message-handler>
                        <serializer>
                             <instance>
                                  <class-name>test.JavaSerialisation</class-name>
                             </instance>
                        </serializer>
                   </initiator-config>
              </remote-cache-scheme>
         </caching-schemes>
    </cache-config>
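    For reference, a custom serializer along these lines is just an implementation of com.tangosol.io.Serializer; a minimal Java-serialization-based sketch (illustrative only, not our actual test.JavaSerialisation) would be:
    import java.io.IOException;
    import com.tangosol.io.ReadBuffer;
    import com.tangosol.io.Serializer;
    import com.tangosol.io.WriteBuffer;
    import com.tangosol.util.ExternalizableHelper;

    // Every member of the same service must use the same serializer class,
    // otherwise later members refuse to join (as in the error above).
    public class PlainJavaSerializer implements Serializer {
        public void serialize(WriteBuffer.BufferOutput out, Object o) throws IOException {
            ExternalizableHelper.writeObject(out, o);  // default (non-POF) wire format
        }
        public Object deserialize(ReadBuffer.BufferInput in) throws IOException {
            return ExternalizableHelper.readObject(in);
        }
    }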
    Edited by: 807718 on 08-Nov-2010 10:14
    Edited by: 807718 on 08-Nov-2010 10:15

  • Multiple caches use same distributed scheme, what about cachestore config?

    Hi,
    If a distributed scheme is configured with a read-write-backing-map, will ALL caches be required to define the cache store configuration if the caches are using the same distributed scheme? For example, we have two caches, "ExpiredSessions" and "Customer", which use the same distributed-scheme. But only the "ExpiredSessions" cache needs to have a read-write-backing-map (for transaction lite) AND needs to be persisted to the DB. The "Customer" cache does NOT need a read-write-backing-map AND does NOT need to be persisted to the DB. Currently, we have the following configuration, which also has the "write-delay" and "cache-store" configurations for the "Customer" cache.
    Is it possible not to have the cache store configuration (write-delay, cache-store) for the Customer cache even though it is using the same distributed-scheme as the "ExpiredSessions" cache (which needs the cache store configuration)? We think it could remove some of the overhead and improve efficiency of the 'Customer' cache operations.
    Or is it required to have a separate distributed-scheme for the "Customer" cache without the cache store configuration? But would that then use separate/additional thread pools for the distributed service?
    Any suggestions?
    Thanks in advance for your help.
    <cache-mapping>
         <cache-name>ExpiredSessions</cache-name>
         <scheme-name>default-distributed</scheme-name>
         <init-params>
              <init-param>
                   <param-name>expiry-delay</param-name>
                   <param-value>2s</param-value>
              </init-param>
              <init-param>
                   <param-name>write-delay</param-name>
                   <param-value>1s</param-value>
              </init-param>
              <init-param>
                   <param-name>cache-store</param-name>
                   <param-value>xxx.xxx.DBCacheStore</param-value>
              </init-param>
         </init-params>
    </cache-mapping>
    <cache-mapping>
         <cache-name>Customer</cache-name>
         <scheme-name>default-distributed</scheme-name>
         <init-params>
              <init-param>
                   <param-name>cache-store</param-name>
                   <param-value>xxx.xxx.EmptyCacheStore</param-value>
              </init-param>
              <init-param>
                   <param-name>write-delay</param-name>
                   <param-value>24h</param-value>
              </init-param>
         </init-params>
    </cache-mapping>
    <!-- Default Distributed pricing-distributedcaching scheme. -->
    <distributed-scheme>
         <scheme-name>default-distributed</scheme-name>
         <service-name>XXXDistributedCache</service-name>
         <thread-count>16</thread-count>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <scheme-ref>rw-bm</scheme-ref>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>

    Hi,
    Yes, you can use the same service for different caches with different configurations. You need to define a base service configuration with the common parameters, and all the caches then have their own schemes (not services) referring to the base service configuration along with their own configuration. For example,
              <cache-mapping>
                   <cache-name>ExpiredSessions</cache-name>
                   <scheme-name>default-distributed</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>expiry-delay</param-name>
                             <param-value>2s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>write-delay</param-name>
                             <param-value>1s</param-value>
                        </init-param>
                        <init-param>
                             <param-name>cache-store</param-name>
                             <param-value>xxx.xxx.DBCacheStore</param-value>                    
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>Customer</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
              </caching-scheme-mapping>
                   <!-- Default Distributed pricing-distributedcaching scheme. -->
              <caching-schemes>
                   <distributed-scheme>
                        <scheme-name>default-distributed</scheme-name>
                        <service-name>XXXDistributedCache</service-name>
                        <thread-count>16</thread-count>
                        <autostart>true</autostart>
                   </distributed-scheme>
                   <distributed-scheme>
                        <scheme-name>default-customer</scheme-name>
                        <scheme-ref>default-distributed</scheme-ref>
                        <backing-map-scheme>
                             <local-scheme />
                        </backing-map-scheme>
                   </distributed-scheme>
                   <distributed-scheme>
                        <scheme-name>default-expiry</scheme-name>
                        <scheme-ref>default-distributed</scheme-ref>
                        <backing-map-scheme>
                             <read-write-backing-map-scheme>
                                  <scheme-ref>rw-bm</scheme-ref>
                             </read-write-backing-map-scheme>
                        </backing-map-scheme>
                   </distributed-scheme>
              </caching-schemes>
    Hope this helps!
    Cheers,
    NJ

  • Cachestore for distributed scheme not getting invoked to "load" the entries

    Hi,
    We have a distributed scheme with CacheStore as below,
    SERVER PART
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>          
         <cache-mapping>
                   <cache-name>ReferenceData-Cache</cache-name>
                   <scheme-name>RefData_Distributed_Scheme</scheme-name>
         </cache-mapping>
         <!--
         FEW OTHER DISTRIBUTED CACHES
         -->
    </caching-scheme-mapping>
    <caching-schemes>
    <!-- definition of other cache schemes including one proxy scheme -->
    <distributed-scheme>
         <scheme-name>RefData_Distributed_Scheme</scheme-name>
         <service-name>RefData_Distributed_Service</service-name>     
         <serializer>
         <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>                    
         <init-params>
              <init-param>
                   <param-type>string</param-type>
                   <param-value>TradeEngine-POF.xml</param-value>
              </init-param>
         </init-params>     
         </serializer>                         
         <backing-map-scheme>
         <read-write-backing-map-scheme>
         <internal-cache-scheme>
         <local-scheme>
         <expiry-delay>1m</expiry-delay>
         </local-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
         <class-scheme>
         <class-name>com.csfb.fid.gtb.referencedatacache.cachestore.RefDataCacheStore</class-name>     
         <init-params>
                        <init-param>
                             <param-type>string</param-type>
                             <param-value>{cache-name}</param-value>
                        </init-param>
                        </init-params>                    
         </class-scheme>
         </cachestore-scheme>
         <read-only>true</read-only>
         <refresh-ahead-factor>.5</refresh-ahead-factor>
         </read-write-backing-map-scheme>
         </backing-map-scheme>
         <backup-count>1</backup-count>
         <autostart system-property="tangosol.coherence.distributed-service.enabled">false</autostart>           
    </distributed-scheme>
    </caching-schemes>
    </cache-config>
    The above configuration is used on tcp extend proxy node with localstorage=false
    There is similar configuration on storage node,
    - with no proxy,
    - with same "ReferenceData-Cache" (autostart=true)
    - and localstorage=true.
    Following is my CacheStore implementation.
     NOTE: This CacheStore is only for loading cache entries from the cache store (i.e. from an Excel file in my case), so only load() and loadAll() are implemented.
     store() and storeAll() are no-ops.
    package com.csfb.fid.gtb.referencedatacache.cachestore;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;
    import com.creditsuisse.fid.gtb.common.FileLogger;
    import com.csfb.fid.gtb.referencedatacache.Currency;
    import com.csfb.fid.gtb.utils.refdada.DBDetails;
    import com.csfb.fid.gtb.utils.refdada.ReferenceDataReaderUtility;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CacheStore;
     public class RefDataCacheStore implements CacheStore {
          private DBDetails dbDetails = null;
          private ReferenceDataReaderUtility utils = null;

          public RefDataCacheStore(String cacheName) {
               System.out.println("RefDataCacheStore constructor..");
               //dbDetails = DBDetails.getInstance();
               utils = new ReferenceDataReaderUtility();
          }

          // load a single entry from the reference data source
          public Object load(Object key) {
               return utils.readCurrency(key);
          }

          // read-only cache store: the write/erase operations are intentionally no-ops
          public void store(Object oKey, Object oValue) { }
          public void erase(Object oKey) { }
          public void eraseAll(Collection colKeys) { }
          public void storeAll(Map mapEntries) { }

          public Map loadAll(Collection colKeys) {
               System.out.println("RefDataCacheStore loadAll..");
               // builds the map from all currencies; colKeys is not used here
               Map<String, Object> objectMap = new HashMap<String, Object>();
               List<Object> list = utils.readAllCurrencies();
               Iterator<Object> listItr = list.iterator();
               while (listItr.hasNext()) {
                    Object obj = listItr.next();
                    if (obj != null) {
                         String key = "CU-" + ((Currency) obj).getId();
                         objectMap.put(key, (Currency) obj);
                    }
               }
               return objectMap;
          }
     }
    CLIENT PART
     I connect to this cache using an extend client with the following config file:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                        <cache-name>ReferenceData-Cache</cache-name>
                        <scheme-name>coherence-remote-scheme</scheme-name>
              </cache-mapping>     
         </caching-scheme-mapping>
         <caching-schemes>
              <remote-cache-scheme>
                   <scheme-name>coherence-remote-scheme</scheme-name>
                   <initiator-config>
                        <serializer>
                             <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>                         
                             <init-params>
                                  <init-param>
                                       <param-type>string</param-type>
                                       <param-value>TradeEngine-POF.xml</param-value>
                                  </init-param>
                             </init-params>                         
                        </serializer>
                        <tcp-initiator>
                             <remote-addresses>                              
                                  <socket-address>
                                       <address>169.39.30.182</address>
                                       <port>9001</port>
                                  </socket-address>
                             </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                             <request-timeout>3000s</request-timeout>
                        </outgoing-message-handler>                                   
                   </initiator-config>
              </remote-cache-scheme>
         </caching-schemes>
    </cache-config>
    PROBLEM
     From my test case (with the extend client file as configuration), when I try to get a cache handle for this cache, as
     refDataCache = CacheFactory.getCache("ReferenceData-Cache");
     I get the following error on the server side:
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2): BackingMapManager com.tangosol.net.DefaultConfigurableCacheFactory$Manager: failed to instantiate a cache: ReferenceData-Cache
    2010-05-12 18:28:25.229/1687.847 Oracle Coherence GE 3.5.3/465 <Error> (thread=DistributedCache, member=2):
    java.lang.IllegalArgumentException: No scheme for cache: "ReferenceData-Cache"
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:507)
         at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:3486)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.instantiateResourceMap(DistributedCache.CDB:22)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.setCacheName(DistributedCache.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ConfigListener.entryInserted(DistributedCache.CDB:15)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.collections.WrapperMap.put(WrapperMap.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.put(Grid.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$StorageIdRequest.onReceived(DistributedCache.CDB:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
     However, even with this error I am able to do normal put operations and get operations against those puts.
     But when I try to get a cache entry which doesn't exist in the cache, like
     refDataCache.get("CU-1");
     I expected that the call would go to the RefDataCacheStore.load method, but in my case I don't see that happening.
     I also ran my JVMs in debug mode and put a class-load breakpoint on RefDataCacheStore, but even that is not hit. RefDataCacheStore is on my server classpath.
     I hope to see a reply on this soon.
    Thanks
    Manish

    Hi Manish,
    <<previous advice deleted>>
     user13113666 wrote:
     Hi,
     I have my server picking up the correct configuration files for "ReferenceData-Cache". In my local jconsole I could see that the service named "RefData_Distributed_Service" is started under the service MBean. Also I am able to perform a put and get on "ReferenceData-Cache", and I can even see the StorageManager MBean showing my inserted entries for "RefData_Distributed_Service".
     With the local jconsole, are you monitoring the server/proxy node, or the TCP*Extend client node?
     The client can have the service while the server still does not have it.
     Could you please post the startup log for the storage node on the server...
    Best regards,
    Robert
    Edited by: robvarga on May 17, 2010 12:33 PM

  • Cache Mapping & Distributed Scheme not working.

    I have a set of cache mappings and their corresponding distributed scheme defined. When I run it on its own it works fine, but when I add the same cache mapping and distributed scheme to a set of cache mappings and distributed schemes (i.e. HibernateCacheStore) which are all bound to a service-name, I get the message "com.tangosol.net.RequestPolicyException: No storage-enabled nodes exist for service .... "
    I am using a cache config in which local storage is enabled on the server side and my application connects to it with localstorage=false. I have tried the system property option as well, but to no avail.
    I have tried it using Coherence standalone as well as running it from JBoss. The same behavior persists.
    My intention is to keep this cache mapping separate from the existing service (i.e. service-name).
    Is anyone aware of any conflicts when setting up this kind of scenario? Looking forward to some pointers.
    I am using Coherence 3.6.1 with JBoss 5.x.
    Edited by: Kapil Naudiyal on Jan 1, 2012 8:47 PM
    A little more research, to be more specific. I tried the following combinations.
    Scenario 1: a standalone Coherence server, a client that initializes the cache, and a separate client that uses the cache. This works fine.
    Scenario 2: the Coherence server and the cache-initializing client run in the same process, with a separate client using the cache. Here the cache doesn't work. This scenario is implemented within JBoss. Is there a limitation that an embedded cache instance loads only one service?
    Thanks in advance for any insights.

    Hi,
    The message "com.tangosol.net.RequestPolicyException: No storage-enabled nodes exist for service .... " implies that your clients are unable to coonect to storage-enabled nodes so you need to ensure if clients are able to connect to the Coherence servers. The easier way to find out is to use the JMX and then look for node details or cluster size. Setting up JMX is quite simple and documentation can be found here - http://docs.oracle.com/cd/E15357_01/coh.360/e15723/manage_jmx.htm
    Hope this helps!
    Cheers,
    NJ

  • Using custom backing map for distributed scheme

    Hello,
    We are evaluating the use of a distributed scheme for storing a large amount of data. We have analyzed memory usage for a distributed-scheme backed by a local-scheme (with the binary unit calculator) and got interesting results.
    On a single JVM:
    110M - byte arrays - our data in binary form
    20M - com.tangosol.util.Binary objects - overhead of binary objects to wrap byte arrays
    27M - com.tangosol.net.cache.LocalCache$Entry objects - local cache entry overhead
    When we tried our own implementation of java.util.Map as the backing map (using some clever tricks to get a smaller size at the cost of CPU processing), we got:
    On a single JVM:
    36M - byte arrays - our application data
    So it looks like we are going to stick with our implementation.
    My question:
    Are there any known drawbacks of using an implementation of java.util.Map as the backing map for a distributed-scheme?
    Our implementation is of course thread-safe (we have already tested it with the thread-count option of the distributed-scheme).
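    For context, the contract we are implementing is simply a thread-safe java.util.Map holding the serialized (Binary) keys and values; a purely illustrative skeleton (class name is made up, and a real implementation would compact the values):
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative skeleton of a custom backing map for a class-scheme.
    public class CompactBackingMap extends ConcurrentHashMap<Object, Object> {
        public Object put(Object key, Object value) {
            // hook point: a real implementation could re-encode the Binary value here
            return super.put(key, value);
        }
    }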

    Hi Alexey,
    alexey.ragozin wrote:
    But I already have this table in the first map (String -> Binary); in a former project we used a handmade Map implementation with an internKey() method to avoid duplication of data.
    I'm looking for a way to reuse our old and proven techniques with Coherence caches.
    Thank you,
    Alexey
    Unfortunately there is no way the extractor could get hold of the already existing key reference in the reverse index. I agree that the extracted value reference in the reverse index entry (the key of that entry) should be reused by Coherence as the forward index value when a reverse index entry for the same extracted value already exists, but apparently it is not.
    Try to submit an enhancement request for this (and please share the ticket number for it so we can also look for it in the release notes).
    Best regards,
    Robert

  • Mixing POF and Serializable in Distributed Schemes

    Hi,
    Is it possible to have 2 named caches, both using distributed schemes (2 distributed services), use 2 different serialization strategies, one with POF and the other with Java serialization? I defined 2 distributed schemes for the two named caches, but I get errors that the objects I put into the Java-serializable named cache are an unknown user type. I do not want to implement PortableObject on this class.
    If I start up the grid and client with the system properties
    -Dtangosol.pof.config=knowmed-pof-config.xml
    -Dtangosol.pof.enabled=true
    both services get com.tangosol.io.pof.ConfigurablePofContext as their serializer class. If I don't use these system properties at startup, I am not able to start the service with the POF serializer. I get errors on both the client and the grid:
    Grid:
    2008-12-05 11:49:17.897/38.907 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=1): An exception (java.lang.IllegalArgumentException) occurred reading Message MemberConfigUpdate Type=-3 for Service=DistributedCache{Name=DistributedCachePOF, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    2008-12-05 11:49:17.897/38.907 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalArgumentException
    2008-12-05 11:49:17.897/38.907 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=1):
    java.lang.IllegalArgumentException: unknown user type: 6
         at com.tangosol.io.pof.ConfigurablePofContext.getPofSerializer(ConfigurablePofContext.java:373)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3281)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.readObject(Grid.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ServiceConfigMap.readObject(DistributedCache.CDB:23)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$MemberConfigUpdate.read(Grid.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:595)
    Client:
    2008-12-05 11:49:20,741 ERROR [Log4j] 2008-12-05 11:49:20.663/25.657 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=2): An exception (java.io.IOException) occurred reading Message MemberConfigUpdate Type=-3 for Service=DistributedCache{Name=DistributedCachePOF, State=(SERVICE_STARTED), LocalStorage=disabled}
    2008-12-05 11:49:20,741 ERROR [Log4j] 2008-12-05 11:49:20.663/25.657 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=2): Terminating DistributedCache due to unhandled exception: java.io.IOException
    2008-12-05 11:49:20,757 ERROR [Log4j] 2008-12-05 11:49:20.663/25.657 Oracle Coherence GE 3.4/405p1 <Error> (thread=DistributedCache:DistributedCachePOF, member=2):
    java.io.IOException: unsupported type / corrupted stream: 78
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2222)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2209)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.readObject(Grid.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ServiceConfigMap.readObject(DistributedCache.CDB:23)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$MemberConfigUpdate.read(Grid.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:595)
    Pls advise
    Thanks
    Sairam

    I found the problem. I was overriding the DistributedScheme with a tangosol-override.xml file in my project which was causing some conflict with the serializer setting for each distributed service. I removed the override and now I am able to have different serialization strategies for different services.
    Regards,
    Sairam

  • Distributed scheme partitions and custom backing map.

    Hi,
    I'm experimenting with a custom backing map (class-scheme) for a distributed-scheme. I can see that only one instance of my map is created and it is used to store data from all partitions hosted by the JVM.
    My question is
    In the case of a partition transfer, how will the data of a single partition be extracted from the backing map? Will it require a full scan over keySet()? Will it be done in parallel with cache operations (on the transferring partition/other partitions)? I'm asking because I want to know what kind of concurrent access a custom map implementation must support to be used as the backing map of a distributed cache.
    Thank you.

    Hi,
    Organizing the data for partition transfer (e.g. due to failover) requires iterating the keyset. However, only the partition in question should be locked; remaining partitions owned by the same cache service can still be serviced. The locking is handled at a Coherence-internal level.
    In Coherence 3.5, a full key iteration is no longer necessary, reducing the overhead of partition transfer.
    thanks,
    -Rob

  • Report execution time should displayed in Local Time

    Hi,
    I have a query related to the Report execution time.
    Our SAP servers are located in the US. The servers are configured with US time.
    We have developed a Z report and used SY-UZEIT to display the report run time.
    We also have a plant in India.
    When we execute this report for the India plant, we get the US execution time.
    But we need to get the run time in India time.
    Please help ASAP.
    Regards,
    Shankar

    Hi Shankaran,
    In your Z report, add a condition to check whether the plant is in India. If it is in India, then take SY-UZEIT and add the time difference to get IST.
    Display this time on your report.
    To implement this, you will have to convert the date and time into a timestamp (use FM "LXHME_TIMESTAMP_CONVERT_INTO"), add the time difference to this timestamp (use FM "TIMESTAMP_DURATION_ADD"), and convert it back to date and time (use FM "LXHME_TIMESTAMP_CONVERT_FROM").
    Reward points if useful.
    regards,
    Raj
    Message was edited by: Rajagopal G

  • Execution Time & Explain Plan Variations

    I have a scenario here. I have 2 schemas: schema1 and schema2. I executed a lengthy SELECT statement joining 5 tables in both schemas. I am getting totally different execution times (one runs in 0.3 seconds, the other in 4 seconds) and a different explain plan. I assumed that, since it is the same SELECT statement in both schemas, I should get the same explain plan. What could be the reason for these dissimilarities? Oracle version: 9.2.0.8.0. I am ready to share the explain plans of these 2 schemas, but they are around 300 lines long.
    Thank you.

    There are many factors that come into play here.
    1) The sizes of all tables involved are the same
    2) The structures are also the same
    3) The indexes are also the same
    4) The stats are also up to date on both
    5) Constraints and other factors are also the same
    regards
    Pravin
    And a few more:
    6) session environments are the same
    7) bind variable values are the same - or were at first execution
    I'd change 4 to read "optimizer statistics are the same" (not "up to date", which is a bit vague).
    In short if you are using the CBO and feed in the exact same inputs you will get the exact same plan, however typically you won't get the exact same inputs and so may get a different plan. If your query has a time element and the two queries are hard parsed at different times it may actually be impossible to get the same input - for example the percentage of a table that is returned by a predicate like
    timestamp_col > sysdate - 1 will be estimated differently depending on the time of parsing for the same data.
    That all said looking at the plans might reveal some obvious differences, though perhaps it might be better to point at a URL that holds the plans given the length you say they are.
    Niall Litchfield
    http://www.orawin.info/

  • How to check mappings execution time in Process flow

    Hi All,
    We created one process flow and scheduled it. It completes successfully after 30 minutes.
    The process flow contains 3 mappings: the first mapping completes successfully, then the second mapping starts, and after the second mapping completes successfully, the third mapping starts and completes successfully. Success emails are then generated.
    I would like to know which mapping is taking a long time to execute.
    Could you please suggest how we can find out which mapping is taking a long time to execute?
    I don't want to run each mapping individually and look at its execution time.
    Regards,
    Ava.

    Execute the below query in the OWB owner or user schema.
    In place of '11111', give the execution id from the Control Center.
    select map_run.NUMBER_RECORDS_INSERTED,
           map_run.NUMBER_RECORDS_MERGED,
           map_run.NUMBER_RECORDS_UPDATED,
           exe.execution_audit_id,
           exe.ELAPSE_TIME,
           exe.EXECUTION_NAME,
           exe.EXECUTION_AUDIT_STATUS,
           map_run.MAP_NAME
      from ALL_RT_AUDIT_MAP_RUNS map_run, ALL_RT_AUDIT_EXECUTIONS exe
     where exe.EXECUTION_AUDIT_ID = map_run.EXECUTION_AUDIT_ID(+)
       and exe.execution_audit_id > '11111'
     order by exe.execution_audit_id desc
    Cheers
    Nawneet
    Edited by: Nawneet on Feb 22, 2010 4:26 AM

  • ETL execution time want to reduce

    Hi everybody,
    I am working on OWB 10g R2.
    Environment: Windows 2003 Server 64-bit on an Itanium server;
    the Oracle 10g database is on a NetApp server mapped as the I: drive on the 186 server where OWB is installed.
    Source: Oracle staging schema
    Target: Oracle target schema
    Problem:
    Until a month ago our ETL process was taking 2 hours to complete.
    Nowadays it takes 5 hours, and I don't know why.
    Can anybody suggest what I need to check in OWB for optimization?

    Thanks for the reply, sir.
    As you suggested a query for checking the execution times in descending order, I am sending you a little bit of the output for today's execution.
    MAP_NAME                      START_TIME  END_TIME   ELAPSE_TIME  ERRORS  LOGICAL_ERRORS  SELECTED  INSERTED  UPDATED  DELETED  DISCARDED  MERGED
    M_CONTRACT_SUMMARY_M2__V_1    20-NOV-07   20-NOV-07  1056         0       0               346150    0         346052   0        0          0
    M_POLICY_SUSPENCE_V_1         20-NOV-07   20-NOV-07  884          0       0               246576    0         0        0        0          246576
    M_ACTIVITY_AMT_DETAIL_M3_V_1  20-NOV-07   20-NOV-07  615          0       0               13927     13927     0        0        0          0
    ==================================
    I think the elapsed time depends on the number of records selected and inserted/merged; if the record counts go down, the time also goes down. But compared to before (when the ETL finished within 2 hours), we are seeing more than 100 seconds of difference per mapping between that time and now.
    The source tables are analyzed daily before the mapping execution starts, and the target tables are analyzed in the evening.
    As far as I remember, no major changes were made to the ETL mappings recently. One day a problem arose with the source location for another process, Wonders (as I said before, there are 3 main processes: Sun, Wonders and Life_asia, of which Sun and Wonders are scheduled), so we corrected that location and redeployed all the mappings as requested by the Control Center.
    Since then the mappings run fine, but the execution time has increased by one more hour (5+ hours) compared to before (3-4 hours).
    The normal times were:
    2 hrs for LifeAsia
    30 minutes for Wonders
    15 minutes for Sun
    Can you suggest what I can do as a temporary or permanent solution to this problem?
    Our system configuration:
    1 TB HDD, of which 200-300 GB is free
    4 GB RAM
    64-bit Windows OS
    Temp tablespace 99% used, with auto-extend
    Target tablespace 93-95% used
    Data is loaded incrementally every day.
    The load window was 5 am to 8 am, which nowadays stretches to 12:30 pm,
    after which the materialized views refresh,
    after which the reports and cubes refresh.
    So the whole process is delayed, and this is a live process.
    Let me know if you need any other information.
    Regarding the hardware configuration, do we need to increase something, like RAM/memory, etc.?
    Waiting for a reply...

  • How to extract execution time of a step

    How do I extract the execution time of a step?
    This step calls another sequence, and
    I want to know how long it takes to execute that sequence.
    I need this information at run time, not in the
    report.
    thanks.

    Hi,
    You could try this:
    Enable Record Results for the step in question.
    This will allow you to extract the TotalTime.
    Then you could use RemoveElements(Locals.ResultList, "[0]", 1) - this assumes you have only logged one result. This is the same as not recording results for that step.
    Otherwise you will have to work out the time yourself by calling Execution.SecondsExecuting() - Execution.SecondsAtStart().
    Regards
    Ray Farmer

  • Execution Time Format

    I want to get rid of the decimal portion of the Execution Time that prints on my test report. At a minimum, I want it to display only one significant digit. Below is the expression found in reportgen_txt.seq for the
    f(x) Add Execution Time step. What I'm wondering is: 1) how do I modify the format, and 2) in general, how do I find out what is allowable in this expression?
    Locals.Header += Str(ResStr("MODEL", "RPT_HEADER_EXEC_TIME"), "%-30s") + (PropertyExists("Parameters.MainSequenceResults.TS.TotalTime") ? Str(Parameters.MainSequenceResults.TS.TotalTime, Parameters.ReportOptions.NumericFormat , 1, True) + ResStr("MODEL", "RPT_HEADER_SECONDS") : ResStr("MODEL", "RPT_NOT_APPLICABLE")) + "\n"

    Hi,
    The Str() function takes the value as its first parameter and the format string as its second parameter.
    This is normally Parameters.ReportOptions.NumericFormat, which is set by default in the Configuration Report Options as %$.13f; this is a C (printf) style format string, i.e. a float with 13 places of precision.
    Therefore you can change this to, say, %$.1f, which would be 1 place of precision.
    Don't change it in the Configuration Report Options, because that will affect everything, such as the results, limits and so on.
    Regards
    Ray Farmer
