Replicated cache with cache store configuration

Hi All,
I have two different applications. One is an admin-style module from which data is inserted/updated, and the other application reads that data from the Coherence cache.
My requirement is to use a replicated cache, and the data also needs to be stored in the database, so I am configuring the cache with a cache store that performs the DB operations.
I have the following Coherence configuration, and it works fine: the other application is able to read updated data. But when the second application tries to join the first application's Coherence cluster, I get the exception below in the cache store. If I use a distributed cache, the same cache store works without any issues.
Also note that even though the exception is thrown, the application keeps working as expected. One other thing: I pre-load data on application start-up in the first application.
Let me know if you need any further information.
Thanks in advance.
coherence-cache-config.xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>TestCache</cache-name>
      <scheme-name>TestCacheDB</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <replicated-scheme>
      <scheme-name>TestCacheDB</scheme-name>
      <service-name>ReplicatedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-name>TestDBLocal</scheme-name>
          <cachestore-scheme>
            <class-scheme>
              <class-name>test.TestCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>TEST_SUPPORT</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </local-scheme>
      </backing-map-scheme>
      <listener/>
      <autostart>true</autostart>
    </replicated-scheme>
    <!--
    Proxy Service scheme that allows remote clients to connect to the
    cluster over TCP/IP.
    -->
    <proxy-scheme>
      <scheme-name>proxy</scheme-name>
      <service-name>ProxyService</service-name>
      <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address system-property="tangosol.coherence.extend.address">localhost</address>
            <port system-property="tangosol.coherence.extend.port">7001</port>
            <reusable>true</reusable>
          </local-address>
        </tcp-acceptor>
        <serializer>
          <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          <init-params>
            <init-param>
              <param-type>String</param-type>
              <param-value>pof-config.xml</param-value>
            </init-param>
          </init-params>
        </serializer>
      </acceptor-config>
      <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
Exception:
2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.ClassCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
     at test.TestCacheStore.store(TestCacheStore.java:137)
     at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
     at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
     at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
     at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
     at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
     at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
     at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
     at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
     at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
     at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
     at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     at java.lang.Thread.run(Thread.java:619)
2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the distribution due to 128 pending configuration updates
TestBean.java
import java.io.IOException;
import java.io.Serializable;

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;

public class TestBean implements PortableObject, Serializable {
     private static final long serialVersionUID = 1L;
     private String name;
     private String number;
     private String taskType;

     public String getName() {
          return name;
     }
     public void setName(String name) {
          this.name = name;
     }
     public String getNumber() {
          return number;
     }
     public void setNumber(String number) {
          this.number = number;
     }
     public String getTaskType() {
          return taskType;
     }
     public void setTaskType(String taskType) {
          this.taskType = taskType;
     }
     @Override
     public void readExternal(PofReader reader) throws IOException {
          name = reader.readString(0);
          number = reader.readString(1);
          taskType = reader.readString(2);
     }
     @Override
     public void writeExternal(PofWriter writer) throws IOException {
          writer.writeString(0, name);
          writer.writeString(1, number);
          writer.writeString(2, taskType);
     }
}
TestCacheStore.java
import java.sql.Connection;

import com.tangosol.net.cache.CacheStore;
import com.tangosol.util.Base;

public class TestCacheStore extends Base implements CacheStore {
     // Note: 'logger' and 'ConnectionFactory' are the poster's own
     // application classes/fields; their declarations were not included
     // in the original post.
     @Override
     public void store(Object oKey, Object oValue) {
          if (logger.isInfoEnabled())
               logger.info("store :" + oKey);
          TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
          // Doing some processing here over testBean
          ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
          // Get the Connection
          Connection con = connectionFactory.getConnection();
          if (con != null) {
               // Code to insert into the database
          } else {
               logger.error("Connection is NULL");
          }
     }
     // Remaining CacheStore methods (load, loadAll, storeAll, erase,
     // eraseAll) were omitted from the original post.
}
Edited by: user8279242 on Aug 30, 2010 11:44 PM

Hello,
The problem is that replicated caches are not supported with read-write backing maps.
Please refer to the link below for more information.
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
Best regards,
-Dave
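
On the ClassCastException itself: when the replicated service applies an update received from another member, the local backing map can hand its listener the value still in serialized Binary form, and that is what reaches the CacheStore. The supported fix is the distributed scheme the link above describes, but a purely defensive sketch (not the documented pattern; ExternalizableHelper.fromBinary here assumes the service's default serializer, which may not match a POF configuration) would guard store() against the Binary case:

import com.tangosol.util.Binary;
import com.tangosol.util.ExternalizableHelper;

// Drop-in revision of TestCacheStore.store(...) from the original post.
@Override
public void store(Object oKey, Object oValue) {
     // The replicated service may deliver the value in Binary form
     // rather than as the deserialized TestBean.
     if (oValue instanceof Binary) {
          oValue = ExternalizableHelper.fromBinary((Binary) oValue);
     }
     TestBean testBean = (TestBean) oValue;
     // ... database write as in the original store() ...
}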

Similar Messages

  • Replicated cache scheme with cache store

    Hi All,
I have the following configuration for the UserCacheDB cache in coherence-cache-config.xml.
I have a CacheStore class which inserts data into the database, and this data is loaded from the database on application start-up.
I need to make this cache replicated so that the other application will have this data. Can anyone please tell me what configuration will make this cache replicated while keeping the cache store class?
    <distributed-scheme>
                   <scheme-name>UserCacheDB</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <internal-cache-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.util.ObservableHashMap</class-name>
                                  </class-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>test.UserCacheStore</class-name>
                                       <init-params>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>PC_USER</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                             <read-only>false</read-only>
                             <!--
                                  To make this a write-through cache just change the value below to
                                  0 (zero)
                             -->
                             <write-delay-seconds>0</write-delay-seconds>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
                   <listener />
                   <autostart>true</autostart>
              </distributed-scheme>
    Thanks in Advance.

    Hi,
    You should be able to use a cachestore with a local-scheme.
          <replicated-scheme>
            <scheme-name>UserCacheDB</scheme-name>
            <service-name>ReplicatedCache</service-name>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>coherence-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
            <backing-map-scheme>
              <local-scheme>
                <scheme-name>UserCacheDBLocal</scheme-name>
                <cachestore-scheme>
                  <class-scheme>
                    <class-name>test.UserCacheStore</class-name>
                    <init-params>
                      <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>PC_USER</param-value>
                      </init-param>
                    </init-params>
                  </class-scheme>
                </cachestore-scheme>
              </local-scheme>
            </backing-map-scheme>
            <listener/>
            <autostart>true</autostart>
          </replicated-scheme>
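
    One caveat worth hedging: because a replicated cache keeps a full copy of the data on every member, a CacheStore wired into each member's local backing map can be invoked on every node for the same entry. A minimal usage sketch (the cache name is illustrative, assuming a mapping to the UserCacheDB scheme above):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ReplicatedStoreDemo {
         public static void main(String[] args) {
              NamedCache users = CacheFactory.getCache("UserCacheDB");
              // With the replicated scheme above, this single put may trigger
              // UserCacheStore.store(...) on each cluster member, not just once.
              users.put("user-1", "some value");
              CacheFactory.shutdown();
         }
    }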

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed, which might be the best option if the values are large and accessed infrequently, or if you only care about having an up-to-date keyset locally, you can pass false as the third argument to the CQC constructor.
    To get data from the CQC you can use
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();
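
    A self-contained sketch of the keys-only variant described above (the cache name is illustrative):

    import java.util.Map;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.filter.AlwaysFilter;

    public class CqcKeysOnlyDemo {
         public static void main(String[] args) {
              NamedCache back = CacheFactory.getCache("somecache");
              // Third argument false: only keys are materialized locally;
              // values are fetched from the back cache on demand.
              ContinuousQueryCache keysOnly =
                        new ContinuousQueryCache(back, AlwaysFilter.INSTANCE, false);
              for (Object o : keysOnly.entrySet()) {
                   Map.Entry entry = (Map.Entry) o;
                   System.out.println(entry.getKey() + " -> " + entry.getValue());
              }
              CacheFactory.shutdown();
         }
    }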

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK

  • Error removing object from cache with write behind

    We have a cache with a DB for a backing store. The cache has a write-behind delay of about 10 seconds.
    We see an error when we:
    - Write new object to the cache
    - Remove object from cache before it gets written to cachestore (because we're still within the 10 secs and the object has not made it to the db yet).
    At first i was thinking "coherence should know if the object is in the db or not, and do the right thing", but i guess that's not the case?

  • Reader X doesn't display Pdf stream if response contains header "Cache-Control: no-store, no-cache"

    Hi all,
    I work on a web application that, among others, generates Pdf documents. It renders them directly within the IE window by "streaming" the content of the Pdf in the response output stream. Note that we also add the header "Cache-Control", "no-store, no-cache, must-revalidate,post-check=0, pre-check=0" to the response.
    Everything was fine with previous version of reader but since I installed Adobe reader X the content of the Pdf is not showing any more in my browser.
    Here is what I already investigated:
    - if I use another machine with an "older" Reader version, it works. If I save the displayed Pdf and try to open it on the machine where X is installed, it works
    - if I remove the Cache-Control header, then it works with the Reader X installation
    Do you have any idea what changed between version 9 and X that could lead to this issue ?
    To ease diagnostic I created a sandbox environment to reproduce the problem, you can go to the following address to see what's happening (or not in case you have version X installed)
    With the Cache-Control header: http://readerxissue.appspot.com
    Without the Cache-Control header: http://readerxissue.appspot.com/enableCache.html
    I must confess that I am a bit stuck and I wonder if some of you could help.
    Thanks a lot
    Regards
    Vincent
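
    For reference, a minimal sketch of the response setup described above (the servlet and the renderPdf helper are hypothetical names, not from the original application):

    import java.io.IOException;
    import java.io.OutputStream;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PdfStreamServlet extends HttpServlet {
         @Override
         protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                   throws IOException {
              byte[] pdf = renderPdf(req); // hypothetical PDF generation step
              resp.setContentType("application/pdf");
              // The header combination that triggers the Reader X problem;
              // removing it (as tested above) makes the PDF render again.
              resp.setHeader("Cache-Control",
                        "no-store, no-cache, must-revalidate, post-check=0, pre-check=0");
              resp.setContentLength(pdf.length);
              OutputStream out = resp.getOutputStream();
              out.write(pdf);
              out.flush();
         }

         private byte[] renderPdf(HttpServletRequest req) {
              return new byte[0]; // placeholder; real generation is application code
         }
    }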

    Hello,
    We have similar problems in Sweden with Adobe Reader X 10.1 (Swedish) and Internet Explorer 8.0 with streamed PDF files.
    We had some issues and got them resolved by the following:
    Upgraded to Adobe Reader 10.1.2.45 (English)
    - Print issue: http://helpx.adobe.com/acrobat/kb/pdf-wont-print-reader-10.html
    - Grey box in Internet Explorer: http://helpx.adobe.com/acrobat/kb/pdf-opens-grey-screen-browser.html
    - Add site as trusted: Edit -> Preferences, untick Enable Enhanced Security + add the host/URL of the site that is whitelisted to send PDF files
    (the "Automatically trust sites from my Win OS security zones" option doesn't work for us)
    The thing is we run the MUI pack on our Citrix servers and want Adobe Reader in Swedish, but it hasn't been translated yet...
    So we have to wait for the Swedish release of Adobe Reader X 10.1.2.
    Thanks,
    Tony Van Der Haagen
    IT-Mästaren
    Sweden

  • Using a partitionned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitionned cache (with data into the heap) with off-heap storage for backup data ?
    I think it could be worthwhile to do so, as backup data are associated with a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data ?
    Particularly, what are the impacts on performance ?
    Thanks.
    Regards,
    Dominique

    Hi,
    It seems that using a scheme for the backup store is broken in the latest version of Coherence; I got an exception using your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in the debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackingMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229.
    PS: using an asynchronous wrapper around disk-based backup storage should reduce the performance impact.

  • Do we need to point clients to internal SUS with Caching on OSX Server

    It's my understanding that devices (iOS and OS X) on the same local network will grab content from the Caching Service in OS X Server automatically, without any configuration. If that is correct, do we need to point our managed clients to that server for OS updates anymore? Can't we just leave them using Apple's public update servers, and then any time a device is on this local network it will grab the updates from the server?

    Caching Server 2 caches and mirrors recent traffic for iOS and OS X systems and will inherently flush out older material as space is needed — as the name states, it's a cache, after all — where Software Update Server downloads and maintains copies of all updates for OS X.
    When there are a number of OS X systems around, Software Update Server tends to be more predictable.  The updates are available.  With caching server, a barrage of iOS updates or a big OS X update might flush out the stuff you need, and it's off to the network to fetch it again.
    The systems around generally have more disk space available than network bandwidth, so Software Update Server and Caching Server 2 are both in common use, or maybe Reposado, depending on local requirements.  If you have more network than disk, then you might make a different choice — at the price of disk space these days, the extra space is an obvious choice. 
    If there isn't enough disk space on a Mac available but you do have something that'll boot Linux, then Reposado has you covered.

  • OSB result caching with Coherence Out of process

    Existing setup:
    Oracle Fusion Middleware SOA 11g domain with
    1 weblogic cluster
    1 OSB cluster
    We have an out-of-process Coherence cluster, with caches already defined, which is working fine in production.
    The requirement is that the development team would like to use the OSB result caching feature, and we are having a hard time configuring the OSB result cache to join our existing cluster.
    Any suggestions are appreciated.

    Hi,
    You would need to override the operational configuration on the OSB server to join the cluster spawned by the dedicated Coherence servers. Also, set the flag -Dtangosol.coherence.distributed.storage=false in the ServerStart of your OSB servers, which will disable data storage on the OSB servers.
    HTH
    Cheers,
    _NJ
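
    As a hedged illustration of the storage-disabled part (the ServerStart JVM argument is the route described above; setting the system property programmatically before the node joins is shown here only as an equivalent):

    import com.tangosol.net.CacheFactory;

    public class StorageDisabledNode {
         public static void main(String[] args) {
              // Equivalent of -Dtangosol.coherence.distributed.storage=false:
              // this member joins the cluster but holds no partitioned data.
              System.setProperty("tangosol.coherence.distributed.storage", "false");
              CacheFactory.ensureCluster();
         }
    }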

  • Distributed cache with a backing-map as another distributed cache

    Hi All,
    Is it possible to create a distributed cache whose backing-map-scheme is another distributed cache with local storage disabled?
    Please let me know how to configure this type of cache.
    regards
    S

    Hi Cameron,
    I am trying to create a distributed-scheme with a backing-map scheme. Is it possible to configure another distributed cache as the backing map scheme for a cache?
    <distributed-scheme>
         <scheme-name>MyDistCache-2</scheme-name>
         <service-name>MyDistCacheService-2</service-name>
         <backing-map-scheme>
              <external-scheme>
                   <scheme-name>MyDistCache-3</scheme-name>
              </external-scheme>
         </backing-map-scheme>
    </distributed-scheme>
    <distributed-scheme>
         <scheme-name>MyDistCache-3</scheme-name>
         <service-name>MyDistBackCacheService-3</service-name>
         <local-storage>false</local-storage>
    </distributed-scheme>
    Please correct my understanding.
    Regards
    Srini

  • 'Develop module disabled' and No 'SLS cache or SLS store' folders anywhere. Also no registration document

    It has been working fine, but it suddenly says that the Develop module has been disabled. I have looked for solutions on the internet and could not find the SLS cache or SLS store folders where they were supposed to be. It asks for the licence number every time I log in, so it might be to do with that. I have uninstalled and re-installed, but this made no difference. I don't understand what the problem is, and not having the files I mentioned means that I can't find a solution. I also had a look at where the registration document should be and there isn't one. I think files are missing that shouldn't be, but I have no idea how to change that. I'm working on a Windows 8 PC.

    Your statements contradict one another. Did you subscribe to an annual program or did you buy a boxed/downloaded product?
    It sounds like you have tried these solutions? Lightroom doesn't launch or returns "Develop module is disabled" error after 5.5 update

  • Refresh Ahead Cache with JPA

    I am trying to use refresh-ahead caching with the JpaCacheStore. My backing-map config is given below. I am using the same JPA example as given in the Coherence tutorial. The cache is only loading the data from the database when the server starts. When I change the data in the DB, it is not reflected in the cache. I am not sure I am doing the right thing. Need your help!!
    <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <!--Define the cache scheme-->
                             <internal-cache-scheme>
                                  <local-scheme>
                                        <expiry-delay>1m</expiry-delay>
                                  </local-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                       <init-params>
                                            <!--
                                            This param is the entity name
                                            This param is the fully qualified entity class
                                            This param should match the value of the
                                            persistence unit name in persistence.xml
                                            -->
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>com.oracle.handson.{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>JPA</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                              <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
    Thanks in advance.
    John

    I guess this is the answer.
    Sorry for the dumb question :)
    Note: For use with Partitioned (Distributed) and Near cache topologies: Read-through/write-through caching (and variants) are intended for use only with the Partitioned (Distributed) cache topology (and by extension, Near cache). Local caches support a subset of this functionality. Replicated and Optimistic caches should not be used.

  • Warming up coherence cache for Grid-Read configuration

    I am using the TopLink Grid "Grid Read" configuration for my Coherence cache implementation.
    In this, I am facing a problem retrieving data in the following scenario:
    1. A search has been performed based on some criteria.
    2. Part of the result is in the cache and part of the result is in the database.
    3. I cannot bypass the cache with
    setHint(QueryHints.QUERY_REDIRECTOR, new IgnoreDefaultRedirector())
    as Coherence will not return null in this case.
    4. So, only the data that is in the cache will be displayed, instead of the complete result.
    To resolve this issue, I need to warm up the cache at system start-up so that the data in Coherence and the database will be in sync.
    So, my question is: how do I warm up the cache at system start-up?
    Thanks
    Sandeep Singh

  • Problems with cache.clear()

    Hello!
    We are having some problems with cache clears in our production cluster that we do once a day. Sometimes heaps "explode" with a lot of outgoing messages when we do a cache.clear() and the entire cluster starts failing.
    We had some success with an alternate method of doing the cache clear where we iterate cache.keySet() and do a cache.remove(key) with a pause of 100 ms after every 20000 objects until the cache is empty. But today nodes started failing on a cache.size() before the removes started (the first thing we do is log the size of the cache we are about to clear before the remove operations start).
    We have multiple distributed caches configured with a near cache. The nearcache has 10k objects as high units and the back caches vary in size, the largest is around 300k / node.
    In total the DistributedCache-service is handling ~20 caches.
    The cluster consists of 18 storage enabled nodes spread across 6 servers and 31 non storage enabled nodes running on 31 servers.
    The invalidation strategy on the near caches is ALL (or rather it's AUTO, but it looks like it selects ALL, since ListenerFilterCount=29 and ListenerKeyCount=0 on a StorageManager?)
    Partition count is 257, backup count 1, no changes in thread count on the service; the service is DistributedCache.
    Coherence version 3.6.1.0.
    A udp test sending from one node to another displays 60 megabyte/s.
    Heapsize for the Coherence JVMs, 3gb. LargePages is used.
    Heapsize for the front nodes JVMs, 6gb. LargePages is used.
    No long GC-times (until the heaps explode), 0.2-0.6 seconds. CMS-collector.
    JDK 1.6 u21 64-bit.
    Windows 2k8R2.
    We are also running Coherence*Web and some read/write caches, but on different Coherence services. We are not doing any clear/size operations against caches owned by these services.
    Looking at some metrics from the last time we had this problem (where we crashed on cache.size()).
    The number of messages sent by the backing nodes went from <100/s to 20k-50k/s in 15 s.
    The number of messages resent by the backing nodes went from ~0/s to 1k-50k/s depending on the node in 15 s.
    At the time the total number of requests against the DistributedCache-service was around 6-8/s and node.
    To my questions: should it be a problem to do a cache clear with this setup (where the largest cache is around 5.4 million entries)? Should it be a problem to do a cache.size()?
    What is the nicest way to do a cache.clear()? Any other strategy?
    Could a lock on a entry in the cache cause this problem? Should it really cause a problem with cache.size()?
    Any advice?
    BR,
    Carl
    Edited by: carl_abramsson on 2011-nov-14 06:16

    Hi Charlie,
    Thank you for your answer! Yes, actually we are using a lot of expiry, and many of the items are created at roughly the same time! We haven't configured expiry in the cache configuration; instead we do a put with an expiry.
    Regarding the workload, compared to our peak hours it has been very low when we had problems with the size and clear operations. So the backing tier isn't really doing much at the time. That's what has been so strange about this problem.
    The release going live today has PRESENT as the near cache invalidation strategy. We will remove as much of the expiry as possible in the next one.
    BR,
    Carl
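
    The incremental clear Carl describes might look like the following sketch (the batch size and pause come from the post; the cache name is illustrative and error handling is omitted):

    import java.util.HashSet;
    import java.util.Set;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class GentleClear {
         public static void main(String[] args) throws InterruptedException {
              NamedCache cache = CacheFactory.getCache("somecache");
              // Snapshot the keys first so we are not iterating a live
              // distributed view while removing from it.
              Set<Object> keys = new HashSet<Object>(cache.keySet());
              int count = 0;
              for (Object key : keys) {
                   cache.remove(key);
                   // Pause 100 ms after every 20000 removals to let the
                   // cluster drain its message backlog.
                   if (++count % 20000 == 0) {
                        Thread.sleep(100);
                   }
              }
              CacheFactory.shutdown();
         }
    }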

  • No Cache with PHP & FLV

    I used the following PHP, called with netstream.play(), to prevent caching.
    <?php
    function nocache($filename){
         $ctype="application/octet-stream";
         header("Expires: 0");
         // HTTP/1.1
         header("Cache-Control: no-store, no-cache, must-revalidate");
         header("Cache-Control: post-check=0, pre-check=0", false);
         // HTTP/1.0
         header("Pragma: no-cache");
         header("Content-Type: $ctype");
         readfile($filename);
    }
    $fname = $_GET['f'];
    nocache($fname);
    ?>
    This does work, but it would appear that the whole FLV needs to get loaded first, before playing. Is it possible to make it really stream?

    Hi,
    I have installed the extensions as was instructed - however
    it appears as if they can't be read. Here is my error
    Warning: mysql_pconnect(): Client does not support authentication protocol requested by server; consider upgrading MySQL client in c:\inetpub\wwwroot\test_site\test.php on line 14
    Client does not support authentication protocol requested by server; consider upgrading MySQL client
    PHP Warning: Unknown(): Unable to load dynamic library 'c:\PHP-4.4.2-Win32\extensions\php_mysql.dll' - The specified module could not be found. in Unknown on line 0
    PHP Warning: Unknown(): Unable to load dynamic library 'c:\PHP-4.4.2-Win32\extensions\php_mysqli.dll' - The specified module could not be found. in Unknown on line 0
    I have double checked and triple checked, and the files are in the folder where they are supposed to be.
    These are the extensions which I enabled in the PHP.INI file.
    extension=php_mbstring.dll
    extension=php_mysql.dll
    extension=php_mysqli.dll
    This is the extension directory setting from the PHP.INI file, which appears to be right.
    extension_dir = "c:\PHP-4.4.2-Win32\extensions"
    I can't figure out what I am doing wrong! Thanks for all your
    help! Any other ideas I may be missing?

    Hi experts, We have custimized the portal logon page, and during an upgrade it is no longer working. It is back to the standard logon page. We have created our own .par file and deplyoed it (it is still there after the upgrade) and in the "direct edi