Storage disabled nodes and near-cache scheme

This is probably a newbie question. I have a named cache that uses a near-cache scheme with a local-scheme as the front tier. I can see how this works on a cache-server node. But I also have an application node that pushes a lot of data into the same named cache, and that node is set to be storage disabled.
My understanding of a local cache scheme is that data is cached locally on the heap for faster access, and writes are delegated to the service for writing to the backing map. If my application node is storage disabled, is the local cache still used, or is all data obtained from the cache servers?

Hello,
Your understanding is correct. To answer your question: writes will always go through the cache servers. A put will also always go through the cache servers, but the near cache may or may not be populated at that point.
hth,
-Dave
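
For illustration, here is a minimal sketch of what such a storage-disabled application node might look like. The cache name and keys are hypothetical, and it assumes the cache maps to a near scheme and the JVM was started with -Dtangosol.coherence.distributed.localstorage=false:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class NearCacheClient {
        public static void main(String[] args) {
            // Started with -Dtangosol.coherence.distributed.localstorage=false,
            // so this JVM owns no partitions of the distributed back tier.
            NamedCache cache = CacheFactory.getCache("example-near-cache");

            // The put is sent to the storage-enabled cache server that owns the key.
            cache.put("key-1", "value-1");

            // The first get goes to the cache servers; later gets for the same key
            // can be served from the local front tier of the near cache.
            Object first = cache.get("key-1");
            Object second = cache.get("key-1");

            System.out.println(first + " / " + second);
            CacheFactory.shutdown();
        }
    }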

Similar Messages

  • InvocationService on storage disabled nodes

    Hi,
    Is it safe to assume that InvocationService tasks will not fire on cluster nodes that were started using '-Dtangosol.coherence.distributed.localstorage=false'? I've tried both 'execute' and 'query' methods on a cluster that contains both storage-enabled and storage-disabled nodes and the tasks only seem to fire on the storage-enabled nodes. The reason that I'm curious is because the InvocationService.getInfo().getServiceMembers() method returns all the cluster nodes (not just the storage-enabled nodes), and the InvocationService threads show up in the thread dumps of the storage-disabled nodes.
    Thanks,
    Jim

    Hi Rob,
    Thanks for your reply. This turned out to be an elusive problem on my end, complicated by the fact that Coherence was 'eating' an exception. One of the member fields in my invocation class was not serializable, but the InvocationService thread did not give any indication of this error. It wasn't until I put a try/catch around the InvocationService.execute method that I discovered the problem. The local node was the only storage-enabled node, so that explains why the invocation was not being executed on the storage-disabled nodes.
    This might be a good candidate for a bug fix in Coherence (to log some indication that an exception occurred). As it stands, a good programming tip is to ALWAYS put a try/catch around InvocationService.execute() and InvocationService.query().
    Jim
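
    A minimal sketch of that tip, assuming an invocation scheme named "InvocationService" is defined in the cache configuration (the task class and its behaviour are hypothetical):

        import java.util.Map;

        import com.tangosol.net.AbstractInvocable;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.InvocationService;

        public class SafeInvocationExample {

            // Hypothetical task; it must be on the classpath of every node, and every
            // member field must be serializable or the invocation fails on serialization.
            public static class PingTask extends AbstractInvocable {
                public void run() {
                    setResult("pong from member "
                            + CacheFactory.getCluster().getLocalMember().getId());
                }
            }

            public static void main(String[] args) {
                InvocationService service =
                        (InvocationService) CacheFactory.getService("InvocationService");
                try {
                    // A null member set targets all members running this service.
                    Map results = service.query(new PingTask(), null);
                    System.out.println(results);
                } catch (RuntimeException e) {
                    // Without the catch, a problem such as a non-serializable field
                    // in the task can go unnoticed, as described above.
                    e.printStackTrace();
                }
            }
        }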

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and near cache together with expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to configure an expiry-delay on the back scheme and that the near-cache entry would be evicted when the backing entry was evicted. But according to our tests, we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there will not be automatic eviction on the near cache (front scheme)?
    With this config, near cache is never cleared:
    <near-scheme>
      <scheme-name>config-part</scheme-name>
      <front-scheme>
        <local-scheme />
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>config-part-distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>config-part-distributed</scheme-name>
      <service-name>partDistributedCacheService</service-name>
      <thread-count>10</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <read-only>true</read-only>
          <scheme-name>partStatusScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-name>part-eviction</scheme-name>
              <expiry-delay>30s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>net.jakeri.test.PingCacheLoader</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
    </distributed-scheme>

    With this config (added expiry-delay on front-scheme), near cache gets cleared.
    <near-scheme>
      <scheme-name>config-part</scheme-name>
      <front-scheme>
        <local-scheme>
          <expiry-delay>15s</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>config-part-distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>config-part-distributed</scheme-name>
      <service-name>partDistributedCacheService</service-name>
      <thread-count>10</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <read-only>true</read-only>
          <scheme-name>partStatusScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-name>part-eviction</scheme-name>
              <expiry-delay>30s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>net.jakeri.test.PingCacheLoader</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
    </distributed-scheme>

    Hi Jakkke,
    The near-cache scheme allows configurable levels of cache coherency, from a basic expiry-based cache, to an invalidation-based cache, to data versioning, depending on the coherency requirements. A near cache is commonly used to get close to the performance of a replicated cache without losing the scalability of the underlying distributed cache. This is achieved by keeping a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete data set in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events that invalidate entries in the <front-scheme>, depending on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want entries to expire in both the <front-scheme> and the <back-scheme>, you need to specify an expiry-delay on both schemes, as in your last example. If you are expiring items in the <back-scheme> only so that they get reloaded from the cache store, while the <front-scheme> keys stay the same (only the values should be refreshed from the cache store), then you need not set an expiry-delay on the <front-scheme>; instead, set the invalidation strategy to present. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you need to specify it in the <front-scheme> configuration.
    The near cache can keep the front-scheme and back-scheme data in sync, but the expiry of entries is not synced. The front scheme is always a subset of the back scheme.
    Hope this helps!
    Cheers,
    NJ
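
    As a rough way to observe the difference between the two configurations above, the following hedged sketch primes the front tier, waits past the 30s back-tier expiry-delay, and then reports whether the key is still held in the front map (the cache name is assumed to map to the posted near-scheme; the key is hypothetical):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.cache.NearCache;

        public class FrontTierExpiryCheck {
            public static void main(String[] args) throws InterruptedException {
                // Assumed to be mapped to the "config-part" near-scheme shown above.
                NamedCache cache = CacheFactory.getCache("part-cache");

                // Prime the front tier with a read-through load.
                Object before = cache.get("part-42");
                System.out.println("initial value: " + before);

                // Wait past the 30s expiry-delay configured on the back tier.
                Thread.sleep(35000L);

                // Report whether the key still sits in the local front map; comparing
                // this output for the two configurations shows whether the back-tier
                // expiry alone removed the front-tier entry.
                if (cache instanceof NearCache) {
                    NearCache near = (NearCache) cache;
                    System.out.println("still in front tier: "
                            + near.getFrontMap().containsKey("part-42"));
                }
                System.out.println("value after expiry window: " + cache.get("part-42"));
                CacheFactory.shutdown();
            }
        }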

  • How to make a node storage disabled for a particular cache?

    I have multiple caches that are distributed across the nodes in my application. Can I disable storage (localstorage=false) for a certain cache on a node?
    Intention is to make something like this:
    CacheA distributed between node1 and node2
    CacheB distributed between node1 and node3
    Thus no node here would be a completely non-storage node. Hence I would need to specify this in the coherence-config.xml. Would the following be correct for node 2?
    <distributed-scheme>
      <scheme-name>CacheB</scheme-name>
      <service-name>DistributedCache</service-name>
      <local-storage>false</local-storage>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>backingSchemeB</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
      <scheme-name>backingSchemeB</scheme-name>
    </local-scheme>
    What should be the backing scheme, as my local storage is false for cacheB?

    Hi Mahesh,
    you can control the storage-enablement of distributed caches on a per-service basis.
    In your case, you have to put cache A and cache B into different services (serviceA and serviceB for the example) and run service A as storage-enabled on nodes 1 and 2, and service B as storage-enabled on nodes 1 and 3.
    For more information, look at my post from two years ago:
    Re: Partitioned cache - where to put what config files?
    Best regards,
    Robert
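
    To make that concrete, here is a hedged sketch of how node 2 might be launched. The per-service system-property names are hypothetical and would need to match <local-storage system-property="..."> attributes in the shared cache configuration, one per distributed-scheme/service:

        import com.tangosol.net.DefaultCacheServer;

        public class Node2Launcher {
            public static void main(String[] args) {
                // Hypothetical properties, wired into the cache config as e.g.
                //   <local-storage system-property="example.serviceA.localstorage">true</local-storage>
                // Node 2 stores data for service A (cache A) but not service B (cache B).
                System.setProperty("example.serviceA.localstorage", "true");
                System.setProperty("example.serviceB.localstorage", "false");

                DefaultCacheServer.main(args);
            }
        }

    Nodes 1 and 3 would flip the properties accordingly (node 1 enables both services, node 3 enables only service B).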

  • Backing Map Scheme with storage disabled

    I've been playing around with Cache Servers and Cache Clients. If I have a distributed cache, the coherence-cache-config.xml for the client still requires a backing map scheme entry. If the client doesn't physically store data then why does it need a backing map scheme?
    I'm interested to know if I'm understanding the concept of the cache client correctly.
    Is Coherence just expecting to read the XML but will ignore it because I've set local storage to false, and therefore it doesn't actually matter what is in the backing map scheme?
    Cheers
    Mike

    Hi Mike,
    You are right - storage-disabled nodes do not instantiate a backing map. Coherence reads the XML during initial configuration validation, and on cache clients the 'backing-map-scheme' element is not used.
    In general it is cleaner, less error-prone and easier to maintain a single configuration file and let Coherence worry about what to instantiate on different nodes.
    Regards,
    Dimitri

  • Update to near cache value is not reflected in the other near cache

    Hi,
    I have a cluster of two WebLogic servers, each running a Tangosol cache within it. I also have two separate Tangosol cache servers running on the same two machines.
    When I make an update to the Tangosol cache in one of the WebLogic applications, unfortunately the update is not picked up by the other WebLogic application's Tangosol cache.
    What do I need to do so that all updates are available in all the Tangosol caches?
    example below:
    initial:
    server1: weblogic tangosol near cache, key = "One", Value = null
    server2: weblogic tangosol near cache, key = "One", Value = null
    update on server1;
    server1: weblogic tangosol near cache, key = "One", Value = "New York"
    server2: weblogic tangosol near cache, key = "One", Value = null <- still null (?)
    Why is the value on server2 not updated also?
    Thanks.
    The tangosol configuration is a near distributed cache. The xml is below. Thanks.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <!--
        Caches with any name will be created as default replicated.
        -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-near</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!--
        Default simple Near caching scheme with default eviction local cache
        in the front-tier and a non-expiring distributed cache in the back-tier.
        -->
        <near-scheme>
          <scheme-name>default-near</scheme-name>
          <front-scheme>
            <local-scheme>
              <scheme-ref>default-eviction</scheme-ref>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <distributed-scheme>
              <scheme-ref>default-distributed</scheme-ref>
            </distributed-scheme>
          </back-scheme>
          <!--
          invalidation auto = all changes in back caches gets notified
          present = subscribes to only changes thats held in near cache
          -->
          <invalidation-strategy>auto</invalidation-strategy>
          <!-- the cache server starts does not start all the schemes -->
          <autostart>false</autostart>
        </near-scheme>
        <!--
        Default Distributed caching scheme.
        -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!--
        Default backing map scheme definition used by all the caches that do
        not require any eviction policies
        -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params></init-params>
        </class-scheme>
        <!--
        Default eviction policy scheme.
        -->
        <local-scheme>
          <scheme-name>default-eviction</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>100000</high-units>
          <low-units>0</low-units>
          <expiry-delay>0</expiry-delay>
          <flush-delay>0</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
      </caching-schemes>
    </cache-config>

    We can certainly set up a call to discuss what you are seeing. Please email [email protected] with your phone number and other information to set up the call.
    The more information that you can provide in advance (on the forum or by email), the better. For example, send your configuration settings (cache configuration, cluster configuration) files.
    Peace.
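
    One hedged way to narrow this down (cache and key names below are hypothetical) is to register a MapListener on both WebLogic nodes and check whether back-tier update events from one node reach the other at all; if they do not, the two sides are most likely not in the same cluster or are using different cache names:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.MapEvent;
        import com.tangosol.util.MultiplexingMapListener;

        public class NearCacheEventProbe {
            public static void main(String[] args) throws InterruptedException {
                // With the posted configuration, any cache name maps to "default-near".
                NamedCache cache = CacheFactory.getCache("probe-cache");

                cache.addMapListener(new MultiplexingMapListener() {
                    protected void onMapEvent(MapEvent evt) {
                        // Fires for inserts, updates and deletes seen by this member.
                        System.out.println("event: " + evt);
                    }
                });

                // Keep the JVM alive long enough to do a put from the other node.
                Thread.sleep(60000L);
                CacheFactory.shutdown();
            }
        }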

  • EM for Coherence - Cannot automatically start departed storage enabled nodes

    Hi Guru,
    I have a cluster with 4 storage-enabled nodes. I want EM to monitor those 4 storage-enabled nodes and automatically bring nodes back up if they go down. So I set the "Nodes Replenish and Entity Discovery Alert Metric -> Cluster Size Change (To Replenish Nodes)" as follows:
             Warning Threshold: Not Defined
             Critical Threshold: 3
             Comparison Operator: <
             Occurrences Before Alert: 1
    I manually killed 2 storage nodes and hoped EM would automatically bring them back up. Unfortunately, this never happens. I cannot even see the correct "Severity Message" on the GUI; it always shows "0 nodes departed Coherence cluster".
    Has anyone had a similar problem? Any hints are appreciated!
    Thanks
    Hysun

    Hi,
    It looks like your cache servers have not used the correct cache configuration file so they do not have a service with the name DistributedSessions. You can see this in your log output here:
    Services
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=5}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=5}
      PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
      ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=3, Version=3.0, OldestMemberId=2}
      Optimistic{Name=OptimisticCache, State=(SERVICE_STARTED), Id=4, Version=3.0, OldestMemberId=2}
      InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=5, Version=3.1, OldestMemberId=2}
    You said in the original post that you used the following JVM arguments:
    -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    ...but none of those specify the cache configuration to use (in your case session-cache-config.xml) so the node will use the default configuration file; the default name is coherence-cache-config.xml which is inside the Coherence jar file.
    You need to add the following JVM argument (shown here alongside your existing arguments):
    -Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    JK

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money.

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x Clustered Hyper-V Servers running Server 2008 R2 (72TB Ram, dual cpu etc...)
    1x Dell MD3220i iSCSI with dual 1GB connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 Expansion Array with 12x 2TB 7.2K drives in RAID 10 - Tier 2 storage, large vm's, files etc...
    ~25 VM's running all manner of workloads, SQL, Exchange, WSUS, Linux web servers etc....
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125MB/s over the dual 1GB iSCSI connections to each physical server (we have tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6TB of 12TB RAID 10 space)
    Migrating to 10GB server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual port 10GB NIC team in each server and virtualize cluster, live migration, vm and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities.  Use our existing 2TB drives for capacity and purchase sufficient SSD's to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity (RDMA, extra 10GB NICs etc.) are out of our reach.
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active. Is that correct?
    If CSVs are active/passive, it is suggested that you should have a CSV for each node in your cluster. How in production do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management etc.).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance for anyone who can be bothered to read through all that and help me out!  I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can of course use a direct SAS connection with a 3-node cluster (or 4-node, 5-node, etc.). It would be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes stay local, travelling down the SAS fabric. With an SoFS layer added, you have the same amount of I/O targeting SAS plus Ethernet, with its latency (huge compared to SAS) sitting between the requestor and your data on the SAS spindles, and the I/O wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-to-SoFS layer. The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favor of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you would have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from multiple hypervisor nodes is done through the storage fabric (in your case SAS); only metadata updates go through the coordinator node, over Ethernet. Redirected I/O is used in only two cases: a) no SAS connectivity from the hypervisor node (but Ethernet is still present), and b) badly implemented backup software keeping access to the CSV through the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for references:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there is virtually no point in having one CSV per hypervisor. There are cases where you would still do this. For example, if you had both all-flash and combined spindle/flash LUNs and you know for sure you want some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". Another case is a many-node cluster: there, multiple nodes fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even if ODX is present it is not going to help). Again, this is a place where SoFS "helps", as the intermediate proxy level turns block I/O into file I/O, so SCSI reservation conflicts occur only between the two SoFS nodes instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (basically feeding DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that is NOT your case, so it DOES NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster in redirected mode for a very short period of time. Microsoft says NEVER. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so data would be incremental after the initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • I recently had to disable and enable all add-ons on Firefox. Now when I google something, when I hover over the website preview and click 'cached' it won't highlight the key terms anymore. How can I fix this?

    I recently had to disable and enable all add-ons on Firefox. Now when I google something, when I hover over the website preview and click 'cached' it won't highlight the key terms anymore. How can I fix this?

    Clear the cache and the cookies from sites that cause problems.
    "Clear the Cache":
    *Tools > Options > Advanced > Network > Offline Storage (Cache): "Clear Now"
    "Remove Cookies" from sites causing problems:
    *Tools > Options > Privacy > Cookies: "Show Cookies"
    Start Firefox in [[Safe Mode]] to check if one of the extensions or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance/Themes).
    *Don't make any changes on the Safe mode start window.
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes

  • Stale Near Cache data when all Cache Servers fail and are restarted

    Hi,
    We are currently making use of Coherence 3.6.1. We are seeing an issue. The following is the scenario:
    - caching servers and proxy server are all restarted.
    - one of the caches, say "TestCache", is bulk loaded with some data (e.g. key/values key1/value1, key2/value2, key3/value3) on the caching servers.
    - a near cache client connects to the server cluster via the Extend proxy server.
    - the near cache client is primed with all data from "TestCache" on the cache servers. Hence, the near cache client now has all key/values locally (i.e. key1/value1, key2/value2, key3/value3).
    - all caching servers in the cluster go down, but the Extend proxy server stays up.
    - all cache servers in the cluster come back up.
    - we reload all cache data into "TestCache" on the cache servers, but this time it only has key/values key1/value1, key2/value2.
    - so the caching servers' state for "TestCache" is that it should only have key1/value1, key2/value2, but the near cache client still thinks it has key1/value1, key2/value2, key3/value3. In effect, it still knows about key3/value3, which no longer exists.
    Is there any way for the near cache client to invalidate key3/value3 automatically? This scenario happens because the Extend proxy server is not actually down, only the caching servers are, but the near cache client for some reason doesn't know about this and does not invalidate its near cache data.
    Can anyone help?
    Thanks
    Regards
    Wilson.

    Hi,
    I do have the invalidation strategy set to "ALL". Remember, this cache client is connected via the Extend proxy server, whose connectivity is still fine; it is just the caching servers holding the storage data in the cluster that are all down.
    Please let me know what else we can try.
    Thanks
    Regards
    Wilson.

  • Can't open pages that need login info. I've cleared the cache, disabled extensions and deleted all cookies to no avail....help!!!

    Any page that requires a username and password will not open. This started last Thursday, and at that time I was able to restore access by clearing the cache, disabling extensions and deleting cookies. Subsequently that fix failed, and now doing all those things has no effect.

    I found the answer: http://forums.adobe.com/thread/1018071?tstart=0 It was a conflict with RealPlayer. Incidentally, in order to find this thread again, I had to use a google search. Is there really no way to find a person's own questions via our own profile???

  • L2 and L3 Cache reporting as disabled

    CHUD 4.3.2 reports both L2 and L3 cache as disabled and trying to switch them on through processor prefs does nothing.
    Is this right? Do I need to boot to the HW test to see if the cache is b0rken?
    I have third party USB2 and SATA controllers in there - Can that cause the OS to shut down the cache?

    Hi
    From the Apple menu choose About This Mac, then click More Info;
    on the left choose Diagnostics and you'll see the last diagnostics run on your system.
    In the same place, check whether the info reports the L2 and L3 caches as enabled.
    I suggest removing CHUD and restarting, then check System Profiler.
    Have you noticed your Mac slowing down during normal tasks?

  • Disable  username and/or password caching

    Hi,
    Whenever I run SAP GUI to log on to an SAP system, the last user names are available for me to select. How do I disable this username and/or password caching?
    Regards,
    Toan Do

    Open a SAP GUI window and press ALT+F12 -> Options -> Local Data (tab); here you can switch the cache on or off and clear it.
    Raja

  • Want to keep the data on one node and allow remote nodes to read it?

    By default, we have a distributed cache with backup-count=1 and the data is spread over all the members in equal parts.
    Is there a way to set it up differently? I need only one 'main' node to put entries into the cache and keep them locally, while remote nodes can only read entries. The reason: I want to update the entries on the main node quite frequently (without them travelling anywhere over the network), while remote read operations are infrequent. Also, I want the whole dataset to survive the shutdown of any secondary node. In other words, I need 'all eggs in one basket' with remote read access.

    Andrey -
    Provided the 'remote/secondary' nodes are within the local area network, you could use a dedicated distributed cache service; storage enabled
    on the 'main' node and storage disabled on the 'secondary' nodes. Access from outside the LAN will require configuring a proxy service on the
    'main' node and 'secondary' node access over Coherence*Extend.
    /Mark

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, i know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg) , there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!
