InvocationService on storage disabled nodes

Hi,
Is it safe to assume that InvocationService tasks will not fire on cluster nodes that were started with '-Dtangosol.coherence.distributed.localstorage=false'? I've tried both the 'execute' and 'query' methods on a cluster that contains both storage-enabled and storage-disabled nodes, and the tasks only seem to fire on the storage-enabled nodes. The reason I'm curious is that the InvocationService.getInfo().getServiceMembers() method returns all the cluster nodes (not just the storage-enabled ones), and the InvocationService threads show up in the thread dumps of the storage-disabled nodes.
Thanks,
Jim

Hi Rob,
Thanks for your reply. This turned out to be an elusive problem on my end, complicated by the fact that Coherence was 'eating' an exception. One of the member fields in my invocation class was not serializable, but the InvocationService thread did not give any indication of this error. It wasn't until I put a try/catch around the InvocationService.execute method that I discovered the problem. The local node was the only storage-enabled node, so that explains why the invocation was not being executed on the storage-disabled nodes.
This might be a good candidate for a bug fix in Coherence (to log some indication that an exception occurred). As is, a good programming tip is to ALWAYS put a try/catch around InvocationService.execute() and InvocationService.query().
Jim
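The root cause Jim describes -- a non-serializable member field silently breaking InvocationService.execute() -- can be reproduced in plain Java, without Coherence on the classpath. This is only a sketch: LeakyTask, SerializationCheck and trySerialize are made-up names for illustration, but the same pre-flight check could be run on any task before submitting it to the service.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical task mirroring the bug: the class itself is Serializable,
// but one of its member fields (a plain Object here) is not.
class LeakyTask implements Serializable {
    private static final long serialVersionUID = 1L;
    private final Object resource = new Object(); // java.lang.Object is not Serializable
}

public class SerializationCheck {
    // Returns null if the task serializes cleanly, otherwise a description of the
    // failure -- the exception that would otherwise be swallowed by the service.
    public static String trySerialize(Serializable task) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(task);
            return null;
        } catch (NotSerializableException e) {
            return "NotSerializableException: " + e.getMessage();
        } catch (IOException e) {
            return e.toString();
        }
    }

    public static void main(String[] args) {
        // Prints a NotSerializableException description for the broken task.
        System.out.println(trySerialize(new LeakyTask()));
    }
}
```

Running this surfaces the NotSerializableException immediately, which is exactly the kind of error that only appeared once a try/catch was added around execute().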

Similar Messages

  • Storage disabled nodes and near-cache scheme

    This is probably a newbie question. I have a named cache with a near-cache scheme, with a local-scheme as the front tier. I can see how this will work in a cache-server node. But I have an application node which pushes a lot of data into the same named cache, and that node is set to be storage disabled.
    My understanding of a local cache scheme is that data is cached locally in the heap for faster access and the writes are delegated to the service for writing to backing map. If my application is storage disabled, is the local cache still used or is all data obtained from the cache-servers?

    Hello,
    Your understanding is correct. To answer your question: writes will always go through the cache servers. A put also always goes through the cache servers, but the near cache may or may not be populated at that point.
    hth,
    -Dave
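    For reference, a near-scheme along the lines being discussed might look like this in the cache configuration (a sketch only; the scheme names are made up). The front-tier local-scheme serves repeated reads locally, while every write goes through the back-tier distributed service:

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <invalidation-strategy>present</invalidation-strategy>
</near-scheme>
```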

  • Backing Map Scheme with storage disabled

    I've been playing around with Cache Servers and Cache Clients. If I have a distributed cache, the coherence-cache-config.xml for the client still requires a backing map scheme entry. If the client doesn't physically store data then why does it need a backing map scheme?
    I'm interested to know if I'm understanding the concept of the cache client correctly.
    Is Coherence just expecting to read in the XML but ignoring it because I've set local storage to false, so that it doesn't actually matter what is in the backing map scheme?
    Cheers
    Mike

    Hi Mike,
    You are right - storage-disabled nodes do not instantiate the backing map. Coherence reads the XML during initial configuration validation, and on cache clients the 'backing-map-scheme' element is not used.
    In general it is cleaner, less error prone and easier to maintain a single configuration file and let Coherence worry about what to instantiate on different nodes.
    Regards,
    Dimitri
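    The single-configuration-file approach Dimitri recommends usually hinges on the system-property override, so servers and clients share one distributed-scheme and clients simply start with -Dtangosol.coherence.distributed.localstorage=false. A sketch (the scheme name is illustrative):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <local-storage system-property="tangosol.coherence.distributed.localstorage">true</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```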

  • EM for Coherence - Cannot automatically start departed storage enabled nodes

    Hi Guru,
    I have a cluster with 4 storage-enabled nodes. I want EM to monitor those 4 nodes and automatically bring them up if they go down. So I set the "Nodes Replenish and Entity Discovery Alert Metric ->
    Cluster Size Change (To Replenish Nodes)" as follows:
        Warning Threshold: Not Defined
        Critical Threshold: 3
        Comparison Operator: <
        Occurrences Before Alert: 1
    I manually killed 2 storage nodes and hoped EM could automatically bring them up. But unfortunately, this never happens. I cannot even see the correct "Severity Message" on the GUI; it always shows "0 nodes departed Coherence cluster".
    Has anyone had a similar problem? Any hints are appreciated!
    Thanks
    Hysun

    Hi,
    It looks like your cache servers have not used the correct cache configuration file so they do not have a service with the name DistributedSessions. You can see this in your log output here:
    Services
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=5}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=5}
      PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
      ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=3, Version=3.0, OldestMemberId=2}
      Optimistic{Name=OptimisticCache, State=(SERVICE_STARTED), Id=4, Version=3.0, OldestMemberId=2}
      InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=5, Version=3.1, OldestMemberId=2}
    You said in the original post that you used the following JVM arguments:
    -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    ...but none of those specify the cache configuration to use (in your case session-cache-config.xml) so the node will use the default configuration file; the default name is coherence-cache-config.xml which is inside the Coherence jar file.
    You need to add the following JVM argument
    -Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
    JK

  • Downsides of using Proxy servers as a storage enabled node

    Hello,
    We are doing some investigation on proxy server configuration. I read that "Oracle Coherence recommends it's better to use the proxy server as storage disabled".
    Can anyone explain the downside of using a proxy server as a storage-enabled node?
    Thanks
    Prab

    It seems that I was wrong in my original answer. The proxy uses a binary pass-through mode, so if the proxy and cache service are using the same serialization format, (de)serialization is largely avoided.
    However, there is other overhead associated with managing potentially unpredictable client workloads, so using a proxy server as a storage-enabled node is still discouraged.
    Thanks,
    Wei

  • How to set CORS properties for BLOB Storage using node?

    Hi - I just got started with Azure using a Node-based web site and mobile services.
    I am following various documentation in order to provide an API for users to upload images via a time-restricted SAS for the BLOB Storage.
    In order to upload my image, I need to set the CORS configuration for the BLOB Storage. Unfortunately this cannot be done via the management portal.
    I'm unclear as to how to accomplish this. I'm considering using the startup.js file in my mobile service to make a post request to the BLOB Storage REST API:
    http://msdn.microsoft.com/en-us/library/windowsazure/hh452235.aspx
    Are there appropriate methods in the Node SDK to make this easier, especially the signing part?
    What is the recommended way for setting CORS properties for the BLOB Storage via Node?
    Thanks for your help
    Stefan

    Unfortunately the Node SDK does not support CORS functionality yet. Your option would be to write code which consumes the REST API for setting CORS. Not sure if it helps, but there's a free tool written by my company which you can use to set CORS on your storage account. More information about this tool can be found here:
    http://blog.cynapta.com/2013/12/cynapta-azure-cors-helper-free-tool-to-manage-cors-rules-for-windows-azure-blob-storage/
    Hope this helps.

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, i know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg), there is only a very slight degradation. There is no apparent pause, and the maximum operation times against the cluster barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!
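    For completeness, the thread count mentioned above is configured on the distributed-scheme; a sketch (the value 8 is illustrative, and as noted, raising it did not noticeably shorten the redistribution pause in this benchmark):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>hibL2-distributed</service-name>
  <!-- Worker threads for the service; zero means all work runs on the service thread itself. -->
  <thread-count>8</thread-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```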

  • Silverlight error - It appears you have Isolated Storage disabled in your Silverlight options.

    I installed silverlight on my Macbook pro mid 2009 so that I could watch quickflix but I keep getting the message
    Problem with playback
    It appears you have Isolated Storage disabled in your Silverlight options.
    To correct this problem Right Click on the black background of the player area to the right of this message and
    Select "Silverlight" to open the Microsoft Silverlight Configuration.
    Click on the "Application Storage" tab along the top and then Make sure the checkbox next to
    "Enable application storage" is checked, press
    "OK" and try again.
    but the application storage is already enabled!! I have tried reinstalling but to no avail. I am using firefox browser (up-to-date). My computer details are below
    Processor  2.66 GHz Intel Core 2 Duo
    Memory  4 GB 1067 MHz DDR3
    Graphics  NVIDIA GeForce 9400M 256 MB
    Software  OS X 10.9.5 (13F34)
    Please help

    Hi,
    After you enable application storage, please try restarting your browser or the application.
    For more information, please check the article below:
    http://www.microsoft.com/getsilverlight/resources/documentation/grouppolicysettings.aspx
    Best Regards,

  • How to make a node storage disabled for a particular cache?

    I have multiple caches that are distributed across the nodes in my application. Can I disable storage (localstorage=false) for a certain cache on a node?
    The intention is to make something like this:
    CacheA distributed between node1 and node2
    CacheB distributed between node1 and node3
    Thus no node here would be a completely non-storage node. Hence I would be required to specify this in the coherence-config.xml. Would the following be right for node 2?
    <distributed-scheme>
        <scheme-name>CacheB</scheme-name>
        <service-name>DistributedCache</service-name>
        <local-storage>false</local-storage>
        <backing-map-scheme>
            <local-scheme>
                <scheme-ref>backingSchemeB</scheme-ref>
            </local-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
        <scheme-name>backingSchemeB</scheme-name>
    </local-scheme>
    What should be the backing scheme, as my local storage is false for cacheB?

    Hi Mahesh,
    you can control the storage enablement of distributed caches on a per-service basis.
    In your case, you have to put cache A and cache B into different services (serviceA and serviceB for the example) and run serviceA as storage-enabled on nodes 1 and 2, and serviceB as storage-enabled on nodes 1 and 3.
    For more information, look at my post from two years ago:
    Re: Partitioned cache - where to put what config files?
    Best regards,
    Robert
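    Robert's per-service split can be sketched as two distributed-scheme entries in one shared configuration file (all names and system properties below are made up for illustration). Each node then decides per service whether it is storage enabled; e.g. node3 would start with -Dexample.serviceA.localstorage=false -Dexample.serviceB.localstorage=true:

```xml
<distributed-scheme>
  <scheme-name>scheme-a</scheme-name>
  <service-name>serviceA</service-name>
  <local-storage system-property="example.serviceA.localstorage">true</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>scheme-b</scheme-name>
  <service-name>serviceB</service-name>
  <local-storage system-property="example.serviceB.localstorage">true</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```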

  • Disableing nodes in JTree

    Hi All,
    is it possible to disable nodes in a JTree, i.e. grey them out so the user is not able to select them?
    thanks

    If you use DefaultMutableTreeNode as your node's class, then it's not possible (it does not contain any parameter to provide this functionality), but you can make your own derivation of this class and do it yourself.
    (An interesting design question: what would you expect the disabled node to do? If you have some user-assigned action, you could probably disable it quite easily, but if you want to prevent the disabled node from being scrolled up/down, I'm afraid it would be a bit more difficult.)

  • Limiting the storage enabled nodes?

    A somewhat common mistake in our development environment is that developers run test programs against our test cluster and forget to specify that the node (JVM) they start should NOT be storage enabled. This causes re-balancing or, even worse (when during development we run with backup count = 0 to fit more data into our test machines' limited memory), data loss in the cluster when they shut down the node by killing it.
    Is there a way to limit which nodes are allowed to be storage enabled (in the same way one can specify the IPs of the nodes that are allowed to participate in the cluster at all)? If this were possible we could set up a test cluster where no nodes other than the intended ones are allowed to be storage enabled, and any other node trying to contribute storage would be refused entry to the cluster!
    Best Regards
    Magnus

    There are a few improvements to configuration in 3.2. First of all, eval/dev/prod licenses can now have specific overrides, to avoid accidentally using a dev configuration in production, for example.
    We also introduced a configurable "member role" in 3.2 which will eventually be used to drive configuration options (and which customers requested in order to affect their own application behavior).
    Peace,
    Cameron.
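    For reference, the IP-based membership restriction Magnus alludes to is the authorized-hosts list in the operational override file (addresses below are illustrative). Note that it gates cluster membership as a whole, not storage enablement, which is why the role-based configuration Cameron mentions is needed for the finer-grained case:

```xml
<coherence>
  <cluster-config>
    <authorized-hosts>
      <host-address>192.168.0.10</host-address>
      <host-address>192.168.0.11</host-address>
    </authorized-hosts>
  </cluster-config>
</coherence>
```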

  • Disabling node in JTree

    Hi!
    I have a program and I want to disable a specific node in the tree when the user presses a button. My nodes are derived from IconNode, a class that extends DefaultMutableTreeNode.
    I've tried to redefine setEnabled(state) in IconNode, but without success.
    How can I do this?
    Thanks a bunch

    Make your node able to hold an enabled/disabled state. On your button click, change the node's state. Create a custom TreeCellRenderer which draws the node according to its state - you can use setEnabled(boolean) on the label. Remember to notify the TreeModel after your node changes its state.
    Denis Krukovsky
    http://dotuseful.sourceforge.net/
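    Denis's recipe can be sketched in plain Swing (the class and method names below are made up for illustration): the node carries its own enabled flag, and a custom renderer greys the label out when the flag is off.

```java
import java.awt.Component;
import javax.swing.JTree;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeCellRenderer;

// A tree node that carries its own enabled/disabled state.
class StatefulNode extends DefaultMutableTreeNode {
    private boolean nodeEnabled = true;

    StatefulNode(Object userObject) {
        super(userObject);
    }

    void setNodeEnabled(boolean enabled) {
        this.nodeEnabled = enabled;
    }

    boolean isNodeEnabled() {
        return nodeEnabled;
    }
}

// A renderer that greys out nodes whose flag is off, via setEnabled on the label.
class DisablingRenderer extends DefaultTreeCellRenderer {
    @Override
    public Component getTreeCellRendererComponent(JTree tree, Object value, boolean selected,
            boolean expanded, boolean leaf, int row, boolean hasFocus) {
        Component c = super.getTreeCellRendererComponent(tree, value, selected, expanded, leaf, row, hasFocus);
        boolean enabled = !(value instanceof StatefulNode) || ((StatefulNode) value).isNodeEnabled();
        c.setEnabled(enabled);
        return c;
    }
}

public class DisableNodeDemo {
    public static void main(String[] args) {
        StatefulNode node = new StatefulNode("child");
        node.setNodeEnabled(false);
        DisablingRenderer renderer = new DisablingRenderer();
        Component c = renderer.getTreeCellRendererComponent(new JTree(), node, false, false, true, 0, false);
        System.out.println("enabled after disabling: " + c.isEnabled());
    }
}
```

    As Denis notes, the TreeModel must still be notified (e.g. DefaultTreeModel.nodeChanged) after toggling the flag so the tree repaints, and actually vetoing selection of disabled nodes takes a TreeSelectionListener or a custom selection model on top of this.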

  • Streaming files from Azure Blob Storage through Node.js server

    Hello,
    I am currently trying to implement a download server with a Node.js Express app, in such a way that website visitors would be able to download files via stream straight from Azure Blob storage - so the server won't need to download files to its local storage. Here's how I do that:
    outStream is the response object (res) received by the Express router; blobSvc is initialized like this:
    blobSvc = azure.createBlobService(conf.azureStorageConfig.account, conf.azureStorageConfig.key, conf.azureStorageConfig.host);
    There are several issues I face. For example, while small files get downloaded fine, the bigger ones (even 40 MB) do not finish the download successfully, throwing an error in the console:
    It has to be mentioned that the files are zip-archives, and while the error is thrown, the file is not completely downloaded to the client's machine - the archive is broken. 
    There are other issues, like the fact that the client can't download more than 1 file from the server simultaneously - he gets response timeout after trying to start downloading the second file.
    What is the right way to use streaming with azure storage library in Node for client downloads?
    Best Regards,
    Petr.
    Arction Ltd.

    Hi,
    Thank you for posting in here.
    We are checking on this and will get back at the earliest.
    Regards,
    Manu Rekhar

  • USB Mass storage disabled

    How do I enable the USB Mass Storage after it had been disabled?

    Look at point 5 to enable it:
    http://www.sevenforums.com/tutorials/181611-usb-storage-device-enable-disable-connecting.html

  • Best practice to handle the class definitions among storage enabled nodes

    We have a common set of cache servers that are shared among various applications. A common problem that we face upon deployment is a missing class definition newly introduced by one of the application nodes. Any practical approaches / best practices to address this problem?
    Edited by: Mahesh Kamath on Feb 3, 2010 10:17 PM

    Is it the cache servers themselves or your application servers that are having problems loading classes?
    In order to dynamically add classes (in our case scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am, however, not so sure how/if this would work for the cache servers themselves, if that is your problem.
    Anyhow, a simplistic cache class loader may look something like this:
    import com.tangosol.net.CacheFactory;

    /**
     * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
     * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
     * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
     * cache. For example the first three parts of the binary name (com.scania.xxxx in the example) could be the
     * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
     * a specific application are stored in the same partition and this way can be updated atomically by a processor or
     * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
     * application will only involve one cache node!
     */
    public class CacheClassLoader extends ClassLoader {
        public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";

        private final String classCacheName;

        public CacheClassLoader() {
            this(DEFAULT_CLASS_CACHE_NAME);
        }

        public CacheClassLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public CacheClassLoader(ClassLoader parent, String classCacheName) {
            super(parent);
            this.classCacheName = classCacheName;
        }

        @Override
        public Class<?> loadClass(String className) throws ClassNotFoundException {
            byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
            if (bytes == null) {
                throw new ClassNotFoundException(className);
            }
            return defineClass(className, bytes, 0, bytes.length);
        }
    }
    And a simple "loader" that puts the classes in a JAR file into the cache may look like this:
    import com.tangosol.net.CacheFactory;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    /**
     * This class loads classes from a JAR file into a code cache.
     */
    public class JarToCacheLoader {
        private final String classCacheName;

        public JarToCacheLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public JarToCacheLoader() {
            this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
        }

        public void loadClassFiles(String jarFileName) throws IOException {
            JarFile jarFile = new JarFile(jarFileName);
            System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
            for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
                final JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                    final InputStream inputStream = jarFile.getInputStream(entry);
                    final long size = entry.getSize();
                    int totalRead = 0;
                    int read = 0;
                    byte[] bytes = new byte[(int) size];
                    do {
                        read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                        totalRead += read;
                    } while (read > 0);
                    if (totalRead != size) {
                        System.out.println(entry.getName() + " failed to load completely, " + size + ", " + read);
                    } else {
                        // Convert the entry path (com/zzz/xxx.class) to the binary class name (com.zzz.xxx).
                        String binaryName = entry.getName()
                                .substring(0, entry.getName().length() - ".class".length())
                                .replace('/', '.');
                        System.out.println(binaryName);
                        CacheFactory.getCache(classCacheName).put(binaryName, bytes);
                    }
                    inputStream.close();
                }
            }
        }

        public static void main(String[] args) {
            JarToCacheLoader loader = new JarToCacheLoader();
            for (String jarFileName : args) {
                try {
                    loader.loadClassFiles(jarFileName);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    Standard disclaimer - this is prototype code, use at your own risk :-)
    /Magnus
