EM for Coherence - Cannot automatically start departed storage enabled nodes

Hi Guru,
I have a cluster with 4 storage-enabled nodes. I want EM to monitor those 4 storage-enabled nodes and automatically bring up any node that goes down. So I set the "Nodes Replenish and Entity Discovery Alert Metric ->
Cluster Size Change (To Replenish Nodes)" as follows:
         Warning Threshold: Not Defined
         Critical Threshold: 3
         Comparison Operator: <
         Occurrences Before Alert: 1
I manually killed 2 storage nodes and hoped EM would automatically bring them up. Unfortunately, this never happened. I cannot even see the correct "Severity Message" on the GUI; it always shows "0 nodes departed Coherence cluster".
Has anyone had a similar problem? Any hints are appreciated!
Thanks
Hysun

Hi,
It looks like your cache servers have not used the correct cache configuration file, so they do not have a service with the name DistributedSessions. You can see this in your log output here:
Services
  ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=5}
  InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=5}
  PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
  ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=3, Version=3.0, OldestMemberId=2}
  Optimistic{Name=OptimisticCache, State=(SERVICE_STARTED), Id=4, Version=3.0, OldestMemberId=2}
  InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=5, Version=3.1, OldestMemberId=2}
You said in the original post that you used the following JVM arguments:
-Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
...but none of those specifies the cache configuration to use (in your case session-cache-config.xml), so the node will use the default configuration file; the default name is coherence-cache-config.xml, which is inside the Coherence JAR file.
You need to add the -Dtangosol.coherence.cacheconfig JVM argument, making the full set:
-Dtangosol.coherence.cacheconfig=session-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.session.localstorage=true -Dtangosol.coherence.cluster=CoherenceCluster -Dtangosol.coherence.clusteraddress=231.1.3.4 -Dtangosol.coherence.clusterport=7744
JK

Similar Messages

  • Declarative workflows cannot automatically start if the triggering action was performed by System Account

    Hi,
    I have a very strange problem. Only in one site collection across the farm am I getting this error while starting an OOTB workflow in a list. Everywhere else it works, even in another site collection within the same web application. I have stopped and restarted all the workflow features, but I still have the same issue.
    sachin

    Errors in ULS logs
    Declarative workflows cannot automatically start if the triggering action was performed by System Account. Canceling workflow auto-start. List Id: %s, Item Id: %d, Workflow Association
    Id: %s
    RunWorkflow: Microsoft.SharePoint.SPException: User cannot be found.   
     at Microsoft.SharePoint.SPUserCollection.get_Item(String loginName)   
     at Microsoft.SharePoint.Workflow.SPWorkflowNoCodeSupport.LoadWorkflowBytesElevated(SPFile file, Int32 fileVer, Int32& userid, DateTime& lastModified)   
     at Microsoft.SharePoint.Workflow.SPWorkflowNoCodeSupport.LoadWorkflowBytesElevated(SPWeb web, Guid docLibID, Int32 fileID, Int32 fileVer, Int32& userid, DateTime&
    lastModified)   
     at Microsoft.SharePoint.Workflow.SPWorkflowNoCodeSupport.<>c__DisplayClass1.<LoadWorkflowBytes>b__0(SPSite elevatedSite, SPWeb elevatedWeb)   
     at Microsoft.SharePoint.Workflow.SPWorkflowNoCodeSupport.LoadWorkflowBytes(SPWeb web, Guid docLibID, Int32 fileID, Int32 fileVer, Int32& userid)   
     at Microsoft.SharePoint.Workflow.SPNoCodeXomlCompiler.LoadXomlAssembly(SPWorkflowAssociation association, SPWeb web)   
     at Microsoft.SharePoint.Workflow.SPWinOeHostServices.LoadDeclarativeAssembly(SPWorkflowAssociation association)   
     at Microsoft.SharePoint.Workflow.SPWinOeHostServices.CreateInstance(SPWorkflow workflow)   
     at Microsoft.SharePoint.Workflow.SPWinOeEngine.RunWorkflow(SPWorkflowHostService host, SPWorkflow workflow, Collection`1 events, TimeSpan timeOut)   
     at Microsoft.SharePoint.Workflow.SPWorkflowManager.RunWorkflowElev(SPWorkflow workflow, Collection`1 events, SPWorkflowRunOptionsInternal runOptions)
    The emailenable value is true, and it just does not work for one site collection. It should not be related to any hotfix.
    Thank you for your suggestions and time. I will dig further.
    sachin

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, i know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
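    As a side note, the '50MB per partition' sizing rule mentioned above can be sketched in code. This is only an illustration, not an official formula; it assumes the common Coherence convention of rounding the partition count up to a prime, and the 96 GB storage figure is just an example input:

```java
// Sketch of the "50MB per partition" sizing rule (illustrative, not an official formula).
// Coherence convention favors a prime partition count, so round up to the next prime.
public final class PartitionSizer {
    static final long FIFTY_MB = 50L * 1024 * 1024;

    public static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    // Smallest prime partition count keeping each partition at or under 50 MB.
    public static long partitionCount(long totalBytes) {
        long count = (totalBytes + FIFTY_MB - 1) / FIFTY_MB; // ceiling division
        while (!isPrime(count)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // ~96 GB of planned cluster storage
        System.out.println(partitionCount(96L * 1024 * 1024 * 1024)); // prints 1973
    }
}
```

    With roughly 96 GB of planned storage this lands on 1973, matching the partition count used in the benchmark above.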
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg) , there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations appear to be blocked in the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!

  • Downsides of using Proxy servers as a storage enabled node

    Hello,
    We are doing some investigation into proxy server configuration. I read that Oracle Coherence recommends running proxy servers storage-disabled.
    Can anyone explain the downsides of using a proxy server as a storage-enabled node?
    Thanks
    Prab

    It seems that I was wrong in my original answer. The proxy uses a binary pass-through mode, so if the proxy and the cache service are using the same serialization format, (de)serialization is largely avoided.
    However, there is other overhead associated with managing potentially unpredictable client workloads, so using a proxy server as a storage-enabled node is still discouraged.
    Thanks,
    Wei

  • Limiting the storage enabled nodes?

    A somewhat common mistake in our development environment is that developers run test programs against our test cluster and forget to specify that the node (JVM) they start should NOT be storage-enabled. This causes re-balancing or, even worse (when during development we run with backup count = 0 to fit more data into our test machines' limited memory), data loss in the cluster when they shut down the node by killing it.
    Is there a way to limit which nodes are allowed to be storage-enabled (in the same way one can specify the IPs of the nodes that are allowed to participate in the cluster at all)? If this were possible we could set up a test cluster where no nodes other than the intended ones are allowed to be storage-enabled, and any other node trying to contribute storage would be refused from joining the cluster!
    Best Regards
    Magnus

    There are a few improvements to configuration in 3.2. First of all, eval/dev/prod licenses can now have specific overrides, to avoid accidentally using a dev configuration in production, for example.
    We also introduce a configurable "member role" in 3.2, which will eventually be used to drive configuration options (and was requested by customers to affect their own application behavior).
    Peace,
    Cameron.
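    In the meantime, one common guard along these lines is to make local storage default to off in the cache configuration, so that only JVMs launched with an explicit override contribute storage. A minimal sketch, assuming a typical distributed scheme (the scheme name is illustrative):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- Storage is OFF unless the JVM is started with
       -Dtangosol.coherence.distributed.localstorage=true -->
  <local-storage system-property="tangosol.coherence.distributed.localstorage">false</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```

    With this default, a developer's test client that sets nothing joins the cluster storage-disabled; only the intended cache servers, started with the explicit override, hold data.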

  • Best practice to handle the class definitions among storage enabled nodes

    We have a common set of cache servers that are shared among various applications. A common problem we face upon deployment is a missing class definition newly introduced by one of the application nodes. Are there any practical approaches / best practices to address this problem?
    Edited by: Mahesh Kamath on Feb 3, 2010 10:17 PM

    Is it the cache servers themselves or your application servers that are having problems loading classes?
    In order to dynamically add classes (in our case scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am, however, not so sure how/if this would work for the cache servers themselves, if that is your problem.
    Anyhow, a simplistic cache class loader may look something like this:
    import com.tangosol.net.CacheFactory;

    /**
     * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
     * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
     * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
     * cache. For example the first three parts of the binary name (com.scania.xxxx in the example) could be the
     * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
     * a specific application are stored in the same partition and this way can be updated atomically by a processor or
     * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
     * application will only involve one cache node!
     */
    public class CacheClassLoader extends ClassLoader {
        public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";
        private final String classCacheName;

        public CacheClassLoader() {
            this(DEFAULT_CLASS_CACHE_NAME);
        }

        public CacheClassLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public CacheClassLoader(ClassLoader parent, String classCacheName) {
            super(parent);
            this.classCacheName = classCacheName;
        }

        // Note: overriding loadClass (rather than findClass) bypasses the normal
        // parent-first delegation, so every class is looked up in the cache first.
        @Override
        public Class<?> loadClass(String className) throws ClassNotFoundException {
            byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
            if (bytes == null) {
                throw new ClassNotFoundException(className);
            }
            return defineClass(className, bytes, 0, bytes.length);
        }
    }
    And a simple "loader" that puts the classes in a JAR file into the cache may look like this:
    import com.tangosol.net.CacheFactory;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    /**
     * This class loads the classes in a JAR file into a code cache.
     */
    public class JarToCacheLoader {
        private final String classCacheName;

        public JarToCacheLoader(String classCacheName) {
            this.classCacheName = classCacheName;
        }

        public JarToCacheLoader() {
            this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
        }

        public void loadClassFiles(String jarFileName) throws IOException {
            JarFile jarFile = new JarFile(jarFileName);
            System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
            for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
                final JarEntry entry = entries.nextElement();
                if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                    final InputStream inputStream = jarFile.getInputStream(entry);
                    final long size = entry.getSize();
                    int totalRead = 0;
                    int read = 0;
                    byte[] bytes = new byte[(int) size];
                    do {
                        read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                        if (read > 0) {
                            totalRead += read;
                        }
                    } while (read > 0 && totalRead < size);
                    if (totalRead != size) {
                        System.out.println(entry.getName() + " failed to load completely, " + size + ", " + totalRead);
                    } else {
                        // Key the bytes with the binary name of the class (dots, no ".class" suffix)
                        String binaryName = entry.getName().substring(0, entry.getName().length() - ".class".length()).replace('/', '.');
                        System.out.println(binaryName);
                        CacheFactory.getCache(classCacheName).put(binaryName, bytes);
                    }
                    inputStream.close();
                }
            }
        }

        public static void main(String[] args) {
            JarToCacheLoader loader = new JarToCacheLoader();
            for (String jarFileName : args) {
                try {
                    loader.loadClassFiles(jarFileName);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    Standard disclaimer - this is prototype code, use at your own risk :-)
    /Magnus

  • Issue with Authentication using JAAS for Coherence

    Hi,
    I have configured the security framework using JAAS for a storage-enabled node.
    I am using a keystore to authenticate users. Below is the code used for authentication:
    Subject subject;
    try {
        subject = Security.login(sUsername, sPassword.toCharArray());
    } catch (Throwable t) {
        subject = null;
        log("Authentication error:");
        log(t);
    }
    if (subject != null) {
        for (Iterator iter = subject.getPrincipals().iterator(); iter.hasNext(); ) {
            Principal principal = (Principal) iter.next();
            log("Principal: " + principal.getName());
        }
    }
    Security.runAs(subject, new PrivilegedAction() {
        public Object run() {
            NamedCache cache = CacheFactory.getCache(CACHE_NAME);
            boolean flag = true;
            while (flag) { } // busy-wait to keep the node alive
            return null;
        }
    });
    I am calling the above class in the callback handler, which is defined in the Coherence operational descriptor:
    <security-config>
      <enabled system-property="tangosol.coherence.security">true</enabled>
      <login-module-name>TestCoherence</login-module-name>
      <access-controller>
        <class-name>com.tangosol.net.security.DefaultController</class-name>
        <init-params>
          <init-param id="1">
            <param-type>java.io.File</param-type>
            <param-value>config/keystore.jks</param-value>
          </init-param>
          <init-param id="2">
            <param-type>java.io.File</param-type>
            <param-value>config/permissions.xml</param-value>
          </init-param>
        </init-params>
      </access-controller>
      <callback-handler>
        <class-name>Test</class-name>
      </callback-handler>
    </security-config>
    I am using the following command-line parameters to bring up the storage-enabled node:
    -Dtangosol.coherence.security.permissions="$CONFIG_PATH/permissions.xml" 
    -Dtangosol.coherence.security.keystore="$CONFIG_PATH/keystore.jks" 
    -Djava.security.auth.login.config="$CONFIG_PATH/login.config" 
    -Dtangosol.coherence.security=true
    Now, as long as the callback-handler thread is alive, the storage-enabled node stays up. As soon as the callback-handler thread dies, the storage-enabled node stops with the following error:
    Exception in thread "main" java.lang.SecurityException: Authentication failed: Error initializing keystore
    at com.tangosol.coherence.component.net.security.Standard.loginSecure(Standard.CDB:36)
    at com.tangosol.coherence.component.net.security.Standard.getTempSubject(Standard.CDB:11)
    at com.tangosol.coherence.component.net.security.Standard.checkPermission(Standard.CDB:18)
    at com.tangosol.coherence.component.net.Security.checkPermission(Security.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.ensureService(SafeCluster.CDB:6)
    at com.tangosol.coherence.component.net.management.Connector.startService(Connector.CDB:25)
    at com.tangosol.coherence.component.net.management.gateway.Remote.registerLocalModel(Remote.CDB:8)
    at com.tangosol.coherence.component.net.management.gateway.Local.registerLocalModel(Local.CDB:8)
    at com.tangosol.coherence.component.net.management.Gateway.register(Gateway.CDB:1)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:50)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:948)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:748)
    at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
    Please let me know where I should pass the credentials to the default cache server for authentication, or whether I should change the authentication implementation here.
    Thanks in advance,
    Bhargav

    Bhargav,
    Rather than trying to loop forever in a callback handler, try this:
    import com.tangosol.net.DefaultCacheServer;
    import com.tangosol.net.security.Security;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;
    import java.security.PrivilegedExceptionAction;

    public class SecureCacheServer {
        public static void main(final String[] args) throws Exception {
            LoginContext lc = new LoginContext("Coherence");
            lc.login();
            Subject subject = lc.getSubject();
            Security.runAs(subject, new PrivilegedExceptionAction() {
                public Object run() throws Exception {
                    DefaultCacheServer.main(args);
                    return null;
                }
            });
        }
    }
    Then when you start your cache server, just use the SecureCacheServer class above rather than DefaultCacheServer.
    As the main method of DefaultCacheServer is running in a PrivilegedExceptionAction, Coherence will use this identity anywhere it needs to do anything secured.
    I hope the code above compiles OK, as it is a modified version of the code I actually use.
    Hope this helps
    JK

  • InvocationService on storage disabled nodes

    Hi,
    Is it safe to assume that InvocationService tasks will not fire on cluster nodes that were started using '-Dtangosol.coherence.distributed.localstorage=false'? I've tried both 'execute' and 'query' methods on a cluster that contains both storage-enabled and storage-disabled nodes, and the tasks only seem to fire on the storage-enabled nodes. The reason I'm curious is that the InvocationService.getInfo().getServiceMembers() method returns all the cluster nodes (not just the storage-enabled ones), and the InvocationService threads show up in the thread dumps of the storage-disabled nodes.
    Thanks,
    Jim

    Hi Rob,
    Thanks for your reply. This turned out to be an elusive problem on my end, complicated by the fact that Coherence was 'eating' an exception. One of the member fields in my invocation class was not serializable, but the InvocationService thread did not give any indication of this error. It wasn't until I put a try/catch around the InvocationService.execute method that I discovered the problem. The local node was the only storage-enabled node, so that explains why the invocation was not being executed on the storage-disabled nodes.
    This might be a good candidate for a bug fix in Coherence (to log some indication that an exception occurred). As it is, a good programming tip is to ALWAYS put a try/catch around InvocationService.execute() and InvocationService.query().
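    The failure mode described above is easy to reproduce with plain JDK serialization. The classes below are illustrative stand-ins, not Coherence API; the idea is a cheap pre-flight serializability check before handing a task to the invocation service:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative stand-in for an invocation task whose member field is not serializable
public class SerializationCheck {
    static class Helper { } // does NOT implement Serializable

    public static class BadTask implements Serializable {
        private final Helper helper = new Helper(); // this field breaks serialization
    }

    // Returns true if the object would survive Java serialization.
    public static boolean isSerializable(Object task) {
        try {
            new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(task);
            return true;
        } catch (Exception e) {
            return false; // typically java.io.NotSerializableException
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable(new BadTask())); // prints false
        System.out.println(isSerializable("plain string")); // prints true
    }
}
```

    A check like this would have surfaced the non-serializable member field up front, instead of the invocation failing silently inside the service thread.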
    Jim

  • Automatic creation of storage location for material in MF60

    Hi all,
    my colleague in the PP module uses the MF60 transaction.
    When he enters XXXX as the replenishment storage location, the system gives him this error: "To stge loc. XXXX does not exist for material M1010000182 in plant 0001."
    In MM I have activated the automatic creation of storage locations for plant 0001 and for all movement types.
    Why does MF60 not respect this setting?
    If I try movement type 311 with MIGO manually, there is no problem transferring the material from a storage location to the XXXX storage location.
    Is there something more that I have to do?
    Thanks in advance!
    Best regards
    Alba

    Thanks a lot...
    the note says:
    Summary
    Symptom
    A material without storage location view exists.
    If within a single material document, this material is moved in such a way that no change in stock occurs in total, the system does not create a storage location automatically even if this is set in Customizing.
    Additional key words
    OMC3, XLAUT
    Cause and prerequisites
    All postings to a segment (for example, material, plant, storage location) are only executed if a change in stock occurs. Only in this case, storage locations are created automatically. This way it is made sure that no empty segments are created.
    Solution
    The system behaves correctly.
    Header Data
    Release Status:     
    Released on:     07.07.2000  22:00:00
    BUT why, if I do 311 in MIGO, does the system automatically create the storage location (even if no change in stock occurs in total),
    AND if I do MF60 (which performs movement 311), the system does not automatically create the storage location?
    Best regards
    Alba

  • Something similar to 'gcstartup'  for automatic  start / stop

    Hi all,
    Is there anything similar to the '/etc/rc.d/init.d/gcstartup' script for automatically starting/stopping the GoldenGate manager on a server?
    I use an OBEY command in a start.txt file together with a start_ogg.sh file, but there are also an OEM agent, a listener, and the database on this machine, and I would like to start all of them automatically at once.
    Regards
    jfa

    Hi Nic,
    Thanks for the answer.
    I have just modified gcstartup like this:
    #!/bin/sh
    # Source Function Library
    SU=/bin/su
    DBST="N"
    if [ -f /etc/oratab ]
    then
        oratab="/etc/oratab"
    else
        if [ -f /var/opt/oracle/oratab ]
        then
            oratab="/var/opt/oracle/oratab"
        fi
    fi
    if [ -f "$oratab" ]
    then
        for i in `cat $oratab | grep -v '^#' | cut -d ":" -f2`;
        do
            if [ -f "$i/start_goldgate.sh" ] ;
            then
                user=`ls -l $i/start_goldgate.sh|awk '{print $3}'`;
                if [ $1 = start ];
                then
                    DBST='Y'
                    $SU - $user -c "$i/start_goldgate.sh ";
                else
                    DBST='N'
                    $SU - $user -c "$i/stop_goldgate.sh ";
                fi;
            fi;
        done
    fi
    and it works fine.
    The content of start_goldgate.sh is:
    cd /orasoft/app/ogg
    /orasoft/app/ogg/ggsci << EOF
    OBEY /orasoft/app/ogg/startup_ogg.txt
    EOF
    The content of startup_ogg.txt is:
    START MANAGER
    INFO ALL
    exit
    The automatic-restart parameters for REPLICAT and EXTRACT are in the mgr.prm file.
    The stop*/shutdown* files are analogous, using 'STOP MANAGER'.
    regards
    jfa

  • I have a dial-up connection and I'm looking for a download accelerator that will automatically start when the download (update) starts.

    I have a dial-up connection. When I get a program update that
    may be 100M in size, it can take a lot of time. I'm looking for a
    download accelerator program that will automatically start in
    this kind of situation. I know there are programs where, if you
    know the site and the name of the download you want, you put
    this info in and the accelerator program works. But there is
    no way I can figure out how to put this info in when a program
    tells me an update is available and then starts a 7-hour download.

    That's the size of Firefox Setup 6.0.2.exe, which I started to download but figured there had to be more.
    So, you're saying that once that downloads and you run it, there's nothing else to get online? I could run Firefox Setup 6.0.2.exe successfully while not connected to the Internet?

  • Automatic start and shut down for Iphone

    Hello,
    I usually turn off my iPhone at night.
    That is a nuisance because I have to turn it on and off every day, and I also cannot use the built-in alarm clock.
    A good idea would be to let the user set a timer that automatically starts the iPhone in the morning and shuts it down in the evening at a set time.
    Could you please think about it?
    Many thanks
    Regards
    Andrea

    You can use the Do Not Disturb feature to keep the phone quiet during your down time, then you could use your phone as an alarm clock.  It's what I do.

  • iPod touch for Christmas, but cannot get started: after installing iTunes, when connecting to my laptop I get no instructions, just the error message "This iPod cannot be used because the Apple Mobile Device service is not started". Any suggestions?


    Jennifer...
    Follow the instructions here >  iPod touch: How to restart the Apple Mobile Device Service (AMDS) on Windows

  • When I click a PDF file, the file does not open and the installation software for Creative Suite automatically starts. Even after re-installation of Creative Suite 5.5, only Acrobat Reader does not work and the same phenomenon occurs.


    Did you ever install Acrobat? It is not installed automatically with CS, but requires an extra installation step.

  • How to turn off "Automatic Start/Stop Detection" for HDV Capture

    Hello,
    I'm pulling my hair out trying to capture HDV 1080i-50 without getting separate clips created by the metadata at the start and stop of each shot. It's the feature called "Automatic Start/Stop Detection." I'm not talking about "Make new clip on timecode break."
    How can I turn this Start/Stop thing off? I don't want all these **** clips every time the cameraman hits the record button.
    Does anyone know how to turn it off so it stops making separate clips?
    Thanks!
    Kirk

    Yeah, Andy. It helps to have the manual!! THANKS. It says: select or deselect the "Create new clip on Start/Stop" checkbox in the Clip Settings tab of the Log and Capture window.
    It's strange that I was looking for that kind of box in all the various settings but not in the Log and Capture window.
    Kirk out.
