Invocation service only processing on a single node (Coherence 3.4).

I'm trying to run a task on each node using the invocation service. The task should return a list of report files from each node.
The service is set up as follows:
    <invocation-scheme>
        <scheme-name>agents</scheme-name>
        <service-name>agents</service-name>
        <serializer>com.tangosol.io.DefaultSerializer</serializer>
    </invocation-scheme>
And accessed as follows, using the deprecated getInvocationService method:
    InvocationService service = CacheFactory.getInvocationService("agents");
    Map<Member, List<String>> reportQuery = service.query(new ReportFilesnameAgent(), null);
When I access it in this way, the ReportFilesnameAgent only runs on a single node, despite the following listing both cluster nodes:
    Set memberSet = service.getCluster().getMemberSet();
I even tried to force it to run on all nodes by passing the member set to the query method, but this made no difference:
    Map<Member, List<String>> reportQuery = service.query(new ReportFilesnameAgent(), memberSet);
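One thing worth checking: getCluster().getMemberSet() lists every member of the cluster, whereas an invocation is only dispatched to members that have joined the invocation service itself (passing null to query() targets all service members, not all cluster members). A quick comparison, as a sketch against the standard Coherence Service API:
    InvocationService service = CacheFactory.getInvocationService("agents");
    // members that have joined the "agents" service (the candidates for query())
    Set serviceMembers = service.getInfo().getServiceMembers();
    // every member of the cluster, whether or not it runs the service
    Set clusterMembers = service.getCluster().getMemberSet();
    System.out.println("cluster: " + clusterMembers.size()
            + ", running agents: " + serviceMembers.size());
If the two sets differ, the missing node likely needs <autostart>true</autostart> in its scheme (as in the 3.5 config further down) or an explicit service start.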
If I access the service using the recommended (non-deprecated) method:
    InvocationService service = (InvocationService) CacheFactory.getService("agents");
Coherence throws the following:
    Caused by: java.lang.IllegalArgumentException: Missing scheme for service: "agents"
Any ideas?
Many thanks in advance,
Cormac.

Hi,
I'm trying this invocation-scheme with version 3.5.2/463a and a POF serializer, and it fails with the stack trace below, suggesting that my nodes run different configurations.
I can see them loading the same cache config and the same operational config, though.
Any ideas?
many thanks,
Christoph.
The new invocation config:
     <invocation-scheme>
          <scheme-name>agents</scheme-name>
          <service-name>agents</service-name>
          <serializer>
               <class-name>com.tangosol.io.pof.SafeConfigurablePofContext</class-name>
               <init-params>
                    <init-param>
                         <param-type>string</param-type>
                         <param-value>orion-pof-config.xml</param-value>
                    </init-param>
               </init-params>
          </serializer>
          <autostart>true</autostart>
     </invocation-scheme>
The stack trace:
2010-02-15 15:37:37.286/5.469 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:agents, member=2): Service agents joined the cluster with senior service member 1
2010-02-15 15:37:37.333/5.516 Oracle Coherence GE 3.5.2/463 <Error> (thread=Invocation:agents, member=2): The service "agents" is configured to use serializer com.tangosol.io.DefaultSerializer {loader=sun.misc.Launcher$AppClassLoader@17590db}, which appears to be different from the serializer used by Member(Id=1, Timestamp=2010-02-15 15:37:20.268, Address=165.2.93.118:18001, MachineId=64118, Location=site:EMEA.AD.JPMORGANCHASE.COM,machine:LLDNCFI5SZW83J,process:2360, Role=TmpMain).
java.io.StreamCorruptedException: invalid type: 78
     at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2231)
     at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
     at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$ServiceConfigMap.readObject(Grid.CDB:1)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$MemberConfigResponse.read(Grid.CDB:13)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     at java.lang.Thread.run(Thread.java:619)
Stopping the agents service.
2010-02-15 15:37:37.333/5.516 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:agents, member=2): Service agents left the cluster
2010-02-15 15:37:37.333/5.516 Oracle Coherence GE 3.5.2/463 <Error> (thread=main, member=2): Error while starting service "agents": java.lang.RuntimeException: Failed to start Service "agents" (ServiceState=SERVICE_STOPPED)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:38)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
     at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:28)
     at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
     at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
     at com.tangosol.net.CacheFactory.getInvocationService(CacheFactory.java:943)
     at com.tmp.InvocationClient.main(InvocationClient.java:37)
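For what it's worth, the last frame of the trace shows the service being started through the deprecated CacheFactory.getInvocationService, which resolves the service against the operational descriptor rather than the cache configuration; that would explain member 2 ending up with DefaultSerializer while member 1 (autostarted from the cache config above) uses the POF serializer. A sketch of obtaining the service through the configured scheme instead:
    // ensureService() builds "agents" from the invocation-scheme in the loaded
    // cache configuration, including its <serializer> element, instead of
    // falling back to the operational defaults
    InvocationService service = (InvocationService)
            CacheFactory.getConfigurableCacheFactory().ensureService("agents");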

Similar Messages

  • Having multiple Node Manager processes in a single host machine.

    I am using WebLogic Server (Portal) 10.2.
    I am running the Node Manager to start the admin server. I have installed a Java-based Node Manager on the host machine, created a Unix id ND1, and started the Node Manager utility with this id.
    I keep creating domains for my new applications and add ND1 to each new domain's group, so that ND1 can access the new domain's admin folder.
    Now, when I add ND1 to more than 16 domain groups, I get into trouble accessing the domain folder. In Unix there is a group membership limitation: a Unix id cannot be in more than 16 groups.
    Has anyone come across this issue?
    Possible options:
    1) Can we have multiple Node Manager ids (ND1, ND2, ND3, etc.) for a single Node Manager utility on a single host?
    2) While starting the server, will the Node Manager look only at the nm_password.properties file in the domain folder?

    You can boot multiple Node Managers if you change the NodeManager home. To do that, create a directory for each NodeManager and copy the startNodeManager script to it. Then edit the NODEMGR_HOME, LISTEN_ADDRESS, and LISTEN_PORT values in each copy.

  • Integration of service processing with warranty claims node not appearing

    Dear experts,
    Plant Maintenance and Customer Service => Maintenance and Service Processing => Integration of Service Processing with Warranty Claims
    The "Integration of service processing with warranty claims" node is not appearing in SPRO. We are using ECC 6.0. How should we get this node?
    regards,
    ashraf

    Hi,
    It will not be in the core solution. As far as I know, you need the Aerospace & Defence solution (DIMP) for this functionality.
    -Paul

  • How to parse only a single node

    Hi:
    I am using a DOM parser and I want to parse only a single node. That is:
    String xmlRecords = "<name>Anup</name>";
    Here I want to parse the name node.
    So can anybody help me with that?
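    A single element is itself a well-formed document, so the fragment can be handed straight to the standard JAXP DOM parser. A minimal sketch:
        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class SingleNodeParse {
            public static void main(String[] args) throws Exception {
                String xmlRecords = "<name>Anup</name>";
                DocumentBuilder db =
                        DocumentBuilderFactory.newInstance().newDocumentBuilder();
                // wrap the string so the parser can read it as a document
                Document doc = db.parse(new InputSource(new StringReader(xmlRecords)));
                // the fragment's root element is <name> itself
                System.out.println(doc.getDocumentElement().getTextContent()); // Anup
            }
        }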

    Ram wrote:
    Peter Lawrey wrote:
    Using String manipulation is likely to be faster than using a DOM (assuming you had valid XML); however, if performance were an issue you would use JDBC to access the database.
    What will happen if the OP has a large number of nodes in the string? Will String manipulation be the faster one? String manipulation will be faster only if the string has one or two nodes. Otherwise it will be complicated.
    Direct String manipulation will always be faster. This is because you can make assumptions a full DOM will not make (as it needs to be able to read any type of XML document).
    For this reason it's best avoided unless you really need the speed, as any subtle change in the XML could break a simplified parser.
    Peter Lawrey wrote:
    Make sure you have proper XML (not just a fragment) and use a document parser as EJP suggests.
    EJP didn't suggest anything; rather, he changed the conversation in the wrong direction.
    I have been reading too many posts which look the same. Sabre, not EJP, made a suggestion. ;)

  • Error converting a single node to RAC - ConvertTomydb.xml

    my single node init.ora file:
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_file_multiblock_read_count=16
    # Cursors and Library Cache
    open_cursors=300
    # Database Identification
    db_domain=""
    db_name=mydb
    # Diagnostics and Statistics
    background_dump_dest=/u01/app/oracle/admin/mydb/bdump
    core_dump_dest=/u01/app/oracle/admin/mydb/cdump
    user_dump_dest=/u01/app/oracle/admin/mydb/udump
    # File Configuration
    control_files=("/u01/app/oracle/oradata/mydb/control01.ctl", "/u01/app/oracle/oradata/mydb/control02.ctl", "/u01/app/oracle/oradata/mydb/control03.ctl")
    # Job Queues
    job_queue_processes=10
    # Miscellaneous
    compatible=10.2.0.1.0
    # Processes and Sessions
    processes=150
    # SGA Memory
    sga_target=1083179008
    # Security and Auditing
    audit_file_dest=/u01/app/oracle/admin/mydb/adump
    remote_login_passwordfile=EXCLUSIVE
    # Shared Server
    dispatchers="(PROTOCOL=TCP) (SERVICE=mydbXDB)"
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=360710144
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    My ConvertTomydb.xml, which is a copy of the ConvertToRAC.xml file:
     <?xml version="1.0" encoding="UTF-8" ?>
     <n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig">
     <n:ConvertToRAC>
     <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
     <n:Convert verify="ONLY">
     <!-- Specify current OracleHome of non-rac database for SourceDBHome -->
     <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
     <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
     <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
     <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
     <n:SourceDBInfo SID="mydb">
     <n:Credentials>
     <n:User>sys</n:User>
     <n:Password>oracle</n:Password>
     <n:Role>sysdba</n:Role>
     </n:Credentials>
     </n:SourceDBInfo>
     <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
     <n:ASMInfo SID="+ASM1">
     <n:Credentials>
     <n:User>sys</n:User>
     <n:Password>oracle</n:Password>
     <n:Role>sysdba</n:Role>
     </n:Credentials>
     </n:ASMInfo>
     <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
     <n:NodeList>
     <n:Node name="linux1" />
     <n:Node name="linux2" />
     </n:NodeList>
     <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
     <n:InstancePrefix>mydb</n:InstancePrefix>
     <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist. -->
     <n:Listener port="1521" />
     <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
     <n:SharedStorage type="ASM">
     <!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
     <n:TargetDatabaseArea>+ORCL_DATA1</n:TargetDatabaseArea>
     <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
     <n:TargetFlashRecoveryArea>+FLASH_RECOVERY_AREA</n:TargetFlashRecoveryArea>
     </n:SharedStorage>
     </n:Convert>
     </n:ConvertToRAC>
     </n:RConfig>
    I ran the xml file:
    $ rconfig ConvertTomydb.xml
    and got the error below.
    [oracle@linux1 bin]$ sh rconfig ConvertTomydb.xml
    <?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    [oracle@linux1 bin]$
    Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/rconfig.log
    [main] [0:14:4:4] [RuntimeExec.runCommand:175] Returning from RunTimeExec.runCommand
    oracle.ops.mgmt.cluster.RemoteShellException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
    linux2.com: Connection refused
         at oracle.ops.mgmt.nativesystem.UnixSystem.checkRemoteExecutionSetup(UnixSystem.java:1880)
         at oracle.ops.mgmt.nativesystem.UnixSystem.getRemoteShellCmd(UnixSystem.java:1634)
         at oracle.ops.mgmt.nativesystem.UnixSystem.createCommand(UnixSystem.java:614)
         at oracle.ops.mgmt.nativesystem.UnixSystem.removeFile(UnixSystem.java:622)
         at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1352)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
         at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
         at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
         at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
         at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
         at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
         at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
         at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
    [main] [0:14:4:16] [UnixSystem.isSharedPath:1356] UnixSystem.isShared: creating file /u01/app/oracle/product/10.2.0/db_1/CFSFileName126249561289258204.tmp
    [main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1817] checkRemoteExecutionSetup:: Checking user equivalence using Secured Shell '/usr/bin/ssh'
    [main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1819] checkRemoteExecutionSetup:: Running Unix command: /usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 linux2 /bin/true
    oracle.ops.mgmt.cluster.SharedDeviceException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
    linux2.com: Connection refused
         at oracle.ops.mgmt.nativesystem.UnixSystem.testCFSFile(UnixSystem.java:1444)
         at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1402)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
         at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
         at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
         at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
         at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
         at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
         at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
         at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
         at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
         at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
         at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
         at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
    [main] [0:14:35:152] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:152] [UnixSystem.getCSSConfigType:1985] configFile=/etc/oracle/ocr.loc
    [main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrconfig_loc props.val=/u02/oradata/orcl/OCRFile propValue=/u02/oradata/orcl/OCRFile
    [main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrmirrorconfig_loc props.val=/u02/oradata/orcl/OCRFile_mirror propValue=/u02/oradata/orcl/OCRFile_mirror
    [main] [0:14:35:157] [Utils.getPropertyValue:292] propName=local_only propValue=FALSE
    [main] [0:14:35:157] [UnixSystem.getCSSConfigType:2029] configType=false
    [main] [0:14:35:158] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:168] [OCRTree.init:201] calling OCRTree.init
    [main] [0:14:35:169] [Version.isPre10i:189] isPre10i.java: Returning FALSE
    [main] [0:14:35:177] [OCRTree.<init>:157] calling OCR.init at level 7
    [main] [0:14:35:177] [HASContext.getInstance:190] Module init : 24
    [main] [0:14:35:177] [HASContext.getInstance:214] Local Module init : 0
    [main] [0:14:35:177] [HASContext.getInstance:249] HAS Context Allocated: 4 to oracle.ops.mgmt.has.ClusterLock@f47bf5
    [main] [0:14:35:177] [ClusterLock.<init>:60] ClusterLock Instance created.
    [main] [0:14:35:178] [OCR.getKeyValue:411] OCR.getKeyValue(SYSTEM.local_only)
    [main] [0:14:35:178] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
    [main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 6
    [main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: value is [false] dtype = 3
    [main] [0:14:35:179] [OCRTreeHA.getLocalOnlyKeyValue:1697] OCRTreeHA localOnly string = false
    [main] [0:14:35:180] [HASContext.getInstance:190] Module init : 6
    [main] [0:14:35:180] [HASContext.getInstance:214] Local Module init : 0
    [main] [0:14:35:180] [HASContext.getInstance:249] HAS Context Allocated: 5 to oracle.ops.mgmt.has.Util@f6438d
    [main] [0:14:35:180] [Util.<init>:86] Util Instance created.
    [main] [0:14:35:180] [has.UtilNative.Native] prsr_trace: Native: hasHAPrivilege
    [main] [0:14:35:184] [HASContext.getInstance:190] Module init : 56
    [main] [0:14:35:184] [HASContext.getInstance:214] Local Module init : 32
    [main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: allocHASContext
    [main] [0:14:35:184] [has.HASContextNative.Native]
    allocHASContext: Came in
    [main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: prsr_initCLSR
    [main] [0:14:35:185] [has.HASContextNative.Native]
    allocHASContext: CLSR context [1]
    [main] [0:14:35:185] [has.HASContextNative.Native]
    allocHASContext: retval [1]
    [main] [0:14:35:185] [HASContext.getInstance:249] HAS Context Allocated: 6 to oracle.ops.mgmt.has.ClusterAlias@18825b3
    [main] [0:14:35:185] [ClusterAlias.<init>:85] ClusterAlias Instance created.
    [main] [0:14:35:185] [has.UtilNative.Native] prsr_trace: Native: getCRSHome
    [main] [0:14:35:186] [has.UtilNative.Native] prsr_trace: Native: getCRSHome crs_home=/u01/app/oracle/product/10.2.0/crs(**)
    [main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1328] DATABASE.ASM.linux1.+asm1 does exist
    [main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1329] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
    [main] [0:14:35:280] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
    [main] [0:14:35:281] [OCR.getKeyValue:411] OCR.getKeyValue(DATABASE.ASM.linux1.+asm1.ORACLE_HOME)
    [main] [0:14:35:281] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
    [main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 36
    [main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: value is [u01/app/oracle/product/10.2.0/db_1] dtype = 3
    [main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1346] getASMInstanceOracleHome:ohome=/u01/app/oracle/product/10.2.0/db_1
    [main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1367] Releasing shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
    [main] [0:14:35:282] [has.ClusterLockNative.Native] prsr_trace: Native: unlock
    [main] [0:14:35:802] [nativesystem.OCRNative.Native] keyExists: procr_close_key retval = 0
    [main] [0:14:35:802] [ASMTree.getNodes:1236] DATABASE.ASM does exist
    [main] [0:14:35:802] [ASMTree.getNodes:1237] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM
    [main] [0:14:35:802] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
    [main] [0:14:35:803] [OCR.listSubKeys:615] OCR.listSubKeys(DATABASE.ASM)
    [main] [0:14:35:803] [nativesystem.OCRNative.Native] listSubKeys: key_name=[DATABASE.ASM]
    [main] [0:14:35:809] [GetASMNodeListOperation.run:78] Got nodes=[Ljava.lang.String;@11a75a2
    [main] [0:14:35:809] [GetASMNodeListOperation.run:91] result status 0
    [main] [0:14:35:809] [LocalCommand.execute:56] LocalCommand.execute: Returned from run method
    [main] [0:14:35:810] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
    [main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
    [main] [0:14:35:858] [ASMInstance.setBestDiskGroup:1422] sql to be executed:=select name from v$asm_diskgroup where free_mb= (select max(free_mb) from v$asm_diskgroup)
    [main] [0:14:35:864] [ASMInstance.setBestDiskGroup:1426] Setting best diskgroup....
    [main] [0:14:35:888] [SQLEngine.doSQLSubstitution:2165] The substituted sql statement:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
    [main] [0:14:35:888] [ASMInstance.setTemplates:1345] sql to be executed:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
    [main] [0:14:35:892] [ASMInstance.setTemplates:1349] Getting templates for diskgroup: oracle.sysman.assistants.util.asm.DiskGroup@170888e
    [main] [0:14:35:892] [ASMInstance.setTemplates:1357] template: PARAMETERFILE
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DUMPSET
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DATAGUARDCONFIG
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: FLASHBACK
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: CHANGETRACKING
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: XTRANSPORT
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: AUTOBACKUP
    [main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: BACKUPSET
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: TEMPFILE
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: DATAFILE
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ONLINELOG
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ARCHIVELOG
    [main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: CONTROLFILE
    [main] [0:14:35:894] [ASMInstance.createUtilASMInstanceRAC:113] Diskgroups loaded
    [main] [0:14:35:894] [LocalNodeCheck.checkLocalNode:107] Performing LocalNodeCheck
    [main] [0:14:35:894] [OracleHome.getNodeNames:270] inside getNodeNames
    [main] [0:14:36:116] [OracleHome.isClusterInstalled:252] bClusterInstalled=false
    [main] [0:14:36:120] [Step.execute:251] STEP Result=Clusterware is not configured
    [main] [0:14:36:121] [Step.execute:280] Returning result:Operation Failed
    [main] [0:14:36:121] [RConfigEngine.execute:67] bAsyncJob=false
    [main] [0:14:36:124] [RConfigEngine.execute:76] Result=<?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/mydb/sqllog
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        FALSE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        FALSE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    MYDB     mydb                    2622254467
    10.2.0.1.0     ACTIVE
    cluster_database                                        TRUE
    undo_management                                         AUTO
    db_domain
    dispatchers                                             (PROTOCOL=TCP) (SERVICE=mydbXDB)
    background_dump_dest                                        /u01/app/oracle/admin/mydb/bdump
    user_dump_dest                                             /u01/app/oracle/admin/mydb/udump
    core_dump_dest                                             /u01/app/oracle/admin/mydb/cdump
    audit_file_dest                                         /u01/app/oracle/admin/mydb/adump
    Please help me find where I am making a mistake.
    Thanks

    1) I have created a single node standard database called mydb in the /u01/app/oracle/product/10.2.0/db_1 home (hostname linux1).
    2) Installed CRS and ASM on linux1 and linux2, with shared storage on ASM (an external HD running on IEEE 1394 cards and ports). No database is created on linux1 or linux2.
    3) I want to convert mydb to a RAC database, called mydb1 instance on linux1 and mydb2 on linux2 respectively.
    4) Copied and modified the xml as you see above, called ConvertTomydb.xml, into the $ORACLE_HOME/bin directory.
    5) When I run
    $ rconfig ConvertTomydb.xml from the $ORACLE_HOME/bin directory, I get the following error:
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    Clusterware is not configured
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC>
    $
    Please see my crs_stat -t command output
    Name Type Target State Host
    ora....SM1.asm application ONLINE ONLINE linux1
    ora....X1.lsnr application ONLINE ONLINE linux1
    ora.linux1.gsd application ONLINE ONLINE linux1
    ora.linux1.ons application ONLINE ONLINE linux1
    ora.linux1.vip application ONLINE ONLINE linux1
    ora....SM2.asm application ONLINE ONLINE linux2
    ora....X2.lsnr application ONLINE ONLINE linux2
    ora.linux2.gsd application ONLINE ONLINE linux2
    ora.linux2.ons application ONLINE ONLINE linux2
    ora.linux2.vip application ONLINE ONLINE linux2
    ora.orcl.db application ONLINE ONLINE linux1
    ora....l1.inst application ONLINE ONLINE linux1
    ora....l2.inst application ONLINE ONLINE linux2
    ora....test.cs application ONLINE ONLINE linux1
    ora....cl1.srv application ONLINE ONLINE linux1
    ora....cl2.srv application ONLINE UNKNOWN linux2
    Please see the output from the olsnodes command:
    [oracle@linux1 bin]$ olsnodes
    linux1
    linux2
    [oracle@linux1 bin]$
    What is your cache fusion interconnect strategy?
    I don't know about this; please let me know where I can find the answers and what kind of command I have to use to get them.
    damorgan, please let me know if I gave answers to your questions. If not, please let me know; I can give as much information as possible. I really appreciate your help.
    Thanks

  • Work Manager and the Invocation Service - Items Queued?

    Hi,
    After reading the user guide & javadocs and searching this forum, I can't find documentation on how an Invocation Service works behind the scenes. I'm implementing a WorkManager (which sits atop an invocation service) with the goal of using some nodes in my Coherence cluster as essentially a distributed thread pool for parallel execution of work.
    I want to know:
    a) How work items are queued when all threads dedicated to the WorkManager are in use. I assume a cache is spawned to handle this, but if a queue is used, how do I configure the depth?
    b) Can thread count be throttled dynamically? The post linked below seems to have tried it, but is there no cleaner way since the service is configured prior to startup? Maybe this has changed since 3.4.
    How Can a Service's Configuration Be Modified Dynamically
    Thanks

    To answer Luk's question - yes, the change in thread count took effect without me having to restart the service. I have only changed cache service thread counts via JMX, but I assume invocation services would be the same.
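    For reference, the JMX route looks roughly like this (a sketch; the ObjectName is illustrative and depends on your service name and node id, and ThreadCount is the writable attribute on the Coherence Service MBean):
        import java.lang.management.ManagementFactory;
        import javax.management.Attribute;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        // on a management-enabled node, the platform MBeanServer holds the
        // Coherence MBeans; one Service MBean is registered per node
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
                "Coherence:type=Service,name=InvocationService,nodeId=1");
        // resize the service's thread pool at runtime
        server.setAttribute(name, new Attribute("ThreadCount", Integer.valueOf(8)));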
    Roberto,
    Regarding the queue size - I suspect this is limited by the amount of heap your process has. Presumably Coherence uses some sort of queue structure internally to hold the requests, and this is likely only limited by memory - or maybe by number of entries if it uses an int for its size.
    I am sure if you try hard enough you could blow the queue size if you could somehow push a sufficiently large number of slow-running tasks into the server, and even more easily if your invocables contained a lot of data themselves so they took up more heap.
    It would be very simple to write a test that just fires in invocables at a fast enough rate and see what happens.
    JK

  • Invocation Service and Local Maps

    Hi,
     I want to use an InvocationService to perform a filtered query on the local data of a node in a distributed cache scheme. A simple keySet or entrySet will not work in my case, as I want to push out some additional processing to each node.
     I am mainly concerned with how to perform the query on only the local data of each node (how to get the local map and query it). Do you have an example or documentation on how to do this?
         Thanks in advance.
         Joey

    Cameron, this is in reference to a more "hands on" query approach previously discussed.
         Joey, the most direct means of accessing locally-hosted partitioned data is to access the cache's backing map directly. Since the data may be stored in serialized form, you'll need to access the backing map's context to get converters to convert from serialized form to regular java objects. The following snippet does a simple aggregation, using direct backing map access (for demonstration purposes only, obviously an InvocableMap aggregator would be easier, more reliable and more efficient for this specific use case). Keep in mind that when using InvocableMap, rebalancing and server failure are handled automatically; when using this lower-level approach, you'll need to handle failure manually (for queries, this usually just means re-issuing the query).
         (This snippet is from a larger examples compilation, so will not compile as-is due to a few "utility" calls).
         EDIT: Looking at the code below, I should also mention that there are separate converters for the cache keys and the cache values (this example happens to not process the cache keys).
        package examples.invocationWithBackingMap;

        import com.tangosol.net.AbstractInvocable;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.CacheService;
        import com.tangosol.net.DefaultConfigurableCacheFactory;
        import com.tangosol.net.DistributedCacheService;
        import com.tangosol.net.InvocationService;
        import com.tangosol.net.Member;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Base;
        import com.tangosol.util.Binary;
        import com.tangosol.util.Converter;
        import util.TerminateAgent;
        import util.Util;
        import java.util.Iterator;
        import java.util.Map;
        import java.util.Set;

        public class Main extends Base {
            /**
             * Usage:
             * <pre>
             * java examples.invocationWithBackingMap.Main cluster-size
             * </pre>
             *
             * @param args command-line arguments
             */
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache(CACHENAME);
                int callerId = CacheFactory.ensureCluster().getLocalMember().getId();

                // use (and start) an invocation service
                InvocationService isvc =
                    CacheFactory.getInvocationService(INVOCATION_SERVICE_NAME);

                int cClusterMembers = Integer.parseInt(args[0]);
                Util.waitForCacheNodes(cache, cClusterMembers);
                Util.verifyClusterSize(cClusterMembers);

                // populate the cache (blindly; overwrites are ignored)
                for (int i = 0; i < 100; i++) {
                    cache.put(new Integer(i), new Integer(i));
                }

                // send to the partition owners
                Set members = ((DistributedCacheService) cache.getCacheService())
                        .getStorageEnabledMembers();
                System.out.println("Number of storage-enabled nodes: " + members.size());

                Map map = isvc.query(new SummingAgent(callerId), members);
                System.out.println(map.size() + " node(s) responded");

                int totalSum = 0;
                System.out.println("Total should be 4950 for sum(0..99)");
                for (Iterator iter = map.entrySet().iterator(); iter.hasNext(); ) {
                    Map.Entry entry     = (Map.Entry) iter.next();
                    Member    member    = (Member) entry.getKey();
                    Integer   memberSum = (Integer) entry.getValue();

                    // null if member died, did not run the Invocation service,
                    // or threw an exception during execution
                    if (memberSum != null) {
                        System.out.println("Sum of values on member " + member.getId()
                                + " is " + memberSum);
                        totalSum += memberSum.intValue();
                    } else {
                        System.out.println("No result from " + member.getId());
                    }
                }
                System.out.println("Total sum is " + totalSum);

                System.out.println("Terminating remote cluster members...");
                // shut down the other cluster members remotely;
                // this member should not be acting as a cache server
                azzert(!members.contains(CacheFactory.ensureCluster().getLocalMember()));
                isvc.execute(new TerminateAgent(), members, null);

                // leave the cluster
                CacheFactory.shutdown();
            }

            public static class SummingAgent extends AbstractInvocable {
                public SummingAgent() {}

                public SummingAgent(int callerId) {
                    m_callerId = callerId;
                }

                public void run() {
                    System.out.println("Running agent from member " + m_callerId + "...");
                    NamedCache   cache   = CacheFactory.getCache(CACHENAME);
                    CacheService service = cache.getCacheService();

                    DefaultConfigurableCacheFactory.Manager bmm =
                        (DefaultConfigurableCacheFactory.Manager) service.getBackingMapManager();
                    Map backingMap = bmm.getBackingMap(CACHENAME);

                    Converter valueConverter =
                        bmm.getContext().getValueFromInternalConverter();

                    int sum = 0;
                    // if concurrent updates are possible you need to deal with
                    // ConcurrentModificationException thrown by the iter.next() call
                    for (Iterator iter = backingMap.values().iterator(); iter.hasNext(); ) {
                        // data is stored in Binary (wire) format
                        Binary  binary = (Binary) iter.next();
                        Integer value  = (Integer) valueConverter.convert(binary);
                        sum += value.intValue();
                    }
                    setResult(new Integer(sum));
                }

                protected int m_callerId;
            }

            public static final String CACHENAME = "dist-InvocationWithBackingMap";
            public static final String INVOCATION_SERVICE_NAME = "InvocationService";
        }

  • Long running servlet in a single node of a cluster

    Good evening,
    I am developing a J2EE application that consists of a number of web services that perform background processing of relatively long-running jobs. Status of the jobs needs to be reported back to the SAP ABAP system, which we do via JCo. All communication back to ABAP occurs in a single long-running, JMS-driven servlet. This callback servlet uses both a timestamp (the last time status was sent back) and a count of status events to determine when to send data to the ABAP system.
    All of this works great in a single Java instance J2EE configuration.  However, when we introduce clustering into the configuration, we get one instance of the callback servlet per J2EE server process.  Is there an easy way to configure the servlet so that it only runs on a single J2EE server process in one instance?
    TIA,
    - Bill

    Hi Bill,
    Launching threads manually, even from a web container, is also not recommended as good J2EE practice.
    If each application launches its own set of threads, that may easily crash the server. Another drawback of such a long-running servlet is that you are blocking an application thread on the server. One last point: it seems that you are storing the data in the servlet in some intermediate variables, without any persistence. What will happen if the server crashes? Can you afford to lose the data?
    I am not implying that you launch threads from the MDB. I am just saying that you could define that single MDB and send messages directly to it. You do the job there inside onMessage, with no new thread. You will be guaranteed that no other call will be executed, you are guaranteed nothing will be lost in case the server crashes, and you will not use resources even if there is no traffic.
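    A bare-bones EJB 2.1-style MDB along those lines might look like this (a sketch; the bean name is made up and the JCo callback is only indicated by a comment):
        import javax.ejb.MessageDrivenBean;
        import javax.ejb.MessageDrivenContext;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        public class StatusCallbackBean implements MessageDrivenBean, MessageListener {
            private MessageDrivenContext ctx;

            public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
            public void ejbCreate() {}
            public void ejbRemove() {}

            // one status event per message; no hand-rolled threads, and an
            // unacknowledged message is redelivered if the server crashes
            public void onMessage(Message message) {
                // extract the status event and send it to ABAP via JCo here
            }
        }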
    Btw, please feel free to give me a phone call or write an email to further discuss the issue. You can take the details from the CSN I am currently processing.
    Best Regards
    Peter

  • Test client pinned to single node in production

    WL 6.1 sp2, Solaris 2.8
              Currently we have a bunch of SLSBs deployed in a cluster out in production and
              a web tier that usually gets and invokes a single SLSB, and they're running
              happily. But every once in a while, we get an asymmetrical exception,
              where one node in the cluster is giving us bad results. What I'd like to do
              is write some simple test clients that can pin to a particular node and
              diagnose just that node while the regular client (web tier) still
              round-robins in production.
              Our SLSBs do not have <stateless-clustering> elements at all in
              weblogic-ejb-jar.xml, so <stateless-bean-is-clusterable> defaults to true.
              My understanding is this means WL will round robin at 3 different levels:
              jndi Context, EJBHome and EJBObject, unless server-client is co-located in
              which WL will always pick the local object.
              What I have tried to do is to write the test client with a single url in
              PROVIDER_URL
              and PIN_TO_PRIMARY_SERVER set to true in InitialContext construction. This
              does not seem to work; by the time I get the EJBHome, create the EJBObject
              and invoke a test method, I see round-robin occurring. I can understand a
              reason FOR this behavior, and a counter-argument AGAINST this behavior. The
              reason why WL is still round-robining is because only the Context is pinned
              to the primary server; subsequent EJBHome and EJBObject are cluster-aware,
              and hence will round-robin, which in fact it is doing. But then the reason
              against this observation is that once I retrieve InitialContext, the
              subsequent EJBHome and EJBObject are all available locally. So shouldn't WL
              do co-location optimization and hence never round-robin???
              Here are some alternative frameworks I've thought up so I can write a test
              client that pins to a specific ejb server:
              1) Create a second set of DD's in weblogic-ejb-jar.xml, this time setting
              stateless-bean-is-clusterable to false, and have the test client use this for
              pinning.
              2) Expose a co-located servlet that will accept ejb invocation (via SOAP or
              customized RPC). Servlet invocation will always be ip-specific, and
              hopefully co-location of the web and ejb tier will keep the invocation on that server.
              Problem with #1 is 2 sets of DD's, hence 2 sets of EJBHomes/Objects that
              behave slightly differently.
              Problem with #2 is the complexity of a new web tier just for pinning, which
              then also means the test client doesn't exactly replicate my actual web
              client calls.
              Is there a simple solution to isolate and diagnose a single node in a
              production cluster? Am I missing something? Much appreciated!
              Gene
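              For reference, the pinned-context setup described above looks roughly
              like this (a sketch of the WL 6.1-era JNDI properties; the URL is
              illustrative). Note that it pins only the initial context, while the
              EJBHome and EJBObject stubs it returns remain cluster-aware:
                  import java.util.Hashtable;
                  import javax.naming.Context;
                  import javax.naming.InitialContext;
                  import weblogic.jndi.WLContext;

                  Hashtable env = new Hashtable();
                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                          "weblogic.jndi.WLInitialContextFactory");
                  env.put(Context.PROVIDER_URL, "t3://node1:7001"); // the node to diagnose
                  env.put(WLContext.PIN_TO_PRIMARY_SERVER, "true");
                  Context ctx = new InitialContext(env);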
              


  • Getting an error at the end of a single node installation on OEL 5.6 32-bit

    I am getting the following error at the end of a single node installation on OEL 5.6 (32-bit). I have installed the following RPMs:
    rpm -Uvh sysstat
    rpm -Uvh libaio-devel
    rpm -Uvh compat-libstdc++
    rpm -Uvh compat-libstdc++
    rpm -Uvh xorg-x11-libs-compat-
    rpm -Uvh unixODBC-2.2.
    rpm -Uvh unixODBC-devel
    rpm -Uvh libXp-
    rpm -Uvh openmotif21
    The following checks failed; when I clicked the respective button I got these errors...
    HTTP
    checking URL = http://linux1.oracle.com:8001
    RW-50015: Error: - HTTP Listener is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
    Virtual Directory
    RW-50015: Error: - Http Server Virtual Directories is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
    Login Page
    RW-50015: Error: - Login Page is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
    Help Page
    checking URL = http://linux1.oracle.com:8001/OA_HTML/help
    RW-50015: Error: - Help Page is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
    JSP
    checking URL = http://linux1.oracle.com:8001/OA_HTML/jtfTestCookie.jsp
    RW-50015: Error: - JSP is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
    Please let me know how to fix this.
    Thanks.
    Joy.

    Helios,
    While starting the application I am getting the following...
    ============================================
    [applprd1@linux1 scripts]$ ./adstrtal.sh
    You are running adstrtal.sh version 120.15
    Enter the APPS username: apps
    Enter the APPS password:
    The logfile for this session is located at /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adstrtal.log
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adopmnctl.sh start
    script returned:
    You are running adopmnctl.sh version 120.6
    Starting Oracle Process Manager (OPMN) ...
    adopmnctl.sh: exiting with status 0
    adopmnctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adopmnctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adalnctl.sh start
    script returned:
    adalnctl.sh version 120.3
    Checking for FNDFS executable.
    Listener APPS_PRD1 has already been started.
    adalnctl.sh: exiting with status 2
    adalnctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adalnctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adapcctl.sh start
    script returned:
    You are running adapcctl.sh version 120.7.12010000.2
    Starting OPMN managed Oracle HTTP Server (OHS) instance ...
    adapcctl.sh: exiting with status 204
    adapcctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adapcctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adoacorectl.sh start
    script returned:
    You are running adoacorectl.sh version 120.13
    Starting OPMN managed OACORE OC4J instance ...
    adoacorectl.sh: exiting with status 150
    adoacorectl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adoacorectl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adformsctl.sh start
    script returned:
    You are running adformsctl.sh version 120.16
    Starting OPMN managed FORMS OC4J instance ...
    adformsctl.sh: exiting with status 150
    adformsctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adformsctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adoafmctl.sh start
    script returned:
    You are running adoafmctl.sh version 120.8
    Starting OPMN managed OAFM OC4J instance ...
    adoafmctl.sh: exiting with status 150
    adoafmctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adoafmctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adcmctl.sh start
    script returned:
    You are running adcmctl.sh version 120.17.12010000.3
    Starting concurrent manager for PRD1 ...
    Starting PRD1_0627@PRD1 Internal Concurrent Manager
    Default printer is noprint
    adcmctl.sh: exiting with status 0
    adcmctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adcmctl.txt for more information ...
    .end std out.
    .end err out.
    Executing service control script:
    /u01/prd1/inst/apps/PRD1_linux1/admin/scripts/jtffmctl.sh start
    script returned:
    You are running jtffmctl.sh version 120.3
    Validating Fulfillment patch level via /u01/prd1/apps/apps_st/comn/java/classes
    Fulfillment patch level validated.
    Starting Fulfillment Server for PRD1 on port 9301 ...
    jtffmctl.sh: exiting with status 0
    .end std out.
    .end err out.
    adstrtal.sh: Exiting with status 4
    adstrtal.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adstrtal.log for more information ...
    ===============================================
    [Service Control Execution Report]
    The report format is:
    <Service Group> <Service> <Script> <Status>
    Root Service Enabled
    Root Service Oracle Process Manager for PRD1_linux1 adopmnctl.sh Started
    Web Entry Point Services Enabled
    Web Entry Point Services Oracle HTTP Server PRD1_linux1 adapcctl.sh Failed
    Web Entry Point Services OracleTNSListenerAPPS_PRD1_linux1 adalnctl.sh Already Started
    Web Application Services Enabled
    Web Application Services OACORE OC4J Instance PRD1_linux1 adoacorectl.sh Failed
    Web Application Services FORMS OC4J Instance PRD1_linux1 adformsctl.sh Failed
    Web Application Services OAFM OC4J Instance PRD1_linux1 adoafmctl.sh Failed
    Batch Processing Services Enabled
    Batch Processing Services OracleConcMgrPRD1_linux1 adcmctl.sh Started
    Batch Processing Services Oracle Fulfillment Server PRD1_linux1 jtffmctl.sh Started
    Other Services Disabled
    Other Services OracleFormsServer-Forms PRD1_linux1 adformsrvctl.sh Disabled
    Other Services Oracle Metrics Client PRD1_linux1 adfmcctl.sh Disabled
    Other Services Oracle Metrics Server PRD1_linux1 adfmsctl.sh Disabled
    Other Services Oracle MWA Service PRD1_linux1 mwactlwrpr.sh Disabled
    ServiceControl is exiting with status 4
    Thanks.
    Joy.

  • 9iR2 Single Node RAC + RH80

    Hi all.
    I'm trying to install a single node RAC based on the single node RAC for Linux document on Metalink.
    The document was based on an Oracle 9iR1 database and RH 7.1. I have followed the steps, and I have a working kernel I built myself, the raw devices, and softdog hacked for testing only. But the service does not come up.
    I don't know if someone has done a single node RAC installation and can help me choose the right pieces.
    I don't have RHAS 2.1, but I do have a UnitedLinux 1.0 Server, patched.
    I want to try the RAC tech, but here (Ecuador, South America) it is difficult to find FireWire cards and disks, so I'm working around a single node installation.
    Any help will be appreciated.
    Fernando

    Hi
    Database version and GI version are 11.2.0.2.2. These are not multi-node RAC configurations; at any given time only a single instance will exist for any given database, something like ACTIVE and PASSIVE in hardware clusters such as VCS (Veritas Cluster Server).
    I agree with your failover scenario in multi-node (2 or more) RAC environments. In single-node clustering only one instance exists; like services in multi-node, here the whole instance will be recreated on an available node.
    Thanks,

  • Cluster shuts down when invocation service fails to serialize message

    Hi,
    I have a slight problem with the invocation service. When calling InvocationService.execute()/query() with an Invocable that's not serializable, the cluster shuts down. A stack trace containing a NotSerializableException is dumped, followed by:
    2005-12-16 10:42:45.623 Tangosol Coherence 3.1/321 <Error> (thread=main, member=2): PacketDispatcher: stopping cluster.
    I have tested with 3.01/317, 3.1/321 and 3.1/325, and they all behave the same. I admit that using the invocation service this way is a bug on our side, but we cannot guarantee the absence of such a bug, and the consequences are a little too harsh for us. Have I missed something here? Is there a way to avoid this behaviour?
    Thanks in advance,
    Chris

    Hi Chris,
    It is important to note that a fatal application error (like, in your case, a non-serializable Invocable object) will affect the local member at most. The "cluster" being referred to is the local cluster object, which manages the membership of the local node in the actual cluster. After it is stopped like this, it will be automatically restarted during the next call to the Coherence API. We will improve the error message to make this clearer.
    The fix for COH-370 will ensure that such an exception shows up only on the calling thread and does not affect the membership of the local node within the cluster.
    Hopefully this fix will go into the next release after 3.1.
    Regards,
    Jason
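    Until that fix ships, a possible interim workaround is to pre-serialize the task on the calling thread before submitting it, so a broken Invocable fails fast in application code rather than inside the service thread. Below is a minimal sketch; the SafeInvoker and EchoTask classes and the "MyInvocationService" name are made up for illustration, and the check only covers the default Java serializer:
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Invocable;
    import com.tangosol.net.InvocationService;

    public class SafeInvoker {
        /** Trivial illustrative task: returns the local member's description. */
        public static class EchoTask implements Invocable {
            private Object result;
            public void init(InvocationService service) { }
            public void run() {
                result = CacheFactory.getCluster().getLocalMember().toString();
            }
            public Object getResult() { return result; }
        }

        /**
         * Fail fast on the calling thread if the task cannot be
         * Java-serialized, instead of letting the invocation service hit
         * the NotSerializableException on its own thread.
         */
        public static void checkSerializable(Invocable task) {
            try {
                ObjectOutputStream out =
                        new ObjectOutputStream(new ByteArrayOutputStream());
                out.writeObject(task); // throws if any field is not serializable
                out.close();
            } catch (IOException e) {
                throw new IllegalArgumentException(
                        "Invocable is not serializable: " + task.getClass(), e);
            }
        }

        public static void main(String[] args) {
            InvocationService service =
                    CacheFactory.getInvocationService("MyInvocationService");
            Invocable task = new EchoTask();
            checkSerializable(task); // guard before handing the task to Coherence
            System.out.println(service.query(task, null)); // null = all members
        }
    }
    If the service is configured with a different serializer (POF, for example), the same idea applies, but the guard would need to use that serializer instead of plain Java serialization.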

  • NLB for Two FIM Service and portal servers in single domain

    Hi,
    I am currently working on a FIM project in which I need to install two FIM Service and Portal servers in a single domain.
    The customer wants to open the FIM Portal through a common URL for both servers.
    I only know that we need to set up NLB between the IIS instances on both servers. Can anyone advise on how we can achieve this?
    Any help would be really appreciated.
    Thanks,

    Actually, just configure NLB and make sure that your SharePoint site collection handles the access mapping for this common name. Best would be to create it with this name as the site name from the start.
    Same for the service: configure all nodes to use the same service name and configure NLB.
    Here is a blog post which should help with the details:
    http://blogs.msdn.com/b/agileer/archive/2011/06/28/setting-up-an-nlb-cluster-for-a-fim-portal-web-service.aspx
    Tomek Onyszko, memberOf Predica FIM Team (http://www.predica.pl), IdAM knowledge provider @ http://blog.predica.pl

  • Convert single node Multi Node

    We have a requirement: our single node is running on a Sun server (11.5.10.2 with 9.2.0.7), and we want to convert the single node to 2 nodes, like this:
    Node 1 = DB + CM + ADMIN
    Node 2 = FORMS, WEB
    I want to follow this procedure:
    (a) Copy the total application-tier file system from the single node to Node 2
    (b) Run the config clone on Node 1 and specify to run only the CM and ADMIN services on this node
    (c) Run the config clone on Node 2 and specify to run only the Forms and Web services on this node
    Is the above procedure correct?

    Node 1 = DB + CM + ADMIN
    Node 2 = FORMS, WEB
    I want to follow this procedure:
    (a) Copy the total application-tier file system from the single node to Node 2
    (b) Run the config clone on Node 1 and specify to run only the CM and ADMIN services on this node
    (c) Run the config clone on Node 2 and specify to run only the Forms and Web services on this node
    It is correct.

  • Invocation service not start automatically

    hi, I have configured the invocation service to start automatically as below; however, it did not work, and I still need code in the application to get the service via CacheFactory.getService("MyInvocationService") in order to start it... why is this so? thanks.
    Henry
    <invocation-scheme>
        <scheme-name>example-invocation</scheme-name>
        <service-name>MyInvocationService</service-name>
        <thread-count>10</thread-count>
        <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
    </invocation-scheme>

    922963 wrote:
    hi, I have configured the invocation service to start automatically as below; however, it did not work, and I still need code in the application to get the service via CacheFactory.getService("MyInvocationService") in order to start it... why is this so? thanks.
    Henry
    <invocation-scheme>
        <scheme-name>example-invocation</scheme-name>
        <service-name>MyInvocationService</service-name>
        <thread-count>10</thread-count>
        <autostart system-property="tangosol.coherence.invocation.autostart">true</autostart>
    </invocation-scheme>
    Hi,
    the autostart property provides information to the DefaultCacheServer.startServices() method and to the methods it calls. That method will start the service if it is marked to be autostarted. The DefaultCacheServer class (the standalone cache-server node script starts this application) is the only class which calls that method out of the box (possibly along with a few others, like MBeanConnector).
    But if you use your own main class, it is not called automatically; it is your responsibility to call that method (or DefaultCacheServer.start()) if you want the services to autostart, as in the sketch after this reply. Read the javadoc of the DefaultCacheServer class for more info on the methods in that class and the variations between them (e.g. the monitoring thread, which configurable cache factory is used, etc.).
    Best regards,
    Robert
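    To illustrate Robert's point, here is a minimal sketch of a custom main class that triggers the autostart machinery itself; the MyNode class name is made up, and the service name is assumed to match the <service-name> configured above:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.DefaultCacheServer;
    import com.tangosol.net.InvocationService;

    public class MyNode {
        public static void main(String[] args) {
            // Start every service marked <autostart>true</autostart> in the
            // cache configuration, just as the out-of-the-box cache server does.
            DefaultCacheServer.start();

            // No lookup is needed to start the service any more; this call
            // merely obtains a reference to the already-running service.
            InvocationService service = (InvocationService)
                    CacheFactory.getService("MyInvocationService");
            System.out.println("autostarted: " + service.isRunning());
        }
    }
    (Running DefaultCacheServer as the main class instead would also keep its service-monitoring thread alive, which is usually what a dedicated cache-server node wants.)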
