Test client pinned to single node in production
WL 6.1 sp2, Solaris 2.8
Currently we have a bunch of SLSBs deployed in a cluster out in production and
a web tier that usually gets and invokes a single SLSB, and they're running
happily. But every once in a while, we get an asymmetrical exception,
where one node in the cluster is giving us bad results. What I'd like to do
is write some simple test clients that can pin to a particular node and
diagnose just that node while the regular client (web tier) still
round-robins in production.
Our SLSBs do not have <stateless-clustering> elements at all in
weblogic-ejb-jar.xml, so <stateless-bean-is-clusterable> defaults to true.
My understanding is this means WL will round robin at 3 different levels:
jndi Context, EJBHome and EJBObject, unless the client is co-located with the
server, in which case WL will always pick the local object.
What I have tried to do is write the test client with a single url in
PROVIDER_URL and PIN_TO_PRIMARY_SERVER set to true in the InitialContext
construction. This does not seem to work; by the time I get the EJBHome,
create the EJBObject and invoke a test method, I see round-robin occurring.
I can see a reason FOR this behavior, and a counter-argument AGAINST it. The
reason WL is still round-robining is that only the Context is pinned to the
primary server; the subsequent EJBHome and EJBObject are cluster-aware, and
hence round-robin, which is in fact what is happening. But the argument
against this is that once I retrieve the InitialContext, the subsequent
EJBHome and EJBObject are all available locally. So shouldn't WL apply the
co-location optimization and hence never round-robin?
Here are some alternative approaches I've thought up so I can write a test
client that pins to a specific ejb server:
1) Create a second set of DDs in weblogic-ejb-jar.xml, this time setting
stateless-bean-is-clusterable to false, and have the test client use this set
for pinning.
2) Expose a co-located servlet that will accept ejb invocations (via SOAP or
customized RPC). Servlet invocation will always be ip-specific, and
hopefully co-location of the web and ejb tiers will keep the invocation on
that node.
Problem with #1 is 2 sets of DD's, hence 2 sets of EJBHomes/Objects that
behave slightly differently.
Problem with #2 is the complexity of a new web tier just for pinning, which
then also means the test client doesn't exactly replicate my actual web
client calls.
Is there a simple solution to isolate and diagnose a single node in a
production cluster? Am I missing something? Much appreciated!
Gene
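Either workaround could be driven the same way from the outside: run one identical test client once per managed server, forcing a single-node PROVIDER_URL each time. This is only a sketch; the host names, t3 port, jar names, and TestClient class are hypothetical placeholders (none appear in the post), and CLIENT_CMD is overridable so the loop itself can be exercised without a live cluster.

```shell
#!/bin/sh
# Sketch: run the same test client once per managed server, passing a
# single-node PROVIDER_URL each time. CLIENT_CMD, the host names, and the
# TestClient class are assumptions, not artifacts from the original post.
CLIENT_CMD=${CLIENT_CMD:-"java -cp weblogic.jar:testclient.jar TestClient"}

run_per_node() {
    failed=""
    for url in "$@"; do
        echo "=== testing $url ==="
        if ! $CLIENT_CMD "$url"; then
            failed="$failed $url"
        fi
    done
    # Report every node whose diagnostic run failed.
    [ -z "$failed" ] || { echo "failed nodes:$failed"; return 1; }
}

# Example (hypothetical hosts):
# run_per_node t3://node1:7001 t3://node2:7001
```

Whether each run actually stays pinned still depends on resolving the round-robin question above; the loop only isolates which node the client starts from.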
Similar Messages
How to switch production single node EBS environment to multi node?
We're considering moving our production EBS environment from single node to multi node to separate the middle-tier processing from the database. We've completed the steps via clone (adcfgclone.pl) in our test environment and are now testing; however, we're leery of running the same process in production. I've researched this process greatly, but most links point to either multi-to-single conversion or cloning. Please assist.
Use Rapid Clone -- Rapid Clone Documentation Resources For Release 11i and 12 [ID 799735.1]
All the steps are covered under "Cloning a Single-Node System to a Multi-Node System" section.
Thanks,
Hussein -
We have a product which is a custom application based on SharePoint Foundation 2010. Right now, for each of our clients we create a dedicated server and host the application in a standalone deployment. Now, the requirement is to host multiple clients in a
farm deployment.
Challenges are:
1. The product has the same name for the wsp that is deployed on the different client servers as of now. How do we distinguish between clients on the same farm?
2. Currently the product-specific css and jquery are in the 14 hive. These files will be different versions for different clients. How do we segregate that?
3. How many web applications are recommended in a single SPF 2010 farm? What are the challenges?
4. There are a couple of DBs created in SQL for the application. What is the best way to separate those per client?
Essentially it's the same product, but with a different version for each client, that we want to deploy in a single farm. What is the best practice to tackle this?
For the most part, these are not SharePoint questions per se, but product-specific questions you'd better ask the vendor about. To get into some more detail:
1. It totally depends on the scope of the solution. If its global, then you're out of luck and any changes you make affect all instances that use it. Better ask the vendor about it.
2. Not that many, let's say < 10, assuming your web applications have separate application pools. Check out http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx#WebApplication for
more info.
3. This is very application specific and really should be answered by the vendor. Not related to SharePoint at all.
4. Again, really depends on the product so better ask the vendor.
Good luck!
Kind regards,
Margriet Bruggeman
Lois & Clark IT Services
web site: http://www.loisandclark.eu
blog: http://www.sharepointdragons.com -
Testing single node ASM on Amazon EC2
Hi,
I'm using the 32bit 11g EE AMI on EC2 and have attached two 5 GB Elastic Block Storage units to /dev/sdf and /dev/sdg. What I would like to do is create an ASM instance using these 2 EBS volumes for testing purposes. Unfortunately, on 11g you can't create an ASM instance without starting the cluster services, even if it's a single node. When I try to start cluster services as root, they always time out and never start. Is there some particular reason this won't run on EC2? I'm just curious whether you've seen anyone do ASM on 2 EBS volumes (even in test mode) and if so, how they got it working.
Thanks,
Mark
You need to change the /etc/inittab entry for cssd
from: h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
to: h1:345:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
Then run init q and cssd should start.
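The inittab change above can be applied with a one-line sed. This sketch works on a scratch copy seeded with the entry quoted in the reply rather than on /etc/inittab itself, so it can be dry-run safely; only the runlevel field of the h1 (init.cssd) line is touched.

```shell
#!/bin/sh
# Sketch: apply the cssd runlevel fix (35 -> 345) to a scratch copy of the
# inittab entry. Edit the real /etc/inittab as root only after verifying.
copy=/tmp/inittab.copy
printf '%s\n' 'h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null' > "$copy"

# Change only the h1 entry's runlevel field.
sed 's/^h1:35:/h1:345:/' "$copy" > "$copy.new" && mv "$copy.new" "$copy"

# Show the edited entry.
grep '^h1:345:' "$copy"
# After editing the real /etc/inittab, run: init q
```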
Error converting a single node database to RAC with ConvertTomydb.xml -
my single node init.ora file:
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# Cache and I/O
db_block_size=8192
db_file_multiblock_read_count=16
# Cursors and Library Cache
open_cursors=300
# Database Identification
db_domain=""
db_name=mydb
# Diagnostics and Statistics
background_dump_dest=/u01/app/oracle/admin/mydb/bdump
core_dump_dest=/u01/app/oracle/admin/mydb/cdump
user_dump_dest=/u01/app/oracle/admin/mydb/udump
# File Configuration
control_files=("/u01/app/oracle/oradata/mydb/control01.ctl", "/u01/app/oracle/oradata/mydb/control02.ctl", "/u01/app/oracle/oradata/mydb/control03.ctl")
# Job Queues
job_queue_processes=10
# Miscellaneous
compatible=10.2.0.1.0
# Processes and Sessions
processes=150
# SGA Memory
sga_target=1083179008
# Security and Auditing
audit_file_dest=/u01/app/oracle/admin/mydb/adump
remote_login_passwordfile=EXCLUSIVE
# Shared Server
dispatchers="(PROTOCOL=TCP) (SERVICE=mydbXDB)"
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=360710144
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_tablespace=UNDOTBS1
my ConvertTomydb.xml, which is a copy of the ConvertToRAC.xml file:
<?xml version="1.0" encoding="UTF-8" ?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="ONLY">
      <!-- Specify current OracleHome of non-rac database for SourceDBHome -->
      <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
      <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
      <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
      <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
      <n:SourceDBInfo SID="mydb">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
      <n:NodeList>
        <n:Node name="linux1" />
        <n:Node name="linux2" />
      </n:NodeList>
      <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
      <n:InstancePrefix>mydb</n:InstancePrefix>
      <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist. -->
      <n:Listener port="1521" />
      <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
      <n:SharedStorage type="ASM">
        <!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
        <n:TargetDatabaseArea>+ORCL_DATA1</n:TargetDatabaseArea>
        <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea>+FLASH_RECOVERY_AREA</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>
Ran the xml file:
$ rconfig ConvertTomydb.xml
and got the error below:
[oracle@linux1 bin]$ sh rconfig ConvertTomydb.xml
<?xml version="1.0" ?>
<RConfig>
<ConvertToRAC>
<Convert>
<Response>
<Result code="1" >
Operation Failed
</Result>
<ErrorDetails>
Clusterware is not configured
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC></RConfig>
[oracle@linux1 bin]$
Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/rconfig.log
[main] [0:14:4:4] [RuntimeExec.runCommand:175] Returning from RunTimeExec.runCommand
oracle.ops.mgmt.cluster.RemoteShellException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
linux2.com: Connection refused
at oracle.ops.mgmt.nativesystem.UnixSystem.checkRemoteExecutionSetup(UnixSystem.java:1880)
at oracle.ops.mgmt.nativesystem.UnixSystem.getRemoteShellCmd(UnixSystem.java:1634)
at oracle.ops.mgmt.nativesystem.UnixSystem.createCommand(UnixSystem.java:614)
at oracle.ops.mgmt.nativesystem.UnixSystem.removeFile(UnixSystem.java:622)
at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1352)
at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
[main] [0:14:4:16] [UnixSystem.isSharedPath:1356] UnixSystem.isShared: creating file /u01/app/oracle/product/10.2.0/db_1/CFSFileName126249561289258204.tmp
[main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1817] checkRemoteExecutionSetup:: Checking user equivalence using Secured Shell '/usr/bin/ssh'
[main] [0:14:4:17] [UnixSystem.checkRemoteExecutionSetup:1819] checkRemoteExecutionSetup:: Running Unix command: /usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 linux2 /bin/true
oracle.ops.mgmt.cluster.SharedDeviceException: PRKC-1044 : Failed to check remote command execution setup for node linux2 using shells /usr/bin/ssh and /usr/bin/rsh
linux2.com: Connection refused
at oracle.ops.mgmt.nativesystem.UnixSystem.testCFSFile(UnixSystem.java:1444)
at oracle.ops.mgmt.nativesystem.UnixSystem.isSharedPath(UnixSystem.java:1402)
at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:916)
at oracle.ops.mgmt.cluster.Cluster.isSharedPath(Cluster.java:859)
at oracle.sysman.assistants.util.ClusterUtils.areSharedPaths(ClusterUtils.java:570)
at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:501)
at oracle.sysman.assistants.util.ClusterUtils.isShared(ClusterUtils.java:457)
at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.updateShared(CommonOPSAttributes.java:724)
at oracle.sysman.assistants.util.attributes.CommonOPSAttributes.setNodeNames(CommonOPSAttributes.java:207)
at oracle.sysman.assistants.rconfig.engine.Context.<init>(Context.java:54)
at oracle.sysman.assistants.rconfig.engine.ASMInstance.createUtilASMInstanceRAC(ASMInstance.java:109)
at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:245)
at oracle.sysman.assistants.rconfig.engine.Request.execute(Request.java:73)
at oracle.sysman.assistants.rconfig.engine.RConfigEngine.execute(RConfigEngine.java:65)
at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:85)
at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:51)
at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:130)
[main] [0:14:35:152] [Version.isPre10i:189] isPre10i.java: Returning FALSE
[main] [0:14:35:152] [UnixSystem.getCSSConfigType:1985] configFile=/etc/oracle/ocr.loc
[main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrconfig_loc props.val=/u02/oradata/orcl/OCRFile propValue=/u02/oradata/orcl/OCRFile
[main] [0:14:35:157] [Utils.getPropertyValue:221] keyName=ocrmirrorconfig_loc props.val=/u02/oradata/orcl/OCRFile_mirror propValue=/u02/oradata/orcl/OCRFile_mirror
[main] [0:14:35:157] [Utils.getPropertyValue:292] propName=local_only propValue=FALSE
[main] [0:14:35:157] [UnixSystem.getCSSConfigType:2029] configType=false
[main] [0:14:35:158] [Version.isPre10i:189] isPre10i.java: Returning FALSE
[main] [0:14:35:168] [OCRTree.init:201] calling OCRTree.init
[main] [0:14:35:169] [Version.isPre10i:189] isPre10i.java: Returning FALSE
[main] [0:14:35:177] [OCRTree.<init>:157] calling OCR.init at level 7
[main] [0:14:35:177] [HASContext.getInstance:190] Module init : 24
[main] [0:14:35:177] [HASContext.getInstance:214] Local Module init : 0
[main] [0:14:35:177] [HASContext.getInstance:249] HAS Context Allocated: 4 to oracle.ops.mgmt.has.ClusterLock@f47bf5
[main] [0:14:35:177] [ClusterLock.<init>:60] ClusterLock Instance created.
[main] [0:14:35:178] [OCR.getKeyValue:411] OCR.getKeyValue(SYSTEM.local_only)
[main] [0:14:35:178] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
[main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 6
[main] [0:14:35:179] [nativesystem.OCRNative.Native] getKeyValue: value is [false] dtype = 3
[main] [0:14:35:179] [OCRTreeHA.getLocalOnlyKeyValue:1697] OCRTreeHA localOnly string = false
[main] [0:14:35:180] [HASContext.getInstance:190] Module init : 6
[main] [0:14:35:180] [HASContext.getInstance:214] Local Module init : 0
[main] [0:14:35:180] [HASContext.getInstance:249] HAS Context Allocated: 5 to oracle.ops.mgmt.has.Util@f6438d
[main] [0:14:35:180] [Util.<init>:86] Util Instance created.
[main] [0:14:35:180] [has.UtilNative.Native] prsr_trace: Native: hasHAPrivilege
[main] [0:14:35:184] [HASContext.getInstance:190] Module init : 56
[main] [0:14:35:184] [HASContext.getInstance:214] Local Module init : 32
[main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: allocHASContext
[main] [0:14:35:184] [has.HASContextNative.Native]
allocHASContext: Came in
[main] [0:14:35:184] [has.HASContextNative.Native] prsr_trace: Native: prsr_initCLSR
[main] [0:14:35:185] [has.HASContextNative.Native]
allocHASContext: CLSR context [1]
[main] [0:14:35:185] [has.HASContextNative.Native]
allocHASContext: retval [1]
[main] [0:14:35:185] [HASContext.getInstance:249] HAS Context Allocated: 6 to oracle.ops.mgmt.has.ClusterAlias@18825b3
[main] [0:14:35:185] [ClusterAlias.<init>:85] ClusterAlias Instance created.
[main] [0:14:35:185] [has.UtilNative.Native] prsr_trace: Native: getCRSHome
[main] [0:14:35:186] [has.UtilNative.Native] prsr_trace: Native: getCRSHome crs_home=/u01/app/oracle/product/10.2.0/crs(**)
[main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1328] DATABASE.ASM.linux1.+asm1 does exist
[main] [0:14:35:280] [ASMTree.getASMInstanceOracleHome:1329] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
[main] [0:14:35:280] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
[main] [0:14:35:281] [OCR.getKeyValue:411] OCR.getKeyValue(DATABASE.ASM.linux1.+asm1.ORACLE_HOME)
[main] [0:14:35:281] [nativesystem.OCRNative.Native] getKeyValue: procr_open_key retval = 0
[main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: procr_get_value retval = 0, size = 36
[main] [0:14:35:282] [nativesystem.OCRNative.Native] getKeyValue: value is [u01/app/oracle/product/10.2.0/db_1] dtype = 3
[main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1346] getASMInstanceOracleHome:ohome=/u01/app/oracle/product/10.2.0/db_1
[main] [0:14:35:282] [ASMTree.getASMInstanceOracleHome:1367] Releasing shared CSS lock SRVM.ASM.DATABASE.ASM.linux1.+asm1
[main] [0:14:35:282] [has.ClusterLockNative.Native] prsr_trace: Native: unlock
[main] [0:14:35:802] [nativesystem.OCRNative.Native] keyExists: procr_close_key retval = 0
[main] [0:14:35:802] [ASMTree.getNodes:1236] DATABASE.ASM does exist
[main] [0:14:35:802] [ASMTree.getNodes:1237] Acquiring shared CSS lock SRVM.ASM.DATABASE.ASM
[main] [0:14:35:802] [has.ClusterLockNative.Native] prsr_trace: Native: acquireShared
[main] [0:14:35:803] [OCR.listSubKeys:615] OCR.listSubKeys(DATABASE.ASM)
[main] [0:14:35:803] [nativesystem.OCRNative.Native] listSubKeys: key_name=[DATABASE.ASM]
[main] [0:14:35:809] [GetASMNodeListOperation.run:78] Got nodes=[Ljava.lang.String;@11a75a2
[main] [0:14:35:809] [GetASMNodeListOperation.run:91] result status 0
[main] [0:14:35:809] [LocalCommand.execute:56] LocalCommand.execute: Returned from run method
[main] [0:14:35:810] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
[main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=FLASH_RECOVERY_AREA, size=95378, freeSize=88454, type=EXTERN, state=MOUNTED
[main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM2, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
[main] [0:14:35:811] [ASMInstanceRAC.loadDiskGroups:2260] diskgroup: instName=+ASM1, diskGroupName=ORCL_DATA1, size=95384, freeSize=39480, type=NORMAL, state=MOUNTED
[main] [0:14:35:858] [ASMInstance.setBestDiskGroup:1422] sql to be executed:=select name from v$asm_diskgroup where free_mb= (select max(free_mb) from v$asm_diskgroup)
[main] [0:14:35:864] [ASMInstance.setBestDiskGroup:1426] Setting best diskgroup....
[main] [0:14:35:888] [SQLEngine.doSQLSubstitution:2165] The substituted sql statement:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
[main] [0:14:35:888] [ASMInstance.setTemplates:1345] sql to be executed:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='FLASH_RECOVERY_AREA'
[main] [0:14:35:892] [ASMInstance.setTemplates:1349] Getting templates for diskgroup: oracle.sysman.assistants.util.asm.DiskGroup@170888e
[main] [0:14:35:892] [ASMInstance.setTemplates:1357] template: PARAMETERFILE
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DUMPSET
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: DATAGUARDCONFIG
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: FLASHBACK
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: CHANGETRACKING
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: XTRANSPORT
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: AUTOBACKUP
[main] [0:14:35:893] [ASMInstance.setTemplates:1357] template: BACKUPSET
[main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: TEMPFILE
[main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: DATAFILE
[main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ONLINELOG
[main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: ARCHIVELOG
[main] [0:14:35:894] [ASMInstance.setTemplates:1357] template: CONTROLFILE
[main] [0:14:35:894] [ASMInstance.createUtilASMInstanceRAC:113] Diskgroups loaded
[main] [0:14:35:894] [LocalNodeCheck.checkLocalNode:107] Performing LocalNodeCheck
[main] [0:14:35:894] [OracleHome.getNodeNames:270] inside getNodeNames
[main] [0:14:36:116] [OracleHome.isClusterInstalled:252] bClusterInstalled=false
[main] [0:14:36:120] [Step.execute:251] STEP Result=Clusterware is not configured
[main] [0:14:36:121] [Step.execute:280] Returning result:Operation Failed
[main] [0:14:36:121] [RConfigEngine.execute:67] bAsyncJob=false
[main] [0:14:36:124] [RConfigEngine.execute:76] Result=<?xml version="1.0" ?>
<RConfig>
<ConvertToRAC>
<Convert>
<Response>
<Result code="1" >
Operation Failed
</Result>
<ErrorDetails>
Clusterware is not configured
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC></RConfig>
Log file from /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/rconfig/mydb/sqllog
MYDB mydb 2622254467
10.2.0.1.0 ACTIVE
cluster_database FALSE
undo_management AUTO
db_domain
dispatchers (PROTOCOL=TCP) (SERVICE=mydbXDB)
background_dump_dest /u01/app/oracle/admin/mydb/bdump
user_dump_dest /u01/app/oracle/admin/mydb/udump
core_dump_dest /u01/app/oracle/admin/mydb/cdump
audit_file_dest /u01/app/oracle/admin/mydb/adump
MYDB mydb 2622254467
10.2.0.1.0 ACTIVE
cluster_database FALSE
undo_management AUTO
db_domain
dispatchers (PROTOCOL=TCP) (SERVICE=mydbXDB)
background_dump_dest /u01/app/oracle/admin/mydb/bdump
user_dump_dest /u01/app/oracle/admin/mydb/udump
core_dump_dest /u01/app/oracle/admin/mydb/cdump
audit_file_dest /u01/app/oracle/admin/mydb/adump
MYDB mydb 2622254467
10.2.0.1.0 ACTIVE
cluster_database TRUE
undo_management AUTO
db_domain
dispatchers (PROTOCOL=TCP) (SERVICE=mydbXDB)
background_dump_dest /u01/app/oracle/admin/mydb/bdump
user_dump_dest /u01/app/oracle/admin/mydb/udump
core_dump_dest /u01/app/oracle/admin/mydb/cdump
audit_file_dest /u01/app/oracle/admin/mydb/adump
MYDB mydb 2622254467
10.2.0.1.0 ACTIVE
cluster_database TRUE
undo_management AUTO
db_domain
dispatchers (PROTOCOL=TCP) (SERVICE=mydbXDB)
background_dump_dest /u01/app/oracle/admin/mydb/bdump
user_dump_dest /u01/app/oracle/admin/mydb/udump
core_dump_dest /u01/app/oracle/admin/mydb/cdump
audit_file_dest /u01/app/oracle/admin/mydb/adump
MYDB mydb 2622254467
10.2.0.1.0 ACTIVE
cluster_database TRUE
undo_management AUTO
db_domain
dispatchers (PROTOCOL=TCP) (SERVICE=mydbXDB)
background_dump_dest /u01/app/oracle/admin/mydb/bdump
user_dump_dest /u01/app/oracle/admin/mydb/udump
core_dump_dest /u01/app/oracle/admin/mydb/cdump
audit_file_dest /u01/app/oracle/admin/mydb/adump
Please help me figure out where I am making a mistake.
Thanks
1) I have created a single node standard database called mydb in the /u01/app/oracle/product/10.2.0/db_1 home (hostname linux1)
2) installed CRS and ASM on linux1 and linux2, with shared storage on ASM (an external HD running IEEE 1394 cards and ports). No database is created on linux1 or linux2.
3) I want to convert mydb to a RAC database, with instance mydb1 on linux1 and mydb2 on linux2 respectively.
4) copied and modified the xml as you see above, called ConvertTomydb.xml, to the $ORACLE_HOME/bin directory
5) when I run
$ rconfig ConvertTomydb.xml from the $ORACLE_HOME/bin directory, I get the following error:
<ConvertToRAC>
<Convert>
<Response>
<Result code="1" >
Operation Failed
</Result>
<ErrorDetails>
Clusterware is not configured
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC>
$
Please see my crs_stat -t command output
Name Type Target State Host
ora....SM1.asm application ONLINE ONLINE linux1
ora....X1.lsnr application ONLINE ONLINE linux1
ora.linux1.gsd application ONLINE ONLINE linux1
ora.linux1.ons application ONLINE ONLINE linux1
ora.linux1.vip application ONLINE ONLINE linux1
ora....SM2.asm application ONLINE ONLINE linux2
ora....X2.lsnr application ONLINE ONLINE linux2
ora.linux2.gsd application ONLINE ONLINE linux2
ora.linux2.ons application ONLINE ONLINE linux2
ora.linux2.vip application ONLINE ONLINE linux2
ora.orcl.db application ONLINE ONLINE linux1
ora....l1.inst application ONLINE ONLINE linux1
ora....l2.inst application ONLINE ONLINE linux2
ora....test.cs application ONLINE ONLINE linux1
ora....cl1.srv application ONLINE ONLINE linux1
ora....cl2.srv application ONLINE UNKNOWN linux2
please see the output from olsnodes command
[oracle@linux1 bin]$ olsnodes
linux1
linux2
[oracle@linux1 bin]$
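The PRKC-1044 in the rconfig log points at broken ssh user equivalence to linux2 ("Connection refused"), checked with the exact ssh options shown in the trace. A small sketch of that pre-check is below; the node names come from the thread, the rest is an assumption, and SSH_CMD is overridable so the loop itself can be exercised without real hosts.

```shell
#!/bin/sh
# Sketch: check passwordless-ssh "user equivalence" to each RAC node before
# rerunning rconfig, using the same ssh options the PRKC-1044 trace shows.
# SSH_CMD is overridable so the loop can be exercised without live hosts.
SSH_CMD=${SSH_CMD:-"/usr/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0"}

check_user_equiv() {
    rc=0
    for node in "$@"; do
        if $SSH_CMD "$node" /bin/true 2>/dev/null; then
            echo "ok: $node"
        else
            echo "FAIL: $node (set up ssh keys for this node, then retry rconfig)"
            rc=1
        fi
    done
    return $rc
}

# Example: check_user_equiv linux1 linux2
```

Every node in the ConvertTomydb.xml NodeList should report "ok" before rconfig is run again.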
What is your cache fusion interconnect strategy?
I don't know about this; please let me know where I can find the answer and what kind of command I have to use to get it.
damorgan, please let me know if I answered your questions. If not, let me know and I can give as much detail as possible. I really appreciate your help.
Thanks -
Creating DR standby from RAC to single NODE
OS and Database versions Primary:
node1:- OEL 5.7
node2:- OEL 5.7
inst1:- prod1 Oracle 11.2.0.3
inst2:- prod2 Oracle 11.2.0.3
Standby:
OEL 5.7
Oracle 11.2.0.3
NOTE:- Creating Standby on Single node.
My scenario:
============
node1:- linuxdb1
node2:- linuxdb2
inst1:- prod1
inst2:- prod2
Point1:- We have a 2 node RAC with ASM disks and the database is okay. Now I want to make a DR on a single node, with the ASM disks already created.
Point2:- grid and oracle users are on the RAC prod database & the same grid and oracle users are on the SINGLE node DR
Point3:- Took an RMAN backup from the prod database ( 2 node RAC ) and copied it to DR as shown below:
RMAN>backup device type disk format '/u02/stby/%U' database plus archivelog;
RMAN>backup device type disk format '/u02/stby/%U' current controlfile for standby;
scp -rp /u02/stby/* [email protected]:/u02/stby/
Point4:- Now when I run the below command on DR ( oracle@linuxdr2 )
[oracle@linuxdr2 stby]$ rman target sys/pwd@prod1 auxiliary /
Recovery Manager: Release 11.2.0.3.0 - Production on Sun Mar 10 13:18:20 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: PROD (DBID=220323208)
connected to auxiliary database: PROD (not mounted)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 10-MAR-13
using target database control file instead of recovery catalog
configuration for DISK channel 2 is ignored
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=1142 device type=DISK
contents of Memory Script:
restore clone standby controlfile;
executing Memory Script
Starting restore at 10-MAR-13
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/stby/33o464m6_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 03/10/2013 13:19:22
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred
Can you please help me to resolve the issue.
Thanks,
Vikhar
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 03/10/2013 13:19:22
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred
Any errors in the alert log file?
Have you tried once again ?
Why can't you go for RMAN duplicate from active database? If you have any concerns, then enable RMAN trace and share the information:
rman target / log=/u01/test/rman_log.txt trace=/u01/test/rmantrace.log -
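The active-database duplication the reply suggests avoids staging backups on the standby host at all. A generate-only sketch is below: it writes the RMAN script out for review instead of executing it, and the connect strings in the trailing comment are placeholders, not values from the thread.

```shell
#!/bin/sh
# Sketch: write out (not execute) an RMAN script for the active-database
# standby duplication the reply suggests. Review it, then feed it to rman
# with real connect strings; the sys/*** values below are placeholders.
cat > /tmp/dup_standby.rman <<'EOF'
duplicate target database for standby
  from active database
  dorecover
  nofilenamecheck;
EOF

cat /tmp/dup_standby.rman
# Then, for example:
#   rman target sys/***@prod1 auxiliary sys/***@stby \
#        cmdfile=/tmp/dup_standby.rman log=/u01/test/rman_log.txt
```

With this form, RMAN copies the datafiles directly over SQL*Net, so the /u02/stby backup pieces and the scp step are not needed.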
Transforming from multiple node to single node
Hi,
I'm new to BPEL and am trying to transform data from two different repeating nodes into a single node of a complex type.
For example, where 'req1' and 'req2' are the repeating nodes in the request:
Request::
<req1>
<a1/>
</req1>
<req2>
<b1/>
</req2>
Response::
<c>
<ca1/>
<cb2/>
</c>
Can anyone please help me find a solution?
Hope the pseudo-logic is clear!
Edited by: c devi on Oct 18, 2011 7:12 AM
Hi,
You can try this,
<xsl:template match="/">
<client:processResponse>
<xsl:for-each select="$Var2.payload/client:process">
<client:result>
<xsl:value-of select="client:input"/>
</client:result>
</xsl:for-each>
<xsl:for-each select="/client:process">
<client:result>
<xsl:value-of select="client:input"/>
</client:result>
</xsl:for-each>
</client:processResponse>
</xsl:template>
Source:
<req1>
<a1/>
</req1>
<req2>
<b1/>
</req2>
Target
<c>
<c1/>
</c>
You can do like this,
Inside the Transform, first put a for-each over the target variable, i.e. just above c1. Then right-click the for-each > Add XSL Node > 'Clone' for-each.
This clones the node, i.e. you will now have two targets of the same type,
like this:
Target
<c>
<c1/>
</c>
<c>
<c1/>
</c>
Now join the wires from source to target for both of them. Then right-click the transform (in the center) and choose Test; this will give you the output.
-Yatan -
GI installation on a single-node cluster error.
Hello, I am trying to install GI on a single-node cluster (Solaris 10 / SPARC), but the root.sh script fails with the following error (this is not a GI installation for a Standalone Server):
root@selvac./dev/ASM/OCRVTD_DG # /app/oracle/grid/11.2/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /app/oracle/grid/11.2
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/oracle/grid/11.2/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'selvac'
CRS-2676: Start of 'ora.mdnsd' on 'selvac' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'selvac'
CRS-2676: Start of 'ora.gpnpd' on 'selvac' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'selvac'
CRS-2672: Attempting to start 'ora.gipcd' on 'selvac'
CRS-2676: Start of 'ora.cssdmonitor' on 'selvac' succeeded
CRS-2676: Start of 'ora.gipcd' on 'selvac' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'selvac'
CRS-2672: Attempting to start 'ora.diskmon' on 'selvac'
CRS-2676: Start of 'ora.diskmon' on 'selvac' succeeded
CRS-2676: Start of 'ora.cssd' on 'selvac' succeeded
ASM created and started successfully.
Disk Group OCRVTD_DG created successfully.
The ora.asm resource is not ONLINE
Did not succssfully configure and start ASM at /app/oracle/grid/11.2/crs/install/crsconfig_lib.pm line 6465.
/app/oracle/grid/11.2/perl/bin/perl -I/app/oracle/grid/11.2/perl/lib -I/app/oracle/grid/11.2/crs/install /app/oracle/grid/11.2/crs/install/rootcrs.pl execution failed
I also found the "PRVF-5150: Path OCRL:DISK1 is not a valid path on all nodes" error, but since I have read that it is a bug, I ignored it. But...
I think my ASM disk group for OCR and voting is OK: accessible by the grid user with 660 permissions. It seems ASM does not start, or does not start in time.
Any help is welcome.
Thanks in advance.
Thanks a lot for the hint. I had already checked this doc, but I think it is not the problem. Actually, the error "ora.asm is not online" is not correct. After root.sh fails, ora.asm is ONLINE:
root@selvac./app/oracle/grid/11.2/bin # ./crsctl check resource ora.asm -init
root@selvac./app/oracle/grid/11.2/bin # ./crsctl stat resource ora.asm -init
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE on selvac
The last part of the /app/oracle/grid/11.2/cfgtoollogs/crsconfig/rootcrs_selvac.log file reads :
>
ASM created and started successfully.
Disk Group OCRVTD_DG created successfully.
End Command output
2011-04-14 13:24:16: Executing cmd: /app/oracle/grid/11.2/bin/crsctl check resource ora.asm -init
2011-04-14 13:24:17: Executing cmd: /app/oracle/grid/11.2/bin/crsctl status resource ora.asm -init
2011-04-14 13:24:17: Command output:
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=OFFLINE
End Command output
2011-04-14 13:24:17: Checking the status of ora.asm
[... the same "crsctl status resource ora.asm -init" check repeats every 5 seconds through 13:25:04, reporting STATE=OFFLINE each time ...]
2011-04-14 13:25:09: The ora.asm resource is not ONLINE
2011-04-14 13:25:09: Running as user grid: /app/oracle/grid/11.2/bin/cluutil -ckpt -oraclebase /app/grid -writeckpt -name ROOTCRS_BOOTCFG -state FAIL
2011-04-14 13:25:09: s_run_as_user2: Running /bin/su grid -c ' /app/oracle/grid/11.2/bin/cluutil -ckpt -oraclebase /app/grid -writeckpt -name ROOTCRS_BOOTCFG -state FAIL '
2011-04-14 13:25:10: Removing file /var/tmp/mbahSaGPn
2011-04-14 13:25:10: Successfully removed file: /var/tmp/mbahSaGPn
2011-04-14 13:25:10: /bin/su successfully executed
2011-04-14 13:25:10: Succeeded in writing the checkpoint:'ROOTCRS_BOOTCFG' with status:FAIL
2011-04-14 13:25:10: ###### Begin DIE Stack Trace ######
2011-04-14 13:25:10: Package File Line Calling
2011-04-14 13:25:10: --------------- -------------------- ---- ----------
2011-04-14 13:25:10: 1: main rootcrs.pl 322 crsconfig_lib::dietrap
2011-04-14 13:25:10: 2: crsconfig_lib crsconfig_lib.pm 6465 main::__ANON__
2011-04-14 13:25:10: 3: crsconfig_lib crsconfig_lib.pm 6390 crsconfig_lib::perform_initial_config
2011-04-14 13:25:10: 4: main rootcrs.pl 671 crsconfig_lib::perform_init_config
2011-04-14 13:25:10: ####### End DIE Stack Trace #######
2011-04-14 13:25:10: 'ROOTCRS_BOOTCFG' checkpoint has failed
So this must be a bug. During root.sh execution ora.asm is OFFLINE, but after the failure it is ONLINE. It might be a question of waiting/retrying or a timeout: the "Checking the status of ora.asm" command is repeated several times during root.sh, but perhaps not often enough. Now root.sh has failed and the installation is halted, but ASM is ONLINE.
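If the root cause really is ASM coming ONLINE just after root.sh gives up, the polling window is the suspect. The repeated check in the log reduces to a loop like the following; a generic sketch, where the 5-second interval matches the log but the attempt count and messages are mine:

```shell
#!/bin/sh
# Sketch: poll a status command until its output reports ONLINE, or give up.
# Mirrors the repeated "crsctl status resource ora.asm -init" checks above.
wait_for_online() {
    cmd=$1        # command whose output is checked
    attempts=$2   # how many times to poll
    interval=$3   # seconds between polls
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if $cmd | grep -q '^STATE=ONLINE'; then
            echo "resource is ONLINE"
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    echo "resource is not ONLINE after $attempts checks"
    return 1
}

# Real usage, mirroring the log (with a longer window than rootcrs.pl allows):
# wait_for_online "/app/oracle/grid/11.2/bin/crsctl status resource ora.asm -init" 120 5
```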
Any other Idea?
Thanks again. -
SWN_SELSEN not working in test client after client-copy
Hi all,
We had been using extended notifications without problems until the test client was refreshed by a client-copy from the production client. In the test client, report SWN_SELSEN does not generate notifications even though there are newly generated workflow items. Because of the client-copy, I found that tables SWN_NOTIF, SWN_NOTIFTSTMP, SWN_SENDLOG, and SWN_TIMESTAMPS are all empty in the test client.
Please advise how to get report SWN_SELSEN working again in the test client. Thanks.
<< Additional information >>
(1) Ran report SWN_SELSEN_TEST with test case 5 - Simulate Send Only. After this, some entries were written to the above tables. But on testing with a new workflow item, the scheduled report SWN_SELSEN still failed to trigger a notification.
(2) Ran report SWN_SELSEN_TEST with test case 4 - Send One Message Only. It triggered notifications for all active workflow items and sent out several hundred emails, including one for the workflow item created in step 1. However, on further testing with a new workflow item, the scheduled report SWN_SELSEN is still unable to trigger a notification.
I think there may be something else missing, like the delta timestamp. Please help.
Regards,
Donald
Hello,
I've experienced a few cases where stopping SWN_SELSEN and then restarting it helped to fill up SWN_TIMESTAMPS with proper values. How does that table look after you run it?
What is SWN_SELSEN_TEST?
Check in SLG1 to get an idea of why it's not working. You can also run SWN_SELSEN in the foreground and debug it to see exactly where and why it goes wrong. Also check ST22.
regards
Rick Bakker
hanabi technology -
URGENT*** Test Client testpoint not generated in WebLogic 10.3.0.0
Hi,
I have created a very basic "Java EE 1.5 with support for JAX-WS annotations" web service and am deploying it to WebLogic Server 10.3.0.0. I am using JDeveloper 11g to convert my Java file to a web service and deploy it to WLS directly from the JDev IDE.
The problem is that I cannot see the "Test Client" test point to test my web service. When I access the web service via the WLS admin console, I can see only the "?WSDL" test point on the TESTING tab, and the WSDL is accessible. But the test client is not generated.
Please help me: what am I missing, and why is WLS not generating the test client? Am I missing a server-side setting? Below is my web service code.
package edu.ws;
import javax.jws.WebService;
@WebService(serviceName = "demoWS", portName = "demoWSSoapHttpPort")
public class demoWS {
    public demoWS() {
    }
    public String fullName(String fn, String ln) {
        String fullName = fn + ln;
        return fullName;
    }
}
WSDL file:
<!--
Published by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is Oracle JAX-WS 2.1.3-07/10/2008 08:41 PM(bt).
-->
<!--
Generated by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is Oracle JAX-WS 2.1.3-07/10/2008 08:41 PM(bt).
-->
<definitions targetNamespace="http://ws.syu.edu/" name="demoWS">
<types>
<xsd:schema>
<xsd:import namespace="http://ws.syu.edu/" schemaLocation="http://192.168.88.131:7001/DemoWebService-DemoWS-context-root/demoWSSoapHttpPort?xsd=1"/>
</xsd:schema>
</types>
<message name="fullName">
<part name="parameters" element="tns:fullName"/>
</message>
<message name="fullNameResponse">
<part name="parameters" element="tns:fullNameResponse"/>
</message>
<portType name="demoWS">
<operation name="fullName">
<input message="tns:fullName"/>
<output message="tns:fullNameResponse"/>
</operation>
</portType>
<binding name="demoWSSoapHttpPortBinding" type="tns:demoWS">
<soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
<operation name="fullName">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="demoWS">
<port name="demoWSSoapHttpPort" binding="tns:demoWSSoapHttpPortBinding">
<soap:address location="http://192.168.88.131:7001/DemoWebService-DemoWS-context-root/demoWSSoapHttpPort"/>
</port>
</service>
</definitions>
Thanks
klogube
Hi LJ,
I did the same. I configured my WLS for development mode by setting production_mode = false. But I still cannot open http://localhost:7001/wls_utc, and so far I cannot test my WS :(
Please check my config file and let me know where else I am supposed to set production mode to false!!
#!/bin/sh
# WARNING: This file is created by the Configuration Wizard.
# Any changes to this script may be lost when adding extensions to this configuration.
# --- Start Functions ---
BP=100
SP=$BP
pushd()
{
    if [ -z "$1" ]
    then
        return
    fi
    SP=`expr $SP - 1`
    eval _stack$SP=`pwd`
    cd $1
    return
}
popd()
{
    if [ $SP -eq $BP ]
    then
        return
    fi
    eval cd \${_stack$SP}
    SP=`expr $SP + 1`
    return
}
# --- End Functions ---
# This script is used to setup the needed environment to be able to start Weblogic Server in this domain.
# This script initializes the following variables before calling commEnv to set other variables:
# WL_HOME - The BEA home directory of your WebLogic installation.
# JAVA_VM - The desired Java VM to use. You can set this environment variable before calling
# this script to switch between Sun or BEA or just have the default be set.
# JAVA_HOME - Location of the version of Java used to start WebLogic
# Server. Depends directly on which JAVA_VM value is set by default or by the environment.
# USER_MEM_ARGS - The variable to override the standard memory arguments
# passed to java.
# PRODUCTION_MODE - The variable that determines whether Weblogic Server is started in production mode.
# DOMAIN_PRODUCTION_MODE
# - The variable that determines whether the workshop related settings like the debugger,
# testconsole or iterativedev should be enabled. ONLY settable using the
# command-line parameter named production
# NOTE: Specifying the production command-line param will force
# the server to start in production mode.
# Other variables used in this script include:
# SERVER_NAME - Name of the weblogic server.
# JAVA_OPTIONS - Java command-line options for running the server. (These
# will be tagged on to the end of the JAVA_VM and
# MEM_ARGS)
# For additional information, refer to the WebLogic Server Administration
# Console Online Help(http://e-docs.bea.com/wls/docs103/ConsoleHelp/startstop.html).
ORACLE_HOME="/opt/oracle/middleware/jdeveloper"
export ORACLE_HOME
WL_HOME="/opt/oracle/middleware/wlserver_10.3"
export WL_HOME
BEA_JAVA_HOME="/opt/oracle/middleware/jrockit_160_05"
export BEA_JAVA_HOME
SUN_JAVA_HOME=""
export SUN_JAVA_HOME
if [ "${JAVA_VENDOR}" = "BEA" ] ; then
JAVA_HOME="${BEA_JAVA_HOME}"
export JAVA_HOME
else
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
JAVA_HOME="${SUN_JAVA_HOME}"
export JAVA_HOME
else
JAVA_VENDOR="BEA"
export JAVA_VENDOR
JAVA_HOME="/opt/oracle/middleware/jrockit_160_05"
export JAVA_HOME
fi
fi
# We need to reset the value of JAVA_HOME to get it shortened AND
# we can not shorten it above because immediate variable expansion will blank it
JAVA_HOME="${JAVA_HOME}"
export JAVA_HOME
SAMPLES_HOME="${WL_HOME}/samples"
export SAMPLES_HOME
DOMAIN_HOME="/opt/oracle/middleware/user_projects/domains/base_domain"
export DOMAIN_HOME
LONG_DOMAIN_HOME="/opt/oracle/middleware/user_projects/domains/base_domain"
export LONG_DOMAIN_HOME
if [ "${DEBUG_PORT}" = "" ] ; then
DEBUG_PORT="8453"
export DEBUG_PORT
fi
if [ "${SERVER_NAME}" = "" ] ; then
SERVER_NAME="AdminServer"
export SERVER_NAME
fi
POINTBASE_FLAG="false"
export POINTBASE_FLAG
enableHotswapFlag=""
export enableHotswapFlag
PRODUCTION_MODE="false"
export PRODUCTION_MODE
doExitFlag="false"
export doExitFlag
verboseLoggingFlag="false"
export verboseLoggingFlag
while [ $# -gt 0 ]
do
case $1 in
nodebug)
    debugFlag="false"
    export debugFlag
    ;;
production)
    DOMAIN_PRODUCTION_MODE="true"
    export DOMAIN_PRODUCTION_MODE
    ;;
notestconsole)
    testConsoleFlag="false"
    export testConsoleFlag
    ;;
noiterativedev)
    iterativeDevFlag="false"
    export iterativeDevFlag
    ;;
noLogErrorsToConsole)
    logErrorsToConsoleFlag="false"
    export logErrorsToConsoleFlag
    ;;
nopointbase)
    POINTBASE_FLAG="false"
    export POINTBASE_FLAG
    ;;
doExit)
    doExitFlag="true"
    export doExitFlag
    ;;
noExit)
    doExitFlag="false"
    export doExitFlag
    ;;
verbose)
    verboseLoggingFlag="true"
    export verboseLoggingFlag
    ;;
enableHotswap)
    enableHotswapFlag="-javaagent:${WL_HOME}/server/lib/diagnostics-agent.jar"
    export enableHotswapFlag
    ;;
*)
    PROXY_SETTINGS="${PROXY_SETTINGS} $1"
    export PROXY_SETTINGS
    ;;
esac
shift
done
MEM_DEV_ARGS=""
export MEM_DEV_ARGS
if [ "${DOMAIN_PRODUCTION_MODE}" = "true" ] ; then
PRODUCTION_MODE="${DOMAIN_PRODUCTION_MODE}"
export PRODUCTION_MODE
fi
if [ "${PRODUCTION_MODE}" = "true" ] ; then
debugFlag="false"
export debugFlag
testConsoleFlag="false"
export testConsoleFlag
iterativeDevFlag="false"
export iterativeDevFlag
fi
# If you want to override the default Patch Classpath, Library Path and Path for this domain,
# Please uncomment the following lines and add a valid value for the environment variables
# set PATCH_CLASSPATH=[myPatchClasspath] (windows)
# set PATCH_LIBPATH=[myPatchLibpath] (windows)
# set PATCH_PATH=[myPatchPath] (windows)
# PATCH_CLASSPATH=[myPatchClasspath] (unix)
# PATCH_LIBPATH=[myPatchLibpath] (unix)
# PATCH_PATH=[myPatchPath] (unix)
. ${WL_HOME}/common/bin/commEnv.sh
WLS_HOME="${WL_HOME}/server"
export WLS_HOME
MEM_ARGS="-Xms256m -Xmx512m"
export MEM_ARGS
MEM_PERM_SIZE="-XX:PermSize=48m"
export MEM_PERM_SIZE
MEM_MAX_PERM_SIZE="-XX:MaxPermSize=192m"
export MEM_MAX_PERM_SIZE
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
if [ "${PRODUCTION_MODE}" = "" ] ; then
MEM_DEV_ARGS="-XX:CompileThreshold=8000 ${MEM_PERM_SIZE} "
export MEM_DEV_ARGS
fi
fi
# Had to have a separate test here BECAUSE of immediate variable expansion on windows
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
MEM_ARGS="${MEM_ARGS} ${MEM_DEV_ARGS} ${MEM_MAX_PERM_SIZE}"
export MEM_ARGS
fi
if [ "${JAVA_VENDOR}" = "HP" ] ; then
MEM_ARGS="${MEM_ARGS} ${MEM_MAX_PERM_SIZE}"
export MEM_ARGS
fi
# IF USER_MEM_ARGS the environment variable is set, use it to override ALL MEM_ARGS values
if [ "${USER_MEM_ARGS}" != "" ] ; then
MEM_ARGS="${USER_MEM_ARGS}"
export MEM_ARGS
fi
JAVA_PROPERTIES="-Dplatform.home=${WL_HOME} -Dwls.home=${WLS_HOME} -Dweblogic.home=${WLS_HOME} "
export JAVA_PROPERTIES
# To use Java Authorization Contract for Containers (JACC) in this domain,
# please uncomment the following section. If there are multiple machines in
# your domain, be sure to edit the setDomainEnv in the associated domain on
# each machine.
# -Djava.security.manager
# -Djava.security.policy=location of weblogic.policy
# -Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy
# -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl
# -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl
EXTRA_JAVA_PROPERTIES="-Ddomain.home=${DOMAIN_HOME} -Doracle.home=${ORACLE_HOME} -Doracle.security.jps.config=${DOMAIN_HOME}/config/oracle/jps-config.xml -Doracle.dms.context=OFF -Djava.protocol.handler.pkgs=oracle.mds.net.protocol ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
JAVA_PROPERTIES="${JAVA_PROPERTIES} ${EXTRA_JAVA_PROPERTIES}"
export JAVA_PROPERTIES
ARDIR="${WL_HOME}/server/lib"
export ARDIR
pushd ${LONG_DOMAIN_HOME}
# Clustering support (edit for your cluster!)
if [ "${ADMIN_URL}" = "" ] ; then
# The then part of this block is telling us we are either starting an admin server OR we are non-clustered
CLUSTER_PROPERTIES="-Dweblogic.management.discover=true"
export CLUSTER_PROPERTIES
else
CLUSTER_PROPERTIES="-Dweblogic.management.discover=false -Dweblogic.management.server=${ADMIN_URL}"
export CLUSTER_PROPERTIES
fi
if [ "${LOG4J_CONFIG_FILE}" != "" ] ; then
JAVA_PROPERTIES="${JAVA_PROPERTIES} -Dlog4j.configuration=file:${LOG4J_CONFIG_FILE}"
export JAVA_PROPERTIES
fi
JAVA_PROPERTIES="${JAVA_PROPERTIES} ${CLUSTER_PROPERTIES}"
export JAVA_PROPERTIES
# Clear the pre_classpath here in case an application template wants to set it before the larger pre_classpath is invoked below
PRE_CLASSPATH=""
export PRE_CLASSPATH
JAVA_DEBUG=""
export JAVA_DEBUG
if [ "${debugFlag}" = "true" ] ; then
JAVA_DEBUG="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=${DEBUG_PORT},server=y,suspend=n -Djava.compiler=NONE"
export JAVA_DEBUG
JAVA_OPTIONS="${JAVA_OPTIONS} ${enableHotswapFlag} -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole..."
export JAVA_OPTIONS
else
JAVA_OPTIONS="${JAVA_OPTIONS} ${enableHotswapFlag} -da"
export JAVA_OPTIONS
fi
if [ ! -d ${JAVA_HOME}/lib ] ; then
echo "The JRE was not found in directory ${JAVA_HOME}. (JAVA_HOME)"
echo "Please edit your environment and set the JAVA_HOME"
echo "variable to point to the root directory of your Java installation."
popd
read _val
exit
fi
if [ "${POINTBASE_FLAG}" = "true" ] ; then
DATABASE_CLASSPATH="${POINTBASE_CLASSPATH}"
export DATABASE_CLASSPATH
else
DATABASE_CLASSPATH="${POINTBASE_CLIENT_CLASSPATH}"
export DATABASE_CLASSPATH
fi
POST_CLASSPATH=""
export POST_CLASSPATH
POST_CLASSPATH="${ORACLE_HOME}/modules/features/adf.share_11.1.1.jar${CLASSPATHSEP}${POST_CLASSPATH}"
export POST_CLASSPATH
POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}${DATABASE_CLASSPATH}${CLASSPATHSEP}${ARDIR}/xqrl.jar"
export POST_CLASSPATH
# PROFILING SUPPORT
JAVA_PROFILE=""
export JAVA_PROFILE
SERVER_CLASS="weblogic.Server"
export SERVER_CLASS
JAVA_PROPERTIES="${JAVA_PROPERTIES} ${WLP_JAVA_PROPERTIES}"
export JAVA_PROPERTIES
JAVA_OPTIONS="${JAVA_OPTIONS} ${JAVA_PROPERTIES} -Dwlw.iterativeDev=${iterativeDevFlag} -Dwlw.testConsole=${testConsoleFlag} -Dwlw.logErrorsToConsole=${logErrorsToConsoleFlag}"
export JAVA_OPTIONS
# -- Setup properties so that we can save stdout and stderr to files
if [ "${WLS_STDOUT_LOG}" != "" ] ; then
echo "Logging WLS stdout to ${WLS_STDOUT_LOG}"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.Stdout=${WLS_STDOUT_LOG}"
export JAVA_OPTIONS
fi
if [ "${WLS_STDERR_LOG}" != "" ] ; then
echo "Logging WLS stderr to ${WLS_STDERR_LOG}"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.Stderr=${WLS_STDERR_LOG}"
export JAVA_OPTIONS
fi
# ADD EXTENSIONS TO CLASSPATHS
if [ "${EXT_PRE_CLASSPATH}" != "" ] ; then
PRE_CLASSPATH="${EXT_PRE_CLASSPATH}${CLASSPATHSEP}${PRE_CLASSPATH}"
export PRE_CLASSPATH
fi
if [ "${EXT_POST_CLASSPATH}" != "" ] ; then
POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}${EXT_POST_CLASSPATH}"
export POST_CLASSPATH
fi
if [ "${WEBLOGIC_EXTENSION_DIRS}" != "" ] ; then
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.ext.dirs=${WEBLOGIC_EXTENSION_DIRS}"
export JAVA_OPTIONS
fi
JAVA_OPTIONS="${JAVA_OPTIONS}"
export JAVA_OPTIONS
# SET THE CLASSPATH
CLASSPATH="${PRE_CLASSPATH}${CLASSPATHSEP}${WEBLOGIC_CLASSPATH}${CLASSPATHSEP}${POST_CLASSPATH}${CLASSPATHSEP}${WLP_POST_CLASSPATH}"
export CLASSPATH
JAVA_VM="${JAVA_VM} ${JAVA_DEBUG} ${JAVA_PROFILE}"
export JAVA_VM
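For klogube's question: the mode logic buried in the script above reduces to the following precedence (a simplified sketch, not the full script). PRODUCTION_MODE from this file is overridden by DOMAIN_PRODUCTION_MODE, which is set only by the `production` command-line parameter, and the test console (/wls_utc) is disabled whenever the effective mode is production. So a development-mode start needs both PRODUCTION_MODE="false" and no `production` argument:

```shell
#!/bin/sh
# Simplified sketch of the mode-selection precedence in setDomainEnv.sh above.
effective_mode() {
    PRODUCTION_MODE="false"              # default written by the Configuration Wizard
    DOMAIN_PRODUCTION_MODE=""
    for arg in "$@"; do
        case $arg in
        production)
            DOMAIN_PRODUCTION_MODE="true"   # set ONLY by the command-line parameter
            ;;
        esac
    done
    if [ "${DOMAIN_PRODUCTION_MODE}" = "true" ]; then
        PRODUCTION_MODE="${DOMAIN_PRODUCTION_MODE}"
    fi
    if [ "${PRODUCTION_MODE}" = "true" ]; then
        echo "production: test client (/wls_utc) disabled"
    else
        echo "development: test client (/wls_utc) available"
    fi
}

effective_mode              # no "production" argument
effective_mode production   # forced by the command-line parameter
```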
Edited by: klogube on Oct 23, 2009 10:58 AM -
Hi All,
Issue: Upgrade from FP2.2.1 to FP2.4
Issue: java.io.FileNotFoundException: /apps/aiahome2/Infrastructure/install/scripts/FPCheckSOAServerStatus.xml (No such file or directory)
We're trying to upgrade a single-node Development Environment from FP2.2.1 to FP2.4 on WLS 9.2 MP3 running RedHat 4.
The first time, the upgrade failed because it couldn't write to the log file $AIA_HOME/logs/Install/FP/FPInstall.log, since the directory $AIA_HOME/logs/Install/FP didn't exist. We created the directory manually and tried again.
The second time it failed, the server output showed Build Failed.
BUILD FAILED
/apps/aiahome2/Infrastructure/install/scripts/FPInstall.xml:94: exec returned: 1
Total time: 3 seconds
09/16/2009 12:10 AM : ERROR: Error executing the command:
setenv PATH /apps/aiahome2/apache-ant-1.7.0/bin:/apps/product/10.1.3.1/OracleAS_2/jdk/bin:$PATH &&
setenv ANT_HOME /apps/aiahome2/apache-ant-1.7.0 &&
setenv JAVA_HOME /apps/product/10.1.3.1/OracleAS_2/jdk && setenv DEPLOY_COMP oracle.aia.common &&
setenv ORACLE_HOME /apps/product/10.1.3.1/OracleAS_2 &&
setenv AIA_HOME /apps/aiahome2&&
setenv BPEL_HOME /apps/product/10.1.3.1/OracleAS_2/bpel &&
cd /apps/aiahome2/Infrastructure/install/scripts && ant --noconfig -buildfile FPInstall.xml
If I read the whole log correctly, the installer was successful (it dumped all the files in the correct location), but the configuration was not.
==> The $AIA_HOME/logs/Install/FP/FPInstall09162009-001043.log shows:
bash-3.00$ more FPInstall09162009-001043.log
InstallAIAFP:
BUILD FAILED
/apps/aiahome2/Infrastructure/install/scripts/FPInstall.xml:115: The following error occurred while executing this line:
java.io.FileNotFoundException: /apps/aiahome2/Infrastructure/install/scripts/FPCheckSOAServerStatus.xml (No such file or directory)
Total time: 0 seconds
The directory listed above does not have that FPCheckSOAServerStatus.xml file. We even did a "find..." on the installation directory but could not find the missing file.
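A small pre-flight check would have caught the missing build file before ant was launched. This is just a sketch, not part of the AIA installer, and the file list is illustrative:

```shell
#!/bin/sh
# Sketch: verify that expected build files exist before launching ant.
# FPInstall.xml line 115 references FPCheckSOAServerStatus.xml, which was
# missing in this install.
check_build_files() {
    dir=$1
    shift
    missing=0
    for f in "$@"; do
        if [ ! -f "$dir/$f" ]; then
            echo "MISSING: $dir/$f"
            missing=$((missing + 1))
        fi
    done
    echo "$missing file(s) missing"
    return "$missing"
}

# Usage before kicking off the installer (run ant only when nothing is missing):
# check_build_files /apps/aiahome2/Infrastructure/install/scripts \
#     FPInstall.xml FPCheckSOAServerStatus.xml \
#     && ant --noconfig -buildfile FPInstall.xml
```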
Has anyone seen anything similar? I've searched Metalink and the Internet, but with no results for that missing file.
Just a note: when executing the FP2.4 installer, we had to set execute rights on three or four shell scripts in order to start the installer.
Any help is highly appreciated.
Regards,
Martin
Hello Martin!
Upgrade of FP 2.2.1 to 2.4 is not supported on Weblogic. Could you get in touch with me and [email protected] offline.
Thanks
-Teja -
How to get the Change node in Production Server!!!!!
Hi Guys,
Kindly let me know how to get the Change node in the Production Server for transaction code PE03, for generating the Acknowledgement No for the year 2008.
Please provide the steps for getting the Change node for the Acknowledgement No so that I can get the configuration done.
Regards
Ansuman Mohanty.
Hi!
If you want to generate the e-file feature 40ACK, do it in your Customization client box (golden box), save the request, and move it to Quality and Production.
If you still need to work only in Production, then with the help of the Basis people you can get Production change mode for 5 to 10 minutes and generate it. Mostly, Basis people won't give us change mode for the Production box, but on your request they can; try it.
Did you collect the 4 quarterly TAN numbers for 2008? If not, collect them and generate everything at once.
All the best:-)
Kind Regards,
Saisree.S -
EBS 11.5.10.2 Merge Multi Node to Single node.
Hello,
I have a two node EBS Production application tier running on 11.5.10.2.
I have the concurrent manager running on Node A, and the rest of the services running on Node B. I'd like to retire the Node A server.
Can someone help me with the steps to move the concurrent manager to Node B and remove Node A from the configuration?
The steps I have come up with so far are:
1. Run maintain snapshot information on Node A & B.
2. Run perl adpreclone.pl appsTier merge on Node B.
3. Run perl adpreclone.pl appltop merge on Node A.
4. From Node A copy $COMMON_TOP/clone/appl to Node B.
5. Shutdown application tier on Node A & B.
6. Copy log and output files to Node B.
7. Run perl adcfgclone.pl appsTier on Node B.
8. On node B run :
Adadmin -> generate JAR files
-> generate message files
-> relink executables
-> copy files to destination
Adadmin -> Check for missing files, and manually copy files if needs
9. On node B run: EXEC FND_CONC_CLONE.SETUP_CLEAN;
10. Start Application tier and verify changes.
I'm not sure step 9 is right for this process. I want to keep the printer configuration.
All your steps look OK to me -- (Sharing the Application Tier File System in Oracle Applications Release 11i [ID 233428.1] -- Section 4: Merging existing APPL_TOPs into a single APPL_TOP)
Step 9 is correct and you need to purge FND_NODES table and run AutoConfig on all tier nodes (you will not lose any printer configuration) --
How to Clean Nonexistent Nodes or IP Addresses From FND_NODES [ID 260887.1]
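Hussein's cleanup could be scripted roughly like this. A sketch only: the apps credentials are placeholders, and the AutoConfig script path varies by release:

```shell
#!/bin/sh
# Sketch: purge stale node data per MOS note 260887.1, then rerun AutoConfig.
SQLFILE=/tmp/clean_fnd_nodes.sql

cat > "$SQLFILE" <<'EOF'
-- Purges node/topology data; it is repopulated when AutoConfig runs on each tier.
-- Printer definitions are not stored in FND_NODES, so they survive this step.
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
EXIT;
EOF

# sqlplus apps/<apps_password> @"$SQLFILE"
# Then run AutoConfig on every remaining application-tier node, e.g.:
# adautocfg.sh   (location varies by release)
```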
Thanks,
Hussein -
Getting an error at the end of a single-node installation on OEL5.6 32-bit
I am getting the following error at the end of the single-node installation on OEL5.6 (32-bit). I have installed the following RPMs:
rpm -Uvh sysstat
rpm -Uvh libaio-devel
rpm -Uvh compat-libstdc++
rpm -Uvh compat-libstdc++
rpm -Uvh xorg-x11-libs-compat-
rpm -Uvh unixODBC-2.2.
rpm -Uvh unixODBC-devel
rpm -Uvh libXp-
rpm -Uvh openmotif21
The following checks failed; when I clicked the respective button, I got these respective errors...
HTTP
checking URL = http://linux1.oracle.com:8001
RW-50015: Error: - HTTP Listener is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
Virtual Directory
RW-50015: Error: - Http Server Virtual Directories is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
Login Page
RW-50015: Error: - Login Page is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
Help Page
checking URL = http://linux1.oracle.com:8001/OA_HTML/help
RW-50015: Error: - Help Page is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
SP
checking URL = http://linux1.oracle.com:8001/OA_HTML/jtfTestCookie.jsp
RW-50015: Error: - JSP is not responding. The service might not have started on the port yet. Please check the service and use the retry button.
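RW-50015 only means the URL probe failed, possibly because the service had not started yet; the same probe can be repeated from the shell instead of clicking Retry. A sketch that assumes curl is installed; the URL and retry limits are examples:

```shell
#!/bin/sh
# Sketch: re-run the RW-50015 URL check until the service answers, or give up.
check_url() {
    url=$1
    attempts=$2
    interval=$3
    i=0
    while [ "$i" -lt "$attempts" ]; do
        code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url")
        if [ "$code" = "200" ]; then
            echo "$url is responding"
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    echo "$url is not responding after $attempts attempts"
    return 1
}

# Example, using the port from the error messages above:
# check_url http://linux1.oracle.com:8001/OA_HTML/help 12 10
```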
Please let me know how to fix this.
Thanks.
Joy.
Helios,
While starting the application I am getting the following...
============================================
[applprd1@linux1 scripts]$ ./adstrtal.sh
You are running adstrtal.sh version 120.15
Enter the APPS username: apps
Enter the APPS password:
The logfile for this session is located at /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adstrtal.log
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adopmnctl.sh start
script returned:
You are running adopmnctl.sh version 120.6
Starting Oracle Process Manager (OPMN) ...
adopmnctl.sh: exiting with status 0
adopmnctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adopmnctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adalnctl.sh start
script returned:
adalnctl.sh version 120.3
Checking for FNDFS executable.
Listener APPS_PRD1 has already been started.
adalnctl.sh: exiting with status 2
adalnctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adalnctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adapcctl.sh start
script returned:
You are running adapcctl.sh version 120.7.12010000.2
Starting OPMN managed Oracle HTTP Server (OHS) instance ...
adapcctl.sh: exiting with status 204
adapcctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adapcctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adoacorectl.sh start
script returned:
You are running adoacorectl.sh version 120.13
Starting OPMN managed OACORE OC4J instance ...
adoacorectl.sh: exiting with status 150
adoacorectl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adoacorectl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adformsctl.sh start
script returned:
You are running adformsctl.sh version 120.16
Starting OPMN managed FORMS OC4J instance ...
adformsctl.sh: exiting with status 150
adformsctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adformsctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adoafmctl.sh start
script returned:
You are running adoafmctl.sh version 120.8
Starting OPMN managed OAFM OC4J instance ...
adoafmctl.sh: exiting with status 150
adoafmctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adoafmctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/adcmctl.sh start
script returned:
You are running adcmctl.sh version 120.17.12010000.3
Starting concurrent manager for PRD1 ...
Starting PRD1_0627@PRD1 Internal Concurrent Manager
Default printer is noprint
adcmctl.sh: exiting with status 0
adcmctl.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adcmctl.txt for more information ...
.end std out.
.end err out.
Executing service control script:
/u01/prd1/inst/apps/PRD1_linux1/admin/scripts/jtffmctl.sh start
script returned:
You are running jtffmctl.sh version 120.3
Validating Fulfillment patch level via /u01/prd1/apps/apps_st/comn/java/classes
Fulfillment patch level validated.
Starting Fulfillment Server for PRD1 on port 9301 ...
jtffmctl.sh: exiting with status 0
.end std out.
.end err out.
adstrtal.sh: Exiting with status 4
adstrtal.sh: check the logfile /u01/prd1/inst/apps/PRD1_linux1/logs/appl/admin/log/adstrtal.log for more information ...
===============================================
[Service Control Execution Report]
The report format is:
<Service Group> <Service> <Script> <Status>
Root Service Enabled
Root Service Oracle Process Manager for PRD1_linux1 adopmnctl.sh Started
Web Entry Point Services Enabled
Web Entry Point Services Oracle HTTP Server PRD1_linux1 adapcctl.sh Failed
Web Entry Point Services OracleTNSListenerAPPS_PRD1_linux1 adalnctl.sh Already Started
Web Application Services Enabled
Web Application Services OACORE OC4J Instance PRD1_linux1 adoacorectl.sh Failed
Web Application Services FORMS OC4J Instance PRD1_linux1 adformsctl.sh Failed
Web Application Services OAFM OC4J Instance PRD1_linux1 adoafmctl.sh Failed
Batch Processing Services Enabled
Batch Processing Services OracleConcMgrPRD1_linux1 adcmctl.sh Started
Batch Processing Services Oracle Fulfillment Server PRD1_linux1 jtffmctl.sh Started
Other Services Disabled
Other Services OracleFormsServer-Forms PRD1_linux1 adformsrvctl.sh Disabled
Other Services Oracle Metrics Client PRD1_linux1 adfmcctl.sh Disabled
Other Services Oracle Metrics Server PRD1_linux1 adfmsctl.sh Disabled
Other Services Oracle MWA Service PRD1_linux1 mwactlwrpr.sh Disabled
ServiceControl is exiting with status 4
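The exit status lines up with the report: four services show Failed and adstrtal.sh exits with status 4 (as far as I know, a nonzero adstrtal.sh status is the count of failed services, but confirm against your version's documentation). Since OHS itself failed (adapcctl.sh status 204), the three OC4J failures are plausibly downstream of it, so adapcctl.txt and the Apache logs are the place to start. A quick tally of the report can be sketched as:

```shell
#!/bin/sh
# Sketch: count failed services in an adstrtal.sh execution report.
# Reads the report on stdin; prints the number of lines ending in "Failed".
count_failed() {
  grep -c 'Failed$'
}

# Excerpt of the report above:
report='Web Entry Point Services Oracle HTTP Server PRD1_linux1 adapcctl.sh Failed
Web Application Services OACORE OC4J Instance PRD1_linux1 adoacorectl.sh Failed
Web Application Services FORMS OC4J Instance PRD1_linux1 adformsctl.sh Failed
Web Application Services OAFM OC4J Instance PRD1_linux1 adoafmctl.sh Failed
Batch Processing Services OracleConcMgrPRD1_linux1 adcmctl.sh Started'
printf '%s\n' "$report" | count_failed   # prints 4
```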
Thanks.
Joy.
Edited by: user11952526 on Jun 26, 2011 11:51 AM -
9ir2 Single Node RAC + RH80
Hi all.
I'm trying to install a single node RAC based on the Single Node RAC for Linux document on Metalink.
The document was written for an Oracle 9iR1 database on RH 7.1. I have followed the steps and have a working kernel I built myself, the raw devices, and softdog hacked for testing only, but the service does not come up.
I don't know if someone has done a single node RAC installation and can help me choose the right pieces.
I don't have RHAS 2.1, but I do have a patched UnitedLinux 1.0 Server.
I want to try the RAC tech, but here (Ecuador, South America) it is difficult to find FireWire cards and disks, so I'm working toward a single node installation.
Any help will be appreciated.
Fernando
Hi
Database version and GI versions are 11.2.0.2.2. And these are not multi node RAC configurations; at any given time only a single instance will be there for any given database, something like ACTIVE and PASSIVE in hardware clusters such as VCS (Veritas Cluster Server).
I agree with your failover scenario in multi node (2 or more) RAC environments. In single node clustering only one instance will be there; like services in multi node, here the whole instance will be re-created on an available node.
Thanks,