Node Reachability failure
Hi,
I am planning to install Oracle RAC and want to run the cluster verification script. When I run the command, it reports a failure on one node while the checks on the current node pass. How can I check both nodes?
Please help. Thanks.
Hi,
My OS is Windows 2003 Server.
My database is Oracle 10g (10.2.0).
The command is: C:\clusterware\cluvfy\runcluvfy.bat stage -pre crsinst -n NODE1,NODE2;
The result is:
C:\Documents and Settings\Administrator>C:\clusterware\cluvfy\runcluvfy.bat stage -pre crsinst -n RAC1,RAC2;
The system cannot find the file specified.
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check failed from node "RAC1".
Check failed on nodes:
RAC2;
WARNING:
These nodes cannot be reached:
RAC2;
Verification will proceed with nodes:
RAC1
Checking user equivalence...
User equivalence check passed for user "Administrator".
Checking administrative privileges...
Checking node connectivity...
Node connectivity check passed for subnet "172.31.0.0" with node(s) RAC1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) RAC1.
Suitable interfaces for the private interconnect on subnet "172.31.0.0":
RAC1 Public:172.31.1.68
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
RAC1 Private:10.0.0.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking system requirements for 'crs'...
Operating system version check passed.
Total memory check passed.
Swap space check passed.
System architecture check passed.
Free disk space check passed.
System requirement passed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
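One detail worth noticing in the output above: the unreachable node is reported as "RAC2;" with a trailing semicolon, which suggests the semicolon typed at the end of the command line was taken as part of the node name, so cluvfy is trying to reach a host literally named "RAC2;". A minimal sketch (the helper name `clean_node_list` is made up for illustration) that strips stray trailing separators before the list is passed to runcluvfy:

```shell
#!/bin/sh
# Strip trailing semicolons/commas from a cluvfy node list so a stray
# shell separator does not end up inside the last node name.
clean_node_list() {
    printf '%s\n' "$1" | sed 's/[;,]*$//'
}

nodes=$(clean_node_list "RAC1,RAC2;")
echo "$nodes"   # RAC1,RAC2
# runcluvfy.bat stage -pre crsinst -n "$nodes"
```

Beyond that, confirm each node can resolve and reach the other by name from both sides; the reachability check fails whenever the remote node's name does not resolve or does not answer.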
Similar Messages
-
Runcluvfy.sh stage -pre fails with node reachability on one node only
Having a frustrating problem: a 2-node RAC system on RHEL 5.2, installing 11.2.0.1 Grid/Clusterware. I'm running the following pre-check command from node 1:
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
I'm getting the following error, and it cannot write the trace information:
[grid@node1 grid]$ sudo chmod -R 777 /tmp
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
WARNING:
Could not access or create trace file path "/tmp/bootstrap/cv/log". Trace information could not be collected
Performing pre-checks for cluster services setup
Checking node reachability...
node1.mydomain.com: node1.mydomain.com
Check: Node reachability from node "null"
Destination Node Reachable?
node2 no
node1 no
Result: Node reachability check failed from node "null"
ERROR:
Unable to reach any of the nodes
Verification cannot proceed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@node1 grid]$
[grid@node1 grid]$ echo $CV_DESTLOC
/home/grid/software/grid/11gr2/grid
I've verified the following:
1) there is user equivalence between the nodes for user grid
2) /tmp is read/writable by user grid on both nodes
3) Setting the CV_DESTLOC appears to do nothing - it seems to go back to wanting to write to /tmp
4) ./runcluvfy comp nodecon -n node1,node2 -verbose succeeds, no problem
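Points 2 and 3 above are worth testing directly: the WARNING names /tmp/bootstrap/cv/log specifically, and a stale /tmp/bootstrap left behind by an earlier run as another user would explain why the same command works from node 2 but not node 1 (worth checking with `ls -ld /tmp/bootstrap` on both nodes). A minimal local sketch of the writability check, using a throwaway path so it can run anywhere; the test filename is made up:

```shell
#!/bin/sh
# Sanity-check that a cluvfy-style trace directory can be created and
# written. TRACE_DIR defaults to a throwaway path so the sketch runs
# anywhere; on a real node you would set TRACE_DIR=/tmp/bootstrap/cv/log
# and repeat the check on every node, not just the local one.
TRACE_DIR="${TRACE_DIR:-$(mktemp -d)/bootstrap/cv/log}"

if mkdir -p "$TRACE_DIR" 2>/dev/null && touch "$TRACE_DIR/.cv_write_test" 2>/dev/null; then
    echo "trace path writable: $TRACE_DIR"
    rm -f "$TRACE_DIR/.cv_write_test"
else
    echo "trace path NOT writable: $TRACE_DIR" >&2
fi
```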
And the weirdest thing of all, when I run ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose from node 2, it succeeds without errors.
What am I missing? TIA.
I made a copy of runcluvfy.sh and commented out all the rm -rf commands so that it would at least save the trace files. I re-ran it and got the following trace output. It's not entirely helpful to me, but do any gurus out there see anything?
[main] [ 2010-04-20 15:48:38.275 CDT ] [TaskNodeConnectivity.performTask:354] _nw_:Performing Node Reachability verification task...
[main] [ 2010-04-20 15:48:38.282 CDT ] [ResultSet.traceResultSet:341]
Target ResultSet BEFORE Upload===>
Overall Status->UNKNOWN
[main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
Source ResultSet ===>
Overall Status->OPERATION_FAILED
node2-->OPERATION_FAILED
node1-->OPERATION_FAILED
[main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
Target ResultSet AFTER Upload===>
Overall Status->OPERATION_FAILED
node2-->OPERATION_FAILED
node1-->OPERATION_FAILED
[main] [ 2010-04-20 15:48:38.284 CDT ] [ResultSet.getSuccNodes:556] Checking for Success nodes from the total list of nodes in the resultset
[main] [ 2010-04-20 15:48:38.284 CDT ] [ReportUtil.printReportFooter:1553] stageMsgID: 8302
[main] [ 2010-04-20 15:48:38.284 CDT ] [CluvfyDriver.main:299] ==== cluvfy exiting normally.
I'm still baffled why the pre-check is successful from the second node. In fact, all the other cluvfy checks that I've run succeed from both nodes. -
Failure at final check of Oracle CRS stack. 10 on the first node.
Hi everyone
I'm trying to install Oracle RAC 10gR2 on Oracle Enterprise Linux AS release 4 (October Update 7), but I'm having this problem:
root@fporn01 crs# ./root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname fporn01 for node 1.
assigning default hostname fporn02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: fporn01 fporn01-priv fporn01
node 2: fporn02 fporn02-priv fporn02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
(forget about the node names!)
But on the second node everything went fine, so I'm sure this is not a connectivity issue.
The iptables service is stopped and disabled.
Check the results after running the root.sh script:
root@fporn02 ~# /u01/app/crs/root.sh
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname fporn01 for node 1.
assigning default hostname fporn02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: fporn01 fporn01-priv fporn01
node 2: fporn02 fporn02-priv fporn02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
fporn02
CSS is inactive on these nodes.
fporn01
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
This is the CRS log on the first node:
root@fporn01 bin# cat /u01/app/crs/log/fporn01/alertfporn01.log
2009-06-24 17:27:37.695
client(9045)CRS-1006:The OCR location /u02/oradata/orcl/OCRFile_mirror is inaccessible. Details in /u01/app/crs/log/fporn01/client/ocrconfig_9045.log.
2009-06-24 17:27:37.741
client(9045)CRS-1001:The OCR was formatted using version 2.
2009-06-24 17:28:24.544
client(9092)CRS-1801:Cluster pdb-rac configured with nodes fporn01 fporn02 .
This is the CRS log on the second node:
root@fporn02 ~# cat /u01/app/crs/log/fporn02/alertfporn02.log
2009-06-24 18:09:09.307
cssd(16991)CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
2009-06-24 18:09:09.307
cssd(16991)CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile_mirror1. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
2009-06-24 18:09:09.310
cssd(16991)CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile_mirror2. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
2009-06-24 18:09:12.441
cssd(16991)CRS-1601:CSSD Reconfiguration complete. Active nodes are fporn02 .
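Comparing the two alert logs is easier if you pull out just the key CRS message codes (CRS-1006 OCR location inaccessible, CRS-1601 CSS reconfiguration, CRS-1605 voting file online). A sketch of that filter; the log content written below is a fabricated sample so the snippet is self-contained, but on a real node you would point it at the CRS alert log under $CRS_HOME/log:

```shell
#!/bin/sh
# Extract the CSS/OCR-related CRS messages from a CRS alert log.
# The log written here is a fabricated sample for illustration.
LOG="${LOG:-$(mktemp)}"
cat > "$LOG" <<'EOF'
client(9045)CRS-1006:The OCR location /u02/oradata/orcl/OCRFile_mirror is inaccessible.
cssd(16991)CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile.
cssd(16991)CRS-1601:CSSD Reconfiguration complete. Active nodes are fporn02 .
EOF

# CRS-1006 = OCR inaccessible, CRS-1601 = CSS reconfiguration,
# CRS-1605 = voting file online
grep -E 'CRS-(1006|1601|1605)' "$LOG"
```

Running the same filter on both nodes makes the asymmetry obvious: node 1 reports the OCR mirror inaccessible, while node 2 brings its voting files online and reconfigures CSS with only fporn02 active.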
I have rechecked the Remote Access / User Equivalence.
After running the ocrcheck command I have this information:
root@fporn01 bin# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 312
Available space (kbytes) : 261832
ID : 255880615
Device/File Name : /u02/oradata/orcl/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/orcl/OCRFile_mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
On the second node I get the same output:
root@fporn02 bin# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 312
Available space (kbytes) : 261832
ID : 255880615
Device/File Name : /u02/oradata/orcl/OCRFile
Device/File integrity check succeeded
Device/File Name : /u02/oradata/orcl/OCRFile_mirror
Device/File integrity check succeeded
Cluster registry integrity check succeeded
I have reviewed the following Metalink notes, but none of them seems to solve my problem:
344994.1
240001.1
725878.1
329450.1
734221.1
I have researched through many forums, but in those threads the failure is always on the second node, whereas mine is on the first node.
I hope someone can help me.
This is the output of cluvfy:
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "fporn01"
Destination Node Reachable?
fporn01 yes
fporn02 yes
Result: Node reachability check passed from node "fporn01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
fporn02 passed
fporn01 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
fporn02 yes passed
fporn01 yes passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
Node Name Status Group ID
fporn02 exists 501
fporn01 exists 501
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" as Primary
Node Name User Exists Group Exists User in Group Primary Comment
fporn02 yes yes yes yes passed
fporn01 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" as Primary passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "fporn02"
Interface Name IP Address Subnet
eth0 10.218.108.245 10.218.108.0
eth1 192.168.1.2 192.168.1.0
Interface information for node "fporn01"
Interface Name IP Address Subnet
eth0 10.218.108.244 10.218.108.0
eth1 192.168.1.1 192.168.1.0
eth2 172.16.9.210 172.16.9.0
Check: Node connectivity of subnet "10.218.108.0"
Source Destination Connected?
fporn02:eth0 fporn01:eth0 yes
Result: Node connectivity check passed for subnet "10.218.108.0" with node(s) fporn02,fporn01.
Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
fporn02:eth1 fporn01:eth1 yes
Result: Node connectivity check passed for subnet "192.168.1.0" with node(s) fporn02,fporn01.
Check: Node connectivity of subnet "172.16.9.0"
Result: Node connectivity check passed for subnet "172.16.9.0" with node(s) fporn01.
Suitable interfaces for the private interconnect on subnet "10.218.108.0":
fporn02 eth0:10.218.108.245
fporn01 eth0:10.218.108.244
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
fporn02 eth1:192.168.1.2
fporn01 eth1:192.168.1.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
fporn02 7.93GB (8310276KB) 512MB (524288KB) passed
fporn01 7.93GB (8310276KB) 512MB (524288KB) passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
fporn02 9.57GB (10037300KB) 400MB (409600KB) passed
fporn01 9.55GB (10012168KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
fporn02 8.81GB (9240568KB) 1GB (1048576KB) passed
fporn01 8.81GB (9240568KB) 1GB (1048576KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
fporn02 i686 i686 passed
fporn01 i686 i686 passed
Result: System architecture check passed.
Check: Kernel version
Node Name Available Required Comment
fporn02 2.6.9-78.0.0.0.1.ELhugemem 2.4.21-15EL passed
fporn01 2.6.9-78.0.0.0.1.ELhugemem 2.4.21-15EL passed
Result: Kernel version check passed.
Check: Package existence for "make-3.79"
Node Name Status Comment
fporn02 make-3.80-7.EL4 passed
fporn01 make-3.80-7.EL4 passed
Result: Package existence check passed for "make-3.79".
Check: Package existence for "binutils-2.14"
Node Name Status Comment
fporn02 binutils-2.15.92.0.2-25 passed
fporn01 binutils-2.15.92.0.2-25 passed
Result: Package existence check passed for "binutils-2.14".
Check: Package existence for "gcc-3.2"
Node Name Status Comment
fporn02 gcc-3.4.6-10.0.1 passed
fporn01 gcc-3.4.6-10.0.1 passed
Result: Package existence check passed for "gcc-3.2".
Check: Package existence for "glibc-2.3.2-95.27"
Node Name Status Comment
fporn02 glibc-2.3.4-2.41 passed
fporn01 glibc-2.3.4-2.41 passed
Result: Package existence check passed for "glibc-2.3.2-95.27".
Check: Package existence for "compat-db-4.0.14-5"
Node Name Status Comment
fporn02 compat-db-4.1.25-9 passed
fporn01 compat-db-4.1.25-9 passed
Result: Package existence check passed for "compat-db-4.0.14-5".
Check: Package existence for "compat-gcc-7.3-2.96.128"
Node Name Status Comment
fporn02 missing failed
fporn01 missing failed
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".
Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
Node Name Status Comment
fporn02 missing failed
fporn01 missing failed
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-7.3-2.96.128"
Node Name Status Comment
fporn02 missing failed
fporn01 missing failed
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
Node Name Status Comment
fporn02 missing failed
fporn01 missing failed
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".
Check: Package existence for "openmotif-2.2.3"
Node Name Status Comment
fporn02 openmotif-2.2.3-10.2.el4 passed
fporn01 openmotif-2.2.3-10.2.el4 passed
Result: Package existence check passed for "openmotif-2.2.3".
Check: Package existence for "setarch-1.3-1"
Node Name Status Comment
fporn02 setarch-1.6-1 passed
fporn01 setarch-1.6-1 passed
Result: Package existence check passed for "setarch-1.3-1".
Check: Group existence for "dba"
Node Name Status Comment
fporn02 exists passed
fporn01 exists passed
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
Node Name Status Comment
fporn02 exists passed
fporn01 exists passed
Result: Group existence check passed for "oinstall".
Check: User existence for "nobody"
Node Name Status Comment
fporn02 exists passed
fporn01 exists passed
Result: User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
Forget about my last post, it was my mistake: I rebooted the server and the clustered file system service did not start up at boot time.
Sorry.
This is what I actually got in /var/log/messages after manually running the CRS daemons:
Jun 26 16:43:07 fporn01 su(pam_unix)[10020]: session opened for user oracle by (uid=0)
Jun 26 16:43:07 fporn01 su(pam_unix)[10020]: session closed for user oracle
Jun 26 16:43:07 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
Jun 26 16:44:07 fporn01 su(pam_unix)[9977]: session opened for user oracle by (uid=0)
Jun 26 16:45:31 fporn01 su(pam_unix)[10293]: session opened for user oracle by (uid=0)
Jun 26 16:45:32 fporn01 su(pam_unix)[10293]: session closed for user oracle
Jun 26 16:45:32 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
Jun 26 16:45:40 fporn01 su(pam_unix)[10351]: session opened for user oracle by (uid=0)
Jun 26 16:45:40 fporn01 su(pam_unix)[10351]: session closed for user oracle
Jun 26 16:45:40 fporn01 su(pam_unix)[10415]: session opened for user oracle by (uid=0)
Jun 26 16:45:40 fporn01 su(pam_unix)[10415]: session closed for user oracle
Jun 26 16:45:40 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
Jun 26 16:46:32 fporn01 su(pam_unix)[10591]: session opened for user oracle by (uid=0)
Jun 26 16:46:40 fporn01 logger: Running CRSD with TZ =
after running ps -ef | grep -E 'init|d.bin|ocls|oprocd|diskmon|evmlogger|PID'
[root@fporn01 ~]# ps -ef | grep -E 'init|d.bin|ocls|oprocd|diskmon|evmlogger|PID'
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:33 ? 00:00:00 init [5]
root 9869 7951 0 16:40 pts/1 00:00:00 [init.crsd] <defunct>
oracle 10053 9977 0 16:44 ? 00:00:00 /u01/app/crs/bin/evmd.bin
root 10249 7951 0 16:45 pts/1 00:00:00 /bin/sh /etc/init.d/init.cssd fatal
root 10341 7951 0 16:45 pts/1 00:00:00 /u01/app/crs/bin/crsd.bin reboot
root 10551 10249 0 16:46 pts/1 00:00:00 /bin/sh /etc/init.d/init.cssd daemon
oracle 10618 10592 0 16:46 ? 00:00:00 /u01/app/crs/bin/ocssd.bin
oracle 10926 10053 0 16:46 ? 00:00:00 /u01/app/crs/bin/evmlogger.bin -o /u01/app/crs/evm/log/evmlogger.info -l /u01/app/crs/evm/log/evmlogger.log
root 16658 9461 0 16:50 pts/2 00:00:00 grep -E init|d.bin|ocls|oprocd|diskmon|evmlogger|PID
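The listing above shows evmd.bin, crsd.bin and ocssd.bin present (plus a defunct init.crsd). A small sketch of the same check against a canned ps snapshot, so the logic can run anywhere; on a real node you would feed it the live ps -ef output instead:

```shell
#!/bin/sh
# Check that the core 10g CRS daemons appear in a ps listing.
# PSOUT is a canned snapshot for illustration; on a real node use:
#   PSOUT=$(ps -ef)
PSOUT="${PSOUT:-oracle 10053 1 0 16:44 ? 00:00:00 /u01/app/crs/bin/evmd.bin
root   10341 1 0 16:45 ? 00:00:00 /u01/app/crs/bin/crsd.bin reboot
oracle 10618 1 0 16:46 ? 00:00:00 /u01/app/crs/bin/ocssd.bin}"

missing=0
for d in crsd.bin evmd.bin ocssd.bin; do
    if printf '%s\n' "$PSOUT" | grep -q "$d"; then
        echo "$d: running"
    else
        echo "$d: NOT running"
        missing=1
    fi
done
echo "missing=$missing"
```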
CRS daemons finally work
But I get this error when I run [oracle@fporn01 cluvfy]$ ./runcluvfy.sh stage -post crsinst -n fporn01,fporn02 -verbose:
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "fporn01"
Destination Node Reachable?
fporn01 yes
fporn02 yes
Result: Node reachability check passed from node "fporn01".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
fporn02 passed
fporn01 passed
Result: User equivalence check passed for user "oracle".
ERROR:
CRS is not installed on any of the nodes.
Verification cannot proceed.
Post-check for cluster services setup was unsuccessful on all the nodes. -
Root.sh failed on second node while installing CRS 10g on centos 5.5
root.sh failed on second node while installing CRS 10g
Hi all,
I am able to install the Oracle 10g RAC Clusterware on the first node of the cluster. However, when I run the root.sh script as the root user on the second node of the cluster, it fails with the following error message:
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
When I run cluvfy stage -post hwos -n all -verbose, it shows this message:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking shared storage accessibility...
Disk Sharing Nodes (2 in count)
/dev/sda db2 db1
When I run cluvfy stage -pre crsinst -n all -verbose, it shows this message:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
No checks registered for this product.
When I run cluvfy stage -post crsinst -n all -verbose, it shows this message:
Result: Node reachability check passed from node "DB2".
Result: User equivalence check passed for user "oracle".
Node Name CRS daemon CSS daemon EVM daemon
db2 no no no
db1 yes yes yes
Check: Health of CRS
Node Name CRS OK?
db1 unknown
Result: CRS health check failed.
Checking crsd.log shows this message:
clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
clsssInitNative: connect failed, rc 9
Any help would be greatly appreciated.
Edited by: 868121 on 2011-6-24 12:31 AM
Hello, it took a little searching, but I found this in a note in the Grid installation guide for Linux/UNIX:
Public IP addresses and virtual IP addresses must be in the same subnet.
In your case, you are using two different subnets for the VIPs. -
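That rule explains the recurring "Could not find a suitable set of interfaces for VIPs" errors in these threads: cluvfy only proposes VIP candidates on a subnet shared by the public interfaces of all nodes, so the planned VIP must fall in the same subnet as the public IPs. A minimal sketch of the underlying subnet test; the VIP address 153.71.45.250 is a hypothetical example, the other addresses come from the output above:

```shell
#!/bin/sh
# Return success (0) if two IPv4 addresses are in the same subnet.
# Usage: same_subnet A.B.C.D E.F.G.H NETMASK
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {
    m=$(ip_to_int "$3")
    [ $(( $(ip_to_int "$1") & m )) -eq $(( $(ip_to_int "$2") & m )) ]
}

# Public IP vs a hypothetical VIP in the same /24: a valid candidate.
same_subnet 153.71.45.201 153.71.45.250 255.255.255.0 && echo "valid VIP candidate"
# Public IP vs an address on the private subnet: not a VIP candidate.
same_subnet 153.71.45.201 10.10.10.14 255.255.255.0 || echo "different subnet"
```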
Root.sh fails in 11gR2 2-node HP-UX installation
Hello,
Env details:
This is a 2-node RAC 11gR2 installation on HP-UX 11.3.
The issue is the following:
The Grid Infrastructure installation fails at the point of running root.sh, with the following errors on node 1:
CRS-2677: Stop of 'ora.cssdmonitor' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vpar1'
CRS-2677: Stop of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vpar1'
CRS-2677: Stop of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vpar1'
CRS-2677: Stop of 'ora.gipcd' on 'vpar1' succeeded
CRS-4000: Command Start failed, or completed with errors.
CRS-2672: Attempting to start 'ora.gipcd' on 'vpar1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'vpar1'
CRS-2676: Start of 'ora.gipcd' on 'vpar1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'vpar1'
CRS-2676: Start of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vpar1'
CRS-2676: Start of 'ora.cssdmonitor' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'vpar1'
CRS-2672: Attempting to start 'ora.diskmon' on 'vpar1'
CRS-2676: Start of 'ora.diskmon' on 'vpar1' succeeded
CRS-2674: Start of 'ora.cssd' on 'vpar1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'vpar1'
CRS-2678: 'ora.cssd' on 'vpar1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2673: Attempting to stop 'ora.diskmon' on 'vpar1'
CRS-2677: Stop of 'ora.diskmon' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vpar1'
CRS-2677: Stop of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vpar1'
CRS-2677: Stop of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vpar1'
CRS-2677: Stop of 'ora.gipcd' on 'vpar1' succeeded
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl start resource ora.ctssd -init
Start of resource "ora.ctssd -init" failed
Clusterware exclusive mode start of resource ora.ctssd failed
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2500: Cannot stop resource 'ora.asm' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl stop resource ora.asm -init
Stop of resource "ora.asm -init" failed
Failed to stop ASM
CRS-2679: Attempting to clean 'ora.cssd' on 'vpar1'
CRS-2681: Clean of 'ora.cssd' on 'vpar1' succeeded
Initial cluster configuration failed.
The ocrcheck command fails with:
2010-02-25 14:57:08.610: [ OCRASM][1]proprasmo: Failed to open file in dirty mode
2010-02-25 14:57:08.610: [ OCRASM][1]proprasmo: Error in open/create file in dg [newgrid]
[ OCRASM][1]SLOS : SLOS: cat=8, opn=kgfolclcpi1, dep=210, loc=kgfokge
AMDU-00210: No disks found in diskgroup NEWGRID
AMDU-00210: No disks found in diskgroup NEWGRID
2010-02-25 14:57:08.630: [ OCRASM][1]proprasmo: kgfoCheckMount returned [7]
2010-02-25 14:57:08.630: [ OCRASM][1]proprasmo: The ASM instance is down
The disks are already owned by grid:asmadmin; I have already done chmod 660 on all of them.
The pre-check during the installation does not fail with any particular error and goes ahead to the point of asking to run root.sh.
Kindly help.
Regards,
pg
Unfortunately I can't run any -post checks because the CRS doesn't install correctly. The error occurs during the initial root.sh run on the first node and doesn't allow me to continue.
cluvfy stage -post crsinst -n sun277z1,sun278z1
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "sun277z1"
Checking user equivalence...
User equivalence check passed for user "oracle"
ERROR:
CRS is not installed on any of the nodes
Verification cannot proceed -
Dear Team,
The Oracle 12c Grid runcluvfy check is failing with the error below. Even after changing the local built-in Administrator user name, the same failure is reported. Kindly help to resolve this issue and provide steps to avoid this conflict.
Windows user account consistency check across nodes - Checks consistency of the Windows user account across nodes. Error:
PRVG-11818 : Windows user "MDCCOMMONLDAP\Administrator" is a domain user but a conflicting local user account was found on nodes "sep03vvm-401,sep03vvm-402"
Cause: A conflicting local user account as indicated was found on the identified nodes.
Action: Ensure that the Windows user account used for Oracle installation and configuration is defined as a domain user on all nodes or as a local user on all nodes, but not a mixture of the two.
Check failed on nodes: [sep03vvm-402, sep03vvm-401]
c:\Oracle12c_software\Oracle12c_grid\grid>runcluvfy.bat stage -pre crsinst -verbose -n SEP03VVM-401,SEP03VVM-402
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "sep03vvm-401"
Destination Node Reachable?
sep03vvm-401 yes
sep03vvm-402 yes
Result: Node reachability check passed from node "sep03vvm-401"
Checking user equivalence...
Check: User equivalence for user "Administrator"
Node Name Status
sep03vvm-402 passed
sep03vvm-401 passed
Result: User equivalence check passed for user "Administrator"
Checking node connectivity...
Interface information for node "sep03vvm-402"
Name          IP Address                 Subnet       Gateway  Def. Gateway   HW Address         MTU
PublicLAN     153.71.45.202              153.71.45.0  On-link  153.71.45.254  00:50:56:91:05:30  1500
PrivateLAN    10.10.10.15                10.10.10.0   On-link  153.71.45.254  00:50:56:91:75:1B  1500
6TO4 Adapter  2002:9947:2dca::9947:2dca  2002::                               00:00:00:00:00:00  1280
Interface information for node "sep03vvm-401"
Name          IP Address                 Subnet       Gateway  Def. Gateway   HW Address         MTU
PublicLAN     153.71.45.201              153.71.45.0  On-link  153.71.45.254  00:50:56:91:56:B6  1500
PrivateLAN    10.10.10.14                10.10.10.0   On-link  153.71.45.254  00:50:56:91:60:99  1500
6TO4 Adapter  2002:9947:2dc9::9947:2dc9  2002::                               00:00:00:00:00:00  1280
Check: Node connectivity of subnet "153.71.45.0"
Source Destination Connected?
sep03vvm-402[153.71.45.202] sep03vvm-401[153.71.45.201] yes
Result: Node connectivity passed for subnet "153.71.45.0" with node(s) sep03vvm-402,sep03vvm-401
Check: TCP connectivity of subnet "153.71.45.0"
Source Destination Connected?
sep03vvm-402 : 153.71.45.202 sep03vvm-402 : 153.71.45.202 passed
sep03vvm-401 : 153.71.45.201 sep03vvm-402 : 153.71.45.202 passed
sep03vvm-402 : 153.71.45.202 sep03vvm-401 : 153.71.45.201 passed
sep03vvm-401 : 153.71.45.201 sep03vvm-401 : 153.71.45.201 passed
Result: TCP connectivity check passed for subnet "153.71.45.0"
Check: Node connectivity of subnet "10.10.10.0"
Source Destination Connected?
sep03vvm-402[10.10.10.15] sep03vvm-401[10.10.10.14] yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) sep03vvm-402,sep03vvm-401
Check: TCP connectivity of subnet "10.10.10.0"
Source Destination Connected?
sep03vvm-402 : 10.10.10.15 sep03vvm-402 : 10.10.10.15 passed
sep03vvm-401 : 10.10.10.14 sep03vvm-402 : 10.10.10.15 passed
sep03vvm-402 : 10.10.10.15 sep03vvm-401 : 10.10.10.14 passed
sep03vvm-401 : 10.10.10.14 sep03vvm-401 : 10.10.10.14 passed
Result: TCP connectivity check passed for subnet "10.10.10.0"
Check: Node connectivity of subnet "2002::"
Source  Destination  Connected?
sep03vvm-402[2002:9947:2dca::9947:2dca]  sep03vvm-401[2002:9947:2dc9::9947:2dc9]  yes
Result: Node connectivity passed for subnet "2002::" with node(s) sep03vvm-402,sep03vvm-401
Check: TCP connectivity of subnet "2002::"
Source  Destination  Connected?
sep03vvm-402 : 2002:9947:2dca::9947:2dca  sep03vvm-402 : 2002:9947:2dca::9947:2dca  passed
sep03vvm-401 : 2002:9947:2dc9::9947:2dc9  sep03vvm-402 : 2002:9947:2dca::9947:2dca  passed
sep03vvm-402 : 2002:9947:2dca::9947:2dca  sep03vvm-401 : 2002:9947:2dc9::9947:2dc9  passed
sep03vvm-401 : 2002:9947:2dc9::9947:2dc9  sep03vvm-401 : 2002:9947:2dc9::9947:2dc9  passed
Result: TCP connectivity check passed for subnet "2002::"
Interfaces found on subnet "153.71.45.0" that are likely candidates for VIP are:
sep03vvm-402 PublicLAN:153.71.45.202
sep03vvm-401 PublicLAN:153.71.45.201
Interfaces found on subnet "2002::" that are likely candidates for VIP are:
sep03vvm-402 6TO4 Adapter:2002:9947:2dca::9947:2dca
sep03vvm-401 6TO4 Adapter:2002:9947:2dc9::9947:2dc9
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
sep03vvm-402 PrivateLAN:10.10.10.15
sep03vvm-401 PrivateLAN:10.10.10.14
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "153.71.45.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "2002::".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "153.71.45.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "153.71.45.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking the status of Windows firewall
Node Name Enabled? Comment
sep03vvm-402 no passed
sep03vvm-401 no passed
Result: Windows firewall verification check passed
Check: Total memory
Node Name Available Required Status
sep03vvm-402 4.9996GB (5242420.0KB) 4GB (4194304.0KB) passed
sep03vvm-401 4.9996GB (5242420.0KB) 4GB (4194304.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
sep03vvm-402 3.6612GB (3839028.0KB) 50MB (51200.0KB) passed
sep03vvm-401 3.3152GB (3476244.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
sep03vvm-402 5.8121GB (6094388.0KB) 4.9996GB (5242420.0KB) passed
sep03vvm-401 5.8121GB (6094388.0KB) 4.9996GB (5242420.0KB) passed
Result: Swap space check passed
Check: Free disk space for "sep03vvm-402:C:\Windows\temp"
Path             Node Name     Mount point  Available  Required  Status
C:\Windows\temp  sep03vvm-402  C            82.6484GB  1GB       passed
Result: Free disk space check passed for "sep03vvm-402:C:\Windows\temp"
Check: Free disk space for "sep03vvm-401:C:\Windows\temp"
Path             Node Name     Mount point  Available  Required  Status
C:\Windows\temp  sep03vvm-401  C            82.6112GB  1GB       passed
Result: Free disk space check passed for "sep03vvm-401:C:\Windows\temp"
Check: System architecture
Node Name Available Required Status
sep03vvm-402 64-bit 64-bit passed
sep03vvm-401 64-bit 64-bit passed
Result: System architecture check passed
Checking length of value of environment variable "PATH"
Check: Length of value of environment variable "PATH"
Node Name Set? Maximum Length Actual Length Comment
sep03vvm-402 yes 5119 100 passed
sep03vvm-401 yes 5119 129 passed
Result: Check for length of value of environment variable "PATH" passed.
Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Node Name Port Number Protocol Available Status
sep03vvm-402 6200 TCP yes successful
sep03vvm-401 6200 TCP yes successful
sep03vvm-402 6100 TCP yes successful
sep03vvm-401 6100 TCP yes successful
Result: Port availability check passed for ports "6200,6100"
Starting Clock synchronization checks using Network Time Protocol(NTP)...
Checking daemon liveness...
Check: Liveness for "W32Time"
Node Name Running?
sep03vvm-402 yes
sep03vvm-401 yes
Result: Liveness check passed for "W32Time"
Check for NTP daemon or service alive passed on all nodes
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Checking if current user is a domain user...
Check: If user "Administrator" is a domain user
Result: User "MDCCOMMONLDAP\Administrator" is a part of the domain "MDCCOMMONLDAP"
Check: Time zone consistency
Result: Time zone consistency check passed
Checking for status of Automount feature
Node Name Enabled? Comment
sep03vvm-402 yes passed
sep03vvm-401 yes passed
Result: Check for status of Automount feature passed
Checking consistency of current Windows user account across all nodes
PRVG-11818 : Windows user "MDCCOMMONLDAP\Administrator" is a domain user but a conflicting local user account was found on nodes "sep03vvm-402"
Result: Check for Windows user account "MDCCOMMONLDAP\Administrator" consistency failed
Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
sep03vvm-402
SEVERE: [FATAL] [INS-30131] Initial setup required for the execution of installer validations failed.
CAUSE: Failed to access the temporary location.
ACTION: Ensure that the current user has required permissions to access the temporary location.
Are you using a supported OS version (listed in the Install Doc) and following all of the steps in the Install Doc ?
HTH
Srini -
Clusterware Install: root.sh - Failure at final check of Oracle CRS stack. 10
Hello All,
Image: !http://systemwars.com/rac/cluster_back.jpg!
I was attempting to perform the steps in:
Link: http://www.oracle-base.com/articles/11g/OracleDB11gR1RACInstallationOnLinuxUsingNFS.php
The only difference is that I decided to use Fedora Core 12 instead. I did this because I added a second NIC (USB) and only FC12 would recognize it; I tried to get it to work on CentOS 5 but it just wouldn't. The second NIC on each machine (eth1) is connected via a crossover cable, and the interfaces can ping each other just fine as rac1-priv and rac2-priv.
So here is my setup:
# Public
192.168.2.11 rac1.localdomain rac1
192.168.2.12 rac2.localdomain rac2
#Private
192.168.0.11 rac1-priv.localdomain rac1-priv
192.168.0.12 rac2-priv.localdomain rac2-priv
#Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
#NAS
192.168.2.10 mini.localdomain mini
Mini refers to my Mac mini, which I decided to use as the third "server" in the group. I was able to mount, read, and write to the file systems just fine, as you can see:
[root@rac1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rac1-lv_root
8063408 5156268 2497540 68% /
tmpfs 1417456 0 1417456 0% /dev/shm
/dev/sda1 198337 22080 166017 12% /boot
mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
[root@rac1 ~]# ssh rac2
Last login: Mon Dec 21 19:33:38 2009 from rac1.localdomain
[root@rac2 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rac2-lv_root
8063408 4958008 2695800 65% /
tmpfs 1417456 0 1417456 0% /dev/shm
/dev/sda1 198337 22063 166034 12% /boot
mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
CLUSTER VERIFY SEEMS OK APART FROM ONE WARNING
WARNING:
Could not find a suitable set of interfaces for VIPs.
which, according to this link, "can be safely ignored", although I noticed that in the link it is an actual ERROR and not a WARNING => http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_11.shtml . I also noted that it listed the public IPs as possible private-interconnect candidates, which I also thought could safely be ignored.
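For context, cluvfy groups interfaces by subnet when it looks for VIP candidates; when it only sees private RFC 1918 ranges it can emit this warning. A quick way to look at the same interface/subnet view cluvfy works from is the sketch below (generic diagnostic commands, not specific to this setup):

```shell
# List IPv4 interfaces with their addresses/prefixes, one per line.
# Compare the subnets shown here against cluvfy's "suitable interfaces"
# output to see which subnet it treated as public vs. private.
ip -o -4 addr show | awk '{print $2, $4}'
```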
[oracle@rac1 clusterware]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
rac2 passed
rac1 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
rac2 yes passed
rac1 yes passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
Node Name Status Group ID
rac2 exists 501
rac1 exists 501
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "rac2"
Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address
eth0 192.168.2.12 192.168.2.0 0.0.0.0 192.168.2.1 00:01:6C:XXXX
eth2 192.168.0.12 192.168.0.0 0.0.0.0 192.168.2.1 00:25:4B:XXXX
Interface information for node "rac1"
Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address
eth0 192.168.2.11 192.168.2.0 0.0.0.0 192.168.2.1 00:01:6CXXXXX
eth1 192.168.0.11 192.168.0.0 0.0.0.0 192.168.2.1 00:25:4B:XXXX
Check: Node connectivity of subnet "192.168.2.0"
Source Destination Connected?
rac2:eth0 rac1:eth0 yes
Result: Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
rac2:eth2 rac1:eth1 yes
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.
Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect:
rac2 eth0:192.168.2.12
rac1 eth0:192.168.2.11
WARNING:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check passed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
rac2 2.7GB (2834912KB) 1GB (1048576KB) passed
rac1 2.7GB (2834912KB) 1GB (1048576KB) passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
rac2 4.58GB (4805204KB) 400MB (409600KB) passed
rac1 10.51GB (11015624KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
rac2 2GB (2097144KB) 1.5GB (1572864KB) passed
rac1 3GB (3145720KB) 1.5GB (1572864KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
rac2 i686 i686 passed
rac1 i686 i686 passed
Result: System architecture check passed.
Check: Kernel version
Node Name Available Required Comment
rac2 2.6.31.5-127.fc12.i686.PAE 2.6.9 passed
rac1 2.6.31.5-127.fc12.i686.PAE 2.6.9 passed
Result: Kernel version check passed.
Check: Package existence for "make-3.81"
Node Name Status Comment
rac2 make-3.81-18.fc12.i686 passed
rac1 make-3.81-18.fc12.i686 passed
Result: Package existence check passed for "make-3.81".
Check: Package existence for "binutils-2.17.50.0.6"
Node Name Status Comment
rac2 binutils-2.19.51.0.14-34.fc12.i686 passed
rac1 binutils-2.19.51.0.14-34.fc12.i686 passed
Result: Package existence check passed for "binutils-2.17.50.0.6".
Check: Package existence for "gcc-4.1.1"
Node Name Status Comment
rac2 gcc-4.4.2-7.fc12.i686 passed
rac1 gcc-4.4.2-7.fc12.i686 passed
Result: Package existence check passed for "gcc-4.1.1".
Check: Package existence for "libaio-0.3.106"
Node Name Status Comment
rac2 libaio-0.3.107-9.fc12.i686 passed
rac1 libaio-0.3.107-9.fc12.i686 passed
Result: Package existence check passed for "libaio-0.3.106".
Check: Package existence for "libaio-devel-0.3.106"
Node Name Status Comment
rac2 libaio-devel-0.3.107-9.fc12.i686 passed
rac1 libaio-devel-0.3.107-9.fc12.i686 passed
Result: Package existence check passed for "libaio-devel-0.3.106".
Check: Package existence for "libstdc++-4.1.1"
Node Name Status Comment
rac2 libstdc++-4.4.2-7.fc12.i686 passed
rac1 libstdc++-4.4.2-7.fc12.i686 passed
Result: Package existence check passed for "libstdc++-4.1.1".
Check: Package existence for "elfutils-libelf-devel-0.125"
Node Name Status Comment
rac2 elfutils-libelf-devel-0.143-1.fc12.i686 passed
rac1 elfutils-libelf-devel-0.143-1.fc12.i686 passed
Result: Package existence check passed for "elfutils-libelf-devel-0.125".
Check: Package existence for "sysstat-7.0.0"
Node Name Status Comment
rac2 sysstat-9.0.4-4.fc12.i686 passed
rac1 sysstat-9.0.4-4.fc12.i686 passed
Result: Package existence check passed for "sysstat-7.0.0".
Check: Package existence for "compat-libstdc++-33-3.2.3"
Node Name Status Comment
rac2 compat-libstdc++-33-3.2.3-68.i686 passed
rac1 compat-libstdc++-33-3.2.3-68.i686 passed
Result: Package existence check passed for "compat-libstdc++-33-3.2.3".
Check: Package existence for "libgcc-4.1.1"
Node Name Status Comment
rac2 libgcc-4.4.2-7.fc12.i686 passed
rac1 libgcc-4.4.2-7.fc12.i686 passed
Result: Package existence check passed for "libgcc-4.1.1".
Check: Package existence for "libstdc++-devel-4.1.1"
Node Name Status Comment
rac2 libstdc++-devel-4.4.2-7.fc12.i686 passed
rac1 libstdc++-devel-4.4.2-7.fc12.i686 passed
Result: Package existence check passed for "libstdc++-devel-4.1.1".
Check: Package existence for "unixODBC-2.2.11"
Node Name Status Comment
rac2 unixODBC-2.2.14-6.fc12.i686 passed
rac1 unixODBC-2.2.14-9.fc12.i686 passed
Result: Package existence check passed for "unixODBC-2.2.11".
Check: Package existence for "unixODBC-devel-2.2.11"
Node Name Status Comment
rac2 unixODBC-devel-2.2.14-6.fc12.i686 passed
rac1 unixODBC-devel-2.2.14-9.fc12.i686 passed
Result: Package existence check passed for "unixODBC-devel-2.2.11".
Check: Package existence for "glibc-2.5-12"
Node Name Status Comment
rac2 glibc-2.11-2.i686 passed
rac1 glibc-2.11-2.i686 passed
Result: Package existence check passed for "glibc-2.5-12".
Check: Group existence for "dba"
Node Name Status Comment
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
Node Name Status Comment
rac2 exists passed
rac1 exists passed
Result: Group existence check passed for "oinstall".
Check: User existence for "nobody"
Node Name Status Comment
rac2 exists passed
rac1 exists passed
Result: User existence check passed for "nobody".
System requirement passed for 'crs'
Pre-check for cluster services setup was successful.
So now here is the actual problem:
After the installation and during the run of the root.sh I get:
Failure at final check of Oracle CRS stack.
10
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root
WARNING: directory '/u01/app/crs/product' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app/crs/product' is not owned by root. Changing owner to root
The directory '/u01/app/crs' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u01/shared_config/voting_disk
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
According to this link => http://blog.contractoracle.com/2009/01/failure-at-final-check-of-oracle-crs.html
To recover from a status 10, one must check:
check firewall / routing / iptables issues
Now I have turned iptables off completely; it doesn't even start at boot time, so I know it can't be that.
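To double-check that claim on each node, something like the sketch below can help (the service names are the RHEL/Fedora-era init-script names used in this thread; run it on both rac1 and rac2):

```shell
# Check that the firewall services are neither enabled at boot nor running.
for svc in iptables ip6tables; do
  if chkconfig --list "$svc" 2>/dev/null | grep -q ':on'; then
    echo "$svc is still enabled at boot"
  else
    echo "$svc is not enabled at boot (or not installed)"
  fi
  # A non-running or missing service falls through to the message below.
  service "$svc" status 2>/dev/null || echo "$svc service not running"
done
```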
ROUTE
[oracle@rac1 clusterware]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
192.168.0.0 * 255.255.255.0 U 1 0 0 eth1
default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
[oracle@rac2 ~]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
192.168.0.0 * 255.255.255.0 U 1 0 0 eth2
default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
[oracle@rac1 clusterware]$ traceroute rac2
traceroute to rac2 (192.168.2.12), 30 hops max, 60 byte packets
1 rac2.localdomain (192.168.2.12) 0.424 ms 0.427 ms 0.096 ms
[oracle@rac1 clusterware]$ traceroute rac2-priv
traceroute to rac2-priv (192.168.0.12), 30 hops max, 60 byte packets
1 rac2-priv.localdomain (192.168.0.12) 1.336 ms 1.238 ms 1.188 ms
[oracle@rac1 clusterware]$ traceroute rac2-vip
traceroute to rac2-vip (192.168.2.112), 30 hops max, 60 byte packets
1 rac1.localdomain (192.168.2.11) 2999.599 ms !H 2999.560 ms !H 2999.523 ms !H
[oracle@rac1 bin]$ ./crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
Both rac1 and rac2 get the same output as above, with the -vip addresses returning !H (!H, !N, and !P mean host, network, or protocol unreachable). I am assuming this is normal, since the CRS install did not complete successfully and the virtual IPs are not bound yet.
I'm pretty sure I have some kind of networking issue here, but I can't put my finger on it. I have tried absolutely everything suggested on the internet that I could find, even deleting /tmp/.oracle and /var/tmp/.oracle, but nothing works. SSH keys for the root and oracle users exist, and I've connected using every possible combination to avoid the first-time SSH prompt, so the oracle user on each node goes directly into rac1/rac2, rac1-priv/rac2-priv, and the actual IPs as well. Any ideas?
Edited by: Javier on Dec 30, 2009 12:34 PM
Edited by: Javier on Dec 30, 2009 6:58 PM
Hello,
Note 370605.1 (Clusterware Intermittently Hangs And Commands Fail With CRS-184) says the following:
"This is caused by a cron job that cleans up the /tmp directory which also removes the Oracle socket files in /tmp/.oracle
Do not remove /tmp/.oracle or /var/tmp/.oracle or its files while Oracle Clusterware is up."
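The usual fix that follows from that note is to exclude the Oracle socket directories from the periodic cleanup job. On systems that use tmpwatch, that means editing the daily cron script, roughly as in this sketch (a config fragment only; the exact flags, timeout, and file location vary by distribution, so verify against your tmpwatch man page):

```shell
# /etc/cron.daily/tmpwatch (sketch) -- add exclusions so the Oracle
# clusterware socket directories survive the /tmp cleanup.
/usr/sbin/tmpwatch -x /tmp/.oracle -x /var/tmp/.oracle 240 /tmp
```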
Best Regards... -
Scan-vip running only on one RAC node
Hi ,
While setting up RAC 11.2 on CentOS 5.7, I was getting this error during the grid installation:
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5005: IP Address: 192.168.100.208 is already in use in the network
CRS-2674: Start of 'ora.scan1.vip' on 'falcen6b' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan2.vip
CRS-5005: IP Address: 192.168.100.209 is already in use in the network
CRS-2674: Start of 'ora.scan2.vip' on 'falcen6b' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan2.vip' on that would satisfy its placement policy
PRCR-1079 : Failed to start resource ora.scan3.vip
CRS-5005: IP Address: 192.168.100.210 is already in use in the network
CRS-2674: Start of 'ora.scan3.vip' on 'falcen6b' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan3.vip' on that would satisfy its placement policy
I figured out that the SCAN service runs on only one node at a time: when I stopped the service on rac1 and started it on rac2, it started fine.
But I think for the grid installation the SCAN service has to run on both nodes simultaneously.
How do I resolve it?
Any suggestions, please.
PS - I am planning to try the 11.0.2.3 patch, but it will be a while till I get access to it.
Till then, can someone suggest a workaround?
Hi Balazs Papp and onedbguru,
I was able to resolve that error by running the following command on rac2, and that part of the installer now passes.
crsctl start res ora.scan1.vip
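Since CRS-5005 means the address is already answering on the network, it can also help to confirm each SCAN IP is actually free before the installer (or a resource start) tries to plumb it. A minimal sketch, using the addresses from this thread (any reply here indicates a conflict to resolve first):

```shell
# If any of these addresses replies, another host already owns it and
# the corresponding ora.scanN.vip start will fail with CRS-5005.
for ip in 192.168.100.208 192.168.100.209 192.168.100.210; do
  if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
    echo "$ip is IN USE - find and remove the conflicting host"
  else
    echo "$ip appears free"
  fi
done
```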
However, the cluster verification utility fails at the end of the installer.
When I execute the command below, this is my output:
[oracle@falcen6a grid]$ ./runcluvfy.sh stage -post crsinst -n falcen6a,falcen6b -verbose
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "falcen6a"
Destination Node Reachable?
falcen6a yes
falcen6b yes
Result: Node reachability check passed from node "falcen6a"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
falcen6b passed
falcen6a passed
Result: User equivalence check passed for user "oracle"
Checking time zone consistency...
Time zone consistency check passed.
Checking Cluster manager integrity...
Checking CSS daemon...
Node Name Status
falcen6b running
falcen6a running
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations
Check default user file creation mask
Node Name Available Required Comment
falcen6b 0022 0022 passed
falcen6a 0022 0022 passed
Result: Default user file creation mask check passed
Checking cluster integrity...
Cluster is divided into 2 partitions
Partition 1 consists of the following members:
Node Name
falcen6b
Partition 2 consists of the following members:
Node Name
falcen6a
Cluster integrity check failed. Cluster is divided into 2 partition(s).
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ERROR:
PRVF-4193 : Asm is not running on the following nodes. Proceeding with the remaining nodes.
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
ERROR:
PRVF-4195 : Disk group for ocr location "+DATA" not available on the following nodes:
Checking size of the OCR location "+DATA" ...
Size check for OCR location "+DATA" successful...
OCR integrity check failed
Checking CRS integrity...
ERROR:
PRVF-5316 : Failed to retrieve version of CRS installed on node "falcen6b"
The Oracle clusterware is healthy on node "falcen6b"
The Oracle clusterware is healthy on node "falcen6a"
CRS integrity check failed
Checking node application existence...
Checking existence of VIP node application
Node Name Required Status Comment
falcen6b yes unknown failed
falcen6a yes unknown failed
Result: Check failed.
Checking existence of ONS node application
Node Name Required Status Comment
falcen6b no unknown ignored
falcen6a no online passed
Result: Check ignored.
Checking existence of GSD node application
Node Name Required Status Comment
falcen6b no unknown ignored
falcen6a no does not exist ignored
Result: Check ignored.
Checking existence of EONS node application
Node Name Required Status Comment
falcen6b no unknown ignored
falcen6a no online passed
Result: Check ignored.
Checking existence of NETWORK node application
Node Name Required Status Comment
falcen6b no unknown ignored
falcen6a no online passed
Result: Check ignored.
Checking Single Client Access Name (SCAN)...
SCAN VIP name Node Running? ListenerName Port Running?
falcen6-scan unknown false LISTENER 1521 false
WARNING:
PRVF-5056 : Scan Listener "LISTENER" not running
Checking name resolution setup for "falcen6-scan"...
SCAN Name IP Address Status Comment
falcen6-scan 192.168.100.210 passed
falcen6-scan 192.168.100.208 passed
falcen6-scan 192.168.100.209 passed
Verification of SCAN VIP and Listener setup failed
OCR detected on ASM. Running ACFS Integrity checks...
Starting check to see if ASM is running on all cluster nodes...
PRVF-5137 : Failure while checking ASM status on node "falcen6b"
Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured
Task ACFS Integrity check failed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Checking to make sure user "oracle" is not in "root" group
Node Name Status Comment
falcen6b does not exist passed
falcen6a does not exist passed
Result: User "oracle" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
falcen6b passed
falcen6a passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
falcen6b Observer
falcen6a Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
falcen6b yes
falcen6a yes
Result: Liveness check passed for "ntpd"
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
Node Name Slewing Option Set?
falcen6b yes
falcen6a yes
Result:
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
Node Name Slewing Option Set?
falcen6b yes
falcen6a yes
Result:
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
NTP Time Server "133.243.236.19" is common to all nodes on which the NTP daemon is running
NTP Time Server "133.243.236.18" is common to all nodes on which the NTP daemon is running
NTP Time Server "210.173.160.86" is common to all nodes on which the NTP daemon is running
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[falcen6b, falcen6a]"...
Check: Clock time offset from NTP Time Server
Time Server: 133.243.236.19
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
falcen6b 15.332 passed
falcen6a -1.503 passed
Time Server "133.243.236.19" has time offsets that are within permissible limits for nodes "[falcen6b, falcen6a]".
Time Server: 133.243.236.18
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
falcen6b 15.115 passed
falcen6a -1.614 passed
Time Server "133.243.236.18" has time offsets that are within permissible limits for nodes "[falcen6b, falcen6a]".
Time Server: 210.173.160.86
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
falcen6b 15.219 passed
falcen6a -1.527 passed
Time Server "210.173.160.86" has time offsets that are within permissible limits for nodes "[falcen6b, falcen6a]".
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
falcen6b 0.0 passed
falcen6a 0.0 passed
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[falcen6b, falcen6a]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Post-check for cluster services setup was unsuccessful on all the nodes.
[oracle@falcen6a grid]$
Any suggestions? -
11gR2 clusterware installation problem on root.sh script on second node
Hi all,
I want to install 11gR2 RAC on Oracle Linux 5.5 (x86_64) using VMware Server, but on the second node I get two "failed" results at the end of the root.sh script.
After that I try to install the database, but I can see only one node. What is the problem?
I will send the output; I need your help.
Thank you all for helping.
Hosts file (we have no ping problem):
[root@rac2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
# SCAN
192.168.2.201 rac-scan.localdomain rac-scan
[root@rac2 ~]#
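With a hosts file like this, a quick sanity check before running root.sh is that every cluster name resolves on every node, and to the same address. A small sketch, using the names from the listing above (adjust the list to your own setup):

```shell
# Print whether each cluster name resolves through the system resolver
# (/etc/hosts via nsswitch). Run this on both rac1 and rac2 and compare.
check_name() {
  if getent hosts "$1" > /dev/null; then
    echo "$1 resolves"
  else
    echo "$1 MISSING"
  fi
}
for h in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip rac-scan; do
  check_name "$h"
done
```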
FIRST NODE root.sh script output...
[root@rac2 ~]# /u01/app/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:45:06: Parsing the host name
2010-12-06 14:45:06: Checking for super user privileges
2010-12-06 14:45:06: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 587cc69413ce4fd3bf0c2c2548fb9017.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 587cc69413ce4fd3bf0c2c2548fb9017 (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac2'
CRS-2676: Start of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac2'
CRS-2676: Start of 'ora.registry.acfs' on 'rac2' succeeded
rac2 2010/12/06 14:52:06 /u01/app/11.2.0/db_1/cdata/rac2/backup_20101206_145206.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac2 ~]#
SECOND NODE root.sh script output
[root@rac1 db_1]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:54:11: Parsing the host name
2010-12-06 14:54:11: Checking for super user privileges
2010-12-06 14:54:11: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
Successful addition of voting disk 2761ce8d47b44fbabf73462151e3ba1d.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 2761ce8d47b44fbabf73462151e3ba1d (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5005: IP Address: 192.168.2.201 is already in use in the network
CRS-2674: Start of 'ora.scan1.vip' on 'rac1' failed
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
start scan ... failed
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 db_1]#
"./runcluvfy.sh stage -pre crsinst -n rac1,rac2" output is the same on each node...
[oracle@rac2 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac2"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.122.0" with node(s) rac2,rac1
TCP connectivity check failed for subnet "192.168.122.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.0.0"
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
rac2 eth0:192.168.2.102 eth0:192.168.2.112 eth0:192.168.2.201
rac1 eth0:192.168.2.101 eth0:192.168.2.111
Interfaces found on subnet "192.168.122.0" that are likely candidates for a private interconnect are:
rac2 virbr0:192.168.122.1
rac1 virbr0:192.168.122.1
Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.0.102
rac1 eth1:192.168.0.101
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
[oracle@rac2 grid]$
I'm confused :)
Edited by: Eren GULERYUZ on 06.Ara.2010 05:57
Hi,
it looks like the "shared device" you are using is not really shared.
The second node creates an ASM disk group and creates the OCR and voting disks from scratch. If the device were truly shared, the installer would have recognized that the disk was already in use by the first node.
So your VMware configuration must be wrong: the disk you presented as shared is not actually shared.
Which VMware version did you use? Shared disks do not work correctly with the Workstation or Player editions; they only work reliably with the Server edition.
If you are indeed using Server, could you paste your VM configurations?
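A crude way to verify that the disk really is shared before re-running the install; this is destructive to the first block of the device, and the device path below is purely illustrative:

```shell
# DESTRUCTIVE test - only on a disk you are about to (re)format.
# On node 1, write a unique tag into the first block:
#   echo "shared-test-$$" | dd of=/dev/sdX bs=512 count=1 conv=sync
# On node 2, read the first block back:
#   dd if=/dev/sdX bs=512 count=1 2>/dev/null | strings | head -1
# If node 2 does not read back the tag node 1 wrote, the "shared" disk
# is actually two independent virtual disks, matching the diagnosis above.
```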
Furthermore I recommend using Virtual Box. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian -
Runcluvfy.sh stage -pre crsinst: error Unable to reach any of the nodes
Hii all,
Well, I've gone through the pre-reqs for trying to install 11G clusterware on RHEL 5.3.
I'm to the point where i'm trying to run:
./runcluvfy.sh stage -pre crsinst -n node1 -verbose
I get this:
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check failed from node "node1 ".
Check failed on nodes:
node1
ERROR:
Unable to reach any of the nodes.
Verification cannot proceed.
Pre-check for cluster services setup was unsuccessful on all the nodes.
I'm just wanting right now, to install a one node RAC system (I will add servers later as I get them online).
I've verified that ssh is working (thinking it may be trying to connect to itself via ssh). I have the keys generated and installed... if I ssh as the oracle user back to the same machine, it logs me right in with no password prompt.
nslookup on node1 looks great.
This box has 2 cards....eth0 and eth1. Right now in the /etc/hosts file, I have node1 to the IP for eth0, and node1-priv set for the IP address eth1.
I do have a little trouble understanding what node1-vip is supposed to do or be set to. I found that an IP address one higher than eth0's wasn't being used, and set node1-vip to that.
(Can someone explain to me a little more about the vip host?? Is it supposed to somehow point to node1's IP address on eth0 like the regular one does?)
Since this is a one box, one node install...hoping clusterware and checks are just looking at the /etc/hosts file. I've tried playing around, and setting node1-vip to be the same as node1 (IP)...that doesn't work either.
One thing I can guess 'might' be wrong: does runcluvfy use "ping"? I found the oracle user cannot ping this box from this box. The box (node1) can be pinged from outside the box... it is registered in DNS, I can ssh into it no problem, and again, oracle can ssh into itself on the same box with properly generated keys.
I've been looking around, and I just don't see much of what to look at to troubleshoot with this error, I guess everyone gets past the verification the first time with no host unreachable errors?
I'm a bit weak when it comes to networking. Any help greatly appreciated...suggestions, links...etc!!
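On the node1-vip question above: the VIP must be an unused address on the same public subnet as the host name, not an alias of it; Clusterware plumbs it onto the public interface itself and fails it over between nodes. A sketch of a typical /etc/hosts layout, with illustrative addresses only:

```shell
# /etc/hosts sketch (addresses are examples, not the poster's)
# 192.168.1.101   node1        # public, bound to eth0 by the OS
# 192.168.1.201   node1-vip    # unused public address; Clusterware
#                              # brings it up as eth0:1 during install
# 10.0.0.101      node1-priv   # private interconnect, bound to eth1
```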
cayenne
Ok... looks like this was the problem. It appears the SAs, per newer policy, had turned off "ping" for any user on the box besides root.
I took a shot in the dark and had them turn it back on (since ssh and the other checks seemed to work outside the runcluvfy script). They turned on ping. The nodes from the script are now reachable and test positive for equivalence.
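The restriction could have been spotted up front. A sketch assuming Linux defaults (the thread's OS is RHEL): ping normally needs the setuid bit, or the cap_net_raw capability on newer systems, to open raw ICMP sockets as a non-root user. The path fallback below is a common default, not verified against the poster's build:

```shell
# Check whether non-root users can be expected to run ping at all.
PING_BIN=$(command -v ping || echo /bin/ping)
if [ -u "$PING_BIN" ]; then
    echo "setuid is set on $PING_BIN: non-root ping should work"
else
    echo "no setuid on $PING_BIN: non-root ping may be blocked (check getcap)"
fi
```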
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
node1 yes
Result: Node reachability check passed from node "node1".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
node1 passed
Result: User equivalence check passed for user "oracle".
Pre-check for cluster services setup was unsuccessful on all the nodes.
I'm guessing that last line was due to not having clusterware running on any other boxes?
Anyway, I will try to configure RAC and get things installed. -
Can't install ORACLE RAC on Solaris (specified nodes are not clusterable)
Hi all,
Could you please help with the Oracle CRS issue?
During the installation Oracle CRS the OUI indicate that the specified nodes are not clusterable.
The window appears and displays:
"The specified nodes are not clusterable.
The following error was returned by the operating system:"
I am using 10gr2_cluster_sol.cpio.gz file.
My Solaris 10 configuration:
server - sun3
bash-3.00# cat /etc/hosts
# Internet host table
127.0.0.1 localhost
10.160.19.49 sun3 loghost
10.160.19.50 sun4 loghost
10.11.12.13 sun3prv
10.11.12.14 sun4prv
10.160.19.64 sun3pub
10.160.19.65 sun4pub
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.160.19.49 netmask fffffe00 broadcast 10.160.19.255
ether 0:14:4f:0:64:82
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.11.12.13 netmask fffffe00 broadcast 10.11.13.255
ether 0:14:4f:0:64:83
bash-3.00# cat /etc/netmasks
10.160.18.0 255.255.254.0
10.160.19.0 255.255.254.0
10.11.12.0 255.255.254.0
bash-3.00# cat /etc/hostname.bge0
sun3
bash-3.00# cat /etc/hostname.bge1
sun3prv
server - sun4
bash-3.00# cat /etc/hosts
# Internet host table
127.0.0.1 localhost
10.160.19.50 sun4 loghost
10.160.19.49 sun3 loghost
10.11.12.14 sun4prv
10.11.12.13 sun3prv
10.160.19.63 sun4pub
10.160.19.62 sun3pub
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.160.19.50 netmask fffffe00 broadcast 10.160.19.255
ether 0:14:4f:0:41:c8
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.11.12.14 netmask fffffe00 broadcast 10.11.13.255
ether 0:14:4f:0:41:c9
bash-3.00# cat /etc/netmasks
10.160.18.0 255.255.254.0
10.11.12.0 255.255.254.0
10.160.19.0 255.255.254.0
bash-3.00# cat /etc/hostname.bge1
sun4prv
bash-3.00# cat /etc/hostname.bge0
sun4
0) This error occurs when I run ./runInstaller
All prerequisites check passed. The error window appears after clicking Next button in Specify Cluster Configuration window.
1) I have changed /etc/hosts file as you have mentioned
SUN3
bash-3.00# cat /etc/hosts
# Internet host table
::1 localhost
127.0.0.1 localhost
10.160.19.49 sun3
10.160.19.50 sun4
10.11.12.13 sun3-vip
10.11.12.14 sun4-vip
10.160.19.64 sun3pub
10.160.19.65 sun4pub
SUN4
bash-3.00# cat /etc/hosts
# Internet host table
::1 localhost
127.0.0.1 localhost
10.160.19.50 sun4
10.160.19.49 sun3
10.11.12.13 sun3-vip
10.11.12.14 sun4-vip
10.160.19.64 sun3pub
10.160.19.65 sun4pub
Also I have configured bge0:1 interface
bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.160.19.49 netmask fffffe00 broadcast 10.160.19.255
ether 0:14:4f:0:64:82
bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.160.19.64 netmask ffffff00 broadcast 10.160.19.255
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.11.12.13 netmask fffffe00 broadcast 10.11.13.255
ether 0:14:4f:0:64:83
2) I have removed loghost from /etc/hosts file
3) Currently I do not have shared storage. I am going to use Storage Foundation to create a shared storage
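For reference, the bge0:1 logical interface shown in item 1 is typically created on Solaris 10 as sketched below (addresses are the poster's). Note that its netmask in the ifconfig output, ffffff00 (/24), differs from the parent bge0's fffffe00 (/23); that inconsistency may be worth double-checking:

```shell
# Solaris 10 logical interface, as root
# (persist it via /etc/hostname.bge0:1):
#   ifconfig bge0:1 plumb
#   ifconfig bge0:1 10.160.19.64 netmask 255.255.254.0 up
```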
Also I was trying to test the machines using runcluvfy.sh command
The output is the following:
-bash-3.00$ ./runcluvfy.sh stage -pre crsinst -n sun3,sun4
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "sun3".
Checking user equivalence...
User equivalence check failed for user "oracle".
Check failed on nodes:
sun4,sun3
ERROR:
User equivalence unavailable on all the nodes.
Verification cannot proceed.
Pre-check for cluster services setup was unsuccessful on all the nodes. -
Problem in NODE 1 after reboot
Hi,
Oracle Version:11gR2
Operating System:Cent Os
Hi, we have a problem on node 1 after a sudden reboot of both nodes. When the servers came back up, the database on node 2 started automatically, but on node 1 we had to start it manually.
But in the CRSCTL output it shows that the node 1 database is down, as shown below.
[root@rac1 bin]# ./crsctl stat res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.ASM_DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ASM_FRA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.OCR_VOTE.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.eons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1
ora.oc4j
1 OFFLINE OFFLINE
ora.qfundrac.db
1 OFFLINE OFFLINE
2 ONLINE ONLINE rac2 Open
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac1
But the query below shows that both the instances are up:
SQL> select inst_id,status,instance_role,active_state from gv$instance;
INST_ID STATUS INSTANCE_ROLE ACTIVE_ST
1 OPEN PRIMARY_INSTANCE NORMAL
2 OPEN PRIMARY_INSTANCE NORMAL
Here is the output from cluvfy:
[grid@rac1 bin]$ ./cluvfy stage -post crsinst -n rac1,rac2 -verbose
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
rac2 yes
rac1 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
rac2 passed
rac1 passed
Result: User equivalence check passed for user "grid"
Checking time zone consistency...
Time zone consistency check passed.
Checking Cluster manager integrity...
Checking CSS daemon...
Node Name Status
rac2 running
rac1 running
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations
Check default user file creation mask
Node Name Available Required Comment
rac2 0022 0022 passed
rac1 0022 0022 passed
Result: Default user file creation mask check passed
Checking cluster integrity...
Node Name
rac1
rac2
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all cluster nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+OCR_VOTE" available on all the nodes
Checking size of the OCR location "+OCR_VOTE" ...
Size check for OCR location "+OCR_VOTE" successful...
Size check for OCR location "+OCR_VOTE" successful...
WARNING:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
The Oracle clusterware is healthy on node "rac2"
The Oracle clusterware is healthy on node "rac1"
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application
Node Name Required Status Comment
rac2 yes online passed
rac1 yes online passed
Result: Check passed.
Checking existence of ONS node application
Node Name Required Status Comment
rac2 no online passed
rac1 no online passed
Result: Check passed.
Checking existence of GSD node application
Node Name Required Status Comment
rac2 no does not exist ignored
rac1 no does not exist ignored
Result: Check ignored.
Checking existence of EONS node application
Node Name Required Status Comment
rac2 no online passed
rac1 no online passed
Result: Check passed.
Checking existence of NETWORK node application
Node Name Required Status Comment
rac2 no online passed
rac1 no online passed
Result: Check passed.
Checking Single Client Access Name (SCAN)...
SCAN VIP name Node Running? ListenerName Port Running?
qfund-rac.qfund.net rac2 true LISTENER 1521 true
Checking name resolution setup for "qfund-rac.qfund.net"...
SCAN Name IP Address Status Comment
qfund-rac.qfund.net 192.168.8.118 passed
qfund-rac.qfund.net 192.168.8.119 passed
qfund-rac.qfund.net 192.168.8.117 passed
Verification of SCAN VIP and Listener setup passed
OCR detected on ASM. Running ACFS Integrity checks...
Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all cluster nodes
Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured
Task ACFS Integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
rac2 does not exist passed
rac1 does not exist passed
Result: User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
rac2 passed
rac1 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
rac2 Observer
rac1 Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
rac2 yes
rac1 yes
Result: Liveness check passed for "ntpd"
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
Node Name Slewing Option Set?
rac2 yes
rac1 yes
Result:
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
Node Name Slewing Option Set?
rac2 yes
rac1 yes
Result:
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
NTP Time Server ".INIT." is common to all nodes on which the NTP daemon is running
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[rac2, rac1]"...
Check: Clock time offset from NTP Time Server
Time Server: .INIT.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
rac2 0.0 passed
rac1 0.0 passed
Time Server ".INIT." has time offsets that are within permissible limits for nodes "[rac2, rac1]".
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
rac2 -29.328 passed
rac1 -84.385 passed
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[rac2, rac1]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Post-check for cluster services setup was successful.
[grid@rac1 bin]$
Please help me solve this problem.
Thanks & regards
Poorna Prasad.S
Hi All,
Now I rebooted again manually, and the database is not up on either node.
Here is the output for few commands
[grid@rac1 bin]$ ./crs_stat -t
Name Type Target State Host
ora....DATA.dg ora....up.type OFFLINE OFFLINE
ora.ASM_FRA.dg ora....up.type OFFLINE OFFLINE
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora....N2.lsnr ora....er.type ONLINE ONLINE rac2
ora....N3.lsnr ora....er.type ONLINE ONLINE rac2
ora....VOTE.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.eons ora.eons.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....drac.db ora....se.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
ora.scan2.vip ora....ip.type ONLINE ONLINE rac2
ora.scan3.vip ora....ip.type ONLINE ONLINE rac2
[grid@rac1 bin]$ srvctl status nodeapps -n rac1,rac2
-bash: srvctl: command not found
[grid@rac1 bin]$ ./srvctl status nodeapps -n rac1,rac2
PRKO-2003 : Invalid command line option value: rac1,rac2
[grid@rac1 bin]$ ./srvctl status nodeapps -n rac1
-n <node_name> option has been deprecated.
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
Network is enabled
Network is running on node: rac1
GSD is disabled
GSD is not running on node: rac1
ONS is enabled
ONS daemon is running on node: rac1
eONS is enabled
eONS daemon is running on node: rac1
[grid@rac1 bin]$ ./srvctl status nodeapps -n rac2
-n <node_name> option has been deprecated.
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac2
Here is the output of crsctl stat res -t:
[grid@rac1 bin]$ ./crsctl stat res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.ASM_DATA.dg
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.ASM_FRA.dg
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.OCR_VOTE.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.eons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac2
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2
ora.oc4j
1 OFFLINE OFFLINE
ora.qfundrac.db
1 OFFLINE OFFLINE
2 OFFLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
ora.scan2.vip
1 ONLINE ONLINE rac2
ora.scan3.vip
1 ONLINE ONLINE rac2
What is going wrong here?
Thanks & Regards,
Poorna Prasad.S
Edited by: SIDDABATHUNI on Apr 30, 2011 2:06 PM
Edited by: SIDDABATHUNI on Apr 30, 2011 2:10 PM -
Cluvfy returns Path "/tmp/" does not exist and cannot be created on nodes
Hi,
I'm installing Oracle RAC for SAP in AIX 5L.
After run Pre-check for cluster services setup it returns the next message:
Path "/tmp/" does not exist and cannot be created on nodes
This message appears after the "Checking node reachability" and "Checking user equivalence" phases.
This is my complete log:
pr_bd01/oramedia/clusterware/Disk1/cluvfy/> ./runcluvfy.sh stage -pre crsinst -n pr_bd01,pr_bd02
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "pr_bd01".
Checking user equivalence...
User equivalence check passed for user "oracle".
ERROR:
Path "/tmp/" does not exist and cannot be created on nodes:
pr_bd01
Verification will proceed with nodes:
pr_bd02
Pre-check for cluster services setup was unsuccessful on all the nodes.
/tmp is a shared filesystem, and the oracle user can read and write to it
The oracle user ID is the same on both nodes
The dba group ID is the same on both nodes
The oinstall group ID is the same on both nodes
The primary group of the oracle user is oinstall
Where can my problem be?
Thank you
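One way to see exactly what cluvfy is objecting to is to run its /tmp test by hand on each node. The loop below is a sketch using the poster's node names and requires working ssh equivalence; also note that a shared /tmp is unusual for RAC and is itself a plausible culprit, since cluvfy stages per-node work files under /tmp:

```shell
# Check that /tmp exists and is writable by the current (oracle) user
# on each node. Node names are the poster's.
for n in pr_bd01 pr_bd02; do
    ssh "$n" '[ -d /tmp ] && [ -w /tmp ] \
        && echo "$(hostname): /tmp ok" \
        || echo "$(hostname): /tmp missing or not writable"'
done
```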
Edited by: user8114467 on 27/02/2009 07:17 AM
Hi,
If the above does not resolve it, then also do the following:
[oracle@node1] ssh node1 date
[oracle@node2] ssh node2 date
Note that each node is ssh-ing to itself rather than to the remaining node(s). People often forget to ssh to the local node at least once.
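The advice above can be folded into one loop, run on each node in turn as the oracle user. Node names are the poster's, and BatchMode makes a missing key or unaccepted host key fail fast instead of prompting:

```shell
# Run this ON EACH node: every node must reach every node, including
# itself, without a password or host-key prompt.
for dst in node1 node2; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$dst" true 2>/dev/null; then
        echo "$(hostname) -> $dst: ok"
    else
        echo "$(hostname) -> $dst: FAILED (fix keys/known_hosts first)"
    fi
done
```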
Talok Khatri -
Crs doesn't start on second node
Guys,
RAC on 2 nodes
Release 10.2.0.5.0
Solaris 10
There was a problem with the cable that carries the interconnect, but it has since been fixed. One of the nodes was evicted and all resources were moved to the other node. Once the problem was solved, I tried to start CRS on the evicted node, but without success. When I run crs_stat -t I get the infamous CRS-0184.
I have checked the ocr and olsnodes; ocr seems to be fine and the second node is recognized as part of the cluster.
cluvfy comp ocr -n lenin,trotsky -verbose
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Verification of OCR integrity was successful.
oracle@trotsky > cluvfy comp nodereach -n lenin,trotsky -srcnode trotsky -verbose
Verifying node reachability
Checking node reachability...
Check: Node reachability from node "trotsky"
Destination Node Reachable?
lenin yes
trotsky yes
Result: Node reachability check passed from node "trotsky".
I have checked /var/adm/messages and the crs and cssd logs, but nothing stands out.
I have also tried deleting the contents of /var/tmp/.oracle and restarting CRS, again without success.
I have read in another thread on this forum that CRS problems are usually related either to the interconnect or to the OCR/voting disks, but as mentioned before both seem to be OK.
I'm running out of ideas; any suggestions?
One of the nodes now holds both vip addresses:
bge0:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2
inet 192.168.191.184 netmask ffffff00 broadcast 192.168.191.255
bge0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2
inet 192.168.191.182 netmask ffffff00 broadcast 192.168.191.255
Do I need to manually reconfigure the interface so that it is then held by the second node again?
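Normally the VIP fails back automatically once clusterware is healthy again on the repaired node; if it does not, the usual manual route in 10gR2 is srvctl rather than reconfiguring the interface by hand. A dry-run sketch (it only prints the commands; the node name lenin comes from this thread):

```shell
#!/bin/sh
# Dry-run sketch: print, do not execute, the typical 10gR2 commands for
# moving a failed-over VIP back to its home node once that node's
# clusterware is up again. Review before running anything for real.
plan='# On the surviving node, release the failed-over VIP:
srvctl stop nodeapps -n lenin
# Once CRS is healthy on lenin again, start its nodeapps there:
srvctl start nodeapps -n lenin'
echo "$plan"
```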
Thanks in advance for your help.
Cheers for your input!
The suggested cluvfy command passed on all checks with the exception of the daemon liveness check (as expected).
Excerpts from the different logs:
alert.log
2010-11-19 13:12:35.033
[cssd(4928)]CRS-1605:CSSD voting file is online: /dev/rdsk/c1t500601604BA03AEAd0s5. Details in /u01/crs/10.2.0/crs_1/log/trotsky/cssd/ocssd.log.
2010-11-19 13:12:35.050
[cssd(4928)]CRS-1605:CSSD voting file is online: /dev/rdsk/c1t500601604BA03AEAd0s4. Details in /u01/crs/10.2.0/crs_1/log/trotsky/cssd/ocssd.log.
2010-11-19 13:12:35.062
[cssd(4928)]CRS-1605:CSSD voting file is online: /dev/rdsk/c1t500601604BA03AEAd0s6. Details in /u01/crs/10.2.0/crs_1/log/trotsky/cssd/ocssd.log.
cssd.log
[ CSSD]2010-11-19 13:16:47.059 [21] >WARNING: clssnmLocalJoinEvent: takeover aborted due to ALIVE node on Disk
[ CSSD]2010-11-19 13:16:47.059 [21] >WARNING: clssnmRcfgMgrThread: not possible to join the cluster. Please reboot the node.
[ CSSD]2010-11-19 13:16:47.059 [21] >WARNING: clssnmReconfigThread: state(1) clusterState(0) exit
I have tried rebooting the node but that did not help.
crsd.log
2010-11-19 13:53:49.652: [ CRSRTI][1] CSS is not ready. Received status 3 from CSS. Waiting for good status ..
2010-11-19 13:53:50.889: [ COMMCRS][1802]clsc_connect: (1009ac310) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_trotsky_))
2010-11-19 13:53:50.889: [ CSSCLNT][1]clsssInitNative: connect failed, rc 9
2010-11-19 13:53:50.890: [ CRSRTI][1] CSS is not ready. Received status 3 from CSS. Waiting for good status ..
2010-11-19 13:53:51.899: [ CRSD][1][PANIC] CRSD exiting: Could not init the CSS context
2010-11-19 13:53:51.899: [ CRSD][1] Done.
Does this help? -
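For anyone hitting the same symptoms, the first-line health checks in 10gR2 are the per-daemon crsctl checks. A sketch that runs them when crsctl is on the PATH and otherwise only records what it would run (so it is safe to try anywhere):

```shell
#!/bin/sh
# Basic 10gR2 clusterware daemon checks for a node whose CRS stack will
# not start. On a machine without clusterware installed, the loop only
# records what it would have run.
results=""
for d in crs cssd crsd evmd; do
    if command -v crsctl >/dev/null 2>&1; then
        crsctl check "$d"          # prints the daemon's health
        results="$results $d:checked"
    else
        results="$results $d:skipped"
    fi
done
echo "daemon checks:$results"
```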
Vip not started in newly added node in 10gR2
Hi
I'm facing an issue while adding a third node in a 10gR2 environment.
After the pre-requisite checks succeeded, I ran the $CRS_HOME/oui/bin/addNode.sh script.
After the GUI-based installation I ran all three scripts, which completed successfully.
Then I ran vipca as the root user to configure the VIP, but it threw errors CRS-1006, CRS-1028, CRS-0215, CRS-0223.
I checked the status and permissions of the OCR and voting disks on node 3; everything looks good.
CRS-1006: No more member to consider CRS-0215: could not start resource 'ora.rac3.vip'
CRS-1028: dependency analysis failed because of CRS-0223 resource 'ora.rac1.gsd' has placement error.
CRS-1028: dependency analysis failed because of CRS-0223 resource 'ora.rac2.gsd' has placement error.
CRS-1028: dependency analysis failed because of CRS-0223 resource 'ora.rac1.ons' has placement error.
CRS-1028: dependency analysis failed because of CRS-0223 resource 'ora.rac2.ons' has placement error.
Can you please suggest?
Regards
rajeev....
[oracle@rac1 cluvfy]$ ./runcluvfy.sh stage -post crsinst -n rac1,rac2,rac3 -verbose
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
rac2 yes
rac1 yes
rac3 yes
Result: Node reachability check passed from node "rac1".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
rac2 passed
rac1 passed
rac3 passed
Result: User equivalence check passed for user "oracle".
Checking Cluster manager integrity...
Checking CSS daemon...
Node Name Status
rac2 running
rac1 running
rac3 running
Result: Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Node Name
rac1
rac2
rac3
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Check: Liveness for "CRS daemon"
Node Name Running
rac2 yes
rac1 yes
rac3 yes
Result: Liveness check passed for "CRS daemon".
Checking daemon liveness...
Check: Liveness for "CSS daemon"
Node Name Running
rac2 yes
rac1 yes
rac3 yes
Result: Liveness check passed for "CSS daemon".
Checking daemon liveness...
Check: Liveness for "EVM daemon"
Node Name Running
rac2 yes
rac1 yes
rac3 yes
Result: Liveness check passed for "EVM daemon".
Liveness of all the daemons
Node Name CRS daemon CSS daemon EVM daemon
rac2 yes yes yes
rac1 yes yes yes
rac3 yes yes yes
Checking CRS health...
Check: Health of CRS
Node Name CRS OK?
rac2 yes
rac1 yes
rac3 yes
Result: CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application
Node Name Required Status Comment
rac2 yes exists passed
rac1 yes exists passed
rac3 yes exists passed
Result: Check passed.
Checking existence of ONS node application
Node Name Required Status Comment
rac2 no exists passed
rac1 no exists passed
rac3 no exists passed
Result: Check passed.
Checking existence of GSD node application
Node Name Required Status Comment
rac2 no exists passed
rac1 no exists passed
rac3 no exists passed
Result: Check passed.
Post-check for cluster services setup was successful.