Root.sh failed on second node, Oracle Linux 6.3, Oracle Grid 11.2.0.3
Hi, I'm installing a two-node cluster on Oracle Linux 6.3 with Oracle DB 11.2.0.3. The installation went smoothly up until the execution of the root.sh script on the second node.
The script returned these final lines:
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Start of resource "ora.crsd" failed
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'nodo2'
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Ready Services at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1286.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
In $GRID_HOME/log/node2/alertnode.log it looks like a Cluster Time Synchronization Service issue (I didn't synchronize the nodes), but CTSS is running in observer mode, which I believe shouldn't affect the installation process. After that I'm lost: there's a CRS-5018 entry indicating that an unused HAIP route was removed, and then, out of the blue, CRS-5818: Aborted command 'start' for resource 'ora.asm'. Any clarification will be deeply appreciated.
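For what it's worth, CRS-2412 under observer-mode CTSS only reports the skew; it never steps or slews the clocks, so a large offset between the nodes persists. A minimal sketch for eyeballing the offset from nodo2, assuming passwordless SSH to nodo1 is still set up from the install:

```shell
# Measure rough clock skew between this node and nodo1 over SSH.
# If SSH fails, fall back to the local time so the script still completes.
local_epoch=$(date +%s)
remote_epoch=$(ssh -o BatchMode=yes -o ConnectTimeout=5 nodo1 date +%s 2>/dev/null \
  || echo "$local_epoch")
skew=$(( local_epoch - remote_epoch ))
if [ "$skew" -lt 0 ]; then skew=$(( -skew )); fi
echo "approximate skew vs nodo1: ${skew}s"
```

With CTSS in observer mode, Oracle expects the OS time service to do the actual synchronization; on OL6 that usually means running ntpd with the -x (slewing) flag on both nodes.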
Here's the complete log:
2013-04-01 13:39:35.358
[client(12163)]CRS-2101:The OLR was formatted using version 3.
2013-04-01 19:40:19.597
[ohasd(12338)]CRS-2112:The OLR service started on node nodo2.
2013-04-01 19:40:19.657
[ohasd(12338)]CRS-1301:Oracle High Availability Service started on node nodo2.
[client(12526)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
[client(12528)]CRS-10001:01-Apr-13 13:41 ACFS-9201: Not Supported
[client(12603)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
2013-04-01 19:41:17.509
[ohasd(12338)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2013-04-01 19:41:17.618
[gpnpd(12695)]CRS-2328:GPNPD started on node nodo2.
2013-04-01 19:41:21.363
[cssd(12755)]CRS-1713:CSSD daemon is started in exclusive mode
2013-04-01 19:41:23.194
[ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-04-01 19:41:56.144
[cssd(12755)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
2013-04-01 19:41:57.545
[cssd(12755)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
[cssd(12755)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1 and is terminating; details at (:CSSNM00006:) in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log
2013-04-01 19:41:58.549
[ohasd(12338)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'nodo2'.
2013-04-01 19:42:10.025
[gpnpd(12695)]CRS-2329:GPNPD on node nodo2 shutdown.
2013-04-01 19:42:11.407
[mdnsd(12685)]CRS-5602:mDNS service stopping by request.
2013-04-01 19:42:29.642
[gpnpd(12947)]CRS-2328:GPNPD started on node nodo2.
2013-04-01 19:42:33.241
[cssd(13012)]CRS-1713:CSSD daemon is started in clustered mode
2013-04-01 19:42:35.104
[ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-04-01 19:42:44.065
[cssd(13012)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
2013-04-01 19:42:45.484
[cssd(13012)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
2013-04-01 19:42:52.138
[cssd(13012)]CRS-1601:CSSD Reconfiguration complete. Active nodes are nodo1 nodo2 .
2013-04-01 19:42:55.081
[ctssd(13076)]CRS-2403:The Cluster Time Synchronization Service on host nodo2 is in observer mode.
2013-04-01 19:42:55.581
[ctssd(13076)]CRS-2401:The Cluster Time Synchronization Service started on host nodo2.
2013-04-01 19:42:55.581
[ctssd(13076)]CRS-2407:The new Cluster Time Synchronization Service reference node is host nodo1.
2013-04-01 19:43:08.875
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 19:43:08.876
[ctssd(13076)]CRS-2409:The clock on host nodo2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-04-01 19:43:13.565
[u01/app/11.2.0/grid/bin/orarootagent.bin(13064)]CRS-5018:(:CLSN00037:) Removed unused HAIP route: 169.254.0.0 / 255.255.0.0 / 0.0.0.0 / eth0
2013-04-01 19:53:09.800
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5818:Aborted command 'start' for resource 'ora.asm'. Details at (:CRSAGF00113:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
2013-04-01 19:53:11.827
[ohasd(12338)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/ohasd/ohasd.log.
2013-04-01 19:53:12.779
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:53:13.892
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:53:43.877
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:54:13.891
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:54:43.906
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:55:13.914
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:55:43.918
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:56:13.922
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:56:53.209
[crsd(13741)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:07:01.128
[crsd(13741)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:07:01.278
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:07:08.689
[crsd(15248)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:13:10.138
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 20:17:13.024
[crsd(15248)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:17:13.171
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:17:20.826
[crsd(16746)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:27:25.020
[crsd(16746)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:27:25.176
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:27:31.591
[crsd(18266)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:37:35.668
[crsd(18266)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:37:35.808
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:37:43.209
[crsd(19762)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:43:11.160
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 20:47:47.487
[crsd(19762)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:47:47.637
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:47:55.086
[crsd(21242)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:57:59.343
[crsd(21242)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:57:59.492
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:58:06.996
[crsd(22744)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:08:11.046
[crsd(22744)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:08:11.192
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:08:18.726
[crsd(24260)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:13:12.000
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 21:18:22.262
[crsd(24260)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:18:22.411
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:18:29.927
[crsd(25759)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:28:34.467
[crsd(25759)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:28:34.616
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:28:41.990
[crsd(27291)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:38:45.012
[crsd(27291)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:38:45.160
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:38:52.790
[crsd(28784)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:43:12.378
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 21:48:56.285
[crsd(28784)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:48:56.435
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:49:04.421
[crsd(30272)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:59:08.183
[crsd(30272)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:59:08.318
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:59:15.860
[crsd(31772)]CRS-1012:The OCR service started on node nodo2.
Hi santysharma, thanks for the reply. I have two Ethernet interfaces: eth0 (public network, 192.168.1.0) and eth1 (private network, 10.5.3.0), and there is no device using that IP range. Here's the output of the route command:
(Sorry for the alignment; I tried to tab it but the editor trims it.)
Kernel IP routing table
Destination   Gateway        Genmask          Flags  Metric  Ref  Use  Iface
default       192.168.1.1    0.0.0.0          UG     0       0    0    eth0
private       *              255.255.255.0    U      0       0    0    eth1
link-local    *              255.255.0.0      U      1002    0    0    eth0
link-local    *              255.255.0.0      U      1003    0    0    eth1
public        *              255.255.255.0    U      0       0    0    eth0
And the /etc/hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.3.1 nodo1.cluster nodo1
10.5.3.2 nodo2.cluster nodo2
192.168.1.13 cluster-scan
192.168.1.14 nodo1-vip
192.168.1.15 nodo2-vip
And the output of ifconfig -a:
eth0 Link encap:Ethernet HWaddr C8:3A:35:D9:C6:2B
inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::ca3a:35ff:fed9:c62b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:34708 errors:0 dropped:18 overruns:0 frame:0
TX packets:24693 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:48545969 (46.2 MiB) TX bytes:1994381 (1.9 MiB)
eth1 Link encap:Ethernet HWaddr 00:0D:87:D0:A3:8E
inet addr:10.5.3.2 Bcast:10.5.3.255 Mask:255.255.255.0
inet6 addr: fe80::20d:87ff:fed0:a38e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:5344 (5.2 KiB)
Interrupt:23 Base address:0x6000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1320 (1.2 KiB) TX bytes:1320 (1.2 KiB)
Now that I think about it, I've read somewhere that IPv6 is not supported... yet that has no relation to the 169.254.x.x IP range.
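A note that may help here: the 169.254.x.x range in the route output above is not IPv6. It is the IPv4 link-local (zeroconf) range, which is also the range Grid's HAIP uses on the private interconnect. Oracle's Linux prerequisites call for disabling zeroconf so the kernel does not install competing 169.254.0.0/16 routes (they are visible on both eth0 and eth1 above), which lines up with the CRS-5018 "Removed unused HAIP route" entry. A hedged sketch of the setting; verify against the install guide for your release:

```shell
# /etc/sysconfig/network -- disable zeroconf so the kernel stops adding
# 169.254.0.0/16 link-local routes on eth0/eth1; Grid's HAIP manages that
# range itself on the private interconnect.
NETWORKING=yes
NOZEROCONF=yes
```

After changing it on both nodes, restart networking (or reboot) and confirm with route -n that no stray 169.254.0.0 route remains on the public interface.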
Similar Messages
-
Root.sh failed on second node while installing CRS 10g on CentOS 5.5
Hi all,
I was able to install the Oracle 10g RAC clusterware on the first node of the cluster. However, when I run the root.sh script as the root user on the second node, it fails with the following error message:
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
Running cluvfy stage -post hwos -n all -verbose shows:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking shared storage accessibility...
Disk Sharing Nodes (2 in count)
/dev/sda db2 db1
Running cluvfy stage -pre crsinst -n all -verbose shows:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
No checks registered for this product.
Running cluvfy stage -post crsinst -n all -verbose shows:
Result: Node reachability check passed from node "DB2".
Result: User equivalence check passed for user "oracle".
Node Name   CRS daemon   CSS daemon   EVM daemon
db2         no           no           no
db1         yes          yes          yes
Check: Health of CRS
Node Name   CRS OK?
db1         unknown
Result: CRS health check failed.
Checking crsd.log shows:
clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
clsssInitNative: connect failed, rc 9
Any help would be greatly appreciated.
Edited by: 868121 on 2011-6-24, 12:31 AM
Hello, it took a little searching, but I found this note in the Grid installation guide for Linux/UNIX:
Public IP addresses and virtual IP addresses must be in the same subnet.
In your case, you are using two different subnets for the VIPs. -
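To make the rule above concrete: two addresses are "in the same subnet" when their network portions match under the netmask. A toy illustration for /24 masks (the addresses are hypothetical; the authoritative check is cluvfy comp nodecon -n all -verbose from the Grid home):

```shell
# Toy /24 subnet check: strip the last octet and compare the network part.
# Example addresses are hypothetical, in the style of this thread.
same24() { [ "${1%.*}" = "${2%.*}" ] && echo same || echo different; }
same24 192.168.1.10 192.168.1.20   # public IP vs VIP in the same /24
same24 192.168.1.10 10.0.0.20      # VIP on a different subnet: invalid
```

The public node IPs, the VIPs, and the SCAN address must all land in the same public subnet, while the interconnect sits on its own private subnet.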
Root.sh fails on second node
I already posted this issue on database installation forum, and was suggested to post it on this forum.
Here are the details.
I am running 64-bit Linux on ESX guests, installing Oracle 11gR2.
It passed all the prerequisite checks, and root.sh on the first node finished with no errors.
On the second node I got the following:
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-07-13 12:51:28: Parsing the host name
2010-07-13 12:51:28: Checking for super user privileges
2010-07-13 12:51:28: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node fred0224, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'fred0225'
CRS-2676: Start of 'ora.mdnsd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'fred0225'
CRS-2676: Start of 'ora.gipcd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'fred0225'
CRS-2676: Start of 'ora.gpnpd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'fred0225'
CRS-2676: Start of 'ora.cssdmonitor' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'fred0225'
CRS-2672: Attempting to start 'ora.diskmon' on 'fred0225'
CRS-2676: Start of 'ora.diskmon' on 'fred0225' succeeded
CRS-2676: Start of 'ora.cssd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'fred0225'
Start action for octssd aborted
CRS-2676: Start of 'ora.ctssd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'fred0225'
CRS-2672: Attempting to start 'ora.asm' on 'fred0225'
CRS-2676: Start of 'ora.drivers.acfs' on 'fred0225' succeeded
CRS-2676: Start of 'ora.asm' on 'fred0225' succeeded
CRS-2664: Resource 'ora.ctssd' is already running on 'fred0225'
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl start resource ora.asm -init
Start of resource "ora.asm -init" failed
Failed to start ASM
Failed to start Oracle Clusterware stack
In the ocssd.log I found
[ CSSD][3559689984]clssnmvDHBValidateNCopy: node 1, fred0224, has a disk HB, but no network HB, DHB has rcfg 174483948, wrtcnt, 232, LATS 521702664, lastSeqNo 232, uniqueness 1279039649, timestamp 1279039959/521874274
In oraagent_oracle.log I found
[ clsdmc][1212365120]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD)) with status 9
2010-07-13 12:54:07.234: [ora.gpnpd][1212365120] [check] Error = error 9 encountered when connecting to GPNPD
2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Calling PID check for daemon
2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Trying to check PID = 20584
2010-07-13 12:54:07.432: [ COMMCRS][1285794112]clsc_connect: (0x1304d850) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD))
[ clsdmc][1222854976]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD)) with status 9
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Error = error 9 encountered when connecting to MDNSD
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Calling PID check for daemon
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Trying to check PID = 20571
2010-07-13 12:54:08.841: [ COMMCRS][1201875264]clsc_connect: (0x12f3b1d0) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD))
[ clsdmc][1159915840]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD)) with status 9
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Error = error 9 encountered when connecting to GIPCD
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Calling PID check for daemon
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Trying to check PID = 20566
2010-07-13 12:54:10.242: [ COMMCRS][1254324544]clsc_connect: (0x12f35630) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD))
In oracssdagent_root.log I found
2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clssscConnect: gipc request failed with 29 (0x16)
2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clsssInitNative: connect failed, rc 29
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clssnsqlnum: RPC failed rc 3
2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_cssini: failed 3 to fetch node number
2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_init: css init done, nodenum -1.
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssRecvMsg: got a disconnect from the server while waiting for message type 43
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssGetNLSData: Failure receiving a msg, rc 3
If you need more info, let me know.
Well, the error clearly indicates that a communication problem exists on the private interconnect.
Could this be a setting in ESX that prevents some communication between the guests on the second network card? Is some routing in ESX not configured correctly?
Sebastian -
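Building on that diagnosis: the "has a disk HB, but no network HB" line in ocssd.log means the nodes can both see the voting disk but cannot reach each other over the interconnect. A first-pass check is simply whether each private name resolves and answers; the "-priv" hostnames below are assumed for illustration, so substitute the real interconnect names or IPs:

```shell
# Probe a peer over the private interconnect; report a hint for the
# virtualized case when it cannot be reached. Hostnames are assumed.
check_peer() {
  if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then
    echo "$1: reachable"
  else
    echo "$1: UNREACHABLE - check the ESX vSwitch/VLAN behind the second NIC"
  fi
}
for peer in fred0224-priv fred0225-priv; do check_peer "$peer"; done
```

If ping works but CSS still reports no network heartbeat, the next suspects are MTU mismatches and multicast filtering on the virtual switch.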
Root.sh fails on second node during clusterware installation
I am setting up a test instance of OEL 5.4 using VMware.
I am running the clusterware install and it is failing only on node2. See below.
I followed note 414897.1 on metalink for raw device setup.
Any help would be greatly appreciated.
2010-09-01 11:58:21.084: [ default][1275584]a_init:7!: Backend init unsuccessful : [22]
2010-09-01 11:58:21.091: [ OCRRAW][1275584]propriogid:1: INVALID FORMAT
2010-09-01 11:58:21.091: [ OCRRAW][1275584]ibctx:1:ERROR: INVALID FORMAT
2010-09-01 11:58:21.091: [ OCRRAW][1275584]proprinit:problem reading the bootblock or superbloc 22
2010-09-01 11:58:21.097: [ OCRRAW][1275584]propriogid:1: INVALID FORMAT
2010-09-01 11:58:21.139: [ OCRRAW][1275584]propriowv: Vote information on disk 0 [u01/app/oracle/oradata/ocr] is adjusted from [0/0] to [2/2]
2010-09-01 11:58:21.191: [ OCRRAW][1275584]propriniconfig:No 92 configuration
2010-09-01 11:58:21.192: [ OCRAPI][1275584]a_init:6a: Backend init successful
2010-09-01 11:58:21.299: [ OCRCONF][1275584]Initialized DATABASE keys in OCR
2010-09-01 11:58:21.555: [ OCRCONF][1275584]Successfully set skgfr block 0
2010-09-01 11:58:21.557: [ OCRCONF][1275584]Exiting [status=success]...
Oracle 10gR2 RAC installation on Red Hat 5 Linux using VMware.
Important points for installing 10gR2 Oracle RAC on Linux 5:
1. Linux 5 (Red Hat 5) doesn't have an /etc/sysconfig/rawdevices file, so we have to configure the raw devices ourselves.
2. Edit the version in /etc/redhat-release to redhat-4 and invoke the installer with
$ runInstaller -ignoreSysPrereqs // this bypasses the OS check //
3. During the clusterware installation, root.sh on node 2 ends with an error message, so we have to adjust the parameters in the vipca and srvctl files.
4. vipca will fail to run, so we have to adjust some parameters and configure it manually.
Refer to this link; it will be useful for completing your installation.
http://oracleinstance.blogspot.com/2010/03/oracle-10g-installation-in-linux-5.html -
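The "adjust the parameters in the vipca and srvctl files" step above refers to a widely documented 10gR2-on-newer-Linux workaround: both scripts export LD_ASSUME_KERNEL, which glibc releases newer than the installer expects no longer honor, so vipca dies immediately. A hedged sketch of disabling that line (CRS_HOME is an assumed path; sed keeps .bak backups):

```shell
# Insert "unset LD_ASSUME_KERNEL" right after the export line in vipca and
# srvctl so the scripts run against a modern glibc. CRS_HOME is assumed.
CRS_HOME=/u01/app/oracle/product/10.2.0/crs
for f in "$CRS_HOME/bin/vipca" "$CRS_HOME/bin/srvctl"; do
  if [ -f "$f" ]; then
    sed -i.bak '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
  fi
done
```

Note that patch sets can overwrite these scripts, so the edit may need to be reapplied after patching.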
Root.sh failed in one node - CLSMON and UDLM
Hi experts.
My environment is:
2-node Sun Cluster Update 3
Oracle RAC 10.2.0.1, planning to upgrade to 10.2.0.4
The problem: I installed the CRS services on both nodes - OK.
After that, running root.sh fails on one node:
/u01/app/product/10/CRS/root.sh
WARNING: directory '/u01/app/product/10' is not owned by root
WARNING: directory '/u01/app/product' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/product/10' is not owned by root
WARNING: directory '/u01/app/product' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: spodhcsvr10 clusternode1-priv spodhcsvr10
node 1: spodhcsvr12 clusternode2-priv spodhcsvr12
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Sep 22 13:34:17 spodhcsvr10 root: Oracle Cluster Ready Services starting by user request.
Startup will be queued to init within 30 seconds.
Sep 22 13:34:20 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Sep 22 13:34:34 spodhcsvr10 last message repeated 3 times
Sep 22 13:34:34 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:34:40 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:35:43 spodhcsvr10 last message repeated 9 times
Sep 22 13:36:07 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:36:07 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:36:14 spodhcsvr10 su: libsldap: Status: 85 Mesg: openConnection: simple bind failed - Timed out
Sep 22 13:36:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:37:35 spodhcsvr10 last message repeated 11 times
Sep 22 13:37:40 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:37:40 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:37:42 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:38:03 spodhcsvr10 last message repeated 3 times
Sep 22 13:38:10 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:39:12 spodhcsvr10 last message repeated 9 times
Sep 22 13:39:13 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:39:13 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:39:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:40:42 spodhcsvr10 last message repeated 12 times
Sep 22 13:40:46 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:40:46 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:40:49 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:42:05 spodhcsvr10 last message repeated 11 times
Sep 22 13:42:11 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:42:12 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:42:19 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:42:19 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:42:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Sep 22 13:43:49 spodhcsvr10 last message repeated 13 times
Sep 22 13:43:51 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 22 13:43:51 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
Sep 22 13:43:56 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
Failure at final check of Oracle CRS stack.
I traced ocssd.log and found some information:
[ CSSD]2010-09-22 14:04:14.739 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
[ CSSD]2010-09-22 14:04:14.742 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
[ CSSD]2010-09-22 14:04:14.742 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
[ CSSD]2010-09-22 14:04:14.744 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
[ CSSD]2010-09-22 14:04:14.745 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
[ CSSD]2010-09-22 14:04:14.746 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
[ CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:14.785 [10] >TRACE: clssnmFatalThread: spawned
[ CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:14.786 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
[ CSSD]2010-09-22 14:04:23.075 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
[ CSSD]2010-09-22 14:04:23.075 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
[ clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
[ CSSD]2010-09-22 14:04:23.082 [1] >TRACE: clssscmain: local-only set to false
[ CSSD]2010-09-22 14:04:23.096 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
[ CSSD]2010-09-22 14:04:23.106 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
[ CSSD]2010-09-22 14:04:23.129 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
[ CSSD]CLSS-0001: skgxn not active
[ CSSD]2010-09-22 14:04:23.129 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
[ CSSD]2010-09-22 14:04:23.132 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
[ CSSD]2010-09-22 14:04:23.136 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
[ CSSD]2010-09-22 14:04:23.139 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
[ CSSD]2010-09-22 14:04:23.143 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
[ CSSD]2010-09-22 14:04:25.139 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
[ CSSD]2010-09-22 14:04:25.142 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
[ CSSD]2010-09-22 14:04:25.143 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
[ CSSD]2010-09-22 14:04:25.144 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
[ CSSD]2010-09-22 14:04:25.145 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
[ CSSD]2010-09-22 14:04:25.148 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2489) LATS(0) Disk lastSeqNo(2489)
[ CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:25.186 [10] >TRACE: clssnmFatalThread: spawned
[ CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:25.187 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
[ CSSD]2010-09-22 14:04:33.449 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
[ CSSD]2010-09-22 14:04:33.449 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
[ clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
[ CSSD]2010-09-22 14:04:33.457 [1] >TRACE: clssscmain: local-only set to false
[ CSSD]2010-09-22 14:04:33.470 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
[ CSSD]2010-09-22 14:04:33.480 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
[ CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
[ CSSD]CLSS-0001: skgxn not active
[ CSSD]2010-09-22 14:04:33.498 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
[ CSSD]2010-09-22 14:04:33.500 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
[ CSSD]2010-09-22 14:04:33.505 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
[ CSSD]2010-09-22 14:04:33.508 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
[ CSSD]2010-09-22 14:04:33.510 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
[ CSSD]2010-09-22 14:04:35.508 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
[ CSSD]2010-09-22 14:04:35.510 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
[ CSSD]2010-09-22 14:04:35.510 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
[ CSSD]2010-09-22 14:04:35.512 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
[ CSSD]2010-09-22 14:04:35.513 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
[ CSSD]2010-09-22 14:04:35.514 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
[ CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:35.553 [10] >TRACE: clssnmFatalThread: spawned
[ CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
[ CSSD]2010-09-22 14:04:35.553 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
I believe the main error is:
[ CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
[ CSSD]CLSS-0001: skgxn not active
and a communication problem between UDLM and CLSMON, but I don't know how to resolve this.
My UDLM version is 3.3.4.9.
Does anybody have any ideas about this?
Thanks!
Now I finally installed CRS and ran root.sh without errors (I think the problem was some old files left over from earlier installation attempts).
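For anyone hitting the same thing, leftovers from an earlier CRS install attempt can be checked for with something like the sketch below. The helper name is mine, and the listed paths are the usual 10g CRS leftovers on Solaris — an assumption, not an authoritative list, so confirm with Oracle Support before removing anything:

```shell
# Hypothetical helper: report leftovers from a previous 10g CRS install.
# Paths below are typical Solaris/10g locations (assumption -- verify first).
check_stale() {
  for f in "$@"; do
    [ -e "$f" ] && echo "stale: $f"
  done
  return 0
}

check_stale /var/tmp/.oracle /tmp/.oracle \
            /etc/init.d/init.crs /etc/init.d/init.cssd \
            /etc/init.d/init.crsd /etc/init.d/init.evmd
```

Anything it prints existed before root.sh ran and is a candidate for cleanup on both nodes.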
But now I have another problem: when installing the DB software, at the step that copies the installation to the remote node, that node hits a CLSMON/CSSD failure and panics:
Sep 23 16:10:51 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 138. Respawning
Sep 23 16:10:52 spodhcsvr10 root: Oracle CSSD failure. Rebooting for cluster integrity.
Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
Sep 23 16:10:51 spodhcsvr10 root: [ID 702911 user.error] Oracle CLSMON terminated with unexpected status 138. Respawning
Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: fatal: received signal 15
Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: [ID 770355 daemon.error] fatal: received signal 15
Sep 23 16:10:59 spodhcsvr10 root: Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
Sep 23 16:10:59 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.
Notifying cluster that this node is panicking
The installation on the first node continues and reports an error copying to the second node.
Any ideas? Thanks! -
Root.sh fails on 2nd node
AIX 6
Oracle grid infrastructure 11.2.0.3
At the end of the grid install, I ran root.sh on the first node and then on the second node, where it failed. The deconfig was successful, but root.sh failed again:
Successfully deconfigured Oracle clusterware stack on this node
mtnx213:/oracle/app/grid/product/11.2.0/grid/crs/install#/oracle/app/grid/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oragrid
ORACLE_HOME= /oracle/app/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/grid/product/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
User oragrid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
USM driver install actions failed
/oracle/app/grid/product/11.2.0/grid/perl/bin/perl -I/oracle/app/grid/product/11.2.0/grid/perl/lib -I/oracle/app/grid/product/11.2.0/grid/crs/install /oracle/app/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed
My answer can be found here (in your duplicate post): root.sh fails on 2nd node Timed out waiting for the CRS stack to start
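For reference, the usual retry sequence after a failed root.sh is to deconfigure the node and run the script again once the underlying driver issue is addressed. A hedged sketch (paths taken from the transcript above; the DRY_RUN guard just prints the commands so nothing is executed by accident):

```shell
# Sketch only: deconfigure the failed node, then rerun root.sh.
# GRID_HOME is the path from the transcript; DRY_RUN=1 prints instead of runs.
GRID_HOME=/oracle/app/grid/product/11.2.0/grid
DRY_RUN=1

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run "$GRID_HOME/crs/install/rootcrs.pl" -deconfig -force
run "$GRID_HOME/root.sh"
```

Set DRY_RUN=0 only on the actual cluster node, as root, after reviewing what will run.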
-
11gR2 RAC install fail when running root.sh script on second node
I get the errors:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
[main] [ 2012-04-10 16:44:12.564 EDT ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
I have tried the fix suggested in the Metalink note below, but it did not fix the issue:
11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC USING ASMLIB [ID 1059847.1]
Hi,
It looks like the "shared device" you are using is not really shared.
The second node does "create an ASM diskgroup" and creates the OCR and voting disks. If this really were a shared device, it would have recognized that the disk is already in use.
So your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
Which VMware version did you use? It will not work correctly with the Workstation or Player editions, since shared disks only really work with the Server edition.
If you are indeed using Server, could you paste your VM configuration?
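A quick way to confirm whether the disk is genuinely shared is to write a marker from one node and read it back from the other. A minimal sketch — the device path is a placeholder, and this demo deliberately uses a scratch file instead of a real device so it is safe to try anywhere:

```shell
# DEV is a stand-in for the shared LUN (e.g. /dev/sdb on both VMs).
# A scratch file is used here so the demo is harmless.
DEV=/tmp/shared_disk_demo.img
dd if=/dev/zero of="$DEV" bs=1k count=8 2>/dev/null

# On node 1: write a marker into the first block.
printf 'marker-from-node1' | dd of="$DEV" conv=notrunc 2>/dev/null

# On node 2: read the first bytes back. On a truly shared disk the
# marker written by node 1 is visible here.
head -c 17 "$DEV"
```

If the marker does not appear on the second node, the two VMs have independent disks, and the behavior above (each node creating its own cluster) is exactly what you would expect.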
Furthermore, I recommend using VirtualBox. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian -
Ora.asm -init failed on second node root.sh
Hi All,
Installing Grid Infrastructure for an 11gR2 cluster on two nodes: Oracle Linux 5 + VMware vSphere v4, with the shared disk on the same host machine. When running root.sh, the first node succeeded but the second node got the following error message (the first node was actually cloned from the second):
CRS-2672: Attempting to start 'ora.ctssd' on 'wandrac2'
Start action for octssd aborted
CRS-2676: Start of 'ora.ctssd' on 'wandrac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'wandrac2'
CRS-2672: Attempting to start 'ora.asm' on 'wandrac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'wandrac2' succeeded
CRS-2676: Start of 'ora.asm' on 'wandrac2' succeeded
CRS-2664: Resource 'ora.ctssd' is already running on 'wandrac2'
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /orapp/racsl/11.2.0/bin/crsctl start resource ora.asm -init
Start of resource "ora.asm -init" failed
Failed to start ASM
Failed to start Oracle Clusterware stack
Thanks in advance for any information and help.
Hi,
I came across this error and I am about to start a fresh installation of the grid (the earlier one failed because it was unable to read the memory on rac2).
Is there anything specific I should change before I start my installation?
PS: I didn't get what exactly is going on with the hosts file.
My files are as follows :
RAC1 - /etc/hosts
[oracle@falcen6a ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# Public
192.168.100.218 falcen6a.a.pri falcen6a
192.168.100.219 falcen6b.a.pri falcen6b
# Private
192.168.210.101 falcen6a-priv.a.pri falcen6a-priv
192.168.210.102 falcen6b-priv.a.pri falcen6b-priv
# Virtual
192.168.100.212 falcen6a-vip.a.pri falcen6a-vip
192.168.100.213 falcen6b-vip.a.pri falcen6b-vip
# SCAN
#192.168.100.208 falcen6-scan.a.pri falcen6-scan
#192.168.100.209 falcen6-scan.a.pri falcen6-scan
#192.168.100.210 falcen6-scan.a.pri falcen6-scan
on RAC2
[oracle@falcen6b ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#Public
192.168.100.218 falcen6a.a.pri falcen6a
192.168.100.219 falcen6b.a.pri falcen6b
# Private
192.168.210.101 falcen6a-priv.a.pri falcen6a-priv
192.168.210.102 falcen6b-priv.a.pri falcen6b-priv
# Virtual
192.168.100.212 falcen6a-vip.a.pri falcen6a-vip
192.168.100.213 falcen6b-vip.a.pri falcen6b-vip
# SCAN
#192.168.100.208 falcen6-scan.a.pri falcen6-scan
#192.168.100.209 falcen6-scan.a.pri falcen6-scan
#192.168.100.210 falcen6-scan.a.pri falcen6-scan
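On the hosts-file question: all three SCAN entries are commented out in both files, so falcen6-scan.a.pri will not resolve at all. Oracle expects the SCAN name to resolve to up to three addresses, which /etc/hosts cannot provide (a single entry only); for a test setup, uncommenting one entry is the usual compromise. A hedged resolution check (the helper name is mine):

```shell
# Returns success if the given name resolves via the system resolver
# (nsswitch order: /etc/hosts first, then DNS).
scan_resolves() {
  getent hosts "$1" >/dev/null 2>&1
}

if scan_resolves falcen6-scan.a.pri; then
  echo "SCAN name resolves"
else
  echo "SCAN name does NOT resolve -- uncomment a hosts entry or add DNS records"
fi
```

Run it on both nodes; they must agree before the installer is started.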
Can someone please confirm this? -
11gR2 clusterware installation problem on root.sh script on second node
Hi all,
I want to install 11gR2 RAC on Oracle Linux 5.5 (x86_64) using VMware Server, but on the second node I get two "failed" lines at the end of the root.sh script.
After that I tried to install the DB, but I can see only one node. What is the problem?
I will post the output; I need your help.
Thank you all for helping.
Hosts file (we have no ping problem):
[root@rac2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
# SCAN
192.168.2.201 rac-scan.localdomain rac-scan
[root@rac2 ~]#
FIRST NODE root.sh script output...
[root@rac2 ~]# /u01/app/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:45:06: Parsing the host name
2010-12-06 14:45:06: Checking for super user privileges
2010-12-06 14:45:06: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 587cc69413ce4fd3bf0c2c2548fb9017.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 587cc69413ce4fd3bf0c2c2548fb9017 (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac2'
CRS-2676: Start of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac2'
CRS-2676: Start of 'ora.registry.acfs' on 'rac2' succeeded
rac2 2010/12/06 14:52:06 /u01/app/11.2.0/db_1/cdata/rac2/backup_20101206_145206.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac2 ~]#
SECOND NODE root.sh script output
[root@rac1 db_1]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:54:11: Parsing the host name
2010-12-06 14:54:11: Checking for super user privileges
2010-12-06 14:54:11: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
Successful addition of voting disk 2761ce8d47b44fbabf73462151e3ba1d.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 2761ce8d47b44fbabf73462151e3ba1d (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
PRCR-1079 : *Failed* to start resource ora.scan1.vip
CRS-5005: IP Address: 192.168.2.201 is already in use in the network
CRS-2674: Start of 'ora.scan1.vip' on 'rac1' *failed*
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
start scan ... *failed*
Configure Oracle Grid Infrastructure for a Cluster ... *failed*
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 db_1]#
The "./runcluvfy.sh stage -pre crsinst -n rac1,rac2" output is the same on each node:
[oracle@rac2 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac2"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.122.0" with node(s) rac2,rac1
TCP connectivity check failed for subnet "192.168.122.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.0.0"
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
rac2 eth0:192.168.2.102 eth0:192.168.2.112 eth0:192.168.2.201
rac1 eth0:192.168.2.101 eth0:192.168.2.111
Interfaces found on subnet "192.168.122.0" that are likely candidates for a private interconnect are:
rac2 virbr0:192.168.122.1
rac1 virbr0:192.168.122.1
Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.0.102
rac1 eth1:192.168.0.101
Node connectivity check passed
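Worth noting in the report above: virbr0 is libvirt's default NAT bridge and shows the identical address 192.168.122.1 on both nodes (which is also why that subnet's TCP check failed); it must not be picked as the interconnect. The usual remedy is to disable libvirt's default network on both nodes (e.g. virsh net-destroy default). A small illustrative helper, on sample data mirroring the cluvfy output, that flags addresses appearing on more than one node in "node iface:ip" lines:

```shell
# Print any IP that occurs more than once across nodes; a duplicate like
# virbr0's 192.168.122.1 is unusable as an interconnect candidate.
dup_ips() {
  awk -F: '{print $2}' | sort | uniq -d
}

# Sample mirrors the cluvfy interface listing above:
printf '%s\n' \
  'rac2 virbr0:192.168.122.1' \
  'rac1 virbr0:192.168.122.1' \
  'rac2 eth1:192.168.0.102' \
  'rac1 eth1:192.168.0.101' | dup_ips
```

Only the duplicated virbr0 address is printed; the distinct eth1 addresses pass through silently.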
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
[oracle@rac2 grid]$
I'm confused :)
Edited by: Eren GULERYUZ on 06.Dec.2010 05:57
Hi,
It looks like the "shared device" you are using is not really shared.
The second node does "create an ASM diskgroup" and creates the OCR and voting disks. If this really were a shared device, it would have recognized that the disk is already in use.
So your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
Which VMware version did you use? It will not work correctly with the Workstation or Player editions, since shared disks only really work with the Server edition.
If you are indeed using Server, could you paste your VM configuration?
Furthermore, I recommend using VirtualBox. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian -
Hi All,
I'm trying to set up an 11gR2 Grid installation on a two-node RAC. When it comes to running root.sh on the second node (i.e. rac2), it fails with the error below. Could anyone please help me out? This is my 3rd attempt, and all attempts fail with the errors below on node 2.
rac2:
[root@rac2 grid_home]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/grid_home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-07-10 18:53:15: Parsing the host name
2013-07-10 18:53:15: Checking for super user privileges
2013-07-10 18:53:15: User has super user privileges
Using configuration parameter file: /u01/grid_home/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
DiskGroup CRS creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "CRS" cannot be mounted
ORA-15003: diskgroup "CRS" already mounted in another lock name space
Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/grid_home/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
Initial cluster configuration failed. See /u01/grid_home/cfgtoollogs/crsconfig/rootcrs_rac2.log for details
[root@rac2 grid_home]#
rac2 alertrac2.log
[root@rac2 rac2]# cat -n alertrac2.log
1 Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2 2013-07-10 18:53:16.145
3 [client(13088)]CRS-2106:The OLR location /u01/grid_home/cdata/rac2.olr is inaccessible. Details in /u01/grid_home/log/rac2/client/ocrconfig_13088.log.
4 2013-07-10 18:53:16.228
5 [client(13088)]CRS-2101:The OLR was formatted using version 3.
6 2013-07-10 18:53:31.734
7 [ohasd(13132)]CRS-2112:The OLR service started on node rac2.
8 2013-07-10 18:53:31.893
9 [ohasd(13132)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
10 2013-07-10 18:53:53.762
11 [ohasd(13132)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
12 2013-07-10 18:53:55.381
13 [cssd(14409)]CRS-1713:CSSD daemon is started in exclusive mode
14 2013-07-10 18:54:01.530
15 [cssd(14409)]CRS-1709:Lease acquisition failed for node rac2 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/grid_home/log/rac2/cssd/ocssd.log
16 2013-07-10 18:54:19.113
17 [cssd(14409)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
18 2013-07-10 18:54:19.910
19 [ctssd(14465)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
20 2013-07-10 18:54:19.920
21 [ctssd(14465)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
22 2013-07-10 18:54:20.903
23 [ctssd(14465)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
24 [client(14715)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
25 [client(14719)]CRS-10001:ACFS-9322: done.
26 2013-07-10 18:54:47.104
27 [ctssd(14465)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
28 2013-07-10 18:54:55.837
29 [cssd(14409)]CRS-1603:CSSD on node rac2 shutdown by user.
rac2 rootcrs logfile
[root@rac2 rac2]# cat /u01/grid_home/cfgtoollogs/crsconfig/rootcrs_rac2.log
2013-07-10 18:53:15: The configuration parameter file /u01/grid_home/crs/install/crsconfig_params is valid
2013-07-10 18:53:15: Checking for super user privileges
2013-07-10 18:53:15: User has super user privileges
2013-07-10 18:53:15: ### Printing the configuration values from files:
2013-07-10 18:53:15: /u01/grid_home/crs/install/crsconfig_params
2013-07-10 18:53:15: /u01/grid_home/crs/install/s_crsconfig_defs
2013-07-10 18:53:15: ASM_DISCOVERY_STRING=
2013-07-10 18:53:15: ASM_DISKS=ORCL:CRS1
2013-07-10 18:53:15: ASM_DISK_GROUP=CRS
2013-07-10 18:53:15: ASM_REDUNDANCY=EXTERNAL
2013-07-10 18:53:15: ASM_SPFILE=
2013-07-10 18:53:15: ASM_UPGRADE=false
2013-07-10 18:53:15: CLSCFG_MISSCOUNT=
2013-07-10 18:53:15: CLUSTER_GUID=
2013-07-10 18:53:15: CLUSTER_NAME=rac-scan
2013-07-10 18:53:15: CRS_NODEVIPS='rac1-vip/255.255.255.0/eth0,rac2-vip/255.255.255.0/eth0'
2013-07-10 18:53:15: CRS_STORAGE_OPTION=1
2013-07-10 18:53:15: CSS_LEASEDURATION=400
2013-07-10 18:53:15: DIRPREFIX=
2013-07-10 18:53:15: DISABLE_OPROCD=0
2013-07-10 18:53:15: EMBASEJAR_NAME=oemlt.jar
2013-07-10 18:53:15: EWTJAR_NAME=ewt3.jar
2013-07-10 18:53:15: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
2013-07-10 18:53:15: GNS_ADDR_LIST=
2013-07-10 18:53:15: GNS_ALLOW_NET_LIST=
2013-07-10 18:53:15: GNS_CONF=false
2013-07-10 18:53:15: GNS_DENY_ITF_LIST=
2013-07-10 18:53:15: GNS_DENY_NET_LIST=
2013-07-10 18:53:15: GNS_DOMAIN_LIST=
2013-07-10 18:53:15: GPNPCONFIGDIR=/u01/grid_home
2013-07-10 18:53:15: GPNPGCONFIGDIR=/u01/grid_home
2013-07-10 18:53:15: GPNP_PA=
2013-07-10 18:53:15: HELPJAR_NAME=help4.jar
2013-07-10 18:53:15: HOST_NAME_LIST=rac1,rac2
2013-07-10 18:53:15: ID=/etc/init.d
2013-07-10 18:53:15: INIT=/sbin/init
2013-07-10 18:53:15: IT=/etc/inittab
2013-07-10 18:53:15: JEWTJAR_NAME=jewt4.jar
2013-07-10 18:53:15: JLIBDIR=/u01/grid_home/jlib
2013-07-10 18:53:15: JREDIR=/u01/grid_home/jdk/jre/
2013-07-10 18:53:15: LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
2013-07-10 18:53:15: MSGFILE=/var/adm/messages
2013-07-10 18:53:15: NETCFGJAR_NAME=netcfg.jar
2013-07-10 18:53:15: NETWORKS="eth0"/192.168.0.0:public,"eth1"/192.168.1.0:cluster_interconnect
2013-07-10 18:53:15: NEW_HOST_NAME_LIST=
2013-07-10 18:53:15: NEW_NODEVIPS='rac1-vip/255.255.255.0/eth0,rac2-vip/255.255.255.0/eth0'
2013-07-10 18:53:15: NEW_NODE_NAME_LIST=
2013-07-10 18:53:15: NEW_PRIVATE_NAME_LIST=
2013-07-10 18:53:15: NODELIST=rac1,rac2
2013-07-10 18:53:15: NODE_NAME_LIST=rac1,rac2
2013-07-10 18:53:15: OCFS_CONFIG=
2013-07-10 18:53:15: OCRCONFIG=/etc/oracle/ocr.loc
2013-07-10 18:53:15: OCRCONFIGDIR=/etc/oracle
2013-07-10 18:53:15: OCRID=
2013-07-10 18:53:15: OCRLOC=ocr.loc
2013-07-10 18:53:15: OCR_LOCATIONS=NO_VAL
2013-07-10 18:53:15: OLASTGASPDIR=/etc/oracle/lastgasp
2013-07-10 18:53:15: OLRCONFIG=/etc/oracle/olr.loc
2013-07-10 18:53:15: OLRCONFIGDIR=/etc/oracle
2013-07-10 18:53:15: OLRLOC=olr.loc
2013-07-10 18:53:15: OPROCDCHECKDIR=/etc/oracle/oprocd/check
2013-07-10 18:53:15: OPROCDDIR=/etc/oracle/oprocd
2013-07-10 18:53:15: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
2013-07-10 18:53:15: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
2013-07-10 18:53:15: ORACLE_BASE=/u01/11.2.0
2013-07-10 18:53:15: ORACLE_HOME=/u01/grid_home
2013-07-10 18:53:15: ORACLE_OWNER=grid
2013-07-10 18:53:15: ORA_ASM_GROUP=asmadmin
2013-07-10 18:53:15: ORA_DBA_GROUP=oinstall
2013-07-10 18:53:15: PRIVATE_NAME_LIST=
2013-07-10 18:53:15: RCALLDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc3.d /etc/rc.d/rc4.d /etc/rc.d/rc5.d /etc/rc.d/rc6.d
2013-07-10 18:53:15: RCKDIR=/etc/rc.d/rc0.d /etc/rc.d/rc1.d /etc/rc.d/rc2.d /etc/rc.d/rc4.d /etc/rc.d/rc6.d
2013-07-10 18:53:15: RCSDIR=/etc/rc.d/rc3.d /etc/rc.d/rc5.d
2013-07-10 18:53:15: RC_KILL=K19
2013-07-10 18:53:15: RC_KILL_OLD=K96
2013-07-10 18:53:15: RC_START=S96
2013-07-10 18:53:15: SCAN_NAME=rac-scan.naveed.com
2013-07-10 18:53:15: SCAN_PORT=1521
2013-07-10 18:53:15: SCRBASE=/etc/oracle/scls_scr
2013-07-10 18:53:15: SHAREJAR_NAME=share.jar
2013-07-10 18:53:15: SILENT=false
2013-07-10 18:53:15: SO_EXT=so
2013-07-10 18:53:15: SRVCFGLOC=srvConfig.loc
2013-07-10 18:53:15: SRVCONFIG=/var/opt/oracle/srvConfig.loc
2013-07-10 18:53:15: SRVCONFIGDIR=/var/opt/oracle
2013-07-10 18:53:15: VNDR_CLUSTER=false
2013-07-10 18:53:15: VOTING_DISKS=NO_VAL
2013-07-10 18:53:15: ### Printing other configuration values ###
2013-07-10 18:53:15: CLSCFG_EXTRA_PARMS=
2013-07-10 18:53:15: CRSDelete=0
2013-07-10 18:53:15: CRSPatch=0
2013-07-10 18:53:15: DEBUG=
2013-07-10 18:53:15: DOWNGRADE=
2013-07-10 18:53:15: HAS_GROUP=oinstall
2013-07-10 18:53:15: HAS_USER=root
2013-07-10 18:53:15: HOST=rac2
2013-07-10 18:53:15: IS_SIHA=0
2013-07-10 18:53:15: OLR_DIRECTORY=/u01/grid_home/cdata
2013-07-10 18:53:15: OLR_LOCATION=/u01/grid_home/cdata/rac2.olr
2013-07-10 18:53:15: ORA_CRS_HOME=/u01/grid_home
2013-07-10 18:53:15: SUPERUSER=root
2013-07-10 18:53:15: UPGRADE=
2013-07-10 18:53:15: VF_DISCOVERY_STRING=
2013-07-10 18:53:15: addfile=/u01/grid_home/crs/install/crsconfig_addparams
2013-07-10 18:53:15: crscfg_trace=1
2013-07-10 18:53:15: crscfg_trace_file=/u01/grid_home/cfgtoollogs/crsconfig/rootcrs_rac2.log
2013-07-10 18:53:15: hosts=
2013-07-10 18:53:15: oldcrshome=
2013-07-10 18:53:15: oldcrsver=
2013-07-10 18:53:15: osdfile=/u01/grid_home/crs/install/s_crsconfig_defs
2013-07-10 18:53:15: parameters_valid=1
2013-07-10 18:53:15: paramfile=/u01/grid_home/crs/install/crsconfig_params
2013-07-10 18:53:15: platform_family=unix
2013-07-10 18:53:15: srvctl_trc_suff=0
2013-07-10 18:53:15: unlock_crshome=
2013-07-10 18:53:15: user_is_superuser=1
2013-07-10 18:53:15: ### Printing of configuration values complete ###
2013-07-10 18:53:15: Oracle CRS stack is not configured yet
2013-07-10 18:53:15: CRS is not yet configured. Hence, will proceed to configure CRS
2013-07-10 18:53:15: Cluster-wide one-time actions... Done!
2013-07-10 18:53:15: Oracle CRS home = /u01/grid_home
2013-07-10 18:53:15: Host name = rac2
2013-07-10 18:53:15: CRS user = grid
2013-07-10 18:53:15: Oracle CRS home = /u01/grid_home
2013-07-10 18:53:15: GPnP host = rac2
2013-07-10 18:53:15: Oracle GPnP home = /u01/grid_home/gpnp
2013-07-10 18:53:15: Oracle GPnP local home = /u01/grid_home/gpnp/rac2
2013-07-10 18:53:15: GPnP directories verified.
2013-07-10 18:53:15: Checking to see if Oracle CRS stack is already configured
2013-07-10 18:53:15: Oracle CRS stack is not configured yet
2013-07-10 18:53:15: ---Checking local gpnp setup...
2013-07-10 18:53:15: The setup file "/u01/grid_home/gpnp/rac2/profiles/peer/profile.xml" does not exist
2013-07-10 18:53:15: The setup file "/u01/grid_home/gpnp/rac2/wallets/peer/cwallet.sso" does not exist
2013-07-10 18:53:15: The setup file "/u01/grid_home/gpnp/rac2/wallets/prdr/cwallet.sso" does not exist
2013-07-10 18:53:15: chk gpnphome /u01/grid_home/gpnp/rac2: profile_ok 0 wallet_ok 0 r/o_wallet_ok 0
2013-07-10 18:53:15: chk gpnphome /u01/grid_home/gpnp/rac2: INVALID (bad profile/wallet)
2013-07-10 18:53:15: ---Checking cluster-wide gpnp setup...
2013-07-10 18:53:15: chk gpnphome /u01/grid_home/gpnp: profile_ok 1 wallet_ok 1 r/o_wallet_ok 1
2013-07-10 18:53:15: gpnptool: run /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/peer" -wu=peer
2013-07-10 18:53:15: Running as user grid: /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/peer" -wu=peer
2013-07-10 18:53:15: s_run_as_user2: Running /bin/su grid -c ' /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/peer" -wu=peer '
2013-07-10 18:53:15: Removing file /tmp/file0qKE0c
2013-07-10 18:53:15: Successfully removed file: /tmp/file0qKE0c
2013-07-10 18:53:15: /bin/su successfully executed
2013-07-10 18:53:15: gpnptool: rc=0
2013-07-10 18:53:15: gpnptool output:
Profile signature is valid.
2013-07-10 18:53:15: Profile "/u01/grid_home/gpnp/profiles/peer/profile.xml" signature is VALID for wallet "file:/u01/grid_home/gpnp/wallets/peer"
2013-07-10 18:53:15: gpnptool: run /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/prdr" -wu=peer
2013-07-10 18:53:15: Running as user grid: /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/prdr" -wu=peer
2013-07-10 18:53:15: s_run_as_user2: Running /bin/su grid -c ' /u01/grid_home/bin/gpnptool verify -p="/u01/grid_home/gpnp/profiles/peer/profile.xml" -w="file:/u01/grid_home/gpnp/wallets/prdr" -wu=peer '
2013-07-10 18:53:16: Removing file /tmp/filebkOtBv
2013-07-10 18:53:16: Successfully removed file: /tmp/filebkOtBv
2013-07-10 18:53:16: /bin/su successfully executed
2013-07-10 18:53:16: gpnptool: rc=0
2013-07-10 18:53:16: gpnptool output:
Profile signature is valid.
2013-07-10 18:53:16: Profile "/u01/grid_home/gpnp/profiles/peer/profile.xml" signature is VALID for wallet "file:/u01/grid_home/gpnp/wallets/prdr"
2013-07-10 18:53:16: chk gpnphome /u01/grid_home/gpnp: OK
2013-07-10 18:53:16: GPnP Wallets ownership/permissions successfully set.
2013-07-10 18:53:16: gpnp setup checked: local valid? 0 cluster-wide valid? 1
2013-07-10 18:53:16: Taking cluster-wide setup as local
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/profiles/peer/profile.xml" => "/u01/grid_home/gpnp/rac2/profiles/peer/profile.xml"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/profiles/peer/profile.xml" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/peer/cwallet.sso" => "/u01/grid_home/gpnp/rac2/wallets/peer/cwallet.sso"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/peer/cwallet.sso" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/prdr/cwallet.sso" => "/u01/grid_home/gpnp/rac2/wallets/prdr/cwallet.sso"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/prdr/cwallet.sso" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/profiles/peer/profile_orig.xml" => "/u01/grid_home/gpnp/rac2/profiles/peer/profile_orig.xml"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/profiles/peer/profile_orig.xml" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/root/ewallet.p12" => "/u01/grid_home/gpnp/rac2/wallets/root/ewallet.p12"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/root/ewallet.p12" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/pa/cwallet.sso" => "/u01/grid_home/gpnp/rac2/wallets/pa/cwallet.sso"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/pa/cwallet.sso" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/root/b64certificate.txt" => "/u01/grid_home/gpnp/rac2/wallets/root/b64certificate.txt"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/root/b64certificate.txt" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/peer/cert.txt" => "/u01/grid_home/gpnp/rac2/wallets/peer/cert.txt"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/peer/cert.txt" => (grid,oinstall)
2013-07-10 18:53:16: copy "/u01/grid_home/gpnp/wallets/pa/cert.txt" => "/u01/grid_home/gpnp/rac2/wallets/pa/cert.txt"
2013-07-10 18:53:16: set ownership on "/u01/grid_home/gpnp/rac2/wallets/pa/cert.txt" => (grid,oinstall)
2013-07-10 18:53:16: GPnP Wallets ownership/permissions successfully set.
2013-07-10 18:53:16: gpnp setup: GOTCLUSTERWIDE
2013-07-10 18:53:16: Validating for SI-CSS configuration
2013-07-10 18:53:16: Retrieving OCR main disk location
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrconfig_loc
2013-07-10 18:53:16: Unable to retrieve ocr disk info
2013-07-10 18:53:16: Checking to see if any 9i GSD is up
2013-07-10 18:53:16: libskgxnBase_lib = /etc/ORCLcluster/oracm/lib/libskgxn2.so
2013-07-10 18:53:16: libskgxn_lib = /opt/ORCLcluster/lib/libskgxn2.so
2013-07-10 18:53:16: SKGXN library file does not exists
2013-07-10 18:53:16: OLR location = /u01/grid_home/cdata/rac2.olr
2013-07-10 18:53:16: Oracle CRS Home = /u01/grid_home
2013-07-10 18:53:16: Validating /etc/oracle/olr.loc file for OLR location /u01/grid_home/cdata/rac2.olr
2013-07-10 18:53:16: /etc/oracle/olr.loc already exists. Backing up /etc/oracle/olr.loc to /etc/oracle/olr.loc.orig
2013-07-10 18:53:16: Oracle CRS home = /u01/grid_home
2013-07-10 18:53:16: Oracle cluster name = rac-scan
2013-07-10 18:53:16: OCR locations = +CRS
2013-07-10 18:53:16: Validating OCR
2013-07-10 18:53:16: Retrieving OCR location used by previous installations
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrconfig_loc
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrmirrorconfig_loc
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrconfig_loc3
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrconfig_loc4
2013-07-10 18:53:16: Opening file OCRCONFIG
2013-07-10 18:53:16: Value () is set for key=ocrconfig_loc5
2013-07-10 18:53:16: Checking if OCR sync file exists
2013-07-10 18:53:16: No need to sync OCR file
2013-07-10 18:53:16: OCR_LOCATION=+CRS
2013-07-10 18:53:16: OCR_MIRROR_LOCATION=
2013-07-10 18:53:16: OCR_MIRROR_LOC3=
2013-07-10 18:53:16: OCR_MIRROR_LOC4=
2013-07-10 18:53:16: OCR_MIRROR_LOC5=
2013-07-10 18:53:16: Current OCR location=
2013-07-10 18:53:16: Current OCR mirror location=
2013-07-10 18:53:16: Current OCR mirror loc3=
2013-07-10 18:53:16: Current OCR mirror loc4=
2013-07-10 18:53:16: Current OCR mirror loc5=
2013-07-10 18:53:16: Verifying current OCR settings with user entered values
2013-07-10 18:53:16: Setting OCR locations in /etc/oracle/ocr.loc
2013-07-10 18:53:16: Validating OCR locations in /etc/oracle/ocr.loc
2013-07-10 18:53:16: Checking for existence of /etc/oracle/ocr.loc
2013-07-10 18:53:16: Backing up /etc/oracle/ocr.loc to /etc/oracle/ocr.loc.orig
2013-07-10 18:53:16: Setting ocr location +CRS
2013-07-10 18:53:16: Creating or upgrading Oracle Local Registry (OLR)
2013-07-10 18:53:16: OLR successfully created or upgraded
2013-07-10 18:53:16: /u01/grid_home/bin/clscfg -localadd
2013-07-10 18:53:16: Keys created in the OLR successfully
2013-07-10 18:53:16: GPnP setup state: new-cluster-wide
2013-07-10 18:53:16: GPnP cluster configuration already performed
2013-07-10 18:53:16: Registering ohasd
2013-07-10 18:53:16: init file = /u01/grid_home/crs/init/init.ohasd
2013-07-10 18:53:16: Copying file /u01/grid_home/crs/init/init.ohasd to /etc/init.d directory
2013-07-10 18:53:16: Setting init.ohasd permission in /etc/init.d directory
2013-07-10 18:53:16: init file = /u01/grid_home/crs/init/ohasd
2013-07-10 18:53:16: Copying file /u01/grid_home/crs/init/ohasd to /etc/init.d directory
2013-07-10 18:53:16: Setting ohasd permission in /etc/init.d directory
2013-07-10 18:53:16: Removing "/etc/rc.d/rc3.d/S96ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc3.d/S96ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc3.d/S96ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc3.d/S96ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc5.d/S96ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc5.d/S96ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc5.d/S96ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc5.d/S96ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc0.d/K19ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc0.d/K19ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc0.d/K19ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc0.d/K19ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc1.d/K19ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc1.d/K19ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc1.d/K19ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc1.d/K19ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc2.d/K19ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc2.d/K19ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc2.d/K19ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc2.d/K19ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc4.d/K19ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc4.d/K19ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc4.d/K19ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc4.d/K19ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: Removing "/etc/rc.d/rc6.d/K19ohasd"
2013-07-10 18:53:16: Removing file /etc/rc.d/rc6.d/K19ohasd
2013-07-10 18:53:16: Failure with return code 1 from command rm /etc/rc.d/rc6.d/K19ohasd
2013-07-10 18:53:16: Failed to remove file:
2013-07-10 18:53:16: Creating a link "/etc/rc.d/rc6.d/K19ohasd" pointing to /etc/init.d/ohasd
2013-07-10 18:53:16: The file ohasd has been successfully linked to the RC directories
2013-07-10 18:53:16: Starting ohasd
2013-07-10 18:53:16: itab entries=
2013-07-10 18:53:21: Created backup /etc/inittab.no_crs
2013-07-10 18:53:21: Appending to /etc/inittab.tmp:
2013-07-10 18:53:21: h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
2013-07-10 18:53:21: Done updating /etc/inittab.tmp
2013-07-10 18:53:21: Saved /etc/inittab.crs
2013-07-10 18:53:21: Installed new /etc/inittab
2013-07-10 18:53:36: ohasd is starting
2013-07-10 18:53:36: Checking ohasd
2013-07-10 18:53:37: ohasd started successfully
2013-07-10 18:53:37: Creating CRS resources and dependencies
2013-07-10 18:53:37: Configuring HASD
2013-07-10 18:53:37: Registering type ora.daemon.type
2013-07-10 18:53:37: Registering type ora.mdns.type
2013-07-10 18:53:37: Registering type ora.gpnp.type
2013-07-10 18:53:38: Registering type ora.gipc.type
2013-07-10 18:53:38: Registering type ora.cssd.type
2013-07-10 18:53:38: Registering type ora.cssdmonitor.type
2013-07-10 18:53:39: Registering type ora.crs.type
2013-07-10 18:53:39: Registering type ora.evm.type
2013-07-10 18:53:39: Registering type ora.ctss.type
2013-07-10 18:53:40: Registering type ora.asm.type
2013-07-10 18:53:40: Registering type ora.drivers.acfs.type
2013-07-10 18:53:40: Registering type ora.diskmon.type
2013-07-10 18:53:51: ADVM/ACFS is configured
2013-07-10 18:53:51: Successfully created CRS resources for cluster daemon and ASM
2013-07-10 18:53:51: Checking if initial configuration has been performed
2013-07-10 18:53:51: Starting CSS in exclusive mode
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
2013-07-10 18:54:19: CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
2013-07-10 18:54:19: CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
2013-07-10 18:54:19: CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
2013-07-10 18:54:19: CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
2013-07-10 18:54:19: CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
2013-07-10 18:54:19: CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
2013-07-10 18:54:19: CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
2013-07-10 18:54:19: Querying for existing CSS voting disks
2013-07-10 18:54:19: Performing initial configuration for cluster
2013-07-10 18:54:21: Start of resource "ora.ctssd -init" Succeeded
2013-07-10 18:54:21: Configuring ASM via ASMCA
2013-07-10 18:54:21: Executing as grid: /u01/grid_home/bin/asmca -silent -diskGroupName CRS -diskList ORCL:CRS1 -redundancy EXTERNAL -configureLocalASM
2013-07-10 18:54:21: Running as user grid: /u01/grid_home/bin/asmca -silent -diskGroupName CRS -diskList ORCL:CRS1 -redundancy EXTERNAL -configureLocalASM
2013-07-10 18:54:21: Invoking "/u01/grid_home/bin/asmca -silent -diskGroupName CRS -diskList ORCL:CRS1 -redundancy EXTERNAL -configureLocalASM" as user "grid"
2013-07-10 18:54:40: Configuration of ASM failed, see logs for details
2013-07-10 18:54:40: Did not succssfully configure and start ASM
2013-07-10 18:54:40: Exiting exclusive mode
2013-07-10 18:54:40: Command return code of 1 (256) from command: /u01/grid_home/bin/crsctl stop resource ora.crsd -init
2013-07-10 18:54:40: Stop of resource "ora.crsd -init" failed
2013-07-10 18:54:40: Failed to stop CRSD
2013-07-10 18:55:04: Initial cluster configuration failed. See /u01/grid_home/cfgtoollogs/crsconfig/rootcrs_rac2.log for details
Also, below are some of the configs related to the rac2 node:
[root@rac2 rac2]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.8-1.el5
oracleasm-2.6.18-274.el5xen-2.0.5-1.el5
oracleasm-2.6.18-274.el5-2.0.5-1.el5
oracleasm-2.6.18-274.el5debug-2.0.5-1.el5
oracleasm-2.6.18-274.el5-debuginfo-2.0.5-1.el5
[root@rac2 rac2]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
[root@rac2 rac2]# /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac2 rac2]# /usr/sbin/oracleasm listdisks
CRS1
DATA1
FRA1
[root@rac2 rac2]# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 17 Jul 10 18:35 CRS1
brw-rw---- 1 grid asmadmin 8, 33 Jul 10 18:36 DATA1
brw-rw---- 1 grid asmadmin 8, 49 Jul 10 18:36 FRA1
[root@rac2 rac2]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#Public IP's(eth0)
192.168.0.101 rac1.naveed.com rac1
192.168.0.102 rac2.naveed.com rac2
#Private IP's(eth1)
192.168.1.101 rac1-prv.naveed.com rac1-prv
192.168.1.102 rac2-prv.naveed.com rac2-prv
#VIPS
192.168.0.221 rac1-vip.naveed.com rac1-vip
192.168.0.222 rac2-vip.naveed.com rac2-vip
#DNS server IP
192.168.0.10 naveeddns.naveed.com naveeddns
[root@rac2 rac2]#
Thanks in advance.
Hi,
First of all, thanks a lot for the response. You won't believe it, but this is my 7th fresh installation, and every time I'm hit with this same error on node 2.
I also tried the procedure below instead of a fresh installation:
I deconfigured and reran (./rootcrs.pl -verbose -deconfig -force) on node 2.
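For clarity, the deconfigure-and-rerun sequence on node 2 was roughly the following sketch (as root, with the grid home at /u01/grid_home as in this install; these are the same two commands shown in the output below, run from their usual directories):

```shell
# Deconfigure the Clusterware stack on the failing node only (node 2), as root.
cd /u01/grid_home/crs/install
./rootcrs.pl -verbose -deconfig -force

# Then rerun root.sh from the grid home on the same node.
cd /u01/grid_home
./root.sh
```

The deconfig step produced the output below, and the subsequent root.sh run got further (ASM started, voting disk added) before failing again on ora.crsd.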
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 grid_home]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/grid_home
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid_home/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
ASM created and started successfully.
Disk Group CRS mounted successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Successful addition of voting disk 636af26485ef4f27bfec31523aaa0660.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 636af26485ef4f27bfec31523aaa0660 (ORCL:CRS1) [CRS]
Located 1 voting disk(s).
Start of resource "ora.crsd" failed
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'rac2'
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Ready Services at /u01/grid_home/crs/install/crsconfig_lib.pm line 1286.
/u01/grid_home/perl/bin/perl -I/u01/grid_home/perl/lib -I/u01/grid_home/crs/install /u01/grid_home/crs/install/rootcrs.pl execution failed -
11G R2 root.sh failed on first node with OLE fetch parameter error
I have successfully installed 11g R2.1 on CentOS 5.4 64-bit.
Now I'm moving on to installing 11g R2.2 on Red Hat 5.4 64-bit with HDS storage.
[grid@dmdb1 grid]$ uname -a
Linux dmdb1 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
I passed all pre-install requirements except shared storage. However, I verified it manually with no problems.
[grid@dmdb1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n dmdb1,dmdb2,dmdb3,dmdb4 -verbose|grep -i fail
[grid@dmdb1 grid]$ ./runcluvfy.sh stage -post hwos -n dmdb1,dmdb2,dmdb3,dmdb4 -verbose|grep -i fail
[grid@dmdb1 grid]$ ./runcluvfy.sh comp sys -n dmdb1,dmdb2,dmdb3,dmdb4 -p crs -osdba dba -orainv oinstall
Verifying system requirement
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "dmdb4:/tmp"
Free disk space check passed for "dmdb3:/tmp"
Free disk space check passed for "dmdb2:/tmp"
Free disk space check passed for "dmdb1:/tmp"
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Verification of system requirement was successful.
[grid@dmdb1 grid]$ ./runcluvfy.sh comp sys -n dmdb1,dmdb2,dmdb3,dmdb4 -p database -osdba dba -orainv oinstall|grep -i fail
[grid@dmdb1 grid]$ ./runcluvfy.sh comp ssa -n dmdb1,dmdb2,dmdb3,dmdb4
Verifying shared storage accessibility
Checking shared storage accessibility...
Storage operation failed
Shared storage check failed on nodes "dmdb4,dmdb3,dmdb2,dmdb1"
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
I followed the article below to verify the shared storage issue:
http://www.webofwood.com/rac/oracle-response-to-shared-storage-check-failed-on-nodes/
It checked out OK, so I skipped the SSA issue and went on with the install using ./runInstaller -ignoreInternalDriverError.
However, root.sh then failed with the error below:
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dmdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'dmdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dmdb1'
CRS-2677: Stop of 'ora.gipcd' on 'dmdb1' succeeded
CRS-4000: Command Start failed, or completed with errors.
CRS-2672: Attempting to start 'ora.gipcd' on 'dmdb1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'dmdb1'
CRS-2676: Start of 'ora.gipcd' on 'dmdb1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'dmdb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dmdb1'
CRS-2676: Start of 'ora.gpnpd' on 'dmdb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dmdb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'dmdb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dmdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'dmdb1'
CRS-2676: Start of 'ora.diskmon' on 'dmdb1' succeeded
CRS-2674: Start of 'ora.cssd' on 'dmdb1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'dmdb1'
CRS-2681: Clean of 'ora.cssd' on 'dmdb1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'dmdb1'
CRS-2677: Stop of 'ora.diskmon' on 'dmdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'dmdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'dmdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'dmdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'dmdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'dmdb1'
CRS-2677: Stop of 'ora.gipcd' on 'dmdb1' succeeded
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl start resource ora.ctssd -init
Start of resource "ora.ctssd -init" failed
Clusterware exclusive mode start of resource ora.ctssd failed
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2500: Cannot stop resource 'ora.asm' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
Stop of resource "ora.asm -init" failed
Failed to stop ASM
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'dmdb1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'dmdb1' succeeded
Initial cluster configuration failed. See /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_dmdb1.log for details
I manually ran '/opt/app/11.2.0/grid/bin/crsctl start resource ora.ctssd -init' and got the errors below in /opt/app/11.2.0/grid/log/dmdb1/cssd/ocssd.log:
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2011-09-23 19:06:41.501: [ CSSD][1812336384]clssscmain: Starting CSS daemon, version 11.2.0.1.0, in (exclusive) mode with uniqueness value 1316776001
2011-09-23 19:06:41.502: [ CSSD][1812336384]clssscmain: Environment is production
2011-09-23 19:06:41.502: [ CSSD][1812336384]clssscmain: Core file size limit extended
2011-09-23 19:06:41.515: [ CSSD][1812336384]clssscGetParameterOLR: OLR fetch for parameter logsize (8) failed with rc 21
2011-09-23 19:06:41.515: [ CSSD][1812336384]clssscSetPrivEnv: IPMI device not installed on this node
2011-09-23 19:06:41.517: [ CSSD][1812336384]clssscGetParameterOLR: OLR fetch for parameter priority (15) failed with rc 21
2011-09-23 19:06:41.539: [ CSSD][1812336384]clssscExtendLimits: The current soft limit for file descriptors is 65536, hard limit is 65536
2011-09-23 19:06:41.539: [ CSSD][1812336384]clssscExtendLimits: The current soft limit for locked memory is 4294967295, hard limit is 4294967295
2011-09-23 19:06:41.541: [ CSSD][1812336384]clssscmain: Running as user grid
Can anybody help me fix it? I opened an SR for this case.
It's OK now.
Below is from the Oracle Support service request:
=== ODM Action Plan ===
Dear customer, after going through the uploaded log files, we found the issue looks like
bug 9732641: the clusterware gpnpd process crashes when there is more than one cluster with the same name.
To narrow down the issue, please apply the following steps.
1. Please clean up the previous configuration with the steps below, then run the root.sh script on node 1 again.
1.1 Remove the current configuration:
$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
1.2 Remove other related files.
If $GI_BASE/Clusterware/ckptGridHA_.xml is still there, please remove it manually with the "rm" command on all nodes.
If the gpnp profile is still there, please clean it up, then rebuild the required directories:
$ rm -rf $GRID_HOME/gpnp/*
$ mkdir -p $GRID_HOME/gpnp/profiles/peer $GRID_HOME/gpnp/wallets/peer $GRID_HOME/gpnp/wallets/prdr $GRID_HOME/gpnp/wallets/pa $GRID_HOME/gpnp/wallets/root
2. After the previous configuration has been cleaned up, please rerun the root.sh script. If the issue is still there, please upload the following:
Everything under <GI_HOME>/log
Everything under <ORACLE_BASE for grid user>/cfgtoollogs
Everything under <GI_HOME>/cfgtoollogs/crsconfig
OS log (/var/log/messages)
3. Please also make sure there is only one GI stack running on your cluster.
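The cleanup in steps 1.1 and 1.2 can be sketched as a small shell script. This is only a sketch of the commands quoted above; GRID_HOME defaults to the path used in this thread, and the deconfig and root.sh steps must be run as root on the failing node.

```shell
#!/bin/sh
# Sketch of the support cleanup steps above (run as root on the failing node).
GRID_HOME=${GRID_HOME:-/opt/app/11.2.0/grid}

# Recreate the gpnp directory skeleton under a given Grid home (step 1.2).
rebuild_gpnp_dirs() {
    gh="$1"
    rm -rf "$gh/gpnp"
    mkdir -p "$gh/gpnp/profiles/peer" \
             "$gh/gpnp/wallets/peer" \
             "$gh/gpnp/wallets/prdr" \
             "$gh/gpnp/wallets/pa" \
             "$gh/gpnp/wallets/root"
}

if [ -x "$GRID_HOME/crs/install/rootcrs.pl" ]; then
    # 1.1 Deconfigure the half-built stack on this node.
    "$GRID_HOME/crs/install/rootcrs.pl" -verbose -deconfig -force
    # 1.2 Clean up the stale gpnp profile and rebuild the directories.
    rebuild_gpnp_dirs "$GRID_HOME"
    # 2. Rerun root.sh once the cleanup is done.
    "$GRID_HOME/root.sh"
fi
```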
See /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_dmdb1.log for details -
Scan_listener missed on second node
Hello,
I've installed Grid Infrastructure on two nodes (Red Hat Linux).
I'm using static IP addresses instead of GNS.
The installation was OK, but I think some resources are missing:
[root@nodo1 ~]# crsctl stat res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.asm
ONLINE ONLINE nodo1 Started
ONLINE ONLINE nodo2 Started
ora.eons
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.gsd
OFFLINE OFFLINE nodo1
OFFLINE OFFLINE nodo2
ora.net1.network
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.ons
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.registry.acfs
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE nodo1
ora.nodo1.vip
1 ONLINE ONLINE nodo1
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE nodo1
Why aren't there corresponding resources for node 2?
ora.LISTENER_SCAN2.lsnr
ora.nodo2.vip
ora.scan2.vip
I've tried to add them manually, but only managed to add ora.nodo2.vip.
When I try to add a SCAN listener for the second node I get this error:
[grid@nodo2 ~]$ srvctl add scan_listener -l LISTENER_SCAN2 -s -p TCP:1521
PRCS-1028 : Single Client Access Name listeners already exist
Any ideas?
Thanks
No, you don't have to. However, Oracle recommends three IP addresses for the SCAN name in your DNS/hosts file if you are not using GNS. If it is a test machine, I would not worry about it. Clusterware creates the SCAN listeners, and if the SCAN IP address (interface) fails on one node it will automatically be started on the other node. The SCAN is independent of the nodes: you can add/remove nodes from the cluster without worrying about the SCAN.
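To see how the SCAN came up after the install, you can query both DNS and clusterware. The srvctl commands below are standard 11gR2; the scan_ip_count helper and the name "rac-scan" are illustrative assumptions, not part of the original post.

```shell
#!/bin/sh
# Count how many distinct addresses a SCAN name resolves to.
# Oracle recommends three when DNS (not GNS) provides the SCAN.
scan_ip_count() {
    getent ahosts "$1" | awk '{print $1}' | sort -u | wc -l
}

# Illustrative usage ("rac-scan" is an assumed SCAN name):
#   scan_ip_count rac-scan
#
# Then cross-check what clusterware registered (run as the grid user):
#   srvctl config scan            # SCAN name and its VIPs
#   srvctl config scan_listener   # SCAN listeners and their ports
#   srvctl status scan_listener   # which node each one runs on
scan_ip_count localhost
```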
-
Application deployed on one node is not getting displayed in second node
Our environment is Linux x86_64 with FMW 11g, WebLogic 10.3.4.0, and SOA 11.1.1.4.
We have installed weblogic cluster :
node1: Admin server,soa_server1
node2:soa_server2
When we deploy any SOA application on one node, it is not getting published on the second node. We have engaged Oracle Support as well, but the problem is still not solved.
They told us to configure Coherence; we have taken the OWC document from Metalink.
This is very urgent.
Can anyone help me?
Do you have a cluster consisting of soa_server1 and soa_server2, or are these stand-alone WebLogic instances?
Is soa-infra active on soa_server2?
Can you check if soa-infra can be reached on both the server instances (http://hostname:port/soa-infra/)
When soa-infra cannot be reached on soa_server2, can you check the logging to see what errors are occurring?
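The reachability check suggested above can be done from a shell, assuming curl is available; the hostnames and port 8001 below are placeholders for your managed servers, not values from this thread.

```shell
#!/bin/sh
# Return the HTTP status code for a URL ("000" if the server is unreachable).
http_status() {
    curl -s -m 5 -o /dev/null -w '%{http_code}' "$1"
}

# Placeholders: replace host/port with your soa_server1 and soa_server2.
for url in "http://node1:8001/soa-infra/" "http://node2:8001/soa-infra/"; do
    echo "$url -> HTTP $(http_status "$url")"
done
```

A 200 from both instances means soa-infra is up on both; anything else points at the server whose logs need checking.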
Some examples that set up a clustered environment can be found here: http://middlewaremagic.com/weblogic/?p=6872
and here: http://middlewaremagic.com/weblogic/?p=6637 -
Root.sh on second node fails
I am running 64-bit Linux and installing Oracle 11gR2.
It passed all the prerequisite checks, and root.sh finished on the first node with no errors.
On the second node I got the following:
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-07-13 12:51:28: Parsing the host name
2010-07-13 12:51:28: Checking for super user privileges
2010-07-13 12:51:28: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node fred0224, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'fred0225'
CRS-2676: Start of 'ora.mdnsd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'fred0225'
CRS-2676: Start of 'ora.gipcd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'fred0225'
CRS-2676: Start of 'ora.gpnpd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'fred0225'
CRS-2676: Start of 'ora.cssdmonitor' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'fred0225'
CRS-2672: Attempting to start 'ora.diskmon' on 'fred0225'
CRS-2676: Start of 'ora.diskmon' on 'fred0225' succeeded
CRS-2676: Start of 'ora.cssd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'fred0225'
Start action for octssd aborted
CRS-2676: Start of 'ora.ctssd' on 'fred0225' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'fred0225'
CRS-2672: Attempting to start 'ora.asm' on 'fred0225'
CRS-2676: Start of 'ora.drivers.acfs' on 'fred0225' succeeded
CRS-2676: Start of 'ora.asm' on 'fred0225' succeeded
CRS-2664: Resource 'ora.ctssd' is already running on 'fred0225'
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl start resource ora.asm -init
Start of resource "ora.asm -init" failed
Failed to start ASM
Failed to start Oracle Clusterware stack
In the ocssd.log I found
[ CSSD][3559689984]clssnmvDHBValidateNCopy: node 1, fred0224, has a disk HB, but no network HB, DHB has rcfg 174483948, wrtcnt, 232, LATS 521702664, lastSeqNo 232, uniqueness 1279039649, timestamp 1279039959/521874274
In oraagent_oracle.log I found
[ clsdmc][1212365120]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD)) with status 9
2010-07-13 12:54:07.234: [ora.gpnpd][1212365120] [check] Error = error 9 encountered when connecting to GPNPD
2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Calling PID check for daemon
2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Trying to check PID = 20584
2010-07-13 12:54:07.432: [ COMMCRS][1285794112]clsc_connect: (0x1304d850) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD))
[ clsdmc][1222854976]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD)) with status 9
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Error = error 9 encountered when connecting to MDNSD
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Calling PID check for daemon
2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Trying to check PID = 20571
2010-07-13 12:54:08.841: [ COMMCRS][1201875264]clsc_connect: (0x12f3b1d0) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD))
[ clsdmc][1159915840]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD)) with status 9
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Error = error 9 encountered when connecting to GIPCD
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Calling PID check for daemon
2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Trying to check PID = 20566
2010-07-13 12:54:10.242: [ COMMCRS][1254324544]clsc_connect: (0x12f35630) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD))
In oracssdagent_root.log I found
2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clssscConnect: gipc request failed with 29 (0x16)
2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clsssInitNative: connect failed, rc 29
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clssnsqlnum: RPC failed rc 3
2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_cssini: failed 3 to fetch node number
2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_init: css init done, nodenum -1.
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssRecvMsg: got a disconnect from the server while waiting for message type 43
2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssGetNLSData: Failure receiving a msg, rc 3
If anyone needs more info, please let me know.
On all nodes:
1. Modify the /etc/sysconfig/oracleasm with:
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
2. Restart asmlib with:
# /etc/init.d/oracleasm restart
3. Run root.sh on the 2nd node
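Step 1 above can be scripted. This is a sketch that edits the sysconfig file with sed; the function takes the file path as an argument so you can try it on a copy before touching /etc/sysconfig/oracleasm, and taking a backup first is advisable.

```shell
#!/bin/sh
# Set the asmlib scan order so multipath (dm) devices are scanned first
# and the underlying sd paths are excluded.
set_asmlib_scanorder() {
    cfg="$1"   # path to the oracleasm sysconfig file
    sed -i \
        -e 's/^ORACLEASM_SCANORDER=.*/ORACLEASM_SCANORDER="dm"/' \
        -e 's/^ORACLEASM_SCANEXCLUDE=.*/ORACLEASM_SCANEXCLUDE="sd"/' \
        "$cfg"
}

# On a real node (as root):
#   set_asmlib_scanorder /etc/sysconfig/oracleasm
#   /etc/init.d/oracleasm restart
```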
hope this helps you -
Oracle 11gR2 RAC Root.sh Failed On The Second Node
Hello,
When installing Oracle 11gR2 RAC on AIX 7.1, root.sh succeeds on the first node but fails on the second node.
I get the error described in "Root.sh Failed On The Second Node With Error ORA-15018 ORA-15031 ORA-15025 ORA-27041 [ID 1459711.1]" during the Oracle installation.
Applies to:
Oracle Server - 11gR2 RAC
EMC VNX 500
IBM AIX on POWER Systems (64-bit)
/dev/rhdiskpower0 does not show up in the kfod output on the second node. It is an EMC multipath disk device.
But the disk can be found with AIX commands.
Any help?
Thanks
The solution was to uninstall "EMC Solutions Enabler", but on this machine I only find "EMC Migration Enabler" and can't remove it without removing EMC PowerPath.
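For the original symptom (a device visible to the OS but missing from kfod output), a sketch of how to re-check ASM discovery on the affected node; the Grid home path and the PowerPath disk string are assumptions, and the AIX-side checks are left as comments since they only apply there.

```shell
#!/bin/sh
# Re-run ASM disk discovery on the node where the disk is missing.
# kfod ships with the Grid home; the disk string is an assumption for
# EMC PowerPath pseudo-devices on AIX.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}

if [ -x "$GRID_HOME/bin/kfod" ]; then
    "$GRID_HOME/bin/kfod" disks=all asm_diskstring='/dev/rhdiskpower*'
fi

# At the OS level on AIX: does the device exist, and is its header readable?
#   ls -l /dev/rhdiskpower0
#   lquerypv -h /dev/rhdiskpower0
```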