Inst_loc inventory pointer missing on second node in RAC
Hi,
I have installed Clusterware 11.1.0.6 on Windows 2003 on two nodes and I'd like to patch it to 11.1.0.7. However, when calling "opatch lsinventory -all" I noticed that it works on the first node (the node where the installation was performed), but doesn't work on the second node:
C:\product\11.1.0\crs\OPatch>opatch lsinventory -all
Invoking OPatch 11.1.0.6.0
Oracle Interim Patch Installer version 11.1.0.6.0
Copyright (c) 2007, Oracle Corporation. All rights reserved.
Oracle Home : C:\product\11.1.0\crs
Central Inventory : n/a
from : n/a
OPatch version : 11.1.0.6.0
OUI version : 11.1.0.6.0
OUI location : C:\product\11.1.0\crs\oui
Log file location : C:\product\11.1.0\crs\cfgtoollogs\opatch\opatch2010-11-28_10-22-19AM.log
OPatch cannot find a valid oraInst.loc file to locate Central Inventory.
OPatch failed with error code = 104
C:\product\11.1.0\crs\OPatch>
I checked the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE" on both nodes and found that the first node has an inventory pointer "inst_loc" which points to "C:\Program Files\Oracle\Inventory", but on the second node this value is missing. If I manually set it on the second node, then "opatch lsinventory" works, but I'm not sure whether it's OK to set it manually.
I also checked the documentation, and here http://download.oracle.com/docs/cd/B28359_01/em.111/b31207/oui5_cluster_environment.htm#OUICG267 it says: "After you click Next, the Oracle Universal Installer checks whether the remote inventories are set. If they are not set, the Oracle Universal Installer sets up the remote inventories by setting registry keys." So I suppose the registry key should also exist on the second node?
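For reference, the missing value would look like this as a registry export (a sketch assuming the default inventory path shown on node 1; export the key from node 1 and compare before setting anything on node 2):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE]
"inst_loc"="C:\\Program Files\\Oracle\\Inventory"
```

Importing a .reg file like this only adds the pointer; it does not create or validate the inventory itself, so the directory must already exist and match node 1.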
Thanks in advance for any answers.
Regards,
Jure
Hi, hmm ... it looks like remote operations failed during the installation process. Possible reasons for this error are a missing oraInst.loc file or permission issues with the oraInst.loc file. Ensure the user has read/write privileges on the oraInst.loc file as well as on the actual path of the oraInventory location. If you cannot fix the issue and you know the inventory location, you may want to try the following: "opatch apply -invPtrLoc C:\mypath\mypath\oraInst.loc", where "mypath" should be replaced by your Windows locations. Also verify that you have all the Oracle binaries on the 2nd node.
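If you go the -invPtrLoc route, the pointer file is just a small text file (a sketch; C:\temp\oraInst.loc is a hypothetical location, and inst_group is not used on Windows so it can stay empty):

```
inventory_loc=C:\Program Files\Oracle\Inventory
inst_group=
```

You could then run "opatch lsinventory -invPtrLoc C:\temp\oraInst.loc" first to confirm OPatch can see the inventory before attempting "opatch apply".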
Edited by: Jose Valerio on Nov 28, 2010 2:21 PM
Similar Messages
-
Scan_listener missing on second node
Hello,
I've installed a grid infrastructure on two nodes (Red Hat Linux).
I'm using static IP addresses instead of GNS.
The installation was OK, but I think that some resources are missing:
[root@nodo1 ~]# crsctl stat res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.asm
ONLINE ONLINE nodo1 Started
ONLINE ONLINE nodo2 Started
ora.eons
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.gsd
OFFLINE OFFLINE nodo1
OFFLINE OFFLINE nodo2
ora.net1.network
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.ons
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
ora.registry.acfs
ONLINE ONLINE nodo1
ONLINE ONLINE nodo2
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE nodo1
ora.nodo1.vip
1 ONLINE ONLINE nodo1
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE nodo1
Why aren't there corresponding resources for node 2?
ora.LISTENER_SCAN2.lsnr
ora.nodo2.vip
ora.scan2.vip
I've tried to add them manually, and I've only managed to add ora.nodo2.vip.
When I try to add a scan_listener for the second node I get this error:
[grid@nodo2 ~]$ srvctl add scan_listener -l LISTENER_SCAN2 -s -p TCP:1521
PRCS-1028 : Single Client Access Name listeners already exist
Any ideas?
Thanks
No, you don't have to. However, Oracle recommends three IP addresses for the SCAN name in your DNS/hosts file if you are not using GNS. If it is a test machine, I would not worry about it. Clusterware creates SCAN listeners on the nodes, and if the SCAN IP address (interface) fails on one node, it will automatically start the SCAN on another node. The SCAN is independent of the nodes; you can add/remove nodes from the cluster without worrying about the SCAN.
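To illustrate the three-address recommendation, the SCAN name would be defined in DNS with round-robin A records along these lines (a sketch with made-up addresses; an /etc/hosts file only returns its first matching entry, so multiple SCAN IPs really require DNS or GNS):

```
; round-robin A records for the SCAN name (example addresses)
rac-scan.localdomain.  IN A  192.168.2.201
rac-scan.localdomain.  IN A  192.168.2.202
rac-scan.localdomain.  IN A  192.168.2.203
```

With a single hosts-file entry, as in a test setup, only one SCAN VIP and one SCAN listener are created, which is why only ora.scan1.vip and ora.LISTENER_SCAN1.lsnr appear.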
-
11gR2 clusterware installation problem on root.sh script on second node
Hi all,
I want to install *11gR2 RAC* on Oracle Linux 5.5 (x86_64) using VMware Server, but on the second node I get two "*failed*" messages at the end of the root.sh script.
After that I try to install the DB, but I can see only one node. What is the problem?
I will send the output; I need your help.
Thank you all for helping.
Hosts file (we have no ping problems):
[root@rac2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
# SCAN
192.168.2.201 rac-scan.localdomain rac-scan
[root@rac2 ~]#
FIRST NODE root.sh script output...
[root@rac2 ~]# /u01/app/11.2.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:45:06: Parsing the host name
2010-12-06 14:45:06: Checking for super user privileges
2010-12-06 14:45:06: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 587cc69413ce4fd3bf0c2c2548fb9017.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 587cc69413ce4fd3bf0c2c2548fb9017 (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac2'
CRS-2676: Start of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac2'
CRS-2676: Start of 'ora.registry.acfs' on 'rac2' succeeded
rac2 2010/12/06 14:52:06 /u01/app/11.2.0/db_1/cdata/rac2/backup_20101206_145206.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac2 ~]#
SECOND NODE root.sh script output
[root@rac1 db_1]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-12-06 14:54:11: Parsing the host name
2010-12-06 14:54:11: Checking for super user privileges
2010-12-06 14:54:11: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/db_1/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
Successful addition of voting disk 2761ce8d47b44fbabf73462151e3ba1d.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 2761ce8d47b44fbabf73462151e3ba1d (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
PRCR-1079 : *Failed* to start resource ora.scan1.vip
CRS-5005: IP Address: 192.168.2.201 is already in use in the network
CRS-2674: Start of 'ora.scan1.vip' on 'rac1' *failed*
CRS-2632: There are no more servers to try to place resource 'ora.scan1.vip' on that would satisfy its placement policy
start scan ... *failed*
Configure Oracle Grid Infrastructure for a Cluster ... *failed*
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 6847 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 db_1]#
The "./runcluvfy.sh stage -pre crsinst -n rac1,rac2" outputs are the same on each node:
[oracle@rac2 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac2"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.122.0" with node(s) rac2,rac1
TCP connectivity check failed for subnet "192.168.122.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.0.0"
Interfaces found on subnet "192.168.2.0" that are likely candidates for VIP are:
rac2 eth0:192.168.2.102 eth0:192.168.2.112 eth0:192.168.2.201
rac1 eth0:192.168.2.101 eth0:192.168.2.111
Interfaces found on subnet "192.168.122.0" that are likely candidates for a private interconnect are:
rac2 virbr0:192.168.122.1
rac1 virbr0:192.168.122.1
Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.0.102
rac1 eth1:192.168.0.101
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
[oracle@rac2 grid]$
I'm confused :)
Edited by: Eren GULERYUZ on 06.Dec.2010 05:57
Hi,
it looks like your "shared device" is not really shared.
The second node also does "create an ASM diskgroup" and creates the OCR and voting disks. If this were indeed a shared device, it should have recognized that your disk is already shared.
So, as a result, your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
Which VMware version did you use? It will not work correctly with the Workstation or Player edition, since shared disks only really work with the Server version.
If you are indeed using the Server version, could you paste your VM configurations?
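For comparison, the shared-disk section of a VMware .vmx file typically contains settings along these lines (a sketch; the scsi1:0 slot and the disk filename are assumptions, and both nodes' .vmx files must carry the same settings pointing at the same preallocated vmdk):

```
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "shared_disk1.vmdk"
scsi1:0.deviceType = "disk"
```

If disk.locking is left at its default, the second VM silently gets its own view of the disk instead of sharing it, which matches the symptom of each node creating its own ASM diskgroup.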
Furthermore, I recommend using VirtualBox. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian
-
SC 3.2 second node panics on boot
I am trying to get a two-node (potentially three, if the cluster works :) ) cluster running in a Solaris 10 x86 (AMD64) environment. The machine specifications are as follows:
AMD 64 single core
SATA2 hdd partitioned as / (100+gb), swap (4gb) and /globaldevices (1gb)
Solaris 10 Generic_127112-07
Completely patched
2 gb RAM
NVidia nge nic
Syskonnect skge nic
Realtek rge nic
Sun Cluster 3.2
Two unmanaged gigabit switches
The cluster setup would look like the following:
DB03 (First node of the cluster)
db03nge0 -- public interconnect
db03skge0 -- private interconnect 1 -- connected to sw07
db03rge0 -- private interconnect 2 -- connected to sw09
/globaldevices -- local disk
DB02 (Second node of the cluster)
db02nge0 -- public interconnect
db02skge0 -- private interconnect 1 -- connected to sw07
db02rge0 -- private interconnect 2 -- connected to sw09
/globaldevices -- local disk
DB01 (Third node of the cluster)
db01nge0 -- public interconnect
db01skge0 -- private interconnect 1 -- connected to sw07
db01rge0 -- private interconnect 2 -- connected to sw09
/globaldevices -- local disk
All external/public communication happens at the nge0 nic.
Switch sw07 and sw09 connects these machines for private interconnect.
All of them have a local disk partition mounted as /globaldevices
Another fourth server which is not a part of the cluster environment acts as a quorum server. The systems connect to the quorum server over nge nic. the quorum device name is cl01qs
Next, I did a single-node configuration on DB03 through the scinstall utility, and it completed successfully. The DB03 system rebooted, acquired a quorum vote from the quorum server, and came up fine.
Then, I added the second node to the cluster (running the scinstall command from the second node). The scinstall completes successfully and the node goes down for a reboot.
I can see the following from the first node:
db03nge0# cluster show
Cluster ===
Cluster Name: cl01
installmode: disabled
private_netaddr: 172.16.0.0
private_netmask: 255.255.248.0
max_nodes: 64
max_privatenets: 10
udp_session_timeout: 480
global_fencing: pathcount
Node List: db03nge0, db02nge0
Host Access Control ===
Cluster name: cl01
Allowed hosts: Any
Authentication Protocol: sys
Cluster Nodes ===
Node Name: db03nge0
Node ID: 1
Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x479C227E00000001
Transport Adapter List: skge0, rge0
Node Name: db02nge0
Node ID: 2
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 0
quorum_defaultvote: 1
quorum_resv_key: 0x479C227E00000002
Transport Adapter List: skge0, rge0Now, the problem part, when scinstall completes on the second node, it sends the machine for a reboot and, the second node encounters a panic and shuts itself down. This panic and reboot cycle keeps on going unless I place the second node in non-cluster mode. The output from both the nodes looks like the following:
First Node DB03 (Primary)
Jan 27 18:34:49 db03nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid: 2, incarnation #: 1201476860) has become reachable.
Jan 27 18:34:49 db03nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 online
Jan 27 18:34:49 db03nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 online
Jan 27 18:34:49 db03nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
Jan 27 18:34:49 db03nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
Jan 27 18:34:49 db03nge0 Cluster.Framework: [ID 801593 daemon.notice] stdout: releasing reservations for scsi-2 disks shared with db02nge0
Jan 27 18:34:49 db03nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
Jan 27 18:34:59 db03nge0 genunix: [ID 446068 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is down.
Jan 27 18:34:59 db03nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0.
Jan 27 18:34:59 db03nge0 genunix: [ID 489438 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 being drained
Jan 27 18:34:59 db03nge0 genunix: [ID 489438 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 being drained
Jan 27 18:35:00 db03nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #8 completed.
Jan 27 18:35:00 db03nge0 Cluster.Framework: [ID 801593 daemon.notice] stdout: fencing node db02nge0 from shared devices
Jan 27 18:35:59 db03nge0 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path db03nge0:skge0 - db02nge0:skge0 errors during initiation
Jan 27 18:35:59 db03nge0 genunix: [ID 618107 kern.warning] WARNING: Path db03nge0:skge0 - db02nge0:skge0 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
Jan 27 18:35:59 db03nge0 genunix: [ID 604153 kern.notice] NOTICE: clcomm: Path db03nge0:rge0 - db02nge0:rge0 errors during initiation
Jan 27 18:35:59 db03nge0 genunix: [ID 618107 kern.warning] WARNING: Path db03nge0:rge0 - db02nge0:rge0 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
Jan 27 18:40:27 db03nge0 genunix: [ID 273354 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is dead.
Second Node DB02 (secondary node just added to the cluster)
Jan 27 18:33:43 db02nge0 ipf: [ID 774698 kern.info] IP Filter: v4.1.9, running.
Jan 27 18:33:50 db02nge0 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/pools:default: Method "/lib/svc/method/svc-pools start" failed with exit status 96.
Jan 27 18:33:50 db02nge0 svc.startd[8]: [ID 748625 daemon.error] system/pools:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) with votecount = 1 added.
Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) with votecount = 0 added.
Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter rge0 constructed
Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter skge0 constructed
Jan 27 18:34:20 db02nge0 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node db02nge0: attempting to join cluster.
Jan 27 18:34:23 db02nge0 skge: [ID 418734 kern.notice] skge0: Network connection up on port A
Jan 27 18:34:23 db02nge0 skge: [ID 249518 kern.notice] Link Speed: 1000 Mbps
Jan 27 18:34:23 db02nge0 skge: [ID 966250 kern.notice] Autonegotiation: Yes
Jan 27 18:34:23 db02nge0 skge: [ID 676895 kern.notice] Duplex Mode: Full
Jan 27 18:34:23 db02nge0 skge: [ID 825410 kern.notice] Flow Control: Symmetric
Jan 27 18:34:23 db02nge0 skge: [ID 512437 kern.notice] Role: Slave
Jan 27 18:34:23 db02nge0 rge: [ID 801725 kern.info] NOTICE: rge0: link up 1000Mbps Full_Duplex (initialized)
Jan 27 18:34:24 db02nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid: 1, incarnation #: 1201416440) has become reachable.
Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:rge0 - db03nge0:rge0 online
Jan 27 18:34:24 db02nge0 genunix: [ID 525628 kern.notice] NOTICE: CMM: Cluster has reached quorum.
Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) is up; new incarnation number = 1201416440.
Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
Jan 27 18:34:24 db02nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:skge0 - db03nge0:skge0 online
Jan 27 18:34:25 db02nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
Jan 27 18:34:25 db02nge0 genunix: [ID 499756 kern.notice] NOTICE: CMM: Node db02nge0: joined cluster.
Jan 27 18:34:25 db02nge0 cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
Jan 27 18:34:25 db02nge0 unix: [ID 836849 kern.notice]
Jan 27 18:34:25 db02nge0 ^Mpanic[cpu0]/thread=ffffffff8202a1a0:
Jan 27 18:34:25 db02nge0 genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffe8000636b90 addr=30 occurred in module "cl_comm" due to a NULL pointer dereference
Jan 27 18:34:25 db02nge0 cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
Jan 27 18:34:25 db02nge0 unix: [ID 836849 kern.notice]
Jan 27 18:34:25 db02nge0 ^Mpanic[cpu0]/thread=ffffffff8202a1a0:
Jan 27 18:34:25 db02nge0 genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=fffffe8000636b90 addr=30 occurred in module "cl_comm" due to a NULL pointer dereference
Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
Jan 27 18:34:25 db02nge0 unix: [ID 839527 kern.notice] cluster:
Jan 27 18:34:25 db02nge0 unix: [ID 753105 kern.notice] #pf Page fault
Jan 27 18:34:25 db02nge0 unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x30
Jan 27 18:34:25 db02nge0 unix: [ID 243837 kern.notice] pid=4, pc=0xfffffffff262c3f6, sp=0xfffffe8000636c80, eflags=0x10202
Jan 27 18:34:25 db02nge0 unix: [ID 211416 kern.notice] cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
Jan 27 18:34:25 db02nge0 unix: [ID 354241 kern.notice] cr2: 30 cr3: efd4000 cr8: c
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] rdi: ffffffff8c932b18 rsi: ffffffffc055a8e6 rdx: 10
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] rcx: ffffffff8d10d0c0 r8: 0 r9: 0
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] rax: 10 rbx: 0 rbp: fffffe8000636cd0
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] r10: 0 r11: fffffffffbce2d40 r12: ffffffff8216a008
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] r13: 800 r14: 0 r15: ffffffff8216a0d8
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] fsb: ffffffff80000000 gsb: fffffffffbc25520 ds: 43
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] es: 43 fs: 0 gs: 1c3
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] trp: e err: 0 rip: fffffffff262c3f6
Jan 27 18:34:25 db02nge0 unix: [ID 592667 kern.notice] cs: 28 rfl: 10202 rsp: fffffe8000636c80
Jan 27 18:34:25 db02nge0 unix: [ID 266532 kern.notice] ss: 30
Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636aa0 unix:die+da ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636b80 unix:trap+d86 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636b90 unix:cmntrap+140 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636cd0 cl_comm:__1cKfp_adapterNget_fp_header6MpCLHC_pnEmsgb__+163 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636d30 cl_comm:__1cJfp_holderVupdate_remote_macaddr6MrnHnetworkJmacinfo_t__v_+e5 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636d80 cl_comm:__1cLpernodepathOstart_matching6MnM_ManagedSeq_4nL_NormalSeq_4nHnetworkJmacinfo_t___n0C____v_+180 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636e60 cl_comm:__1cGfpconfIfp_ns_if6M_v_+195 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636e70 cl_comm:.XDKsQAiaUkSGENQ.__1fTget_idlversion_impl1AG__CCLD_+320bf51b ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636ed0 cl_orb:cllwpwrapper+106 ()
Jan 27 18:34:25 db02nge0 genunix: [ID 655072 kern.notice] fffffe8000636ee0 unix:thread_start+8 ()
Jan 27 18:34:25 db02nge0 unix: [ID 100000 kern.notice]
Jan 27 18:34:25 db02nge0 genunix: [ID 672855 kern.notice] syncing file systems...
Jan 27 18:34:25 db02nge0 genunix: [ID 433738 kern.notice] [1]
Jan 27 18:34:25 db02nge0 genunix: [ID 733762 kern.notice] 33
Jan 27 18:34:26 db02nge0 genunix: [ID 433738 kern.notice] [1]
Jan 27 18:34:26 db02nge0 genunix: [ID 733762 kern.notice] 2
Jan 27 18:34:27 db02nge0 genunix: [ID 433738 kern.notice] [1]
Jan 27 18:34:48 db02nge0 last message repeated 20 times
Jan 27 18:34:49 db02nge0 genunix: [ID 622722 kern.notice] done (not all i/o completed)
Jan 27 18:34:50 db02nge0 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c1d0s1, offset 860356608, content: kernel
Jan 27 18:34:55 db02nge0 genunix: [ID 409368 kern.notice] ^M100% done: 92936 pages dumped, compression ratio 5.02,
Jan 27 18:34:55 db02nge0 genunix: [ID 851671 kern.notice] dump succeeded
Jan 27 18:35:41 db02nge0 genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_127112-07 64-bit
Jan 27 18:35:41 db02nge0 genunix: [ID 943907 kern.notice] Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Jan 27 18:35:41 db02nge0 Use is subject to license terms.
Jan 27 18:35:41 db02nge0 unix: [ID 126719 kern.info] features: 1076fdf<cpuid,sse3,nx,asysc,sse2,sse,pat,cx8,pae,mca,mmx,cmov,pge,mtrr,msr,tsc,lgpg>
Jan 27 18:35:41 db02nge0 unix: [ID 168242 kern.info] mem = 3144188K (0xbfe7f000)
Jan 27 18:35:41 db02nge0 rootnex: [ID 466748 kern.info] root nexus = i86pc
I don't know what the next step is to overcome this problem. I have tried the same on the DB01 machine, but that machine also throws a kernel panic at the same point. From what I can see in the logs, the secondary node(s) do join the cluster:
Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) with votecount = 1 added.
Jan 27 18:34:20 db02nge0 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) with votecount = 0 added.
Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter rge0 constructed
Jan 27 18:34:20 db02nge0 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter skge0 constructed
Jan 27 18:34:20 db02nge0 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node db02nge0: attempting to join cluster.
Jan 27 18:34:23 db02nge0 rge: [ID 801725 kern.info] NOTICE: rge0: link up 1000Mbps Full_Duplex (initialized)
Jan 27 18:34:24 db02nge0 genunix: [ID 537175 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid: 1, incarnation #: 1201416440) has become reachable.
Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:rge0 - db03nge0:rge0 online
Jan 27 18:34:24 db02nge0 genunix: [ID 525628 kern.notice] NOTICE: CMM: Cluster has reached quorum.
Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db03nge0 (nodeid = 1) is up; new incarnation number = 1201416440.
Jan 27 18:34:24 db02nge0 genunix: [ID 377347 kern.notice] NOTICE: CMM: Node db02nge0 (nodeid = 2) is up; new incarnation number = 1201476860.
Jan 27 18:34:24 db02nge0 genunix: [ID 108990 kern.notice] NOTICE: CMM: Cluster members: db03nge0 db02nge0.
Jan 27 18:34:24 db02nge0 genunix: [ID 387288 kern.notice] NOTICE: clcomm: Path db02nge0:skge0 - db03nge0:skge0 online
Jan 27 18:34:25 db02nge0 genunix: [ID 279084 kern.notice] NOTICE: CMM: node reconfiguration #7 completed.
Jan 27 18:34:25 db02nge0 genunix: [ID 499756 kern.notice] NOTICE: CMM: Node db02nge0: joined cluster.
But then, immediately and for some reason, the node encounters the kernel panic.
The only thing that comes to mind is that the skge driver is somehow causing the problem, since it is part of the cluster interconnect. I don't know, but another thread somewhere on the internet describes a similar problem:
http://unix.derkeiler.com/Mailing-Lists/SunManagers/2005-12/msg00114.html
The next step looks like interchanging the nge and skge NICs and trying again.
Any help is much appreciated.
Thanks in advance.
tualha
I'm not sure I can solve your problem, but I have some suggestions you might want to consider. I can't find anything in the bug database that matches this exactly, but that may be because we haven't certified the adapters you are using and thus never came across the problem.
Although I'm not that hot on kernel debugging, looking at the stack traces suggests there might have been a problem with MAC addresses. Can you check that you have the equivalent of local-mac-address?=true set, so that each adapter has a separate MAC address? If they don't, it might confuse the cl_comm module, which seems to have had the fault.
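One quick way to test the separate-MAC condition is to count distinct MAC addresses across the interconnect adapters. A minimal sketch, using made-up ifconfig-style output rather than anything from this system:

```shell
# Count unique MACs in (simulated) adapter output; the adapter names and
# addresses below are placeholders. Fewer unique MACs than adapters suggests
# the shared system MAC is in use (local-mac-address?=false on Solaris).
sample='rge0 ether 0:3:ba:11:22:33
skge0 ether 0:3:ba:11:22:33'
unique=$(printf '%s\n' "$sample" | awk '{print $3}' | sort -u | wc -l)
echo "unique MACs: $unique"
```

If the count is lower than the number of adapters, the adapters are not using their own per-port addresses.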
If that checks out, then I would try switching the SysKonnect adapter to the public network and making the nge adapter the other private network. Again, I don't think any of these adapters have ever been tested, so there is no guarantee they will work.
Other ideas to try are setting the adapters not to auto-negotiate speeds, disabling jumbo frames, and checking that they don't have any power-saving modes that might put them to sleep periodically.
Let us know if any of these make any difference.
Tim
--- -
Inventory pointer file in RAC DBs
Grid version:11.2.0.3
DB versions: 11.2.0.3, 10.2.0.5
Platform : Solaris / AIX
On our RAC servers, I have noticed that the inventory pointer files (oraInst.loc) of the RDBMS HOMEs point to the Grid inventory:
--- inventory pointer file of 11.2.0.3 GRID HOME
$ cat /u01/grid/11.2/oraInst.loc
inventory_loc=/u01/grid/11.2/oraInventory
inst_group=dba
-- inventory pointer file of a 10.2.0.5 RDBMS HOME running in this cluster
$ cat /u01/oracle/10.2/db/oraInst.loc
inventory_loc=/u01/grid/11.2/oraInventory
inst_group=dba
-- inventory pointer file of a 11.2.0.3 RDBMS HOME running in this cluster
$ cat /u01/oracle/11.2/db/oraInst.loc
inventory_loc=/u01/grid/11.2/oraInventory
inst_group=dba
Is this a prerequisite or a standard practice?
In Solaris, the default location for the inventory pointer file is /var/opt/oracle/oraInst.loc. Make sure they all point to the same inventory and that they are all up to date. Also make sure this "global/central" inventory is readable and writable by the grid and oracle users. You can take an existing inventory (or delete all of them) and use runInstaller to create and/or add a new home.
Start here:
http://docs.oracle.com/cd/E11882_01/em.112/e12255/oui2_manage_oracle_homes.htm
look for
./runInstaller -silent -attachHome ORACLE_HOME="<Oracle_Home_Location>"
"CLUSTER_NODES={<node1,node2>}" LOCAL_NODE="<node_name>"  <-- note: for clusters, the LOCAL_NODE matters, so run this on all nodes
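Before attaching homes, it can help to confirm that the pointer files already agree. A small sketch, simulating the three inventory_loc values quoted earlier in this thread instead of reading the real files:

```shell
# Three identical inventory_loc lines (values from the post) should collapse
# to a single unique value if every home shares one central inventory.
locs='inventory_loc=/u01/grid/11.2/oraInventory
inventory_loc=/u01/grid/11.2/oraInventory
inventory_loc=/u01/grid/11.2/oraInventory'
agree=$(printf '%s\n' "$locs" | sort -u | wc -l)
echo "distinct inventories: $agree"
```

A count of 1 means every home points at the same central inventory; anything higher means a home needs its oraInst.loc corrected.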
There is also a "detachHome" -
Error "no connect to database " after moving group to second node.
hi experts,
I set up ERP2005 SR1 and MSSQL2005 with two nodes MSCS.
I am facing error "no connect to database session terminated mscs" from SAP GUI
after moving group to second node.
The SPS level and kernel patch level have not been updated yet.
I couldn't find any SAP notes, so any help and suggestions would be appreciated.
Best Regards
MASAKI
Solved it myself. I was missing that the SQL service account must be in the local Administrators group.
-
ORA-27504: IPC error creating OSD context : Unable to start second node
I have set the DB parameter CLUSTER_INTERCONNECTS to point to the inet addr.
oifcfg getif
bondeth0  172.23.250.128  global  public
bondib0   192.168.8.0  global  cluster_interconnect
When I try to restart the DB services, it throws the error below while starting the second node.
This is the set of commands I executed to change the DB parameter:
alter system set cluster_interconnects = '192.168.10.6' scope=spfile sid='RAC1' ;
alter system set cluster_interconnects = '192.168.10.7' scope=spfile sid='RAC2' ;
alter system set cluster_interconnects = '192.168.10.6' scope=spfile sid='ASM1' ;
alter system set cluster_interconnects = '192.168.10.7' scope=spfile sid='ASM2' ;
On second node
SQL> startup ;
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if_not_found failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpvaddr9
ORA-27303: additional information: requested interface 192.168.10.6 not found. Check output from ifconfig command
SQL>
Please let me know whether the procedure I have followed is wrong.
Thanks
Node 1:
[oracle@prdat137db03 etc]$ /sbin/ifconfig bondib0
bondib0 Link encap:InfiniBand HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
inet addr:192.168.10.6 Bcast:192.168.11.255 Mask:255.255.252.0
inet6 addr: fe80::221:2800:1ef:bc4f/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:65520 Metric:1
RX packets:32550051 errors:0 dropped:0 overruns:0 frame:0
TX packets:32395961 errors:0 dropped:42 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:19382043590 (18.0 GiB) TX bytes:17164065360 (15.9 GiB)
[oracle@prdat137db03 etc]$
Node 2:
[oracle@prdat137db04 ~]$ /sbin/ifconfig bondib0
bondib0 Link encap:InfiniBand HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
inet addr:192.168.10.7 Bcast:192.168.11.255 Mask:255.255.252.0
inet6 addr: fe80::221:2800:1ef:abdb/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:65520 Metric:1
RX packets:29618287 errors:0 dropped:0 overruns:0 frame:0
TX packets:30769233 errors:0 dropped:12 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16453595058 (15.3 GiB) TX bytes:18960175021 (17.6 GiB)
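The ORA-27303 message suggests a direct check: the address named in cluster_interconnects must actually exist on that node. A sketch that parses an ifconfig-style line, simulated with node 2's output from above; the "wanted" address mimics the one the error complains about:

```shell
# Extract the inet addr from a saved ifconfig line and compare it with the
# address the instance requested via cluster_interconnects.
ifout='inet addr:192.168.10.7 Bcast:192.168.11.255 Mask:255.255.252.0'
want=192.168.10.6
have=$(printf '%s\n' "$ifout" | sed -n 's/.*inet addr:\([0-9.]*\) .*/\1/p')
if [ "$have" = "$want" ]; then
  echo "interface $want found"
else
  echo "interface $want not on this node (node has $have)"
fi
```

This reproduces the mismatch in the posted error: node 2 is asking for node 1's address, which is consistent with the instance picking up the wrong SID-qualified spfile entry.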
[oracle@prdat137db04 ~]$ -
Oracle Grid Infrastructure 11.2.0.2 root.sh hangs in the second node
Hi,
I am installing Oracle Grid Infrastructure 11.2.0.2.0 using VMs. Both virtual machines are running the 32-bit version of RHEL 5.7.
Up to the point where the installer asks to run two scripts as the root user, everything runs fine.
root.sh runs successfully on the first node.
On the second node it hangs at,
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
user10193468 wrote:
Hi,
Following are the last few lines of the $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<node>.log file:
2012-03-19 17:20:17: OLR successfully created or upgraded
2012-03-19 17:20:17: /u01/app/11.2.0/grid/bin/clscfg -localadd
2012-03-19 17:20:17: Executing /u01/app/11.2.0/grid/bin/clscfg -localadd
2012-03-19 17:20:17: Executing cmd: /u01/app/11.2.0/grid/bin/clscfg -localadd
I am using ASM for OCR and Voting Disks.
The private network is up and running.
Please let me know what you have set in the asm_diskstring parameter.
I found the following note on Metalink, which says that you should not set asm_diskstring to '/dev/' or '/dev/*'; this is not supported:
root.sh Hangs or Reboots Node on Non First Node if ASM Discovery String asm_diskstring is Set to /dev/ or /dev/* [ID 1356567.1] -
Need convert ccx8.5 single server to First Node; allow HA Second Node validation.
I have a ccx 8.5 single node and need to make ccx cluster (add HA node) for redundancy; my license includes HA -- and I've configured existing node for DNS.
1) I already added the 2nd Node server via CCX Admin on existing Node.
2) While installing ccx8.5 on 2nd server (not yet complete); configuration validation w/ existing node FAILS = msg "Configured first node <> is not a First Node".
How do I update the existing, heretofore operational, single node to behave as a First Node and allow HA 2nd Node?
Or, must I install the 2nd Node as a single node to complete, THEN log on to CCX Admin on the new node and add it to the cluster (pointing to the existing/1st Node)?
Hi Casey,
Have you added your second node's details on the UCCX first node (UCCX Admin->System->Server) before doing the second node installation? Also make sure that both nodes are reachable from each other.
Also, while installing the second node, you need to select the "This is not the first Node" step, so that it prompts you to enter the first node details.
As Anthony rightly said, please crosscheck your UCCX HA license (open the License Information page from the first node); please also confirm it's an Enhanced/Premium license and a valid one.
Hope this helps.
Anand
Please rate helpful posts !! -
Application deployed on one node is not getting displayed in second node
Our environment is Linux x86_64 with FMW 11g, WebLogic 10.3.4.0, and SOA 11.1.1.4.
We have installed weblogic cluster :
node1: Admin server,soa_server1
node2:soa_server2
When we deploy any SOA application on one node, it is not getting published to the second node. We have engaged Oracle Support, but the problem is still not solved.
They told us to configure Coherence; we have taken the OWC from Metalink.
This is very urgent.
Can anyone help me?
Do you have a cluster consisting of soa_server1 and soa_server2, or are these stand-alone WebLogic instances?
Is soa-infra active on soa_server2?
Can you check if soa-infra can be reached on both the server instances (http://hostname:port/soa-infra/)
When soa-infra cannot be reached on soa_server2, can you check the logging to see what errors are occurring?
Some examples that set up a clustered environment can be found here: http://middlewaremagic.com/weblogic/?p=6872
and here: http://middlewaremagic.com/weblogic/?p=6637 -
Error after running root.sh on second node
hi,
I have installed Clusterware on a 2-node system running RHEL 5.
I followed the prereqs and solved all the errors I encountered.
After the Clusterware installation, it asks to run root.sh on all the nodes.
When I ran root.sh on the second node,
it gave this error:
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading
shared libraries: libpthread.so.0: cannot open shared object file:
No such file or directory
So I followed Metalink note 414163.1.
After that I called it a day.
In the morning, when I started both nodes
and started vipca on the second node,
it gave this error:
PRKH:1010 unable to communicate with crs services
Then I ran ps -ef | grep crs:
root 3201 1 0 15:37 ? 00:00:00 /bin/sh /etc/init.d/init.crsd run
crsctl check crs gave
failure 1 contacting css daemon
cannot communicate with crs
cannot communicate with evm
What should I do to start these services?
The crsd and cssd logs were empty and there was no relevant info in the CRS alert log.
I am just reinstalling Clusterware now.
One thing I wanted to ask:
why does ownership of the raw files change back to root (after a node restart),
even though I changed them to oracle? -
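Re the raw-device ownership question just above: on RHEL, udev recreates device nodes with default ownership on every boot, so a plain chown does not survive a restart. A persistent fix is a udev rule; a sketch only, with placeholder device names to be adjusted to the actual raw bindings:

```
# /etc/udev/rules.d/60-raw.rules  (example only; match your raw devices)
ACTION=="add", KERNEL=="raw[1-9]*", OWNER="oracle", GROUP="oinstall", MODE="0660"
```

After adding the rule, the ownership is reapplied automatically at each boot instead of reverting to root.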
Error CLSRSC-507 during the execution of root.sh on second node
Hi all.
OS.......: Red-Hat 6.5
RDBMS: Oracle 12.1.0.2.0
During the installation of a 2-node RAC on RHEL 6.5, while executing the root.sh script on the second node, I get the following error:
[root@oraprd02 grid]# ./root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/05/04 22:47:16 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2015/05/04 22:47:59 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2015/05/04 22:48:00 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2015/05/04 22:48:46 CLSRSC-507: The root script cannot proceed on this node oraprd02 because either the first-node operations have not completed on node oraprd01 or there was an error in obtaining the status of the first-node operations.
Died at /u01/app/12.1.0/grid/crs/install/crsutils.pm line 3681.
The command '/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl ' execution failed
root.sh on the first node completed successfully; I got the success message from the script on the first node.
Has anyone faced this problem? Any assistance will be most helpful.
Thanks in advance.
Root.sh failed on second node while installing CRS 10g on centos 5.5
root.sh failed on second node while installing CRS 10g
Hi all,
I was able to install Oracle 10g RAC Clusterware on the first node of the cluster. However, when I run the root.sh script as the root
user on the second node of the cluster, it fails with the following error message:
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
Running cluvfy stage -post hwos -n all -verbose shows:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking shared storage accessibility...
Disk Sharing Nodes (2 in count)
/dev/sda db2 db1
Running cluvfy stage -pre crsinst -n all -verbose shows:
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
No checks registered for this product.
Running cluvfy stage -post crsinst -n all -verbose shows:
Result: Node reachability check passed from node "DB2".
Result: User equivalence check passed for user "oracle".
Node Name CRS daemon CSS daemon EVM daemon
db2 no no no
db1 yes yes yes
Check: Health of CRS
Node Name CRS OK?
db1 unknown
Result: CRS health check failed.
Checking crsd.log shows:
clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
clsssInitNative: connect failed, rc 9
Any help would be greatly appreciated.
Edited by: 868121 on 2011-6-24 12:31 AM
Hello, it took a little searching, but I found this in a note in the Grid installation guide for Linux/UNIX:
Public IP addresses and virtual IP addresses must be in the same subnet.
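The quoted rule can be checked mechanically: two addresses are in the same subnet when (address AND netmask) is identical for both. A sketch with illustrative addresses, not the poster's:

```shell
# Convert a dotted-quad IP to an integer, then compare network portions.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
same_subnet() {
  [ $(( $(ip_to_int "$1") & $(ip_to_int "$3") )) -eq \
    $(( $(ip_to_int "$2") & $(ip_to_int "$3") )) ] && echo same || echo different
}
same_subnet 192.168.1.10 192.168.1.20 255.255.255.0    # same subnet
same_subnet 192.168.1.10 192.168.2.10 255.255.255.0    # different subnets
```

Running this against each VIP and its node's public IP quickly shows whether the "same subnet" requirement is met.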
In your case, you are using two different subnets for the VIPs. -
11gR2 RAC install fail when running root.sh script on second node
I get the errors:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
[main] [ 2012-04-10 16:44:12.564 EDT ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed
I have tried the fix solutions from the Metalink note below, but they did not fix the issue:
11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC USING ASMLIB [ID 1059847.1]
Hi,
it looks like, that your "shared device" you are using is not really shared.
The second node tries to "create an ASM diskgroup" and create the OCR and voting disks. If this really were a shared device, it should have recognized that your disk is shared.
So, as a result, your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
Which VMware version did you use? It will not work correctly with the Workstation or Player editions, since shared disks only really work with the Server version.
If you are indeed using the Server edition, could you paste your VM configurations?
Furthermore, I recommend using VirtualBox. There is a nice how-to:
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
Sebastian -
Root.sh failed at second node OUL 6.3 Oracle GRID 11.2.0.3
Hi, I'm installing a two-node cluster on Oracle Linux 6.3 with Oracle DB 11.2.0.3. The installation went smoothly up until the execution of the root.sh script on the second node.
The script returned these final lines:
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Start of resource "ora.crsd" failed
CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'nodo2'
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start Cluster Ready Services at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1286.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
In $GRID_HOME/log/node2/alertnode.log it appears to be a Cluster Time Synchronization Service issue (I didn't synchronize the nodes), although CTSS is running in observer mode, which I believe shouldn't affect the installation process. After that I lost the thread: there's a CRS-5018 entry indicating that an unused HAIP route was removed, and then, out of the blue, CRS-5818: Aborted command 'start' for resource 'ora.asm'. Some clarification will be deeply appreciated.
Here's the complete log:
2013-04-01 13:39:35.358
[client(12163)]CRS-2101:The OLR was formatted using version 3.
2013-04-01 19:40:19.597
[ohasd(12338)]CRS-2112:The OLR service started on node nodo2.
2013-04-01 19:40:19.657
[ohasd(12338)]CRS-1301:Oracle High Availability Service started on node nodo2.
[client(12526)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
[client(12528)]CRS-10001:01-Apr-13 13:41 ACFS-9201: Not Supported
[client(12603)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
2013-04-01 19:41:17.509
[ohasd(12338)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2013-04-01 19:41:17.618
[gpnpd(12695)]CRS-2328:GPNPD started on node nodo2.
2013-04-01 19:41:21.363
[cssd(12755)]CRS-1713:CSSD daemon is started in exclusive mode
2013-04-01 19:41:23.194
[ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-04-01 19:41:56.144
[cssd(12755)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
2013-04-01 19:41:57.545
[cssd(12755)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
[cssd(12755)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1 and is terminating; details at (:CSSNM00006:) in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log
2013-04-01 19:41:58.549
[ohasd(12338)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'nodo2'.
2013-04-01 19:42:10.025
[gpnpd(12695)]CRS-2329:GPNPD on node nodo2 shutdown.
2013-04-01 19:42:11.407
[mdnsd(12685)]CRS-5602:mDNS service stopping by request.
2013-04-01 19:42:29.642
[gpnpd(12947)]CRS-2328:GPNPD started on node nodo2.
2013-04-01 19:42:33.241
[cssd(13012)]CRS-1713:CSSD daemon is started in clustered mode
2013-04-01 19:42:35.104
[ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-04-01 19:42:44.065
[cssd(13012)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
2013-04-01 19:42:45.484
[cssd(13012)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
2013-04-01 19:42:52.138
[cssd(13012)]CRS-1601:CSSD Reconfiguration complete. Active nodes are nodo1 nodo2 .
2013-04-01 19:42:55.081
[ctssd(13076)]CRS-2403:The Cluster Time Synchronization Service on host nodo2 is in observer mode.
2013-04-01 19:42:55.581
[ctssd(13076)]CRS-2401:The Cluster Time Synchronization Service started on host nodo2.
2013-04-01 19:42:55.581
[ctssd(13076)]CRS-2407:The new Cluster Time Synchronization Service reference node is host nodo1.
2013-04-01 19:43:08.875
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 19:43:08.876
[ctssd(13076)]CRS-2409:The clock on host nodo2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-04-01 19:43:13.565
[u01/app/11.2.0/grid/bin/orarootagent.bin(13064)]CRS-5018:(:CLSN00037:) Removed unused HAIP route: 169.254.0.0 / 255.255.0.0 / 0.0.0.0 / eth0
2013-04-01 19:53:09.800
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5818:Aborted command 'start' for resource 'ora.asm'. Details at (:CRSAGF00113:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
2013-04-01 19:53:11.827
[ohasd(12338)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/ohasd/ohasd.log.
2013-04-01 19:53:12.779
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:53:13.892
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:53:43.877
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:54:13.891
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:54:43.906
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:55:13.914
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:55:43.918
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:56:13.922
[u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
2013-04-01 19:56:53.209
[crsd(13741)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:07:01.128
[crsd(13741)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:07:01.278
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:07:08.689
[crsd(15248)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:13:10.138
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 20:17:13.024
[crsd(15248)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:17:13.171
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:17:20.826
[crsd(16746)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:27:25.020
[crsd(16746)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:27:25.176
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:27:31.591
[crsd(18266)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:37:35.668
[crsd(18266)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:37:35.808
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:37:43.209
[crsd(19762)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:43:11.160
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 20:47:47.487
[crsd(19762)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:47:47.637
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:47:55.086
[crsd(21242)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 20:57:59.343
[crsd(21242)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 20:57:59.492
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 20:58:06.996
[crsd(22744)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:08:11.046
[crsd(22744)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:08:11.192
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:08:18.726
[crsd(24260)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:13:12.000
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 21:18:22.262
[crsd(24260)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:18:22.411
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:18:29.927
[crsd(25759)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:28:34.467
[crsd(25759)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:28:34.616
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:28:41.990
[crsd(27291)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:38:45.012
[crsd(27291)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:38:45.160
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:38:52.790
[crsd(28784)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:43:12.378
[ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
2013-04-01 21:48:56.285
[crsd(28784)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:48:56.435
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:49:04.421
[crsd(30272)]CRS-1012:The OCR service started on node nodo2.
2013-04-01 21:59:08.183
[crsd(30272)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
2013-04-01 21:59:08.318
[ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
2013-04-01 21:59:15.860
[crsd(31772)]CRS-1012:The OCR service started on node nodo2.
Hi santysharma, thanks for the reply. I have two Ethernet interfaces: eth0 (public network, 192.168.1.0) and eth1 (private network, 10.5.3.0). There is no other device using that IP range. Here's the output of the route command:
Kernel IP routing table
Destination   Gateway        Genmask          Flags  Metric  Ref  Use  Iface
default       192.168.1.1    0.0.0.0          UG     0       0    0    eth0
private       *              255.255.255.0    U      0       0    0    eth1
link-local    *              255.255.0.0      U      1002    0    0    eth0
link-local    *              255.255.0.0      U      1003    0    0    eth1
public        *              255.255.255.0    U      0       0    0    eth0
And the /etc/hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.5.3.1 nodo1.cluster nodo1
10.5.3.2 nodo2.cluster nodo2
192.168.1.13 cluster-scan
192.168.1.14 nodo1-vip
192.168.1.15 nodo2-vip
And the output of ifconfig -a:
eth0 Link encap:Ethernet HWaddr C8:3A:35:D9:C6:2B
inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::ca3a:35ff:fed9:c62b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:34708 errors:0 dropped:18 overruns:0 frame:0
TX packets:24693 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:48545969 (46.2 MiB) TX bytes:1994381 (1.9 MiB)
eth1 Link encap:Ethernet HWaddr 00:0D:87:D0:A3:8E
inet addr:10.5.3.2 Bcast:10.5.3.255 Mask:255.255.255.0
inet6 addr: fe80::20d:87ff:fed0:a38e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:5344 (5.2 KiB)
Interrupt:23 Base address:0x6000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1320 (1.2 KiB) TX bytes:1320 (1.2 KiB)
Now that I think about it, I've read somewhere that IPv6 was not supported... yet I don't see any relation to the 169.254.x.x IP range.
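One thing worth noting: the "link-local" entries in the route table above are exactly the 169.254.0.0/16 range (the label comes from /etc/networks), and those zeroconf routes are known to interfere with the link-local HAIP addresses that Grid Infrastructure 11.2.0.2 and later assign on the interconnect. As a quick check, a sketch along these lines can flag them (the `routes` variable here is a hypothetical sample mirroring the table above; on a live system you would pipe `ip route show` instead):

```shell
# Sample routing entries; replace with: routes=$(ip route show)
routes="default via 192.168.1.1 dev eth0
169.254.0.0/16 dev eth0
169.254.0.0/16 dev eth1
10.5.3.0/24 dev eth1"

# Count zeroconf (169.254.0.0/16) routes, which can collide with the
# link-local HAIP addresses Grid Infrastructure uses on the interconnect.
zeroconf_count=$(printf '%s\n' "$routes" | grep -c '^169\.254')
echo "zeroconf routes: $zeroconf_count"
```

If such routes show up on Red Hat / Oracle Linux, the usual remedy is to add `NOZEROCONF=yes` to /etc/sysconfig/network and restart networking, though whether that applies to your exact 11.2.0 patch level is worth confirming against Oracle's documentation.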