!! ORACM always FAILS on one node - Oracle 9i RAC - SLES9 - 9.2.0.8 ORACM

Hi,
I really need help with this. I have applied all the patches possible. I tried sharing the quorum.dbf as an NFS device, a raw device, and an iSCSI LUN, and I patched ORACM to 9.2.0.5, 9.2.0.6, and now 9.2.0.8. The setup has two HP DL360 servers with SLES9 SP2 (x86_64) and Oracle 9.2 RAC.
The problem is that the cluster manager starts on one node, but when I run ./ocmstart.sh on the other node, it always fails. The cm.log file is pasted below, and I get the same errors at all patch levels. The quorum.dbf is set up as an iSCSI LUN on a NetApp filer, which is then bound to a raw device on the host. Whichever node I start the Oracle Cluster Manager on first works; the other node always fails with the errors shown below.
It also keeps complaining "InitializeCM: query_module() failed" about the hangcheck timer, even though the hangcheck-timer module is already loaded and I can see it in the output of /sbin/lsmod.
I would really appreciate help on this. This is my master's project at school and I can't graduate if it doesn't work. Please provide some guidance.
thanks
vishal
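For context, the raw-device binding for the quorum LUN on SLES9 is typically set up along the lines below; the device names are illustrative placeholders, not taken from the post:
raw /dev/raw/raw1 /dev/sdb1              # bind the iSCSI LUN partition to a raw device (on both nodes)
chown oracle:dba /dev/raw/raw1
chmod 660 /dev/raw/raw1
ln -s /dev/raw/raw1 /oradata/quorum.dbf  # the path the cluster manager opens, per cm.log below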
CM.LOG
tweedledum:/u01/app/oracle/product/920/oracm/log # cat cm.log
oracm, version[ 9.2.0.8.0.01 ] started {Tue Feb 13 00:56:16 2007 }
KernelModuleName is hangcheck-timer {Tue Feb 13 00:56:16 2007 }
OemNodeConfig(): Network Address of node0: 1.1.1.3 (port 9998)
{Tue Feb 13 00:56:16 2007 }
OemNodeConfig(): Network Address of node1: 1.1.1.4 (port 9998)
{Tue Feb 13 00:56:16 2007 }
WARNING: OemInit2: Opened file(/oradata/quorum.dbf 6), tid = main:182900764192 file = oem.c, line = 503 {Tue Feb 13 00:56:16 2007 }
InitializeCM: ModuleName = hangcheck-timer {Tue Feb 13 00:56:16 2007 }
ClusterListener: Spawned with tid 0x4080e960 pid: 19662 {Tue Feb 13 00:56:16 2007 }
ERROR: InitializeCM: query_module() failed, tid = main:182900764192 file = cmstartup.c, line = 341 {Tue Feb 13 00:56:16 2007 }
Debug Hang : ClusterListener (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
Debug Hang :StartNMMon (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
Debug Hang : CmConnectListener (PID=19662):Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
CreateLocalEndpoint(): Network Address: 1.1.1.4
{Tue Feb 13 00:56:16 2007 }
PollingThread: Spawned with tid 0x40c10960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
Debug Hang :PollingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
SendingThread: Spawned with tid 0x41012960, 0x41012960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
DiskPingThread: Spawned with tid 0x40e11960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
Debug Hang : DiskPingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
Debug Hang :SendingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
UpdateNodeState(): node(1) added udpated {Tue Feb 13 00:56:19 2007 }
HandleUpdate(): SYNC(1) from node(0) completed {Tue Feb 13 00:56:19 2007 }
HandleUpdate(): NODE(0) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(1) {Tue Feb 13 00:56:19 2007 }
HandleUpdate(): NODE(1) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(2) {Tue Feb 13 00:56:19 2007 }
--- DUMP GROUP STATE DB ---
--- END OF GROUP STATE DUMP ---
--- Begin Dump ---
oracm, version[ 9.2.0.8.0.01 ] started {Tue Feb 13 00:56:16 2007 }
TRACE: LogListener: Spawned with tid 0x4060d960., tid = LogListener:1080088928 file = logging.c, line = 116 {Tue Feb 13 00:56:16 2007 }
TRACE: Can't read registry value for HeartBeat, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
TRACE: Can't read registry value for PollInterval, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
TRACE: Can't read registry value for WatchdogTimerMargin, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
TRACE: Can't read registry value for WatchdogSafetyMargin, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }KernelModuleName is hangcheck-timer {Tue Feb 13 00:56:16 2007 }
TRACE: Can't read registry value for ClientTimeout, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
TRACE: InitNMInfo: setting clientTimeout to 140s based on MissCount 210 and PollInterval 1000ms, tid = main:182900764192 file = nmconfig.c, line = 138 {Tue Feb 13 00:56:16 2007 }
TRACE: InitClusterDb(): getservbyname on CMSrvr failed - 0 : assigning 9998, tid = main:182900764192 file = nmconfig.c, line = 208 {Tue Feb 13 00:56:16 2007 }OemNodeConfig(): Network Address of node0: 1.1.1.3 (port 9998)
{Tue Feb 13 00:56:16 2007 }
OemNodeConfig(): Network Address of node1: 1.1.1.4 (port 9998)
{Tue Feb 13 00:56:16 2007 }
TRACE: OemCreateListenPort: bound at 9998, tid = main:182900764192 file = oem.c, line = 907 {Tue Feb 13 00:56:16 2007 }
TRACE: InitClusterDb(): found my node info at 1 name tweedledum, priv int-dum, port 3623, tid = main:182900764192 file = nmconfig.c, line = 261 {Tue Feb 13 00:56:16 2007 }
TRACE: InitClusterDb(): Local Node(1) NodeName[int-dum], tid = main:182900764192 file = nmconfig.c, line = 279 {Tue Feb 13 00:56:16 2007 }
TRACE: InitClusterDb(): Cluster(Oracle) with (2) Defined Nodes, tid = main:182900764192 file = nmconfig.c, line = 282 {Tue Feb 13 00:56:16 2007 }
TRACE: OEMInits(): CM Disk File (/oradata/quorum.dbf), tid = main:182900764192 file = oem.c, line = 248 {Tue Feb 13 00:56:16 2007 }
WARNING: OemInit2: Opened file(/oradata/quorum.dbf 6), tid = main:182900764192 file = oem.c, line = 503 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(0) rcfg(1) wrtcnt(1171356979) lastcnt(0) alive(1171356979), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(1) rcfg(1) wrtcnt(180) lastcnt(0) alive(1), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(2) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(3) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(4) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(5) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(6) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(7) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(8) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(9) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(10) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(11) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(12) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(13) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(14) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(15) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(16) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(17) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(18) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(19) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(20) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(21) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(22) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(23) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(24) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(25) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(26) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(27) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(28) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(29) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(30) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(31) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(32) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(33) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(34) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(35) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(36) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(37) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(38) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(39) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(40) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(41) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(42) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(43) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(44) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(45) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(46) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(47) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(48) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(49) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(50) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(51) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(52) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(53) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(54) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(55) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(56) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(57) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(58) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(59) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(60) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(61) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(62) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
TRACE: ReadOthersDskInfo(): node(63) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }InitializeCM: ModuleName = hangcheck-timer {Tue Feb 13 00:56:16 2007 }
ClusterListener: Spawned with tid 0x4080e960 pid: 19662 {Tue Feb 13 00:56:16 2007 }
ERROR: InitializeCM: query_module() failed, tid = main:182900764192 file = cmstartup.c, line = 341 {Tue Feb 13 00:56:16 2007 }Debug Hang : ClusterListener (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: ClusterListener (pid=19662, tid=1082190176): Registered with watchdog daemon., tid = ClusterListener:1082190176 file = nmlistener.c, line = 76 {Tue Feb 13 00:56:16 2007 }
TRACE: CmConnectListener: Spawned with tid 0x40a0f960., tid = CMConnectListerner:1084291424 file = cmclient.c, line = 216 {Tue Feb 13 00:56:16 2007 }Debug Hang :StartNMMon (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: StartNMMon (pid=19662, tid=-1782829536): Registered with watchdog daemon., tid = main:182900764192 file = cmnodemon.c, line = 254 {Tue Feb 13 00:56:16 2007 }Debug Hang : CmConnectListener (PID=19662):Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: CmConnectListener (pid=19662, tid=1084291424): Registered with watchdog daemon., tid = CMConnectListerner:1084291424 file = cmclient.c, line = 247 {Tue Feb 13 00:56:16 2007 }CreateLocalEndpoint(): Network Address: 1.1.1.4
{Tue Feb 13 00:56:16 2007 }
TRACE: StartClusterJoin(): clusterState(0) nodeState(0), tid = main:182900764192 file = nmmember.c, line = 282 {Tue Feb 13 00:56:16 2007 }PollingThread: Spawned with tid 0x40c10960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
Debug Hang :PollingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: PollingThread (pid=19662, tid=1086392672): Registered with watchdog daemon., tid = PollingThread:1086392672 file = nmmember.c, line = 765 {Tue Feb 13 00:56:16 2007 }SendingThread: Spawned with tid 0x41012960, 0x41012960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
DiskPingThread: Spawned with tid 0x40e11960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
Debug Hang : DiskPingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: DiskPingThread (pid=19662, tid=1088493920): Registered with watchdog daemon., tid = DiskPingThread:1088493920 file = nmmember.c, line = 1083 {Tue Feb 13 00:56:16 2007 }Debug Hang :SendingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
TRACE: SendingThread (pid=19662, tid=1090595168): Registered with watchdog daemon., tid = SendingThread:1090595168 file = nmmember.c, line = 581 {Tue Feb 13 00:56:16 2007 }
TRACE: HandleJoin(): src[1] dest[1] dom[0] seq[1] sync[0], tid = ClusterListener:1082190176 file = nmlisten.c, line = 346 {Tue Feb 13 00:56:16 2007 }
TRACE: HandleJoin(): JOIN from node(1)->(1), tid = ClusterListener:1082190176 file = nmlisten.c, line = 362 {Tue Feb 13 00:56:16 2007 }
TRACE: HandleStatus(): node(0) UNKNOWN, tid = ClusterListener:1082190176 file = nmlisten.c, line = 404 {Tue Feb 13 00:56:17 2007 }
TRACE: HandleStatus(): src[0] dest[1] dom[0] seq[6] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 415 {Tue Feb 13 00:56:17 2007 }
TRACE: HandleSync(): src[0] dest[1] dom[0] seq[7] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 506 {Tue Feb 13 00:56:17 2007 }
TRACE: SendAck(): node(0) domain(0) syncSeqNo(1) type(11), tid = ClusterListener:1082190176 file = nmmember.c, line = 1922 {Tue Feb 13 00:56:17 2007 }
TRACE: HandleVote(): src[0] dest[1] dom[0] seq[8] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 643 {Tue Feb 13 00:56:18 2007 }
TRACE: SendVoteInfo(): node(0) domain(0) syncSeqNo(1), tid = ClusterListener:1082190176 file = nmmember.c, line = 1736 {Tue Feb 13 00:56:18 2007 }
TRACE: HandleUpdate(): src[0] dest[1] dom[0] seq[9] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 849 {Tue Feb 13 00:56:19 2007 }
TRACE: UpdateNodeState(): nodeNum 0, newState 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1153 {Tue Feb 13 00:56:19 2007 }
TRACE: UpdateNodeState(): nodeNum 1, newState 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1153 {Tue Feb 13 00:56:19 2007 }UpdateNodeState(): node(1) added udpated {Tue Feb 13 00:56:19 2007 }
TRACE: SendAck(): node(0) domain(0) syncSeqNo(1) type(15), tid = ClusterListener:1082190176 file = nmmember.c, line = 1922 {Tue Feb 13 00:56:19 2007 }
TRACE: HandleUpdate(): about to QueueClientEvent 0, 1, tid = ClusterListener:1082190176 file = nmlisten.c, line = 960 {Tue Feb 13 00:56:19 2007 }
TRACE: QueueClientEvent(): Sending Event(1) , tid = ClusterListener:1082190176 file = nmlisten.c, line = 1386 {Tue Feb 13 00:56:19 2007 }
TRACE: QueueClientEvent: Node[0] state = 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1390 {Tue Feb 13 00:56:19 2007 }
TRACE: QueueClientEvent: Node[1] state = 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1390 {Tue Feb 13 00:56:19 2007 }HandleUpdate(): SYNC(1) from node(0) completed {Tue Feb 13 00:56:19 2007 }
TRACE: HandleUpdate: saving incarnation value as 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 983 {Tue Feb 13 00:56:19 2007 }
HandleUpdate(): NODE(0) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(1) {Tue Feb 13 00:56:19 2007 }
HandleUpdate(): NODE(1) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(2) {Tue Feb 13 00:56:19 2007 }
TRACE: HandleStatus(): src[1] dest[1] dom[0] seq[2] sync[2], tid = ClusterListener:1082190176 file = nmlisten.c, line = 415 {Tue Feb 13 00:56:19 2007 }
TRACE: StartNMMon(): attached as node 1, tid = main:182900764192 file = cmnodemon.c, line = 288 {Tue Feb 13 00:56:19 2007 }
TRACE: StartNMMon: starting reconfig(2), tid = main:182900764192 file = cmnodemon.c, line = 395 {Tue Feb 13 00:56:19 2007 }
TRACE: UpdateEventValue: *(bfffe1f0) = (1, 1), tid = main:182900764192 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
TRACE: UpdateEventValue: *(401bbeb0) = (3, 1), tid = main:182900764192 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
TRACE: ReconfigThread: started for reconfig (2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 180 {Tue Feb 13 00:56:19 2007 }NMEVENT_RECONFIG [00][00][00][00][00][00][00][03] {Tue Feb 13 00:56:19 2007 }
TRACE: CleanupNodeContexts(): cleaning up nodes, rcfg(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 671 {Tue Feb 13 00:56:19 2007 }
TRACE: DisconnectNode(): about to disconnect 0, tid = Reconfig Thread:1092696416 file = cmipc.c, line = 851 {Tue Feb 13 00:56:19 2007 }
TRACE: DisconnectNode(): waiting for 0 listeners to terminate, tid = Reconfig Thread:1092696416 file = cmipc.c, line = 874 {Tue Feb 13 00:56:19 2007 }
TRACE: UpdateEventValue: *(401be778) = (0, 1), tid = Reconfig Thread:1092696416 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
TRACE: CleanupNodeContexts(): successful cleanup of nodes rcfg(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 690 {Tue Feb 13 00:56:19 2007 }
TRACE: EstablishMasterNode(): MASTER is node(0) reconfigs(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 832 {Tue Feb 13 00:56:19 2007 }
TRACE: IncrementEventValue: *(401b97c0) = (1, 1), tid = Reconfig Thread:1092696416 file = unixinc.c, line = 365 {Tue Feb 13 00:56:19 2007 }
TRACE: PrepareForConnectsX: still waiting at (0), tid = PrepareForConnectsX:1094797664 file = cmipc.c, line = 279 {Tue Feb 13 00:56:19 2007 }
TRACE: IncrementEventValue: *(401b97c0) = (2, 2), tid = PrepareForConnectsX:1094797664 file = unixinc.c, line = 365 {Tue Feb 13 00:56:19 2007 }--- End Dump ---

Set LD_ASSUME_KERNEL before starting the cluster manager:
export LD_ASSUME_KERNEL=2.4.19
export ORACLE_HOME=/oracle/app/oracle/product/9.2.0
rm -f /oracle/app/oracle/product/9.2.0/oracm/log/cm.log
rm -f /oracle/app/oracle/product/9.2.0/oracm/log/ocmstart.ts
$ORACLE_HOME/oracm/bin/ocmstart.sh
tail -f /oracle/app/oracle/product/9.2.0/oracm/log/cm.log
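The cm.log also shows "InitializeCM: query_module() failed" for the hangcheck-timer, so before running ocmstart.sh it is worth confirming on the failing node that the module is loaded with sane parameters. A minimal sketch, assuming the hangcheck_tick/hangcheck_margin values commonly used with 9.2 RAC (the modprobe config path is a SLES9 assumption):
/sbin/lsmod | grep hangcheck                                            # confirm hangcheck_timer is loaded
/sbin/modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180   # typical 9.2 values
echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf.local  # SLES location, an assumption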

Similar Messages

  • Oracle database not starting up in oracle 10g RAC

    Hi!
    Recently I came across a problem with a one-node Oracle 10g RAC. When the Oracle database is started, it gives an ORA-03113: end-of-file on communication channel error while opening. When I looked at the alert log and other trace files I found a "disk group is exhausted" error, and it is not able to create .dbf files. It is not a production server; I put the archive log destination in the SAN, and even the spfile (the content of init_database.ora) is in the SAN.
    I tried the asmcmd utility to delete the archive log files, but as Oracle is not available I cannot get to the asmcmd prompt.
    How do I change the archive log destination and remove the old archive log files (as it is a testing environment, we can remove them) from the SAN? Please let me know.
    Thanks & Regards
    Srikanth MVS
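    One way to approach the question above - redirecting archiving to a location with free space and purging the old archive logs - is sketched below. The instance name, destination path, and retention window are placeholders, not values from the post; the database needs to be at least mounted:
    export ORACLE_SID=PROD1
    echo "alter system set log_archive_dest_1='LOCATION=/u02/arch' scope=both sid='*';" | sqlplus -s "/ as sysdba"
    echo "crosscheck archivelog all; delete noprompt archivelog all completed before 'sysdate-1';" | rman target /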

    keithrust wrote:
    On VMware there's a known issue with Oracle databases on a Windows client not starting up properly all the time, and a manual startup using oradim -start -sid <whatever> is required to get it fully running. Hmmm, I have done this several times and never seen such an issue. Which "known issue", and by whom, are you talking about?
    I created a brand new Oracle VM Windows 2003 32-bit server, installed the Oracle drivers for paravirtualization, and whammo, the problem is still here. I'm sure you are missing something somewhere in the config. Right now you're on a supported configuration; you could either raise an SR with support or get help from your peers on the Oracle Database General forum.
    Ah, but it's not a Windows issue. On a non-VM Windows box the database starts just fine all the time. Again, this is a known issue acknowledged by Oracle on the VMware side; I'm just surprised it exists on the Oracle VM side. Again, give more details about this "known issue". I have never heard about it, even though I've been around for years.
    I was afraid you were going to ask that. I'll have to search for it again, but I think you can do the same as well... Well, I doubt you could find a Metalink note about Oracle on VMware. So far, Oracle has always refused to support databases on operating systems virtualized with VMware (or any VM software other than Oracle VM). Based on that, you can be sure your "known issue" is not an issue on Oracle VM.
    If you want more help, again, give more details about your issue.
    Nicolas.

  • Can RAC and RAC One Node share the same servers ?

    Does anyone know if it is possible for RAC and the new 11gR2 RAC One Node to share the same set of physical servers, i.e. in effect having two clusters sharing the same set of servers (though you could argue RAC One Node is a different type of clustering, or even that it is not real clustering at all - more instance transporting)?
    Or does standard RAC always require exclusive use of the physical servers it is using as its nodes?
    Any thoughts appreciated
    Jim

    Jimbo wrote:
    Does anyone know if it is possible for RAC and the new 11gR2 RAC One Node to share the same set of physical servers, i.e. in effect having two clusters sharing the same set of servers (though you could argue RAC One Node is a different type of clustering, or even that it is not real clustering at all - more instance transporting)?
    Or does standard RAC always require exclusive use of the physical servers it is using as its nodes?
    Hi Jim,
    To deploy RAC you need Oracle Grid Infrastructure for a cluster (a.k.a. Oracle Clusterware) installed first.
    What determines whether it is single instance, RAC, or RAC One Node is the Oracle Database installation.
    So Oracle Clusterware supports RAC, RAC One Node, and single-instance databases on the same cluster.
    You will need one installation (Oracle home) for each feature.
    e.g. on the same cluster:
    ---> Grid Infrastructure: GRID_HOME=/u01/app/11.2.0/grid
    ---> RAC One Node: ORACLE_HOME=/u01/app/oracle/product/11.2.0/racone_11203
    ---> RAC: ORACLE_HOME=/u01/app/oracle/product/11.2.0/rac_11203
    ---> Single instance: ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_11203
    To me, running RAC One Node and RAC on the same cluster makes little sense, because RAC One Node is RAC with fewer features.
    Regards,
    Levi Pereira
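    If both styles do end up on the same cluster, srvctl shows how each database is registered; a quick sketch with placeholder database names (11.2.0.2 and later report the RAC One Node type explicitly):
    srvctl config database -d racdb        # regular RAC configuration
    srvctl config database -d raconedb     # reports Type: RACOneNode and its candidate servers
    srvctl status database -d raconedb     # shows which node the single instance is currently on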

  • Query running slow in one node

    Hi All,
    We are running a 4-node Oracle 10g RAC (Linux 64-bit). A query runs fast on one node, but the same query runs very slowly on another node. And sometimes we see the "pin S wait on X" wait event in the top 5 events.
    Has anyone faced this kind of situation before?
    Thanks,
    Kumar

    Hi,
    Execute your query on the node where it is running slowly, get its SID, and run the scripts below to see which event it is waiting on.
    exec dbms_application_info.set_client_info('@sw2')
    -- file sw2.sql
    col event  format     a25  heading "Wait Event" trunc
    col state  format     a15  heading "Wait State" trunc
    col siw    format   99999  heading "Waited So|Far (ms)"
    col wt     format 9999999  heading "Time Waited|(ms)"
    select event,
           state,
           seconds_in_wait siw,
           wait_time wt
    from   v$session_wait
    where  sid = &sid
    order by event;
    exec dbms_application_info.set_client_info('@sw1');
    -- file  sw1.sql
    set linesize 30000
    set pagesize 200
    col sid      format    9999  heading "SID"
    col username format     a10  heading "USERNAME"
    col osuser   format     a20  heading "OSUSER"
    col event    format     a25  heading "Wait Event" trunc
    col state    format     a15  heading "Wait State" trunc
    col siw      format   99999  heading "Waited So|Far (ms)"
    col wt       format 9999999  heading "Time Waited|(ms)"
    col sw1      format 9999999  heading "File"
    col sw2      format 9999999  heading "Block"
    col Objeto   format a50
    select sw.event,
           sw.p1,
           sw.p2,
           sw.p3,
           sw.state,
           s.sid,
           S.osuser,
           s.username,
           nvl(s.program, s.module),
           sw.seconds_in_wait siw,
           sw.wait_time wt
      from gv$session_wait sw,
           gv$session s
    WHERE sw.sid = s.sid
       and sw.EVENT NOT LIKE 'SQL%'
       and username is not NULL
       and s.inst_id = sw.inst_id
       and sw.event not like 'PX%'
    order by 1, 6, 7;
    Regards,
    Levi Pereira
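    The two scripts above are meant to be saved as sw1.sql and sw2.sql and run from SQL*Plus on the slow node, for example:
    sqlplus -s "/ as sysdba" @sw1.sql      # cluster-wide view from gv$session_wait
    sqlplus -s "/ as sysdba" @sw2.sql      # prompts for &sid and shows that session's current wait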

  • Shared Storage allocation for Oracle 10g RAC

    Hi,
    I am about to implement a two-node Oracle 10g RAC on Windows 2008 (32-bit) servers. We are going to use EVA4400 storage and we have 600 GB for Oracle RAC and the database.
    I intend to follow the steps given in RACGuides_Rac10gR2OnWindows.pdf mentioned in Note 811271.1 (RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows)).
    I would like to know how I should allocate the storage for OCR, voting disk, database, and Flash Recovery Area - meaning how many LUNs, and of what size, should be created and presented to both servers. To start with I need to create a 25 GB database. We will be using raw devices for OCR and voting disk, and ASM for the database and Flash Recovery Area.
    Please advice.
    Thanks,
    Thiru

    I am about to implement a two-node Oracle 10g RAC on Windows 2008 (32-bit) servers. Oracle 10g on Windows 2008 is not supported. Anyway, installing a cluster today I would use a 64-bit operating system together with a 64-bit database; everything else is no longer state of the art.
    As of today I would go with 11g Release 1 (11.1.0.7.0) plus the latest patches for Clusterware and ASM. Because you are using Windows 2008 you must use 11g Release 1 for the database as well. Bear in mind that Windows 2008 R2 is at this point in time not supported for 11gR1 or 11gR2 either.
    I would like to know how I should allocate the storage for OCR, voting disk, database, and Flash Recovery Area - meaning how many LUNs, and of what size, should be created and presented to both servers. This is covered in the installation documents. Rule of thumb: at least three LUNs of 2 GB for the OCR and three LUNs of 2 GB for the voting disk.
    To start with I need to create a 25 GB database. We will be using raw devices for OCR and voting disk, and ASM for the database and Flash Recovery Area. According to Oracle's recommendations: two disk groups, one for database files and the other for the flashback area.
    But your setup may be different (more disk groups, ...).
    Ronny Egner
    My blog: http://ronnyegner.wordpress.com

  • Oracle 10g RAC to 11g RAC Upgrade on Solaris

    Hi,
    We are planning to migrate a 4-node Oracle 10g RAC on Solaris 9 to 11g with Solaris 10. We'd like to know what the best path to take would be. We cannot afford any downtime!
    Options: Are these feasible? Which option is best? Any documents links?
    a) Do a rolling upgrade of Oracle from 10g to 11g, then take down individual nodes, upgrade the Solaris OS from 9 to 10, and bring them back into the cluster. Are there any known issues taking this path? Is a rolling upgrade like this possible?
    b) Do an upgrade of the Solaris OS from 9 to 10 on each node and then bring them back up? Is this practical? Does Oracle allow different versions of the OS running on different nodes?
    c) Use Data Guard with two different RAC environments (two nodes each). How would this work? Is it the only possible way? Any steps, please?
    Thanks

    a) Do a rolling upgrade of Oracle from 10g to 11g, then take down individual nodes, upgrade the Solaris OS from 9 to 10, and bring them back into the cluster. Are there any known issues taking this path? Is a rolling upgrade like this possible? Hi,
    first of all I would not change several components (OS, database version) at the same time. My recommendation is to make small steps and start with the operating system first. My second recommendation is to test everything in your dev or test environment before doing the upgrades in the production environment. Trust me: you will face problems :-) So you had better try it beforehand!
    b) Do an upgrade of the Solaris OS from 9 to 10 on each node and then bring them back up? Is this practical? Does Oracle allow different versions of the OS running on different nodes? As far as I know you can run different operating system versions on different nodes as long as they are supported (Solaris 9 and 10 are).
    Ronny Egner
    My blog: http://ronnyegner.wordpress.com

  • Oracle RAC ONE Node ! Inquiry

    Hi,
    Good Day.
    In my understanding, the Oracle 11gR2 RAC One Node "online database relocation" ability can address ONLY planned-downtime maintenance windows.
    I mean that in case of any catastrophic event, i.e. failover / unplanned downtime, a production downtime window is a must, since the failed instance needs to be restarted on the standby node.
    Please comment.
    Thanks
    M Ahmad

    Muhammad Ahmad wrote:
    Hi Sharma,
    I understand.
    but I was talking about downtime windows. In my understanding, the customer will face a downtime window during failover to the secondary node or during a restart on the same node (the time needed to restart the database instance). ONLY for planned maintenance activities can the "online database instance relocation" capability be used to relocate the database instance to another node without any downtime, as it uses a "shutdown transactional" approach.
    Do you agree with my understanding? Yes.
    During failure of an instance it will be restarted (downtime occurs). It is not possible to use Omotion in this case.
    Using Omotion:
    Omotion moves a RAC One Node instance from one server to another, without any downtime
    • Use Cases
    * Load balancing
    * Database + OS patching and maintenance
    • Oracle supplied tools control migration
    * Services are not accepting connections on both nodes at the same time
    * The migrated instance is shut down transactional once services have moved
    * A maximum of 30 minutes is allowed for connections to migrate (then shutdown abort)
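    From 11.2.0.2 onwards the Omotion functionality described above is exposed through srvctl; a sketch with placeholder names:
    srvctl relocate database -d raconedb -n node2 -w 30   # -w: minutes to wait before shutdown abort
    srvctl status database -d raconedb                    # confirm where the instance now runs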

  • Oracle 11gR2 RAC Root.sh Failed On The Second Node

    Hello,
    When I install Oracle 11gR2 RAC on AIX 7.1, root.sh succeeds on the first node but fails on the second node.
    I get the error described in "Root.sh Failed On The Second Node With Error ORA-15018 ORA-15031 ORA-15025 ORA-27041 [ID 1459711.1]" during the Oracle installation.
    Applies to:
    Oracle Server - 11gR2 RAC
    EMC VNX 500
    IBM AIX on POWER Systems (64-bit)
    The disk /dev/rhdiskpower0 does not show up in the kfod output on the second node. It is an EMC multipath disk device.
    But the disk can be found with AIX commands.
    Any help would be appreciated!
    Thanks

    The suggested solution is to uninstall "EMC Solutions Enabler", but on this machine I can only find "EMC Migration Enabler", and it cannot be removed without removing EMC PowerPath.
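    Before rerunning root.sh on the second node, it can help to compare what ASM discovery sees on each node with kfod from the Grid home; the path and discovery string below are illustrative:
    export ORACLE_HOME=/u01/app/11.2.0/grid
    $ORACLE_HOME/bin/kfod asm_diskstring='/dev/rhdiskpower*' disks=all   # run on both nodes and compare
    ls -l /dev/rhdiskpower0                                              # ownership and permissions should match node 1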

  • Oracle RAC one node

    Why did Oracle introduce Oracle RAC One Node in 11gR2?
    I have gone through the Oracle documents on RAC One Node, but I couldn't find much added advantage beyond instance failover. We already have technologies like Data Guard and some third-party software for instance failover.
    So why did Oracle introduce RAC One Node in the new release, i.e. 11gR2?
    What exactly does Oracle want to provide with Oracle RAC One Node?
    Thanks...
    Bharath

    Why RAC One Node?
    Oracle RAC One Node is a single instance of Oracle RAC that runs on one node in a cluster. The benefit of the RAC One Node option is that it allows you to consolidate many databases into one cluster without a lot of overhead, while also providing the high-availability benefits of failover protection, as well as online rolling patch application and rolling upgrades of the Oracle Clusterware.
    Another aspect of RAC One Node is that it allows you to limit the CPU utilization of individual database instances within the cluster through a feature called Resource Manager instance caging, which gives you the ability to dynamically change the limit as required.
    Furthermore, with RAC One Node there is no limit on server scalability: if applications outgrow the resources a single node can supply, you can upgrade the applications online to Oracle RAC.
    In the event that the node running Oracle RAC One Node becomes saturated and runs out of resources, you can migrate the instance to another node in the cluster using a new utility called Omotion. The Omotion feature in Oracle RAC 11gR2 allows you to migrate a running instance to another server without downtime or disruption of service in your environment.
    I hope this helps you understand.
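    The instance caging mentioned above is enabled per instance with two parameters; a minimal sketch (the plan, CPU limit, and instance name are placeholders):
    echo "alter system set resource_manager_plan='DEFAULT_PLAN' scope=both sid='*';" | sqlplus -s "/ as sysdba"
    echo "alter system set cpu_count=2 scope=both sid='PROD1';" | sqlplus -s "/ as sysdba"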

  • Oracle Applications 11i Load Balancing does not work with RAC one Node

    Hi all,
    Could you help me to resolve this issue.
    Architecture environment is :
    - One APPS tier node
    - Two-node Oracle Database Appliance (primary node 1 holds INSTANCE_1 and the secondary node is configured to hold INSTANCE_2), i.e. RAC One Node.
    - The primary node has instance_name SIGM_1 and the secondary node has instance_name SIGM_2, but in RAC One Node the secondary instance is not alive.
    We converted our EBS 11i environment to RAC following the note "Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i" [ID 823586.1].
    When testing database failover, Oracle Applications 11i load balancing no longer works.
    The root cause is that when the primary node of the RAC One Node is down, the instance with instance_name SIGM_1 is automatically relocated to the surviving node.
    During the failover test, we expected that when the primary node goes down, the secondary node would start or relocate the database with instance_name SIGM_2, and in that case the Oracle Applications load balancing would work.
    Currently, when the primary node goes down, the instance SIGM_1 is relocated to the secondary node, which causes the Oracle Applications load balancing to fail.
    Thank you for your advice.
    Moussa
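    For reference, how the RAC One Node database and its instance prefix are registered can be checked with srvctl; the database name SIGM below is inferred from the instance names in the post and is an assumption:
    srvctl config database -d SIGM     # shows Type: RACOneNode, the instance name prefix and candidate servers
    srvctl status database -d SIGM     # shows the running instance name (SIGM_1 or SIGM_2) and its node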

    This is something I observed a long time ago for Safari (ie: around version 1). I'm not sure this is Safari, per se, but OpenSSL that is responsible for the behavior. I'm pretty sure Chrome does this and I've seen some Linux browsers do it.
    What I have done at the last two companies I've worked for is recommend that our clients do not use SSL SessionID as the way of tracking sticky sessions on web servers, but instead using IP address. This works in nearly all cases and has few downsides. The other solution is to use some sort of session sharing on your web servers to mitigate the issue (which also means that your web servers aren't a point of failure for your users' sessions). (One of the products I supported had no session information stored on the web servers, so we could safely round-robin requests, the other product could be implemented with a Session State Server... but in most cases we just used IP address to load balance with). The other solution is to configure your load balancer to terminate the SSL tunnel. You get some other benefits from this, such as allowing your load balancer to reduce the number of actual connections to the web servers. I've seen many devices setup this way.
    One thing to consider through all this is that, due to the way internet standards work, this really can't be termed a bug on anyone's part. There is no guarantee in the SSL/TLS standards that a client will return the same SSL session ID for each request, and there is no requirement that subsequent requests will even use the same tunnel. Remember, HTTP is a stateless protocol; each request is considered a new request by the web server, and everything else is just trickery to try to get it to work the way you want. You can be annoyed at Safari's behavior, but it's been this way for over 5 years by my count, so I don't expect it to change.

  • Patch 9004119 to use Oracle RAC One Node Utilities

    Hi all,
    I am installing Oracle Grid Infrastructure (11.2.0.3) and following the document ug-raconenode-2009-130760.pdf about RAC One Node installation, as Oracle recommends. That document talks about installing patch 9004119 in order to use the Oracle RAC One Node utilities (such as Omotion). If I am using 11.2.0.3, is it necessary to install patch 9004119?
    Thanks in advance.
    Leonardo.

    user10674190 wrote:
    Hi all,
    I am installing Oracle Grid Infrastructure (11.2.0.3) and following the document ug-raconenode-2009-130760.pdf about RAC One Node installation, as Oracle recommends. That document talks about installing patch 9004119 in order to use the Oracle RAC One Node utilities (such as Omotion). If I am using 11.2.0.3, is it necessary to install patch 9004119?
    Thanks in advance.
    Leonardo.
    Patch 9004119 (PATCH FOR RAC ONE NODE SCRIPTS) can be applied only on 11.2.0.1. As you are on 11.2.0.3, there is no need for it.
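    If there is any doubt, OPatch can confirm whether patch 9004119 is present in a given home; a sketch, the home path is a placeholder:
    export ORACLE_HOME=/u01/app/11.2.0.3/grid
    $ORACLE_HOME/OPatch/opatch lsinventory | grep -i 9004119   # no output means the patch is not installed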

  • Details regarding Oracle RAC One node.

    Hi
    I am trying to Google for information on Oracle RAC One Node, but I couldn't get exact details. I even tried Metalink. Could you guys please provide a link or note ID for Oracle RAC One Node that covers what it is, how to set up this one-node RAC, etc.?
    Thanks

    There are a couple of notes:
    http://www.oracle.com/us/products/database/options/rac-one-node/overview/index.html
    http://www.oracle.com/technetwork/products/clustering/overview/ds-rac-one-node-11gr2-185089.pdf
    http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/onenode.htm#BABGAJGH
    http://download.oracle.com/docs/cd/E11882_01/install.112/e17214/racinstl.htm#CIHGGAAE
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17157/architectures.htm#CJAJEAGH
    Aman....

  • Oracle RAC One Node Licensing

    Hi,
    I have planned to install a 2 Nodes RAC 11gR2.
    If I want to start my project by installing a single node in a RAC configuration and later (in a few months) extend it by adding the second node, do I have to license my cluster with the extra-cost EE option Oracle RAC One Node?
    Thanks for answers.
    L.

    Hi,
    If I want to start my project by installing a single node in a RAC configuration and later (in a few months) extend it by adding the second node, do I have to license my cluster with the extra-cost EE option Oracle RAC One Node?
    Refer to this link:
    http://www.oracle.com/us/corporate/pricing/price-lists/index.html
    thanks,
    X A H E E R

  • Is RAC node reconfiguration needed when the disk array fails on one node?

    Hi ,
    We recently had all the filesystems on node 1 of the RAC cluster turn read-only. Upon further investigation it was revealed that this was due to a disk array failure on node 1. The database instance on node 2 is up and running fine. The OS team is rebuilding node 1 from scratch and will restore the Oracle installation from backup.
    My questions, once all files are restored:
    Do we need to add the node to the RAC configuration?
    Do we need to relink the Oracle binaries?
    Can the node be brought up directly once all the Oracle installables are restored properly, or will the Oracle team need to perform additional steps to bring the node into the RAC configuration?
    Thanks,
    Sachin K

    Hi ,
    If the restore fails in some way, we will first need to remove node 1 from the cluster and then add it back, right? Kindly confirm the steps below.
    In case of such a situation, these are the steps we plan to follow:
    Version: 10.2.0.5
    Affected node :prd_node1
    Affected instance :PRDB1
    Surviving Node :prd_node2
    Surviving instance: PRDB2
    DB Listener on prd_node1:LISTENER_PRD01
    ASM listener on prd_node1:LISTENER_PRDASM01
    DB Listener on prd_node2:LISTENER_PRD02
    ASM listener on prd_node2:LISTENER_PRDASM02
    Login to the surviving node .In our case its prd_node2
    Step 1 - Remove ONS information :
    Execute as root the following command to find out the remote port number to be used
    $cat $CRS_HOME/opmn/conf/ons.config
    and remove the information pertaining the node to be deleted using
    #$CRS_HOME/bin/racgons remove_config prd_node1:6200
    Step 2 - Remove resources :
    In this step, the resources that were defined on this node have to be removed. These resources include (a) the database, (b) the instance, and (c) ASM. A list of these can be acquired by running the crs_stat -t command from any node.
    The srvctl remove listener command used below is only applicable in 10.2.0.4 and higher releases, including 11.1.0.6. The command will report an error if the clusterware version is lower than 10.2.0.4; in that case, use netca to remove the listener.
    srvctl remove listener -n prd_node1 -l LISTENER_PRD01
    srvctl remove listener -n prd_node1 -l LISTENER_PRDASM01
    srvctl remove instance -d PRDB -i PRDB1
    srvctl remove asm -n prd_node1 -i +ASM1
    Step 3 Execute rootdeletenode.sh :
    From a node that you are not deleting, execute as root the following command, which will help find out the node number of the node that you want to delete:
    #$CRS_HOME/bin/olsnodes -n
    This number can be passed to the rootdeletenode.sh command, which is to be executed as root from any node that is going to remain in the cluster.
    #$CRS_HOME/install/rootdeletenode.sh prd_node1,1
    Step 5 - Update the Inventory:
    From a node which is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names that are going to remain in the cluster. This step needs to be performed once per home (Clusterware, ASM and RDBMS homes).
    ## Example of running runInstaller to update inventory in Clusterware home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=prd_node2" CRS=TRUE
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in ASM home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ASM_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in RDBMS home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
    We need the steps to add the node back into the cluster. Can anyone please help us with this?
    Thanks,
    Sachin K
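    As a starting point for adding the rebuilt node back, the 10.2 add-node flow usually begins from a surviving node with addNode.sh for each home, roughly as below (a hedged sketch; the node, private, and VIP names follow the naming in the post and are assumptions):
    # run as the clusterware owner from prd_node2, Clusterware home first; afterwards run rootaddnode.sh / root.sh as instructed
    $CRS_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={prd_node1}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={prd_node1-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prd_node1-vip}"
    # repeat from the ASM and RDBMS homes, then re-create the listeners with netca and add the instance with srvctl
    $ASM_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={prd_node1}"
    $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={prd_node1}"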

  • FRA on NFS Oracle RAC One Node

    Hi all,
    we installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except for one little thing: we are trying to change the database to archivelog mode, but when we try to relocate the database we get ORA-19816 "WARNING: Files may exist in ... that are not known to database." and "Linux-x86_64 Error: 37: No locks available".
    The FRA is mounted as an NFS share with the following options: "rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid"
    I searched a lot on the Internet but couldn't find any hint. Can anybody point me to the right installation guide?
    Thanks in advanced

    Hi,
    user10191672 wrote:
    Hi all,
    we installed Oracle RAC One Node on Oracle Linux. Everything seems to work fine except for one little thing: we are trying to change the database to archivelog mode, but when we try to relocate the database we get ORA-19816 "WARNING: Files may exist in ... that are not known to database." and "Linux-x86_64 Error: 37: No locks available".
    The FRA is mounted as an NFS share with the following options: "rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid"
    I searched a lot on the Internet but couldn't find any hint. Can anybody point me to the right installation guide?
    Check if the NFSLOCK service is running; if not, start it:
    # service nfslock status
    From "Mount Options for Oracle files when used with NAS devices" [ID 359515.1]:
    Mount options for Oracle datafiles:
    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
    For RMAN backup sets, image copies, and Data Pump dump files, the "NOAC" mount option should not be specified - that is because RMAN and Data Pump do not check this option and specifying it can adversely affect performance.
    The following NFS options must be specified for an 11.2.0.2 RMAN disk backup directory:
    opts="-fstype=nfs,rsize=65536,wsize=65536,hard,actime=0,intr,nodev,nosuid"
    Hope this helps,
    Levi Pereira
    Edited by: Levi Pereira on Aug 18, 2011 1:20 PM
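    Putting the reply together, on each node that can run the instance the lock service and the datafile mount options end up looking roughly like this; the mount point and filer path are placeholders:
    service nfslock status || service nfslock start      # ORA-19816 with "No locks available" points at missing NFS locking
    # example /etc/fstab entry for the FRA / datafile mount (single line):
    # filer:/vol/fra  /u03/fra  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0 0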
