Root.sh fails in 11gR2 2-node HP-UX installation

Hello,
Environment details:
This is a two-node RAC 11gR2 installation on HP-UX 11.3.
The issue is as follows:
The Grid Infrastructure installation fails at the point of running root.sh, with the following errors on node 1:
CRS-2677: Stop of 'ora.cssdmonitor' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vpar1'
CRS-2677: Stop of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vpar1'
CRS-2677: Stop of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vpar1'
CRS-2677: Stop of 'ora.gipcd' on 'vpar1' succeeded
CRS-4000: Command Start failed, or completed with errors.
CRS-2672: Attempting to start 'ora.gipcd' on 'vpar1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'vpar1'
CRS-2676: Start of 'ora.gipcd' on 'vpar1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'vpar1'
CRS-2676: Start of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vpar1'
CRS-2676: Start of 'ora.cssdmonitor' on 'vpar1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'vpar1'
CRS-2672: Attempting to start 'ora.diskmon' on 'vpar1'
CRS-2676: Start of 'ora.diskmon' on 'vpar1' succeeded
CRS-2674: Start of 'ora.cssd' on 'vpar1' failed
CRS-2679: Attempting to clean 'ora.cssd' on 'vpar1'
CRS-2678: 'ora.cssd' on 'vpar1' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.
CRS-2673: Attempting to stop 'ora.diskmon' on 'vpar1'
CRS-2677: Stop of 'ora.diskmon' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vpar1'
CRS-2677: Stop of 'ora.gpnpd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vpar1'
CRS-2677: Stop of 'ora.mdnsd' on 'vpar1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vpar1'
CRS-2677: Stop of 'ora.gipcd' on 'vpar1' succeeded
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl start resource ora.ctssd -init
Start of resource "ora.ctssd -init" failed
Clusterware exclusive mode start of resource ora.ctssd failed
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2500: Cannot stop resource 'ora.asm' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /orabinary1/cluster/bin/crsctl stop resource ora.asm -init
Stop of resource "ora.asm -init" failed
Failed to stop ASM
CRS-2679: Attempting to clean 'ora.cssd' on 'vpar1'
CRS-2681: Clean of 'ora.cssd' on 'vpar1' succeeded
Initial cluster configuration failed.
The ocrcheck command fails with:
2010-02-25 14:57:08.610: [  OCRASM][1]proprasmo: Failed to open file in dirty mode
2010-02-25 14:57:08.610: [  OCRASM][1]proprasmo: Error in open/create file in dg [newgrid]
[  OCRASM][1]SLOS : SLOS: cat=8, opn=kgfolclcpi1, dep=210, loc=kgfokge
AMDU-00210: No disks found in diskgroup NEWGRID
AMDU-00210: No disks found in diskgroup NEWGRID
2010-02-25 14:57:08.630: [  OCRASM][1]proprasmo: kgfoCheckMount returned [7]
2010-02-25 14:57:08.630: [  OCRASM][1]proprasmo: The ASM instance is down
The disks are already owned by grid:asmadmin, and I have already done chmod 660 on all of them.
The prerequisite checks during the installation do not fail with any particular error, and the installer goes ahead to the point of asking me to run root.sh.
Kindly help.
regards
pg
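
A sanity check worth running from the failing node is to confirm that the grid owner can actually see and read the disks behind the NEWGRID disk group. A rough sketch (the device paths below are placeholders, not the real ones from this system, and GRID_HOME stands for the Grid Infrastructure home):

# as the grid user on vpar1; device paths are examples only
ls -l /dev/rdisk/disk10 /dev/rdisk/disk11              # expect crw-rw---- grid:asmadmin
dd if=/dev/rdisk/disk10 of=/dev/null bs=1024k count=1  # proves the grid user can read the device
# ask ASM's discovery tool whether it sees the disks at all
$GRID_HOME/bin/kfod asm_diskstring='/dev/rdisk/disk*' disks=all

If kfod returns nothing, the AMDU-00210 "No disks found in diskgroup NEWGRID" message is simply a symptom of the discovery string or the device permissions not matching what root.sh expects.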

Unfortunately I can't run any -post checks because the CRS doesn't install correctly. The error occurs during the initial root.sh run on the first node and doesn't allow me to continue.
cluvfy stage -post crsinst -n sun277z1,sun278z1
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "sun277z1"
Checking user equivalence...
User equivalence check passed for user "oracle"
ERROR:
CRS is not installed on any of the nodes
Verification cannot proceed
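
Before root.sh can be re-run after a failure like the one above, the partially configured 11.2 Grid home has to be deconfigured on the failed node. A rough outline, run as root, with GRID_HOME standing in for the actual Grid Infrastructure home (/orabinary1/cluster in the original post):

# see what is left of the lower stack
$GRID_HOME/bin/crsctl stat res -t -init
# deconfigure the partial installation on this node
cd $GRID_HOME/crs/install
$GRID_HOME/perl/bin/perl rootcrs.pl -deconfig -force
# fix the underlying disk-discovery problem, then re-run
$GRID_HOME/root.sh

Only once root.sh has completed successfully on all nodes does "cluvfy stage -post crsinst" have anything to verify.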

Similar Messages

  • Oracle 11gR2 RAC Root.sh Failed On The Second Node

    Hello,
    When I install Oracle 11gR2 RAC on AIX 7.1, root.sh succeeds on the first node but fails on the second node.
    I get the error described in "Root.sh Failed On The Second Node With Error ORA-15018 ORA-15031 ORA-15025 ORA-27041 [ID 1459711.1]" during the Oracle installation.
    Applies to:
    Oracle Server - 11gR2 RAC
    EMC VNX 500
    IBM AIX on POWER Systems (64-bit)
    The disk /dev/rhdiskpower0 does not show up in the kfod output on the second node. It is an EMC multipath disk device.
    But the disk can be found with AIX commands.
    Any help!!
    Thanks

    The suggested solution is to uninstall "EMC Solutions Enabler", but on this machine I only find "EMC Migration Enabler", and that cannot be removed without removing EMC PowerPath.
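
    One way to narrow down whether this is an ownership/permission problem or a discovery-string problem is to probe the PowerPath pseudo device on the second node directly (the device name is taken from the post above; GRID_HOME is the Grid Infrastructure home, and the kfod parameters are a sketch, adjust to your diskstring):

    # on the second node, as the Grid Infrastructure owner
    ls -l /dev/rhdiskpower0
    dd if=/dev/rhdiskpower0 of=/dev/null bs=1024k count=1
    $GRID_HOME/bin/kfod asm_diskstring='/dev/rhdiskpower*' disks=all

    If dd works but kfod still does not list the disk, the problem is in the ASM discovery path rather than in the OS-level device itself.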

  • Root.sh fails for 11gR2 Grid Infrastructure installation on AIX 6.1

    Hello all,
    root.sh fails with the errors below. SR with Oracle opened. Will post the resolution when it is available. Any insights in the meantime? Thank you!
    System information:
    OS: AIX 6.1
    Runcluvfy.sh reported no issue
    Permissions on the raw devices set to 660 and ownership is oracle:dba
    Using external redundancy for ASM, ASM instance is online
    Permissions on block and raw device files
    system1:ux460p1> ls -l /dev/hdisk32
    brw-rw---- 1 oracle dba 17, 32 Mar 11 16:50 /dev/hdisk32
    system11:ux460p1> ls -l /dev/rhdisk32
    crw-rw---- 1 oracle dba 17, 32 Mar 12 15:52 /dev/rhdisk32
    ocrconfig.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:17:19.773: [ OCRCONF][1]ocrconfig starts...
    2010-03-15 19:17:19.775: [ OCRCONF][1]Upgrading OCR data
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.604: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:20.604: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
    2010-03-15 19:17:20.604: [  OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
    2010-03-15 19:17:20.605: [ OCRCONF][1]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:21.412: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.412: [  OCRRAW][1]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-03-15 19:17:21.414: [  OCRRAW][1]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-03-15 19:17:21.414: [  OCRRAW][1]proprinit:problem reading the bootblock or superbloc 22
    2010-03-15 19:17:21.534: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.701: [  OCRRAW][1]iniconfig:No 92 configuration
    2010-03-15 19:17:21.701: [  OCRAPI][1]a_init:6a: Backend init successful
    2010-03-15 19:17:21.764: [ OCRCONF][1]Initialized DATABASE keys
    2010-03-15 19:17:21.770: [ OCRCONF][1]Successfully set skgfr block 0
    2010-03-15 19:17:21.771: [ OCRCONF][1]Exiting [status=success]...
    alert.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:12:00.148
    [client(483478)]CRS-2106:The OLR location /u01/app/grid/cdata/ux460p1.olr is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_483478.log.
    2010-03-15 19:12:00.171
    [client(483478)]CRS-2101:The OLR was formatted using version 3.
    2010-03-15 14:16:18.620
    [ohasd(471204)]CRS-2112:The OLR service started on node ux460p1.
    2010-03-15 14:16:18.720
    [ohasd(471204)]CRS-8017:location: /etc/oracle/lastgasp has 8 reboot advisory log files, 0 were announced and 0 errors occurred
    2010-03-15 14:16:18.847
    [ohasd(471204)]CRS-2772:Server 'ux460p1' has been assigned to pool 'Free'.
    2010-03-15 14:16:54.107
    [ctssd(340174)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:16:54.123
    [ctssd(340174)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:16:54.917
    [ctssd(340174)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 19:17:21.414
    [client(376968)]CRS-1006:The OCR location +DATA is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_376968.log.
    2010-03-15 19:17:21.701
    [client(376968)]CRS-1001:The OCR was formatted using version 3.
    2010-03-15 14:17:24.888
    [crsd(303252)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:17:56.344
    [ctssd(340174)]CRS-2405:The Cluster Time Synchronization Service on host ux460p1 is shutdown by user
    2010-03-15 14:19:14.855
    [ctssd(340188)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:19:14.870
    [ctssd(340188)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:19:15.638
    [ctssd(340188)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 14:19:32.985
    [crsd(417946)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:19:35.250
    [crsd(417946)]CRS-1201:CRSD started on node ux460p1.
    2010-03-15 14:19:35.698
    [ohasd(471204)]CRS-2765:Resource 'ora.crsd' has failed on server 'ux460p1'.
    2010-03-15 14:19:38.928

    Public and private networks are on different devices and subnets.
    There is no logfile named ocrconfig_7833.log.
    I do have ocrconfig_7089.log and ocrconfig_8985.log.
    Here are their contents:
    ocrconfig_7089.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:38:32.518: [ OCRCONF][2819644944]ocrconfig starts...
    2010-11-09 13:38:32.542: [ OCRCONF][2819644944]Upgrading OCR data
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.576: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:38:32.577: [  OCRAPI][2819644944]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]There was no previous version of OCR. error:[PROCL-33: Oracle Local Registry is not configured]
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.578: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.579: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.681: [  OCRAPI][2819644944]a_init:6a: Backend init successful
    2010-11-09 13:38:32.699: [ OCRCONF][2819644944]Initialized DATABASE keys
    2010-11-09 13:38:32.700: [ OCRCONF][2819644944]Exiting [status=success]...
    ocrconfig_8985.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:41:28.169: [ OCRCONF][2281741840]ocrconfig starts...
    2010-11-09 13:41:28.175: [ OCRCONF][2281741840]Upgrading OCR data
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:31.208: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:31.210: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:31.211: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.213: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:31.214: [ OCRCONF][2281741840]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:41:31.216: [  OCRAPI][2281741840]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:41:31.216: [ OCRCONF][2281741840]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.536: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:35.359: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:35.361: [  OCRRAW][2281741840]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:41:35.843: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:36.430: [  OCRRAW][2281741840]iniconfig:No 92 configuration
    2010-11-09 13:41:36.431: [  OCRAPI][2281741840]a_init:6a: Backend init successful
    2010-11-09 13:41:36.540: [ OCRCONF][2281741840]Initialized DATABASE keys
    2010-11-09 13:41:36.545: [ OCRCONF][2281741840]Successfully set skgfr block 0
    2010-11-09 13:41:36.552: [ OCRCONF][2281741840]Exiting [status=success]...
    Both of these log files show errors, yet then they report success?
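
    The early open failures in these logs are often just ocrconfig discovering that there is no pre-existing OCR to upgrade before it formats a new one, which is why the files can show errors and still exit with success. Whether the registry really ended up usable can be checked directly; roughly (as root, GRID_HOME being the Grid Infrastructure home):

    # does the OCR open cleanly now?
    $GRID_HOME/bin/ocrcheck
    # are the voting files registered?
    $GRID_HOME/bin/crsctl query css votedisk
    # state of the lower-stack resources
    $GRID_HOME/bin/crsctl stat res -t -init

    If ocrcheck still cannot open +DATA while the kgfoCheckMount/proprasmo errors persist, the ASM instance or the DATA disk group is not actually available to the clusterware, regardless of the "status=success" lines.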

  • RAC 11gR2 cluster installation: root.sh failed on the 1st node

    Hi,
    Does anybody know why I get the following error when I run root.sh on the 1st node during the Oracle 11gR2 RAC (cluster) installation?
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying dbhome to /usr/local/bin ...
    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying oraenv to /usr/local/bin ...
    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
    Copying coraenv to /usr/local/bin ...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-06-29 14:17:43: Parsing the host name
    2010-06-29 14:17:43: Checking for super user privileges
    2010-06-29 14:17:43: User has super user privileges
    Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
    Creating trace directory
    User oracle has the required capabilities to run CSSD in realtime mode
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'system'..
    Operation successful.
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-2672: Attempting to start 'ora.gipcd' on 'trz1test_rac'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.gipcd' on 'trz1test_rac' succeeded
    CRS-2676: Start of 'ora.mdnsd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.gpnpd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'trz1test_rac'
    CRS-2676: Start of 'ora.cssdmonitor' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'trz1test_rac'
    CRS-2672: Attempting to start 'ora.diskmon' on 'trz1test_rac'
    CRS-2676: Start of 'ora.diskmon' on 'trz1test_rac' succeeded
    CRS-2676: Start of 'ora.cssd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.ctssd' on 'trz1test_rac' succeeded
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'system'..
    Operation successful.
    CRS-2672: Attempting to start 'ora.crsd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.crsd' on 'trz1test_rac' succeeded
    Now formatting voting disk: /data_gpfs/oracle/crs/vdsk.
    CRS-4603: Successful addition of voting disk /data_gpfs/oracle/crs/vdsk.
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 653624f2aa1f4f83bf774e8052889a32 (/data_gpfs/oracle/crs/vdsk) []
    Located 1 voting disk(s).
    CRS-2673: Attempting to stop 'ora.crsd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.crsd' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.ctssd' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.cssd' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.gpnpd' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.gipcd' on 'trz1test_rac' succeeded
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'trz1test_rac'
    CRS-2677: Stop of 'ora.mdnsd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.mdnsd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.mdnsd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.gipcd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.gpnpd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'trz1test_rac'
    CRS-2676: Start of 'ora.cssdmonitor' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'trz1test_rac'
    CRS-2672: Attempting to start 'ora.diskmon' on 'trz1test_rac'
    CRS-2676: Start of 'ora.diskmon' on 'trz1test_rac' succeeded
    CRS-2676: Start of 'ora.cssd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.ctssd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.crsd' on 'trz1test_rac' succeeded
    CRS-2672: Attempting to start 'ora.evmd' on 'trz1test_rac'
    CRS-2676: Start of 'ora.evmd' on 'trz1test_rac' succeeded
    /oracle/grid/bin/srvctl start nodeapps -n trz1test_rac ... failed
    Configure Oracle Grid Infrastructure for a Cluster ... failed
    This is because the ora.eONS daemon is not starting. There is a Metalink note saying that we MIGHT be able to start this daemon manually ... but this is not working.
    ./srvctl status nodeapps -n trz1test_rac
    -n <node_name> option has been deprecated.
    VIP trz1test_rac_vip is enabled
    VIP trz1test_rac_vip is running on node: trz1test_rac
    Network is enabled
    Network is running on node: trz1test_rac
    GSD is disabled
    GSD is not running on node: trz1test_rac
    ONS is enabled
    ONS daemon is running on node: trz1test_rac
    eONS is enabled
    eONS daemon is not running on node: trz1test_rac

    I run my clusterware/DB on AIX 5.3.
    When I run runcluvfy.sh, these are the checks that do not pass:
    Check: Node connectivity of subnet "192.168.1.0"
    Source Destination Connected?
    trz2test_rac:en5 trz2test_rac:en5 yes
    trz2test_rac:en5 trz1test_rac:en5 yes
    trz2test_rac:en5 trz1test_rac:en5 yes
    trz2test_rac:en5 trz1test_rac:en5 yes
    trz2test_rac:en5 trz1test_rac:en5 yes
    trz1test_rac:en5 trz1test_rac:en5 yes
    Result: Node connectivity passed for subnet "192.168.1.0" with node(s) trz2test_rac,trz1test_rac
    Check: TCP connectivity of subnet "192.168.1.0"
    Source Destination Connected?
    trz1test_rac:192.168.1.140 trz2test_rac:192.168.1.142 failed
    trz1test_rac:192.168.1.140 trz2test_rac:192.168.1.142 failed
    Result: TCP connectivity check failed for subnet "192.168.1.0"
    NTP daemon slewing option check failed on some nodes
    PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
    Result: Clock synchronization check using Network Time Protocol(NTP) failed
    NTP shouldn't be a problem, I guess, as the dates are identical on the 2 nodes.
    I have no idea how to fix the TCP connectivity issue on the subnet "192.168.1.0". Some posts suggested it could be a firewall issue. Are there any other causes?
    Thanks to all,
    Paul
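
    For the PRVF-5436 part, the usual remedy on AIX is to run the NTP daemon with the slewing option rather than relying on the clocks merely looking identical. A sketch using the standard AIX SRC commands (adjust to your environment):

    # restart xntpd with slewing (-x) enabled
    stopsrc -s xntpd
    startsrc -s xntpd -a "-x"
    # to make it permanent, add the -x flag to the xntpd entry in /etc/rc.tcpip

    The TCP connectivity failure on 192.168.1.0 is a separate issue and, as the other posts suggest, usually points at a firewall or at the interface cluvfy chose for the check.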

  • Root.sh failed during 11gR2 Grid Infrastructure Installation.

    Hello there!
    I am installing a two-node RAC cluster on OEL 5.4 using 11gR2 Grid Infrastructure.
    During the installation, when I run the root.sh file on my first node, "Oracle1", I get the following errors.
    I reinstalled after clearing everything, but got stuck at the same place again.
    [root@oracle1 ~]# su - root
    [root@oracle1 ~]# /u01/app/11.2.0/grid/root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying dbhome to /usr/local/bin ...
    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying oraenv to /usr/local/bin ...
    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying coraenv to /usr/local/bin ...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-01-18 16:59:53: Parsing the host name
    2010-01-18 16:59:53: Checking for super user privileges
    2010-01-18 16:59:53: User has super user privileges
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Failed to create a peer profile for Oracle Cluster GPnP. gpnptool rc=32512
    Creation of Oracle GPnP peer profile failed for oracle1 at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 4138.
    [root@oracle1 ~]# echo $ORACLE_HOME
    Could you please help me understand why I am having this problem?
    Thanks.
    Regards
    Hems.

    I had the same problem; it was resolved using the steps below (a rough sketch of the commands follows the list).
    1) Disable both the firewall and SELinux
    2) Deconfigure CRS on both nodes
    3) Clear all files from the previous install out of the Oracle home
    4) Reinstall
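
    On OEL 5.4 those steps translate roughly into the following sketch (not a verbatim recipe; GRID_HOME stands for /u01/app/11.2.0/grid here):

    # 1) firewall and SELinux off, on every node
    service iptables stop && chkconfig iptables off
    setenforce 0        # and set SELINUX=disabled in /etc/selinux/config so it survives a reboot
    # 2) deconfigure the failed Grid stack on every node where root.sh was run
    cd $GRID_HOME/crs/install
    $GRID_HOME/perl/bin/perl rootcrs.pl -deconfig -force
    # 3) remove the leftover files from the Grid home, then 4) rerun the installer and root.sh

    The gpnptool rc=32512 value typically corresponds to exit code 127 (command not found), i.e. the gpnptool call itself never ran, which is why a clean environment plus a fresh deconfig and reinstall tends to clear it.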

  • Grid installation: root.sh failed on the first node on Solaris cluster 4.1

    Hi all,
    I'm trying to install Grid Infrastructure (11.2.0.3.0) on a 2-node cluster (OSC 4.1).
    When I run root.sh on the first node, I get the output below:
    xha239080-root-5.11# root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /Grid/CRShome
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    /usr/local/bin is read only. Continue without copy (y/n) or retry (r)? [y]:
    Warning: /usr/local/bin is read only. No files will be copied.
    Creating /var/opt/oracle/oratab file...
    Entries will be added to the /var/opt/oracle/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /Grid/CRShome/crs/install/crsconfig_params
    Creating trace directory
    User ignored Prerequisites during installation
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'xha239080'
    CRS-2676: Start of 'ora.mdnsd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'xha239080'
    CRS-2676: Start of 'ora.gpnpd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.cssd' on 'xha239080' succeeded
    ASM created and started successfully.
    Disk Group DATA created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 9cdb938773bc4f16bf332edac499fd06.
    Successful addition of voting disk 842907db11f74f59bf65247138d6e8f5.
    Successful addition of voting disk 748852d2a5c84f72bfcd50d60f65654d.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 9cdb938773bc4f16bf332edac499fd06 (/dev/did/rdsk/d10s6) [DATA]
    2. ONLINE 842907db11f74f59bf65247138d6e8f5 (/dev/did/rdsk/d8s6) [DATA]
    3. ONLINE 748852d2a5c84f72bfcd50d60f65654d (/dev/did/rdsk/d9s6) [DATA]
    Located 3 voting disk(s).
    Start of resource "ora.cssd" failed
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2674: Start of 'ora.cssd' on 'xha239080' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'xha239080'
    CRS-2681: Clean of 'ora.cssd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'xha239080'
    CRS-2677: Stop of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'xha239080'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-5804: Communication error with agent process
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start Cluster Synchorinisation Service in clustered mode at /Grid/CRShome/crs/install/crsconfig_lib.pm line 1211.
    /Grid/CRShome/perl/bin/perl -I/Grid/CRShome/perl/lib -I/Grid/CRShome/crs/install /Grid/CRShome/crs/install/rootcrs.pl execution failed
    xha239080-root-5.11# history
    Checking ocssd.log, I see something like the following:
    2013-09-16 18:46:24.238: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.3.0, in (clustered) mode with uniqueness value 1379371584
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Environment is production
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Core file size limit extended
    2013-09-16 18:46:24.248: [    CSSD][1]clssscmain: GIPCHA down 1
    2013-09-16 18:46:24.249: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter logsize (8) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for file descriptors is 65536, hard limit is 65536
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for locked memory is 4294967293, hard limit is 4294967293
    2013-09-16 18:46:24.250: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter priority (15) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscSetPrivEnv: Setting priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]clssscSetPrivEnv: unable to set priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]SLOS: cat=-2, opn=scls_mem_lockdown, dep=11, loc=mlockall
    unable to lock memory
    2013-09-16 18:46:24.253: [    CSSD][1](:CSSSC00011:)clssscExit: A fatal error occurred during initialization
    Does anyone have any idea what is going on and how I can fix it?

    Hi,
    Solaris has several issues with DISM, e.g.:
    Solaris 10 and Solaris 11 Shared Memory Locking May Fail (Doc ID 1590151.1)
    It sounds like Solaris Cluster has a similar bug. A "workaround" is to reboot the (cluster) zone; that "fixes" the mlock error. This bug was introduced with updates in September, at least in our environment (Solaris 11.1). Previously I did not have the issue, and now I have to restart the entire zone whenever I stop CRS.
    With 11.2.0.3 the root.sh script can be rerun without prior cleanup, so you should be able to continue the installation at that point after the reboot. After root.sh completes, some configuration assistants need to be run to complete the installation; you will have to execute these manually since your OUI session is gone.
    Kind Regards
    Thomas
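
    In practice, the continuation after rebooting the zone looks roughly like this (paths taken from the post; 11.2.0.3 root.sh can simply be rerun):

    # as root on the node where root.sh failed, after the zone reboot
    /Grid/CRShome/root.sh
    # verify the stack afterwards
    /Grid/CRShome/bin/crsctl check crs
    /Grid/CRShome/bin/crsctl stat res -t -init

    and then the remaining configuration assistants are run manually, since the original OUI session is gone.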

  • Why does root.sh fail on the second node?

    Hi
    After a successful install of Oracle 11g Grid Infrastructure on 2 nodes and running root.sh on node1, root.sh on node2 fails:
    [root@vmorarac2 dev]# /u01/app/product/11.2.0/oracle/root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
        ORACLE_OWNER= oracle
        ORACLE_HOME=  /u01/app/product/11.2.0/oracle
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
       Copying dbhome to /usr/local/bin ...
    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
       Copying oraenv to /usr/local/bin ...
    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
       Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2013-07-17 08:37:10: Parsing the host name
    2013-07-17 08:37:10: Checking for super user privileges
    2013-07-17 08:37:10: User has super user privileges
    Using configuration parameter file: /u01/app/product/11.2.0/oracle/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node vmorarac1, number 1, and is terminating
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'vmorarac2'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'vmorarac2' succeeded
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start 'ora.mdnsd' on 'vmorarac2'
    CRS-2676: Start of 'ora.mdnsd' on 'vmorarac2' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'vmorarac2'
    CRS-2676: Start of 'ora.gipcd' on 'vmorarac2' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'vmorarac2'
    CRS-2676: Start of 'ora.gpnpd' on 'vmorarac2' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'vmorarac2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'vmorarac2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'vmorarac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'vmorarac2'
    CRS-2676: Start of 'ora.diskmon' on 'vmorarac2' succeeded
    CRS-2674: Start of 'ora.cssd' on 'vmorarac2' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'vmorarac2'
    CRS-2681: Clean of 'ora.cssd' on 'vmorarac2' succeeded
    CRS-2673: Attempting to stop 'ora.diskmon' on 'vmorarac2'
    CRS-2677: Stop of 'ora.diskmon' on 'vmorarac2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    CRS-2672: Attempting to start 'ora.cssd' on 'vmorarac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'vmorarac2'
    CRS-2674: Start of 'ora.diskmon' on 'vmorarac2' failed
    CRS-2679: Attempting to clean 'ora.diskmon' on 'vmorarac2'
    CRS-5016: Process "/u01/app/product/11.2.0/oracle/bin/diskmon" spawned by agent "/u01/app/product/11.2.0/oracle/bin/orarootagent.bin" for action "clean" failed: details at "(:CLSN00010:)" in "/u01/app/product/11.2.0/oracle/log/vmorarac2/agent/ohasd/orarootagent_root/orarootagent_root.log"
    CRS-2681: Clean of 'ora.diskmon' on 'vmorarac2' succeeded
    CRS-2674: Start of 'ora.cssd' on 'vmorarac2' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'vmorarac2'
    CRS-2681: Clean of 'ora.cssd' on 'vmorarac2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/product/11.2.0/oracle/bin/crsctl start resource ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE
    Start of resource "ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE" failed
    Failed to start CTSS
    Failed to start Oracle Clusterware stack
    [root@vmorarac2 dev]#
    [root@vmorarac2 dev]#
    try again:
    [root@vmorarac2 bin]# ./crsctl start resource ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE
    CRS-2672: Attempting to start 'ora.cssd' on 'vmorarac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'vmorarac2'
    CRS-2674: Start of 'ora.diskmon' on 'vmorarac2' failed
    CRS-2679: Attempting to clean 'ora.diskmon' on 'vmorarac2'
    CRS-5016: Process "/u01/app/product/11.2.0/oracle/bin/diskmon" spawned by agent "/u01/app/product/11.2.0/oracle/bin/orarootagent.bin" for action "clean" failed: details at "(:CLSN00010:)" in "/u01/app/product/11.2.0/oracle/log/vmorarac2/agent/ohasd/orarootagent_root/orarootagent_root.log"
    CRS-2681: Clean of 'ora.diskmon' on 'vmorarac2' succeeded
    CRS-2674: Start of 'ora.cssd' on 'vmorarac2' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'vmorarac2'
    CRS-2681: Clean of 'ora.cssd' on 'vmorarac2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    [root@vmorarac2 bin]# ps -ef | grep u01
    root      8913     1  0 08:37 ?        00:00:06 /u01/app/product/11.2.0/oracle/bin/ohasd.bin reboot
    oracle   10507     1  0 08:39 ?        00:00:02 /u01/app/product/11.2.0/oracle/bin/oraagent.bin
    oracle   10522     1  0 08:39 ?        00:00:00 /u01/app/product/11.2.0/oracle/bin/mdnsd.bin
    oracle   10534     1  0 08:39 ?        00:00:00 /u01/app/product/11.2.0/oracle/bin/gipcd.bin
    oracle   10548     1  0 08:39 ?        00:00:39 /u01/app/product/11.2.0/oracle/bin/gpnpd.bin
    root     11723     1  0 11:00 ?        00:00:03 /u01/app/product/11.2.0/oracle/bin/cssdmonitor
    [oracle@vmorarac2 bin]$ ./crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
    CRS-4534: Cannot communicate with Event Manager
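
    Since CRS-4402 shows that vmorarac2 got far enough to see the active CSS on vmorarac1 but ora.cssd still fails to start, it is worth confirming interconnect and shared-storage visibility from vmorarac2 before retrying. A rough check (node names from the post; the storage path is a placeholder for the actual OCR/voting device):

    # from either node, as the grid software owner
    cluvfy comp nodecon -n vmorarac1,vmorarac2 -verbose
    cluvfy comp ssa -n vmorarac1,vmorarac2 -s /dev/raw/raw1
    # and on vmorarac2 itself
    /u01/app/product/11.2.0/oracle/bin/crsctl check css

    If either check fails, that points at the interconnect or the shared device rather than at the clusterware software.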

    log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2013-07-17 08:38:04.485: [    AGFW][3481860944] Starting the agent: /u01/app/product/11.2.0/oracle/log/vmorarac2/agent/ohasd/orarootagent_root/
    2013-07-17 08:38:04.485: [   AGENT][3481860944] Agent framework initialized, Process Id = 10319
    2013-07-17 08:38:04.487: [ USRTHRD][3481860944] Utils::getCrsHome crsHome /u01/app/product/11.2.0/oracle
    2013-07-17 08:38:04.487: [ USRTHRD][3481860944] Process::convertPidToString pid = 10319
    2013-07-17 08:38:04.488: [    AGFW][3481860944] SERVER IPC CONNECT STR: (ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_IPC_SOCKET_11))
    2013-07-17 08:38:04.488: [CLSFRAME][3481860944] Inited lsf context 0x317e9e0
    2013-07-17 08:38:04.488: [CLSFRAME][3481860944] Initing CLS Framework messaging
    2013-07-17 08:38:04.488: [CLSFRAME][3481860944] New Framework state: 2
    2013-07-17 08:38:04.488: [CLSFRAME][3481860944] M2M is starting...
    2013-07-17 08:38:04.490: [ CRSCOMM][3481860944] m_pClscCtx=0x31d1bd0m_pUgblm=0x31d5720
    2013-07-17 08:38:04.490: [ CRSCOMM][3481860944] Starting send thread
    2013-07-17 08:38:04.490: [ CRSCOMM][1119435072] clsIpc: sendWork thread started.
    2013-07-17 08:38:04.491: [ CRSCOMM][1129924928] IPC Client thread started listening
    2013-07-17 08:38:04.491: [ CRSCOMM][1129924928] init data sent from server
    2013-07-17 08:38:04.491: [CLSFRAME][3481860944] New IPC Member:{Relative|Node:0|Process:0|Type:2}:OHASD:vmorarac2
    2013-07-17 08:38:04.491: [CLSFRAME][3481860944] New process connected to us ID:{Relative|Node:0|Process:0|Type:2} Info:OHASD:vmorarac2
    2013-07-17 08:38:04.492: [CLSFRAME][3481860944] Starting thread model named: MultiThread
    2013-07-17 08:38:04.492: [CLSFRAME][3481860944] Starting thread model named: SingleThread
    2013-07-17 08:38:04.492: [CLSFRAME][3481860944] Starting thread model named: SingleThreadT
    2013-07-17 08:38:04.492: [CLSFRAME][3481860944] New Framework state: 3
    2013-07-17 08:38:04.493: [    AGFW][3481860944] Agent Framework started successfully
    2013-07-17 08:38:04.493: [    AGFW][1182374208] Agfw engine module has enabled...
    2013-07-17 08:38:04.493: [CLSFRAME][1182374208] Module Enabling is complete
    2013-07-17 08:38:04.493: [CLSFRAME][1182374208] New Framework state: 6
    2013-07-17 08:38:04.493: [    AGFW][1182374208] Agent is started with userid: root , expected user: root
    2013-07-17 08:38:04.493: [    AGFW][1182374208] Agent sending message to PE: AGENT_HANDSHAKE[Proxy] ID 20484:14
    2013-07-17 08:38:04.505: [    AGFW][1182374208] Agent received the message: RESTYPE_ADD[ora.crs.type] ID 8196:358
    2013-07-17 08:38:04.506: [    AGFW][1182374208] Added new restype: ora.crs.type
    2013-07-17 08:38:04.506: [    AGFW][1182374208] Agent sending last reply for: RESTYPE_ADD[ora.crs.type] ID 8196:358
    2013-07-17 08:38:04.506: [    AGFW][1182374208] Agent received the message: RESTYPE_ADD[ora.ctss.type] ID 8196:360
    2013-07-17 08:38:04.506: [    AGFW][1182374208] Added new restype: ora.ctss.type
    2013-07-17 08:38:04.507: [    AGFW][1182374208] Agent sending last reply for: RESTYPE_ADD[ora.ctss.type] ID 8196:360
    2013-07-17 08:38:04.516: [    AGFW][1182374208] Agent received the message: RESTYPE_ADD[ora.diskmon.type] ID 8196:362
    2013-07-17 08:38:04.516: [    AGFW][1182374208] Added new restype: ora.diskmon.type
    2013-07-17 08:38:04.517: [    AGFW][1182374208] Agent sending last reply for: RESTYPE_ADD[ora.diskmon.type] ID 8196:362
    2013-07-17 08:38:04.519: [    AGFW][1182374208] Agent received the message: RESTYPE_ADD[ora.drivers.acfs.type] ID 8196:364
    2013-07-17 08:38:04.520: [    AGFW][1182374208] Added new restype: ora.drivers.acfs.type
    2013-07-17 08:38:04.520: [    AGFW][1182374208] Agent sending last reply for: RESTYPE_ADD[ora.drivers.acfs.type] ID 8196:364
    2013-07-17 08:38:04.521: [    AGFW][1182374208] Agent received the message: RESOURCE_ADD[ora.diskmon 1 1] ID 4356:366
    2013-07-17 08:38:04.521: [    AGFW][1182374208] Added new resource: ora.diskmon 1 1 to the agfw
    2013-07-17 08:38:04.522: [    AGFW][1182374208] Agent sending last reply for: RESOURCE_ADD[ora.diskmon 1 1] ID 4356:366
    2013-07-17 08:38:04.522: [    AGFW][1182374208] Agent received the message: RESOURCE_START[ora.diskmon 1 1] ID 4098:367
    2013-07-17 08:38:04.522: [    AGFW][1182374208] Preparing START command for: ora.diskmon 1 1
    2013-07-17 08:38:04.522: [    AGFW][1182374208] ora.diskmon 1 1 state changed from: UNKNOWN to: STARTING
    2013-07-17 08:38:04.526: [    AGFW][1161394496] Executing command: start for resource: ora.diskmon 1 1
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] clsn_agent::start {
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] DaemonAgent{
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] }DaemonAgent
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] DiskmonAgent::DiskmonAgent {
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] InitAttrs {
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] __IS_HASD_AGENT=TRUE
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] }InitAttrs
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] DiskmonAgent::DiskmonAgent }
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] DiskmonAgent::start {
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Arg Value = -d
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Arg Value = -f
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Total Count of Environment Variables = 3
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Adding Environment Variables _ORA_AGENT_ACTION=TRUE
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Adding Environment Variables __IS_HASD_AGENT=
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Adding Environment variable from USR_ORA_ENV ORACLE_USER=oracle
    2013-07-17 08:38:04.527: [ora.diskmon][1161394496] [start] Utils:execCmd action = 1 flags = 5 ohome = (null) cmdname = diskmon.
    2013-07-17 08:38:04.528: [ora.diskmon][1161394496] [start] getOracleHomeAttrib: oracle_home = /u01/app/product/11.2.0/oracle
    2013-07-17 08:38:04.528: [ora.diskmon][1161394496] [start] Utils:execCmd Running the binary from /u01/app/product/11.2.0/oracle/bin/diskmon
    2013-07-17 08:38:04.531: [CRSTIMER][1091324224] Timer Thread Starting.
    2013-07-17 08:38:04.533: [ora.diskmon][1161394496] [start] execCmd ret = 0
    2013-07-17 08:38:04.533: [ora.diskmon][1161394496] [start] }DaemonAgent::start
    2013-07-17 08:38:10.534: [ora.diskmon][1161394496] [start] DiskmonAgent::connect {
    2013-07-17 08:38:10.534: [ora.diskmon][1161394496] [start] Process::convertPidToString pid = 10319
    2013-07-17 08:38:10.535: [ora.diskmon][1161394496] [start] DiskmonAgent::connect }
    2013-07-17 08:38:10.535: [ora.diskmon][1161394496] [start] DiskmonAgent::start }
    2013-07-17 08:38:10.535: [ora.diskmon][1161394496] [start] clsn_agent::start }
    2013-07-17 08:38:10.535: [    AGFW][1161394496] Command: start for resource: ora.diskmon 1 1 completed with status: SUCCESS
    2013-07-17 08:38:10.535: [    AGFW][1182374208] Agent sending reply for: RESOURCE_START[ora.diskmon 1 1] ID 4098:367
    2013-07-17 08:38:10.537: [    AGFW][1161394496] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:38:10.538: [ora.diskmon][1161394496] [check] DiskmonAgent::check {
    2013-07-17 08:38:10.538: [ora.diskmon][1161394496] [check] DiskmonAgent::check } 0
    2013-07-17 08:38:10.538: [    AGFW][1161394496] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:38:10.538: [    AGFW][1182374208] ora.diskmon 1 1 state changed from: STARTING to: ONLINE
    2013-07-17 08:38:10.538: [    AGFW][1182374208] Started implicit monitor for:ora.diskmon 1 1
    2013-07-17 08:38:10.538: [    AGFW][1182374208] Agent sending last reply for: RESOURCE_START[ora.diskmon 1 1] ID 4098:367
    2013-07-17 08:38:30.543: [    AGFW][1182374208] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:38:30.544: [    AGFW][1161394496] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:38:30.544: [ora.diskmon][1161394496] [check] DiskmonAgent::check {
    2013-07-17 08:38:30.544: [ora.diskmon][1161394496] [check] DiskmonAgent::check } 0
    2013-07-17 08:38:30.545: [    AGFW][1161394496] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:38:50.550: [    AGFW][1182374208] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:38:50.551: [    AGFW][1161394496] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:38:50.551: [ora.diskmon][1161394496] [check] DiskmonAgent::check {
    2013-07-17 08:38:50.551: [ora.diskmon][1161394496] [check] DiskmonAgent::check } 0
    2013-07-17 08:38:50.551: [    AGFW][1161394496] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:39:02.782: [    AGFW][1182374208] Agent received the message: RESOURCE_STOP[ora.diskmon 1 1] ID 4099:547
    2013-07-17 08:39:02.782: [    AGFW][1182374208] Preparing STOP command for: ora.diskmon 1 1
    2013-07-17 08:39:02.782: [    AGFW][1182374208] ora.diskmon 1 1 state changed from: ONLINE to: STOPPING
    2013-07-17 08:39:02.783: [    AGFW][1161394496] Executing command: stop for resource: ora.diskmon 1 1
    2013-07-17 08:39:02.783: [ora.diskmon][1161394496] [stop] clsn_agent::stop {
    2013-07-17 08:39:02.783: [ora.diskmon][1161394496] [stop] DiskmonAgent::stop {
    2013-07-17 08:39:02.783: [ora.diskmon][1161394496] [stop] DiskmonAgent::stop }
    2013-07-17 08:39:02.783: [ora.diskmon][1161394496] [stop] clsn_agent::stop }
    2013-07-17 08:39:02.783: [    AGFW][1161394496] Command: stop for resource: ora.diskmon 1 1 completed with status: SUCCESS
    2013-07-17 08:39:02.784: [    AGFW][1161394496] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:39:02.784: [ora.diskmon][1161394496] [check] DiskmonAgent::check {
    2013-07-17 08:39:02.784: [ora.diskmon][1161394496] [check] DiskmonAgent::check } 2
    2013-07-17 08:39:02.784: [    AGFW][1161394496] check for resource: ora.diskmon 1 1 completed with status: PLANNED_OFFLINE
    2013-07-17 08:39:02.784: [    AGFW][1182374208] Agent sending reply for: RESOURCE_STOP[ora.diskmon 1 1] ID 4099:547
    2013-07-17 08:39:02.785: [    AGFW][1182374208] ora.diskmon 1 1 state changed from: STOPPING to: PLANNED_OFFLINE
    2013-07-17 08:39:02.785: [    AGFW][1182374208] Agent sending last reply for: RESOURCE_STOP[ora.diskmon 1 1] ID 4099:547
    2013-07-17 08:39:02.785: [    AGFW][1182374208] Agent has no resources to be monitored.Sending suicide request.
    2013-07-17 08:39:02.786: [    AGFW][1182374208] Agent sending message to PE: AGENT_SUICIDE[Proxy] ID 20486:72
    2013-07-17 08:39:02.789: [    AGFW][1182374208] Agent is commiting suicide.
    2013-07-17 08:39:02.790: [    AGFW][1182374208] Agent is exiting with exit code: 1
    2013-07-17 08:39:12.176: [    AGFW][1664629584] Starting the agent: /u01/app/product/11.2.0/oracle/log/vmorarac2/agent/ohasd/orarootagent_root/
    2013-07-17 08:39:12.176: [   AGENT][1664629584] Agent framework initialized, Process Id = 10581
    2013-07-17 08:39:12.178: [ USRTHRD][1664629584] Utils::getCrsHome crsHome /u01/app/product/11.2.0/oracle
    2013-07-17 08:39:12.178: [ USRTHRD][1664629584] Process::convertPidToString pid = 10581
    2013-07-17 08:39:12.178: [    AGFW][1664629584] SERVER IPC CONNECT STR: (ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_IPC_SOCKET_11))
    2013-07-17 08:39:12.178: [CLSFRAME][1664629584] Inited lsf context 0xd9309e0
    2013-07-17 08:39:12.179: [CLSFRAME][1664629584] Initing CLS Framework messaging
    2013-07-17 08:39:12.179: [CLSFRAME][1664629584] New Framework state: 2
    2013-07-17 08:39:12.179: [CLSFRAME][1664629584] M2M is starting...
    2013-07-17 08:39:12.180: [ CRSCOMM][1664629584] m_pClscCtx=0xd983bd0m_pUgblm=0xd987720
    2013-07-17 08:39:12.180: [ CRSCOMM][1664629584] Starting send thread
    2013-07-17 08:39:12.181: [ CRSCOMM][1115052352] clsIpc: sendWork thread started.
    2013-07-17 08:39:12.181: [ CRSCOMM][1125542208] IPC Client thread started listening
    2013-07-17 08:39:12.181: [ CRSCOMM][1125542208] init data sent from server
    2013-07-17 08:39:12.181: [CLSFRAME][1664629584] New IPC Member:{Relative|Node:0|Process:0|Type:2}:OHASD:vmorarac2
    2013-07-17 08:39:12.181: [CLSFRAME][1664629584] New process connected to us ID:{Relative|Node:0|Process:0|Type:2} Info:OHASD:vmorarac2
    2013-07-17 08:39:12.182: [CLSFRAME][1664629584] Starting thread model named: MultiThread
    2013-07-17 08:39:12.182: [CLSFRAME][1664629584] Starting thread model named: SingleThread
    2013-07-17 08:39:12.182: [CLSFRAME][1664629584] Starting thread model named: SingleThreadT
    2013-07-17 08:39:12.182: [CLSFRAME][1664629584] New Framework state: 3
    2013-07-17 08:39:12.182: [    AGFW][1664629584] Agent Framework started successfully
    2013-07-17 08:39:12.182: [    AGFW][1177991488] Agfw engine module has enabled...
    2013-07-17 08:39:12.183: [CLSFRAME][1177991488] Module Enabling is complete
    2013-07-17 08:39:12.183: [CLSFRAME][1177991488] New Framework state: 6
    2013-07-17 08:39:12.183: [    AGFW][1177991488] Agent is started with userid: root , expected user: root
    2013-07-17 08:39:12.183: [    AGFW][1177991488] Agent sending message to PE: AGENT_HANDSHAKE[Proxy] ID 20484:14
    2013-07-17 08:39:12.192: [    AGFW][1177991488] Agent received the message: RESTYPE_ADD[ora.crs.type] ID 8196:886
    2013-07-17 08:39:12.192: [    AGFW][1177991488] Added new restype: ora.crs.type
    2013-07-17 08:39:12.192: [    AGFW][1177991488] Agent sending last reply for: RESTYPE_ADD[ora.crs.type] ID 8196:886
    2013-07-17 08:39:12.198: [    AGFW][1177991488] Agent received the message: RESTYPE_ADD[ora.ctss.type] ID 8196:888
    2013-07-17 08:39:12.198: [    AGFW][1177991488] Added new restype: ora.ctss.type
    2013-07-17 08:39:12.199: [    AGFW][1177991488] Agent sending last reply for: RESTYPE_ADD[ora.ctss.type] ID 8196:888
    2013-07-17 08:39:12.204: [    AGFW][1177991488] Agent received the message: RESTYPE_ADD[ora.diskmon.type] ID 8196:890
    2013-07-17 08:39:12.204: [    AGFW][1177991488] Added new restype: ora.diskmon.type
    2013-07-17 08:39:12.204: [    AGFW][1177991488] Agent sending last reply for: RESTYPE_ADD[ora.diskmon.type] ID 8196:890
    2013-07-17 08:39:12.209: [    AGFW][1177991488] Agent received the message: RESTYPE_ADD[ora.drivers.acfs.type] ID 8196:892
    2013-07-17 08:39:12.209: [    AGFW][1177991488] Added new restype: ora.drivers.acfs.type
    2013-07-17 08:39:12.210: [    AGFW][1177991488] Agent sending last reply for: RESTYPE_ADD[ora.drivers.acfs.type] ID 8196:892
    2013-07-17 08:39:12.210: [    AGFW][1177991488] Agent received the message: RESOURCE_ADD[ora.diskmon 1 1] ID 4356:894
    2013-07-17 08:39:12.210: [    AGFW][1177991488] Added new resource: ora.diskmon 1 1 to the agfw
    2013-07-17 08:39:12.210: [    AGFW][1177991488] Agent sending last reply for: RESOURCE_ADD[ora.diskmon 1 1] ID 4356:894
    2013-07-17 08:39:12.210: [    AGFW][1177991488] Agent received the message: RESOURCE_START[ora.diskmon 1 1] ID 4098:895
    2013-07-17 08:39:12.211: [    AGFW][1177991488] Preparing START command for: ora.diskmon 1 1
    2013-07-17 08:39:12.211: [    AGFW][1177991488] ora.diskmon 1 1 state changed from: UNKNOWN to: STARTING
    2013-07-17 08:39:12.216: [    AGFW][1167501632] Executing command: start for resource: ora.diskmon 1 1
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] clsn_agent::start {
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] DaemonAgent{
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] }DaemonAgent
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] DiskmonAgent::DiskmonAgent {
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] InitAttrs {
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] __IS_HASD_AGENT=TRUE
    2013-07-17 08:39:12.216: [ora.diskmon][1167501632] [start] }InitAttrs
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] DiskmonAgent::DiskmonAgent }
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] DiskmonAgent::start {
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Arg Value = -d
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Arg Value = -f
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Total Count of Environment Variables = 3
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Adding Environment Variables _ORA_AGENT_ACTION=TRUE
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Adding Environment Variables __IS_HASD_AGENT=
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Adding Environment variable from USR_ORA_ENV ORACLE_USER=oracle
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Utils:execCmd action = 1 flags = 5 ohome = (null) cmdname = diskmon.
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] getOracleHomeAttrib: oracle_home = /u01/app/product/11.2.0/oracle
    2013-07-17 08:39:12.217: [ora.diskmon][1167501632] [start] Utils:execCmd Running the binary from /u01/app/product/11.2.0/oracle/bin/diskmon
    2013-07-17 08:39:12.220: [CRSTIMER][1198971200] Timer Thread Starting.
    2013-07-17 08:39:12.220: [ora.diskmon][1167501632] [start] execCmd ret = 0
    2013-07-17 08:39:12.220: [ora.diskmon][1167501632] [start] }DaemonAgent::start
    2013-07-17 08:39:18.222: [ora.diskmon][1167501632] [start] DiskmonAgent::connect {
    2013-07-17 08:39:18.222: [ora.diskmon][1167501632] [start] Process::convertPidToString pid = 10581
    2013-07-17 08:39:18.222: [ora.diskmon][1167501632] [start] DiskmonAgent::connect }
    2013-07-17 08:39:18.222: [ora.diskmon][1167501632] [start] DiskmonAgent::start }
    2013-07-17 08:39:18.222: [ora.diskmon][1167501632] [start] clsn_agent::start }
    2013-07-17 08:39:18.222: [    AGFW][1167501632] Command: start for resource: ora.diskmon 1 1 completed with status: SUCCESS
    2013-07-17 08:39:18.223: [    AGFW][1177991488] Agent sending reply for: RESOURCE_START[ora.diskmon 1 1] ID 4098:895
    2013-07-17 08:39:18.223: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:39:18.223: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:39:18.224: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:39:18.224: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:39:18.225: [    AGFW][1177991488] ora.diskmon 1 1 state changed from: STARTING to: ONLINE
    2013-07-17 08:39:18.225: [    AGFW][1177991488] Started implicit monitor for:ora.diskmon 1 1
    2013-07-17 08:39:18.225: [    AGFW][1177991488] Agent sending last reply for: RESOURCE_START[ora.diskmon 1 1] ID 4098:895
    2013-07-17 08:39:38.231: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:39:38.232: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:39:38.232: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:39:38.232: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:39:38.232: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:39:58.237: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:39:58.238: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:39:58.238: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:39:58.238: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:39:58.238: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:40:12.107: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:966
    2013-07-17 08:40:18.243: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:40:18.244: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:40:18.244: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:40:18.244: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:40:18.244: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:40:38.250: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:40:38.251: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:40:38.251: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:40:38.251: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:40:38.251: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:40:42.116: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:982
    2013-07-17 08:40:58.245: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:40:58.246: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:40:58.247: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:40:58.247: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:40:58.247: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:41:12.125: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:998
    2013-07-17 08:41:18.252: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:41:18.252: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:41:18.253: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:41:18.253: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:41:18.253: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:41:38.259: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:41:38.260: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:41:38.260: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:41:38.260: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:41:38.260: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:41:58.255: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:41:58.256: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:41:58.256: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:41:58.256: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:41:58.256: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:42:12.134: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1026
    2013-07-17 08:42:18.261: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:42:18.262: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:42:18.262: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:42:18.262: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:42:18.262: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:42:38.268: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:42:38.269: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:42:38.269: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:42:38.269: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:42:38.269: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:42:58.265: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:42:58.266: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:42:58.266: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:42:58.267: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:42:58.267: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:43:12.144: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1054
    2013-07-17 08:43:18.272: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:43:18.272: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:43:18.273: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:43:18.273: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:43:18.273: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:43:38.278: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:43:38.278: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:43:38.279: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:43:38.279: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:43:38.279: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:43:42.154: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1070
    2013-07-17 08:43:58.284: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:43:58.285: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:43:58.285: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:43:58.285: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:43:58.285: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:44:12.154: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1086
    2013-07-17 08:44:18.291: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:44:18.292: [    AGFW][1167501632] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:44:18.292: [ora.diskmon][1167501632] [check] DiskmonAgent::check {
    2013-07-17 08:44:18.292: [ora.diskmon][1167501632] [check] DiskmonAgent::check } 0
    2013-07-17 08:44:18.292: [    AGFW][1167501632] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:44:38.296: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:44:38.297: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:44:38.297: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:44:38.297: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:44:38.297: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:44:42.163: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1102
    2013-07-17 08:44:58.302: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:44:58.303: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:44:58.303: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:44:58.304: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:44:58.304: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:45:12.174: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1118
    2013-07-17 08:45:18.309: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:45:18.309: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:45:18.310: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:45:18.310: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:45:18.310: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:45:38.315: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:45:38.316: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:45:38.316: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:45:38.316: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:45:38.316: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:45:42.183: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1134
    2013-07-17 08:45:58.312: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:45:58.313: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:45:58.313: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:45:58.313: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:45:58.313: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:46:12.192: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1150
    2013-07-17 08:46:18.318: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:46:18.319: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:46:18.319: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:46:18.319: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:46:18.319: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:46:38.325: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:46:38.326: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:46:38.326: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:46:38.326: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:46:38.326: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:46:42.203: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1166
    2013-07-17 08:46:58.332: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:46:58.333: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:46:58.333: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:46:58.333: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:46:58.333: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:47:12.203: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1182
    2013-07-17 08:47:18.338: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:47:18.339: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:47:18.339: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:47:18.339: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:47:18.340: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:47:38.345: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:47:38.345: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:47:38.346: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:47:38.346: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:47:38.346: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:47:42.211: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1198
    2013-07-17 08:47:58.351: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:47:58.352: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:47:58.352: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:47:58.352: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:47:58.352: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:48:12.220: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1214
    2013-07-17 08:48:18.358: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:48:18.359: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:48:18.359: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:48:18.359: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:48:18.359: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:48:38.365: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:48:38.366: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:48:38.366: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:48:38.366: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:48:38.366: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:48:42.230: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1230
    2013-07-17 08:48:58.370: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:48:58.371: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:48:58.371: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:48:58.371: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:48:58.371: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:49:12.238: [    AGFW][1177991488] Agent received the message: AGENT_HB[Engine] ID 12293:1246
    2013-07-17 08:49:18.377: [    AGFW][1177991488] CHECK initiated by timer for: ora.diskmon 1 1
    2013-07-17 08:49:18.378: [    AGFW][1157011776] Executing command: check for resource: ora.diskmon 1 1
    2013-07-17 08:49:18.378: [ora.diskmon][1157011776] [check] DiskmonAgent::check {
    2013-07-17 08:49:18.379: [ora.diskmon][1157011776] [check] DiskmonAgent::check } 0
    2013-07-17 08:49:18.379: [    AGFW][1157011776] check for resource: ora.diskmon 1 1 completed with status: ONLINE
    2013-07-17 08:49:23.262: [    AGFW][1177991488] Agent received the message: RESOURCE_STOP[ora.diskmon 1 1] ID 4099:1310
    2013-07-17 0
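    To see why OHASD asked the agent to stop ora.diskmon at 08:49, it can help to filter the agent trace and the ohasd log for state transitions around that timestamp. A minimal sketch, assuming the agent log file sits under the directory shown in the trace above (the exact file name may differ on your system):

        # resource state transitions and start/stop commands seen by the orarootagent
        grep -E "state changed|RESOURCE_(STOP|START)" \
            /u01/app/product/11.2.0/oracle/log/vmorarac2/agent/ohasd/orarootagent_root/orarootagent_root.log | tail -50

        # what ohasd itself was doing in the same window
        grep "2013-07-17 08:49" /u01/app/product/11.2.0/oracle/log/vmorarac2/ohasd/ohasd.log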

  • Root.sh failing at node 2 during 10g RAC installation on VMware

    Hi All,
    I'm very new to Oracle RAC. I was trying to install 10gR2 RAC on VMware, but while running the root.sh script on node 2 I get an error like "Failure at check of Final Oracle Stack". I have searched this forum and other sources for a solution but have not found a proper one, so any help from you experts is highly appreciated. Let me describe how I got this error.
    1. Install VMware and RHEL 4 as the guest OS.
    2. Create a virtual machine RAC1 and configure shared storage by adding 5 virtual disks (1 OCR, 1 voting, 3 ASM).
    3. Clone RAC1 to a new virtual machine, RAC2.
    4. After binding and assigning permissions, all raw devices (raw1, raw2, raw3, raw4, raw5) are available on both virtual machines.
    5. Set up user equivalence on both systems; each node is reachable from the other.
    6. The Oracle 10gR2 Clusterware install completed on RAC1 and I ran the required scripts, but while running root.sh on RAC2 I got the above-mentioned error.
    I ran an OCR check on both nodes and found that they differ; the cssd.log also reports an OCR mismatch.
    My question: the firewall is disabled and all the configurations look correct, yet I still get the same error.
    Can anybody provide some suggestions on resolving this issue?
    Thanks
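    Since the two nodes disagree on the OCR, one quick cross-check (a sketch only, run as root on each node; the device names are the ones listed below and may differ on your setup) is to confirm that both nodes point at, and can read, the same OCR device:

        # both nodes should report the same OCR location and a clean integrity check
        ocrcheck

        # the OCR location file must be identical on both nodes
        cat /etc/oracle/ocr.loc

        # confirm the raw device behind the OCR is actually readable from this node
        dd if=/dev/raw/raw1 of=/dev/null bs=1M count=1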

    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdc1
    /dev/raw/raw3 /dev/sdd1
    /dev/raw/raw4 /dev/sde1
    /dev/raw/raw5 /dev/sdf1
    The first one is for the OCR, the second is for the voting disk, and the rest are for ASM. The permissions are also the same across the two virtual machines.
    root:oinstall /dev/raw/raw1
    oracle:oinstall /dev/raw/raw2
    oracle:oinstall /dev/raw/raw3
    oracle:oinstall /dev/raw/raw4
    oracle:oinstall /dev/raw/raw5
    Thanks
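    For reference, on RHEL 4 the raw bindings are usually made persistent in /etc/sysconfig/rawdevices and the ownership re-applied at boot. A sketch only, using the mapping above; the exact modes Oracle expects for 10gR2 should be taken from the install guide:

        # /etc/sysconfig/rawdevices (one binding per line, as listed above)
        /dev/raw/raw1 /dev/sdb1
        /dev/raw/raw2 /dev/sdc1
        # ... raw3-raw5 likewise

        service rawdevices restart

        # re-apply ownership after every reboot, e.g. from /etc/rc.local
        chown root:oinstall /dev/raw/raw1          # OCR
        chown oracle:oinstall /dev/raw/raw2        # voting disk
        chown oracle:oinstall /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5   # ASM
        chmod 640 /dev/raw/raw1
        chmod 660 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5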

  • 11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2

    Hi there,
    I am trying to create a two-node 11gR2 RAC on OEL 5.5 (32-bit) using VMware virtual machines. I have configured both nodes correctly. The Cluster Verification Utility returns the following error (which I believe can be ignored):
    Checking daemon liveness...
    Liveness check failed for "ntpd"
    Check failed on nodes:
    rac2,rac1
    PRVF-5415 : Check to see if NTP daemon is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
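    The PRVF-5415 warning above is normally handled in one of two ways on RHEL/OEL: either run ntpd with slewing, which is what the 11gR2 checks expect, or remove the NTP configuration entirely so that Oracle CTSS runs in active mode. A sketch of both options:

        # option 1: keep ntpd but start it with the -x (slew) flag
        # /etc/sysconfig/ntpd
        OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
        service ntpd restart

        # option 2: remove NTP and let CTSS synchronise the cluster clocks
        service ntpd stop
        chkconfig ntpd off
        mv /etc/ntp.conf /etc/ntp.conf.orig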
    During the Grid Infrastructure installation (for a Cluster option), things go very smoothly until I run "root.sh" on node #2. orainstRoot.sh ran OK on both nodes. "root.sh" ran OK on node #1 and ends with:
    Checking swap space: must be greater than 500 MB.   Actual 1967 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    *'UpdateNodeList' was successful.*
    *[root@rac1 ~]#*
    "root.sh" fails on rac2 (2nd node) with following error:
    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
    Timed out waiting for the CRS stack to start.
    *[root@rac2 ~]#*
    I know this info may not be enough to figure out what the problem is. Please let me know what I should look for to find the issue and fix it. It's been almost two weeks now :-(
    Regards
    Amer
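    When root.sh times out like this, checking which of the lower-stack resources actually came up on rac2 usually narrows down the failing layer. A minimal sketch, run as root, using the grid home that appears in the logs below:

        /u01/grid/oracle/product/11.2.0/grid/bin/crsctl check crs
        /u01/grid/oracle/product/11.2.0/grid/bin/crsctl stat res -t -init
        tail -100 /u01/grid/oracle/product/11.2.0/grid/log/rac2/alertrac2.log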

    Hi Zheng,
    ocssd.log is HUGE, so I am pasting a few of the last lines from the log file, hoping they may give some clue:
    2011-07-04 19:49:24.007: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500  cur_ms 36118424 lastalive 36116244
    2011-07-04 19:49:26.005: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
    2011-07-04 19:49:26.006: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500  cur_ms 36120424 lastalive 36116244
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:48.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:48.005: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.003: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.008: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:12.009: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500  cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:15.796: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500  cur_ms 36170214 lastalive 36168084
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500  cur_ms 36172244 lastalive 36170674
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500  cur_ms 36180424 lastalive 36178804
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500  cur_ms 36182414 lastalive 36180624
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:37.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:37.002: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    *<end of log file>*
    And the alertrac2.log contains:
    *[root@rac2 rac2]# cat alertrac2.log*
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-07-02 16:43:51.571
    [client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
    2011-07-02 16:43:57.125
    [client(16134)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 16:44:43.214
    [ohasd(16188)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 16:45:06.446
    [ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 16:53:30.061
    [ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 16:53:55.042
    [cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 16:54:38.334
    [cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    [cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 16:54:38.464
    [cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 16:54:39.174
    [ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 16:55:43.430
    [cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 16:56:02.852
    [cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 16:56:04.061
    [cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 16:56:18.350
    [cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 16:56:29.283
    [ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 16:56:29.551
    [ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 16:56:29.615
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 16:56:29.616
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 16:56:29.641
    [ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18056)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:01:40.963
    [ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
    [client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18594)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 17:46:48.717
    [crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:49.641
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:51.459
    [crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:51.776
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:53.928
    [crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:53.956
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:55.834
    [crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:56.273
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:57.762
    [crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:58.631
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:00.259
    [crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:00.968
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:02.513
    [crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:03.309
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:05.081
    [crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:05.770
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:07.796
    [crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:08.257
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:10.733
    [crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:11.739
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:13.547
    [crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:14.111
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:14.112
    [ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    [client(26883)]CRS-10001:ACFS-9200: Supported
    2011-07-02 18:13:34.627
    [ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
    2011-07-02 18:13:42.368
    [cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:15:13.877
    [client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
    2011-07-02 18:15:14.011
    [client(27222)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 18:15:23.226
    [ohasd(27261)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 18:15:23.688
    [ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
    2011-07-02 18:15:24.064
    [ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 18:16:29.761
    [ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 18:16:30.190
    [gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:16:41.561
    [cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 18:16:49.111
    [cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:16:49.166
    [cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    [cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 18:17:01.122
    [cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:17:06.917
    [ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 18:17:23.602
    [mdnsd(28485)]CRS-5602:mDNS service stopping by request.
    2011-07-02 18:17:36.217
    [gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:17:43.673
    [cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 18:17:49.826
    [cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:17:49.865
    [cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 18:18:03.049
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 18:18:06.160
    [ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 18:18:06.220
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 18:18:06.238
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:18:06.239
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 18:18:06.794
    [ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(28895)]CRS-10001:ACFS-9322: done.
    2011-07-02 18:18:33.465
    [crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:33.575
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:35.757
    [crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:36.129
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:38.596
    [crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:39.146
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:41.058
    [crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:41.435
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:44.255
    [crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:45.165
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:47.013
    [crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:47.409
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:50.071
    [crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:50.118
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:51.843
    [crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:52.373
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:54.361
    [crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:54.772
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:56.620
    [crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:57.104
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:58.997
    [crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:59.301
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:59.302
    [ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:55:21.907
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:55:21.908
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:57:54.682
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:57:54.683
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 23:07:28.603
    [cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.020 seconds
    2011-07-02 23:07:35.621
    [cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.010 seconds
    2011-07-02 23:07:39.629
    [cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds
    2011-07-02 23:07:42.641
    [cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
    2011-07-02 23:07:44.751
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
    2011-07-02 23:07:45.326
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
    2011-07-04 19:46:26.008
    [ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
    [ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
    *[root@rac2 rac2]#*
    This log file starts with a complaint that the OLR is not accessible. Here is what I see (rac2):
    -rw------- 1 root oinstall 272756736 Jul  2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
    And I guess the rest of the problems start from this.
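    Given the repeated CRS-1013 (OCR in an ASM disk group inaccessible) and the CSSD scheduling delays, it is worth confirming on rac2 that the ASM disk is visible and checking the CSS timers before retrying. A sketch, assuming ASMLib (the logs show the disk as ORCL:DATA) and the grid home from the messages above:

        export GRID_HOME=/u01/grid/oracle/product/11.2.0/grid
        # is the ASMLib disk discovered on this node?
        /etc/init.d/oracleasm listdisks
        # voting file and CSS timing as seen from this node
        $GRID_HOME/bin/crsctl query css votedisk
        $GRID_HOME/bin/crsctl get css misscount
        $GRID_HOME/bin/crsctl get css disktimeout
        # local registry health
        $GRID_HOME/bin/ocrcheck -local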

  • Error while running root.sh in 11gR2 2-node cluster

    Hi Friends,
    I am trying to set up an 11gR2 two-node RAC on RHEL 4.7. The cluvfy script shows everything as passed, and the Grid Infrastructure installation completes fine until I run the root.sh script. When run on node 1, root.sh exits with the following error on the console:
    DOCRAC1-N310 []:./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /app/oracle/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying dbhome to /usr/local/bin ...
    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying oraenv to /usr/local/bin ...
    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]: y
    Copying coraenv to /usr/local/bin ...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2011-01-06 13:20:18: Parsing the host name
    2011-01-06 13:20:18: Checking for super user privileges
    2011-01-06 13:20:18: User has super user privileges
    Using configuration parameter file: /app/oracle/grid/crs/install/crsconfig_params
    PROTL-16: Internal Error
    Command return code of 41 (10496) from command: /app/oracle/grid/bin/ocrconfig -local -upgrade oracle oinstall
    Failed to create or upgrade OLR
    When I checked the logs in $GRID_HOME/log/<Node1>/client
    I can see errors in ocrconfig_12426.log as
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-01-06 13:20:19.133: [ OCRCONF][2564297536]ocrconfig starts...
    2011-01-06 13:20:19.134: [ default][2564297536]utgdv: Could not find occonfig_loc property or ocrconfig_loc pointing to nothing
    [ OCRCONF][2564297536]Error retrieving OCR configuration. Return [4]. procr_get_conf rc [41] error buffer [Error retrieving ocrconfig_loc property.]
    2011-01-06 13:20:19.134: [ OCRCONF][2564297536]Error [4] retrieving configuration type
    2011-01-06 13:20:19.134: [ OCRCONF][2564297536]Exiting [status=failed]...
    and in crsctl.log:
    [  CRSCTL][2566642592]Command::initStatic: clsugetconf failed with return code 4, status 1
    2011-01-06 13:19:01.372: [  CRSCTL][2566642592]Command::checkConfig: clsugetconf returned unknown status
    2011-01-06 13:19:11.502: [ default][2566642592]utgdv: Could not find occonfig_loc property or ocrconfig_loc pointing to nothing
    [ default][2566642592]Error retrieving OCR configuration. Return [4]. procr_get_conf rc [41] error buffer [Error retrieving ocrconfig_loc property.]
    [  CRSCTL][2566642592]Command::initStatic: clsugetconf failed with return code 4, status 1
    2011-01-06 13:19:11.503: [  CRSCTL][2566642592]Command::checkConfig: clsugetconf returned unknown status
    I have tried to deinstall and reconfigure, but still no luck. I will try the Oracle deinstall utility next; before that I would like to get comments from the experts.
    Thanks ,
    SSN.
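    One thing to check before another attempt: after a failed root.sh, the 11.2 stack normally has to be deconfigured on that node first, otherwise leftovers such as a half-created OLR keep tripping ocrconfig. A sketch, run as root, using the grid home from the post (a suggestion, not a guaranteed fix):

        cd /app/oracle/grid/crs/install
        perl rootcrs.pl -deconfig -force
        # see whether the registry pointer files exist and what they point to
        ls -l /etc/oracle/olr.loc /etc/oracle/ocr.loc 2>/dev/null
        /app/oracle/grid/root.sh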

    Consider moving this question to the Real Application Clusters forum.
    Confirm that /app/oracle/grid/crs/install/crsconfig_params exists before running root.sh. If it does, please post its contents using code tags to encapsulate the config text.
    Are you using ASM for managing the voting and OCR disk storage, or are you manually setting these to specific block devices?
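    For example, the storage-related settings the responder is asking about can be pulled out of that file like this (a sketch; the parameter names are the usual 11.2 ones and may differ slightly by version):

        ls -l /app/oracle/grid/crs/install/crsconfig_params
        grep -E "^(CRS_STORAGE_OPTION|ASM_DISK_GROUP|ASM_DISCOVERY_STRING|ASM_DISKS|OCR_LOCATIONS|VOTING_DISKS)=" \
            /app/oracle/grid/crs/install/crsconfig_params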

  • Root.sh fails on second node

    I already posted this issue on the database installation forum and was advised to post it on this forum.
    Here are the details.
    I am running 64-bit Linux on ESX guests, installing Oracle 11gR2.
    It passed all the prerequisites. I ran root.sh on the first node and it finished with no errors.
    On the second node I got the following:
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]:
    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]:
    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
    [n]:
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-07-13 12:51:28: Parsing the host name
    2010-07-13 12:51:28: Checking for super user privileges
    2010-07-13 12:51:28: User has super user privileges
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node fred0224, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start 'ora.mdnsd' on 'fred0225'
    CRS-2676: Start of 'ora.mdnsd' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'fred0225'
    CRS-2676: Start of 'ora.gipcd' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'fred0225'
    CRS-2676: Start of 'ora.gpnpd' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'fred0225'
    CRS-2676: Start of 'ora.cssdmonitor' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'fred0225'
    CRS-2672: Attempting to start 'ora.diskmon' on 'fred0225'
    CRS-2676: Start of 'ora.diskmon' on 'fred0225' succeeded
    CRS-2676: Start of 'ora.cssd' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'fred0225'
    Start action for octssd aborted
    CRS-2676: Start of 'ora.ctssd' on 'fred0225' succeeded
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'fred0225'
    CRS-2672: Attempting to start 'ora.asm' on 'fred0225'
    CRS-2676: Start of 'ora.drivers.acfs' on 'fred0225' succeeded
    CRS-2676: Start of 'ora.asm' on 'fred0225' succeeded
    CRS-2664: Resource 'ora.ctssd' is already running on 'fred0225'
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl start resource ora.asm -init
    Start of resource "ora.asm -init" failed
    Failed to start ASM
    Failed to start Oracle Clusterware stack
    In the ocssd.log I found
    [ CSSD][3559689984]clssnmvDHBValidateNCopy: node 1, fred0224, has a disk HB, but no network HB, DHB has rcfg 174483948, wrtcnt, 232, LATS 521702664, lastSeqNo 232, uniqueness 1279039649, timestamp 1279039959/521874274
    In oraagent_oracle.log I found
    [ clsdmc][1212365120]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD)) with status 9
    2010-07-13 12:54:07.234: [ora.gpnpd][1212365120] [check] Error = error 9 encountered when connecting to GPNPD
    2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Calling PID check for daemon
    2010-07-13 12:54:07.238: [ora.gpnpd][1212365120] [check] Trying to check PID = 20584
    2010-07-13 12:54:07.432: [ COMMCRS][1285794112]clsc_connect: (0x1304d850) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GPNPD))
    [ clsdmc][1222854976]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD)) with status 9
    2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Error = error 9 encountered when connecting to MDNSD
    2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Calling PID check for daemon
    2010-07-13 12:54:08.649: [ora.mdnsd][1222854976] [check] Trying to check PID = 20571
    2010-07-13 12:54:08.841: [ COMMCRS][1201875264]clsc_connect: (0x12f3b1d0) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_MDNSD))
    [ clsdmc][1159915840]Fail to connect (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD)) with status 9
    2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Error = error 9 encountered when connecting to GIPCD
    2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Calling PID check for daemon
    2010-07-13 12:54:10.051: [ora.gipcd][1159915840] [check] Trying to check PID = 20566
    2010-07-13 12:54:10.242: [ COMMCRS][1254324544]clsc_connect: (0x12f35630) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=fred0225DBG_GIPCD))
    In oracssdagent_root.log I found
    2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clssscConnect: gipc request failed with 29 (0x16)
    2010-07-13 12:52:28.698: [ CSSCLNT][1102481728]clsssInitNative: connect failed, rc 29
    2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clssnsqlnum: RPC failed rc 3
    2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_cssini: failed 3 to fetch node number
    2010-07-13 12:53:55.222: [ USRTHRD][1102481728] clsnomon_init: css init done, nodenum -1.
    2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssRecvMsg: got a disconnect from the server while waiting for message type 43
    2010-07-13 12:53:55.222: [ CSSCLNT][1102481728]clsssGetNLSData: Failure receiving a msg, rc 3
    If you need more info, let me know.

    Well, the error clearly indicates that a communication problem exists on the private interconnect.
    Could this be a setting in ESX that prevents communication between the guests over the second network card? Or is a routing table in ESX not configured correctly?
    Sebastian
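    A quick way to confirm (or rule out) an interconnect problem is to test the private network directly from each node. A minimal sketch, assuming the node names from the log above and substituting the actual private interconnect address for the placeholder:
    # from fred0225, replace <fred0224-priv-ip> with node 1's private interconnect address
    ping -c 3 <fred0224-priv-ip>
    # the cluster verification utility can also test node connectivity over all interfaces
    $GRID_HOME/bin/cluvfy comp nodecon -n fred0224,fred0225 -verbose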

  • 11gR2 root.sh fails to copy OCR locations stored in ASM

    Hi all,
    I'm trying to extend an 11gR2 3-node RAC cluster to a 4th node. When trying to run the $GI_HOME/root.sh script it fails giving the following error:
    /oracle/app/11.2.0/grid/root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /oracle/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-08-11 16:12:19: Parsing the host name
    2010-08-11 16:12:19: Checking for super user privileges
    2010-08-11 16:12:19: User has super user privileges
    Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    -ksh: line 1: /bin/env: not found
    /oracle/app/11.2.0/grid/bin/cluutil -sourcefile /etc/oracle/ocr.loc -sourcenode ucstst12 -destfile /oracle/app/11.2.0/grid/srvm/admin/ocrloc.tmp -nodelist ucstst12 ... failed
    Unable to copy OCR locations
    validateOCR failed for +OCR_VOTE at /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7979.
    My environment is below:
    OS: SLES 11.1
    Database: 11.2.0.1
    Grid Infrastructure: 11.2.0.1
    OCR & Voting storage: ASM
    DB file & FRA storage: ASM
    # Nodes: 3
    Any help is really appreciated.
    Thanks.

    Hi,
    have you set up SSH for the new node (for user oracle) in both directions when extending?
    A common problem when extending SSH is to set it up one way only.
    However, especially when extending the cluster, some Oracle tools ssh/scp to the existing nodes (as the installation owner) to copy special files (like ocr.loc) to the new node.
    If SSH is not set up both ways, this will fail.
    Check with user oracle or grid (whatever you used) that
    ssh <node1> date
    ssh <node2> date
    ssh <node3> date
    ssh <node4> date
    is working (from every node).
    Note: In 11.2 you can rerun root.sh. Just run $CRS_HOME/crs/install/rootcrs.pl -deconfig -force to deconfigure the stack, then rerun root.sh.
    Sebastian
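    A compact sketch of the check and rerun sequence described above, assuming the Grid home /oracle/app/11.2.0/grid from the log and placeholder node names (adjust both to your environment):
    # as the installation owner (grid) on the new node, confirm two-way SSH to every node:
    for n in <node1> <node2> <node3> <node4>; do ssh $n date; done
    # then, as root on the new node, deconfigure the failed attempt and rerun root.sh:
    /oracle/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
    /oracle/app/11.2.0/grid/root.sh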

  • Root.sh failed on second node while installing CRS 10g on centos 5.5

    Hi all,
    I am able to install Oracle 10g RAC Clusterware on the first node of the cluster. However, when I run the root.sh script as the root
    user on the second node of the cluster, it fails with the following error message:
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    When I run cluvfy stage -post hwos -n all -verbose, it shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking shared storage accessibility...
    Disk Sharing Nodes (2 in count)
    /dev/sda db2 db1
    When I run cluvfy stage -pre crsinst -n all -verbose, it shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking system requirements for 'crs'...
    No checks registered for this product.
    When I run cluvfy stage -post crsinst -n all -verbose, it shows:
    Result: Node reachability check passed from node "DB2".
    Result: User equivalence check passed for user "oracle".
    Node Name    CRS daemon    CSS daemon    EVM daemon
    db2          no            no            no
    db1          yes           yes           yes
    Check: Health of CRS
    Node Name    CRS OK?
    db1          unknown
    Result: CRS health check failed.
    Checking crsd.log shows:
    clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
    clsssInitNative: connect failed, rc 9
    Any help would be greatly appreciated.

    Hello, it took a little searching, but I found this in a note in the GRID installation guide for Linux/UNIX:
    Public IP addresses and virtual IP addresses must be in the same subnet.
    In your case, you are using two different subnets for the VIPs.
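    For illustration only (addresses and names below are hypothetical), a layout that satisfies this requirement keeps each node's public address and its VIP on the same subnet, with the private interconnect on its own subnet:
    # /etc/hosts - illustrative addresses only
    192.168.10.11   db1          # public
    192.168.10.12   db2          # public
    192.168.10.21   db1-vip      # VIP, same subnet as the public addresses
    192.168.10.22   db2-vip      # VIP, same subnet as the public addresses
    10.0.0.11       db1-priv     # private interconnect
    10.0.0.12       db2-priv     # private interconnect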

  • Root.sh failed at second node OUL 6.3 Oracle GRID 11.2.0.3

    Hi, I'm installing a two-node cluster on Oracle Linux 6.3 with Oracle DB 11.2.0.3. The installation went smoothly up until the execution of the root.sh script on the second node.
    The script returned these final lines:
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Start of resource "ora.crsd" failed
    CRS-2800: Cannot start resource 'ora.asm' as it is already in the INTERMEDIATE state on server 'nodo2'
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start Cluster Ready Services at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1286.
    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
    In $GRID_HOME/log/node2/alertnode.log it appears to be a Cluster Time Synchronization Service issue (I didn't synchronize the nodes); however, CTSS is running in observer mode, which I believe shouldn't affect the installation process. After that I lost track of it: there is a CRS-5018 entry indicating that an unused HAIP route was removed, and then, out of the blue, CRS-5818: Aborted command 'start' for resource 'ora.asm'. Some clarification would be deeply appreciated.
    Here's the complete log:
    2013-04-01 13:39:35.358
    [client(12163)]CRS-2101:The OLR was formatted using version 3.
    2013-04-01 19:40:19.597
    [ohasd(12338)]CRS-2112:The OLR service started on node nodo2.
    2013-04-01 19:40:19.657
    [ohasd(12338)]CRS-1301:Oracle High Availability Service started on node nodo2.
    [client(12526)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
    [client(12528)]CRS-10001:01-Apr-13 13:41 ACFS-9201: Not Supported
    [client(12603)]CRS-10001:01-Apr-13 13:41 ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-400.17.2.el6uek.i686'
    2013-04-01 19:41:17.509
    [ohasd(12338)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2013-04-01 19:41:17.618
    [gpnpd(12695)]CRS-2328:GPNPD started on node nodo2.
    2013-04-01 19:41:21.363
    [cssd(12755)]CRS-1713:CSSD daemon is started in exclusive mode
    2013-04-01 19:41:23.194
    [ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
    2013-04-01 19:41:56.144
    [cssd(12755)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
    2013-04-01 19:41:57.545
    [cssd(12755)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
    [cssd(12755)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node nodo1 and is terminating; details at (:CSSNM00006:) in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log
    2013-04-01 19:41:58.549
    [ohasd(12338)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'nodo2'.
    2013-04-01 19:42:10.025
    [gpnpd(12695)]CRS-2329:GPNPD on node nodo2 shutdown.
    2013-04-01 19:42:11.407
    [mdnsd(12685)]CRS-5602:mDNS service stopping by request.
    2013-04-01 19:42:29.642
    [gpnpd(12947)]CRS-2328:GPNPD started on node nodo2.
    2013-04-01 19:42:33.241
    [cssd(13012)]CRS-1713:CSSD daemon is started in clustered mode
    2013-04-01 19:42:35.104
    [ohasd(12338)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
    2013-04-01 19:42:44.065
    [cssd(13012)]CRS-1707:Lease acquisition for node nodo2 number 2 completed
    2013-04-01 19:42:45.484
    [cssd(13012)]CRS-1605:CSSD voting file is online: /dev/oracleasm/disks/ASM_DISK_1; details in /u01/app/11.2.0/grid/log/nodo2/cssd/ocssd.log.
    2013-04-01 19:42:52.138
    [cssd(13012)]CRS-1601:CSSD Reconfiguration complete. Active nodes are nodo1 nodo2 .
    2013-04-01 19:42:55.081
    [ctssd(13076)]CRS-2403:The Cluster Time Synchronization Service on host nodo2 is in observer mode.
    2013-04-01 19:42:55.581
    [ctssd(13076)]CRS-2401:The Cluster Time Synchronization Service started on host nodo2.
    2013-04-01 19:42:55.581
    [ctssd(13076)]CRS-2407:The new Cluster Time Synchronization Service reference node is host nodo1.
    2013-04-01 19:43:08.875
    [ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
    2013-04-01 19:43:08.876
    [ctssd(13076)]CRS-2409:The clock on host nodo2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2013-04-01 19:43:13.565
    [u01/app/11.2.0/grid/bin/orarootagent.bin(13064)]CRS-5018:(:CLSN00037:) Removed unused HAIP route: 169.254.0.0 / 255.255.0.0 / 0.0.0.0 / eth0
    2013-04-01 19:53:09.800
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5818:Aborted command 'start' for resource 'ora.asm'. Details at (:CRSAGF00113:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
    2013-04-01 19:53:11.827
    [ohasd(12338)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) {0:0:223} in /u01/app/11.2.0/grid/log/nodo2/ohasd/ohasd.log.
    2013-04-01 19:53:12.779
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:53:13.892
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:53:43.877
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:54:13.891
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:54:43.906
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:55:13.914
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:55:43.918
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:56:13.922
    [u01/app/11.2.0/grid/bin/oraagent.bin(12922)]CRS-5019:All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/nodo2/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
    2013-04-01 19:56:53.209
    [crsd(13741)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:07:01.128
    [crsd(13741)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:07:01.278
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:07:08.689
    [crsd(15248)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:13:10.138
    [ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
    2013-04-01 20:17:13.024
    [crsd(15248)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:17:13.171
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:17:20.826
    [crsd(16746)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:27:25.020
    [crsd(16746)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:27:25.176
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:27:31.591
    [crsd(18266)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:37:35.668
    [crsd(18266)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:37:35.808
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:37:43.209
    [crsd(19762)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:43:11.160
    [ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
    2013-04-01 20:47:47.487
    [crsd(19762)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:47:47.637
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:47:55.086
    [crsd(21242)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 20:57:59.343
    [crsd(21242)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 20:57:59.492
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 20:58:06.996
    [crsd(22744)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:08:11.046
    [crsd(22744)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:08:11.192
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:08:18.726
    [crsd(24260)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:13:12.000
    [ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
    2013-04-01 21:18:22.262
    [crsd(24260)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:18:22.411
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:18:29.927
    [crsd(25759)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:28:34.467
    [crsd(25759)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:28:34.616
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:28:41.990
    [crsd(27291)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:38:45.012
    [crsd(27291)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:38:45.160
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:38:52.790
    [crsd(28784)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:43:12.378
    [ctssd(13076)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/11.2.0/grid/log/nodo2/ctssd/octssd.log.
    2013-04-01 21:48:56.285
    [crsd(28784)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:48:56.435
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:49:04.421
    [crsd(30272)]CRS-1012:The OCR service started on node nodo2.
    2013-04-01 21:59:08.183
    [crsd(30272)]CRS-0810:Cluster Ready Service aborted due to failure to communicate with Event Management Service with error [1]. Details at (:CRSD00120:) in /u01/app/11.2.0/grid/log/nodo2/crsd/crsd.log.
    2013-04-01 21:59:08.318
    [ohasd(12338)]CRS-2765:Resource 'ora.crsd' has failed on server 'nodo2'.
    2013-04-01 21:59:15.860
    [crsd(31772)]CRS-1012:The OCR service started on node nodo2.

    Hi santysharma, thanks for the reply. I have two Ethernet interfaces: eth0 (public network 192.168.1.0) and eth1 (private network 10.5.3.0), and there is no other device using that IP range. Here's the output of the route command:
    (Sorry for the alignment; I tried to tab it but the editor trims it again.)
    Kernel IP routing table
    Destination   Gateway        Genmask          Flags  Metric  Ref  Use  Iface
    default       192.168.1.1    0.0.0.0          UG     0       0    0    eth0
    private       *              255.255.255.0    U      0       0    0    eth1
    link-local    *              255.255.0.0      U      1002    0    0    eth0
    link-local    *              255.255.0.0      U      1003    0    0    eth1
    public        *              255.255.255.0    U      0       0    0    eth0
    And the /etc/hosts file
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.5.3.1 nodo1.cluster nodo1
    10.5.3.2 nodo2.cluster nodo2
    192.168.1.13 cluster-scan
    192.168.1.14 nodo1-vip
    192.168.1.15 nodo2-vip
    And the ifconfig -a
    eth0 Link encap:Ethernet HWaddr C8:3A:35:D9:C6:2B
    inet addr:192.168.1.12 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::ca3a:35ff:fed9:c62b/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:34708 errors:0 dropped:18 overruns:0 frame:0
    TX packets:24693 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:48545969 (46.2 MiB) TX bytes:1994381 (1.9 MiB)
    eth1 Link encap:Ethernet HWaddr 00:0D:87:D0:A3:8E
    inet addr:10.5.3.2 Bcast:10.5.3.255 Mask:255.255.255.0
    inet6 addr: fe80::20d:87ff:fed0:a38e/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:0 (0.0 b) TX bytes:5344 (5.2 KiB)
    Interrupt:23 Base address:0x6000
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:20 errors:0 dropped:0 overruns:0 frame:0
    TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:1320 (1.2 KiB) TX bytes:1320 (1.2 KiB)
    Now that I think about it, I've read somewhere that IPv6 was not supported... yet that has no relation to the 169.254.x.x IP range.
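    Given the repeated CRS-5019 messages above (the DATA disk group holding the OCR never mounts on nodo2), it may be worth confirming that nodo2 can actually see the ASMLib disks and reach nodo1 over the private network. A minimal diagnostic sketch, assuming ASMLib is in use as the voting-file path in the log suggests:
    # as root on nodo2: are the ASMLib disks visible?
    /usr/sbin/oracleasm listdisks
    ls -l /dev/oracleasm/disks/
    # can nodo2 reach nodo1 over the private interconnect (eth1)?
    ping -c 3 -I eth1 10.5.3.1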

  • Root.sh failed in one node - CLSMON and UDLM

    Hi experts.
    My environment is:
    2-node Sun Cluster Update 3
    Oracle RAC 10.2.0.1 (planning to upgrade to 10.2.0.4)
    The problem is: I installed CRS on the two nodes - OK.
    After that, running root.sh fails on one node:
    /u01/app/product/10/CRS/root.sh
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Checking to see if any 9i GSD is up
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 0: spodhcsvr10 clusternode1-priv spodhcsvr10
    node 1: spodhcsvr12 clusternode2-priv spodhcsvr12
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Sep 22 13:34:17 spodhcsvr10 root: Oracle Cluster Ready Services starting by user request.
    Startup will be queued to init within 30 seconds.
    Sep 22 13:34:20 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Sep 22 13:34:34 spodhcsvr10 last message repeated 3 times
    Sep 22 13:34:34 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:34:40 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:35:43 spodhcsvr10 last message repeated 9 times
    Sep 22 13:36:07 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:36:07 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:36:14 spodhcsvr10 su: libsldap: Status: 85 Mesg: openConnection: simple bind failed - Timed out
    Sep 22 13:36:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:37:35 spodhcsvr10 last message repeated 11 times
    Sep 22 13:37:40 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:37:40 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:37:42 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:38:03 spodhcsvr10 last message repeated 3 times
    Sep 22 13:38:10 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:39:12 spodhcsvr10 last message repeated 9 times
    Sep 22 13:39:13 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:39:13 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:39:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:40:42 spodhcsvr10 last message repeated 12 times
    Sep 22 13:40:46 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:40:46 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:40:49 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:05 spodhcsvr10 last message repeated 11 times
    Sep 22 13:42:11 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:12 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:19 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:19 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:42:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:43:49 spodhcsvr10 last message repeated 13 times
    Sep 22 13:43:51 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:43:51 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:43:56 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Failure at final check of Oracle CRS stack.
    I traced the ocssd.log and found some informations:
    [    CSSD]2010-09-22 14:04:14.739 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:14.742 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.742 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:14.744 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.745 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:14.746 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.785 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.786 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:23.075 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:23.075 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:23.082 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:23.096 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:23.106 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:23.132 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:23.136 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:23.139 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:23.143 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.139 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:25.142 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.143 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:25.144 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.145 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.148 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2489) LATS(0) Disk lastSeqNo(2489)
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.186 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.187 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:33.449 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:33.449 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:33.457 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:33.470 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:33.480 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:33.500 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:33.505 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:33.508 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:33.510 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.508 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:35.510 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.510 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:35.512 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.513 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.514 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    I believe the main error is:
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    And the communication between UDLM and CLSMON. But I don't know how to resolve this.
    My UDLM version is 3.3.4.9.
    Does anybody have any ideas about this?
    Thanks!

    Now I have finally installed CRS and run root.sh without errors (I think the problem was some old file left over from previous installation attempts).
    But now I have another problem: when installing the DB software, at the step that copies the installation to the remote node, that node has a CLSMON/CSSD failure and panics:
    Sep 23 16:10:51 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:51 spodhcsvr10 root: [ID 702911 user.error] Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: fatal: received signal 15
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: [ID 770355 daemon.error] fatal: received signal 15
    Sep 23 16:10:59 spodhcsvr10 root: Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.
    Notifying cluster that this node is panicking
    The installation on the first node continues and reports an error copying to the second node.
    Any ideas? Thanks!
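    Since the earlier CSSD trace shows "skgxn not active" and a dlm attach failure, it may be worth confirming that the Sun Cluster membership monitor and the UDLM package are healthy on the affected node before retrying. A minimal check sketch, assuming a standard Sun Cluster / ORCLudlm setup:
    # as root on the failing node
    scstat -n                      # Sun Cluster node membership - both nodes should be Online
    pkginfo -l ORCLudlm            # UDLM package version (should match on both nodes, here 3.3.4.9)
    ps -ef | grep -i ucmmd         # UCMM daemon should be running before CRS/CLSMON can attach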
