Root.sh failed on node2

Here is my root.sh output on node1; it completed successfully.
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-01-21 21:24:03: Parsing the host name
2011-01-21 21:24:03: Checking for super user privileges
2011-01-21 21:24:03: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 68df83b7f2764fb5bf99934ce7b9d0b8.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 68df83b7f2764fb5bf99934ce7b9d0b8 (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
rac1 2011/01/21 22:08:18 /u01/app/11.2.0/grid/cdata/rac1/backup_20110121_220818.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4031 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 ~]#

And this is my root.sh output on node2; it failed.
Where should I look for the possible problem?
My runcluvfy run before installation completed successfully and passed.
[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-01-22 01:04:41: Parsing the host name
2011-01-22 01:04:41: Checking for super user privileges
2011-01-22 01:04:41: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
Timed out waiting for the CRS stack to start.
[root@rac2 ~]#
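"Timed out waiting for the CRS stack to start" usually means one of the lower-level daemons on node2 never came fully up, and the detail lands in the clusterware logs rather than on the console. A minimal sketch for pulling the first real errors out of those logs (the GRID_HOME and log paths below follow the 11.2 layout shown in this install and are assumptions to adjust):

```shell
#!/bin/sh
# Pull every CRS-/ORA- coded line out of the clusterware logs so the first
# real failure on the node is easy to spot. GRID_HOME and HOST are
# assumptions -- point them at your own Grid home and hostname.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}
HOST=${HOST:-$(hostname -s)}

# scan_crs_errors FILE...: print only lines carrying a CRS- or ORA- code
scan_crs_errors() {
    grep -hE 'CRS-[0-9]+|ORA-[0-9]+' "$@" 2>/dev/null
}

# Typical 11.2 locations worth scanning after a failed root.sh:
scan_crs_errors \
    "$GRID_HOME/log/$HOST/alert$HOST.log" \
    "$GRID_HOME/log/$HOST/ohasd/ohasd.log" \
    "$GRID_HOME/log/$HOST/cssd/ocssd.log" \
    "$GRID_HOME/cfgtoollogs/crsconfig/rootcrs_$HOST.log" \
    || echo "(no matching lines found, or logs not readable)"
```

Since the node2 output shows the stack reaching ora.evmd before timing out, ocssd.log and the crsd log are the most likely places for the cause to show up.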

Similar Messages

  • Root.sh failing at node2 during 10g rac installation on vmware

    Hi All,
    I'm very new to Oracle RAC. I was trying to install a 10gR2 RAC on VMware, but while running the root.sh script on node 2 I get the error "Failure at check of Final Oracle Stack". I have tried hard to find a solution on this forum and from other sources, but without success, so any help from you experts is highly appreciated. Here is how I hit the error:
    1. Install VMware and RHEL 4 as the guest OS.
    2. Create a virtual machine RAC1 and configure shared storage by adding 5 virtual disks (1 OCR, 1 voting, 3 ASM).
    3. Clone RAC1 to a new virtual machine RAC2.
    4. After binding and assigning permissions, all raw devices (raw1, raw2, raw3, raw4, raw5) are available in both virtual machines.
    5. Set up user equivalence between the two systems; each node is also reachable from the other.
    6. The Oracle 10gR2 Clusterware install completed on RAC1 and I ran the required scripts there, but while running root.sh on RAC2 I got the above-mentioned error.
    I did an OCR check on both nodes and found that they differ; cssd.log also gives an OCR mismatch error message. The firewall is disabled and all the configuration looks proper, yet I'm still getting the same error.
    Can anybody provide some suggestions for resolving this issue?
    Thanks

    /dev/raw/raw1 /dev/sdb1
    /dev/raw/raw2 /dev/sdc1
    /dev/raw/raw3 /dev/sdd1
    /dev/raw/raw4 /dev/sde1
    /dev/raw/raw5 /dev/sdf1
    The 1st device is for the OCR, the 2nd is for the voting disk, and the rest are for ASM. The permissions are the same across the two virtual machines:
    root:oinstall /dev/raw/raw1
    oracle:oinstall /dev/raw/raw2
    oracle:oinstall /dev/raw/raw3
    oracle:oinstall /dev/raw/raw4
    oracle:oinstall /dev/raw/raw5
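    Since RAC2 is a clone of RAC1, one failure mode worth ruling out is each VM quietly getting its own private copy of the "shared" disks, which would explain the OCR mismatch described above. A quick sanity check is to fingerprint the start of each raw device on both nodes and compare; truly shared storage gives identical hashes on every node. The device names below follow the poster's mapping and are assumptions:

```shell
#!/bin/sh
# Hash the first 1 MB of each raw device; run this on every node and diff
# the output. Differing hashes for the OCR/voting devices mean the nodes
# are NOT looking at the same storage. Device names are the poster's
# mapping -- adjust to your environment.
fingerprint() {
    dd if="$1" bs=1024 count=1024 2>/dev/null | md5sum | awk '{print $1}'
}

for dev in /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5
do
    if [ -r "$dev" ]; then
        printf '%s %s\n' "$dev" "$(fingerprint "$dev")"
    else
        printf '%s unreadable\n' "$dev"
    fi
done
```
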
    Thanks

  • Root.sh fails on 11.2.0.3 clusterware while starting 'ora.asm' resource

    Dear all,
    I am trying to install clean Oracle 11.2.0.3 grid infrastructure on a two node cluster running on Solaris 5.10.
    - Cluster verification was successful on both nodes; no warnings or issues;
    - I am using 2 network cards for the public and 2 for the private interconnect;
    - OCR is stored on ASM
    - Firewall is disabled on both nodes
    - SCAN is being configured on the DNS (not added in /etc/hosts)
    - GNS is not used
    - hosts file is identical (except the primary hostname)
    The problem: root.sh fails on the 2nd (remote) node because it cannot start the "ora.asm" resource, even though root.sh completed successfully on the 1st node. Somehow, root.sh doesn't create the +ASM2 instance on the remote node (host2).
    root.sh was executed first on the local node (host1) and, after that successful run, on the remote node (host2).
    Output from host1 (working):
    ===================
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'host1'
    CRS-2676: Start of 'ora.mdnsd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'host1'
    CRS-2676: Start of 'ora.gpnpd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host1'
    CRS-2672: Attempting to start 'ora.gipcd' on 'host1'
    CRS-2676: Start of 'ora.cssdmonitor' on 'host1' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'host1'
    CRS-2672: Attempting to start 'ora.diskmon' on 'host1'
    CRS-2676: Start of 'ora.diskmon' on 'host1' succeeded
    CRS-2676: Start of 'ora.cssd' on 'host1' succeeded
    ASM created and started successfully.
    Disk Group CRS created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 4373be34efab4f01bf79f6c5362acfd3.
    Successful addition of voting disk 7fd725fa4d904f07bf76cecf96791547.
    Successful addition of voting disk a9c85297bdd74f3abfd86899205aaf17.
    Successfully replaced voting disk group with +CRS.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 4373be34efab4f01bf79f6c5362acfd3 (/dev/rdsk/c4t600A0B80006E2CC40000C6674E82AA57d0s4) [CRS]
    2. ONLINE 7fd725fa4d904f07bf76cecf96791547 (/dev/rdsk/c4t600A0B80006E2CC40000C6694E82AADDd0s4) [CRS]
    3. ONLINE a9c85297bdd74f3abfd86899205aaf17 (/dev/rdsk/c4t600A0B80006E2F100000C7744E82AC7Ad0s4) [CRS]
    Located 3 voting disk(s).
    CRS-2672: Attempting to start 'ora.asm' on 'host1'
    CRS-2676: Start of 'ora.asm' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.CRS.dg' on 'host1'
    CRS-2676: Start of 'ora.CRS.dg' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.registry.acfs' on 'host1'
    CRS-2676: Start of 'ora.registry.acfs' on 'host1' succeeded
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    Name Type Target State Host
    ora.CRS.dg ora....up.type ONLINE ONLINE host1
    ora....ER.lsnr ora....er.type ONLINE ONLINE host1
    ora....N1.lsnr ora....er.type ONLINE ONLINE host1
    ora....N2.lsnr ora....er.type ONLINE ONLINE host1
    ora....N3.lsnr ora....er.type ONLINE ONLINE host1
    ora.asm ora.asm.type ONLINE ONLINE host1
    ora....SM1.asm application ONLINE ONLINE host1
    ora....B1.lsnr application ONLINE ONLINE host1
    ora....db1.gsd application OFFLINE OFFLINE
    ora....db1.ons application ONLINE ONLINE host1
    ora....db1.vip ora....t1.type ONLINE ONLINE host1
    ora.cvu ora.cvu.type ONLINE ONLINE host1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE host1
    ora.oc4j ora.oc4j.type ONLINE ONLINE host1
    ora.ons ora.ons.type ONLINE ONLINE host1
    ora....ry.acfs ora....fs.type ONLINE ONLINE host1
    ora.scan1.vip ora....ip.type ONLINE ONLINE host1
    ora.scan2.vip ora....ip.type ONLINE ONLINE host1
    ora.scan3.vip ora....ip.type ONLINE ONLINE host1
    Output from host2 (failing):
    ===================
    OLR initialization - successful
    Adding Clusterware entries to inittab
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node billdb1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Start of resource "ora.asm" failed
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host2'
    CRS-2676: Start of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'host2'
    CRS-5017: The resource action "ora.asm start" encountered the following error:
    ORA-03113: end-of-file on communication channel
    Process ID: 0
    Session ID: 0 Serial number: 0
    . For details refer to "(:CLSN00107:)" in "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log".
    CRS-2674: Start of 'ora.asm' on 'host2' failed
    CRS-2679: Attempting to clean 'ora.asm' on 'host2'
    CRS-2681: Clean of 'ora.asm' on 'host2' succeeded
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host2'
    CRS-2677: Stop of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start ASM at /u01/11.2.0/grid/crs/install/crsconfig_lib.pm line 1272.
    /u01/11.2.0/grid/perl/bin/perl -I/u01/11.2.0/grid/perl/lib -I/u01/11.2.0/grid/crs/install /u01/11.2.0/grid/crs/install/rootcrs.pl execution failed
    Contents of "/u01/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_host2.log"
    =============================================
    CRS-2672: Attempting to start 'ora.asm' on 'host2'
    CRS-5017: The resource action "ora.asm start" encountered the following error:
    ORA-03113: end-of-file on communication channel
    Process ID: 0
    Session ID: 0 Serial number: 0
    . For details refer to "(:CLSN00107:)" in "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log".
    CRS-2674: Start of 'ora.asm' on 'host2' failed
    CRS-2679: Attempting to clean 'ora.asm' on 'host2'
    CRS-2681: Clean of 'ora.asm' on 'host2' succeeded
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host2'
    CRS-2677: Stop of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    2011-10-24 19:36:54: Failed to start Oracle Grid Infrastructure stack
    2011-10-24 19:36:54: ###### Begin DIE Stack Trace ######
    2011-10-24 19:36:54: Package File Line Calling
    2011-10-24 19:36:54: --------------- -------------------- ---- ----------
    2011-10-24 19:36:54: 1: main rootcrs.pl 375 crsconfig_lib::dietrap
    2011-10-24 19:36:54: 2: crsconfig_lib crsconfig_lib.pm 1272 main::__ANON__
    2011-10-24 19:36:54: 3: crsconfig_lib crsconfig_lib.pm 1171 crsconfig_lib::start_cluster
    2011-10-24 19:36:54: 4: main rootcrs.pl 803 crsconfig_lib::perform_start_cluster
    2011-10-24 19:36:54: ####### End DIE Stack Trace #######
    Shortened output from "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log"
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] clean {
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] InstAgent::stop_option stop mode immediate option 1
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] InstAgent::stop {
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] InstAgent::stop original reason system do shutdown abort
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] ConnectionPool::resetConnection s_statusOfConnectionMap 00ab1948
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] ConnectionPool::resetConnection sid +ASM2 status  2
    2011-10-24 19:35:48.728: [ora.asm][9] {0:0:224} [start] Gimh::check OH /u01/11.2.0/grid SID +ASM2
    2011-10-24 19:35:48.728: [ora.asm][9] {0:0:224} [start] Gimh::check condition changes to (GIMH_NEXT_NUM) 0,1,7 exists
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] (:CLSN00006:)AsmAgent::check failed gimh state 0
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] AsmAgent::check ocrCheck 1 m_OcrOnline 0 m_OcrTimer 0
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet { entry
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet procr_get_conf: retval [0] configured [1] local only [0] error buffer []
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet procr_get_conf: OCR loc [0], Disk Group : [+CRS]
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet m_ocrDgpSet 015fba90 dgName CRS
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet ocrret 0 found 1
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet ocrDgpSet CRS
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet exit }
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::ocrDgCheck Entry {
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] DgpAgent::getConnxn new pool
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] DgpAgent::getConnxn new pool m_oracleHome:/u01/11.2.0/grid m_oracleSid:+ASM2 m_usrOraEnv:
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] ConnectionPool::ConnectionPool 2 m_oracleHome:/u01/11.2.0/grid, m_oracleSid:+ASM2, m_usrOraEnv:
    2011-10-24 19:35:48.733: [ora.asm][9] {0:0:224} [start] ConnectionPool::addConnection m_oracleHome:/u01/11.2.0/grid m_oracleSid:+ASM2 m_usrOraEnv: pConnxn:01fcdf10
    2011-10-24 19:35:48.733: [ora.asm][9] {0:0:224} [start] Utils::getCrsHome crsHome /u01/11.2.0/grid
    2011-10-24 19:35:51.969: [ora.asm][14] {0:0:224} [check] makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/11.2.0/grid/bin/oracle)(ARGV0=oracle+ASM2)(ENVS='ORACLE_HOME=/u01/11.2.0/grid,ORACLE_SID=+ASM2')(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=+ASM2)))
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] ConnectionPool::getConnection 260 pConnxn 013e40a0
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] DgpAgent::getConnxn connected
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] InstConnection::connectInt: server not attached
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] InstConnection::connectInt (2) Exception OCIException
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] InstConnection:connect:excp OCIException OCI error 1034
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus excp ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus asm inst is down or going down
    2011-10-24 19:35:52.191: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus dgName CRS ret 1
    2011-10-24 19:35:52.191: [ora.asm][14] {0:0:224} [check] (:CLSN00100:)DgpAgent::ocrDgCheck OCR dgName CRS state 1
    2011-10-24 19:35:52.192: [ora.asm][14] {0:0:224} [check] ConnectionPool::releaseConnection InstConnection 013e40a0
    2011-10-24 19:35:52.192: [ora.asm][14] {0:0:224} [check] AsmAgent::check ocrCheck 2 m_OcrOnline 0 m_OcrTimer 0
    2011-10-24 19:35:52.193: [ora.asm][14] {0:0:224} [check] CrsCmd::ClscrsCmdData::stat entity 1 statflag 32 useFilter 0
    2011-10-24 19:35:52.197: [ COMMCRS][23]clsc_connect: (1020d39d0) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))
    Please advise on any workaround or a MetaLink note.
    Thanks in advance!
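    When the agent wraps the real failure in a generic ORA-03113, the quickest route to the underlying cause is to pull the first ORA-/CLSN-coded line (with its line number) out of the oraagent log that the CRS-5017 message points at. A small sketch; the log path is the one reported in the output above:

```shell
#!/bin/sh
# Print the first error-coded line of an agent log, so you see the initial
# failure rather than the cascade of follow-on errors after it.
first_agent_error() {
    grep -nE 'ORA-[0-9]{4,5}|:CLSN[0-9]+:' "$1" | head -n 1
}

# Path as reported in the CRS-5017 message above:
first_agent_error /u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log \
    || echo "(log not readable on this machine)"
```
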

    Thanks for the fast reply!
    - Yes, the shared storage is accessible.
    - The alert log for +ASM2 clearly shows that the ASM instance started normally using default parameters, and that at one point the PMON process dumped.
    - The system log just shows an error executing "crswrapexece.pl".
    System Log
    ===================
    Oct 24 19:25:03 host2 root: [ID 702911 user.error] exec /u01/11.2.0/grid/perl/bin/perl -I/u01/11.2.0/grid/perl/lib /u01/11.2.0/grid/bin/crswrapexece.pl /u01/11.2.0/grid/crs/install/s_crsconfig_host2_env.txt /u01/11.2.0/grid/bin/ohasd.bin "reboot"
    Oct 24 19:26:33 host2 oracleoks: [ID 902884 kern.notice] [Oracle OKS] mallocing log buffer, size=10485760
    Oct 24 19:26:33 host2 oracleoks: [ID 714332 kern.notice] [Oracle OKS] log buffer = 0x301780fcb50, size 10485760
    Oct 24 19:26:33 host2 oracleoks: [ID 400061 kern.notice] NOTICE: [Oracle OKS] ODLM hash size 16384
    Oct 24 19:26:33 host2 oracleoks: [ID 160659 kern.notice] NOTICE: OKSK-00004: Module load succeeded. Build information: (LOW DEBUG) USM_11.2.0.3.0_SOLARIS.SPARC64_110803.1 2011/08/11 02:38:30
    Oct 24 19:26:33 host2 pseudo: [ID 129642 kern.info] pseudo-device: oracleadvm0
    Oct 24 19:26:33 host2 genunix: [ID 936769 kern.info] oracleadvm0 is /pseudo/oracleadvm@0
    Oct 24 19:26:33 host2 oracleoks: [ID 141287 kern.notice] NOTICE: ADVMK-00001: Module load succeeded. Build information: (LOW DEBUG) - USM_11.2.0.3.0_SOLARIS.SPARC64_110803.1 built on 2011/08/11 02:40:17.
    Oct 24 19:26:33 host2 oracleacfs: [ID 202941 kern.notice] NOTICE: [Oracle ACFS] FCB hash size 16384
    Oct 24 19:26:33 host2 oracleacfs: [ID 671725 kern.notice] NOTICE: [Oracle ACFS] buffer cache size 511MB (79884 buckets)
    Oct 24 19:26:33 host2 oracleacfs: [ID 730054 kern.notice] NOTICE: [Oracle ACFS] DLM hash size 16384
    Oct 24 19:26:33 host2 oracleoks: [ID 617314 kern.notice] NOTICE: ACFSK-0037: Module load succeeded. Build information: (LOW DEBUG) USM_11.2.0.3.0_SOLARIS.SPARC64_110803.1 2011/08/11 02:42:45
    Oct 24 19:26:33 host2 pseudo: [ID 129642 kern.info] pseudo-device: oracleacfs0
    Oct 24 19:26:33 host2 genunix: [ID 936769 kern.info] oracleacfs0 is /pseudo/oracleacfs@0
    Oct 24 19:26:36 host2 oracleoks: [ID 621795 kern.notice] NOTICE: OKSK-00010: Persistent OKS log opened at /u01/11.2.0/grid/log/host2/acfs/acfs.log.0.
    Oct 24 19:31:37 host2 last message repeated 1 time
    Oct 24 19:33:05 host2 CLSD: [ID 770310 daemon.notice] The clock on host host2 has been updated by the Cluster Time Synchronization Service to be synchronous with the mean cluster time.
    ASM alert log
    ====================================================================
    <msg time='2011-10-24T19:35:48.776+01:00' org_id='oracle' comp_id='asm'
    client_id='' type='UNKNOWN' level='16'
    host_id='host2' host_addr='10.172.16.200' module=''
    pid='26406'>
    <txt>System state dump requested by (instance=2, osid=26396 (PMON)), summary=[abnormal instance termination].
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:48.778+01:00' org_id='oracle' comp_id='asm'
    client_id='' type='UNKNOWN' level='16'
    host_id='host2' host_addr='10.172.16.200' module=''
    pid='26406'>
    <txt>System State dumped to trace file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_diag_26406.trc
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:48.927+01:00' org_id='oracle' comp_id='asm'
    type='UNKNOWN' level='16' host_id='host2'
    host_addr='10.172.16.200' pid='26470'>
    <txt>ORA-1092 : opitsk aborting process
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:49.128+01:00' org_id='oracle' comp_id='asm'
    type='UNKNOWN' level='16' host_id='host2'
    host_addr='10.172.16.200' pid='26472'>
    <txt>ORA-1092 : opitsk aborting process
    </txt>
    </msg>
    Output from "/u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_diag_26406.trc"
    REQUEST:system state dump at level 10, requested by (instance=2, osid=26396 (PMON)), summary=[abnormal instance termination].
    kjzdattdlm: Can not attach to DLM (LMON up=[TRUE], DB mounted=[FALSE]).
    ===================================================
    SYSTEM STATE (level=10)
    Orapids on dead process list: [count = 0]
    PROCESS 1:
    SO: 0x3df098b50, type: 2, owner: 0x0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df098b50, name=process, file=ksu.h LINE:12616 ID:, pg=0
    (process) Oracle pid:1, ser:0, calls cur/top: 0x0/0x0
    flags : (0x20) PSEUDO
    flags2: (0x0), flags3: (0x10)
    intr error: 0, call error: 0, sess error: 0, txn error 0
    intr queue: empty
    ksudlp FALSE at location: 0
    (post info) last post received: 0 0 0
    last post received-location: No post
    last process to post me: none
    last post sent: 0 0 0
    last post sent-location: No post
    last process posted by me: none
    (latch info) wait_event=0 bits=0
    O/S info: user: , term: , ospid: (DEAD)
    OSD pid info: Unix process pid: 0, image: PSEUDO
    SO: 0x38000cef0, type: 5, owner: 0x3df098b50, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x0, name=kss parent, file=kss2.h LINE:138 ID:, pg=0
    PSO child state object changes :
    Dump of memory from 0x00000003DF722AC0 to 0x00000003DF722CC8
    3DF722AC0 00000000 00000000 00000000 00000000 [................]
    Repeat 31 times
    3DF722CC0 00000000 00000000 [........]
    PROCESS 2: PMON
    SO: 0x3df099bf8, type: 2, owner: 0x0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=process, file=ksu.h LINE:12616 ID:, pg=0
    (process) Oracle pid:2, ser:1, calls cur/top: 0x3db6c8d30/0x3db6c8d30
    flags : (0xe) SYSTEM
    flags2: (0x0), flags3: (0x10)
    intr error: 0, call error: 0, sess error: 0, txn error 0
    intr queue: empty
    ksudlp FALSE at location: 0
    (post info) last post received: 0 0 136
    last post received-location: kjm.h LINE:1228 ID:kjmdmi: pmon to attach
    last process to post me: 3df0a2138 1 6
    last post sent: 0 0 137
    last post sent-location: kjm.h LINE:1230 ID:kjiath: pmon attached
    last process posted by me: 3df0a2138 1 6
    (latch info) wait_event=0 bits=0
    Process Group: DEFAULT, pseudo proc: 0x3debbbf40
    O/S info: user: grid, term: UNKNOWN, ospid: 26396
    OSD pid info: Unix process pid: 26396, image: oracle@host2 (PMON)
    SO: 0x3d8800c18, type: 30, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=ges process, file=kji.h LINE:3669 ID:, pg=0
    GES MSG BUFFERS: st=emp chunk=0x0 hdr=0x0 lnk=0x0 flags=0x0 inc=0
    outq=0 sndq=0 opid=0 prmb=0x0
    mbg=(0 0) mbg=(0 0) mbg[r]=(0 0)
    fmq=(0 0) fmq=(0 0) fmq[r]=(0 0)
    mop[s]=0 mop[q]=0 pendq=0 zmbq=0
    nonksxp_recvs=0
    ------------process 3d8800c18--------------------
    proc version : 0
    Local inst : 2
    pid : 26396
    lkp_inst : 2
    svr_mode : 0
    proc state : KJP_FROZEN
    Last drm hb acked : 0
    flags : x50
    ast_rcvd_svrmod : 0
    current lock op : 0
    Total accesses : 1
    Imm. accesses : 0
    Locks on ASTQ : 0
    Locks Pending AST : 0
    Granted locks : 0
    AST_Q:
    PENDING_Q:
    GRANTED_Q:
    SO: 0x3d9835198, type: 14, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x1 c: 0x1
    proc=0x3df099bf8, name=channel handle, file=ksr2.h LINE:367 ID:, pg=0
    (broadcast handle) 3d9835198 flag: (2) ACTIVE SUBSCRIBER,
    owner: 3df099bf8 - ospid: 26396
    event: 1, last message event: 1,
    last message waited event: 1,
    next message: 0(0), messages read: 0
    channel: (3d9934df8) PMON actions channel [name: 2]
    scope: 7, event: 1, last mesage event: 0,
    publishers/subscribers: 0/1,
    messages published: 0
    heuristic msg queue length: 0
    SO: 0x3d9835008, type: 14, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x1 c: 0x1
    proc=0x3df099bf8, name=channel handle, file=ksr2.h LINE:367 ID:, pg=0
    (broadcast handle) 3d9835008 flag: (2) ACTIVE SUBSCRIBER,
    owner: 3df099bf8 - ospid: 26396
    event: 1, last message event: 1,
    last message waited event: 1,
    next message: 0(0), messages read: 0
    channel: (3d9941e40) scumnt mount lock [name: 157]
    scope: 1, event: 12, last mesage event: 0,
    publishers/subscribers: 0/12,
    messages published: 0
    heuristic msg queue length: 0
    SO: 0x3de4a2b80, type: 4, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=session, file=ksu.h LINE:12624 ID:, pg=0
    (session) sid: 33 ser: 1 trans: 0x0, creator: 0x3df099bf8
    flags: (0x51) USR/- flags_idl: (0x1) BSY/-/-/-/-/-
    flags2: (0x409) -/-/INC
    DID: , short-term DID:
    txn branch: 0x0
    oct: 0, prv: 0, sql: 0x0, psql: 0x0, user: 0/SYS
    ksuxds FALSE at location: 0
    service name: SYS$BACKGROUND
    Current Wait Stack:
    Not in wait; last wait ended 0.666415 sec ago
    Wait State:
    fixed_waits=0 flags=0x21 boundary=0x0/-1
    Session Wait History:
    elapsed time of 0.666593 sec since last wait
    0: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=63 seq_num=64 snap_id=1
    wait times: snap=3.000089 sec, exc=3.000089 sec, total=3.000089 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.002067 sec of elapsed time
    1: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=62 seq_num=63 snap_id=1
    wait times: snap=3.010111 sec, exc=3.010111 sec, total=3.010111 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.001926 sec of elapsed time
    2: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=61 seq_num=62 snap_id=1
    wait times: snap=3.125286 sec, exc=3.125286 sec, total=3.125286 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.003361 sec of elapsed time
    3: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=60 seq_num=61 snap_id=1
    wait times: snap=3.000081 sec, exc=3.000081 sec, total=3.000081 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.002102 sec of elapsed time
    4: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0

  • Root.sh fails for 11gR2 Grid Infrastructure installation on AIX 6.1

    Hello all,
    root.sh fails with the errors below. I have opened an SR with Oracle and will post the resolution when it is available. Any insights in the meantime? Thank you!
    System information:
    OS: AIX 6.1
    runcluvfy.sh reported no issues
    Permissions on the raw devices are set to 660 and ownership is oracle:dba
    Using external redundancy for ASM; the ASM instance is online
    Permissions on the block and raw device files:
    system1:ux460p1> ls -l /dev/hdisk32
    brw-rw---- 1 oracle dba 17, 32 Mar 11 16:50 /dev/hdisk32
    system11:ux460p1> ls -l /dev/rhdisk32
    crw-rw---- 1 oracle dba 17, 32 Mar 12 15:52 /dev/rhdisk32
    ocrconfig.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:17:19.773: [ OCRCONF][1]ocrconfig starts...
    2010-03-15 19:17:19.775: [ OCRCONF][1]Upgrading OCR data
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.604: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:20.604: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
    2010-03-15 19:17:20.604: [  OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
    2010-03-15 19:17:20.605: [ OCRCONF][1]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:21.412: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.412: [  OCRRAW][1]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-03-15 19:17:21.414: [  OCRRAW][1]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-03-15 19:17:21.414: [  OCRRAW][1]proprinit:problem reading the bootblock or superbloc 22
    2010-03-15 19:17:21.534: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.701: [  OCRRAW][1]iniconfig:No 92 configuration
    2010-03-15 19:17:21.701: [  OCRAPI][1]a_init:6a: Backend init successful
    2010-03-15 19:17:21.764: [ OCRCONF][1]Initialized DATABASE keys
    2010-03-15 19:17:21.770: [ OCRCONF][1]Successfully set skgfr block 0
    2010-03-15 19:17:21.771: [ OCRCONF][1]Exiting [status=success]...
    **alert.log**
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:12:00.148
    [client(483478)]CRS-2106:The OLR location /u01/app/grid/cdata/ux460p1.olr is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_483478.log.
    2010-03-15 19:12:00.171
    [client(483478)]CRS-2101:The OLR was formatted using version 3.
    2010-03-15 14:16:18.620
    [ohasd(471204)]CRS-2112:The OLR service started on node ux460p1.
    2010-03-15 14:16:18.720
    [ohasd(471204)]CRS-8017:location: /etc/oracle/lastgasp has 8 reboot advisory log files, 0 were announced and 0 errors occurred
    2010-03-15 14:16:18.847
    [ohasd(471204)]CRS-2772:Server 'ux460p1' has been assigned to pool 'Free'.
    2010-03-15 14:16:54.107
    [ctssd(340174)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:16:54.123
    [ctssd(340174)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:16:54.917
    [ctssd(340174)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 19:17:21.414
    [client(376968)]CRS-1006:The OCR location +DATA is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_376968.log.
    2010-03-15 19:17:21.701
    [client(376968)]CRS-1001:The OCR was formatted using version 3.
    2010-03-15 14:17:24.888
    [crsd(303252)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:17:56.344
    [ctssd(340174)]CRS-2405:The Cluster Time Synchronization Service on host ux460p1 is shutdown by user
    2010-03-15 14:19:14.855
    [ctssd(340188)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:19:14.870
    [ctssd(340188)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:19:15.638
    [ctssd(340188)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 14:19:32.985
    [crsd(417946)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:19:35.250
    [crsd(417946)]CRS-1201:CRSD started on node ux460p1.
    2010-03-15 14:19:35.698
    [ohasd(471204)]CRS-2765:Resource 'ora.crsd' has failed on server 'ux460p1'.
    2010-03-15 14:19:38.928

    Public and Private are on different devices and subnets.
    There is no logfile named ocrconfig_7833.log.
    I do have ocrconfig_7089.log and ocrconfig_8985.log. Here are their contents:
    ocrconfig_7089.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:38:32.518: [ OCRCONF][2819644944]ocrconfig starts...
    2010-11-09 13:38:32.542: [ OCRCONF][2819644944]Upgrading OCR data
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.576: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:38:32.577: [  OCRAPI][2819644944]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]There was no previous version of OCR. error:[PROCL-33: Oracle Local Registry is not configured]
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.578: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.579: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.681: [  OCRAPI][2819644944]a_init:6a: Backend init successful
    2010-11-09 13:38:32.699: [ OCRCONF][2819644944]Initialized DATABASE keys
    2010-11-09 13:38:32.700: [ OCRCONF][2819644944]Exiting [status=success]...
    ocrconfig_8985.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:41:28.169: [ OCRCONF][2281741840]ocrconfig starts...
    2010-11-09 13:41:28.175: [ OCRCONF][2281741840]Upgrading OCR data
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:31.208: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:31.210: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:31.211: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.213: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:31.214: [ OCRCONF][2281741840]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:41:31.216: [  OCRAPI][2281741840]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:41:31.216: [ OCRCONF][2281741840]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.536: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:35.359: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:35.361: [  OCRRAW][2281741840]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:41:35.843: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:36.430: [  OCRRAW][2281741840]iniconfig:No 92 configuration
    2010-11-09 13:41:36.431: [  OCRAPI][2281741840]a_init:6a: Backend init successful
    2010-11-09 13:41:36.540: [ OCRCONF][2281741840]Initialized DATABASE keys
    2010-11-09 13:41:36.545: [ OCRCONF][2281741840]Successfully set skgfr block 0
    2010-11-09 13:41:36.552: [ OCRCONF][2281741840]Exiting [status=success]...
    Both of these log files show errors, yet both end with [status=success]?

  • 11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2

    Hi there,
    I am trying to create a two-node 11g R2 RAC on OEL 5.5 (32-bit) using VMware virtual machines. I have correctly configured both nodes. The Cluster Verification Utility returns only the following error (which I believe can be ignored):
    Checking daemon liveness...
    Liveness check failed for "ntpd"
    Check failed on nodes:
    rac2,rac1
    PRVF-5415 : Check to see if NTP daemon is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
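    If the plan is to rely on CTSS (as the later alert log entries suggest, since it runs in observer mode), this cluvfy failure is indeed ignorable. If NTP is meant to stay, 11gR2's check also expects ntpd to run with the -x (slew) option, which on OEL 5 lives in the OPTIONS line of /etc/sysconfig/ntpd. A hypothetical helper to check for it, demonstrated against a temp file so it is safe to run anywhere:

    ```shell
    # Hypothetical check: does an ntpd sysconfig file enable slewing (-x)?
    # Point it at /etc/sysconfig/ntpd on a real node.
    ntp_slew_ok() {
      grep -q '^OPTIONS=.*-x' "$1" && echo "slewing enabled" || echo "slewing missing"
    }
    # Demo against a throwaway file with a typical OEL 5 OPTIONS line.
    printf 'OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"\n' > /tmp/ntpd.sysconfig.demo
    ntp_slew_ok /tmp/ntpd.sysconfig.demo   # prints "slewing enabled"
    ```
    
    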
    During the Grid Infrastructure installation (Install for a Cluster option), everything goes smoothly until I run "root.sh" on node 2. orainstRoot.sh ran OK on both nodes. "root.sh" ran OK on node 1 and ends with:
    Checking swap space: must be greater than 500 MB.   Actual 1967 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    *'UpdateNodeList' was successful.*
    *[root@rac1 ~]#*
    "root.sh" fails on rac2 (2nd node) with following error:
    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
    Timed out waiting for the CRS stack to start.
    *[root@rac2 ~]#*
    I know this info may not be enough to figure out the problem. Please let me know what I should look for to find and fix the issue. It's been almost two weeks now :-(
    Regards
    Amer

    Hi Zheng,
    ocssd.log is HUGE, so I am posting a few of its last lines, hoping they may give some clue:
    2011-07-04 19:49:24.007: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500  cur_ms 36118424 lastalive 36116244
    2011-07-04 19:49:26.005: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
    2011-07-04 19:49:26.006: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500  cur_ms 36120424 lastalive 36116244
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:48.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:48.005: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.003: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.008: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:12.009: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500  cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:15.796: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500  cur_ms 36170214 lastalive 36168084
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500  cur_ms 36172244 lastalive 36170674
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500  cur_ms 36180424 lastalive 36178804
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500  cur_ms 36182414 lastalive 36180624
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:37.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:37.002: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    *<end of log file>*
    And the alertrac2.log contains:
    *[root@rac2 rac2]# cat alertrac2.log*
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-07-02 16:43:51.571
    [client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
    2011-07-02 16:43:57.125
    [client(16134)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 16:44:43.214
    [ohasd(16188)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 16:45:06.446
    [ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 16:53:30.061
    [ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 16:53:55.042
    [cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 16:54:38.334
    [cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    [cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 16:54:38.464
    [cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 16:54:39.174
    [ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 16:55:43.430
    [cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 16:56:02.852
    [cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 16:56:04.061
    [cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 16:56:18.350
    [cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 16:56:29.283
    [ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 16:56:29.551
    [ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 16:56:29.615
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 16:56:29.616
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 16:56:29.641
    [ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18056)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:01:40.963
    [ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
    [client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18594)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 17:46:48.717
    [crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:49.641
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:51.459
    [crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:51.776
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:53.928
    [crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:53.956
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:55.834
    [crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:56.273
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:57.762
    [crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:58.631
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:00.259
    [crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:00.968
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:02.513
    [crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:03.309
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:05.081
    [crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:05.770
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:07.796
    [crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:08.257
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:10.733
    [crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:11.739
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:13.547
    [crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:14.111
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:14.112
    [ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    [client(26883)]CRS-10001:ACFS-9200: Supported
    2011-07-02 18:13:34.627
    [ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
    2011-07-02 18:13:42.368
    [cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:15:13.877
    [client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
    2011-07-02 18:15:14.011
    [client(27222)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 18:15:23.226
    [ohasd(27261)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 18:15:23.688
    [ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
    2011-07-02 18:15:24.064
    [ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 18:16:29.761
    [ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 18:16:30.190
    [gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:16:41.561
    [cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 18:16:49.111
    [cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:16:49.166
    [cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    [cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 18:17:01.122
    [cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:17:06.917
    [ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 18:17:23.602
    [mdnsd(28485)]CRS-5602:mDNS service stopping by request.
    2011-07-02 18:17:36.217
    [gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:17:43.673
    [cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 18:17:49.826
    [cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:17:49.865
    [cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 18:18:03.049
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 18:18:06.160
    [ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 18:18:06.220
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 18:18:06.238
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:18:06.239
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 18:18:06.794
    [ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(28895)]CRS-10001:ACFS-9322: done.
    2011-07-02 18:18:33.465
    [crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:33.575
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:35.757
    [crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:36.129
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:38.596
    [crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:39.146
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:41.058
    [crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:41.435
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:44.255
    [crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:45.165
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:47.013
    [crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:47.409
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:50.071
    [crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:50.118
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:51.843
    [crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:52.373
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:54.361
    [crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:54.772
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:56.620
    [crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:57.104
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:58.997
    [crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:59.301
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:59.302
    [ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:55:21.907
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:55:21.908
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:57:54.682
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:57:54.683
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 23:07:28.603
    [cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.020 seconds
    2011-07-02 23:07:35.621
    [cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.010 seconds
    2011-07-02 23:07:39.629
    [cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds
    2011-07-02 23:07:42.641
    [cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
    2011-07-02 23:07:44.751
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
    2011-07-02 23:07:45.326
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
    2011-07-04 19:46:26.008
    [ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
    [ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
    This log file starts with a complaint that the OLR is not accessible. Here is what I see on rac2:
    -rw------- 1 root oinstall 272756736 Jul  2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
    And I guess the rest of the problems start from this.
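    For what it's worth, CRS-2106 is often just a permissions or ownership problem on the .olr file, so comparing the file's owner, group, and mode against the working node is a cheap first check before retrying root.sh. A minimal sketch of that check (the check_olr helper name is made up; the expected root:oinstall/600 values come from the listing above):

```shell
#!/bin/sh
# Hypothetical helper: report ownership and mode of an OLR file so they can
# be compared against the healthy node before re-running root.sh.
check_olr() {
    olr="$1"
    if [ ! -f "$olr" ]; then
        echo "missing: $olr"
        return 1
    fi
    # stat -c is GNU coreutils syntax (Linux); a healthy OLR is normally
    # root:oinstall with mode 600, as in the listing above.
    echo "$olr: $(stat -c 'owner=%U group=%G mode=%a' "$olr")"
}

check_olr /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr || true
```

    If the owner, group, or mode differs from the healthy node, fixing that (or deconfiguring and re-running root.sh) is usually the next step.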

  • Root.sh failed on second node while installing CRS 10g on CentOS 5.5

    root.sh failed on second node while installing CRS 10g
    Hi all,
    I am able to install Oracle 10g RAC clusterware on the first node of the cluster. However, when I run the root.sh script as the root
    user on the second node of the cluster, it fails with the following error message:
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    I then ran cluvfy stage -post hwos -n all -verbose, and it shows this message:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking shared storage accessibility...
    Disk Sharing Nodes (2 in count)
    /dev/sda db2 db1
    I ran cluvfy stage -pre crsinst -n all -verbose, and it shows this message:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking system requirements for 'crs'...
    No checks registered for this product.
    I ran cluvfy stage -post crsinst -n all -verbose, and it shows this message:
    Result: Node reachability check passed from node "DB2".
    Result: User equivalence check passed for user "oracle".
    Node Name CRS daemon CSS daemon EVM daemon
    db2 no no no
    db1 yes yes yes
    Check: Health of CRS
    Node Name CRS OK?
    db1 unknown
    Result: CRS health check failed.
    Checking crsd.log shows this message:
    clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
    clsssInitNative: connect failed, rc 9
    Any help would be greatly appreciated.
    Edited by: 868121 on 2011-06-24, 12:31 AM

    Hello, it took a little searching, but I found this in a note in the GRID installation guide for Linux/UNIX:
    Public IP addresses and virtual IP addresses must be in the same subnet.
    In your case, you are using two different subnets for the VIPs.
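    The subnet rule is easy to verify by hand: AND each address with the netmask and compare the network parts. A throwaway sketch of that arithmetic (same_subnet and ip2int are hypothetical helpers, not part of cluvfy; the addresses are examples):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
# (Plain decimal octets only; leading zeros would be read as octal.)
ip2int() {
    oldifs=$IFS
    IFS=.
    set -- $1
    IFS=$oldifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed when both addresses land in the same network under the netmask.
same_subnet() {
    n1=$(( $(ip2int "$1") & $(ip2int "$3") ))
    n2=$(( $(ip2int "$2") & $(ip2int "$3") ))
    [ "$n1" -eq "$n2" ]
}

same_subnet 192.168.1.10 192.168.1.20 255.255.255.0 && echo "same subnet"
same_subnet 192.168.1.10 10.0.0.20 255.255.255.0 || echo "different subnets"
```

    Running each public IP and its VIP through such a check against the public interface's netmask quickly confirms whether the "same subnet" requirement is met.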

  • Asm 11.1.0.6 root.sh  fails to start css

    We run Oracle ASM and are installing 11.1.0.6 on a new machine (we will eventually apply the 11.1.0.7 patchset), so we are on 11g R1 in our production environments.
    When installing Oracle ASM 11.1.0.6, root.sh fails.
    I checked various Metalink notes; all settings are OK. We use ASMLib, and I configured ASMLib with the new disks prior to the ASM installation.
    Just to rule out ASMLib as the root cause, I also disabled it. Still the same problem. I ran the usual localconfig delete and localconfig add too.
    Startup will be queued to init within 30 seconds.
    Checking the status of new Oracle init process...
    Expecting the CRS daemons to be up within 600 seconds.
    Giving up: Oracle CSS stack appears NOT to be running.
    Oracle CSS service would not start as installed
    Automatic Storage Management(ASM) cannot be used until Oracle CSS service is started
    Finished product-specific root actions.

    I applied the 11.1.0.7 patchset. Still the same problem.
    Strangely, there are no log files in $ASM_HOME/log/<hostname>/cssd/ - this directory is empty.
    Below are the final messages from root.sh:
    Startup will be queued to init within 30 seconds.
    Checking the status of new Oracle init process...
    Expecting the CRS daemons to be up within 600 seconds.
    Giving up: Oracle CSS stack appears NOT to be running.
    Oracle CSS service would not start as installed
    Automatic Storage Management(ASM) cannot be used until Oracle CSS service is started
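    When the cssd log directory is empty, the saved root.sh output is often the only evidence left. A tiny triage sketch that flags the CSS failure signature in such a transcript (the css_start_failed helper and the transcript path are illustrative, not Oracle tooling):

```shell
#!/bin/sh
# Flag the "Giving up" CSS signature in a saved root.sh transcript.
css_start_failed() {
    grep -q "Oracle CSS stack appears NOT to be running" "$1"
}

# Usage against a saved transcript (path is an example):
# css_start_failed /tmp/rootsh.out && echo "CSS never came up"
```

    A hit here means CSS never reached a running state, so the next place to look is init (the CSS entries in /etc/inittab) and the clusterware alert log rather than the ASM instance itself.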

  • Oracle 11gR2 RAC Root.sh Failed On The Second Node

    Hello,
    When installing Oracle 11gR2 RAC on AIX 7.1, root.sh succeeds on the first node but fails on the second node:
    I get the error "Root.sh Failed On The Second Node With Error ORA-15018 ORA-15031 ORA-15025 ORA-27041 [ID 1459711.1]" during the Oracle installation.
    Applies to:
    Oracle Server - 11gR2 RAC
    EMC VNX 500
    IBM AIX on POWER Systems (64-bit)
    /dev/rhdiskpower0 does not show up in the kfod output on the second node. It is an EMC multipath disk device.
    But the disk can be found with AIX commands.
    Any help would be appreciated!
    Thanks

    The suggested solution is to uninstall "EMC Solutions Enabler", but on this machine I only find "EMC Migration Enabler", and it can't be removed without removing EMC PowerPath.
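    Before removing PowerPath, it may be worth confirming that the device exists with the same ownership and permissions on both nodes, since kfod skips candidate devices the grid software owner cannot read. A generic sketch of such a comparison (GNU stat syntax shown for brevity; on AIX you would use ls -l instead, and dev_summary is a made-up helper):

```shell
#!/bin/sh
# Print owner, group and mode for a device path so the output can be
# diffed between nodes (GNU stat syntax; hypothetical helper name).
dev_summary() {
    if [ -e "$1" ]; then
        stat -c '%n owner=%U group=%G mode=%a' "$1"
    else
        echo "$1 missing"
        return 1
    fi
}

dev_summary /dev/null || true
```

    Running the equivalent on both nodes and diffing the output shows immediately whether the second node's /dev/rhdiskpower0 differs in ownership or mode from the node where kfod sees the disk.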

  • Root.sh failed in one node - CLSMON and UDLM

    Hi experts.
    My environment is:
    2-node SunCluster Update3
    Oracle RAC 10.2.0.1 (planning to upgrade to 10.2.0.4)
    The problem is: I installed the CRS software on both nodes OK.
    After that, running root.sh fails on one node:
    /u01/app/product/10/CRS/root.sh
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Checking to see if any 9i GSD is up
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 0: spodhcsvr10 clusternode1-priv spodhcsvr10
    node 1: spodhcsvr12 clusternode2-priv spodhcsvr12
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Sep 22 13:34:17 spodhcsvr10 root: Oracle Cluster Ready Services starting by user request.
    Startup will be queued to init within 30 seconds.
    Sep 22 13:34:20 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Sep 22 13:34:34 spodhcsvr10 last message repeated 3 times
    Sep 22 13:34:34 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:34:40 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:35:43 spodhcsvr10 last message repeated 9 times
    Sep 22 13:36:07 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:36:07 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:36:14 spodhcsvr10 su: libsldap: Status: 85 Mesg: openConnection: simple bind failed - Timed out
    Sep 22 13:36:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:37:35 spodhcsvr10 last message repeated 11 times
    Sep 22 13:37:40 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:37:40 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:37:42 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:38:03 spodhcsvr10 last message repeated 3 times
    Sep 22 13:38:10 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:39:12 spodhcsvr10 last message repeated 9 times
    Sep 22 13:39:13 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:39:13 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:39:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:40:42 spodhcsvr10 last message repeated 12 times
    Sep 22 13:40:46 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:40:46 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:40:49 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:05 spodhcsvr10 last message repeated 11 times
    Sep 22 13:42:11 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:12 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:19 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:19 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:42:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:43:49 spodhcsvr10 last message repeated 13 times
    Sep 22 13:43:51 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:43:51 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:43:56 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Failure at final check of Oracle CRS stack.
    I traced ocssd.log and found some information:
    [    CSSD]2010-09-22 14:04:14.739 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:14.742 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.742 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:14.744 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.745 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:14.746 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.785 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.786 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:23.075 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:23.075 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:23.082 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:23.096 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:23.106 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:23.132 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:23.136 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:23.139 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:23.143 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.139 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:25.142 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.143 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:25.144 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.145 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.148 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2489) LATS(0) Disk lastSeqNo(2489)
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.186 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.187 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:33.449 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:33.449 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:33.457 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:33.470 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:33.480 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:33.500 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:33.505 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:33.508 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:33.510 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.508 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:35.510 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.510 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:35.512 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.513 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.514 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    I believe the main error is:
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    and in the communication between UDLM and CLSMON, but I don't know how to resolve this.
    My UDLM version is 3.3.4.9.
    Does anybody have any ideas about this?
    Tks!

    Now I finally installed CRS and ran root.sh without errors (I think the problem was some old file left over from earlier installation attempts...).
    But now I have another problem: when installing the DB software, at the step that copies the installation to the remote node, that node hits a CLSMON/CSSD failure and panics:
    Sep 23 16:10:51 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:51 spodhcsvr10 root: [ID 702911 user.error] Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: fatal: received signal 15
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: [ID 770355 daemon.error] fatal: received signal 15
    Sep 23 16:10:59 spodhcsvr10 root: Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.
    Notifying cluster that this node is panicking
    The installation on the first node continues and reports an error copying to the second node.
    Any ideas? Tks!
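    The syslog excerpts above carry the key evidence: a CLSMON respawn loop followed by a CSSD-triggered reboot. A small sketch for pulling those signatures out of a saved copy of the system log (the helper names and file path are illustrative):

```shell
#!/bin/sh
# Count the two failure signatures from the syslog excerpt in a saved file.
clsmon_respawns() { grep -c "Oracle CLSMON terminated" "$1"; }
cssd_reboots()    { grep -c "Oracle CSSD failure" "$1"; }

# Usage against a saved copy of the system log (path is an example):
# clsmon_respawns /tmp/messages.sample
```

    A high respawn count with no successful CRSD start, followed by a CSSD reboot event, points at the CLSMON/UDLM layer rather than at CRS configuration, which matches the poster's suspicion.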

  • Root.sh fails - ASM won't shut down

    I am trying to install Clusterware 11gR2 on Oracle Enterprise Linux 5. This is all running in an Oracle VirtualBox environment, using ASM for the cluster disk served from an Openfiler. It is a 32-bit installation with 1 GB of memory allocated to each node. I had to ignore errors for the following:
    < 1.5 GB memory
    small swap file
    nscd and ntpd processes not running (not connected to the internet); I understand Oracle will install its own time synchronization service if NTP is not present
    Running root.sh fails because ASM won't shut down properly:
    [root@odbn1 grid]# ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-08-24 14:46:44: Parsing the host name
    2010-08-24 14:46:44: Checking for super user privileges
    2010-08-24 14:46:44: User has super user privileges
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-2672: Attempting to start 'ora.gipcd' on 'odbn1'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'odbn1'
    CRS-2676: Start of 'ora.mdnsd' on 'odbn1' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'odbn1'
    CRS-2676: Start of 'ora.gpnpd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'odbn1'
    CRS-2676: Start of 'ora.cssdmonitor' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'odbn1'
    CRS-2672: Attempting to start 'ora.diskmon' on 'odbn1'
    CRS-2676: Start of 'ora.diskmon' on 'odbn1' succeeded
    CRS-2676: Start of 'ora.cssd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'odbn1'
    CRS-2676: Start of 'ora.ctssd' on 'odbn1' succeeded
    ASM created and started successfully.
    DiskGroup CLSVOL1 created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-2672: Attempting to start 'ora.crsd' on 'odbn1'
    CRS-2676: Start of 'ora.crsd' on 'odbn1' succeeded
    CRS-4256: Updating the profile
    Successful addition of voting disk 502348953fc24f1cbf9c9f0fdf5cf2e0.
    Successfully replaced voting disk group with +CLSVOL1.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 502348953fc24f1cbf9c9f0fdf5cf2e0 (/dev/oracleasm/disks/CRS) [CLSVOL1]
    Located 1 voting disk(s).
    CRS-2673: Attempting to stop 'ora.crsd' on 'odbn1'
    CRS-2677: Stop of 'ora.crsd' on 'odbn1' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'odbn1'
    ORA-15097: cannot SHUTDOWN ASM instance with connected client
    CRS-2675: Stop of 'ora.asm' on 'odbn1' failed
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.asm -init
    Stop of resource "ora.asm -init" failed
    Failed to stop ASM
    CRS-2673: Attempting to stop 'ora.ctssd' on 'odbn1'
    CRS-2677: Stop of 'ora.ctssd' on 'odbn1' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'odbn1'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'odbn1' succeeded
    CRS-2529: Unable to act on 'ora.cssd' because that would require stopping or relocating 'ora.asm', but the force option was not specified
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.cssd -init
    Failed to exit exclusive mode
    Initial cluster configuration failed. See /u01/app/grid/cfgtoollogs/crsconfig/rootcrs_odbn1.log for details
    [root@odbn1 grid]#
    The rootcrs_odbn1.log shows the same error:
    [root@odbn1 crsconfig]# tail -50 rootcrs_odbn1.log
    2010-08-24 15:10:38: /bin/su successfully executed
    2010-08-24 15:10:38: /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso => /u01/app/grid/gpnp/wallets/prdr/cwallet.sso
    2010-08-24 15:10:38: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:10:38: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:10:38: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2 '
    2010-08-24 15:11:20: Removing file /tmp/file1ye3x8
    2010-08-24 15:11:21: Successfully removed file: /tmp/file1ye3x8
    2010-08-24 15:11:21: /bin/su successfully executed
    2010-08-24 15:11:21: /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso => /u01/app/grid/gpnp/wallets/pa/cwallet.sso
    2010-08-24 15:11:21: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:11:21: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:11:21: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2 '
    2010-08-24 15:11:49: Removing file /tmp/filelEb5Lp
    2010-08-24 15:11:49: Successfully removed file: /tmp/filelEb5Lp
    2010-08-24 15:11:50: /bin/su successfully executed
    2010-08-24 15:11:50: /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt => /u01/app/grid/gpnp/wallets/root/b64certificate.txt
    2010-08-24 15:11:50: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2
    2010-08-24 15:11:50: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2
    2010-08-24 15:11:50: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:12:26: Removing file /tmp/fileUQzFxE
    2010-08-24 15:12:27: Successfully removed file: /tmp/fileUQzFxE
    2010-08-24 15:12:27: /bin/su successfully executed
    2010-08-24 15:12:27: /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt => /u01/app/grid/gpnp/wallets/peer/cert.txt
    2010-08-24 15:12:27: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:27: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:27: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:12:47: Removing file /tmp/filevnw3D8
    2010-08-24 15:12:47: Successfully removed file: /tmp/filevnw3D8
    2010-08-24 15:12:47: /bin/su successfully executed
    2010-08-24 15:12:47: /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt => /u01/app/grid/gpnp/wallets/pa/cert.txt
    2010-08-24 15:12:47: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:47: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:47: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:13:19: Removing file /tmp/fileArkUFi
    2010-08-24 15:13:20: Successfully removed file: /tmp/fileArkUFi
    2010-08-24 15:13:20: /bin/su successfully executed
    2010-08-24 15:13:20: Exiting exclusive mode
    2010-08-24 15:13:41: Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.asm -init
    2010-08-24 15:13:41: Stop of resource "ora.asm -init" failed
    2010-08-24 15:13:41: Failed to stop ASM
    2010-08-24 15:14:44: Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.cssd -init
    2010-08-24 15:14:44: CSS shutdown failed
    2010-08-24 15:14:44: Failed to exit exclusive mode
    2010-08-24 15:14:44: Initial cluster configuration failed. See /u01/app/grid/cfgtoollogs/crsconfig/rootcrs_odbn1.log for details
    [root@odbn1 crsconfig]#
    Has anyone seen this before? All help greatly appreciated!

    Hi Sebastian,
    Thank you for your quick reply. Here's what I tried:
    1. Reset the install as you suggested and ran root.sh again. The exact same thing happened.
    2. Restored the Openfiler snapshots that were taken right before the failed install. Increased memory on both VMs to 1280M as you suggested, rebooted the virtual environment, and ran the installation again. This time the root.sh script froze my environment - not enough memory to support this.
    3. Restored the Openfiler snapshots again. Increased memory on one node (odbn1) to 1500M and ran the install for a single-node cluster. It got past this error.
    Thank you!!!
    Also, thanks for the tip on the ntp.init file. That did eliminate the error for ntpd not running during the installation.
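Since memory turned out to be the deciding factor above, a quick pre-flight check before running root.sh can save a restore cycle. This is just a sketch for Linux guests; the 1500 MB threshold is simply the value that worked in this thread's VMs, not an official Oracle minimum.

```shell
# Read total memory from /proc/meminfo (Linux) and compare it against
# the 1500 MB that was enough for a single node in this thread.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "MemTotal: ${mem_mb} MB"
if [ "$mem_mb" -lt 1500 ]; then
  echo "WARNING: below the 1500 MB that worked in this thread"
fi
```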

  • Ocrcheck succeeds on node1 but fails on node2 (11gR2 for Windows)

    Hi all, I have installed Oracle Grid Infrastructure with ASM on Windows 2008 x64. Everything completed without any error, but ocrcheck fails on node2. Details are below:
    Checking commands on node 1:
    *>crsctl check crs*
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    *>srvctl status asm -a*
    ASM on rac1,rac2 is running
    ASM enable
    *>crs_stat -t*
    Name Type Target State Host
    ora.DATA.dg ora....up.type ONLINE ONLINE rac1
    ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
    ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.eons ora.eons.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora....SM1.asm application ONLINE ONLINE rac1
    ora....C1.lsnr application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
    ora....SM2.asm application ONLINE ONLINE rac2
    ora....C2.lsnr application ONLINE ONLINE rac2
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application ONLINE ONLINE rac2
    ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
    ora.scan1.vip ora....ip.type ONLINE ONLINE rac2
    *>ocrcheck*
    Status of Oracle Cluster Registry is as follows :
    Version : 3
    Total space (kbytes) : 262120
    Used space (kbytes) : 2364
    Available space (kbytes) : 259756
    ID : 257699632
    Device/File Name : +DATA
    Device/File integrity check succeeded
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Device/File not configured
    Cluster registry integrity check succeeded
    Logical corruption check succeeded
    Checking commands on node 2:
    *>crsctl check crs*
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    *>srvctl status asm -a*
    ASM on rac1,rac2 is running
    ASM enable
    *>crs_stat -t*
    Name Type Target State Host
    ora.DATA.dg ora....up.type ONLINE ONLINE rac1
    ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
    ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.eons ora.eons.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type OFFLINE OFFLINE
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora....SM1.asm application ONLINE ONLINE rac1
    ora....C1.lsnr application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
    ora....SM2.asm application ONLINE ONLINE rac2
    ora....C2.lsnr application ONLINE ONLINE rac2
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application ONLINE ONLINE rac2
    ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
    ora.scan1.vip ora....ip.type ONLINE ONLINE rac2
    *>ocrcheck*
    PROT-602: Failed to retrieve data from the cluster registry
    PROC-26: Error while accessing the physical storage ASM error [SLOS: cat=8, opn=
    kgfolclcpi1, dep=204, loc=kgfokge
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '\\.\ORCLDISKDATA1'
    ] [8]
    Also, asmca runs on node1 but does not work on node2.
    Can anyone help me to resolve this issue?
    Regards.
    Edited by: user8306020 on 2010-7-4 11:10 PM

    Thank you for your reply. I ran "cluvfy comp ocr -n all -verbose" on both nodes, and got the responses below:
    Node1:
    *>cluvfy comp ocr -n all -verbose*
    Verifying OCR integrity
    Checking OCR integrity...
    Checking the absence of a non-clustered configuration...
    All nodes free of non-clustered, local-only configurations
    ASM Running check passed. ASM is running on all cluster nodes
    Disk group for ocr location "+DATA" available on all the nodes
    Checking size of the OCR location "+DATA" ...
    rac2:Size check for OCR location "+DATA" successful...
    rac1:Size check for OCR location "+DATA" successful...
    WARNING:
    This check does not verify the integrity of the OCR contents. Execute 'ocrcheck'
    as a privileged user to verify the contents of OCR.
    OCR integrity check passed
    Verification of OCR integrity was successful.
    =========================================================================
    Node2:
    *>cluvfy comp ocr -n all -verbose*
    Verifying OCR integrity
    Checking OCR integrity...
    Checking the absence of a non-clustered configuration...
    All nodes free of non-clustered, local-only configurations
    ASM Running check passed. ASM is running on all cluster nodes
    Disk group for ocr location "+DATA" available on all the nodes
    Checking size of the OCR location "+DATA" ...
    rac2:Size check for OCR location "+DATA" successful...
    rac1:Size check for OCR location "+DATA" successful...
    WARNING:
    This check does not verify the integrity of the OCR contents. Execute 'ocrcheck'
    as a privileged user to verify the contents of OCR.
    OCR integrity check passed
    Verification of OCR integrity was successful.
    But ocrcheck failed on node2 again. The log file \app\11.2.0\grid\log\rac2\client\ocrcheck_4844.log shows:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-07-05 15:52:44.985: [OCRCHECK][4728]ocrcheck starts...
    2010-07-05 15:52:45.141: [    GPnP][4728]clsgpnp_Init: [at clsgpnp0.c:406] gpnp tracelevel 1, component tracelevel 0
    2010-07-05 15:52:45.141: [    GPnP][4728]clsgpnp_Init: [at clsgpnp0.c:536] 'E:\app\11.2.0\grid' in effect as GPnP home base.
    2010-07-05 15:52:45.157: [    GPnP][4728]clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:398] Using FS Wallet Location : E:\app\11.2.0\grid\gpnp\rac2\wallets\peer\
    [   CLWAL][4728]clsw_Initialize: OLR initlevel [70000]
    2010-07-05 15:52:45.173: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1952] <Get gpnp security keys (wallet) for id:1,typ;7. (2 providers - fatal if all fail)
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1967] Result: (0) CLSGPNP_OK. Get gpnp wallet - provider 1 of 2 (LSKP-FSW(1))
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1984] Got gpnp security keys (wallet).>
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1952] <Get gpnp security keys (wallet) for id:1,typ;4. (2 providers - fatal if all fail)
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1967] Result: (0) CLSGPNP_OK. Get gpnp wallet - provider 1 of 2 (LSKP-FSW(1))
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_getCK: [at clsgpnp0.c:1984] Got gpnp security keys (wallet).>
    2010-07-05 15:52:45.188: [    GPnP][4728]clsgpnp_Init: [at clsgpnp0.c:839] GPnP client pid=4844, tl=1, f=3
    2010-07-05 15:53:04.221: [  OCRASM][4728]proprasmo: Failed to open file in dirty mode
    2010-07-05 15:53:04.221: [  OCRASM][4728]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][4728]SLOS : SLOS: cat=8, opn=kgfolclcpi1, dep=204, loc=kgfokge
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '\\.\ORCLDISKDATA1'
    2010-07-05 15:53:04.314: [  OCRASM][4728]proprasmo: kgfoCheckMount returned [7]
    2010-07-05 15:53:04.314: [  OCRASM][4728]proprasmo: The ASM instance is down
    2010-07-05 15:53:04.361: [  OCRRAW][4728]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-07-05 15:53:04.361: [  OCRRAW][4728]proprioo: No OCR/OLR devices are usable
    2010-07-05 15:53:04.361: [  OCRASM][4728]proprasmcl: asmhandle is NULL
    2010-07-05 15:53:04.361: [  OCRRAW][4728]proprinit: Could not open raw device
    2010-07-05 15:53:04.361: [  OCRASM][4728]proprasmcl: asmhandle is NULL
    2010-07-05 15:53:04.361: [ default][4728]a_init:7!: Backend init unsuccessful : [26]
    2010-07-05 15:53:04.361: [OCRCHECK][4728]Failed to access OCR repository: [PROC-26: Error while accessing the physical storage ASM error [SLOS: cat=8, opn=kgfolclcpi1, dep=204, loc=kgfokge
    AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
    AMDU-00201: Disk N0002: '\\.\ORCLDISKDATA1'
    ] [8]]
    2010-07-05 15:53:04.361: [OCRCHECK][4728]Failed to initialize ocrchek2
    2010-07-05 15:53:04.361: [OCRCHECK][4728]Exiting [status=failed]...

  • Root.sh fails with an error when installing Oracle Grid Infrastructure 11.2

    Hi,
    root.sh failed with the following error when installing/configuring Oracle Grid Infrastructure 11.2.0.1 (standalone) on RHEL 6:
    Now product-specific root actions will be performed.
    2011-10-10 11:46:55: Checking for super user privileges
    2011-10-10 11:46:55: User has super user privileges
    2011-10-10 11:46:55: Parsing the host name
    Using configuration parameter file: /apps/opt/oracle_infra/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'oracle', privgrp 'oinstall'..
    Operation successful.
    CRS-4664: Node vmhost1 successfully pinned.
    Adding daemon to inittab
    CRS-4124: Oracle High Availability Services startup failed.
    CRS-4000: Command Start failed, or completed with errors.
    ohasd failed to start: Inappropriate ioctl for device
    ohasd failed to start: Inappropriate ioctl for device at /apps/opt/oracle_infra/crs/install/roothas.pl line 296.
    I followed the steps/solution provided in Doc ID 1069182.1, but it didn't help.
    Are there any workarounds?
    Thanks
    -KarthicK
    Edited by: user11984375 on Oct 10, 2011 7:06 AM

    Check the log files under $GRID_HOME/log/<node_name>/cssd/.
    I had seen the same problem, and the following resolved it for me:
    [root@rac1 ~]# rm -f /usr/tmp/.oracle/* /tmp/.oracle/* /var/tmp/.oracle/*
    [root@rac1 ~]# > $ORA_CRS_HOME/log/<node_name>/cssd/<node_name>.pid
    HTH,
    Raj Mareddi
    http://www.freeoraclehelp.com
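On RHEL 6 specifically, one commonly cited cause of "ohasd failed to start: Inappropriate ioctl for device" is that Upstart does not act on the respawn entry that root.sh adds to /etc/inittab, so ohasd never comes up; the usual workaround is a matching Upstart job for init.ohasd. A first diagnostic step (just a sketch, safe to run anywhere) is to check whether the inittab entry exists and whether the wrapper is actually running:

```shell
# Did root.sh manage to add the init.ohasd entry to /etc/inittab?
if grep -q 'init.ohasd' /etc/inittab 2>/dev/null; then
  inittab_status="present"
else
  inittab_status="missing"
fi
echo "init.ohasd inittab entry: ${inittab_status}"

# Is the init.ohasd wrapper process actually up?
if ps -ef 2>/dev/null | grep 'init.ohasd' | grep -v grep >/dev/null; then
  ohasd_status="running"
else
  ohasd_status="not running"
fi
echo "init.ohasd process: ${ohasd_status}"
```

If the entry is present but the process never starts, that points at the init system rather than at the Grid home itself.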

  • root.sh fails with "Failed to create or upgrade OLR" (Oracle 11gR2 + AIX 6.1)

    2011-12-29 19:38:54: The configuration parameter file /oracle/grid/11.2/grid/crs/install/crsconfig_params is valid
    2011-12-29 19:38:54: Checking for super user privileges
    2011-12-29 19:38:54: User has super user privileges
    2011-12-29 19:38:54: ### Printing the configuration values from files:
    2011-12-29 19:38:54: /oracle/grid/11.2/grid/crs/install/crsconfig_params
    2011-12-29 19:38:54: /oracle/grid/11.2/grid/crs/install/s_crsconfig_defs
    2011-12-29 19:38:54: ASM_DISCOVERY_STRING=/dev/rup*
    2011-12-29 19:38:54: ASM_DISKS=/dev/rupdisk0,/dev/rupdisk1,/dev/rupdisk2
    2011-12-29 19:38:54: ASM_DISK_GROUP=CRS
    2011-12-29 19:38:54: ASM_REDUNDANCY=NORMAL
    2011-12-29 19:38:54: ASM_SPFILE=
    2011-12-29 19:38:54: ASM_UPGRADE=false
    2011-12-29 19:38:54: CLSCFG_MISSCOUNT=
    2011-12-29 19:38:54: CLUSTER_GUID=
    2011-12-29 19:38:54: CLUSTER_NAME=yhscluster
    2011-12-29 19:38:54: CRS_NODEVIPS="yhsscore1vip/255.255.255.192/en2,yhsscore2vip/255.255.255.192/en2"
    2011-12-29 19:38:54: CRS_STORAGE_OPTION=1
    2011-12-29 19:38:54: CSS_LEASEDURATION=400
    2011-12-29 19:38:54: DIRPREFIX=
    2011-12-29 19:38:54: DISABLE_OPROCD=0
    2011-12-29 19:38:54: EMBASEJAR_NAME=oemlt.jar
    2011-12-29 19:38:54: EWTJAR_NAME=ewt3.jar
    2011-12-29 19:38:54: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
    2011-12-29 19:38:54: GNS_ADDR_LIST=
    2011-12-29 19:38:54: GNS_ALLOW_NET_LIST=
    2011-12-29 19:38:54: GNS_CONF=false
    2011-12-29 19:38:54: GNS_DENY_ITF_LIST=
    2011-12-29 19:38:54: GNS_DENY_NET_LIST=
    2011-12-29 19:38:54: GNS_DOMAIN_LIST=
    2011-12-29 19:38:54: GPNPCONFIGDIR=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: GPNPGCONFIGDIR=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: GPNP_PA=
    2011-12-29 19:38:54: HELPJAR_NAME=help4.jar
    2011-12-29 19:38:54: HOST_NAME_LIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: ID=/etc
    2011-12-29 19:38:54: INIT=/usr/sbin/init
    2011-12-29 19:38:54: IT=/etc/inittab
    2011-12-29 19:38:54: JEWTJAR_NAME=jewt4.jar
    2011-12-29 19:38:54: JLIBDIR=/oracle/grid/11.2/grid/jlib
    2011-12-29 19:38:54: JREDIR=/oracle/grid/11.2/grid/jdk/jre/
    2011-12-29 19:38:54: LANGUAGE_ID=AMERICAN_AMERICA.WE8ISO8859P1
    2011-12-29 19:38:54: MSGFILE=/var/adm/messages
    2011-12-29 19:38:54: NETCFGJAR_NAME=netcfg.jar
    2011-12-29 19:38:54: NETWORKS="en2"/53.2.1.0:public,"en3"/10.0.0.0:cluster_interconnect
    2011-12-29 19:38:54: NEW_HOST_NAME_LIST=
    2011-12-29 19:38:54: NEW_NODEVIPS="yhsscore1vip/255.255.255.192/en2,yhsscore2vip/255.255.255.192/en2"
    2011-12-29 19:38:54: NEW_NODE_NAME_LIST=
    2011-12-29 19:38:54: NEW_PRIVATE_NAME_LIST=
    2011-12-29 19:38:54: NODELIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: NODE_NAME_LIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: OCFS_CONFIG=
    2011-12-29 19:38:54: OCRCONFIG=/etc/oracle/ocr.loc
    2011-12-29 19:38:54: OCRCONFIGDIR=/etc/oracle
    2011-12-29 19:38:54: OCRID=
    2011-12-29 19:38:54: OCRLOC=ocr.loc
    2011-12-29 19:38:54: OCR_LOCATIONS=NO_VAL
    2011-12-29 19:38:54: OLASTGASPDIR=/etc/oracle/lastgasp
    2011-12-29 19:38:54: OLD_CRS_HOME=
    2011-12-29 19:38:54: OLRCONFIG=/etc/oracle/olr.loc
    2011-12-29 19:38:54: OLRCONFIGDIR=/etc/oracle
    2011-12-29 19:38:54: OLRLOC=olr.loc
    2011-12-29 19:38:54: OPROCDCHECKDIR=/etc/oracle/oprocd/check
    2011-12-29 19:38:54: OPROCDDIR=/etc/oracle/oprocd
    2011-12-29 19:38:54: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
    2011-12-29 19:38:54: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
    2011-12-29 19:38:54: ORACLE_BASE=/oracle/grid/app/grid
    2011-12-29 19:38:54: ORACLE_HOME=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: ORACLE_OWNER=grid
    2011-12-29 19:38:54: ORA_ASM_GROUP=dba
    2011-12-29 19:38:54: ORA_DBA_GROUP=dba
    2011-12-29 19:38:54: PRIVATE_NAME_LIST=
    2011-12-29 19:38:54: RCALLDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RCKDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RCSDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RC_KILL=K19
    2011-12-29 19:38:54: RC_KILL_OLD=S96
    2011-12-29 19:38:54: RC_START=S96
    2011-12-29 19:38:54: SCAN_NAME=yhsscan
    2011-12-29 19:38:54: SCAN_PORT=1521
    2011-12-29 19:38:54: SCRBASE=/etc/oracle/scls_scr
    2011-12-29 19:38:54: SHAREJAR_NAME=share.jar
    2011-12-29 19:38:54: SILENT=false
    2011-12-29 19:38:54: SO_EXT=so
    2011-12-29 19:38:54: SRVCFGLOC=srvConfig.loc
    2011-12-29 19:38:54: SRVCONFIG=/var/opt/oracle/srvConfig.loc
    2011-12-29 19:38:54: SRVCONFIGDIR=/var/opt/oracle
    2011-12-29 19:38:54: VNDR_CLUSTER=false
    2011-12-29 19:38:54: VOTING_DISKS=NO_VAL
    2011-12-29 19:38:54: ### Printing other configuration values ###
    2011-12-29 19:38:54: CLSCFG_EXTRA_PARMS=
    2011-12-29 19:38:54: CRSDelete=0
    2011-12-29 19:38:54: CRSPatch=0
    2011-12-29 19:38:54: DEBUG=
    2011-12-29 19:38:54: DOWNGRADE=
    2011-12-29 19:38:54: HAS_GROUP=dba
    2011-12-29 19:38:54: HAS_USER=root
    2011-12-29 19:38:54: HOST=yhsscore1
    2011-12-29 19:38:54: IS_SIHA=0
    2011-12-29 19:38:54: OLR_DIRECTORY=/oracle/grid/11.2/grid/cdata
    2011-12-29 19:38:54: OLR_LOCATION=/oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:54: ORA_CRS_HOME=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: REMOTENODE=
    2011-12-29 19:38:54: SUPERUSER=root
    2011-12-29 19:38:54: UPGRADE=
    2011-12-29 19:38:54: VF_DISCOVERY_STRING=
    2011-12-29 19:38:54: addfile=/oracle/grid/11.2/grid/crs/install/crsconfig_addparams
    2011-12-29 19:38:54: crscfg_trace=1
    2011-12-29 19:38:54: crscfg_trace_file=/oracle/grid/11.2/grid/cfgtoollogs/crsconfig/rootcrs_yhsscore1.log
    2011-12-29 19:38:54: hosts=
    2011-12-29 19:38:54: oldcrshome=
    2011-12-29 19:38:54: oldcrsver=
    2011-12-29 19:38:54: osdfile=/oracle/grid/11.2/grid/crs/install/s_crsconfig_defs
    2011-12-29 19:38:54: parameters_valid=1
    2011-12-29 19:38:54: paramfile=/oracle/grid/11.2/grid/crs/install/crsconfig_params
    2011-12-29 19:38:54: platform_family=unix
    2011-12-29 19:38:54: srvctl_trc_suff=0
    2011-12-29 19:38:54: unlock_crshome=
    2011-12-29 19:38:54: user_is_superuser=1
    2011-12-29 19:38:54: ### Printing of configuration values complete ###
    2011-12-29 19:38:54: Oracle CRS stack is not configured yet
    2011-12-29 19:38:54: CRS is not yet configured. Hence, will proceed to configure CRS
    2011-12-29 19:38:54: Cluster-wide one-time actions... Done!
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Host name = yhsscore1
    2011-12-29 19:38:56: CRS user = grid
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: GPnP host = yhsscore1
    2011-12-29 19:38:56: Oracle GPnP home = /oracle/grid/11.2/grid/gpnp
    2011-12-29 19:38:56: Oracle GPnP local home = /oracle/grid/11.2/grid/gpnp/yhsscore1
    2011-12-29 19:38:56: GPnP directories verified.
    2011-12-29 19:38:56: Checking to see if Oracle CRS stack is already configured
    2011-12-29 19:38:56: Oracle CRS stack is not configured yet
    2011-12-29 19:38:56: ---Checking local gpnp setup...
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/profiles/peer/profile.xml" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/wallets/peer/cwallet.sso" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/wallets/prdr/cwallet.sso" does not exist
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp/yhsscore1: profile_ok 0 wallet_ok 0 r/o_wallet_ok 0
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp/yhsscore1: INVALID (bad profile/wallet)
    2011-12-29 19:38:56: ---Checking cluster-wide gpnp setup...
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/profiles/peer/profile.xml" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/wallets/peer/cwallet.sso" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/wallets/prdr/cwallet.sso" does not exist
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp: profile_ok 0 wallet_ok 0 r/o_wallet_ok 0
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp: INVALID (bad profile/wallet)
    2011-12-29 19:38:56: gpnp setup checked: local valid? 0 cluster-wide valid? 0
    2011-12-29 19:38:56: gpnp setup: NONE
    2011-12-29 19:38:56: GPNP configuration required
    2011-12-29 19:38:56: Validating for SI-CSS configuration
    2011-12-29 19:38:56: Retrieving OCR main disk location
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc
    2011-12-29 19:38:56: Unable to retrieve ocr disk info
    2011-12-29 19:38:56: Checking to see if any 9i GSD is up
    2011-12-29 19:38:56: libskgxnBase_lib = /etc/ORCLcluster/oracm/lib/libskgxn2.so
    2011-12-29 19:38:56: libskgxn_lib = /opt/ORCLcluster/lib/libskgxn2.so
    2011-12-29 19:38:56: SKGXN library file does not exists
    2011-12-29 19:38:56: OLR location = /oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:56: Oracle CRS Home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Validating /etc/oracle/olr.loc file for OLR location /oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:56: /etc/oracle/olr.loc already exists. Backing up /etc/oracle/olr.loc to /etc/oracle/olr.loc.orig
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Oracle cluster name = yhscluster
    2011-12-29 19:38:56: OCR locations = +CRS
    2011-12-29 19:38:56: Validating OCR
    2011-12-29 19:38:56: Retrieving OCR location used by previous installations
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrmirrorconfig_loc
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc3
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc4
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc5
    2011-12-29 19:38:56: Checking if OCR sync file exists
    2011-12-29 19:38:56: No need to sync OCR file
    2011-12-29 19:38:56: OCR_LOCATION=+CRS
    2011-12-29 19:38:56: OCR_MIRROR_LOCATION=
    2011-12-29 19:38:56: OCR_MIRROR_LOC3=
    2011-12-29 19:38:56: OCR_MIRROR_LOC4=
    2011-12-29 19:38:56: OCR_MIRROR_LOC5=
    2011-12-29 19:38:56: Current OCR location=
    2011-12-29 19:38:56: Current OCR mirror location=
    2011-12-29 19:38:56: Current OCR mirror loc3=
    2011-12-29 19:38:56: Current OCR mirror loc4=
    2011-12-29 19:38:56: Current OCR mirror loc5=
    2011-12-29 19:38:56: Verifying current OCR settings with user entered values
    2011-12-29 19:38:56: Setting OCR locations in /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Validating OCR locations in /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Checking for existence of /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Backing up /etc/oracle/ocr.loc to /etc/oracle/ocr.loc.orig
    2011-12-29 19:38:56: Setting ocr location +CRS
    2011-12-29 19:38:56: User grid has the required capabilities to run CSSD in realtime mode
    *2011-12-29 19:38:56: Creating or upgrading Oracle Local Registry (OLR)*
    *2011-12-29 19:38:56: Command return code of 255 (65280) from command: /oracle/grid/11.2/grid/bin/ocrconfig -local -upgrade grid dba*
    *2011-12-29 19:38:56: /oracle/grid/11.2/grid/bin/ocrconfig -local -upgrade failed with error: 255*
    *2011-12-29 19:38:56: Failed to create or upgrade OLR*
    Post edited by: 905068

    Refer to:
    Command return code of 255 (65280) during Grid Infrastructure Installation
    http://coskan.wordpress.com/2009/12/07/root-sh-failed-after-asm-disk-creation-for-11gr2-grid-infrastructure/

  • Root.sh fails on 2nd node

    AIX 6
    Oracle grid infrastructure 11.2.0.3
    At the end of the grid install, I ran root.sh on the first node and then on the second node, where it failed. I ran deconfig, which was successful, but root.sh failed again.
    The deconfig worked but not the root.sh:
    Successfully deconfigured Oracle clusterware stack on this node
    mtnx213:/oracle/app/grid/product/11.2.0/grid/crs/install#/oracle/app/grid/product/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
        ORACLE_OWNER= oragrid
        ORACLE_HOME= /oracle/app/grid/product/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /oracle/app/grid/product/11.2.0/grid/crs/install/crsconfig_params
    User ignored Prerequisites during installation
    User oragrid has the required capabilities to run CSSD in realtime mode
    OLR initialization - successful
    Adding Clusterware entries to inittab
    USM driver install actions failed
    /oracle/app/grid/product/11.2.0/grid/perl/bin/perl -I/oracle/app/grid/product/11.2.0/grid/perl/lib -I/oracle/app/grid/product/11.2.0/grid/crs/install /oracle/app/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed

    You can find my answer in your duplicate post: root.sh fails on 2nd node Timed out waiting for the CRS stack to start

  • Root.sh failed/stuck on node2

    Hi all,
    I am deploying a 2-node 11g RAC on AIX 6.1.
    After successfully installing the clusterware software, I executed root.sh on node 1 and it completed without errors. But when I tried to run root.sh on node 2, I got the errors below.
    The first time I executed it, the script hung at:
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    I waited around 30 minutes, then killed the session and reran root.sh; this time it failed with the following errors:
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    PROT-1: Failed to initialize ocrconfig
    Failed to upgrade Oracle Cluster Registry configuration
    I checked the processes running on the remote node with:
    ps -ef
    It showed the following processes running after root.sh was executed:
    root 164218 193000 0 15:47:28 pts/0 0:00 /bin/sh /u01/app/crs/11.1.0/crs/install/rootconfig
    root 193000 184778 0 15:47:27 pts/0 0:00 /bin/sh ./root.sh
    root 217504 164218 0 15:47:30 pts/0 0:00 /u01/app/crs/11.1.0/crs/bin/ocrconfig.bin -upgrade oracle oinstall
    I also checked the disk ownership and want to share that with you.
    Initially, I executed the following commands to set ownership and permissions for the oracle user:
    chown oracle:oinstall rhdisk22
    chown oracle:oinstall rhdisk23
    chmod 660 rhdisk22
    chmod 660 rhdisk23
    # more /etc/oracle/ocr.loc
    ocrconfig_loc=/dev/rhdisk22
    ocrmirrorconfig_loc=/dev/rhdisk23
    local_only=FALSE
    However, after executing root.sh on node 1 and node 2, I found that the disk ownership had been changed to root:
    ls -ltr /dev/rhdisk22 /dev/rhdisk23
    node1:
    # ls -ltr /dev/rhdisk22 /dev/rhdisk23
    crw-r----- 1 root oinstall 21, 23 Apr 29 15:43 /dev/rhdisk23
    crw-r----- 1 root oinstall 21, 22 Apr 29 15:43 /dev/rhdisk22
    node2:
    # ls -ltr /dev/rhdisk22 /dev/rhdisk23
    crw-r----- 1 root oinstall 20, 21 Apr 29 13:08 /dev/rhdisk23
    crw-r----- 1 root oinstall 20, 19 Apr 29 13:08 /dev/rhdisk22
    Similarly, the following is the output of the ocrcheck command on each node:
    node1:
    # ./ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 614156
    Used space (kbytes) : 316
    Available space (kbytes) : 613840
    ID : 604081339
    Device/File Name : /dev/rhdisk22
    Device/File integrity check succeeded
    Device/File Name : /dev/rhdisk23
    Device/File integrity check succeeded
    Cluster registry integrity check succeeded
    node2:
    # ./ocrcheck
    PROT-602: Failed to retrieve data from the cluster registry
    Can anyone help me please?
    Regards,
    Farooq

    This may not be directly applicable, but check whether MetaLink note 330234.1 helps here.
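One thing worth trying before rerunning root.sh after a killed first attempt: clear the stale Clusterware socket directories the aborted run left behind, since a hang at "Setting up Network socket directories" followed by PROT-1 on the rerun can be caused by those leftovers. This is a hedged sketch: the helper function is hypothetical (written as a function so it can be exercised safely), and /tmp/.oracle and /var/tmp/.oracle are the usual default socket locations; confirm them on your system, and make sure no clusterware processes are running before deleting anything.

```shell
#!/bin/sh
# Hypothetical helper: remove leftover Oracle Clusterware socket directories
# from an aborted root.sh run. It takes the directories as arguments so the
# exact paths being deleted are explicit, and reports each one it removes.
clean_socket_dirs() {
  for d in "$@"; do
    if [ -d "$d" ]; then
      rm -rf "$d"
      echo "removed $d"
    fi
  done
}

# On the failing node, first confirm nothing from the stack is still running:
#   ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep
# then clear the usual socket locations:
#   clean_socket_dirs /tmp/.oracle /var/tmp/.oracle
```

If PROT-1 persists after the cleanup, the killed first run may also have left the OCR devices partially initialized; resetting those is more invasive and is best done under Oracle Support guidance.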
