Root.sh failed

Hi,
I am new to RAC.
OS: OEL 5.4
2-node RAC 11gR2 installation.
I got this error in the log:
2010-05-23 22:44:59: Configuring ASM via ASMCA
2010-05-23 22:45:00: Executing as grid1: /u01/app/11.2.0/grid1/bin/asmca -silent -diskGroupName DATA -diskList ORCL:VOLUME1,ORCL:VOLUME2,ORCL:VOLUME3,ORCL:VOLUME4,ORCL:VOLUME5 -redundancy NORMAL -configureLocalASM
2010-05-23 22:45:00: Running as user grid1: /u01/app/11.2.0/grid1/bin/asmca -silent -diskGroupName DATA -diskList ORCL:VOLUME1,ORCL:VOLUME2,ORCL:VOLUME3,ORCL:VOLUME4,ORCL:VOLUME5 -redundancy NORMAL -configureLocalASM
2010-05-23 22:45:00:   Invoking "/u01/app/11.2.0/grid1/bin/asmca -silent -diskGroupName DATA -diskList ORCL:VOLUME1,ORCL:VOLUME2,ORCL:VOLUME3,ORCL:VOLUME4,ORCL:VOLUME5 -redundancy NORMAL -configureLocalASM" as user "grid1"
2010-05-23 22:45:08: Configuration of ASM failed, see logs for details
2010-05-23 22:45:08: Did not succssfully configure and start ASM
2010-05-23 22:45:08: Exiting exclusive mode
2010-05-23 22:45:08: Command return code of 1 (256) from command: /u01/app/11.2.0/grid1/bin/crsctl stop resource ora.crsd -init
2010-05-23 22:45:08: Stop of resource "ora.crsd -init" failed
2010-05-23 22:45:08: Failed to stop CRSD
2010-05-23 22:45:09: Command return code of 1 (256) from command: /u01/app/11.2.0/grid1/bin/crsctl stop resource ora.asm -init
2010-05-23 22:45:09: Stop of resource "ora.asm -init" failed
2010-05-23 22:45:09: Failed to stop ASM
2010-05-23 22:45:38: Initial cluster configuration failed.  See /u01/app/11.2.0/grid1/cfgtoollogs/crsconfig/rootcrs_rac-2.log for details

I got this error while running root.sh. What should I do?
The rest of the installation failed because of this error. Can anyone help me configure the remainder of the installation?
Any pointers would be appreciated.
Regards

As per your log,
2010-05-23 22:45:00: Invoking "/u01/app/11.2.0/grid1/bin/asmca -silent -diskGroupName DATA -diskList ORCL:VOLUME1,ORCL:VOLUME2,ORCL:VOLUME3,ORCL:VOLUME4,ORCL:VOLUME5 -redundancy NORMAL -configureLocalASM" as user "grid1"
I can see you configured the Grid Infrastructure as the user "grid1".
Please let me know: when you configured the Oracle ASM library (ASMLib), which user did you specify?
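The check above can be sketched as follows. This is a minimal sketch, assuming the OEL 5 ASMLib default config path /etc/sysconfig/oracleasm (adjust if yours differs):

```shell
# Print the "user:group" that ASMLib assigns to /dev/oracleasm/disks/*.
# The owner must match the Grid installation owner (here, grid1).
asmlib_owner() {
    cfg="${1:-/etc/sysconfig/oracleasm}"
    uid=$(sed -n 's/^ORACLEASM_UID=//p' "$cfg")
    gid=$(sed -n 's/^ORACLEASM_GID=//p' "$cfg")
    echo "${uid}:${gid}"
}

# On a live node, as root:
#   asmlib_owner                      # should print grid1:<grid group>
#   ls -l /dev/oracleasm/disks/       # devices must be readable by grid1
#   /etc/init.d/oracleasm configure   # re-enter user/group if they differ
```

If the owner recorded there is a different user (e.g. the database owner instead of grid1), asmca cannot open the disks and fails exactly as in your log.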
-sumit

Similar Messages

  • Root.sh fails on 11.2.0.3 clusterware while starting 'ora.asm' resource

    Dear all,
    I am trying to install a clean Oracle 11.2.0.3 grid infrastructure on a two node cluster running on Solaris 5.10.
    - Cluster verification completed successfully on both nodes; no warnings or issues;
    - I am using 2 network cards for the public and 2 for the private interconnect;
    - OCR is stored on ASM
    - Firewall is disabled on both nodes
    - SCAN is being configured on the DNS (not added in /etc/hosts)
    - GNS is not used
    - hosts file is identical (except the primary hostname)
    The problem: root.sh fails on the 2nd (remote) node, because it fails to start the "ora.asm" resource. However, root.sh completed successfully on the 1st node. Somehow, root.sh does not create the +ASM2 instance on the remote (host2) node.
    root.sh was executed first on the local node (host1) and after the successful execution was started on the remote (host2) node.
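For reference, the usual way to retry a failed root.sh on one node in 11.2 is to roll back that node's partial configuration first with rootcrs.pl -deconfig. A minimal dry-run sketch, assuming the grid home shown in the logs (verify the deconfig step against your version's documentation before running it as root):

```shell
# Print (not execute) the retry sequence for the node where root.sh failed.
# Run the printed commands as root once the underlying cause is fixed
# (here, the ora.asm startup failure on host2).
retry_root_sh() {
    gh="${1:-/u01/11.2.0/grid}"
    echo "$gh/crs/install/rootcrs.pl -deconfig -force"
    echo "$gh/root.sh"
}

retry_root_sh
```

The -deconfig run only touches the local node; the successfully configured first node is left alone.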
    Output from host1 (working):
    ===================
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'host1'
    CRS-2676: Start of 'ora.mdnsd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'host1'
    CRS-2676: Start of 'ora.gpnpd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host1'
    CRS-2672: Attempting to start 'ora.gipcd' on 'host1'
    CRS-2676: Start of 'ora.cssdmonitor' on 'host1' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'host1'
    CRS-2672: Attempting to start 'ora.diskmon' on 'host1'
    CRS-2676: Start of 'ora.diskmon' on 'host1' succeeded
    CRS-2676: Start of 'ora.cssd' on 'host1' succeeded
    ASM created and started successfully.
    Disk Group CRS created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 4373be34efab4f01bf79f6c5362acfd3.
    Successful addition of voting disk 7fd725fa4d904f07bf76cecf96791547.
    Successful addition of voting disk a9c85297bdd74f3abfd86899205aaf17.
    Successfully replaced voting disk group with +CRS.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 4373be34efab4f01bf79f6c5362acfd3 (/dev/rdsk/c4t600A0B80006E2CC40000C6674E82AA57d0s4) [CRS]
    2. ONLINE 7fd725fa4d904f07bf76cecf96791547 (/dev/rdsk/c4t600A0B80006E2CC40000C6694E82AADDd0s4) [CRS]
    3. ONLINE a9c85297bdd74f3abfd86899205aaf17 (/dev/rdsk/c4t600A0B80006E2F100000C7744E82AC7Ad0s4) [CRS]
    Located 3 voting disk(s).
    CRS-2672: Attempting to start 'ora.asm' on 'host1'
    CRS-2676: Start of 'ora.asm' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.CRS.dg' on 'host1'
    CRS-2676: Start of 'ora.CRS.dg' on 'host1' succeeded
    CRS-2672: Attempting to start 'ora.registry.acfs' on 'host1'
    CRS-2676: Start of 'ora.registry.acfs' on 'host1' succeeded
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    Name Type Target State Host
    ora.CRS.dg ora....up.type ONLINE ONLINE host1
    ora....ER.lsnr ora....er.type ONLINE ONLINE host1
    ora....N1.lsnr ora....er.type ONLINE ONLINE host1
    ora....N2.lsnr ora....er.type ONLINE ONLINE host1
    ora....N3.lsnr ora....er.type ONLINE ONLINE host1
    ora.asm ora.asm.type ONLINE ONLINE host1
    ora....SM1.asm application ONLINE ONLINE host1
    ora....B1.lsnr application ONLINE ONLINE host1
    ora....db1.gsd application OFFLINE OFFLINE
    ora....db1.ons application ONLINE ONLINE host1
    ora....db1.vip ora....t1.type ONLINE ONLINE host1
    ora.cvu ora.cvu.type ONLINE ONLINE host1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE host1
    ora.oc4j ora.oc4j.type ONLINE ONLINE host1
    ora.ons ora.ons.type ONLINE ONLINE host1
    ora....ry.acfs ora....fs.type ONLINE ONLINE host1
    ora.scan1.vip ora....ip.type ONLINE ONLINE host1
    ora.scan2.vip ora....ip.type ONLINE ONLINE host1
    ora.scan3.vip ora....ip.type ONLINE ONLINE host1
    Output from host2 (failing):
    ===================
    OLR initialization - successful
    Adding Clusterware entries to inittab
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node billdb1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Start of resource "ora.asm" failed
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host2'
    CRS-2676: Start of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'host2'
    CRS-5017: The resource action "ora.asm start" encountered the following error:
    ORA-03113: end-of-file on communication channel
    Process ID: 0
    Session ID: 0 Serial number: 0
    . For details refer to "(:CLSN00107:)" in "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log".
    CRS-2674: Start of 'ora.asm' on 'host2' failed
    CRS-2679: Attempting to clean 'ora.asm' on 'host2'
    CRS-2681: Clean of 'ora.asm' on 'host2' succeeded
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host2'
    CRS-2677: Stop of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start ASM at /u01/11.2.0/grid/crs/install/crsconfig_lib.pm line 1272.
    /u01/11.2.0/grid/perl/bin/perl -I/u01/11.2.0/grid/perl/lib -I/u01/11.2.0/grid/crs/install /u01/11.2.0/grid/crs/install/rootcrs.pl execution failed
    Contents of "/u01/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_host2.log"
    =============================================
    CRS-2672: Attempting to start 'ora.asm' on 'host2'
    CRS-5017: The resource action "ora.asm start" encountered the following error:
    ORA-03113: end-of-file on communication channel
    Process ID: 0
    Session ID: 0 Serial number: 0
    . For details refer to "(:CLSN00107:)" in "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log".
    CRS-2674: Start of 'ora.asm' on 'host2' failed
    CRS-2679: Attempting to clean 'ora.asm' on 'host2'
    CRS-2681: Clean of 'ora.asm' on 'host2' succeeded
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host2'
    CRS-2677: Stop of 'ora.drivers.acfs' on 'host2' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    2011-10-24 19:36:54: Failed to start Oracle Grid Infrastructure stack
    2011-10-24 19:36:54: ###### Begin DIE Stack Trace ######
    2011-10-24 19:36:54: Package File Line Calling
    2011-10-24 19:36:54: --------------- -------------------- ---- ----------
    2011-10-24 19:36:54: 1: main rootcrs.pl 375 crsconfig_lib::dietrap
    2011-10-24 19:36:54: 2: crsconfig_lib crsconfig_lib.pm 1272 main::__ANON__
    2011-10-24 19:36:54: 3: crsconfig_lib crsconfig_lib.pm 1171 crsconfig_lib::start_cluster
    2011-10-24 19:36:54: 4: main rootcrs.pl 803 crsconfig_lib::perform_start_cluster
    2011-10-24 19:36:54: ####### End DIE Stack Trace #######
    Shortened output from "/u01/11.2.0/grid/log/host2/agent/ohasd/oraagent_grid/oraagent_grid.log"
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] clean {
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] InstAgent::stop_option stop mode immediate option 1
    2011-10-24 19:35:48.726: [ora.asm][9] {0:0:224} [start] InstAgent::stop {
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] InstAgent::stop original reason system do shutdown abort
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] ConnectionPool::resetConnection s_statusOfConnectionMap 00ab1948
    2011-10-24 19:35:48.727: [ora.asm][9] {0:0:224} [start] ConnectionPool::resetConnection sid +ASM2 status  2
    2011-10-24 19:35:48.728: [ora.asm][9] {0:0:224} [start] Gimh::check OH /u01/11.2.0/grid SID +ASM2
    2011-10-24 19:35:48.728: [ora.asm][9] {0:0:224} [start] Gimh::check condition changes to (GIMH_NEXT_NUM) 0,1,7 exists
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] (:CLSN00006:)AsmAgent::check failed gimh state 0
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] AsmAgent::check ocrCheck 1 m_OcrOnline 0 m_OcrTimer 0
    2011-10-24 19:35:48.729: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet { entry
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet procr_get_conf: retval [0] configured [1] local only [0] error buffer []
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet procr_get_conf: OCR loc [0], Disk Group : [+CRS]
    2011-10-24 19:35:48.730: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet m_ocrDgpSet 015fba90 dgName CRS
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet ocrret 0 found 1
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet ocrDgpSet CRS
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::initOcrDgpSet exit }
    2011-10-24 19:35:48.731: [ora.asm][9] {0:0:224} [start] DgpAgent::ocrDgCheck Entry {
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] DgpAgent::getConnxn new pool
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] DgpAgent::getConnxn new pool m_oracleHome:/u01/11.2.0/grid m_oracleSid:+ASM2 m_usrOraEnv:
    2011-10-24 19:35:48.732: [ora.asm][9] {0:0:224} [start] ConnectionPool::ConnectionPool 2 m_oracleHome:/u01/11.2.0/grid, m_oracleSid:+ASM2, m_usrOraEnv:
    2011-10-24 19:35:48.733: [ora.asm][9] {0:0:224} [start] ConnectionPool::addConnection m_oracleHome:/u01/11.2.0/grid m_oracleSid:+ASM2 m_usrOraEnv: pConnxn:
    01fcdf10
    2011-10-24 19:35:48.733: [ora.asm][9] {0:0:224} [start] Utils::getCrsHome crsHome /u01/11.2.0/grid
    2011-10-24 19:35:51.969: [ora.asm][14] {0:0:224} [check] makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/11.2.0/grid/bin/oracle)(ARGV0=o
    racle+ASM2)(ENVS='ORACLE_HOME=/u01/11.2.0/grid,ORACLE_SID=+ASM2')(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=+ASM2)))
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] ConnectionPool::getConnection 260 pConnxn 013e40a0
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] DgpAgent::getConnxn connected
    2011-10-24 19:35:51.971: [ora.asm][14] {0:0:224} [check] InstConnection::connectInt: server not attached
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] InstConnection::connectInt (2) Exception OCIException
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] InstConnection:connect:excp OCIException OCI error 1034
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus excp ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    SVR4 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    2011-10-24 19:35:52.190: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus asm inst is down or going down
    2011-10-24 19:35:52.191: [ora.asm][14] {0:0:224} [check] DgpAgent::queryDgStatus dgName CRS ret 1
    2011-10-24 19:35:52.191: [ora.asm][14] {0:0:224} [check] (:CLSN00100:)DgpAgent::ocrDgCheck OCR dgName CRS state 1
    2011-10-24 19:35:52.192: [ora.asm][14] {0:0:224} [check] ConnectionPool::releaseConnection InstConnection 013e40a0
    2011-10-24 19:35:52.192: [ora.asm][14] {0:0:224} [check] AsmAgent::check ocrCheck 2 m_OcrOnline 0 m_OcrTimer 0
    2011-10-24 19:35:52.193: [ora.asm][14] {0:0:224} [check] CrsCmd::ClscrsCmdData::stat entity 1 statflag 32 useFilter 0
    2011-10-24 19:35:52.197: [ COMMCRS][23]clsc_connect: (1020d39d0) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))
    Please advise on any workaround or a MetaLink note.
    Thanks in advance!

    Thanks for the fast reply!
    - Yes, the shared storage is accessible.
    - The alert log for +ASM2 clearly shows that the ASM instance started normally using default parameters, and that at one point the PMON process dumped.
    - The system log just shows an error executing "crswrapexece.pl"
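The storage check above can be sketched as follows; run it as the grid owner on every node. The device paths are the voting-disk slices from the host1 output, used here as examples:

```shell
# Report whether each given raw device is readable. A FAIL on one node
# but OK on the other points at zoning, multipathing, or permission
# differences between the nodes.
check_readable() {
    for d in "$@"; do
        if dd if="$d" of=/dev/null bs=1048576 count=1 2>/dev/null; then
            echo "OK   $d"
        else
            echo "FAIL $d"
        fi
    done
}

check_readable /dev/rdsk/c4t600A0B80006E2CC40000C6674E82AA57d0s4 \
               /dev/rdsk/c4t600A0B80006E2CC40000C6694E82AADDd0s4 \
               /dev/rdsk/c4t600A0B80006E2F100000C7744E82AC7Ad0s4
```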
    System Log
    ===================
    Oct 24 19:25:03 host2 root: [ID 702911 user.error] exec /u01/11.2.0/grid/perl/bin/perl -I/u01/11.2.0/grid/perl/lib /u01/11.2.0/grid/bin/crswrapexece.pl /
    u01/11.2.0/grid/crs/install/s_crsconfig_host2_env.txt /u01/11.2.0/grid/bin/ohasd.bin "reboot"
    Oct 24 19:26:33 host2 oracleoks: [ID 902884 kern.notice] [Oracle OKS] mallocing log buffer, size=10485760
    Oct 24 19:26:33 host2 oracleoks: [ID 714332 kern.notice] [Oracle OKS] log buffer = 0x301780fcb50, size 10485760
    Oct 24 19:26:33 host2 oracleoks: [ID 400061 kern.notice] NOTICE: [Oracle OKS] ODLM hash size 16384
    Oct 24 19:26:33 host2 oracleoks: [ID 160659 kern.notice] NOTICE: OKSK-00004: Module load succeeded. Build information: (LOW DEBUG) USM_11.2.0.3.0_SOLAR
    IS.SPARC64_110803.1 2011/08/11 02:38:30
    Oct 24 19:26:33 host2 pseudo: [ID 129642 kern.info] pseudo-device: oracleadvm0
    Oct 24 19:26:33 host2 genunix: [ID 936769 kern.info] oracleadvm0 is /pseudo/oracleadvm@0
    Oct 24 19:26:33 host2 oracleoks: [ID 141287 kern.notice] NOTICE: ADVMK-00001: Module load succeeded. Build information: (LOW DEBUG) - USM_11.2.0.3.0_SOL
    ARIS.SPARC64_110803.1 built on 2011/08/11 02:40:17.
    Oct 24 19:26:33 host2 oracleacfs: [ID 202941 kern.notice] NOTICE: [Oracle ACFS] FCB hash size 16384
    Oct 24 19:26:33 host2 oracleacfs: [ID 671725 kern.notice] NOTICE: [Oracle ACFS] buffer cache size 511MB (79884 buckets)
    Oct 24 19:26:33 host2 oracleacfs: [ID 730054 kern.notice] NOTICE: [Oracle ACFS] DLM hash size 16384
    Oct 24 19:26:33 host2 oracleoks: [ID 617314 kern.notice] NOTICE: ACFSK-0037: Module load succeeded. Build information: (LOW DEBUG) USM_11.2.0.3.0_SOLAR
    IS.SPARC64_110803.1 2011/08/11 02:42:45
    Oct 24 19:26:33 host2 pseudo: [ID 129642 kern.info] pseudo-device: oracleacfs0
    Oct 24 19:26:33 host2 genunix: [ID 936769 kern.info] oracleacfs0 is /pseudo/oracleacfs@0
    Oct 24 19:26:36 host2 oracleoks: [ID 621795 kern.notice] NOTICE: OKSK-00010: Persistent OKS log opened at /u01/11.2.0/grid/log/host2/acfs/acfs.log.0.
    Oct 24 19:31:37 host2 last message repeated 1 time
    Oct 24 19:33:05 host2 CLSD: [ID 770310 daemon.notice] The clock on host host2 has been updated by the Cluster Time Synchronization Service to be synchr
    onous with the mean cluster time.
    ASM alert log
    ====================================================================
    <msg time='2011-10-24T19:35:48.776+01:00' org_id='oracle' comp_id='asm'
    client_id='' type='UNKNOWN' level='16'
    host_id='host2' host_addr='10.172.16.200' module=''
    pid='26406'>
    <txt>System state dump requested by (instance=2, osid=26396 (PMON)), summary=[abnormal instance termination].
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:48.778+01:00' org_id='oracle' comp_id='asm'
    client_id='' type='UNKNOWN' level='16'
    host_id='host2' host_addr='10.172.16.200' module=''
    pid='26406'>
    <txt>System State dumped to trace file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_diag_26406.trc
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:48.927+01:00' org_id='oracle' comp_id='asm'
    type='UNKNOWN' level='16' host_id='host2'
    host_addr='10.172.16.200' pid='26470'>
    <txt>ORA-1092 : opitsk aborting process
    </txt>
    </msg>
    <msg time='2011-10-24T19:35:49.128+01:00' org_id='oracle' comp_id='asm'
    type='UNKNOWN' level='16' host_id='host2'
    host_addr='10.172.16.200' pid='26472'>
    <txt>ORA-1092 : opitsk aborting process
    </txt>
    </msg>
    Output from "/u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_diag_26406.trc"
    REQUEST:system state dump at level 10, requested by (instance=2, osid=26396 (PMON)), summary=[abnormal instance termination].
    kjzdattdlm: Can not attach to DLM (LMON up=[TRUE], DB mounted=[FALSE]).
    ===================================================
    SYSTEM STATE (level=10)
    Orapids on dead process list: [count = 0]
    PROCESS 1:
    SO: 0x3df098b50, type: 2, owner: 0x0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df098b50, name=process, file=ksu.h LINE:12616 ID:, pg=0
    (process) Oracle pid:1, ser:0, calls cur/top: 0x0/0x0
    flags : (0x20) PSEUDO
    flags2: (0x0), flags3: (0x10)
    intr error: 0, call error: 0, sess error: 0, txn error 0
    intr queue: empty
    ksudlp FALSE at location: 0
    (post info) last post received: 0 0 0
    last post received-location: No post
    last process to post me: none
    last post sent: 0 0 0
    last post sent-location: No post
    last process posted by me: none
    (latch info) wait_event=0 bits=0
    O/S info: user: , term: , ospid: (DEAD)
    OSD pid info: Unix process pid: 0, image: PSEUDO
    SO: 0x38000cef0, type: 5, owner: 0x3df098b50, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x0, name=kss parent, file=kss2.h LINE:138 ID:, pg=0
    PSO child state object changes :
    Dump of memory from 0x00000003DF722AC0 to 0x00000003DF722CC8
    3DF722AC0 00000000 00000000 00000000 00000000 [................]
    Repeat 31 times
    3DF722CC0 00000000 00000000 [........]
    PROCESS 2: PMON
    SO: 0x3df099bf8, type: 2, owner: 0x0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=process, file=ksu.h LINE:12616 ID:, pg=0
    (process) Oracle pid:2, ser:1, calls cur/top: 0x3db6c8d30/0x3db6c8d30
    flags : (0xe) SYSTEM
    flags2: (0x0), flags3: (0x10)
    intr error: 0, call error: 0, sess error: 0, txn error 0
    intr queue: empty
    ksudlp FALSE at location: 0
    (post info) last post received: 0 0 136
    last post received-location: kjm.h LINE:1228 ID:kjmdmi: pmon to attach
    last process to post me: 3df0a2138 1 6
    last post sent: 0 0 137
    last post sent-location: kjm.h LINE:1230 ID:kjiath: pmon attached
    last process posted by me: 3df0a2138 1 6
    (latch info) wait_event=0 bits=0
    Process Group: DEFAULT, pseudo proc: 0x3debbbf40
    O/S info: user: grid, term: UNKNOWN, ospid: 26396
    OSD pid info: Unix process pid: 26396, image: oracle@host2 (PMON)
    SO: 0x3d8800c18, type: 30, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=ges process, file=kji.h LINE:3669 ID:, pg=0
    GES MSG BUFFERS: st=emp chunk=0x0 hdr=0x0 lnk=0x0 flags=0x0 inc=0
    outq=0 sndq=0 opid=0 prmb=0x0
    mbg=(0 0) mbg=(0 0) mbg[r]=(0 0)
    fmq=(0 0) fmq=(0 0) fmq[r]=(0 0)
    mop[s]=0 mop[q]=0 pendq=0 zmbq=0
    nonksxp_recvs=0
    ------------process 3d8800c18--------------------
    proc version : 0
    Local inst : 2
    pid : 26396
    lkp_inst : 2
    svr_mode : 0
    proc state : KJP_FROZEN
    Last drm hb acked : 0
    flags : x50
    ast_rcvd_svrmod : 0
    current lock op : 0
    Total accesses : 1
    Imm. accesses : 0
    Locks on ASTQ : 0
    Locks Pending AST : 0
    Granted locks : 0
    AST_Q:
    PENDING_Q:
    GRANTED_Q:
    SO: 0x3d9835198, type: 14, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x1 c: 0x1
    proc=0x3df099bf8, name=channel handle, file=ksr2.h LINE:367 ID:, pg=0
    (broadcast handle) 3d9835198 flag: (2) ACTIVE SUBSCRIBER,
    owner: 3df099bf8 - ospid: 26396
    event: 1, last message event: 1,
    last message waited event: 1,
    next message: 0(0), messages read: 0
    channel: (3d9934df8) PMON actions channel [name: 2]
    scope: 7, event: 1, last mesage event: 0,
    publishers/subscribers: 0/1,
    messages published: 0
    heuristic msg queue length: 0
    SO: 0x3d9835008, type: 14, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x1 c: 0x1
    proc=0x3df099bf8, name=channel handle, file=ksr2.h LINE:367 ID:, pg=0
    (broadcast handle) 3d9835008 flag: (2) ACTIVE SUBSCRIBER,
    owner: 3df099bf8 - ospid: 26396
    event: 1, last message event: 1,
    last message waited event: 1,
    next message: 0(0), messages read: 0
    channel: (3d9941e40) scumnt mount lock [name: 157]
    scope: 1, event: 12, last mesage event: 0,
    publishers/subscribers: 0/12,
    messages published: 0
    heuristic msg queue length: 0
    SO: 0x3de4a2b80, type: 4, owner: 0x3df099bf8, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
    proc=0x3df099bf8, name=session, file=ksu.h LINE:12624 ID:, pg=0
    (session) sid: 33 ser: 1 trans: 0x0, creator: 0x3df099bf8
    flags: (0x51) USR/- flags_idl: (0x1) BSY/-/-/-/-/-
    flags2: (0x409) -/-/INC
    DID: , short-term DID:
    txn branch: 0x0
    oct: 0, prv: 0, sql: 0x0, psql: 0x0, user: 0/SYS
    ksuxds FALSE at location: 0
    service name: SYS$BACKGROUND
    Current Wait Stack:
    Not in wait; last wait ended 0.666415 sec ago
    Wait State:
    fixed_waits=0 flags=0x21 boundary=0x0/-1
    Session Wait History:
    elapsed time of 0.666593 sec since last wait
    0: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=63 seq_num=64 snap_id=1
    wait times: snap=3.000089 sec, exc=3.000089 sec, total=3.000089 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.002067 sec of elapsed time
    1: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=62 seq_num=63 snap_id=1
    wait times: snap=3.010111 sec, exc=3.010111 sec, total=3.010111 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.001926 sec of elapsed time
    2: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=61 seq_num=62 snap_id=1
    wait times: snap=3.125286 sec, exc=3.125286 sec, total=3.125286 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.003361 sec of elapsed time
    3: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0
    wait_id=60 seq_num=61 snap_id=1
    wait times: snap=3.000081 sec, exc=3.000081 sec, total=3.000081 sec
    wait times: max=3.000000 sec
    wait counts: calls=1 os=1
    occurred after 0.002102 sec of elapsed time
    4: waited for 'pmon timer'
    duration=0x12c, =0x0, =0x0

  • Root.sh fails for 11gR2 Grid Infrastructure installation on AIX 6.1

    Hello all,
    root.sh fails with the errors below. An SR has been opened with Oracle; I will post the resolution when it is available. Any insights in the meantime? Thank you!
    System information:
    OS: AIX 6.1
    runcluvfy.sh reported no issues
    Permissions on the raw devices are set to 660 and ownership is oracle:dba
    Using external redundancy for ASM; the ASM instance is online
    Permissions on block and raw device files
    system1:ux460p1> ls -l /dev/hdisk32
    brw-rw---- 1 oracle dba 17, 32 Mar 11 16:50 /dev/hdisk32
    system11:ux460p1> ls -l /dev/rhdisk32
    crw-rw---- 1 oracle dba 17, 32 Mar 12 15:52 /dev/rhdisk32
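If these hdisks carried headers from an earlier install attempt, the "INVALID FORMAT" / dirty-open messages in the log below are commonly cleared by zeroing the start of each raw device before rerunning root.sh. A sketch, assuming the disks are dedicated to ASM and safe to wipe (destructive; double-check the device names first):

```shell
# Zero the first N MB of a device to erase stale ASM/OCR headers.
# DESTRUCTIVE: only run against disks you intend to hand back to ASM.
wipe_header() {
    dev="$1"
    mb="${2:-100}"
    dd if=/dev/zero of="$dev" bs=1048576 count="$mb" 2>/dev/null
}

# Example (as root, per disk):
#   wipe_header /dev/rhdisk32
# On AIX, also make sure the disk is not SCSI-reserved by one node:
#   chdev -l hdisk32 -a reserve_policy=no_reserve
```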
    ocrconfig.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:17:19.773: [ OCRCONF][1]ocrconfig starts...
    2010-03-15 19:17:19.775: [ OCRCONF][1]Upgrading OCR data
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.474: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.603: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.603: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.604: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:20.604: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
    2010-03-15 19:17:20.604: [  OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
    2010-03-15 19:17:20.605: [ OCRCONF][1]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-03-15 19:17:20.841: [  OCRASM][1]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][1]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprioo: No OCR/OLR devices are usable
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [  OCRRAW][1]proprinit: Could not open raw device
    2010-03-15 19:17:20.966: [  OCRASM][1]proprasmcl: asmhandle is NULL
    2010-03-15 19:17:20.966: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2010-03-15 19:17:21.412: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.412: [  OCRRAW][1]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-03-15 19:17:21.414: [  OCRRAW][1]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-03-15 19:17:21.414: [  OCRRAW][1]proprinit:problem reading the bootblock or superbloc 22
    2010-03-15 19:17:21.534: [  OCRRAW][1]propriogid:1_2: INVALID FORMAT
    2010-03-15 19:17:21.701: [  OCRRAW][1]iniconfig:No 92 configuration
    2010-03-15 19:17:21.701: [  OCRAPI][1]a_init:6a: Backend init successful
    2010-03-15 19:17:21.764: [ OCRCONF][1]Initialized DATABASE keys
    2010-03-15 19:17:21.770: [ OCRCONF][1]Successfully set skgfr block 0
    2010-03-15 19:17:21.771: [ OCRCONF][1]Exiting [status=success]...
    alert.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-03-15 19:12:00.148
    [client(483478)]CRS-2106:The OLR location /u01/app/grid/cdata/ux460p1.olr is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_483478.log.
    2010-03-15 19:12:00.171
    [client(483478)]CRS-2101:The OLR was formatted using version 3.
    2010-03-15 14:16:18.620
    [ohasd(471204)]CRS-2112:The OLR service started on node ux460p1.
    2010-03-15 14:16:18.720
    [ohasd(471204)]CRS-8017:location: /etc/oracle/lastgasp has 8 reboot advisory log files, 0 were announced and 0 errors occurred
    2010-03-15 14:16:18.847
    [ohasd(471204)]CRS-2772:Server 'ux460p1' has been assigned to pool 'Free'.
    2010-03-15 14:16:54.107
    [ctssd(340174)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:16:54.123
    [ctssd(340174)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:16:54.917
    [ctssd(340174)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 19:17:21.414
    [client(376968)]CRS-1006:The OCR location +DATA is inaccessible. Details in /u01/app/grid/log/ux460p1/client/ocrconfig_376968.log.
    2010-03-15 19:17:21.701
    [client(376968)]CRS-1001:The OCR was formatted using version 3.
    2010-03-15 14:17:24.888
    [crsd(303252)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:17:56.344
    [ctssd(340174)]CRS-2405:The Cluster Time Synchronization Service on host ux460p1 is shutdown by user
    2010-03-15 14:19:14.855
    [ctssd(340188)]CRS-2403:The Cluster Time Synchronization Service on host ux460p1 is in observer mode.
    2010-03-15 14:19:14.870
    [ctssd(340188)]CRS-2407:The new Cluster Time Synchronization Service reference node is host ux460p1.
    2010-03-15 14:19:15.638
    [ctssd(340188)]CRS-2401:The Cluster Time Synchronization Service started on host ux460p1.
    2010-03-15 14:19:32.985
    [crsd(417946)]CRS-1012:The OCR service started on node ux460p1.
    2010-03-15 14:19:35.250
    [crsd(417946)]CRS-1201:CRSD started on node ux460p1.
    2010-03-15 14:19:35.698
    [ohasd(471204)]CRS-2765:Resource 'ora.crsd' has failed on server 'ux460p1'.
    2010-03-15 14:19:38.928

    Public and Private are on different devices and subnets.
    There is no logfile named ocrconfig_7833.log.
    I do have ocrconfig_7089.log and ocrconfig_8985.log.
    Here are their contents:
    ocrconfig_7089.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:38:32.518: [ OCRCONF][2819644944]ocrconfig starts...
    2010-11-09 13:38:32.542: [ OCRCONF][2819644944]Upgrading OCR data
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.576: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.576: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.576: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:38:32.577: [  OCRAPI][2819644944]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:38:32.577: [ OCRCONF][2819644944]There was no previous version of OCR. error:[PROCL-33: Oracle Local Registry is not configured]
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.578: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprioini: all disks are not OCR/OLR formatted
    2010-11-09 13:38:32.578: [  OCRRAW][2819644944]proprinit: Could not open raw device
    2010-11-09 13:38:32.578: [ default][2819644944]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.579: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.579: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e54000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2010-11-09 13:38:32.591: [  OCROSD][2819644944]utread:3: Problem reading buffer 12e55000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2010-11-09 13:38:32.591: [  OCRRAW][2819644944]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:38:32.681: [  OCRAPI][2819644944]a_init:6a: Backend init successful
    2010-11-09 13:38:32.699: [ OCRCONF][2819644944]Initialized DATABASE keys
    2010-11-09 13:38:32.700: [ OCRCONF][2819644944]Exiting [status=success]...
    ocrconfig_8985.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2010-11-09 13:41:28.169: [ OCRCONF][2281741840]ocrconfig starts...
    2010-11-09 13:41:28.175: [ OCRCONF][2281741840]Upgrading OCR data
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:30.896: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:31.208: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:31.210: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.210: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:31.211: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:31.213: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:31.214: [ OCRCONF][2281741840]Exporting OCR data to [OCRUPGRADEFILE]
    2010-11-09 13:41:31.216: [  OCRAPI][2281741840]a_init:7!: Backend init unsuccessful : [33]
    2010-11-09 13:41:31.216: [ OCRCONF][2281741840]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: kgfoCheckMount return [0]. Cannot proceed with dirty open.
    2010-11-09 13:41:32.214: [  OCRASM][2281741840]proprasmo: Error in open/create file in dg [DATA]
    [  OCRASM][2281741840]SLOS : [clsuSlosFormatDiag called with non-error slos.]
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: Failed to open [+DATA]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprioo: No OCR/OLR devices are usable
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.535: [  OCRRAW][2281741840]proprinit: Could not open raw device
    2010-11-09 13:41:32.535: [  OCRASM][2281741840]proprasmcl: asmhandle is NULL
    2010-11-09 13:41:32.536: [ default][2281741840]a_init:7!: Backend init unsuccessful : [26]
    2010-11-09 13:41:35.359: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:35.361: [  OCRRAW][2281741840]proprior: Header check from OCR device 0 offset 0 failed (26).
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2010-11-09 13:41:35.363: [  OCRRAW][2281741840]proprinit:problem reading the bootblock or superbloc 22
    2010-11-09 13:41:35.843: [  OCRRAW][2281741840]propriogid:1_2: INVALID FORMAT
    2010-11-09 13:41:36.430: [  OCRRAW][2281741840]iniconfig:No 92 configuration
    2010-11-09 13:41:36.431: [  OCRAPI][2281741840]a_init:6a: Backend init successful
    2010-11-09 13:41:36.540: [ OCRCONF][2281741840]Initialized DATABASE keys
    2010-11-09 13:41:36.545: [ OCRCONF][2281741840]Successfully set skgfr block 0
    2010-11-09 13:41:36.552: [ OCRCONF][2281741840]Exiting [status=success]...
    Both of these log files show errors, yet they end with a success status. Why?
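    The repeated `utread ... retval 0` lines mean reads of the device header returned zero bytes, so it is worth confirming the devices are visible to ASMLib and readable by the grid owner. A minimal sketch (run as root on the failing node); the disk label `VOLUME1` and user `grid1` are assumptions taken from this thread, so substitute your own:

    ```shell
    # List the disks ASMLib has marked, and query one of them:
    /etc/init.d/oracleasm listdisks
    /etc/init.d/oracleasm querydisk VOLUME1

    # Verify the grid owner can actually read the device header --
    # "retval 0" in the log means reads came back empty:
    su - grid1 -c "dd if=/dev/oracleasm/disks/VOLUME1 of=/dev/null bs=4096 count=32"

    # Check ownership and permissions on the backing devices:
    ls -l /dev/oracleasm/disks/
    ```

    If the `dd` read fails or the devices are not owned by the grid owner's group, the OCR/OLR format errors above are a symptom of that, not of Clusterware itself.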

  • 11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2

    Hi there,
    I am trying to create a two-node 11g R2 RAC on OEL 5.5 (32-bit) using VMware virtual machines. I have correctly configured both nodes. The Cluster Verification Utility returns the following error (which I believe can be ignored):
    Checking daemon liveness...
    Liveness check failed for "ntpd"
    Check failed on nodes:
    rac2,rac1
    PRVF-5415 : Check to see if NTP daemon is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
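    For what it's worth, the PRVF-5415 check can be satisfied in two documented ways on OEL 5: run `ntpd` with the `-x` (slew) flag that 11gR2 expects, or deconfigure `ntpd` entirely so Oracle CTSS takes over time synchronization in active mode. A sketch of both options (run as root on each node):

    ```shell
    # Option A: keep ntpd but add the -x slewing flag, e.g. in /etc/sysconfig/ntpd:
    #   OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
    # then restart the daemon:
    service ntpd restart

    # Option B: remove ntpd and let Oracle CTSS sync the clocks instead.
    # CTSS runs in active mode only when no NTP configuration is present:
    service ntpd stop
    chkconfig ntpd off
    mv /etc/ntp.conf /etc/ntp.conf.bak
    ```

    Given the CRS-2409 clock-drift warnings later in this thread, leaving NTP half-configured (running but unsynchronized) is worse than either option.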
    During the Grid Infrastructure installation (Cluster option), things go very smoothly until I run "root.sh" on node 2. orainstRoot.sh ran OK on both nodes. "root.sh" ran OK on node 1 and ends with:
    Checking swap space: must be greater than 500 MB.   Actual 1967 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    *'UpdateNodeList' was successful.*
    *[root@rac1 ~]#*
    "root.sh" fails on rac2 (2nd node) with following error:
    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
    Timed out waiting for the CRS stack to start.
    *[root@rac2 ~]#*
    I know this info may not be enough to figure out what the problem is. Please let me know what I should look for to find and fix the issue. It's been almost two weeks now :-(
    Regards
    Amer
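    When root.sh times out on the second node, the usual recovery is to deconfigure the partial stack on that node, verify the private interconnect, and rerun. A sketch, assuming the Grid home path used later in this thread (`/u01/grid/oracle/product/11.2.0/grid`); adjust to your own:

    ```shell
    # As root on the failed node (rac2):
    GRID_HOME=/u01/grid/oracle/product/11.2.0/grid

    # Deconfigure the partially-started Clusterware stack on this node only:
    $GRID_HOME/crs/install/rootcrs.pl -deconfig -force

    # Confirm the private interconnect works in both directions before retrying
    # (rac1-priv is a placeholder for your private hostname or IP):
    ping -c 3 rac1-priv

    # Then rerun:
    $GRID_HOME/root.sh
    ```

    On VMware test clusters, also make sure iptables is disabled on both nodes before the retry; a firewall on the interconnect is the most common cause of "Timed out waiting for the CRS stack to start".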

    Hi Zheng,
    ocssd.log is huge, so I am pasting a few of the last lines of the log file in the hope they give some clue:
    2011-07-04 19:49:24.007: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500  cur_ms 36118424 lastalive 36116244
    2011-07-04 19:49:26.005: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
    2011-07-04 19:49:26.006: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500  cur_ms 36120424 lastalive 36116244
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:48.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:48.005: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.003: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.008: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:12.009: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500  cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:15.796: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500  cur_ms 36170214 lastalive 36168084
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500  cur_ms 36172244 lastalive 36170674
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500  cur_ms 36180424 lastalive 36178804
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500  cur_ms 36182414 lastalive 36180624
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:37.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:37.002: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    *<end of log file>*
    And the alertrac2.log contains:
    *[root@rac2 rac2]# cat alertrac2.log*
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-07-02 16:43:51.571
    [client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
    2011-07-02 16:43:57.125
    [client(16134)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 16:44:43.214
    [ohasd(16188)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 16:45:06.446
    [ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 16:53:30.061
    [ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 16:53:55.042
    [cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 16:54:38.334
    [cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    [cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 16:54:38.464
    [cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 16:54:39.174
    [ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 16:55:43.430
    [cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 16:56:02.852
    [cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 16:56:04.061
    [cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 16:56:18.350
    [cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 16:56:29.283
    [ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 16:56:29.551
    [ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 16:56:29.615
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 16:56:29.616
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 16:56:29.641
    [ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18056)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:01:40.963
    [ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
    [client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18594)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 17:46:48.717
    [crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:49.641
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:51.459
    [crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:51.776
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:53.928
    [crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:53.956
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:55.834
    [crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:56.273
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:57.762
    [crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:58.631
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:00.259
    [crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:00.968
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:02.513
    [crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:03.309
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:05.081
    [crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:05.770
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:07.796
    [crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:08.257
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:10.733
    [crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:11.739
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:13.547
    [crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:14.111
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:14.112
    [ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    [client(26883)]CRS-10001:ACFS-9200: Supported
    2011-07-02 18:13:34.627
    [ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
    2011-07-02 18:13:42.368
    [cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:15:13.877
    [client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
    2011-07-02 18:15:14.011
    [client(27222)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 18:15:23.226
    [ohasd(27261)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 18:15:23.688
    [ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
    2011-07-02 18:15:24.064
    [ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 18:16:29.761
    [ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 18:16:30.190
    [gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:16:41.561
    [cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 18:16:49.111
    [cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:16:49.166
    [cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    [cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 18:17:01.122
    [cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:17:06.917
    [ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 18:17:23.602
    [mdnsd(28485)]CRS-5602:mDNS service stopping by request.
    2011-07-02 18:17:36.217
    [gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:17:43.673
    [cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 18:17:49.826
    [cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:17:49.865
    [cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 18:18:03.049
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 18:18:06.160
    [ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 18:18:06.220
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 18:18:06.238
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:18:06.239
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 18:18:06.794
    [ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(28895)]CRS-10001:ACFS-9322: done.
    2011-07-02 18:18:33.465
    [crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:33.575
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:35.757
    [crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:36.129
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:38.596
    [crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:39.146
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:41.058
    [crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:41.435
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:44.255
    [crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:45.165
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:47.013
    [crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:47.409
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:50.071
    [crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:50.118
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:51.843
    [crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:52.373
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:54.361
    [crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:54.772
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:56.620
    [crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:57.104
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:58.997
    [crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:59.301
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:59.302
    [ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:55:21.907
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:55:21.908
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:57:54.682
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:57:54.683
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 23:07:28.603
    [cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.020 seconds
    2011-07-02 23:07:35.621
    [cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.010 seconds
    2011-07-02 23:07:39.629
    [cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds
    2011-07-02 23:07:42.641
    [cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
    2011-07-02 23:07:44.751
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
    2011-07-02 23:07:45.326
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
    2011-07-04 19:46:26.008
    [ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
    [ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
    This log file starts with a complaint that the OLR is not accessible. Here is what I see (rac2):
    -rw------- 1 root oinstall 272756736 Jul  2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
    And I guess the rest of the problems start with this.
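    The repeating CRS-2409/CRS-2412 messages above mean CTSS is in observer mode (because ntpd is running) while the node clocks have drifted apart, so CTSS only reports and does not correct anything. The usual remedy on OEL/RHEL 5 is to resync the clocks and run ntpd with slewing enabled, which is what Grid Infrastructure expects. A sketch, not a definitive fix (the NTP server name is a placeholder):

```shell
# As root on each node (OEL/RHEL 5 layout):
service ntpd stop
ntpdate <your-ntp-server>    # placeholder: one-time sync against a common reference
# In /etc/sysconfig/ntpd, make ntpd slew instead of stepping the clock:
#   OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
service ntpd start
```

    With the clocks back within tolerance, the CRS-2409 messages should stop appearing in the alert log.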

  • Root.sh failed on second node while installing CRS 10g on CentOS 5.5

    root.sh failed on second node while installing CRS 10g
    Hi all,
    I am able to install the Oracle 10g RAC clusterware on the first node of the cluster. However, when I run the root.sh script as the root
    user on the second node of the cluster, it fails with the following error message:
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    Running cluvfy stage -post hwos -n all -verbose shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking shared storage accessibility...
    Disk Sharing Nodes (2 in count)
    /dev/sda db2 db1
    Running cluvfy stage -pre crsinst -n all -verbose shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking system requirements for 'crs'...
    No checks registered for this product.
    Running cluvfy stage -post crsinst -n all -verbose shows:
    Result: Node reachability check passed from node "DB2".
    Result: User equivalence check passed for user "oracle".
    Node Name CRS daemon CSS daemon EVM daemon
    db2 no no no
    db1 yes yes yes
    Check: Health of CRS
    Node Name CRS OK?
    db1 unknown
    Result: CRS health check failed.
    Checking crsd.log shows:
    clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
    clsssInitNative: connect failed, rc 9
    Any help would be greatly appreciated.
    Edited by: 868121 on 2011-6-24 12:31 AM

    Hello, it took a little searching, but I found this in a note in the GRID installation guide for Linux/UNIX:
    Public IP addresses and virtual IP addresses must be in the same subnet.
    In your case, you are using two different subnets for the VIPs.
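    The "same subnet" rule can be verified without touching the cluster: AND each address with the netmask and compare the network portions. A minimal, self-contained sketch (the addresses below are hypothetical placeholders, not taken from this thread):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Print "yes" if the two addresses share a subnet under the given netmask.
same_subnet() {
  local a b m
  a=$(ip_to_int "$1"); b=$(ip_to_int "$2"); m=$(ip_to_int "$3")
  if [ $(( a & m )) -eq $(( b & m )) ]; then echo yes; else echo no; fi
}

same_subnet 192.168.1.10 192.168.1.110 255.255.255.0   # public IP vs. VIP on one subnet -> yes
same_subnet 192.168.1.10 10.0.0.110 255.255.255.0      # VIP on a different subnet -> no
```

    Each node's VIP must pass this check against that node's public interface address; otherwise cluvfy reports "Could not find a suitable set of interfaces for VIPs".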

  • ASM 11.1.0.6 root.sh fails to start CSS

    We have Oracle ASM and are installing 11.1.0.6 on a new machine (we will eventually apply the 11.1.0.7 patchset), so we are on 11g R1 in production environments.
    When installing Oracle ASM 11.1.0.6, root.sh fails.
    I checked various Metalink notes; all settings are OK. We have ASMLib, and I configured ASMLib with the new disks prior to the ASM installation.
    Just to rule out ASMLib as the root cause, I also disabled it. Still the same problem. I ran the usual localconfig delete and localconfig add too.
    Startup will be queued to init within 30 seconds.
    Checking the status of new Oracle init process...
    Expecting the CRS daemons to be up within 600 seconds.
    Giving up: Oracle CSS stack appears NOT to be running.
    Oracle CSS service would not start as installed
    Automatic Storage Management(ASM) cannot be used until Oracle CSS service is started
    Finished product-specific root actions.

    I applied the 11.1.0.7 patchset. Still the same problem.
    Strangely, there are no logfiles in $ASM_HOME/log/<hostname>/cssd/ - this directory is empty.
    These are the final messages from root.sh:
    Startup will be queued to init within 30 seconds.
    Checking the status of new Oracle init process...
    Expecting the CRS daemons to be up within 600 seconds.
    Giving up: Oracle CSS stack appears NOT to be running.
    Oracle CSS service would not start as installed
    Automatic Storage Management(ASM) cannot be used until Oracle CSS service is started
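    Since $ASM_HOME/log/<hostname>/cssd/ is empty, the daemon probably never launched at all, which points at the init side rather than at CSS itself. A few generic checks for a 10g/11.1 single-instance CSS setup (a sketch; verify paths and log locations on your platform):

```shell
# Did localconfig/root.sh register the CSS respawn entry with init?
grep init.cssd /etc/inittab
# What does the stack itself report?
$ORACLE_HOME/bin/crsctl check css
# OS-level messages logged by the init script:
grep -i cssd /var/log/messages | tail -20
```

    If the inittab entry is missing or commented out, rerunning localconfig add as root normally recreates it.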

  • Oracle 11gR2 RAC Root.sh Failed On The Second Node

    Hello,
    When installing Oracle 11gR2 RAC on AIX 7.1, root.sh succeeds on the first node but fails on the second node:
    I get the error described in "Root.sh Failed On The Second Node With Error ORA-15018 ORA-15031 ORA-15025 ORA-27041 [ID 1459711.1]" during the Oracle installation.
    Applies to:
    Oracle Server - 11gR2 RAC
    EMC VNX 500
    IBM AIX on POWER Systems (64-bit)
    The disk /dev/rhdiskpower0 does not show up in the kfod output on the second node. It is an EMC multipath disk device,
    but the disk can be found with AIX commands.
    Any help!
    Thanks

    The solution is to uninstall "EMC Solutions Enabler", but on this machine I only find "EMC Migration Enabler", which cannot be removed without removing EMC PowerPath.
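    Since the complaint is that the device is missing from the kfod output on one node only, it can help to run the discovery by hand on both nodes and compare, along with the ownership and permissions ASM needs on the device. A sketch (the grid home path and diskstring are assumptions based on this thread, and the kfod flags are the ones documented for the 11.2 utility):

```shell
# As the grid software owner, on EACH node; compare the two outputs.
export ORACLE_HOME=/u01/app/11.2.0/grid     # assumption: adjust to your grid home
$ORACLE_HOME/bin/kfod disks=all asm_diskstring='/dev/rhdiskpower*'

# The device must be readable/writable by the grid owner on both nodes:
ls -l /dev/rhdiskpower0
```

    A device that appears on one node but not the other usually comes down to permissions, ownership, or a diskstring that does not match on the failing node.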

  • Root.sh failed in one node - CLSMON and UDLM

    Hi experts.
    My environment is:
    2-node SunCluster Update 3
    Oracle RAC 10.2.0.1, planning to upgrade to 10.2.0.4
    The problem is: I installed the CRS services on both nodes - OK.
    After that, running root.sh fails on one node:
    /u01/app/product/10/CRS/root.sh
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Checking to see if any 9i GSD is up
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/app/product/10' is not owned by root
    WARNING: directory '/u01/app/product' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 0: spodhcsvr10 clusternode1-priv spodhcsvr10
    node 1: spodhcsvr12 clusternode2-priv spodhcsvr12
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Sep 22 13:34:17 spodhcsvr10 root: Oracle Cluster Ready Services starting by user request.
    Startup will be queued to init within 30 seconds.
    Sep 22 13:34:20 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Sep 22 13:34:34 spodhcsvr10 last message repeated 3 times
    Sep 22 13:34:34 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:34:40 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:35:43 spodhcsvr10 last message repeated 9 times
    Sep 22 13:36:07 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:36:07 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:36:14 spodhcsvr10 su: libsldap: Status: 85 Mesg: openConnection: simple bind failed - Timed out
    Sep 22 13:36:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:37:35 spodhcsvr10 last message repeated 11 times
    Sep 22 13:37:40 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:37:40 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:37:42 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:38:03 spodhcsvr10 last message repeated 3 times
    Sep 22 13:38:10 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:39:12 spodhcsvr10 last message repeated 9 times
    Sep 22 13:39:13 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:39:13 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:39:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:40:42 spodhcsvr10 last message repeated 12 times
    Sep 22 13:40:46 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:40:46 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:40:49 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:05 spodhcsvr10 last message repeated 11 times
    Sep 22 13:42:11 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:12 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:42:19 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:42:19 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:42:19 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Sep 22 13:43:49 spodhcsvr10 last message repeated 13 times
    Sep 22 13:43:51 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 22 13:43:51 spodhcsvr10 root: Running CRSD with TZ = Brazil/East
    Sep 22 13:43:56 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 10. Respawning
    Failure at final check of Oracle CRS stack.
    I traced ocssd.log and found some information:
    [    CSSD]2010-09-22 14:04:14.739 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:14.742 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.742 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:14.744 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.745 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:14.746 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2478) LATS(0) Disk lastSeqNo(2478)
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.785 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:14.785 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:14.786 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:23.075 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:23.075 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:23.082 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:23.096 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:23.106 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:23.129 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:23.132 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:23.136 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:23.139 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:23.143 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.139 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:25.142 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.143 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:25.144 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2488) LATS(0) Disk lastSeqNo(2488)
    [    CSSD]2010-09-22 14:04:25.145 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:25.148 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2489) LATS(0) Disk lastSeqNo(2489)
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.186 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:25.186 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:25.187 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    [    CSSD]2010-09-22 14:04:33.449 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2010-09-22 14:04:33.449 >USER: CSS daemon log for node spodhcsvr10, number 0, in cluster NET_RAC
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=spodhcsvr10DBG_CSSD))
    [    CSSD]2010-09-22 14:04:33.457 [1] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-09-22 14:04:33.470 [1] >TRACE: clssnmReadNodeInfo: added node 0 (spodhcsvr10) to cluster
    [    CSSD]2010-09-22 14:04:33.480 [1] >TRACE: clssnmReadNodeInfo: added node 1 (spodhcsvr12) to cluster
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 30
    [    CSSD]2010-09-22 14:04:33.500 [1] >TRACE: clssnmInitNMInfo: misscount set to 600
    [    CSSD]2010-09-22 14:04:33.505 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:33.508 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:33.510 [1] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.508 [6] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/vx/rdsk/racdg/ora_vote1)
    [    CSSD]2010-09-22 14:04:35.510 [6] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.510 [7] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/vx/rdsk/racdg/ora_vote2)
    [    CSSD]2010-09-22 14:04:35.512 [7] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.513 [8] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/vx/rdsk/racdg/ora_vote3)
    [    CSSD]2010-09-22 14:04:35.514 [8] >TRACE: clssnmReadDskHeartbeat: node(1) is down. rcfg(2) wrtcnt(2499) LATS(0) Disk lastSeqNo(2499)
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [10] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2010-09-22 14:04:35.553 [1] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2010-09-22 14:04:35.553 [11] >TRACE: clssnmconnect: connecting to node 0, flags 0x0001, connector 1
    I believe the main error is:
    [    CSSD]2010-09-22 14:04:33.498 [5] >TRACE: [0]Node monitor: dlm attach failed error LK_STAT_NOTCREATED
    [    CSSD]CLSS-0001: skgxn not active
    together with the communication between UDLM and CLSMON, but I don't know how to resolve this.
    My UDLM version is 3.3.4.9.
    Does anybody have ideas about this?
    Thanks!
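    The "CLSS-0001: skgxn not active" trace means CSS could not attach to the vendor clusterware's group-membership services, which on Sun Cluster are provided through UDLM. Before digging further, it is worth confirming that layer is healthy on the failing node. Two basic checks (standard Sun Cluster / Solaris commands; verify they exist on your release):

```shell
# As root on each node:
scstat -n               # both nodes should report Online in Sun Cluster
pkginfo -l ORCLudlm     # UDLM must be installed, same version (here 3.3.4.9), on both nodes
```

    If Sun Cluster itself shows the node offline, or the UDLM package versions differ between nodes, CSS will keep failing with exactly this trace.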

    Now I finally installed CRS and ran root.sh without errors (I think the problem was some old file left over from earlier installation attempts...).
    But now I have another problem: when installing the DB software, at the step that copies the installation to the remote node, that node hits a CLSMON/CSSD failure and panics:
    Sep 23 16:10:51 spodhcsvr10 root: Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:51 spodhcsvr10 root: [ID 702911 user.error] Oracle CLSMON terminated with unexpected status 138. Respawning
    Sep 23 16:10:52 spodhcsvr10 root: [ID 702911 user.alert] Oracle CSSD failure. Rebooting for cluster integrity.
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: fatal: received signal 15
    Sep 23 16:10:56 spodhcsvr10 Cluster.OPS.UCMMD: [ID 770355 daemon.error] fatal: received signal 15
    Sep 23 16:10:59 spodhcsvr10 root: Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: Cluster Ready Services completed waiting on dependencies.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Oracle Cluster Ready Services waiting for SunCluster and UDLM to start.
    Sep 23 16:10:59 spodhcsvr10 root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.
    Notifying cluster that this node is panicking
    The installation on the first node continues and reports an error copying to the second node.
    Any ideas? Thanks!

  • Root.sh fails - ASM won't shut down

    I am trying to install Clusterware 11gR2 on Oracle Enterprise Linux 5. This all runs in an Oracle VirtualBox environment, using ASM for the cluster disk served from an Openfiler target; 32-bit installation, with 1 GB of memory allocated to each node. I had to ignore errors for the following:
    < 1.5 GB memory
    small swap file
    nscd and ntpd processes not running (not connected to the internet); I understand Oracle will install its own time service if ntpd is not present
    Running root.sh fails because ASM won't shut down properly:
    [root@odbn1 grid]# ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2010-08-24 14:46:44: Parsing the host name
    2010-08-24 14:46:44: Checking for super user privileges
    2010-08-24 14:46:44: User has super user privileges
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-2672: Attempting to start 'ora.gipcd' on 'odbn1'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'odbn1'
    CRS-2676: Start of 'ora.mdnsd' on 'odbn1' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'odbn1'
    CRS-2676: Start of 'ora.gpnpd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'odbn1'
    CRS-2676: Start of 'ora.cssdmonitor' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'odbn1'
    CRS-2672: Attempting to start 'ora.diskmon' on 'odbn1'
    CRS-2676: Start of 'ora.diskmon' on 'odbn1' succeeded
    CRS-2676: Start of 'ora.cssd' on 'odbn1' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'odbn1'
    CRS-2676: Start of 'ora.ctssd' on 'odbn1' succeeded
    ASM created and started successfully.
    DiskGroup CLSVOL1 created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-2672: Attempting to start 'ora.crsd' on 'odbn1'
    CRS-2676: Start of 'ora.crsd' on 'odbn1' succeeded
    CRS-4256: Updating the profile
    Successful addition of voting disk 502348953fc24f1cbf9c9f0fdf5cf2e0.
    Successfully replaced voting disk group with +CLSVOL1.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 502348953fc24f1cbf9c9f0fdf5cf2e0 (/dev/oracleasm/disks/CRS) [CLSVOL1]
    Located 1 voting disk(s).
    CRS-2673: Attempting to stop 'ora.crsd' on 'odbn1'
    CRS-2677: Stop of 'ora.crsd' on 'odbn1' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'odbn1'
    ORA-15097: cannot SHUTDOWN ASM instance with connected client
    CRS-2675: Stop of 'ora.asm' on 'odbn1' failed
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.asm -init
    Stop of resource "ora.asm -init" failed
    Failed to stop ASM
    CRS-2673: Attempting to stop 'ora.ctssd' on 'odbn1'
    CRS-2677: Stop of 'ora.ctssd' on 'odbn1' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'odbn1'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'odbn1' succeeded
    CRS-2529: Unable to act on 'ora.cssd' because that would require stopping or relocating 'ora.asm', but the force option was not specified
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.cssd -init
    Failed to exit exclusive mode
    Initial cluster configuration failed. See /u01/app/grid/cfgtoollogs/crsconfig/rootcrs_odbn1.log for details
    [root@odbn1 grid]#
    The rootcrs_odbn1.log shows the same error:
    [root@odbn1 crsconfig]# tail -50 rootcrs_odbn1.log
    2010-08-24 15:10:38: /bin/su successfully executed
    2010-08-24 15:10:38: /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso => /u01/app/grid/gpnp/wallets/prdr/cwallet.sso
    2010-08-24 15:10:38: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:10:38: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:10:38: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/prdr/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/prdr/cwallet.sso -nodelist odbn1,odbn2 '
    2010-08-24 15:11:20: Removing file /tmp/file1ye3x8
    2010-08-24 15:11:21: Successfully removed file: /tmp/file1ye3x8
    2010-08-24 15:11:21: /bin/su successfully executed
    2010-08-24 15:11:21: /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso => /u01/app/grid/gpnp/wallets/pa/cwallet.sso
    2010-08-24 15:11:21: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:11:21: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2
    2010-08-24 15:11:21: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cwallet.sso -destfile /u01/app/grid/gpnp/wallets/pa/cwallet.sso -nodelist odbn1,odbn2 '
    2010-08-24 15:11:49: Removing file /tmp/filelEb5Lp
    2010-08-24 15:11:49: Successfully removed file: /tmp/filelEb5Lp
    2010-08-24 15:11:50: /bin/su successfully executed
    2010-08-24 15:11:50: /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt => /u01/app/grid/gpnp/wallets/root/b64certificate.txt
    2010-08-24 15:11:50: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2
    2010-08-24 15:11:50: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2
    2010-08-24 15:11:50: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/root/b64certificate.txt -destfile /u01/app/grid/gpnp/wallets/root/b64certificate.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:12:26: Removing file /tmp/fileUQzFxE
    2010-08-24 15:12:27: Successfully removed file: /tmp/fileUQzFxE
    2010-08-24 15:12:27: /bin/su successfully executed
    2010-08-24 15:12:27: /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt => /u01/app/grid/gpnp/wallets/peer/cert.txt
    2010-08-24 15:12:27: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:27: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:27: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/peer/cert.txt -destfile /u01/app/grid/gpnp/wallets/peer/cert.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:12:47: Removing file /tmp/filevnw3D8
    2010-08-24 15:12:47: Successfully removed file: /tmp/filevnw3D8
    2010-08-24 15:12:47: /bin/su successfully executed
    2010-08-24 15:12:47: /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt => /u01/app/grid/gpnp/wallets/pa/cert.txt
    2010-08-24 15:12:47: rmtcpy: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:47: Running as user oracle: /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2
    2010-08-24 15:12:47: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/cluutil -sourcefile /u01/app/grid/gpnp/odbn1/wallets/pa/cert.txt -destfile /u01/app/grid/gpnp/wallets/pa/cert.txt -nodelist odbn1,odbn2 '
    2010-08-24 15:13:19: Removing file /tmp/fileArkUFi
    2010-08-24 15:13:20: Successfully removed file: /tmp/fileArkUFi
    2010-08-24 15:13:20: /bin/su successfully executed
    2010-08-24 15:13:20: Exiting exclusive mode
    2010-08-24 15:13:41: Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.asm -init
    2010-08-24 15:13:41: Stop of resource "ora.asm -init" failed
    2010-08-24 15:13:41: Failed to stop ASM
    2010-08-24 15:14:44: Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl stop resource ora.cssd -init
    2010-08-24 15:14:44: CSS shutdown failed
    2010-08-24 15:14:44: Failed to exit exclusive mode
    2010-08-24 15:14:44: Initial cluster configuration failed. See /u01/app/grid/cfgtoollogs/crsconfig/rootcrs_odbn1.log for details
    [root@odbn1 crsconfig]#
    Has anyone seen this before? All help greatly appreciated!
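    For context on the ORA-15097 above: it means a client was still connected to the ASM instance when root.sh tried to stop it. Which clients are attached can be listed with asmcmd (a standard 11.2 tool; the SID value below is an assumption):

```shell
# As the grid software owner, with the environment set:
export ORACLE_HOME=/u01/app/grid
export ORACLE_SID=+ASM1        # assumption: default first-instance ASM SID
$ORACLE_HOME/bin/asmcmd lsct   # lists the clients connected to each disk group
```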

    Hi Sebastian,
    Thank you for your quick reply. Here's what I tried:
    1. Reset the install as you suggested and ran root.sh again. The exact same thing happened.
    2. Restored the Openfiler snapshots taken right before the failed run, increased memory on both VMs to 1280 MB as you suggested, rebooted the virtual environment, and ran the installation again. This time the root.sh script froze my environment - not enough memory to support this.
    3. Restored the Openfiler snapshots again, increased memory on one node (odbn1) to 1500 MB, and ran the install for a single-node cluster. It got past this error.
    Thank you!!!
    Also, thanks for the tip on the ntp.init file; that eliminated the error about ntpd not running during the installation.
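    For anyone landing here with the same half-configured stack: before rerunning root.sh after a failure like this, the standard 11.2 reset is the deconfig option of rootcrs.pl. A sketch (paths per this thread's layout; adjust to your grid home):

```shell
# As root on the failed node:
perl /u01/app/grid/crs/install/rootcrs.pl -deconfig -force
# then rerun:
/u01/app/grid/root.sh
```

    The -force flag makes deconfig proceed even though the stack never came up cleanly, which is exactly the state a failed root.sh leaves behind.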

  • Root.sh throws an error when installing Oracle Grid Infrastructure 11.2

    Hi,
    root.sh failed with the following error when installing/configuring Oracle Grid Infrastructure 11.2.0.1 for a standalone server on RHEL 6:
    Now product-specific root actions will be performed.
    2011-10-10 11:46:55: Checking for super user privileges
    2011-10-10 11:46:55: User has super user privileges
    2011-10-10 11:46:55: Parsing the host name
    Using configuration parameter file: /apps/opt/oracle_infra/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'oracle', privgrp 'oinstall'..
    Operation successful.
    CRS-4664: Node vmhost1 successfully pinned.
    Adding daemon to inittab
    CRS-4124: Oracle High Availability Services startup failed.
    CRS-4000: Command Start failed, or completed with errors.
    ohasd failed to start: Inappropriate ioctl for device
    ohasd failed to start: Inappropriate ioctl for device at /apps/opt/oracle_infra/crs/install/roothas.pl line 296.
    I followed the steps/solution provided in MOS Doc ID 1069182.1, but it didn't help.
    Is there any workaround?
    Thanks
    -KarthicK
    Edited by: user11984375 on Oct 10, 2011 7:06 AM

    Check the logfiles under $GRID_HOME/log/<node_name>/cssd/
    I have seen the same problem, and the following resolved it for me:
    [root@rac1 ~]# rm -f /usr/tmp/.oracle/* /tmp/.oracle/* /var/tmp/.oracle/*
    [root@rac1 ~]# > $ORA_CRS_HOME/log/<node_name>/cssd/<node_name>.pid
    HTH,
    Raj Mareddi
    http://www.freeoraclehelp.com
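
    The two cleanup commands above can be wrapped in one small script. This is a minimal sketch, assuming the socket directories and PID file path shown in this post; the grid home and node name are passed in as parameters rather than hard-coded, and you should verify both against your own installation before running it as root.

    ```shell
    #!/bin/sh
    # Hedged sketch of the stale-file cleanup above; run as root.
    cleanup_stale_crs_files() {
        grid_home=$1
        node=$2
        # Remove stale Oracle IPC socket files left by a failed CRS start.
        for d in /usr/tmp/.oracle /tmp/.oracle /var/tmp/.oracle; do
            [ -d "$d" ] && rm -f "$d"/* 2>/dev/null
        done
        # Truncate the stale ocssd PID file so CSSD can restart cleanly.
        pidfile="$grid_home/log/$node/cssd/$node.pid"
        [ -f "$pidfile" ] && : > "$pidfile"
        return 0
    }
    ```

    For example, run `cleanup_stale_crs_files /u01/app/grid rac1` before re-running root.sh.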

  • Run root.sh Failed to create or upgrade OLR (oracle11gr2+AIX6.1)

    2011-12-29 19:38:54: The configuration parameter file /oracle/grid/11.2/grid/crs/install/crsconfig_params is valid
    2011-12-29 19:38:54: Checking for super user privileges
    2011-12-29 19:38:54: User has super user privileges
    2011-12-29 19:38:54: ### Printing the configuration values from files:
    2011-12-29 19:38:54: /oracle/grid/11.2/grid/crs/install/crsconfig_params
    2011-12-29 19:38:54: /oracle/grid/11.2/grid/crs/install/s_crsconfig_defs
    2011-12-29 19:38:54: ASM_DISCOVERY_STRING=/dev/rup*
    2011-12-29 19:38:54: ASM_DISKS=/dev/rupdisk0,/dev/rupdisk1,/dev/rupdisk2
    2011-12-29 19:38:54: ASM_DISK_GROUP=CRS
    2011-12-29 19:38:54: ASM_REDUNDANCY=NORMAL
    2011-12-29 19:38:54: ASM_SPFILE=
    2011-12-29 19:38:54: ASM_UPGRADE=false
    2011-12-29 19:38:54: CLSCFG_MISSCOUNT=
    2011-12-29 19:38:54: CLUSTER_GUID=
    2011-12-29 19:38:54: CLUSTER_NAME=yhscluster
    2011-12-29 19:38:54: CRS_NODEVIPS="yhsscore1vip/255.255.255.192/en2,yhsscore2vip/255.255.255.192/en2"
    2011-12-29 19:38:54: CRS_STORAGE_OPTION=1
    2011-12-29 19:38:54: CSS_LEASEDURATION=400
    2011-12-29 19:38:54: DIRPREFIX=
    2011-12-29 19:38:54: DISABLE_OPROCD=0
    2011-12-29 19:38:54: EMBASEJAR_NAME=oemlt.jar
    2011-12-29 19:38:54: EWTJAR_NAME=ewt3.jar
    2011-12-29 19:38:54: EXTERNAL_ORACLE_BIN=/opt/oracle/bin
    2011-12-29 19:38:54: GNS_ADDR_LIST=
    2011-12-29 19:38:54: GNS_ALLOW_NET_LIST=
    2011-12-29 19:38:54: GNS_CONF=false
    2011-12-29 19:38:54: GNS_DENY_ITF_LIST=
    2011-12-29 19:38:54: GNS_DENY_NET_LIST=
    2011-12-29 19:38:54: GNS_DOMAIN_LIST=
    2011-12-29 19:38:54: GPNPCONFIGDIR=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: GPNPGCONFIGDIR=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: GPNP_PA=
    2011-12-29 19:38:54: HELPJAR_NAME=help4.jar
    2011-12-29 19:38:54: HOST_NAME_LIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: ID=/etc
    2011-12-29 19:38:54: INIT=/usr/sbin/init
    2011-12-29 19:38:54: IT=/etc/inittab
    2011-12-29 19:38:54: JEWTJAR_NAME=jewt4.jar
    2011-12-29 19:38:54: JLIBDIR=/oracle/grid/11.2/grid/jlib
    2011-12-29 19:38:54: JREDIR=/oracle/grid/11.2/grid/jdk/jre/
    2011-12-29 19:38:54: LANGUAGE_ID=AMERICAN_AMERICA.WE8ISO8859P1
    2011-12-29 19:38:54: MSGFILE=/var/adm/messages
    2011-12-29 19:38:54: NETCFGJAR_NAME=netcfg.jar
    2011-12-29 19:38:54: NETWORKS="en2"/53.2.1.0:public,"en3"/10.0.0.0:cluster_interconnect
    2011-12-29 19:38:54: NEW_HOST_NAME_LIST=
    2011-12-29 19:38:54: NEW_NODEVIPS="yhsscore1vip/255.255.255.192/en2,yhsscore2vip/255.255.255.192/en2"
    2011-12-29 19:38:54: NEW_NODE_NAME_LIST=
    2011-12-29 19:38:54: NEW_PRIVATE_NAME_LIST=
    2011-12-29 19:38:54: NODELIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: NODE_NAME_LIST=yhsscore1,yhsscore2
    2011-12-29 19:38:54: OCFS_CONFIG=
    2011-12-29 19:38:54: OCRCONFIG=/etc/oracle/ocr.loc
    2011-12-29 19:38:54: OCRCONFIGDIR=/etc/oracle
    2011-12-29 19:38:54: OCRID=
    2011-12-29 19:38:54: OCRLOC=ocr.loc
    2011-12-29 19:38:54: OCR_LOCATIONS=NO_VAL
    2011-12-29 19:38:54: OLASTGASPDIR=/etc/oracle/lastgasp
    2011-12-29 19:38:54: OLD_CRS_HOME=
    2011-12-29 19:38:54: OLRCONFIG=/etc/oracle/olr.loc
    2011-12-29 19:38:54: OLRCONFIGDIR=/etc/oracle
    2011-12-29 19:38:54: OLRLOC=olr.loc
    2011-12-29 19:38:54: OPROCDCHECKDIR=/etc/oracle/oprocd/check
    2011-12-29 19:38:54: OPROCDDIR=/etc/oracle/oprocd
    2011-12-29 19:38:54: OPROCDFATALDIR=/etc/oracle/oprocd/fatal
    2011-12-29 19:38:54: OPROCDSTOPDIR=/etc/oracle/oprocd/stop
    2011-12-29 19:38:54: ORACLE_BASE=/oracle/grid/app/grid
    2011-12-29 19:38:54: ORACLE_HOME=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: ORACLE_OWNER=grid
    2011-12-29 19:38:54: ORA_ASM_GROUP=dba
    2011-12-29 19:38:54: ORA_DBA_GROUP=dba
    2011-12-29 19:38:54: PRIVATE_NAME_LIST=
    2011-12-29 19:38:54: RCALLDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RCKDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RCSDIR=/etc/rc.d/rc2.d
    2011-12-29 19:38:54: RC_KILL=K19
    2011-12-29 19:38:54: RC_KILL_OLD=S96
    2011-12-29 19:38:54: RC_START=S96
    2011-12-29 19:38:54: SCAN_NAME=yhsscan
    2011-12-29 19:38:54: SCAN_PORT=1521
    2011-12-29 19:38:54: SCRBASE=/etc/oracle/scls_scr
    2011-12-29 19:38:54: SHAREJAR_NAME=share.jar
    2011-12-29 19:38:54: SILENT=false
    2011-12-29 19:38:54: SO_EXT=so
    2011-12-29 19:38:54: SRVCFGLOC=srvConfig.loc
    2011-12-29 19:38:54: SRVCONFIG=/var/opt/oracle/srvConfig.loc
    2011-12-29 19:38:54: SRVCONFIGDIR=/var/opt/oracle
    2011-12-29 19:38:54: VNDR_CLUSTER=false
    2011-12-29 19:38:54: VOTING_DISKS=NO_VAL
    2011-12-29 19:38:54: ### Printing other configuration values ###
    2011-12-29 19:38:54: CLSCFG_EXTRA_PARMS=
    2011-12-29 19:38:54: CRSDelete=0
    2011-12-29 19:38:54: CRSPatch=0
    2011-12-29 19:38:54: DEBUG=
    2011-12-29 19:38:54: DOWNGRADE=
    2011-12-29 19:38:54: HAS_GROUP=dba
    2011-12-29 19:38:54: HAS_USER=root
    2011-12-29 19:38:54: HOST=yhsscore1
    2011-12-29 19:38:54: IS_SIHA=0
    2011-12-29 19:38:54: OLR_DIRECTORY=/oracle/grid/11.2/grid/cdata
    2011-12-29 19:38:54: OLR_LOCATION=/oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:54: ORA_CRS_HOME=/oracle/grid/11.2/grid
    2011-12-29 19:38:54: REMOTENODE=
    2011-12-29 19:38:54: SUPERUSER=root
    2011-12-29 19:38:54: UPGRADE=
    2011-12-29 19:38:54: VF_DISCOVERY_STRING=
    2011-12-29 19:38:54: addfile=/oracle/grid/11.2/grid/crs/install/crsconfig_addparams
    2011-12-29 19:38:54: crscfg_trace=1
    2011-12-29 19:38:54: crscfg_trace_file=/oracle/grid/11.2/grid/cfgtoollogs/crsconfig/rootcrs_yhsscore1.log
    2011-12-29 19:38:54: hosts=
    2011-12-29 19:38:54: oldcrshome=
    2011-12-29 19:38:54: oldcrsver=
    2011-12-29 19:38:54: osdfile=/oracle/grid/11.2/grid/crs/install/s_crsconfig_defs
    2011-12-29 19:38:54: parameters_valid=1
    2011-12-29 19:38:54: paramfile=/oracle/grid/11.2/grid/crs/install/crsconfig_params
    2011-12-29 19:38:54: platform_family=unix
    2011-12-29 19:38:54: srvctl_trc_suff=0
    2011-12-29 19:38:54: unlock_crshome=
    2011-12-29 19:38:54: user_is_superuser=1
    2011-12-29 19:38:54: ### Printing of configuration values complete ###
    2011-12-29 19:38:54: Oracle CRS stack is not configured yet
    2011-12-29 19:38:54: CRS is not yet configured. Hence, will proceed to configure CRS
    2011-12-29 19:38:54: Cluster-wide one-time actions... Done!
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Host name = yhsscore1
    2011-12-29 19:38:56: CRS user = grid
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: GPnP host = yhsscore1
    2011-12-29 19:38:56: Oracle GPnP home = /oracle/grid/11.2/grid/gpnp
    2011-12-29 19:38:56: Oracle GPnP local home = /oracle/grid/11.2/grid/gpnp/yhsscore1
    2011-12-29 19:38:56: GPnP directories verified.
    2011-12-29 19:38:56: Checking to see if Oracle CRS stack is already configured
    2011-12-29 19:38:56: Oracle CRS stack is not configured yet
    2011-12-29 19:38:56: ---Checking local gpnp setup...
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/profiles/peer/profile.xml" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/wallets/peer/cwallet.sso" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/yhsscore1/wallets/prdr/cwallet.sso" does not exist
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp/yhsscore1: profile_ok 0 wallet_ok 0 r/o_wallet_ok 0
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp/yhsscore1: INVALID (bad profile/wallet)
    2011-12-29 19:38:56: ---Checking cluster-wide gpnp setup...
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/profiles/peer/profile.xml" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/wallets/peer/cwallet.sso" does not exist
    2011-12-29 19:38:56: The setup file "/oracle/grid/11.2/grid/gpnp/wallets/prdr/cwallet.sso" does not exist
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp: profile_ok 0 wallet_ok 0 r/o_wallet_ok 0
    2011-12-29 19:38:56: chk gpnphome /oracle/grid/11.2/grid/gpnp: INVALID (bad profile/wallet)
    2011-12-29 19:38:56: gpnp setup checked: local valid? 0 cluster-wide valid? 0
    2011-12-29 19:38:56: gpnp setup: NONE
    2011-12-29 19:38:56: GPNP configuration required
    2011-12-29 19:38:56: Validating for SI-CSS configuration
    2011-12-29 19:38:56: Retrieving OCR main disk location
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc
    2011-12-29 19:38:56: Unable to retrieve ocr disk info
    2011-12-29 19:38:56: Checking to see if any 9i GSD is up
    2011-12-29 19:38:56: libskgxnBase_lib = /etc/ORCLcluster/oracm/lib/libskgxn2.so
    2011-12-29 19:38:56: libskgxn_lib = /opt/ORCLcluster/lib/libskgxn2.so
    2011-12-29 19:38:56: SKGXN library file does not exists
    2011-12-29 19:38:56: OLR location = /oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:56: Oracle CRS Home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Validating /etc/oracle/olr.loc file for OLR location /oracle/grid/11.2/grid/cdata/yhsscore1.olr
    2011-12-29 19:38:56: /etc/oracle/olr.loc already exists. Backing up /etc/oracle/olr.loc to /etc/oracle/olr.loc.orig
    2011-12-29 19:38:56: Oracle CRS home = /oracle/grid/11.2/grid
    2011-12-29 19:38:56: Oracle cluster name = yhscluster
    2011-12-29 19:38:56: OCR locations = +CRS
    2011-12-29 19:38:56: Validating OCR
    2011-12-29 19:38:56: Retrieving OCR location used by previous installations
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrmirrorconfig_loc
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc3
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc4
    2011-12-29 19:38:56: Opening file OCRCONFIG
    2011-12-29 19:38:56: Value () is set for key=ocrconfig_loc5
    2011-12-29 19:38:56: Checking if OCR sync file exists
    2011-12-29 19:38:56: No need to sync OCR file
    2011-12-29 19:38:56: OCR_LOCATION=+CRS
    2011-12-29 19:38:56: OCR_MIRROR_LOCATION=
    2011-12-29 19:38:56: OCR_MIRROR_LOC3=
    2011-12-29 19:38:56: OCR_MIRROR_LOC4=
    2011-12-29 19:38:56: OCR_MIRROR_LOC5=
    2011-12-29 19:38:56: Current OCR location=
    2011-12-29 19:38:56: Current OCR mirror location=
    2011-12-29 19:38:56: Current OCR mirror loc3=
    2011-12-29 19:38:56: Current OCR mirror loc4=
    2011-12-29 19:38:56: Current OCR mirror loc5=
    2011-12-29 19:38:56: Verifying current OCR settings with user entered values
    2011-12-29 19:38:56: Setting OCR locations in /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Validating OCR locations in /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Checking for existence of /etc/oracle/ocr.loc
    2011-12-29 19:38:56: Backing up /etc/oracle/ocr.loc to /etc/oracle/ocr.loc.orig
    2011-12-29 19:38:56: Setting ocr location +CRS
    2011-12-29 19:38:56: User grid has the required capabilities to run CSSD in realtime mode
    *2011-12-29 19:38:56: Creating or upgrading Oracle Local Registry (OLR)*
    *2011-12-29 19:38:56: Command return code of 255 (65280) from command: /oracle/grid/11.2/grid/bin/ocrconfig -local -upgrade grid dba*
    *2011-12-29 19:38:56: /oracle/grid/11.2/grid/bin/ocrconfig -local -upgrade failed with error: 255*
    *2011-12-29 19:38:56: Failed to create or upgrade OLR*
    Edited by: 905068

    Refer to:
    Command return code of 255 (65280) during Grid Infrastructure Installation
    http://coskan.wordpress.com/2009/12/07/root-sh-failed-after-asm-disk-creation-for-11gr2-grid-infrastructure/
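
    Before re-running root.sh, one thing worth checking (an assumption on my part, not a confirmed cause for this log) is the ownership of the candidate disks the log lists (`/dev/rupdisk0` through `/dev/rupdisk2`): `ocrconfig` failures at this stage are often permission problems. A hedged, portable helper using `ls -ld` (so it works on both AIX and Linux) might look like:

    ```shell
    #!/bin/sh
    # Hedged helper: verify each device/file is owned by the expected user.
    # Device paths are taken from the log above and may differ for you.
    check_owner() {
        expected=$1
        shift
        rc=0
        for f in "$@"; do
            if [ ! -e "$f" ]; then
                echo "missing: $f"
                rc=1
                continue
            fi
            # Third field of ls -ld is the owning user on AIX and Linux.
            owner=$(ls -ld "$f" | awk '{print $3}')
            if [ "$owner" != "$expected" ]; then
                echo "wrong owner on $f: $owner (expected $expected)"
                rc=1
            fi
        done
        return $rc
    }
    ```

    For example: `check_owner grid /dev/rupdisk0 /dev/rupdisk1 /dev/rupdisk2`.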

  • Root.sh fails on 2nd node

    AIX 6
    Oracle grid infrastructure 11.2.0.3
    At the end of the grid install, I ran root.sh on the first node and then on the second node, where it failed. A deconfig ran successfully, but root.sh failed again on the second node:
    Successfully deconfigured Oracle clusterware stack on this node
    mtnx213:/oracle/app/grid/product/11.2.0/grid/crs/install#/oracle/app/grid/product/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
        ORACLE_OWNER= oragrid
        ORACLE_HOME= /oracle/app/grid/product/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /oracle/app/grid/product/11.2.0/grid/crs/install/crsconfig_params
    User ignored Prerequisites during installation
    User oragrid has the required capabilities to run CSSD in realtime mode
    OLR initialization - successful
    Adding Clusterware entries to inittab
    USM driver install actions failed
    /oracle/app/grid/product/11.2.0/grid/perl/bin/perl -I/oracle/app/grid/product/11.2.0/grid/perl/lib -I/oracle/app/grid/product/11.2.0/grid/crs/install /oracle/app/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed

    You can find my answer here (in your duplicate post): root.sh fails on 2nd node Timed out waiting for the CRS stack to start

  • 11g R2 root.sh failed on first node with OLR fetch parameter error

    I have successfully installed 11g R2.1 on CentOS 5.4 64-bit.
    Now I am installing 11g R2.2 on Red Hat 5.4 64-bit with HDS storage.
    [grid@dmdb1 grid]$ uname -a
    Linux dmdb1 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
    I passed all pre-install requirements except shared storage; however, I verified it manually with no problems.
    [grid@dmdb1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n dmdb1,dmdb2,dmdb3,dmdb4 -verbose|grep -i fail
    [grid@dmdb1 grid]$ ./runcluvfy.sh stage -post hwos -n dmdb1,dmdb2,dmdb3,dmdb4 -verbose|grep -i fail
    [grid@dmdb1 grid]$ ./runcluvfy.sh comp sys -n dmdb1,dmdb2,dmdb3,dmdb4 -p crs -osdba dba -orainv oinstall
    Verifying system requirement
    Total memory check passed
    Available memory check passed
    Swap space check passed
    Free disk space check passed for "dmdb4:/tmp"
    Free disk space check passed for "dmdb3:/tmp"
    Free disk space check passed for "dmdb2:/tmp"
    Free disk space check passed for "dmdb1:/tmp"
    User existence check passed for "grid"
    Group existence check passed for "oinstall"
    Group existence check passed for "dba"
    Membership check for user "grid" in group "oinstall" [as Primary] passed
    Membership check for user "grid" in group "dba" passed
    Run level check passed
    Hard limits check passed for "maximum open file descriptors"
    Soft limits check passed for "maximum open file descriptors"
    Hard limits check passed for "maximum user processes"
    Soft limits check passed for "maximum user processes"
    System architecture check passed
    Kernel version check passed
    Kernel parameter check passed for "semmsl"
    Kernel parameter check passed for "semmns"
    Kernel parameter check passed for "semopm"
    Kernel parameter check passed for "semmni"
    Kernel parameter check passed for "shmmax"
    Kernel parameter check passed for "shmmni"
    Kernel parameter check passed for "shmall"
    Kernel parameter check passed for "file-max"
    Kernel parameter check passed for "ip_local_port_range"
    Kernel parameter check passed for "rmem_default"
    Kernel parameter check passed for "rmem_max"
    Kernel parameter check passed for "wmem_default"
    Kernel parameter check passed for "wmem_max"
    Kernel parameter check passed for "aio-max-nr"
    Package existence check passed for "make-3.81"
    Package existence check passed for "binutils-2.17.50.0.6"
    Package existence check passed for "gcc-4.1"
    Package existence check passed for "libaio-0.3.106 (i386)"
    Package existence check passed for "libaio-0.3.106 (x86_64)"
    Package existence check passed for "glibc-2.5-24 (i686)"
    Package existence check passed for "glibc-2.5-24 (x86_64)"
    Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
    Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
    Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
    Package existence check passed for "elfutils-libelf-devel-0.125"
    Package existence check passed for "glibc-common-2.5"
    Package existence check passed for "glibc-devel-2.5 (i386)"
    Package existence check passed for "glibc-devel-2.5 (x86_64)"
    Package existence check passed for "glibc-headers-2.5"
    Package existence check passed for "gcc-c++-4.1.2"
    Package existence check passed for "libaio-devel-0.3.106 (i386)"
    Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
    Package existence check passed for "libgcc-4.1.2 (i386)"
    Package existence check passed for "libgcc-4.1.2 (x86_64)"
    Package existence check passed for "libstdc++-4.1.2 (i386)"
    Package existence check passed for "libstdc++-4.1.2 (x86_64)"
    Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
    Package existence check passed for "sysstat-7.0.2"
    Package existence check passed for "unixODBC-2.2.11 (i386)"
    Package existence check passed for "unixODBC-2.2.11 (x86_64)"
    Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
    Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
    Package existence check passed for "ksh-20060214"
    Check for multiple users with UID value 0 passed
    Verification of system requirement was successful.
    [grid@dmdb1 grid]$ ./runcluvfy.sh comp sys -n dmdb1,dmdb2,dmdb3,dmdb4 -p database -osdba dba -orainv oinstall|grep -i fail
    [grid@dmdb1 grid]$ ./runcluvfy.sh comp ssa -n dmdb1,dmdb2,dmdb3,dmdb4
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    Storage operation failed
    Shared storage check failed on nodes "dmdb4,dmdb3,dmdb2,dmdb1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    I followed the article below to verify the shared storage issue:
    http://www.webofwood.com/rac/oracle-response-to-shared-storage-check-failed-on-nodes/
    It checked out OK, so I skipped the SSA issue and continued the install with ./runInstaller -ignoreInternalDriverError.
    However, root.sh then failed with the errors below:
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'dmdb1'
    CRS-2677: Stop of 'ora.mdnsd' on 'dmdb1' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'dmdb1'
    CRS-2677: Stop of 'ora.gipcd' on 'dmdb1' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    CRS-2672: Attempting to start 'ora.gipcd' on 'dmdb1'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'dmdb1'
    CRS-2676: Start of 'ora.gipcd' on 'dmdb1' succeeded
    CRS-2676: Start of 'ora.mdnsd' on 'dmdb1' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'dmdb1'
    CRS-2676: Start of 'ora.gpnpd' on 'dmdb1' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dmdb1'
    CRS-2676: Start of 'ora.cssdmonitor' on 'dmdb1' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'dmdb1'
    CRS-2672: Attempting to start 'ora.diskmon' on 'dmdb1'
    CRS-2676: Start of 'ora.diskmon' on 'dmdb1' succeeded
    CRS-2674: Start of 'ora.cssd' on 'dmdb1' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'dmdb1'
    CRS-2681: Clean of 'ora.cssd' on 'dmdb1' succeeded
    CRS-2673: Attempting to stop 'ora.diskmon' on 'dmdb1'
    CRS-2677: Stop of 'ora.diskmon' on 'dmdb1' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'dmdb1'
    CRS-2677: Stop of 'ora.gpnpd' on 'dmdb1' succeeded
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'dmdb1'
    CRS-2677: Stop of 'ora.mdnsd' on 'dmdb1' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'dmdb1'
    CRS-2677: Stop of 'ora.gipcd' on 'dmdb1' succeeded
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl start resource ora.ctssd -init
    Start of resource "ora.ctssd -init" failed
    Clusterware exclusive mode start of resource ora.ctssd failed
    CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
    Stop of resource "ora.crsd -init" failed
    Failed to stop CRSD
    CRS-2500: Cannot stop resource 'ora.asm' as it is not running
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: /opt/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
    Stop of resource "ora.asm -init" failed
    Failed to stop ASM
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'dmdb1'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'dmdb1' succeeded
    Initial cluster configuration failed. See /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_dmdb1.log for details
    I manually ran '/opt/app/11.2.0/grid/bin/crsctl start resource ora.ctssd -init' and got the errors below from /opt/app/11.2.0/grid/log/dmdb1/cssd/ocssd.log:
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-09-23 19:06:41.501: [    CSSD][1812336384]clssscmain: Starting CSS daemon, version 11.2.0.1.0, in (exclusive) mode with uniqueness value 1316776001
    2011-09-23 19:06:41.502: [    CSSD][1812336384]clssscmain: Environment is production
    2011-09-23 19:06:41.502: [    CSSD][1812336384]clssscmain: Core file size limit extended
    2011-09-23 19:06:41.515: [    CSSD][1812336384]clssscGetParameterOLR: OLR fetch for parameter logsize (8) failed with rc 21
    2011-09-23 19:06:41.515: [    CSSD][1812336384]clssscSetPrivEnv: IPMI device not installed on this node
    2011-09-23 19:06:41.517: [    CSSD][1812336384]clssscGetParameterOLR: OLR fetch for parameter priority (15) failed with rc 21
    2011-09-23 19:06:41.539: [    CSSD][1812336384]clssscExtendLimits: The current soft limit for file descriptors is 65536, hard limit is 65536
    2011-09-23 19:06:41.539: [    CSSD][1812336384]clssscExtendLimits: The current soft limit for locked memory is 4294967295, hard limit is 4294967295
    2011-09-23 19:06:41.541: [    CSSD][1812336384]clssscmain: Running as user grid
    Can anybody help me fix this?

    I opened an SR for this case, and it's OK now.
    Below is the response from the Oracle Global Support service request:
    === ODM Action Plan ===
    Dear customer, after going through the uploaded log files, we found the issue looks like
    bug 9732641: the clusterware gpnpd process crashes when there is more than one cluster with the same name.
    To narrow down the issue, please apply the following steps.
    1. Please clean the previous configuration with the steps below, then run the root.sh script on node1 again.
    1.1 Remove the current configuration:
    $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
    1.2 Remove other related files.
    If $GI_BASE/Clusterware/ckptGridHA_.xml is still there, please remove it manually with the "rm" command on all nodes.
    If the gpnp profile is still there, please clean it up, then rebuild the required directories:
    $ rm -rf $GRID_HOME/gpnp/*
    $ mkdir -p $GRID_HOME/gpnp/profiles/peer $GRID_HOME/gpnp/wallets/peer $GRID_HOME/gpnp/wallets/prdr $GRID_HOME/gpnp/wallets/pa $GRID_HOME/gpnp/wallets/root
    2. After the previous configuration has been cleaned up, please rerun the root.sh script. If the issue is still there, please upload the following:
    Everything under <GI_HOME>/log
    Everything under <ORACLE_BASE for grid user>/cfgtoollogs
    Everything under <GI_HOME>/cfgtoollogs/crsconfig
    OS log (/var/log/messages)
    3. Please also make sure there is only one GI running on your cluster.
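
    Step 1.2 above can be scripted. This is a sketch of the support-provided cleanup/rebuild only; the grid home is passed in as a parameter rather than assumed, and it should be run as the grid software owner:

    ```shell
    #!/bin/sh
    # Sketch of the GPnP cleanup/rebuild from step 1.2 of the SR plan.
    rebuild_gpnp_dirs() {
        grid_home=$1
        # Wipe stale GPnP profiles/wallets left by the failed configuration.
        rm -rf "$grid_home/gpnp"/*
        # Recreate the directory skeleton root.sh expects to find.
        for d in profiles/peer wallets/peer wallets/prdr wallets/pa wallets/root; do
            mkdir -p "$grid_home/gpnp/$d" || return 1
        done
        return 0
    }
    ```

    For example: `rebuild_gpnp_dirs /opt/app/11.2.0/grid` before rerunning root.sh.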

  • Root.sh fails while installation of 10.2.0.1 clusterware on Solaris

    All
    I am trying to configure RAC on Solaris 10 running on a 64-bit SPARC machine, using raw slices for the cluster files: /dev/rdsk/c1t9d0s5 (OCR) and /dev/rdsk/c1t9d0s6 (voting disk). Using the format command, I created these slices, leaving about 10 cylinders at the start, and assigned 200 MB to each slice.
    When I run the OUI for Clusterware 10.2.0.1 and reach the step where I am asked to run root.sh, the script keeps failing with the error message below:
    root@hcssun01 # /u01/app/oracle/product/10.2.0/crs_home/root.sh
    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Checking to see if any 9i GSD is up
    /u01/app/oracle/product/10.2.0/crs_home/bin/lsdb: failed to initialize interface to Cluster Manager.
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    clscfg -install -nn nodeA,nodeAnum,nodeB,nodeBnum... -o crshome
    -l languageid -c clustername -q votedisk
    [-t p1,p2,p3,p4] [-pn privA,privAnum,privB,privBnum...]
    [-hn hostA,hostAnum,hostB,hostBnum...]
    -o crshome - directory CRS is installed in
    -q votedisk - path to the CSS voting disk
    -c clustername - name of the cluster. 1-14 character string
    -l languageid - Oracle localization language id.
    e.g. AMERICAN_AMERICA.WE8ASCII37C
    -nn name,num - nodename list in pairs of nodename,nodenumber
    If OS clusterware is installed see vendor docs.
    e.g. node1,1,node2,2,node4,4
    -pn name,num - Defines private interconnect names for nodes already
    specified with the -nn flag.
    Defaults to the nodename if not specified.
    -hn name,num - Defines hostnames for nodes specified with the -nn
    flag in the same format as above.
    Defaults to the nodename if not specified.
    -t p1,p2,p3,p4 - Specifies TCP ports to be used by the CRS daemons
    on the private interconnect.
    default ports: 49895,49896,49897,49898
    -force Forces overwrite of any previous configuration.
    WARNING: Using this tool may corrupt your cluster configuration. Do not
    use unless you positively know what you are doing.
    Failed to initialize Oracle Cluster Registry for cluster
    0
    The log files under CRS_HOME/log/<hostname>/client do not reveal much information either. If you have any idea what's going wrong and how to fix or work around it, could you please help me? Below are more details of the software stack and hardware that may be of some help.
    OS: Solaris 5.10 running on SF 480R.
    Cluster 10.2.0.1
    I intend to install RDBMS 10.2.0.1 on top of this and later upgrade it to 11.1
    I am referring to the Oracle Clusterware and Oracle Real Application Clusters Installation Guide (b14205-01) published by Oracle.
    Thanks in advance.
    Sarat.

    Thanks for the assistance. However, I have yet to proceed beyond this root.sh.
    After multiple tries, I keep coming across references to this Oracle UDLM software, which I am sure I do not have on my cluster machines, yet it is not mentioned everywhere. What is this software? I have searched all my Oracle media and nowhere is this package/patch available. If you know about it (the patch number or something), can you help me?
    Also, I have a question to clarify. It may be elementary, but I request your patience: on the Sun Cluster we have defined disk resource groups configured for cold failover along with other resource groups. Is this something that is not allowed/supported?
    I will need your help, as this is my first time singing this song of Oracle RAC on Sun Solaris 10.
    Regards!
    Sarat

  • Root.sh fails on second node during clusterware installation

    I am setting up a test instance of OEL 5.4 using VMware.
    I am running the clusterware install and it is failing only on node 2; see below.
    I followed note 414897.1 on Metalink for the raw device setup.
    Any help would be greatly appreciated.
    2010-09-01 11:58:21.084: [ default][1275584]a_init:7!: Backend init unsuccessful : [22]
    2010-09-01 11:58:21.091: [  OCRRAW][1275584]propriogid:1: INVALID FORMAT
    2010-09-01 11:58:21.091: [  OCRRAW][1275584]ibctx:1:ERROR: INVALID FORMAT
    2010-09-01 11:58:21.091: [  OCRRAW][1275584]proprinit:problem reading the bootblock or superbloc 22
    2010-09-01 11:58:21.097: [  OCRRAW][1275584]propriogid:1: INVALID FORMAT
    2010-09-01 11:58:21.139: [  OCRRAW][1275584]propriowv: Vote information on disk 0 [u01/app/oracle/oradata/ocr] is adjusted from [0/0] to [2/2]
    2010-09-01 11:58:21.191: [  OCRRAW][1275584]propriniconfig:No 92 configuration
    2010-09-01 11:58:21.192: [  OCRAPI][1275584]a_init:6a: Backend init successful
    2010-09-01 11:58:21.299: [ OCRCONF][1275584]Initialized DATABASE keys in OCR
    2010-09-01 11:58:21.555: [ OCRCONF][1275584]Successfully set skgfr block 0
    2010-09-01 11:58:21.557: [ OCRCONF][1275584]Exiting [status=success]...

    Oracle 10gR2 RAC installation on Red Hat 5 Linux using VMware.
    Important points for installing 10gR2 Oracle RAC on Linux 5:
    1. Linux 5 (Red Hat 5) doesn't have the /etc/sysconfig/rawdevices file, so we have to configure it ourselves.
    2. Edit the version in /etc/redhat-release to redhat-4, and invoke the installer with:
    $ runInstaller -ignoreSysPrereqs   // this will bypass the OS check //
    3. During clusterware installation, root.sh on node 2 ends with an error message, so we have to adjust parameters in the vipca and srvctl files.
    4. vipca will fail to run, so we have to adjust some parameters and configure it manually.
    Refer to the link below; it will help you complete your installation.
    http://oracleinstance.blogspot.com/2010/03/oracle-10g-installation-in-linux-5.html
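    The "adjust some parameters" in steps 3 and 4 above refers to the well-known Linux 5 issue where vipca and srvctl export LD_ASSUME_KERNEL=2.4.19, which the newer glibc rejects; the usual fix is to add "unset LD_ASSUME_KERNEL" right after that export in both scripts. The sketch below applies the edit to a scratch copy only; on a real system you would edit $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl yourself (after backing them up).

    ```shell
    # Hedged sketch of the vipca/srvctl adjustment for Linux 5, shown
    # against a scratch file that mimics the relevant lines of the
    # scripts. On a real system, apply the same edit to
    # $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl (back them up first).
    script=$(mktemp)
    cat > "$script" <<'EOF'
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
    EOF
    # Append "unset LD_ASSUME_KERNEL" immediately after the export line:
    sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$script"
    cat "$script"
    rm -f "$script"
    ```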

  • Root.sh failing after installing grid 11g on Linux x86-64

    Hi,
    I am getting the below error while executing root.sh after installing 11g Grid for RAC.
    [root@erprac2 ~]# /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
        ORACLE_OWNER= oracle
        ORACLE_HOME=  /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
       Copying dbhome to /usr/local/bin ...
       Copying oraenv to /usr/local/bin ...
       Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    syntax error at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1129, next token ???
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1130.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1130.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1133.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1136.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1137.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1138.
    Global symbol "@host_array" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1138.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1139.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1139.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1141.
    Global symbol "$rtt" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1141.
    Global symbol "$ip" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1141.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1141.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1142.
    Global symbol "$ip" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1142.
    Global symbol "$rtt" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1142.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1146.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1147.
    Global symbol "$ret" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1148.
    Global symbol "$duration" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1148.
    Global symbol "$ip" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1148.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1148.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1148.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1149.
    Global symbol "$ip" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1149.
    Global symbol "$duration" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1149.
    Global symbol "$ret" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1150.
    Global symbol "$p" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1151.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1154.
    Global symbol "$host" requires explicit package name at /u01/app/11.2.0/grid/perl/lib/5.10.0/Net/Ping.pm line 1154.
    Compilation failed in require at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 286.
    BEGIN failed--compilation aborted at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 286.
    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

    Hi Anil,
    This suggests some library packages are missing. Did you run the Cluster Verification Utility on all nodes in the RAC? That check is mandatory. If not, install all required packages on every node in the cluster and then try again.
    Regards,
    Pradeep V
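    Since the failure is a Perl compilation error in the Grid home's bundled Net/Ping.pm (often the result of a corrupted extraction of the installation media), a quick diagnostic is to ask Perl to compile the module directly; the cluster verification run mentioned above is also sketched. The paths are taken from the log in the question, and the node names in the cluvfy line are hypothetical examples.

    ```shell
    # Check whether the bundled Net::Ping compiles (paths taken from the
    # log above; adjust for your Grid home). A syntax error here points
    # at a corrupted unzip of the install media -- re-extract and retry.
    # /u01/app/11.2.0/grid/perl/bin/perl \
    #     -I/u01/app/11.2.0/grid/perl/lib \
    #     -e 'use Net::Ping; print "Net::Ping OK\n"'

    # The same sanity check with the system perl (Net::Ping is a core
    # Perl module, so this should print the OK line on a healthy host):
    perl -e 'use Net::Ping; print "Net::Ping OK\n"'

    # Pre-install verification across both nodes, as the reply suggests
    # (node names are examples; run from the unzipped install media):
    # ./runcluvfy.sh stage -pre crsinst -n erprac1,erprac2 -verbose
    ```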
