CRS installation root.sh failed -- FAILURE AT FINAL CHECK

CRS 10.2.0.1 installation on Solaris 10
root.sh failed; the following error appeared.
bash-3.00# sh -x root.sh
+ /opt/oracrs/install/rootinstall
+ /opt/oracrs/install/rootconfig
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: proddb02 proddb02-priv proddb02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/rdsk/vpath9a
Now formatting voting device: /dev/rdsk/vpath10a
Now formatting voting device: /dev/rdsk/vpath11a
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.
10
bash-3.00#
I followed MetaLink note 240001.1 as well, with no success.
Any idea where I should look?

You can start by looking at the following files to check whether any errors are reported:
a) The OS Messages file ( /var/adm/messages )
b) Any files with names like crsctl* under the /tmp directory
c) The client trace files under $ORA_CRS_HOME/log/<hostname>/client/*
d) The crsd and the cssd logs under $ORA_CRS_HOME/log/<hostname>/[crsd/cssd]/*
Also, what does ps -ef | grep d.bin indicate?
Which daemons get started and which do not?
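For example (a minimal sketch; adjust $ORA_CRS_HOME and the hostname to your environment):
# Which CRS daemons are actually running?
ps -ef | grep d.bin | grep -v grep
# OS messages and the Clusterware logs mentioned above
tail -100 /var/adm/messages
ls -l /tmp/crsctl*
ls -ltr $ORA_CRS_HOME/log/`hostname`/client/
tail -100 $ORA_CRS_HOME/log/`hostname`/crsd/crsd.log
tail -100 $ORA_CRS_HOME/log/`hostname`/cssd/ocssd.log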
Vishwa

Similar Messages

  • CRS installation: Failure at final check of Oracle CRS stack.10

    Hello,
    I am trying to install Oracle RAC 10gR2 to simulate a migration from 10gR2 to 11gR2. I am using VMware with two 64-bit CentOS 6.2 Linux machines and shared disks as raw devices. I got "Failure at final check of Oracle CRS stack.10" when running root.sh, on both nodes. ocrcheck is fine, but I have two different IDs... which is not good, and I do not understand why:
    - I have shared raw devices
    - the devices are the same, I checked this twice
    Can anyone help?
    I thank you all in advance.

    Hello,
    The exact error message is the one in my subject description: Failure at final check of Oracle CRS stack.10
    This occurs when running root.sh after the installation itself is successful, and there is no way to continue: the assistants fail, which is logical, as root.sh's attempt to configure and start the CRS daemons does not complete successfully. The OCRs are OK with the same ID on both nodes, the major and minor numbers for the raw devices are the same, and the firewall is disabled on both nodes. I checked the permissions; they are also OK. The exact and complete output of the root.sh script is the following:
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: cygnus cygnus-priv cygnus
    node 2: taurus taurus-priv taurus
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /apps/oracle/oradat/vot10
    Now formatting voting device: /apps/oracle/oradat/vot20
    Now formatting voting device: /apps/oracle/oradat/vot30
    Format of 3 voting devices complete.
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    Any help will be really appreciated, and I thank you in advance, guys...
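    A quick way to confirm that both nodes really see the same shared storage (a sketch; the raw device path is illustrative, the voting file paths are taken from the output above):
    # Compare major/minor numbers and ownership of the raw devices on each node
    ls -lL /dev/raw/raw*
    # Compare the voting files and the OCR ID reported on each node
    ls -l /apps/oracle/oradat/vot10 /apps/oracle/oradat/vot20 /apps/oracle/oradat/vot30
    $ORA_CRS_HOME/bin/ocrcheck | grep -E 'ID|Device/File Name'
    If the IDs or the major/minor pairs differ between the nodes, the "shared" devices are not actually the same storage.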

  • Clusterware Install:root.sh- Failure at final check of Oracle CRS stack. 10

    Hello All,
    Image: http://systemwars.com/rac/cluster_back.jpg
    I was attempting to perform the steps in:
    Link: http://www.oracle-base.com/articles/11g/OracleDB11gR1RACInstallationOnLinuxUsingNFS.php
    The only difference is that I decided to use Fedora Core 12 instead. I did this because I added a second NIC (USB) and only FC12 would recognize it; I tried to get it to work on CentOS 5 but it just wouldn't. The second NIC (eth1) on each machine is connected via crossover cable, and the interfaces (rac1-priv and rac2-priv) can ping each other just fine.
    So here is my setup:
    # Public
    192.168.2.11 rac1.localdomain rac1
    192.168.2.12 rac2.localdomain rac2
    #Private
    192.168.0.11 rac1-priv.localdomain rac1-priv
    192.168.0.12 rac2-priv.localdomain rac2-priv
    #Virtual
    192.168.2.111 rac1-vip.localdomain rac1-vip
    192.168.2.112 rac2-vip.localdomain rac2-vip
    #NAS
    192.168.2.10 mini.localdomain mini
    Mini refers to my Mac mini, which I decided to use as the third "server" in the group. I was able to mount, read, and write to the file systems just fine, as you can see:
    [root@rac1 ~]# df
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mapper/vg_rac1-lv_root
    8063408 5156268 2497540 68% /
    tmpfs 1417456 0 1417456 0% /dev/shm
    /dev/sda1 198337 22080 166017 12% /boot
    mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
    mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
    mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
    mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
    [root@rac1 ~]# ssh rac2
    Last login: Mon Dec 21 19:33:38 2009 from rac1.localdomain
    [root@rac2 ~]# df
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mapper/vg_rac2-lv_root
    8063408 4958008 2695800 65% /
    tmpfs 1417456 0 1417456 0% /dev/shm
    /dev/sda1 198337 22063 166034 12% /boot
    mini:/shared_config 488050688 76719808 411074880 16% /u01/shared_config
    mini:/shared_crs 488050688 76719808 411074880 16% /u01/app/crs/product/11.1.0/crs
    mini:/shared_home 488050688 76719808 411074880 16% /u01/app/oracle/product/11.1.0/db_1
    mini:/shared_data 488050688 76719808 411074880 16% /u01/oradata
    CLUSTER VERIFY SEEMS OK APART FROM ONE WARNING
    WARNING:
    Could not find a suitable set of interfaces for VIPs.
    which, according to this link, "can be safely ignored", although I noticed that in the link it is an actual ERROR and not a WARNING => http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_11.shtml . I also noted that it saw the public IPs as the possible private interconnect IPs, which I also thought could safely be ignored.
    [oracle@rac1 clusterware]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
    Performing pre-checks for cluster services setup
    Checking node reachability...
    Check: Node reachability from node "rac1"
      Destination Node                      Reachable?             
      rac2                                  yes                    
      rac1                                  yes                    
    Result: Node reachability check passed from node "rac1".
    Checking user equivalence...
    Check: User equivalence for user "oracle"
      Node Name                             Comment                
      rac2                                  passed                 
      rac1                                  passed                 
    Result: User equivalence check passed for user "oracle".
    Checking administrative privileges...
    Check: Existence of user "oracle"
      Node Name     User Exists               Comment                
      rac2          yes                       passed                 
      rac1          yes                       passed                 
    Result: User existence check passed for "oracle".
    Check: Existence of group "oinstall"
      Node Name     Status                    Group ID               
      rac2          exists                    501                    
      rac1          exists                    501                    
    Result: Group existence check passed for "oinstall".
    Check: Membership of user "oracle" in group "oinstall" [as Primary]
      Node Name         User Exists   Group Exists  User in Group  Primary       Comment    
      rac2              yes           yes           yes           yes           passed     
      rac1              yes           yes           yes           yes           passed     
    Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
    Administrative privileges check passed.
    Checking node connectivity...
    Interface information for node "rac2"
      Interface Name    IP Address    Subnet        Subnet Gateway  Default Gateway  Hardware Address
      eth0              192.168.2.12  192.168.2.0   0.0.0.0       192.168.2.1   00:01:6C:XXXX
      eth2              192.168.0.12  192.168.0.0   0.0.0.0       192.168.2.1   00:25:4B:XXXX
    Interface information for node "rac1"
      Interface Name    IP Address    Subnet        Subnet Gateway  Default Gateway  Hardware Address
      eth0              192.168.2.11  192.168.2.0   0.0.0.0       192.168.2.1   00:01:6CXXXXX
      eth1              192.168.0.11  192.168.0.0   0.0.0.0       192.168.2.1   00:25:4B:XXXX
    Check: Node connectivity of subnet "192.168.2.0"
      Source                          Destination                     Connected?     
      rac2:eth0                       rac1:eth0                       yes            
    Result: Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
    Check: Node connectivity of subnet "192.168.0.0"
      Source                          Destination                     Connected?     
      rac2:eth2                       rac1:eth1                       yes            
    Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.
    Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect:
    rac2 eth0:192.168.2.12
    rac1 eth0:192.168.2.11
    WARNING:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check passed.
    Checking system requirements for 'crs'...
    Check: Total memory
      Node Name     Available                 Required                  Comment  
      rac2          2.7GB (2834912KB)         1GB (1048576KB)           passed   
      rac1          2.7GB (2834912KB)         1GB (1048576KB)           passed   
    Result: Total memory check passed.
    Check: Free disk space in "/tmp" dir
      Node Name     Available                 Required                  Comment  
      rac2          4.58GB (4805204KB)        400MB (409600KB)          passed   
      rac1          10.51GB (11015624KB)      400MB (409600KB)          passed   
    Result: Free disk space check passed.
    Check: Swap space
      Node Name     Available                 Required                  Comment  
      rac2          2GB (2097144KB)           1.5GB (1572864KB)         passed   
      rac1          3GB (3145720KB)           1.5GB (1572864KB)         passed   
    Result: Swap space check passed.
    Check: System architecture
      Node Name     Available                 Required                  Comment  
      rac2          i686                      i686                      passed   
      rac1          i686                      i686                      passed   
    Result: System architecture check passed.
    Check: Kernel version
      Node Name     Available                 Required                  Comment  
      rac2          2.6.31.5-127.fc12.i686.PAE  2.6.9                     passed   
      rac1          2.6.31.5-127.fc12.i686.PAE  2.6.9                     passed   
    Result: Kernel version check passed.
    Check: Package existence for "make-3.81"
      Node Name                       Status                          Comment        
      rac2                            make-3.81-18.fc12.i686          passed         
      rac1                            make-3.81-18.fc12.i686          passed         
    Result: Package existence check passed for "make-3.81".
    Check: Package existence for "binutils-2.17.50.0.6"
      Node Name                       Status                          Comment        
      rac2                            binutils-2.19.51.0.14-34.fc12.i686  passed         
      rac1                            binutils-2.19.51.0.14-34.fc12.i686  passed         
    Result: Package existence check passed for "binutils-2.17.50.0.6".
    Check: Package existence for "gcc-4.1.1"
      Node Name                       Status                          Comment        
      rac2                            gcc-4.4.2-7.fc12.i686           passed         
      rac1                            gcc-4.4.2-7.fc12.i686           passed         
    Result: Package existence check passed for "gcc-4.1.1".
    Check: Package existence for "libaio-0.3.106"
      Node Name                       Status                          Comment        
      rac2                            libaio-0.3.107-9.fc12.i686      passed         
      rac1                            libaio-0.3.107-9.fc12.i686      passed         
    Result: Package existence check passed for "libaio-0.3.106".
    Check: Package existence for "libaio-devel-0.3.106"
      Node Name                       Status                          Comment        
      rac2                            libaio-devel-0.3.107-9.fc12.i686  passed         
      rac1                            libaio-devel-0.3.107-9.fc12.i686  passed         
    Result: Package existence check passed for "libaio-devel-0.3.106".
    Check: Package existence for "libstdc++-4.1.1"
      Node Name                       Status                          Comment        
      rac2                            libstdc++-4.4.2-7.fc12.i686     passed         
      rac1                            libstdc++-4.4.2-7.fc12.i686     passed         
    Result: Package existence check passed for "libstdc++-4.1.1".
    Check: Package existence for "elfutils-libelf-devel-0.125"
      Node Name                       Status                          Comment        
      rac2                            elfutils-libelf-devel-0.143-1.fc12.i686  passed         
      rac1                            elfutils-libelf-devel-0.143-1.fc12.i686  passed         
    Result: Package existence check passed for "elfutils-libelf-devel-0.125".
    Check: Package existence for "sysstat-7.0.0"
      Node Name                       Status                          Comment        
      rac2                            sysstat-9.0.4-4.fc12.i686       passed         
      rac1                            sysstat-9.0.4-4.fc12.i686       passed         
    Result: Package existence check passed for "sysstat-7.0.0".
    Check: Package existence for "compat-libstdc++-33-3.2.3"
      Node Name                       Status                          Comment        
      rac2                            compat-libstdc++-33-3.2.3-68.i686  passed         
      rac1                            compat-libstdc++-33-3.2.3-68.i686  passed         
    Result: Package existence check passed for "compat-libstdc++-33-3.2.3".
    Check: Package existence for "libgcc-4.1.1"
      Node Name                       Status                          Comment        
      rac2                            libgcc-4.4.2-7.fc12.i686        passed         
      rac1                            libgcc-4.4.2-7.fc12.i686        passed         
    Result: Package existence check passed for "libgcc-4.1.1".
    Check: Package existence for "libstdc++-devel-4.1.1"
      Node Name                       Status                          Comment        
      rac2                            libstdc++-devel-4.4.2-7.fc12.i686  passed         
      rac1                            libstdc++-devel-4.4.2-7.fc12.i686  passed         
    Result: Package existence check passed for "libstdc++-devel-4.1.1".
    Check: Package existence for "unixODBC-2.2.11"
      Node Name                       Status                          Comment        
      rac2                            unixODBC-2.2.14-6.fc12.i686     passed         
      rac1                            unixODBC-2.2.14-9.fc12.i686     passed         
    Result: Package existence check passed for "unixODBC-2.2.11".
    Check: Package existence for "unixODBC-devel-2.2.11"
      Node Name                       Status                          Comment        
      rac2                            unixODBC-devel-2.2.14-6.fc12.i686  passed         
      rac1                            unixODBC-devel-2.2.14-9.fc12.i686  passed         
    Result: Package existence check passed for "unixODBC-devel-2.2.11".
    Check: Package existence for "glibc-2.5-12"
      Node Name                       Status                          Comment        
      rac2                            glibc-2.11-2.i686               passed         
      rac1                            glibc-2.11-2.i686               passed         
    Result: Package existence check passed for "glibc-2.5-12".
    Check: Group existence for "dba"
      Node Name     Status                    Comment                
      rac2          exists                    passed                 
      rac1          exists                    passed                 
    Result: Group existence check passed for "dba".
    Check: Group existence for "oinstall"
      Node Name     Status                    Comment                
      rac2          exists                    passed                 
      rac1          exists                    passed                 
    Result: Group existence check passed for "oinstall".
    Check: User existence for "nobody"
      Node Name     Status                    Comment                
      rac2          exists                    passed                 
      rac1          exists                    passed                 
    Result: User existence check passed for "nobody".
    System requirement passed for 'crs'
    Pre-check for cluster services setup was successful.
    So now here is the actual problem:
    After the installation and during the run of the root.sh I get:
    Failure at final check of Oracle CRS stack.
    10
    [root@rac1 crs]# ./root.sh
    WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root
    WARNING: directory '/u01/app/crs/product' is not owned by root
    WARNING: directory '/u01/app/crs' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    Oracle Cluster Registry configuration upgraded successfully
    The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root
    The directory '/u01/app/crs/product' is not owned by root. Changing owner to root
    The directory '/u01/app/crs' is not owned by root. Changing owner to root
    The directory '/u01/app' is not owned by root. Changing owner to root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /u01/shared_config/voting_disk
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    According to this link => http://blog.contractoracle.com/2009/01/failure-at-final-check-of-oracle-crs.html
    to recover from a status 10, one must check firewall / routing / iptables issues.
    Now, I have turned iptables off completely; it doesn't even start at boot time, so I know it can't be that.
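    To be thorough, the firewall state can be double-checked on both nodes with something like (a sketch for a RHEL/Fedora-style system, run as root):
    # No filter rules loaded and the service disabled at boot
    service iptables status
    iptables -L -n
    chkconfig --list iptables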
    ROUTE
    [oracle@rac1 clusterware]$ route
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
    192.168.0.0 * 255.255.255.0 U 1 0 0 eth1
    default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
    [oracle@rac2 ~]$ route
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    192.168.2.0 * 255.255.255.0 U 1 0 0 eth0
    192.168.0.0 * 255.255.255.0 U 1 0 0 eth2
    default 192.168.2.1 0.0.0.0 UG 0 0 0 eth0
    [oracle@rac1 clusterware]$ traceroute rac2
    traceroute to rac2 (192.168.2.12), 30 hops max, 60 byte packets
    1 rac2.localdomain (192.168.2.12) 0.424 ms 0.427 ms 0.096 ms
    [oracle@rac1 clusterware]$ traceroute rac2-priv
    traceroute to rac2-priv (192.168.0.12), 30 hops max, 60 byte packets
    1 rac2-priv.localdomain (192.168.0.12) 1.336 ms 1.238 ms 1.188 ms
    [oracle@rac1 clusterware]$ traceroute rac2-vip
    traceroute to rac2-vip (192.168.2.112), 30 hops max, 60 byte packets
    1 rac1.localdomain (192.168.2.11) 2999.599 ms !H 2999.560 ms !H 2999.523 ms !H
    [oracle@rac1 bin]$ ./crs_stat -t
    CRS-0184: Cannot communicate with the CRS daemon.
    Both rac1 and rac2 get the same output as above, with the -vip address returning !H (where !H, !N, or !P means host, network, or protocol unreachable). I am assuming this is normal, as the CRS install did not complete successfully and the virtual IP is not bound yet.
    I'm pretty sure I have some kind of networking issue here, but I can't put my finger on it. I have tried absolutely everything suggested on the internet that I could find, even deleting /tmp/.oracle and /var/tmp/.oracle, but nothing works. SSH keys for the root and oracle users exist, and I've connected using every possible combination to avoid that first-time SSH prompt, so the oracle user on each node goes directly into rac1/rac2, rac1-priv/rac2-priv, and the actual IPs as well. Any ideas?
    Edited by: Javier on Dec 30, 2009 12:34 PM
    Edited by: Javier on Dec 30, 2009 6:58 PM
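    One more basic check worth running from both sides (a sketch; the interface names follow the routing tables above, eth1 on rac1 and eth2 on rac2):
    # From rac1: force the traffic out of the private interface
    ping -c 2 -I eth1 rac2-priv
    # From rac2:
    ping -c 2 -I eth2 rac1-priv
    # Both nodes should resolve the private names identically
    getent hosts rac1-priv rac2-priv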

    Hello
    Note 370605.1 (Clusterware Intermittently Hangs And Commands Fail With CRS-184) says the following:
    "This is caused by a cron job that cleans up the /tmp directory which also removes the Oracle socket files in /tmp/.oracle
    Do not remove /tmp/.oracle or /var/tmp/.oracle or its files while Oracle Clusterware is up."
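    If the cleanup is being done by the stock tmpwatch cron job (an assumption; check /etc/cron.daily/tmpwatch), the socket directories can be excluded, for example:
    # /etc/cron.daily/tmpwatch -- exclude the Clusterware socket directories
    /usr/sbin/tmpwatch -x /tmp/.oracle -x /var/tmp/.oracle 240 /tmp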
    Best Regards...

  • Failure at final check of Oracle CRS stack.10  on the second node

    Hi,
    I am trying to install Oracle Clusterware 10.2.0.1.0 on VM machines (a two-node configuration) on Linux (OEL5) using VMware Server (2.0). Everything went very well on the first node, up to and including running root.sh; running root.sh on the second node ended with the "Failure at final check of Oracle CRS stack. 10" error.
    RAC1 root.sh output
    [root@rac1 crs]# ./root.sh
    WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/u01/crs/oracle' is not owned by root
    WARNING: directory '/u01/crs' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/u01/crs/oracle' is not owned by root
    WARNING: directory '/u01/crs' is not owned by root
    WARNING: directory '/u01' is not owned by root
    assigning default hostname rac1 for node 1.
    assigning default hostname rac2 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/raw/raw2
    Format of 1 voting devices complete.
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
    rac1
    CSS is inactive on these nodes.
    rac2
    Local node checking complete.
    Run root.sh on remaining nodes to start CRS daemons.
    [root@rac1 crs]#
    RAC2 root.sh output
    [root@rac2 crs]# ./root.sh
    WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/u01/crs/oracle' is not owned by root
    WARNING: directory '/u01/crs' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/crs/oracle/product' is not owned by root
    WARNING: directory '/u01/crs/oracle' is not owned by root
    WARNING: directory '/u01/crs' is not owned by root
    WARNING: directory '/u01' is not owned by root
    assigning default hostname rac1 for node 1.
    assigning default hostname rac2 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/raw/raw2
    Format of 1 voting devices complete.
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    [root@rac2 crs]#
    Output of alertrac2.log:
    [root@rac2 rac2]# more alertrac2.log
    2009-08-14 23:02:44.699
    [client(5935)]CRS-1006:The OCR location /dev/raw/raw1 is inaccessible. Details in /u01/crs/oracle/product/10.2.0/crs/log/rac2/client/ocrconfig_5935.log.
    2009-08-14 23:02:44.704
    [client(5935)]CRS-1006:The OCR location /dev/raw/raw1 is inaccessible. Details in /u01/crs/oracle/product/10.2.0/crs/log/rac2/client/ocrconfig_5935.log.
    2009-08-14 23:02:44.707
    [client(5935)]CRS-1006:The OCR location /dev/raw/raw1 is inaccessible. Details in /u01/crs/oracle/product/10.2.0/crs/log/rac2/client/ocrconfig_5935.log.
    2009-08-14 23:02:44.864
    [client(5935)]CRS-1001:The OCR was formatted using version 2.
    2009-08-14 23:02:50.339
    [client(6004)]CRS-1801:Cluster crs configured with nodes rac1 rac2 .
    2009-08-14 23:05:07.603
    [cssd(6600)]CRS-1605:CSSD voting file is online: /dev/raw/raw2. Details in /u01/crs/oracle/product/10.2.0/crs/log/rac2/cssd/ocssd.log.
    [root@rac2 rac2]#
    Since raw devices are no longer supported as of OEL5, I applied the workaround in the 63-oracle-raw.rules file under the /etc/udev/rules.d directory (rules below; a quick verification sketch follows them):
    ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
    ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
    ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
    ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
    ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
    KERNEL=="raw[1-2]*", OWNER="root", GROUP="oinstall", MODE="640"
    KERNEL=="raw[3-5]*", OWNER="oracle", GROUP="oinstall", MODE="640"
    One thing I have noticed after running root.sh on both nodes is that the permissions on the raw devices changed.
    Before root.sh:
    [root@rac2 crs]# ls -ls /dev/raw*
    0 crw------- 1 root root 162, 0 Aug 14 22:42 /dev/rawctl
    /dev/raw:
    total 0
    0 crw-r----- 1 root oinstall 162, 1 Aug 14 22:42 raw1
    0 crw-r----- 1 root oinstall 162, 2 Aug 14 22:42 raw2
    0 crw-r----- 1 oracle oinstall 162, 3 Aug 14 22:42 raw3
    0 crw-r----- 1 oracle oinstall 162, 4 Aug 14 22:42 raw4
    0 crw-r----- 1 oracle oinstall 162, 5 Aug 14 22:42 raw5
    After root.sh:
    [root@rac2 crs]# ls -ls /dev/raw*
    0 crw------- 1 root root 162, 0 Aug 14 22:31 /dev/rawctl
    /dev/raw:
    total 0
    0 crw-r----- 1 root oinstall 162, 1 Aug 14 22:56 raw1
    0 crw-r--r-- 1 oracle oinstall 162, 2 Aug 14 23:01 raw2
    0 crw-r----- 1 oracle oinstall 162, 3 Aug 14 22:31 raw3
    0 crw-r----- 1 oracle oinstall 162, 4 Aug 14 22:31 raw4
    0 crw-r----- 1 oracle oinstall 162, 5 Aug 14 22:31 raw5
    [root@rac1 crs]#
    My shared disk listing
    [root@www shared]# ls -ltr
    total 8780
    -rw------- 1 root root 640 Aug 14 22:43 votingdisk.vmdk
    -rw------- 1 root root 598 Aug 14 22:43 ocr.vmdk
    -rw------- 1 root root 604 Aug 14 22:43 asm3.vmdk
    -rw------- 1 root root 604 Aug 14 22:43 asm2.vmdk
    -rw------- 1 root root 604 Aug 14 22:43 asm1.vmdk
    -rw------- 1 root root 65536 Aug 14 22:44 votingdisk-s006.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 votingdisk-s005.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 votingdisk-s004.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 votingdisk-s003.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 votingdisk-s002.vmdk
    -rw------- 1 root root 393216 Aug 14 22:44 votingdisk-s001.vmdk
    -rw------- 1 root root 65536 Aug 14 22:44 ocr-s006.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 ocr-s005.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 ocr-s004.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 ocr-s003.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 ocr-s002.vmdk
    -rw------- 1 root root 393216 Aug 14 22:44 ocr-s001.vmdk
    -rw------- 1 root root 65536 Aug 14 22:44 asm3-s006.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm3-s005.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm3-s004.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm3-s003.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm3-s002.vmdk
    -rw------- 1 root root 393216 Aug 14 22:44 asm3-s001.vmdk
    -rw------- 1 root root 65536 Aug 14 22:44 asm2-s006.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm2-s005.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm2-s004.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm2-s003.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm2-s002.vmdk
    -rw------- 1 root root 393216 Aug 14 22:44 asm2-s001.vmdk
    -rw------- 1 root root 65536 Aug 14 22:44 asm1-s006.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm1-s005.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm1-s004.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm1-s003.vmdk
    -rw------- 1 root root 327680 Aug 14 22:44 asm1-s002.vmdk
    -rw------- 1 root root 393216 Aug 14 22:44 asm1-s001.vmdk
    [root@www shared]#
    I don't know how to fix this problem. I have gone through many docs and MetaLink notes.
    I am new to the RAC world. It took three days to get to this stage. Please help me.
    Thanks
    Leo
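    Given the CRS-1006 messages about /dev/raw/raw1 being inaccessible on rac2, a simple readability test on that node may help (a sketch; run as the oracle user):
    # Can the oracle user actually read the OCR and voting raw devices?
    ls -lL /dev/raw/raw1 /dev/raw/raw2
    dd if=/dev/raw/raw1 of=/dev/null bs=1M count=1
    dd if=/dev/raw/raw2 of=/dev/null bs=1M count=1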

    Hi Surachart,
    Here is my messages output.
    /var/log/messages:
    Aug 20 14:05:01 rac2 avahi-daemon[3627]: Registering new address record for fe80::20c:29ff:fe6b:f9a8 on eth1.
    Aug 20 14:05:01 rac2 avahi-daemon[3627]: Registering new address record for 192.168.1.196 on eth1.
    Aug 20 14:05:01 rac2 avahi-daemon[3627]: Registering new address record for fe80::20c:29ff:fe6b:f99e on eth0.
    Aug 20 14:05:01 rac2 avahi-daemon[3627]: Registering new address record for 192.168.0.196 on eth0.
    Aug 20 14:05:01 rac2 avahi-daemon[3627]: Registering HINFO record with values 'I686'/'LINUX'.
    Aug 20 14:05:02 rac2 avahi-daemon[3627]: Server startup complete. Host name is rac2.local. Local service cookie is 927471131.
    Aug 20 14:05:03 rac2 avahi-daemon[3627]: Service "SFTP File Transfer on rac2" (/services/sftp-ssh.service) successfully established.
    Aug 20 14:05:08 rac2 smartd[3739]: smartd version 5.38 [i686-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Aug 20 14:05:08 rac2 smartd[3739]: Home page is http://smartmontools.sourceforge.net/
    Aug 20 14:05:08 rac2 smartd[3739]: Opened configuration file /etc/smartd.conf
    Aug 20 14:05:08 rac2 smartd[3739]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/hdc, opened
    Aug 20 14:05:08 rac2 kernel: hdc: drive_cmd: status=0x51 { DriveReady SeekComplete Error }
    Aug 20 14:05:08 rac2 kernel: hdc: drive_cmd: error=0x04 { AbortedCommand }
    Aug 20 14:05:08 rac2 kernel: ide: failed opcode was: 0xec
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/hdc, not ATA, no IDENTIFY DEVICE Structure
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sda, opened
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sda, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sda' to turn on SMART features
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdb, opened
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdb, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdb' to turn on SMART features
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdc, opened
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdc, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdc' to turn on SMART features
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdd, opened
    Aug 20 14:05:08 rac2 smartd[3739]: Device: /dev/sdd, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdd' to turn on SMART features
    Aug 20 14:05:09 rac2 smartd[3739]: Device: /dev/sde, opened
    Aug 20 14:05:09 rac2 smartd[3739]: Device: /dev/sde, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sde' to turn on SMART features
    Aug 20 14:05:10 rac2 smartd[3739]: Device: /dev/sdf, opened
    Aug 20 14:05:10 rac2 smartd[3739]: Device: /dev/sdf, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdf' to turn on SMART features
    Aug 20 14:05:10 rac2 smartd[3739]: Monitoring 0 ATA and 0 SCSI devices
    Aug 20 14:05:10 rac2 smartd[3741]: smartd has fork()ed into background mode. New PID=3741.
    Aug 20 14:05:13 rac2 pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found
    Aug 20 14:05:13 rac2 last message repeated 3 times
    Aug 20 14:05:27 rac2 gconfd (root-3967): starting (version 2.14.0), pid 3967 user 'root'
    Aug 20 14:05:27 rac2 gconfd (root-3967): Resolved address "xml:readonly:/etc/gconf/gconf.xml.mandatory" to a read-only configuration source at position 0
    Aug 20 14:05:27 rac2 gconfd (root-3967): Resolved address "xml:readwrite:/root/.gconf" to a writable configuration source at position 1
    Aug 20 14:05:27 rac2 gconfd (root-3967): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2
    Aug 20 14:05:29 rac2 gconfd (root-3967): Resolved address "xml:readwrite:/root/.gconf" to a writable configuration source at position 0
    Aug 20 14:05:29 rac2 hald: mounted /dev/hdc on behalf of uid 0
    Aug 20 14:05:29 rac2 hcid[3311]: Default passkey agent (:1.8, /org/bluez/applet) registered
    Aug 20 14:05:31 rac2 nm-system-settings: Loaded plugin ifcfg-rh: (c) 2007 - 2008 Red Hat, Inc. To report bugs please use the NetworkManager mailing list.
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: parsing /etc/sysconfig/network-scripts/ifcfg-eth1 ...
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: read connection 'System eth1'
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: parsing /etc/sysconfig/network-scripts/ifcfg-lo ...
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: error: Ignoring loopback device config.
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: parsing /etc/sysconfig/network-scripts/ifcfg-eth0 ...
    Aug 20 14:05:31 rac2 nm-system-settings: ifcfg-rh: read connection 'System eth0'
    Aug 20 14:05:31 rac2 pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found
    Aug 20 14:05:32 rac2 last message repeated 4 times
    Aug 20 14:12:51 rac2 kernel: FS-Cache: Loaded
    Aug 20 14:22:06 rac2 xinetd[3488]: START: shell pid=5193 from=192.168.0.195
    Aug 20 14:22:06 rac2 xinetd[3488]: EXIT: shell status=0 pid=5193 duration=0(sec)
    Aug 20 14:22:07 rac2 xinetd[3488]: START: shell pid=5217 from=192.168.0.195
    Aug 20 14:22:07 rac2 xinetd[3488]: EXIT: shell status=0 pid=5217 duration=0(sec)
    Aug 20 14:22:07 rac2 xinetd[3488]: START: shell pid=5241 from=192.168.0.195
    Aug 20 14:22:07 rac2 xinetd[3488]: EXIT: shell status=0 pid=5241 duration=0(sec)
    Aug 20 14:22:16 rac2 xinetd[3488]: EXIT: shell status=0 pid=6236 duration=0(sec)
    Aug 20 14:22:16 rac2 xinetd[3488]: START: shell pid=6265 from=192.168.0.195
    Aug 20 14:22:16 rac2 xinetd[3488]: EXIT: shell status=0 pid=6265 duration=0(sec)
    Aug 20 14:22:16 rac2 xinetd[3488]: START: shell pid=6291 from=192.168.0.195
    Aug 20 14:22:17 rac2 xinetd[3488]: EXIT: shell status=0 pid=6291 duration=1(sec)
    Aug 20 14:22:17 rac2 xinetd[3488]: START: shell pid=6317 from=192.168.0.195
    Aug 20 14:22:17 rac2 xinetd[3488]: EXIT: shell status=0 pid=6317 duration=0(sec)
    [root@rac2 log]#

  • Failure at final check of Oracle CRS stack. 10 on the first node.

    Hi everyone
    I am trying to install Oracle RAC 10gR2 on Oracle Enterprise Linux AS release 4 (October Update 7), but I'm having this problem:
    [root@fporn01 crs]# ./root.sh
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    assigning default hostname fporn01 for node 1.
    assigning default hostname fporn02 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: fporn01 fporn01-priv fporn01
    node 2: fporn02 fporn02-priv fporn02
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    Forget about the node names!
    On the second node everything went fine, so I'm sure this is not a connectivity issue.
    The iptables service is stopped and disabled.
    Check the results after running the root.sh script:
    [root@fporn02 ~]# /u01/app/crs/root.sh
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    assigning default hostname fporn01 for node 1.
    assigning default hostname fporn02 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: fporn01 fporn01-priv fporn01
    node 2: fporn02 fporn02-priv fporn02
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
    fporn02
    CSS is inactive on these nodes.
    fporn01
    Local node checking complete.
    Run root.sh on remaining nodes to start CRS daemons.
    This is the CRS alert log on the first node:
    [root@fporn01 bin]# cat /u01/app/crs/log/fporn01/alertfporn01.log
    2009-06-24 17:27:37.695
    [client(9045)]CRS-1006:The OCR location /u02/oradata/orcl/OCRFile_mirror is inaccessible. Details in /u01/app/crs/log/fporn01/client/ocrconfig_9045.log.
    2009-06-24 17:27:37.741
    [client(9045)]CRS-1001:The OCR was formatted using version 2.
    2009-06-24 17:28:24.544
    [client(9092)]CRS-1801:Cluster pdb-rac configured with nodes fporn01 fporn02 .
    This is the CRS alert log on the second node:
    [root@fporn02 ~]# cat /u01/app/crs/log/fporn02/alertfporn02.log
    2009-06-24 18:09:09.307
    [cssd(16991)]CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
    2009-06-24 18:09:09.307
    [cssd(16991)]CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile_mirror1. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
    2009-06-24 18:09:09.310
    [cssd(16991)]CRS-1605:CSSD voting file is online: /u02/oradata/orcl/CSSFile_mirror2. Details in /u01/app/crs/log/fporn02/cssd/ocssd.log.
    2009-06-24 18:09:12.441
    [cssd(16991)]CRS-1601:CSSD Reconfiguration complete. Active nodes are fporn02 .
    I have rechecked the remote access / user equivalence.
    After running the ocrcheck command I have this information:
    [root@fporn01 bin]# ./ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 262144
    Used space (kbytes) : 312
    Available space (kbytes) : 261832
    ID : 255880615
    Device/File Name : /u02/oradata/orcl/OCRFile
    Device/File integrity check succeeded
    Device/File Name : /u02/oradata/orcl/OCRFile_mirror
    Device/File integrity check succeeded
    Cluster registry integrity check succeeded
    On the second node I get the same output:
    [root@fporn02 bin]# ./ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 262144
    Used space (kbytes) : 312
    Available space (kbytes) : 261832
    ID : 255880615
    Device/File Name : /u02/oradata/orcl/OCRFile
    Device/File integrity check succeeded
    Device/File Name : /u02/oradata/orcl/OCRFile_mirror
    Device/File integrity check succeeded
    Cluster registry integrity check succeeded
    I have reviewed the following MetaLink notes, but none of them seems to solve my problem:
    344994.1
    240001.1
    725878.1
    329450.1
    734221.1
    I have researched through many forums, but the failure is always on the second node, whereas mine is on the first node.
    I hope someone can help me.
    This is the output of cluvfy:
    Performing pre-checks for cluster services setup
    Checking node reachability...
    Check: Node reachability from node "fporn01"
    Destination Node Reachable?
    fporn01 yes
    fporn02 yes
    Result: Node reachability check passed from node "fporn01".
    Checking user equivalence...
    Check: User equivalence for user "oracle"
    Node Name Comment
    fporn02 passed
    fporn01 passed
    Result: User equivalence check passed for user "oracle".
    Checking administrative privileges...
    Check: Existence of user "oracle"
    Node Name User Exists Comment
    fporn02 yes passed
    fporn01 yes passed
    Result: User existence check passed for "oracle".
    Check: Existence of group "oinstall"
    Node Name Status Group ID
    fporn02 exists 501
    fporn01 exists 501
    Result: Group existence check passed for "oinstall".
    Check: Membership of user "oracle" in group "oinstall" as Primary
    Node Name User Exists Group Exists User in Group Primary Comment
    fporn02 yes yes yes yes passed
    fporn01 yes yes yes yes passed
    Result: Membership check for user "oracle" in group "oinstall" as Primary passed.
    Administrative privileges check passed.
    Checking node connectivity...
    Interface information for node "fporn02"
    Interface Name IP Address Subnet
    eth0 10.218.108.245 10.218.108.0
    eth1 192.168.1.2 192.168.1.0
    Interface information for node "fporn01"
    Interface Name IP Address Subnet
    eth0 10.218.108.244 10.218.108.0
    eth1 192.168.1.1 192.168.1.0
    eth2 172.16.9.210 172.16.9.0
    Check: Node connectivity of subnet "10.218.108.0"
    Source Destination Connected?
    fporn02:eth0 fporn01:eth0 yes
    Result: Node connectivity check passed for subnet "10.218.108.0" with node(s) fporn02,fporn01.
    Check: Node connectivity of subnet "192.168.1.0"
    Source Destination Connected?
    fporn02:eth1 fporn01:eth1 yes
    Result: Node connectivity check passed for subnet "192.168.1.0" with node(s) fporn02,fporn01.
    Check: Node connectivity of subnet "172.16.9.0"
    Result: Node connectivity check passed for subnet "172.16.9.0" with node(s) fporn01.
    Suitable interfaces for the private interconnect on subnet "10.218.108.0":
    fporn02 eth0:10.218.108.245
    fporn01 eth0:10.218.108.244
    Suitable interfaces for the private interconnect on subnet "192.168.1.0":
    fporn02 eth1:192.168.1.2
    fporn01 eth1:192.168.1.1
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking system requirements for 'crs'...
    Check: Total memory
    Node Name Available Required Comment
    fporn02 7.93GB (8310276KB) 512MB (524288KB) passed
    fporn01 7.93GB (8310276KB) 512MB (524288KB) passed
    Result: Total memory check passed.
    Check: Free disk space in "/tmp" dir
    Node Name Available Required Comment
    fporn02 9.57GB (10037300KB) 400MB (409600KB) passed
    fporn01 9.55GB (10012168KB) 400MB (409600KB) passed
    Result: Free disk space check passed.
    Check: Swap space
    Node Name Available Required Comment
    fporn02 8.81GB (9240568KB) 1GB (1048576KB) passed
    fporn01 8.81GB (9240568KB) 1GB (1048576KB) passed
    Result: Swap space check passed.
    Check: System architecture
    Node Name Available Required Comment
    fporn02 i686 i686 passed
    fporn01 i686 i686 passed
    Result: System architecture check passed.
    Check: Kernel version
    Node Name Available Required Comment
    fporn02 2.6.9-78.0.0.0.1.ELhugemem 2.4.21-15EL passed
    fporn01 2.6.9-78.0.0.0.1.ELhugemem 2.4.21-15EL passed
    Result: Kernel version check passed.
    Check: Package existence for "make-3.79"
    Node Name Status Comment
    fporn02 make-3.80-7.EL4 passed
    fporn01 make-3.80-7.EL4 passed
    Result: Package existence check passed for "make-3.79".
    Check: Package existence for "binutils-2.14"
    Node Name Status Comment
    fporn02 binutils-2.15.92.0.2-25 passed
    fporn01 binutils-2.15.92.0.2-25 passed
    Result: Package existence check passed for "binutils-2.14".
    Check: Package existence for "gcc-3.2"
    Node Name Status Comment
    fporn02 gcc-3.4.6-10.0.1 passed
    fporn01 gcc-3.4.6-10.0.1 passed
    Result: Package existence check passed for "gcc-3.2".
    Check: Package existence for "glibc-2.3.2-95.27"
    Node Name Status Comment
    fporn02 glibc-2.3.4-2.41 passed
    fporn01 glibc-2.3.4-2.41 passed
    Result: Package existence check passed for "glibc-2.3.2-95.27".
    Check: Package existence for "compat-db-4.0.14-5"
    Node Name Status Comment
    fporn02 compat-db-4.1.25-9 passed
    fporn01 compat-db-4.1.25-9 passed
    Result: Package existence check passed for "compat-db-4.0.14-5".
    Check: Package existence for "compat-gcc-7.3-2.96.128"
    Node Name Status Comment
    fporn02 missing failed
    fporn01 missing failed
    Result: Package existence check failed for "compat-gcc-7.3-2.96.128".
    ++Check: Package existence for "compat-gcc-c++-7.3-2.96.128"++
    Node Name Status Comment
    fporn02 missing failed
    fporn01 missing failed
    ++Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".++
    ++Check: Package existence for "compat-libstdc++-7.3-2.96.128"++
    Node Name Status Comment
    fporn02 missing failed
    fporn01 missing failed
    ++Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".++
    ++Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"++
    Node Name Status Comment
    fporn02 missing failed
    fporn01 missing failed
    ++Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".++
    Check: Package existence for "openmotif-2.2.3"
    Node Name Status Comment
    fporn02 openmotif-2.2.3-10.2.el4 passed
    fporn01 openmotif-2.2.3-10.2.el4 passed
    Result: Package existence check passed for "openmotif-2.2.3".
    Check: Package existence for "setarch-1.3-1"
    Node Name Status Comment
    fporn02 setarch-1.6-1 passed
    fporn01 setarch-1.6-1 passed
    Result: Package existence check passed for "setarch-1.3-1".
    Check: Group existence for "dba"
    Node Name Status Comment
    fporn02 exists passed
    fporn01 exists passed
    Result: Group existence check passed for "dba".
    Check: Group existence for "oinstall"
    Node Name Status Comment
    fporn02 exists passed
    fporn01 exists passed
    Result: Group existence check passed for "oinstall".
    Check: User existence for "nobody"
    Node Name Status Comment
    fporn02 exists passed
    fporn01 exists passed
    Result: User existence check passed for "nobody".
    System requirement failed for 'crs'
    Pre-check for cluster services setup was unsuccessful on all the nodes.
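    Given the CRS-1006 message about /u02/oradata/orcl/OCRFile_mirror on fporn01, it is worth confirming that the shared location is actually mounted and readable on node 1 before retrying (a sketch using the paths from the logs above):
    # Is the clustered file system mounted, and are the OCR/voting files visible?
    df -h /u02/oradata
    ls -l /u02/oradata/orcl/OCRFile* /u02/oradata/orcl/CSSFile*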

    Forget about my last post; it was my mistake. I rebooted the server and the clustered file system service did not start up at boot time.
    Sorry.
    This is what I really got in /var/log/messages after manually starting the CRS daemons:
    Jun 26 16:43:07 fporn01 su(pam_unix)[10020]: session opened for user oracle by (uid=0)
    Jun 26 16:43:07 fporn01 su(pam_unix)[10020]: session closed for user oracle
    Jun 26 16:43:07 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
    Jun 26 16:44:07 fporn01 su(pam_unix)[9977]: session opened for user oracle by (uid=0)
    Jun 26 16:45:31 fporn01 su(pam_unix)[10293]: session opened for user oracle by (uid=0)
    Jun 26 16:45:32 fporn01 su(pam_unix)[10293]: session closed for user oracle
    Jun 26 16:45:32 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
    Jun 26 16:45:40 fporn01 su(pam_unix)[10351]: session opened for user oracle by (uid=0)
    Jun 26 16:45:40 fporn01 su(pam_unix)[10351]: session closed for user oracle
    Jun 26 16:45:40 fporn01 su(pam_unix)[10415]: session opened for user oracle by (uid=0)
    Jun 26 16:45:40 fporn01 su(pam_unix)[10415]: session closed for user oracle
    Jun 26 16:45:40 fporn01 logger: Cluster Ready Services completed waiting on dependencies.
    Jun 26 16:46:32 fporn01 su(pam_unix)[10591]: session opened for user oracle by (uid=0)
    Jun 26 16:46:40 fporn01 logger: Running CRSD with TZ =
    after running ps -ef | grep -E 'init|d.bin|ocls|oprocd|diskmon|evmlogger|PID'
    [root@fporn01 ~]# ps -ef | grep -E 'init|d.bin|ocls|oprocd|diskmon|evmlogger|PID'
    UID PID PPID C STIME TTY TIME CMD
    root 1 0 0 15:33 ? 00:00:00 init [5]
    root 9869 7951 0 16:40 pts/1 00:00:00 [init.crsd] <defunct>
    oracle 10053 9977 0 16:44 ? 00:00:00 /u01/app/crs/bin/evmd.bin
    root 10249 7951 0 16:45 pts/1 00:00:00 /bin/sh /etc/init.d/init.cssd fatal
    root 10341 7951 0 16:45 pts/1 00:00:00 /u01/app/crs/bin/crsd.bin reboot
    root 10551 10249 0 16:46 pts/1 00:00:00 /bin/sh /etc/init.d/init.cssd daemon
    oracle 10618 10592 0 16:46 ? 00:00:00 /u01/app/crs/bin/ocssd.bin
    oracle 10926 10053 0 16:46 ? 00:00:00 /u01/app/crs/bin/evmlogger.bin -o /u01/app/crs/evm/log/evmlogger.info -l /u01/app/crs/evm/log/evmlogger.log
    root 16658 9461 0 16:50 pts/2 00:00:00 grep -E init|d.bin|ocls|oprocd|diskmon|evmlogger|PID
    The CRS daemons finally work,
    but I get this error when I run [oracle@fporn01 cluvfy]$ ./runcluvfy.sh stage -post crsinst -n fporn01,fporn02 -verbose:
    Performing post-checks for cluster services setup
    Checking node reachability...
    Check: Node reachability from node "fporn01"
    Destination Node                      Reachable?
    fporn01                               yes
    fporn02                               yes
    Result: Node reachability check passed from node "fporn01".
    Checking user equivalence...
    Check: User equivalence for user "oracle"
    Node Name                             Comment
    fporn02                               passed
    fporn01                               passed
    Result: User equivalence check passed for user "oracle".
    ERROR:
    CRS is not installed on any of the nodes.
    Verification cannot proceed.
    Post-check for cluster services setup was unsuccessful on all the nodes.
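    With the daemons now running, the stack status and node list can also be cross-checked directly from Clusterware rather than cluvfy (a sketch; the paths assume the CRS home used above):
    # Verify the stack and the registered nodes
    /u01/app/crs/bin/crsctl check crs
    /u01/app/crs/bin/olsnodes -n
    cat /etc/oracle/ocr.loc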

  • Failure at final check of Oracle CRS stack

    CRS ID conflicts (ocrcheck, SCSI).
    I referred to MetaLink note 344994.1, but it was specific to raw devices; I need a solution for SCSI devices.
    [oracle@vx0302 bin]# ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 3306636
    Used space (kbytes) : 440
    Available space (kbytes) : 3306196
    ID : 1425438992 <<different from node1
    Device/File Name : /dev/sdb1
    Device/File integrity check failed
    Device/File not configured
    Cluster registry integrity check failed
    [oracle@vx0301 bin]# ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 2
    Total space (kbytes) : 40269168
    Used space (kbytes) : 308
    Available space (kbytes) : 40268860
    ID : 68510624 << different from node1
    Device/File Name : /dev/sdb1
    Device/File integrity check succeeded
    Device/File not configured
    Cluster registry integrity check succeeded
    According to the above output, the IDs are not the same. I followed this link for the workaround: http://surachartopun.com/2009/01/failure-at-final-check-of-oracle-crs.html. I have taken the following steps:
    1. Firewall is off.
    2. I have referred to the MetaLink note, but it provides a solution only for raw devices. I used the link http://www.oracle.com/technology/pub/articles/hunter-rac11gr2-iscsi.html to configure Openfiler for iSCSI devices. The configuration is EXACTLY the same. However, when I ran ls -l /dev/iscsi/*, the output was different on the two nodes.
    Node1:
    [oracle@vx0301 ~]# ls -l /dev/iscsi/*
    /dev/iscsi/crs1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdc
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdc1
    /dev/iscsi/data1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdb1
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdb1
    /dev/iscsi/fra1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdd
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdd1
    Node2:
    [oracle@vx0302 ~]# ls -l /dev/iscsi/*
    /dev/iscsi/crs1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdb
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdb1
    /dev/iscsi/data1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdd1
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdd1
    /dev/iscsi/fra1:
    total 0
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part -> ../../sdc
    lrwxrwxrwx 1 root root 9 Nov 3 18:13 part1 -> ../../sdc1
    Is there a way I can fix this?
    Edited by: user10594250 on Jan 22, 2010 4:41 AM
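    To see which iSCSI target each /dev/sdX actually belongs to on each node (a sketch, assuming open-iscsi's iscsiadm is in use):
    # Map iSCSI targets to the SCSI disks they were attached as
    iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'
    Note that ocrcheck above reports the OCR as /dev/sdb1 on both nodes, yet the listings show sdb belonging to data1 on vx0301 and to crs1 on vx0302, so the two nodes appear to be pointing at different LUNs; registering the OCR and voting disks against the persistent /dev/iscsi/<name>/part1 symlinks instead of the raw sdX names avoids this.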


  • Failure at final check of Oracle CRS stack

    Hi,
    I am installing Oracle Clusterware 11g. When I run root.sh on the first node it is OK, but when I run it on the second node I receive this message:
    Failure at final check of Oracle CRS stack 10
    I have stopped the firewall, and ssh/scp work fine without a password between the nodes, using both node.domain and the plain node name.
    please help me!

    Thanks for the answer.
    This is the root.sh output of the second node:
    [root@orac-asbe-c cluster]# ./root.sh
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    Oracle Cluster Registry configuration upgraded successfully
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: orac-asbe-d orac-asbe-d-priv orac-asbe-d
    node 2: orac-asbe-c orac-asbe-c-priv orac-asbe-c
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/hdd1
    Now formatting voting device: /dev/hdd2
    Now formatting voting device: /dev/hdd3
    Format of 3 voting devices complete.
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
    The ocssd.log is very long; here are a few pieces of it:
    Oracle Database 11g CRS Release 11.1.0.6.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=orac-asbe-cDBG_CSSD))
    [    CSSD]2011-06-17 11:04:17.918 >USER: Oracle Database 10g CSS Release 11.1.0.6.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [    CSSD]2011-06-17 11:04:17.918 >USER: CSS daemon log for node orac-asbe-c, number 2, in cluster orac-as_cluster
    [    CSSD]2011-06-17 11:04:17.936 [649053344] >TRACE: clssscmain: local-only set to false
    [    CSSD]2011-06-17 11:04:17.970 [649053344] >TRACE: clssnmReadNodeInfo: added node 1 (orac-asbe-d) to cluster
    [    CSSD]2011-06-17 11:04:17.994 [649053344] >TRACE: clssnmReadNodeInfo: added node 2 (orac-asbe-c) to cluster
    [    CSSD]2011-06-17 11:04:17.997 [649053344] >WARNING: clssnmReadWallet: Open Wallet returned 28759
    [    CSSD]2011-06-17 11:04:17.997 [649053344] >WARNING: clssnmInitNMInfo: Node not configured for node kill
    [    CSSD]2011-06-17 11:04:18.011 [1133824320] >TRACE: clssnm_skgxninit: Compatible vendor clusterware not in use
    [    CSSD]2011-06-17 11:04:18.011 [1133824320] >TRACE: clssnm_skgxnmon: skgxn init failed
    [    CSSD]2011-06-17 11:04:18.023 [649053344] >TRACE: clssnmNMInitialize: Network heartbeat thresholds are: impending reconfig 15000 ms, reconfig start (misscount) 30000 ms
    [    CSSD]2011-06-17 11:04:18.027 [649053344] >TRACE: clssnmNMInitialize: Voting file I/O timeouts are: short 27000 ms, long 200000 ms
    [    CSSD]2011-06-17 11:04:18.039 [649053344] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/hdd1)
    [    CSSD]2011-06-17 11:04:18.040 [1133824320] >TRACE: clssnmvDPT: spawned for disk 0 (/dev/hdd1)
    [    CSSD]2011-06-17 11:04:18.083 [1133824320] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/hdd1)
    [    CSSD]2011-06-17 11:04:18.088 [1144314176] >TRACE: clssnmvKillBlockThread: spawned for disk 0 (/dev/hdd1) initial : sleep interval (1000)ms
    [    CSSD]2011-06-17 11:04:18.185 [649053344] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (1//dev/hdd2)
    [    CSSD]2011-06-17 11:04:18.189 [1154804032] >TRACE: clssnmvDPT: spawned for disk 1 (/dev/hdd2)
    [    CSSD]2011-06-17 11:04:18.208 [1154804032] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (1//dev/hdd2)
    [    CSSD]2011-06-17 11:04:18.212 [1102223680] >TRACE: clssnmvKillBlockThread: spawned for disk 1 (/dev/hdd2) initial sleep interval (1000)ms
    [    CSSD]2011-06-17 11:04:18.219 [649053344] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (2//dev/hdd3)
    [    CSSD]2011-06-17 11:04:18.223 [1165293888] >TRACE: clssnmvDPT: spawned for disk 2 (/dev/hdd3)
    [    CSSD]2011-06-17 11:04:18.251 [1165293888] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (2//dev/hdd3)
    [    CSSD]2011-06-17 11:04:18.255 [1175783744] >TRACE: clssnmvKillBlockThread: spawned for disk 2 (/dev/hdd3) initial sleep interval (1000)ms
    [    CSSD]2011-06-17 11:04:18.367 [649053344] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2011-06-17 11:04:18.370 [649053344] >TRACE: clssscSclsFatal: read value of disable
    [    CSSD]2011-06-17 11:04:18.373 [1196763456] >TRACE: clssnmFatalThread: spawned
    [    CSSD]2011-06-17 11:04:18.375 [1207253312] >TRACE: clssnmClusterListener: Listening on (ADDRESS=(PROTOCOL=tcp)(HOST=orac-asbe-c-priv)(PORT=49895))
    [    CSSD]2011-06-17 11:04:18.375 [1207253312] >TRACE: clssnmconnect: connecting to node(1), con(0xc3af000), flags 0x0003
    [    CSSD]2011-06-17 11:04:18.381 [1217743168] >TRACE: clssgmDeathChkThread: Spawned
    [    CSSD]------- Begin Dump -------
    [    CSSD]
    [    CSSD]
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >TRACE: clssnmConnComplete: MSGSRC 1, type 6, node 1, flags 0x0003, con 0xc3af000, probe (nil), nodekillsz 0
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >TRACE: clssnmConnComplete: msg src=1 dst=2 seq=0 type=6 birth=203767168 state=3 name=()
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >ERROR: ASSERT clssnm.c 11562
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >ERROR: clssnmConnComplete: OCR id mismatch (1700660325, 1307814030)
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >ERROR: ###################################
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >ERROR: clssscExit: CSSD aborting from thread clssnmClusterListener
    [    CSSD]2011-06-17 11:04:18.383 [1207253312] >ERROR: ###################################
    [    CSSD]
    ----- Call Stack Trace -----
    [    CSSD]calling call entry argument values in hex
    [    CSSD]location type point (? means dubious value)
    [    CSSD]-------------------- -------- -------------------- ----------------------------
    [    CSSD]sskgds_getexecname: using /proc/self/status and $PATH to get ocssd.bin
    [    CSSD]Cannot open ocssd.bin for reading: errno=2
    [    CSSD]Cannot open ocssd.bin for reading: errno=2
    [    CSSD]Cannot open ocssd.bin for reading: errno=2
    [    CSSD]000000000040C2D6 call kgdsdst() 000000000 ? 000000001 ?
    [    CSSD] 047F4FB68 ? 047F4E650 ?
    [    CSSD] 000000000 ? 000000003 ?
    thanks again
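    The "OCR id mismatch" assertion in ocssd.log usually means the second node is not reading the same OCR that the first node formatted, either because the device is not truly shared or because it was formatted twice. A minimal sketch of the usual next checks, assuming a standard CRS home layout (verify the paths in your environment before running anything as root):
    # compare the OCR ID reported on both nodes
    $ORA_CRS_HOME/bin/ocrcheck
    # if the IDs differ even though the device is shared, deconfigure CRS on the failing node
    # (rootdelete.sh, if present under $ORA_CRS_HOME/install) and re-run root.sh there
    $ORA_CRS_HOME/install/rootdelete.sh
    $ORA_CRS_HOME/root.sh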

  • Failure at final check of Oracle CRS stack. AIX Oracle 10g RAC with GPFS

    Hi, I am installing Oracle 10gR2 RAC using GPFS; the OS is AIX 6.1. When installing CRS, executing root.sh on the second node fails with the following error:
    Failure at final  check of Oracle CRS stack.
    10
    Looking at the log file, I see the following:
    The OCR location /data_gpfs/CRS/ocr_disk2 is inaccessible. Details in /oracleapp/product/10.2.0/crs/log/p615a/client/ocrconfig_6881286.log.
    The ocrconfig_6881286.log contains:
    2014-01-24 01:32:20.361: [ OCRCONF][1]ocrconfig starts...
    2014-01-24 01:32:20.389: [ OCRCONF][1]Upgrading OCR data
    2014-01-24 01:32:20.391: [  OCROSD][1]utread:3: problem reading buffer 100f21d0 buflen 512 retval 0 phy_offset 102400 retry 0
    2014-01-24 01:32:20.391: [  OCROSD][1]utread:4: problem reading the buffer errno 2 errstring No such file or directory
    2014-01-24 01:32:20.391: [  OCROSD][1]utread:3: problem reading buffer 100f21d0 buflen 512 retval 0 phy_offset 102400 retry 0
    2014-01-24 01:32:20.391: [  OCROSD][1]utread:4: problem reading the buffer errno 2 errstring No such file or directory
    2014-01-24 01:32:20.391: [  OCROSD][1]utread:3: problem reading buffer ffffb1d0 buflen 4096 retval 0 phy_offset 102400 retry 0
    2014-01-24 01:32:20.392: [  OCROSD][1]utread:4: problem reading the buffer errno 2 errstring No such file or directory
    2014-01-24 01:32:20.392: [  OCRRAW][1]propriogid:1: INVALID FORMAT
    2014-01-24 01:32:20.392: [  OCROSD][1]utread:3: problem reading buffer ffffb1d0 buflen 4096 retval 0 phy_offset 102400 retry 0
    2014-01-24 01:32:20.392: [  OCROSD][1]utread:4: problem reading the buffer errno 2 errstring No such file or directory
    2014-01-24 01:32:20.392: [  OCRRAW][1]propriogid:1: INVALID FORMAT
    2014-01-24 01:32:20.392: [  OCRRAW][1]proprioini: both disks are not OCR formatted
    2014-01-24 01:32:20.392: [  OCRRAW][1]proprinit: Could not open raw device
    2014-01-24 01:32:20.392: [ default][1]a_init:7!: Backend init unsuccessful : [26]
    2014-01-24 01:32:20.393: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
    2014-01-24 01:32:20.393: [  OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
    propriogid:1: INVALID FORMAT
    2014-01-24 01:32:20.516: [  OCRRAW][1]propriowv: Vote information on disk 0 [/data_gpfs/CRS/ocr_disk1] is adjusted from [0/0] to [1/2]
    2014-01-24 01:32:20.527: [  OCRRAW][1]propriowv: Vote information on disk 1 [/data_gpfs/CRS/ocr_disk2] is adjusted from [0/0] to [1/2]
    2014-01-24 01:32:20.960: [  OCRRAW][1]propriniconfig:No 92 configuration
    2014-01-24 01:32:20.960: [  OCRAPI][1]a_init:6a: Backend init successful
    2014-01-24 01:32:21.191: [ OCRCONF][1]Initialized DATABASE keys in OCR
    2014-01-24 01:32:21.349: [ OCRCONF][1]Successfully set skgfr block 0
    2014-01-24 01:32:21.351: [ OCRCONF][1]Exiting [status=success]...
    I don't know what caused this error; I'm really stuck. Who can help me?
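    On a cluster file system such as GPFS, the OCR files must already exist and be readable from every node before root.sh runs; the "No such file or directory" reads above suggest they were not visible on this node. A sketch for checking and pre-creating them, using the paths from the log (the root:oinstall ownership and 640 mode are assumptions; use the values from your platform's install guide):
    # on both nodes, confirm the GPFS mount and the files are visible
    ls -l /data_gpfs/CRS/ocr_disk1 /data_gpfs/CRS/ocr_disk2
    # if they are missing, pre-create them once (GPFS makes them visible cluster-wide)
    touch /data_gpfs/CRS/ocr_disk1 /data_gpfs/CRS/ocr_disk2
    chown root:oinstall /data_gpfs/CRS/ocr_disk1 /data_gpfs/CRS/ocr_disk2
    chmod 640 /data_gpfs/CRS/ocr_disk1 /data_gpfs/CRS/ocr_disk2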

  • Failure at Final Check of Oracle CRS Stack. ... 10

    Hi,
    When I run the root.sh script on the second node, I'm facing this issue. I checked the alert log: /dev/raw/raw2 is showing as inaccessible. How do I resolve this issue?
    [client(1449)]CRS-1006:The OCR location /dev/raw/raw2 is inaccessible. Details in /u01/app/oracle/product/10.2.0/crs_home/log/rac5/client/ocrconfig_1449.log.
    2012-08-16 12:12:32.628
    [client(1449)]CRS-1006:The OCR location /dev/raw/raw2 is inaccessible. Details in /u01/app/oracle/product/10.2.0/crs_home/log/rac5/client/ocrconfig_1449.log.
    2012-08-16 12:12:32.637
    [client(1449)]CRS-1006:The OCR location /dev/raw/raw2 is inaccessible. Details in /u01/app/oracle/product/10.2.0/crs_home/log/rac5/client/ocrconfig_1449.log.
    2012-08-16 12:12:32.742
    [client(1449)]CRS-1001:The OCR was formatted using version 2.
    2012-08-16 12:12:40.144
    [client(1500)]CRS-1801:Cluster crs configured with nodes rac5 rac6 .
    2012-08-16 12:15:00.465
    [cssd(2185)]CRS-1605:CSSD voting file is online: /dev/raw/raw3. Details in /u01/app/oracle/product/10.2.0/crs_home/log/rac5/cssd/ocssd.log.
    2012-08-16 12:15:03.672
    [cssd(2185)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac5

    Hi,
    after checking (and correcting) the situation with your raw device permissions (you should probably check the udev rules), you can do the following:
    as root: /u01..../crs_home/crs/install/rootcrs.pl -deconfig -force
    and then run root.sh again.
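    A sketch of the kind of udev rules meant here, for a RHEL5-style system where /dev/raw/raw2 is bound to a SCSI partition (the sdb1 device name, the ownership and the mode are assumptions; take the real values from your storage layout and install guide):
    # /etc/udev/rules.d/60-raw.rules
    ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw2 %N"
    # set the permissions on the raw device itself
    ACTION=="add", KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="0640"
    After editing the rules, re-trigger udev (for example with start_udev on RHEL5) and confirm with ls -l /dev/raw/raw2 on both nodes before re-running root.sh.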

  • CRS Installation failure

    CRS installation failed with the following error. The log file does not have any details for the cause of the failure ...
    Error While copying directory /u01/app/oracle/product/crs_1 with exculde file list 'null' to nodes
    'prd2':[PRKC-1002; all the submitted commands did not execute successfully]
    prd2:
    /bin/tar:/lib/libocrutl10.a: file shrank by 254 bytes; padding with zeros
    /bin/tar:Error exit delayed from previous errors
    refer to '/u01/app/oracle/oraInventory/logs/installActionsXXXXXXXX.log' for details. you may fix the errors on the required remote nodes.Refer to install guide for error recovery.
    What could be the reason for this failure?
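    The "file shrank by 254 bytes" tar error usually means the copy to the remote node was truncated, most often because the destination file system ran out of space or the transfer was interrupted. A quick check on prd2 before retrying (the full file path here is inferred from the error above):
    df -k /u01
    ls -l /u01/app/oracle/product/crs_1/lib/libocrutl10.a   # compare the size with the same file on the local node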

    Hi,
    I have configured NFS and started installing CRS. At the end of the installation, I ran the script /d01/crs/oracle/product/10.2.0/crs/root.sh from Node 1. It was successful.
    But while running /d01/crs/oracle/product/10.2.0/crs/root.sh on Node 2 as the root user, it hangs after displaying the message "Startup will be queued to init within 90 seconds."
    I am stuck at this point. Please help me.
    /d01/crs/oracle/product/10.2.0/crs/root.sh
    WARNING: directory '/d01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/d01/crs/oracle/product' is not owned by root
    WARNING: directory '/d01/crs/oracle' is not owned by root
    WARNING: directory '/d01/crs' is not owned by root
    WARNING: directory '/d01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    /etc/oracle does not exist. Creating it now.
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/d01/crs/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/d01/crs/oracle/product' is not owned by root
    WARNING: directory '/d01/crs/oracle' is not owned by root
    WARNING: directory '/d01/crs' is not owned by root
    WARNING: directory '/d01' is not owned by root
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    assigning default hostname jces401 for node 1.
    assigning default hostname perf1 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: jces401 jces401-priv jces401
    node 2: perf1 perf1-priv perf1
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Thanks,
    Muthu
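    When root.sh hangs after "Startup will be queued to init within 90 seconds", a few checks on the hanging node usually narrow it down (a sketch; the log locations are the common defaults and may differ on your system):
    # were the CRS entries added to inittab and picked up by init?
    grep -i crs /etc/inittab
    ps -ef | grep -E 'init\.crsd|init\.cssd|init\.evmd|ocssd|crsd|evmd' | grep -v grep
    # look for errors from the init scripts and the CSS daemon
    tail -50 /var/log/messages
    ls -lt $ORA_CRS_HOME/log/`hostname`/cssd/ | head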

  • Acrobat installer encounter an unexpected failure,please try again, if it continues to fail, contact adobe support

    The Acrobat installer encountered an unexpected failure ("please try again; if it continues to fail, contact Adobe support").
    I have tried many times, but the same error occurs.
    What should I do now?

    Hi monikap7171658,
    Could you please let me know which version of the OS and Acrobat you are using.
    Please check for the system requirements for Acrobat XI and previous versions:
    System requirements | Acrobat family of products—older versions (XI, X, 9)
    If you have Acrobat DC, then check for the system requirements here:
    System requirements | Adobe Acrobat Pro DC, Adobe Acrobat Standard DC
    You can try downloading the software again from the below mentioned link:
    Other downloads
    Let me know.
    Regards,
    Anubha

  • CRS installation giving Warning message

    Dear DBA's,
    CRS installation failed with the following error. The log file does not have any details for the cause of the failure ...
    Error While copying directory /home/oracle/crs/102 with exculde file list 'null' to nodes
    'rac2':PRKC-1002; all the submitted commands did not execute successfully
    rac2:
    /bin/tar:./bin/vipca: time stamp <date> is 230 s in the future
    /bin/tar:./bin/oifcfg: time stamp <date> is 194 s in the future
    ....(more errors on this node)
    Refer to '/home/oracle/oraInventory/logs/installActions2009-08-27_09-29-33AM.log' for details. You may fix the errors on the required remote nodes. Refer to the install guide for error recovery. Click 'Yes' if you want to proceed. Click 'No' to exit the install. Do you want to continue?
    What could be the reason for this failure?
    Best Regards,
    SG

    Dear Sawwan,
    I followed the above note; the results are as follows.
    [oracle@rac1 bin]$ ./cluvfy stage -post crsinst -n all
    Performing post-checks for cluster services setup
    Checking node reachability...
    Node reachability check passed from node "rac1".
    Checking user equivalence...
    User equivalence check passed for user "oracle".
    Checking Cluster manager integrity...
    Checking CSS daemon...
    Daemon status check failed for "CSS daemon".
    Check failed on nodes:
    rac2,rac1
    Cluster manager integrity check failed.
    Checking cluster integrity...
    Cluster integrity check passed
    Checking OCR integrity...
    Checking the absence of a non-clustered configuration...
    All nodes free of non-clustered, local-only configurations.
    Uniqueness check for OCR device passed.
    Checking the version of OCR...
    OCR of correct Version "2" exists.
    Checking data integrity of OCR...
    Data integrity check for OCR passed.
    OCR integrity check passed.
    Checking CRS integrity...
    Checking daemon liveness...
    Liveness check failed for "CRS daemon".
    Check failed on nodes:
    rac2,rac1
    Checking daemon liveness...
    Liveness check failed for "CSS daemon".
    Check failed on nodes:
    rac2,rac1
    Checking daemon liveness...
    Liveness check failed for "EVM daemon".
    Check failed on nodes:
    rac2,rac1
    CRS integrity check failed.
    Post-check for cluster services setup was unsuccessful on all the nodes.
    ======================================================================
    [oracle@rac1 bin]$ ./crs_stat -t
    CRS-0184: Cannot communicate with the CRS daemon.
    [oracle@rac1 bin]$
    ==========================================================================
    [oracle@rac1 bin]$ ./crsctl
    Usage: crsctl check crs - checks the viability of the CRS stack
    crsctl check cssd - checks the viability of CSS
    crsctl check crsd - checks the viability of CRS
    crsctl check evmd - checks the viability of EVM
    crsctl set css <parameter> <value> - sets a parameter override
    crsctl get css <parameter> - gets the value of a CSS parameter
    crsctl unset css <parameter> - sets CSS parameter to its default
    crsctl query css votedisk - lists the voting disks used by CSS
    crsctl add css votedisk <path> - adds a new voting disk
    crsctl delete css votedisk <path> - removes a voting disk
    crsctl enable crs - enables startup for all CRS daemons
    crsctl disable crs - disables startup for all CRS daemons
    crsctl start crs - starts all CRS daemons.
    crsctl stop crs - stops all CRS daemons. Stops CRS resources in case of cluster.
    crsctl start resources - starts CRS resources.
    crsctl stop resources - stops CRS resources.
    crsctl debug statedump evm - dumps state info for evm objects
    crsctl debug statedump crs - dumps state info for crs objects
    crsctl debug statedump css - dumps state info for css objects
    crsctl debug log css [module:level]{,module:level} ...
    - Turns on debugging for CSS
    crsctl debug trace css - dumps CSS in-memory tracing cache
    crsctl debug log crs [module:level]{,module:level} ...
    - Turns on debugging for CRS
    crsctl debug trace crs - dumps CRS in-memory tracing cache
    crsctl debug log evm [module:level]{,module:level} ...
    - Turns on debugging for EVM
    crsctl debug trace evm - dumps EVM in-memory tracing cache
    crsctl debug log res <resname:level> turns on debugging for resources
    crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
    crsctl query crs activeversion - lists the CRS software operating version
    crsctl lsmodules css - lists the CSS modules that can be used for debugging
    crsctl lsmodules crs - lists the CRS modules that can be used for debugging
    crsctl lsmodules evm - lists the EVM modules that can be used for debugging
    If necesary any of these commands can be run with additional tracing by
    adding a "trace" argument at the very front.
    Example: crsctl trace check css
    [oracle@rac1 bin]$
    Best Regards,
    SG
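    The tar warnings about time stamps being "in the future" indicate that the clocks on rac1 and rac2 are not synchronized, which can interfere with the remote copy and leave the CRS daemons unconfigured on the second node, as the cluvfy output above shows. A sketch for bringing the clocks in line before cleaning up and retrying (the NTP server name is a placeholder):
    # run on both nodes as root
    date                          # compare the two nodes first
    ntpdate -u ntp.example.com    # or your internal time server
    service ntpd start            # keep the clocks synchronized from now on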

  • Crs installation problem in oracle 10g rac with NAS storage

    Hi,
    For practice, I am trying to install Oracle 10gR2 RAC on RHEL5 64-bit on my laptop.
    During the CRS installation I got stuck with the error below while executing root.sh on node 1.
    Error:
    +++++
    Setting the permissions on OCR backup directory
    Setting up NS directories
    PROT-1: Failed to initialize ocrconfig
    Failed to upgrade Oracle Cluster Registry configuration
    ocrconfig.log ;
    ++++++++++++
    NFS file system /u01 mounted with incorrect options
    [  OCROSD][4265610768]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,(noac | actimeo=0 | acregmin=0,acregmax=0,acdirmin=0,acdirmax=0
    [  OCROSD][4265610768]utopen:6m'': OCR location [share/storage/ocr] configured is not a valid storage type. Rturn code [37].
    As per MetaLink, I found that this problem is fixed by patch 4679769.
    # Patch Installation Instructions:
    # To apply the patch, unzip the PSE container file:
    # p4679769_10201_LINUX.zip
    # Set your current directory to the directory where the patch
    # is located:
    # % cd 4679769
    # Copy the clsfmt.bin binary to the $ORACLE_HOME/bin directory where
    # clsfmt is being run:
    # % cp $ORACLE_HOME/bin/clsfmt.bin $ORACLE_HOME/bin/clsfmt.bin.bak
    # % cp clsfmt.bin $ORACLE_HOME/bin/clsfmt.bin
    # Ensure permissions on the clsfmt.bin binary are correct:
    # % chmod 755 $ORACLE_HOME/bin/clsfmt.bin
    3. Run the root.sh script and proceed with the installation.
    **My question: I have not installed the database yet; I am only trying to install CRS, but this README says to replace the clsfmt.bin file in ORACLE_HOME/bin.**
    **However, I do not have a bin directory under my ORACLE_HOME. Please clear up my doubt about how to apply this patch.**
    Regards,
    Mugunth

    Also, your clusterware installation installs into an ORACLE_HOME.
    Oracle only makes a distinction when it has to be clear that you have a clusterware home and a database home.
    Normally, when a patch refers to $ORACLE_HOME (and the patch can be used for both clusterware and database), it simply means the installation directory of the Oracle software being patched.
    Sebastian
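    The ocrconfig.log warning earlier in this thread is about the NFS mount options of the file system holding the OCR: clusterware expects a hard mount with rsize/wsize of at least 32768 and attribute caching disabled. A sketch of an /etc/fstab entry that satisfies the options listed in that warning (the server name, export path and mount point are placeholders):
    # NFS mount for shared CRS files on Linux; remount after editing
    nas-server:/export/crs  /u01/shared  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0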

  • Grid installation: root.sh failed on the first node on Solaris cluster 4.1

    Hi all,
    I'm trying to install Grid Infrastructure (11.2.0.3.0) on a two-node cluster (Oracle Solaris Cluster 4.1).
    When I run root.sh on the first node, I get the output below:
    xha239080-root-5.11# root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /Grid/CRShome
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    /usr/local/bin is read only. Continue without copy (y/n) or retry (r)? [y]:
    Warning: /usr/local/bin is read only. No files will be copied.
    Creating /var/opt/oracle/oratab file...
    Entries will be added to the /var/opt/oracle/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /Grid/CRShome/crs/install/crsconfig_params
    Creating trace directory
    User ignored Prerequisites during installation
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'xha239080'
    CRS-2676: Start of 'ora.mdnsd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'xha239080'
    CRS-2676: Start of 'ora.gpnpd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.cssd' on 'xha239080' succeeded
    ASM created and started successfully.
    Disk Group DATA created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 9cdb938773bc4f16bf332edac499fd06.
    Successful addition of voting disk 842907db11f74f59bf65247138d6e8f5.
    Successful addition of voting disk 748852d2a5c84f72bfcd50d60f65654d.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 9cdb938773bc4f16bf332edac499fd06 (/dev/did/rdsk/d10s6) [DATA]
    2. ONLINE 842907db11f74f59bf65247138d6e8f5 (/dev/did/rdsk/d8s6) [DATA]
    3. ONLINE 748852d2a5c84f72bfcd50d60f65654d (/dev/did/rdsk/d9s6) [DATA]
    Located 3 voting disk(s).
    Start of resource "ora.cssd" failed
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2674: Start of 'ora.cssd' on 'xha239080' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'xha239080'
    CRS-2681: Clean of 'ora.cssd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'xha239080'
    CRS-2677: Stop of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'xha239080'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-5804: Communication error with agent process
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start Cluster Synchorinisation Service in clustered mode at /Grid/CRShome/crs/install/crsconfig_lib.pm line 1211.
    /Grid/CRShome/perl/bin/perl -I/Grid/CRShome/perl/lib -I/Grid/CRShome/crs/install /Grid/CRShome/crs/install/rootcrs.pl execution failed
    xha239080-root-5.11# history
    Checking the ocssd.log, I see something like the following:
    2013-09-16 18:46:24.238: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.3.0, in (clustered) mode with uniqueness value 1379371584
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Environment is production
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Core file size limit extended
    2013-09-16 18:46:24.248: [    CSSD][1]clssscmain: GIPCHA down 1
    2013-09-16 18:46:24.249: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter logsize (8) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for file descriptors is 65536, hard limit is 65536
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for locked memory is 4294967293, hard limit is 4294967293
    2013-09-16 18:46:24.250: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter priority (15) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscSetPrivEnv: Setting priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]clssscSetPrivEnv: unable to set priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]SLOS: cat=-2, opn=scls_mem_lockdown, dep=11, loc=mlockall
    unable to lock memory
    2013-09-16 18:46:24.253: [    CSSD][1](:CSSSC00011:)clssscExit: A fatal error occurred during initialization
    Does anyone have an idea what is going on and how I can fix it?

    Hi,
    solaris has several issues with DISM, e.g.:
    Solaris 10 and Solaris 11 Shared Memory Locking May Fail (Doc ID 1590151.1)
    It sounds like Solaris Cluster has a similar bug. A "workaround" is to reboot the (cluster) zone; that "fixes" the mlock error. This bug was introduced with updates in September, at least in our environment (Solaris 11.1). Previously I did not have the issue, and now I have to restart the entire zone whenever I stop CRS.
    With 11.2.0.3 the root.sh script can be re-run without cleaning up first, so you should be able to continue the installation at that point after the reboot. After root.sh completes, some configuration assistants still need to be run to finish the installation; you have to execute them manually, since your OUI session is gone.
    Kind Regards
    Thomas
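    A minimal sketch of resuming after the zone reboot, using the paths shown in the output above (verify them before running anything as root):
    # re-run root.sh on the failed node; with 11.2.0.3 it continues where it stopped
    /Grid/CRShome/root.sh
    # or, if a clean re-configuration of this node is preferred first:
    /Grid/CRShome/crs/install/rootcrs.pl -deconfig -force
    /Grid/CRShome/root.sh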

  • 11gR2 RAC installation in AIX fails while running root.sh on node2

    Hi
    We are in the process of installing 11gR2 RAC on AIX 6.1,
    but the installation is failing on node 2 while running root.sh, with the following error:
    DiskGroup DATA creation failed with the following message:
    ORA-15018: diskgroup cannot be created
    ORA-15017: diskgroup "DATA" cannot be mounted
    ORA-15003: diskgroup "DATA" already mounted in another lock name space
    Configuration of ASM failed, see logs for details
    Did not succssfully configure and start ASM
    CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
    CRS-4000: Command Stop failed, or completed with errors.
    Command return code of 1 (256) from command: XXXXX/grid/bin/crsctl stop resource ora.crsd -init
    Stop of resource "ora.crsd -init" failed
    Failed to stop CRSD
    Please help
    Regards

    Hi,
    the second node (second root.sh) should not try to "create" the disk group DATA, since it is already there.
    This pretty much sounds like you started root.sh on both nodes at the same time, without waiting for it to finish on the first node.
    It is important that root.sh has finished successfully on the first node before you start it on the second node, so that the cluster information gets updated.
    Sebastian
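    A sketch of the required sequence, keeping the elided grid home path from the thread (substitute your actual path):
    # node 1, as root -- wait until it reports success before touching node 2
    XXXXX/grid/root.sh
    # node 2, as root -- if root.sh already ran there prematurely, deconfigure first, then re-run
    XXXXX/grid/crs/install/rootcrs.pl -deconfig -force
    XXXXX/grid/root.sh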
