RAC: 'AttachHome' failed on nodes

I am in the process of configuring a two-node Real Application Cluster on VMware.
I installed Clusterware successfully on both nodes.
When I install the Oracle database software, at the 90% stage I get the error below:
'AttachHome' failed on nodes: 'rac2'. Refer to '/u01/app/oraInventory/logs/installActions2011-03-17_06-57-18PM.log' for details.
You can manually re-run the following command on the failed nodes after the installation:
/u01/app/oracle/product/11.1.0/db_1/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>.
INFO: User Selected: Yes/OK
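So I assume that on the failed node the manual re-run would look like this, with LOCAL_NODE set to the node the command is run on (rac2 in my case):
/u01/app/oracle/product/11.1.0/db_1/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=rac2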

Hi,
This post is a duplicate.
Please close this thread; we will continue on this thread: RAC software installation : AttachHome' failed on nodes: 'rac2'
Levi Pereira

Similar Messages

  • 11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2

    Hi there,
    I am trying to create a two-node 11g R2 RAC on OEL 5.5 (32-bit) using VMware virtual machines. I have configured both nodes correctly. The Cluster Verification Utility returns the following error [which I believe can be ignored]:
    Checking daemon liveness...
    Liveness check failed for "ntpd"
    Check failed on nodes:
    rac2,rac1
    PRVF-5415 : Check to see if NTP daemon is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
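    (Side note: the usual fix I have seen for PRVF-5415 on OEL is to run ntpd with the slewing option; a sketch, assuming the stock /etc/sysconfig/ntpd layout, on both nodes:
    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
    # service ntpd restart
    # chkconfig ntpd on
    I have not applied it, since the check seems ignorable.)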
    During the Grid Infrastructure installation (for a Cluster option), things go very smoothly until I run "root.sh" on node 2. orainstRoot.sh ran OK on both nodes. "root.sh" ran OK on node 1 and ends with:
    Checking swap space: must be greater than 500 MB.   Actual 1967 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.
    [root@rac1 ~]#
    "root.sh" fails on rac2 (2nd node) with following error:
    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
    Timed out waiting for the CRS stack to start.
    [root@rac2 ~]#
    I know this info may not be enough to figure out what the problem is. Please let me know what I should look for to find and fix the issue. It's been almost two weeks now :-(
    Regards
    Amer

    Hi Zheng,
    ocssd.log is HUGE, so I am pasting a few of the last lines of the log file, hoping they may give some clue:
    2011-07-04 19:49:24.007: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500  cur_ms 36118424 lastalive 36116244
    2011-07-04 19:49:26.005: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
    2011-07-04 19:49:26.006: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500  cur_ms 36120424 lastalive 36116244
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:48.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:48.005: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.003: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.008: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:12.009: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500  cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:15.796: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500  cur_ms 36170214 lastalive 36168084
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500  cur_ms 36172244 lastalive 36170674
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500  cur_ms 36180424 lastalive 36178804
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500  cur_ms 36182414 lastalive 36180624
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:37.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:37.002: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    <end of log file>
    And the alertrac2.log contains:
    [root@rac2 rac2]# cat alertrac2.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-07-02 16:43:51.571
    [client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
    2011-07-02 16:43:57.125
    [client(16134)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 16:44:43.214
    [ohasd(16188)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 16:45:06.446
    [ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 16:53:30.061
    [ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 16:53:55.042
    [cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 16:54:38.334
    [cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    [cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 16:54:38.464
    [cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 16:54:39.174
    [ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 16:55:43.430
    [cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 16:56:02.852
    [cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 16:56:04.061
    [cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 16:56:18.350
    [cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 16:56:29.283
    [ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 16:56:29.551
    [ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 16:56:29.615
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 16:56:29.616
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 16:56:29.641
    [ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18056)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:01:40.963
    [ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
    [client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18594)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 17:46:48.717
    [crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:49.641
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:51.459
    [crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:51.776
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:53.928
    [crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:53.956
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:55.834
    [crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:56.273
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:57.762
    [crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:58.631
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:00.259
    [crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:00.968
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:02.513
    [crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:03.309
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:05.081
    [crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:05.770
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:07.796
    [crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:08.257
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:10.733
    [crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:11.739
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:13.547
    [crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:14.111
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:14.112
    [ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    [client(26883)]CRS-10001:ACFS-9200: Supported
    2011-07-02 18:13:34.627
    [ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
    2011-07-02 18:13:42.368
    [cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:15:13.877
    [client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
    2011-07-02 18:15:14.011
    [client(27222)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 18:15:23.226
    [ohasd(27261)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 18:15:23.688
    [ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
    2011-07-02 18:15:24.064
    [ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 18:16:29.761
    [ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 18:16:30.190
    [gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:16:41.561
    [cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 18:16:49.111
    [cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:16:49.166
    [cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    [cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 18:17:01.122
    [cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:17:06.917
    [ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 18:17:23.602
    [mdnsd(28485)]CRS-5602:mDNS service stopping by request.
    2011-07-02 18:17:36.217
    [gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:17:43.673
    [cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 18:17:49.826
    [cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:17:49.865
    [cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 18:18:03.049
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 18:18:06.160
    [ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 18:18:06.220
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 18:18:06.238
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:18:06.239
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 18:18:06.794
    [ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(28895)]CRS-10001:ACFS-9322: done.
    2011-07-02 18:18:33.465
    [crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:33.575
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:35.757
    [crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:36.129
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:38.596
    [crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:39.146
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:41.058
    [crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:41.435
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:44.255
    [crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:45.165
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:47.013
    [crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:47.409
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:50.071
    [crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:50.118
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:51.843
    [crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:52.373
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:54.361
    [crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:54.772
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:56.620
    [crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:57.104
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:58.997
    [crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:59.301
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:59.302
    [ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:55:21.907
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:55:21.908
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:57:54.682
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:57:54.683
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 23:07:28.603
    [cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.020 seconds
    2011-07-02 23:07:35.621
    [cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.010 seconds
    2011-07-02 23:07:39.629
    [cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds
    2011-07-02 23:07:42.641
    [cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
    2011-07-02 23:07:44.751
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
    2011-07-02 23:07:45.326
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
    2011-07-04 19:46:26.008
    [ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
    [ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
    [root@rac2 rac2]#
    This log file starts with a complaint that the OLR is not accessible. Here is what I see (rac2):
    -rw------- 1 root oinstall 272756736 Jul  2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
    And I guess the rest of the problems start with this.
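    To rule the OLR in or out, I suppose I could check it directly as root on rac2 (ocrcheck takes a -local flag for the OLR):
    # /u01/grid/oracle/product/11.2.0/grid/bin/ocrcheck -local
    # /u01/grid/oracle/product/11.2.0/grid/bin/crsctl check crs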

  • Runcluvfy.sh stage pre fails with node reachability on 1 node only

    Having a frustrating problem: a 2-node RAC system on RHEL 5.2, installing 11.2.0.1 grid/clusterware. I am performing the following pre-check command from node 1:
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
    I get the following error, and it cannot write the trace information:
    [grid@node1 grid]$ sudo chmod -R 777 /tmp
    [grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
    WARNING:
    Could not access or create trace file path "/tmp/bootstrap/cv/log". Trace information could not be collected
    Performing pre-checks for cluster services setup
    Checking node reachability...
    node1.mydomain.com: node1.mydomain.com
    Check: Node reachability from node "null"
      Destination Node                      Reachable?
      node2                       no
      node1                       no
    Result: Node reachability check failed from node "null"
    ERROR:
    Unable to reach any of the nodes
    Verification cannot proceed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    [grid@node1 grid]$
    [grid@node1 grid]$ echo $CV_DESTLOC
    /home/grid/software/grid/11gr2/grid
    I've verified the following:
    1) there is user equivalence between the nodes for user grid
    2) /tmp is read/writable by user grid on both nodes
    3) Setting the CV_DESTLOC appears to do nothing - it seems to go back to wanting to write to /tmp
    4) ./runcluvfy.sh comp nodecon -n node1,node2 -verbose succeeds with no problem
    And the weirdest thing of all, when I run ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose from node 2, it succeeds without errors.
    What am I missing? And TIA..

    I made a copy of runcluvfy.sh and commented out all the rm -rf commands so that it would at least save the trace files. I re-ran it and got the following trace output. It is not entirely helpful to me, but do any gurus out there see anything?
    [main] [ 2010-04-20 15:48:38.275 CDT ] [TaskNodeConnectivity.performTask:354]  _nw_:Performing Node Reachability verification task...
    [main] [ 2010-04-20 15:48:38.282 CDT ] [ResultSet.traceResultSet:341]
    Target ResultSet BEFORE Upload===>
            Overall Status->UNKNOWN
    [main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
    Source ResultSet ===>
            Overall Status->OPERATION_FAILED
            node2-->OPERATION_FAILED
            node1-->OPERATION_FAILED
    [main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
    Target ResultSet AFTER Upload===>
            Overall Status->OPERATION_FAILED
            node2-->OPERATION_FAILED
            node1-->OPERATION_FAILED
    [main] [ 2010-04-20 15:48:38.284 CDT ] [ResultSet.getSuccNodes:556]  Checking for Success nodes from the total list of nodes in the resultset
    [main] [ 2010-04-20 15:48:38.284 CDT ] [ReportUtil.printReportFooter:1553]  stageMsgID: 8302
    [main] [ 2010-04-20 15:48:38.284 CDT ] [CluvfyDriver.main:299]  ==== cluvfy exiting normally.
    I'm still baffled why the precheck is successful from the second node. And, in fact, all other cluvfy checks that I've run succeed from both nodes.
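    One more thing I plan to try, on the guess that a stale /tmp/bootstrap left behind by an earlier run (perhaps created by another user) is what blocks the trace path:
    # as root on node1
    rm -rf /tmp/bootstrap
    # then as grid, point the trace dump somewhere writable (CV_TRACELOC, if I have the variable name right)
    export CV_TRACELOC=/home/grid/cvtrace
    mkdir -p /home/grid/cvtrace
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose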

  • Shared storage check failed on nodes

    Hi friends,
    I am installing 10g RAC on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below:
    ./runcluvfy.sh stage -post hwos -n rac1,rac2
    I am facing the error below:
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.

    Hello,
    It seems that your storage is not accessible from both nodes; a quick way to verify that is sketched below. If you want, you can follow these steps to configure 10g RAC on VMware.
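    A quick way to verify the sharedness yourself (a sketch; the scsi_id syntax below is the old RHEL4/OEL4 form): as root on each node, run
    /sbin/scsi_id -g -u -s /block/sde
    If the two nodes print different IDs, or nothing at all, the disk is not truly shared.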
    Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must have the same version, e.g. (10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
    a) all 10g rpms (better to use RHEL-4 and choose the 'Everything' option to install all the rpms)
    b) 4 new rpms are required for installations
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. Network configuration
    # vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
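    (Apply the new kernel parameters without a reboot by running: # sysctl -p)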
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
    4. Storage configuration
    PART-A: Openfiler set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART-B: Configuring storage on openfiler
    a) From any client machine, open the browser and access the openfiler console (port 446).
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "block Device managemrnt" click on "(/dev/sda)" option under 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right-hand side tab, click on "Add Volumes", specify the volume name (ex- racvol1), use all the space, set the "Filesystem/Volume type" to iSCSI, and click CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) Then go to "LUN Mapping" and click on "MAP".
    k) Then go to "Network ACL", allow both nodes from there, and click on UPDATE.
    Note:- To create multiple volumes with openfiler we would need multipathing, which is quite complex; that's why we are going with a single volume here. Edit the property of each volume and change access to 'allow'.
    f) Install the iscsi-initiator rpm on both nodes to access the iSCSI disk:
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make an entry for the openfiler storage in the iscsi.conf file on both nodes.
    # vim /etc/iscsi.conf (in RHEL-4)
    In this file you will find the line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address here.
    OR
    # vim /etc/iscsi/iscsi.conf (in RHEL-5)
    In this file you will find the line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address here.
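    For example, with the openfiler address used in this guide, the uncommented RHEL-4 line would read:
    DiscoveryAddress=192.9.201.182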
    g) #service iscsi restart (on both nodes)
    h) From both Nodes fire this command to access volume of openfiler-
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
    j) # chkconfig --level 345 iscsi on (on both nodes)
    k) Make 3 primary partitions and 1 extended partition; within the extended partition make 11 logical partitions.
    A. Prepare partitions
    1. # fdisk /dev/sdb
    n (new partition), then e (extended), partition number 1, and accept the default first and last cylinders
    then, for each of the 11 logical partitions:
    n (new partition), then l (logical), accept the default first cylinder, and give +1024M as the last cylinder
    (p prints the partition table; w writes it and exits)
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Log in as root on node2 and run partprobe there as well.
    B. On node1, log in as root and create the following raw devices (raw5 through raw15, one per logical partition):
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above.
    Repeat the same thing on node2.
    C. On node1, as root:
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to the oracle user, run these commands on both nodes:
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    G. To keep these partitions accessible after a restart, make these entries on both nodes:
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    5. SSH configuration (user equivalence)
    On node1:- $ ssh-keygen -t rsa
    $ ssh-keygen -t dsa
    On node2:- $ ssh-keygen -t rsa
    $ ssh-keygen -t dsa
    On node1:- $ cd .ssh
    $ cat *.pub >> node1
    On node2:- $ cd .ssh
    $ cat *.pub >> node2
    On node1:- $ scp node1 node2:/home/oracle/.ssh
    On node2:- $ scp node2 node1:/home/oracle/.ssh
    On node1:- $ cat node* >> authorized_keys
    On node2:- $ cat node* >> authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you'll have to give the password; after that it never asks for a password.
    6. Run the cluster verifier
    On node1:- $ cd /…/stage…/cluster…/cluvfy
    $ ./runcluvfy.sh stage -pre crsinst -n node1,node2
    The first time it will ask for four new RPMs; remember to install them by double-clicking because of the dependencies, and preferably in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    Then run cluvfy again, check that it gives a clean chit, and start the Clusterware installation.

  • Data guard setup for 2 node RAC primary to 2 node RAC standby

    Hi All,
    I am going to set up Data Guard for a 2-node RAC primary to a 2-node RAC standby on Oracle 10.2.0.4 on AIX 5L.
    Can you please provide a document on the above setup that has all the steps (details)?
    Also, documents on different scenarios like:
    1) If one node of the standby goes down, how will the redo logs be applied? Is there any problem?
    2) If both nodes of the standby fail, how to recover them?
    3) If one node of the primary fails, is there any issue?
    4) If both nodes of the primary fail, is there any issue?
    Thanks in advance,
    Mahi

    Have a look at the following location; you may find some similar documents:
    http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
    By
    http://www.oraxperts.com

  • Can't install ORACLE RAC on Solaris (specified nodes are not clusterable)

    Hi all,
    Could you please help with this Oracle CRS issue?
    During the Oracle CRS installation, the OUI indicates that the specified nodes are not clusterable.
    A window appears and displays:
    "The specified nodes are not clusterable.
    The following error was returned by the operating system:"
    I am using 10gr2_cluster_sol.cpio.gz file.
    My Solaris 10 configuration:
    server - sun3
    bash-3.00# cat /etc/hosts
    # Internet host table
    127.0.0.1 localhost
    10.160.19.49 sun3 loghost
    10.160.19.50 sun4 loghost
    10.11.12.13 sun3prv
    10.11.12.14 sun4prv
    10.160.19.64 sun3pub
    10.160.19.65 sun4pub
    bash-3.00# ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.160.19.49 netmask fffffe00 broadcast 10.160.19.255
    ether 0:14:4f:0:64:82
    bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.11.12.13 netmask fffffe00 broadcast 10.11.13.255
    ether 0:14:4f:0:64:83
    bash-3.00# cat /etc/netmasks
    10.160.18.0 255.255.254.0
    10.160.19.0 255.255.254.0
    10.11.12.0 255.255.254.0
    bash-3.00# cat /etc/hostname.bge0
    sun3
    bash-3.00# cat /etc/hostname.bge1
    sun3prv
    server - sun4
    bash-3.00# cat /etc/hosts
    # Internet host table
    127.0.0.1 localhost
    10.160.19.50 sun4 loghost
    10.160.19.49 sun3 loghost
    10.11.12.14 sun4prv
    10.11.12.13 sun3prv
    10.160.19.63 sun4pub
    10.160.19.62 sun3pub
    bash-3.00# ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.160.19.50 netmask fffffe00 broadcast 10.160.19.255
    ether 0:14:4f:0:41:c8
    bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.11.12.14 netmask fffffe00 broadcast 10.11.13.255
    ether 0:14:4f:0:41:c9
    bash-3.00# cat /etc/netmasks
    10.160.18.0 255.255.254.0
    10.11.12.0 255.255.254.0
    10.160.19.0 255.255.254.0
    bash-3.00# cat /etc/hostname.bge1
    sun4prv
    bash-3.00# cat /etc/hostname.bge0
    sun4

    0) This error occurs when I run ./runInstaller
    All prerequisite checks passed. The error window appears after clicking the Next button in the Specify Cluster Configuration window.
    1) I have changed the /etc/hosts file as you mentioned
    SUN3
    bash-3.00# cat /etc/hosts
    # Internet host table
    ::1 localhost
    127.0.0.1 localhost
    10.160.19.49 sun3
    10.160.19.50 sun4
    10.11.12.13 sun3-vip
    10.11.12.14 sun4-vip
    10.160.19.64 sun3pub
    10.160.19.65 sun4pub
    SUN4
    bash-3.00# cat /etc/hosts
    # Internet host table
    ::1 localhost
    127.0.0.1 localhost
    10.160.19.50 sun4
    10.160.19.49 sun3
    10.11.12.13 sun3-vip
    10.11.12.14 sun4-vip
    10.160.19.64 sun3pub
    10.160.19.65 sun4pub
    Also I have configured bge0:1 interface
    bash-3.00# ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.160.19.49 netmask fffffe00 broadcast 10.160.19.255
    ether 0:14:4f:0:64:82
    bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 10.160.19.64 netmask ffffff00 broadcast 10.160.19.255
    bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.11.12.13 netmask fffffe00 broadcast 10.11.13.255
    ether 0:14:4f:0:64:83
    2) I have removed loghost from the /etc/hosts file
    3) Currently I do not have shared storage; I am going to use Storage Foundation to create it
    Also, I tried testing the machines using the runcluvfy.sh command.
    The output is the following:
    -bash-3.00$ ./runcluvfy.sh stage -pre crsinst -n sun3,sun4
    Performing pre-checks for cluster services setup
    Checking node reachability...
    Node reachability check passed from node "sun3".
    Checking user equivalence...
    User equivalence check failed for user "oracle".
    Check failed on nodes:
    sun4,sun3
    ERROR:
    User equivalence unavailable on all the nodes.
    Verification cannot proceed.
    Pre-check for cluster services setup was unsuccessful on all the nodes.
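    (That last error just means SSH user equivalence for the oracle user is not yet configured between sun3 and sun4; a minimal sketch, assuming OpenSSH: as oracle on both nodes run
    $ ssh-keygen -t rsa
    $ ssh-keygen -t dsa
    then merge both nodes' ~/.ssh/*.pub files into ~/.ssh/authorized_keys on each node, and confirm that
    $ ssh sun3 date
    $ ssh sun4 date
    return without a password prompt. OUI makes the same passwordless connection when it checks whether the nodes are clusterable.)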

  • Tree creation failed on node: pcd:portal_content/...

    Hi out there,
    after patching our EP 7 from SPS 10 to SPS 13 we have the following error:
    In the portal content directory there are some folders which cannot be
    expanded anymore. The following error message appears: "Could not load
    or refresh node Tree creation failed on node: pcd:portal_content/..."
    I have found out that this is always caused by an inconsistent role
    which is located in the folder I'd like to expand (or in a subfolder
    of it).
    It seems that this error has already been occurring in earlier SPs and/or releases. Has anyone found a solution to it?
    Thanx for your support and best regards
    Juergen Kuechle.

    Hi,
    Restarting the server should solve the problem.
    The problem is caused by a missing "ADD" notification for imported folder objects. If the package contains new folder objects, the caches on other cluster nodes are not synchronized for these folders. If the package contains a complete folder hierarchy with other PCD objects (like iViews, pages, roles, ...) under a folder, and this folder object is contained in the package and imported before the objects under the folder, the complete hierarchy is not synchronized in the cluster.
    Workarounds:
    - Release the PCD cache on all cluster nodes after importing an EPA file containing folder objects
    - Create separate transport packages for the folder objects and import them after the transport packages containing the (regular) objects under these folders
    Check SAP Note 991599.
    Regards
    Krishna.

  • License problem: failover node

    Dear Gurus,
    I am receiving an error while applying the license to the failover server.
    SAPLICENSE (Release 700) ERROR ***
    ERROR: Can not set DbSl trace function
    DETAILS: DbSlControl(DBSL_CMD_IMP_FUNS_SET) failed with return code
    20
    RC-INFO: error loading dynamic db-library - check environment for:
    dbms_type = <db-type> (e.g. ora)
    DIR_LIBRARY = <path to db-dll>
    (e.g. /usr/sap/SID/SYS/exe/run)
    LD_LIBRARY_PATH = <path do db and sap libs>
    (e.g. /oracle/SID/lib)
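    (For reference, setting the environment the error asks for would look roughly like this for Oracle, substituting the real SID; this is just my reading of the hints above:
    export dbms_type=ora
    export DIR_LIBRARY=/usr/sap/SID/SYS/exe/run
    export LD_LIBRARY_PATH=/oracle/SID/lib)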
    My production system number is 00000000031XXXXXXX but the license key is for 00000000031XYYYYY.
    I am not able to log in to the failover node due to this.
    How can I resolve this problem?
    Sachin

    Sachin,
    How are you applying the license, through SLICENSE?
    You are unable to log in on the failover node? Have you deleted the temporary license, or has it crossed 4 weeks?
    If you have deleted the temporary license, apply the license through the Visual Administrator.
    If it has crossed 4 weeks, set the date back and apply the license.
    You must have applied for a license in OSS using the other system with the same SID;
    you have to copy the system number from the node you applied the license on, apply for a license under the same system in OSS, and give that system number.
    When you get the license, apply it as mentioned above.

  • Could not load or refresh node tree creation failed on node

    Hi Experts
    I created one user with MSS role and ERP COMM role.
    It is working fine. We restarted the server for other purpose. Then surprisingly the MSS role was locked. I am unable to see the MSS tab in the top level navigation.
    When i expand the workset in line manager -> Manager Self Service -> Worksets
    I am getting the following error::
    Could not load or refresh node tree creation failed on node:
    pcd:portal_content/com.sap.pct/line_manager/com.sap.pct.erp.mss.bp_folder/com.sap.pct.erp.mss.worksets
    I am able to expand and preview the pages and iViews, except the worksets.
    Please guide me; I am badly stuck here.
    Regards,
    Srinivas

    Hi KRISHNA,
    Check the below thread and notes mentioned in that
    Re: Tree creation failed on node: pcd:portal_content
    Koti Reddy

  • R12 Apps with 4 node RAC and 4 apps nodes - Post Install check list

    Hi All,
    We have a new Oracle Apps R12 installation with a 4-node RAC and 4 Apps nodes, with load balancing and the external web tier done by an outside firm. We are assigned the responsibility of checking that everything has been configured properly, both on the RAC side and the Apps side. I haven't worked on RAC before. Please let me know what needs to be checked before approving the install done by the outside firm.
    Thank you!

    Please check the Metalink notes below:
    RAC Assurance Support Team: RAC and Oracle Clusterware Starter Kit and Best Practices (Generic) [ID 810394.1]
    Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12 [ID 388577.1]
    Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 [ID 823587.1]
    Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap [ID 745759.1]
    745759.1 Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
    384248.1 Sharing The Application Tier file system in Oracle E-Business Suite Release 12
    387859.1 Using AutoConfig to Manage System Configurations with Oracle E-Business Suite Release 12
    406982.1 Cloning Oracle Applications Release 12 with Rapid Clone
    240575.1 RAC on Linux Best Practices
    265633.1 Automatic Storage Management Technical Best Practices
    Load balancer
    note 380489.1 Using Load-Balancers with Oracle E-Business Suite Release 12
    note 727171.1 Implementing Load Balancing On Oracle E-Business Suite - Documentation For Specific Load Balancer Hardware
    note 601694.1 How To Check Session Persistence On BigIP F5 And Cisco Ace Load Balancer Appliances
    note 603325.1 Using Cisco ACE Series Application Control Engine with Oracle E-Business Suite Release 12
    Installation R12
    note 761564.1 Oracle Applications Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86
    note 402310.1 Oracle Applications Installation and Upgrade notes Release 12 (12.0) for Linux (32-bit)
    note 559518.1 Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
    note 735276.1 Interoperability notes E-Business Suite R12 with Oracle Database 11gR1
    Shared Applications Configurations:
    note 380483.1 Oracle E-Business Suite Release 12 Additional Configuration and Deployment Options
    note 384248.1 Sharing The Application Tier File System in Oracle E-Business Suite Release 12

  • FPN Could not load or refresh node Tree creation failed on node: pcd:-Error

    Hi,
    I'm configuring an FPN using CRM Java as a producer and SAP Portal as a consumer. SSO from SAP Portal to CRM Java is working fine, but we get this error when trying to expand Content Administration > Portal Content > NetWeaver Content Producers > MyProducer System:
    Could not load or refresh node Tree creation failed on node: pcd:NetWeaver_content_producers/........
    Also, in Identity Management I can see the federated data source, but when I search for roles no role is retrieved and two messages are returned:
    Last search might be inaccurate
    No element found
    Has somebody faced this error?
    Thanks in advance!
    Kind Regards,
    Gerardo J

    Hi KRISHNA,
    Check the below thread and notes mentioned in that
    Re: Tree creation failed on node: pcd:portal_content
    Koti Reddy

  • 11gR2 RAC install fail when running root.sh script on second node

    I get the errors:
    ORA-15018: diskgroup cannot be created
    ORA-15072: command requires at least 2 regular failure groups, discovered only 0
    ORA-15080: synchronous I/O operation to a disk failed
    [main] [ 2012-04-10 16:44:12.564 EDT ] [UsmcaLogger.logException:175] oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-15018: diskgroup cannot be created
    ORA-15072: command requires at least 2 regular failure groups, discovered only 0
    ORA-15080: synchronous I/O operation to a disk failed
    I have tried the fixes from the Metalink note below, but they did not fix the issue:
    11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC USING ASMLIB [ID 1059847.1]

    Hi,
    it looks like the "shared device" you are using is not really shared.
    The second node goes ahead and does "create an ASM diskgroup", creating the OCR and voting disks afresh. If this were indeed a shared device, it would have recognized that your disk is shared.
    So, as a result, your VMware configuration must be wrong, and the disk you presented as a shared disk is not really shared.
    Which VMware version did you use? It will not work correctly with the Workstation or Player editions, since shared disks only really work with the Server version.
    If you are indeed using the Server edition, could you paste your VM configuration? The shared-disk entries typically look like the sketch below.
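    For reference, the shared-disk entries in each node's .vmx on VMware Server typically look something like this (a sketch; the controller number and the shared_asm.vmdk name are just examples):
    disk.locking = "FALSE"
    diskLib.dataCacheMaxSize = "0"
    scsi1.present = "TRUE"
    scsi1.sharedBus = "virtual"
    scsi1.virtualDev = "lsilogic"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "shared_asm.vmdk"
    scsi1:0.mode = "independent-persistent"
    Both VMs must point at the same preallocated vmdk.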
    Furthermore I recommend using Virtual Box. There is a nice how-to:
    http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php
    Sebastian

  • ORACM always FAILS on one node - Oracle 9i RAC - SLES9 - 9.2.0.8 ORACM

    Hi,
    I really need help with this. I applied all the patches possible. I tried sharing the quorum.dbf as an NFS device, a raw device, an iSCSI LUN... I patched the ORACM to 9.2.0.5, 9.2.0.6, and now 9.2.0.8. The setup is two HP DL360s with SLES9 SP2 x86_64 and Oracle 9.2 RAC.
    The problem is that the cluster manager starts on one node, and when I run ./ocmstart.sh on the other node, it always fails. The CM.LOG file is pasted below. I get the same errors at all patch levels. The quorum.dbf is set up as an iSCSI LUN on a NetApp filer, which is then bound to a raw device on the host. Whichever node I start the Oracle Cluster Manager on first works, and the other node always fails with the errors shown below.
    It also keeps complaining about "InitializeCM: query_module() failed" regarding the hangcheck timer. The hangcheck-timer module is already loaded and I can see it in the /sbin/lsmod output.
    I would really appreciate help on this. This is my master's project at school and I can't graduate if this doesn't work. Please provide some guidance.
    thanks
    vishal
    CM.LOG
    tweedledum:/u01/app/oracle/product/920/oracm/log # cat cm.log
    oracm, version[ 9.2.0.8.0.01 ] started {Tue Feb 13 00:56:16 2007 }
    KernelModuleName is hangcheck-timer {Tue Feb 13 00:56:16 2007 }
    OemNodeConfig(): Network Address of node0: 1.1.1.3 (port 9998)
    {Tue Feb 13 00:56:16 2007 }
    OemNodeConfig(): Network Address of node1: 1.1.1.4 (port 9998)
    {Tue Feb 13 00:56:16 2007 }
    WARNING: OemInit2: Opened file(/oradata/quorum.dbf 6), tid = main:182900764192 file = oem.c, line = 503 {Tue Feb 13 00:56:16 2007 }InitializeCM: ModuleName = hangcheck-timer {Tue Feb 13 00:56:16 2007 }
    ClusterListener: Spawned with tid 0x4080e960 pid: 19662 {Tue Feb 13 00:56:16 2007 }
    ERROR: InitializeCM: query_module() failed, tid = main:182900764192 file = cmstartup.c, line = 341 {Tue Feb 13 00:56:16 2007 }Debug Hang : ClusterListener (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    Debug Hang :StartNMMon (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    Debug Hang : CmConnectListener (PID=19662):Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    CreateLocalEndpoint(): Network Address: 1.1.1.4
    {Tue Feb 13 00:56:16 2007 }
    PollingThread: Spawned with tid 0x40c10960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    Debug Hang :PollingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    SendingThread: Spawned with tid 0x41012960, 0x41012960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    DiskPingThread: Spawned with tid 0x40e11960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    Debug Hang : DiskPingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    Debug Hang :SendingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    UpdateNodeState(): node(1) added udpated {Tue Feb 13 00:56:19 2007 }
    HandleUpdate(): SYNC(1) from node(0) completed {Tue Feb 13 00:56:19 2007 }
    HandleUpdate(): NODE(0) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(1) {Tue Feb 13 00:56:19 2007 }
    HandleUpdate(): NODE(1) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(2) {Tue Feb 13 00:56:19 2007 }
    --- DUMP GROUP STATE DB ---
    --- END OF GROUP STATE DUMP ---
    --- Begin Dump ---
    oracm, version[ 9.2.0.8.0.01 ] started {Tue Feb 13 00:56:16 2007 }
    TRACE: LogListener: Spawned with tid 0x4060d960., tid = LogListener:1080088928 file = logging.c, line = 116 {Tue Feb 13 00:56:16 2007 }
    TRACE: Can't read registry value for HeartBeat, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
    TRACE: Can't read registry value for PollInterval, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
    TRACE: Can't read registry value for WatchdogTimerMargin, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
    TRACE: Can't read registry value for WatchdogSafetyMargin, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }KernelModuleName is hangcheck-timer {Tue Feb 13 00:56:16 2007 }
    TRACE: Can't read registry value for ClientTimeout, tid = main:182900764192 file = unixinc.c, line = 1080 {Tue Feb 13 00:56:16 2007 }
    TRACE: InitNMInfo: setting clientTimeout to 140s based on MissCount 210 and PollInterval 1000ms, tid = main:182900764192 file = nmconfig.c, line = 138 {Tue Feb 13 00:56:16 2007 }
    TRACE: InitClusterDb(): getservbyname on CMSrvr failed - 0 : assigning 9998, tid = main:182900764192 file = nmconfig.c, line = 208 {Tue Feb 13 00:56:16 2007 }OemNodeConfig(): Network Address of node0: 1.1.1.3 (port 9998)
    {Tue Feb 13 00:56:16 2007 }
    OemNodeConfig(): Network Address of node1: 1.1.1.4 (port 9998)
    {Tue Feb 13 00:56:16 2007 }
    TRACE: OemCreateListenPort: bound at 9998, tid = main:182900764192 file = oem.c, line = 907 {Tue Feb 13 00:56:16 2007 }
    TRACE: InitClusterDb(): found my node info at 1 name tweedledum, priv int-dum, port 3623, tid = main:182900764192 file = nmconfig.c, line = 261 {Tue Feb 13 00:56:16 2007 }
    TRACE: InitClusterDb(): Local Node(1) NodeName[int-dum], tid = main:182900764192 file = nmconfig.c, line = 279 {Tue Feb 13 00:56:16 2007 }
    TRACE: InitClusterDb(): Cluster(Oracle) with (2) Defined Nodes, tid = main:182900764192 file = nmconfig.c, line = 282 {Tue Feb 13 00:56:16 2007 }
    TRACE: OEMInits(): CM Disk File (/oradata/quorum.dbf), tid = main:182900764192 file = oem.c, line = 248 {Tue Feb 13 00:56:16 2007 }
    WARNING: OemInit2: Opened file(/oradata/quorum.dbf 6), tid = main:182900764192 file = oem.c, line = 503 {Tue Feb 13 00:56:16 2007 }
    TRACE: ReadOthersDskInfo(): node(0) rcfg(1) wrtcnt(1171356979) lastcnt(0) alive(1171356979), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
    TRACE: ReadOthersDskInfo(): node(1) rcfg(1) wrtcnt(180) lastcnt(0) alive(1), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
    TRACE: ReadOthersDskInfo(): node(2) rcfg(0) wrtcnt(0) lastcnt(0) alive(0), tid = main:182900764192 file = oem.c, line = 1491 {Tue Feb 13 00:56:16 2007 }
    [... 61 identical ReadOthersDskInfo() lines for nodes 3 through 63, all zeroes, trimmed ...]
    InitializeCM: ModuleName = hangcheck-timer {Tue Feb 13 00:56:16 2007 }
    ClusterListener: Spawned with tid 0x4080e960 pid: 19662 {Tue Feb 13 00:56:16 2007 }
    ERROR: InitializeCM: query_module() failed, tid = main:182900764192 file = cmstartup.c, line = 341 {Tue Feb 13 00:56:16 2007 }Debug Hang : ClusterListener (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: ClusterListener (pid=19662, tid=1082190176): Registered with watchdog daemon., tid = ClusterListener:1082190176 file = nmlistener.c, line = 76 {Tue Feb 13 00:56:16 2007 }
    TRACE: CmConnectListener: Spawned with tid 0x40a0f960., tid = CMConnectListerner:1084291424 file = cmclient.c, line = 216 {Tue Feb 13 00:56:16 2007 }Debug Hang :StartNMMon (PID=19662) Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: StartNMMon (pid=19662, tid=-1782829536): Registered with watchdog daemon., tid = main:182900764192 file = cmnodemon.c, line = 254 {Tue Feb 13 00:56:16 2007 }Debug Hang : CmConnectListener (PID=19662):Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: CmConnectListener (pid=19662, tid=1084291424): Registered with watchdog daemon., tid = CMConnectListerner:1084291424 file = cmclient.c, line = 247 {Tue Feb 13 00:56:16 2007 }CreateLocalEndpoint(): Network Address: 1.1.1.4
    {Tue Feb 13 00:56:16 2007 }
    TRACE: StartClusterJoin(): clusterState(0) nodeState(0), tid = main:182900764192 file = nmmember.c, line = 282 {Tue Feb 13 00:56:16 2007 }PollingThread: Spawned with tid 0x40c10960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    Debug Hang :PollingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: PollingThread (pid=19662, tid=1086392672): Registered with watchdog daemon., tid = PollingThread:1086392672 file = nmmember.c, line = 765 {Tue Feb 13 00:56:16 2007 }SendingThread: Spawned with tid 0x41012960, 0x41012960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    DiskPingThread: Spawned with tid 0x40e11960. pid: 19662 {Tue Feb 13 00:56:16 2007 }
    Debug Hang : DiskPingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: DiskPingThread (pid=19662, tid=1088493920): Registered with watchdog daemon., tid = DiskPingThread:1088493920 file = nmmember.c, line = 1083 {Tue Feb 13 00:56:16 2007 }Debug Hang :SendingThread (PID=19662): Registered with ORACM. {Tue Feb 13 00:56:16 2007 }
    TRACE: SendingThread (pid=19662, tid=1090595168): Registered with watchdog daemon., tid = SendingThread:1090595168 file = nmmember.c, line = 581 {Tue Feb 13 00:56:16 2007 }
    TRACE: HandleJoin(): src[1] dest[1] dom[0] seq[1] sync[0], tid = ClusterListener:1082190176 file = nmlisten.c, line = 346 {Tue Feb 13 00:56:16 2007 }
    TRACE: HandleJoin(): JOIN from node(1)->(1), tid = ClusterListener:1082190176 file = nmlisten.c, line = 362 {Tue Feb 13 00:56:16 2007 }
    TRACE: HandleStatus(): node(0) UNKNOWN, tid = ClusterListener:1082190176 file = nmlisten.c, line = 404 {Tue Feb 13 00:56:17 2007 }
    TRACE: HandleStatus(): src[0] dest[1] dom[0] seq[6] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 415 {Tue Feb 13 00:56:17 2007 }
    TRACE: HandleSync(): src[0] dest[1] dom[0] seq[7] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 506 {Tue Feb 13 00:56:17 2007 }
    TRACE: SendAck(): node(0) domain(0) syncSeqNo(1) type(11), tid = ClusterListener:1082190176 file = nmmember.c, line = 1922 {Tue Feb 13 00:56:17 2007 }
    TRACE: HandleVote(): src[0] dest[1] dom[0] seq[8] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 643 {Tue Feb 13 00:56:18 2007 }
    TRACE: SendVoteInfo(): node(0) domain(0) syncSeqNo(1), tid = ClusterListener:1082190176 file = nmmember.c, line = 1736 {Tue Feb 13 00:56:18 2007 }
    TRACE: HandleUpdate(): src[0] dest[1] dom[0] seq[9] sync[1], tid = ClusterListener:1082190176 file = nmlisten.c, line = 849 {Tue Feb 13 00:56:19 2007 }
    TRACE: UpdateNodeState(): nodeNum 0, newState 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1153 {Tue Feb 13 00:56:19 2007 }
    TRACE: UpdateNodeState(): nodeNum 1, newState 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1153 {Tue Feb 13 00:56:19 2007 }UpdateNodeState(): node(1) added udpated {Tue Feb 13 00:56:19 2007 }
    TRACE: SendAck(): node(0) domain(0) syncSeqNo(1) type(15), tid = ClusterListener:1082190176 file = nmmember.c, line = 1922 {Tue Feb 13 00:56:19 2007 }
    TRACE: HandleUpdate(): about to QueueClientEvent 0, 1, tid = ClusterListener:1082190176 file = nmlisten.c, line = 960 {Tue Feb 13 00:56:19 2007 }
    TRACE: QueueClientEvent(): Sending Event(1) , tid = ClusterListener:1082190176 file = nmlisten.c, line = 1386 {Tue Feb 13 00:56:19 2007 }
    TRACE: QueueClientEvent: Node[0] state = 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1390 {Tue Feb 13 00:56:19 2007 }
    TRACE: QueueClientEvent: Node[1] state = 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 1390 {Tue Feb 13 00:56:19 2007 }HandleUpdate(): SYNC(1) from node(0) completed {Tue Feb 13 00:56:19 2007 }
    TRACE: HandleUpdate: saving incarnation value as 2, tid = ClusterListener:1082190176 file = nmlisten.c, line = 983 {Tue Feb 13 00:56:19 2007 }
    HandleUpdate(): NODE(0) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(1) {Tue Feb 13 00:56:19 2007 }
    HandleUpdate(): NODE(1) IS ACTIVE MEMBER OF CLUSTER, INCARNATION(2) {Tue Feb 13 00:56:19 2007 }
    TRACE: HandleStatus(): src[1] dest[1] dom[0] seq[2] sync[2], tid = ClusterListener:1082190176 file = nmlisten.c, line = 415 {Tue Feb 13 00:56:19 2007 }
    TRACE: StartNMMon(): attached as node 1, tid = main:182900764192 file = cmnodemon.c, line = 288 {Tue Feb 13 00:56:19 2007 }
    TRACE: StartNMMon: starting reconfig(2), tid = main:182900764192 file = cmnodemon.c, line = 395 {Tue Feb 13 00:56:19 2007 }
    TRACE: UpdateEventValue: *(bfffe1f0) = (1, 1), tid = main:182900764192 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
    TRACE: UpdateEventValue: *(401bbeb0) = (3, 1), tid = main:182900764192 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
    TRACE: ReconfigThread: started for reconfig (2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 180 {Tue Feb 13 00:56:19 2007 }NMEVENT_RECONFIG [00][00][00][00][00][00][00][03] {Tue Feb 13 00:56:19 2007 }
    TRACE: CleanupNodeContexts(): cleaning up nodes, rcfg(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 671 {Tue Feb 13 00:56:19 2007 }
    TRACE: DisconnectNode(): about to disconnect 0, tid = Reconfig Thread:1092696416 file = cmipc.c, line = 851 {Tue Feb 13 00:56:19 2007 }
    TRACE: DisconnectNode(): waiting for 0 listeners to terminate, tid = Reconfig Thread:1092696416 file = cmipc.c, line = 874 {Tue Feb 13 00:56:19 2007 }
    TRACE: UpdateEventValue: *(401be778) = (0, 1), tid = Reconfig Thread:1092696416 file = unixinc.c, line = 336 {Tue Feb 13 00:56:19 2007 }
    TRACE: CleanupNodeContexts(): successful cleanup of nodes rcfg(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 690 {Tue Feb 13 00:56:19 2007 }
    TRACE: EstablishMasterNode(): MASTER is node(0) reconfigs(2), tid = Reconfig Thread:1092696416 file = cmnodemon.c, line = 832 {Tue Feb 13 00:56:19 2007 }
    TRACE: IncrementEventValue: *(401b97c0) = (1, 1), tid = Reconfig Thread:1092696416 file = unixinc.c, line = 365 {Tue Feb 13 00:56:19 2007 }
    TRACE: PrepareForConnectsX: still waiting at (0), tid = PrepareForConnectsX:1094797664 file = cmipc.c, line = 279 {Tue Feb 13 00:56:19 2007 }
    TRACE: IncrementEventValue: *(401b97c0) = (2, 2), tid = PrepareForConnectsX:1094797664 file = unixinc.c, line = 365 {Tue Feb 13 00:56:19 2007 }--- End Dump ---

    Set LD_ASSUME_KERNEL before starting the cluster manager:
    export LD_ASSUME_KERNEL=2.4.19
    export ORACLE_HOME=/oracle/app/oracle/product/9.2.0
    rm -f /oracle/app/oracle/product/9.2.0/oracm/log/cm.log
    rm -f /oracle/app/oracle/product/9.2.0/oracm/log/ocmstart.ts
    $ORACLE_HOME/oracm/bin/ocmstart.sh
    tail -f /oracle/app/oracle/product/9.2.0/oracm/log/cm.log
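    For what it's worth (my reading of the log, not something confirmed in the thread): the 9.2 oracm binaries expect the old LinuxThreads threading model, while SLES9's glibc selects NPTL on 2.6 kernels, and that mismatch is the usual reason a second oracm start misbehaves; the query_module() error itself looks non-fatal, since the log continues past it. LD_ASSUME_KERNEL=2.4.19 tells the dynamic linker to load the LinuxThreads libraries instead. You can check which implementation a shell will hand to oracm:
    export LD_ASSUME_KERNEL=2.4.19
    getconf GNU_LIBPTHREAD_VERSION
    # if the setting took effect, this reports a linuxthreads version rather than "NPTL x.y"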

  • RAC, ASM failed to start up on second node, ORA-03113: end-of-file on comm

    I'm installing a two-node RAC on top of ASM.
    When creating the ASM diskgroup, it failed with error CRS-0215: failed to start ASM on node2.
    Oracle 10.2.0.1
    Linux CentOS 4.x
    /u01/app/oracle/product/10.2.0/db_1/bin/dbca -progress_only -configureASM -templateName NO_VALUE -gdbName NO -sid NO -emConfiguration NONE -diskList /dev/raw/raw2,/dev/raw/raw3 -diskGroupName DATA -datafileJarLocation /u01/app/oracle/product/10.2.0/db_1/assistants/dbca/templates -responseFile NO_VALUE -nodeinfo node1,node2 -obfuscatedPasswords true -oratabLocation /u01/app/oracle/product/10.2.0/db_1/install/oratab -asmSysPassword 05dbb0be38ecf8cca822cf3cf99e675448 -redundancy EXTERNAL
    [oracle@node2 bin]$ ./crs_stat -t -v
    Name           Type           R/RA   F/FT   Target    State     Host       
    ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    node1      
    ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    node1      
    ora.node1.gsd  application    0/5    0/0    ONLINE    ONLINE    node1      
    ora.node1.ons  application    0/3    0/0    ONLINE    ONLINE    node1      
    ora.node1.vip  application    0/0    0/0    ONLINE    ONLINE    node1      
    ora....SM2.asm application    0/5    0/0    OFFLINE   OFFLINE              
    ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    node2      
    ora.node2.gsd  application    0/5    0/0    ONLINE    ONLINE    node2      
    ora.node2.ons  application    0/3    0/0    ONLINE    ONLINE    node2      
    ora.node2.vip  application    0/0    0/0    ONLINE    ONLINE    node2  
    I checked the status: ASM is able to start on either node, just not on both at the same time.
    When I try to start it on the second node, with srvctl or SQL*Plus, each attempt gives error ORA-03113.
    Can anyone suggest how to bring up both instances?
    thanks~
    [oracle@node2 bin]$ srvctl stop asm -n node1
    [oracle@node2 bin]$ srvctl start asm -n node1
    [oracle@node2 bin]$ srvctl start asm -n node2
    PRKS-1009 : Failed to start ASM instance "+ASM2" on node "node2", [PRKS-1009 : Failed to start ASM instance "+ASM2" on node "node2", [node2:ora.node2.ASM2.asm:
    node2:ora.node2.ASM2.asm:SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 27 16:14:50 2009
    node2:ora.node2.ASM2.asm:
    node2:ora.node2.ASM2.asm:Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    node2:ora.node2.ASM2.asm:
    node2:ora.node2.ASM2.asm:Enter user-name: Connected to an idle instance.
    node2:ora.node2.ASM2.asm:
    node2:ora.node2.ASM2.asm:SQL> ORA-03113: end-of-file on communication channel
    node2:ora.node2.ASM2.asm:SQL> Disconnected
    node2:ora.node2.ASM2.asm:
    Edited by: zs_hzh on May 27, 2009 1:25 AM

    Is it possible to start ASM on second node with SQL*Plus in NOMOUNT state?
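    For reference, trying it by hand would look roughly like this (a sketch assuming the +ASM2 SID from the output above and a default ORACLE_HOME; adjust paths to your installation):
    export ORACLE_SID=+ASM2
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    $ORACLE_HOME/bin/sqlplus / as sysdba
    SQL> startup nomount;
    SQL> select status from v$instance;
    SQL> alter diskgroup DATA mount;
    If the instance reaches STARTED but ORA-03113 appears only at the mount step, the problem is in diskgroup/raw-device access from node2 rather than in the instance itself.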

  • 10g RAC installation fails....

    Hello All,
    I am new to RAC installation. While installing on a two-node cluster, everything went fine until the Configuration Assistants window. There, while checking the status, the Oracle Clusterware Configuration Assistant failed with the following log:
    INFO: Starting Install on nodes 'server004'
    INFO: Saving Cluster Inventory
    INFO: Running command 'C:\DOCUME~1\uit0076\LOCALS~1\Temp\1\OraInstall2010-05-18_11-41-42AM\oui\bin\setup.exe -jreLoc C:\DOCUME~1\uit0076\LOCALS~1\Temp\1\OraInstall2010-05-18_11-41-42AM\jre\1.4.2 -paramFile C:\DOCUME~1\uit0076\LOCALS~1\Temp\1\OraInstall2010-05-18_11-41-42AM\oui/clusterparam.ini -silent -ignoreSysPrereqs -attachHome -noClusterEnabled ORACLE_HOME=C:\oracle\product\10.2.0\crs ORACLE_HOME_NAME=OraCr10g_home CLUSTER_NODES=server003,server004 CRS=true "INVENTORY_LOCATION=C:\Program Files\Oracle\Inventory" LOCAL_NODE=server004 -remoteInvocation -invokingNodeName server003 -logFilePath "C:\Program Files\Oracle\Inventory/logs" -timestamp 2010-05-18_11-41-42AM' on the nodes 'server004'.
    INFO: Deleting service 'OracleOUIOraCr10g_homeService' on nodes 'server004'.
    INFO: Creating service 'OracleOUIOraCr10g_homeService' on nodes 'server004'.
    INFO: Starting service 'OracleOUIOraCr10g_homeService' on nodes 'server004'.
    INFO: Stopping service OracleOUIOraCr10g_homeService on nodes server004.
    INFO: Deleting service 'OracleOUIOraCr10g_homeService' on nodes 'server004'.
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: cf session retrieved for key: OraCr10g_home oracle.crs
    INFO: RUN_RECOMMENDED_TOOLS FIRST is set to false
    INFO: No of Recommended Tools5
    INFO: plugin-list is created
    INFO: pluginlist is updated for: Oracle Clusterware current size: 1
    INFO: No of ExitOnly Tools in this session: 0
    INFO: cf session for perform has hashcode: 1040982
    INFO: detached tool list getting prepared fo comp: Oracle Clusterware
    INFO: cfsession hashcode for exit only tools: 1040982
    INFO: hashcode for action: 17596838
    INFO: No of ExitOnly Tools: 0
    INFO: saving exit only tools ...
    INFO: no detached only tools in this session
    INFO: exit-only tools are created in single installation
    INFO: no. of sets of tools to be run: 1
    INFO: ca page to be shown: true
    INFO: exitonly tools to be excuted passed: 0
    INFO: Starting to execute configuration assistants
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/install/crssetup.config.bat
    Step 1: checking status of CRS cluster
    Step 2: creating directories (C:\oracle\product\10.2.0\crs)
    Step 3: configuring OCR repository
    ocr upgrade failed with (-1073741819)
    Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/install/crssetup.config.bat has failed
    INFO: Configuration assistant "Oracle Clusterware Configuration Assistant" failed
    *** Starting OUICA ***
    Oracle Home set to C:\oracle\product\10.2.0\crs
    Configuration directory is set to C:\oracle\product\10.2.0\crs\cfgtoollogs. All xml files under the directory will be processed
    INFO: The "C:\oracle\product\10.2.0\crs\cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
    SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
    1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
    2. Fix the errors causing these failures.
    3. Select the failed assistants and click the 'Retry' button to retry them.
    INFO: User Selected: Yes/OK
    I have 1 GB RAM in both nodes.
    Could anyone help me solve this?
    Thanks,
    Derick
    Edited by: user4487322 on May 18, 2010 1:46 AM
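    A small decoding hint (mine, not from the thread): the -1073741819 in the "ocr upgrade failed" line is the signed 32-bit form of 0xC0000005, the Windows access-violation status code, which can narrow the search in support notes. The conversion is easy to verify from any handy bash shell:
    printf '0x%X\n' $(( -1073741819 & 0xFFFFFFFF ))
    # prints 0xC0000005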

    Thank you, Madrid. I am installing the RAC on Windows 2003 SP2 servers. Following the link, everything went fine up to step 12 ("The Configuration Assistants page appears").
    While checking the 'Virtual Private IP Configuration Assistant', the installation failed with the following error:
    INFO: Configuration assistant "Oracle Clusterware Configuration Assistant" succeeded
    INFO: Command = C:\oracle\product\10.2.0\crs/bin/racgons.exe add_config server003:6200 server004:6200
    INFO: Configuration assistant "Oracle Notification Server Configuration Assistant" succeeded
    INFO: Command = C:\oracle\product\10.2.0\crs/bin/oifcfg.exe setif -global "Local Area Connection"/10.28.25.0:public "Local Area Connection 2"/10.10.10.0:cluster_interconnect
    INFO: Configuration assistant "Oracle Private Interconnect Configuration Assistant" succeeded
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/bin/vipca.bat -silent -nodelist "server003,server004" -nodevips "server003/vipserver003/255.255.255.0/Local Area Connection,server004/vipserver004/255.255.255.0/Local Area Connection"
    Execution of the plugin was aborted
    INFO: Configuration assistant "Virtual Private IP Configuration Assistant" was canceled.
    *** Starting OUICA ***
    Oracle Home set to C:\oracle\product\10.2.0\crs
    Configuration directory is set to C:\oracle\product\10.2.0\crs\cfgtoollogs. All xml files under the directory will be processed
    INFO: The "C:\oracle\product\10.2.0\crs\cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
    SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
    1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
    2. Fix the errors causing these failures.
    3. Select the failed assistants and click the 'Retry' button to retry them.
    INFO: User Selected: Yes/OK
    INFO: Starting to execute configuration assistants
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/bin/vipca.bat -silent -nodelist "server003,server004" -nodevips "server003/vipserver003/255.255.255.0/Local Area Connection,server004/vipserver004/255.255.255.0/Local Area Connection"
    Execution of the plugin was aborted
    INFO: Configuration assistant "Virtual Private IP Configuration Assistant" was canceled.
    INFO: Starting to execute configuration assistants
    INFO: Command = C:\WINDOWS\system32\cmd /c call C:\oracle\product\10.2.0\crs/bin/vipca.bat -silent -nodelist "server003,server004" -nodevips "server003/vipserver003/255.255.255.0/Local Area Connection,server004/vipserver004/255.255.255.0/Local Area Connection"
    *** Starting OUICA ***
    Oracle Home set to C:\oracle\product\10.2.0\crs
    Configuration directory is set to C:\oracle\product\10.2.0\crs\cfgtoollogs. All xml files under the directory will be processed
    INFO: The "C:\oracle\product\10.2.0\crs\cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
    SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
    1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
    2. Fix the errors causing these failures.
    3. Select the failed assistants and click the 'Retry' button to retry them.
    INFO: User Selected: Yes/OK
    I am able to ping all my IPs from both nodes, with the host names and aliases specified in the hosts file.
    Is there anything else to be checked to fix the "Virtual Private IP Configuration Assistant"? Any help?
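    One thing worth ruling out before retrying (an assumption on my part, not something the log proves): VIPCA plumbs the VIP addresses itself and can abort if they are already up and answering, so the VIPs should NOT respond to ping before the assistant runs. A quick check and a manual interactive re-run, using the names from the log, would be:
    ping -n 1 vipserver003
    ping -n 1 vipserver004
    rem both pings should time out before VIPCA has run
    C:\oracle\product\10.2.0\crs\bin\vipca.bat
    (run as a local Administrator and supply the two VIPs when prompted)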
