ACFS on RAC:
Hello,
in our RAC environment we use ACFS to store database backups. For monitoring and administering our databases we use Cloud Control 12c.
This morning Cloud Control reported the following error:
ASM Cluster File System Corrupt (Volume Device /dev/asm/v03_backup-33)
My understanding is that the ASM Cluster File System using volume device /dev/asm/v03_backup-33 has sections that are corrupt.
Running the command chkdg DG03_BACKUP under ASMCMD did not report any errors. Doesn't ASM automatically check whether the file system is corrupt? Either way, the error message persists even after reevaluating the alert in Cloud Control.
The error occurred in a production environment, so I have to be careful when running checks.
My question is: do we really have a corrupt ACFS, and how can this be checked?
Any help will be appreciated
Rgds
JH
Hi Sebastian,
thanks for your fast reply...
I've tried to execute the following commands:
[root]# fsck.acfs -v /dev/asm/v03_backup-33
version = 11.2.0.3.0
fsck.acfs: temporary directory '/usr/tmp'
fsck.acfs: current directory '/root'
fsck.acfs: ACFS-00511: /dev/asm/v03_backup-33 is mounted on at least one node of the cluster.
fsck.acfs: ACFS-07656: unable to continue
It seems fsck can only be executed when the volume is not mounted on any node...
Rgds
JH
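A rough sketch of how the check could be done (untested here; the volume device comes from the alert above, and whether your version supports these exact srvctl options should be verified against the documentation):

```shell
# Unmount the ACFS file system on every node via clusterware
# (backups to this volume must be paused while it is offline):
srvctl stop filesystem -d /dev/asm/v03_backup-33

# Run the checker in read-only mode first (-n answers "no" to all
# repair prompts, so nothing is modified):
fsck.acfs -n /dev/asm/v03_backup-33

# Only if problems are actually reported, run a repairing pass:
# fsck.acfs -a /dev/asm/v03_backup-33

# Remount the file system on all nodes:
srvctl start filesystem -d /dev/asm/v03_backup-33
```

Note that chkdg validates ASM disk group metadata, not the ACFS file system structures inside the volume, which may be why it reported nothing.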
Similar Messages
-
Sapinst under RAC 11.2 using ACFS and ASM
Hi all,
we are currently doing some installation tests (BankingService Netweaver 7.02) on a 2-node RAC cluster with the current sapinst, based on OEL 5.5, Oracle Grid 11.2.0.2 and Oracle binaries 11.2.0.2.
We are doing an HA installation and have put all DB-related files in ASM.
We put all failover-critical file systems on ACFS.
We are not using ASM mirroring (redundancy is SAN-based).
Of course we know that this combination is not (yet) SAP-certified - but it works fine.
We are using different Oracle users: a grid owner (oragrid) and a DB owner (standard: ora<sid>).
We faced some privilege problems regarding the cluster resources for ora<sid>; apart from that, the installation ran without any issues.
When sapinst asked for the sapdata directories, we answered with the ASM structure.
So sapinst creates the +ASM directories (where needed) and sends the corresponding CREATE SQL commands to the database, which were correctly interpreted against ASM.
So here some questions:
Does anybody know when there will be a sapinst with complete ASM support?
Should we use just one oracle user?
Is there a naming standard for the RAC instances?
Are there brtools available with complete ASM support?
Thanks and Regards
Thomas
Hi Sebastian,
I know that presentation and I'm waiting for 7.03...
This is the newest sapinst I can find under NW 7.0:
SAPinst build information:
abi version : 722
make variant: 720_REL
build : 1201786
compile time: Nov 5 2010 02:07:45
Do you think this is already the right one?
Thanks
Thomas -
Oracle Database 11gr2 rac on solaris 10 using ACFS and asm as storage
Can I get any step-by-step document to install 11gR2 RAC on Solaris 10?
My database is a two-node RAC. I am using ASM as storage, so I need a document that is very easy to understand.
Thanks in advance.
Hi,
Can I get any step-by-step document to install 11gR2 RAC on Solaris 10?
My database is a two-node RAC. I am using ASM as storage, so I need a document that is very easy to understand.
Refer to the link below:
http://www.oraclemasters.in/?p=961
Configure storage as per your requirement.
thanks,
X A H E E R -
Ora.reco.acfsvol.acfs only on one node on RAC on ODA
We have an ODA (old model); due to a power failure in the data center, both boot disks in one node went faulty.
After the chassis, RAID controllers and disks were replaced (by an Oracle Field Engineer), crsctl stat res -t reports the following:
[grid@XXXXXXXXA ~]$ crsctl stat res -t
TARGET NAME SERVER STATE STATE_DETAILS
Local Resources
ora.reco.acfsvol.acfs
ONLINE ONLINE XXXXXXXXXA mounted on /cloudfs
OFFLINE OFFLINE XXXXXXXXXB volume /cloudfs off
is that correct?
Oracle Support referred me to MOS note 1319263.1, but that's for Exadata...
Thx
Christoph
(I masked the hostnames)
No, this is not correct. Your resource should be online on both nodes.
What happens if you try and start the resource manually using srvctl start filesystem?
Have you checked to see if your volume is online? -
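The two checks suggested above could look roughly like this (a sketch, not verified on an ODA; the disk group RECO and volume name ACFSVOL are inferred from the resource name ora.reco.acfsvol.acfs, and the volume device suffix <n> is a placeholder you would read from volinfo):

```shell
# On the repaired node, check whether the ADVM volume is enabled;
# "volinfo" prints a State field (ENABLED/DISABLED):
asmcmd volinfo -G RECO ACFSVOL

# If the volume is disabled, enable it:
asmcmd volenable -G RECO ACFSVOL

# Then start the file system resource on the repaired node only:
srvctl start filesystem -d /dev/asm/acfsvol-<n> -n XXXXXXXXXB
```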
Gns is getting failed with error CRS-2632 during RAC installation
Hello guys, I am new to Oracle RAC and I am trying to configure a two-node Oracle 11gR2 RAC setup on OEL 5.4 using GNS. Everything works great until I execute the
root.sh script on the first node.
It gives me this error:
CRS-2674: Start of 'ora.gns' on 'host01' failed
CRS-2632: There are no more servers to try to place resource 'ora.gns' on that would satisfy its placement policy
start gns ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
When I check the status of the cluster resources, I get this output:
[root@host01 ~]# crs_stat -t
Name Type Target State Host
ora.DATA.dg ora....up.type ONLINE ONLINE host01
ora....N1.lsnr ora....er.type OFFLINE OFFLINE
ora....N2.lsnr ora....er.type OFFLINE OFFLINE
ora....N3.lsnr ora....er.type OFFLINE OFFLINE
ora.asm ora.asm.type ONLINE ONLINE host01
ora.eons ora.eons.type ONLINE ONLINE host01
ora.gns ora.gns.type ONLINE OFFLINE
ora.gns.vip ora....ip.type ONLINE OFFLINE
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE host01
ora.host01.gsd application OFFLINE OFFLINE
ora.host01.ons application ONLINE ONLINE host01
ora.host01.vip ora....t1.type ONLINE ONLINE host01
ora....network ora....rk.type ONLINE ONLINE host01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE host01
ora....ry.acfs ora....fs.type OFFLINE OFFLINE
ora.scan1.vip ora....ip.type OFFLINE OFFLINE
ora.scan2.vip ora....ip.type OFFLINE OFFLINE
ora.scan3.vip ora....ip.type OFFLINE OFFLINE
These are my GNS configuration file entries
vi /var/named/chroot/etc/named.conf
options {
listen-on port 53 { 192.9.201.59; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
allow-query-cache { any; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "localdomain" IN {
type master;
file "localdomain.zone";
allow-update { none; };
};
zone "localhost" IN {
type master;
file "localhost.zone";
allow-update { none; };
};
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.ip6.local";
allow-update { none; };
};
zone "255.in-addr.arpa" IN {
type master;
file "named.broadcast";
allow-update { none; };
};
zone "0.in-addr.arpa" IN {
type master;
file "named.zero";
allow-update { none; };
};
zone "example.com" IN {
type master;
file "forward.zone";
allow-transfer { 192.9.201.180; };
};
zone "201.9.192.in-addr.arpa" IN {
type master;
file "reverse.zone";
};
zone "0.0.10.in-addr.arpa" IN {
type master;
file "reverse1.zone";
};
vi /var/named/chroot/var/named/forward.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS server1
IN A 192.9.201.59
server1 IN A 192.9.201.59
host01 IN A 192.9.201.181
host02 IN A 192.9.201.182
host03 IN A 192.9.201.183
openfiler IN A 192.9.201.184
host01-priv IN A 10.0.0.2
host02-priv IN A 10.0.0.3
host03-priv IN A 10.0.0.4
vi /var/named/chroot/var/named/reverse.zone
$ORIGIN cluster01.example.com.
@ IN NS cluster01-gns.cluster01.example.com.
cluster01-gns IN A 192.9.201.180
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
59 IN PTR server1.example.com.
184 IN PTR openfiler.example.com.
181 IN PTR host01.example.com.
182 IN PTR host02.example.com.
183 IN PTR host03.example.com.
vi /var/named/chroot/var/named/reverse1.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
2 IN PTR host01-priv.example.com.
3 IN PTR host02-priv.example.com.
4 IN PTR host03-priv.example.com.
Please suggest what I am doing wrong.
Edited by: 1001408 on Apr 21, 2013 9:17 AM
Edited by: 1001408 on Apr 21, 2013 9:22 AM
Hello guys, I finally found the mistake I was making:
while configuring the public IP for the nodes I was not setting a default gateway. I assumed that, since all these machines are in the same network with the same IP range, they would not need a gateway, but that assumption did not hold with Oracle. I'm finally happy to see 11gR2 with GNS running on my personal laptop.
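The two things that bit here (missing default gateway, GNS delegation) can be sanity-checked up front. A rough sketch, assuming the DNS server 192.9.201.59 and GNS VIP 192.9.201.180 from the config above; the SCAN name cluster01-scan is an assumption based on the usual GNS naming:

```shell
# 1. Each node needs a default gateway, even on a flat lab network;
#    this should print a "default via ..." route:
ip route show default

# 2. The parent zone must resolve the GNS VIP name:
nslookup cluster01-gns.cluster01.example.com 192.9.201.59

# 3. After CRS is up, SCAN names should resolve through GNS
#    (three addresses, rotating):
nslookup cluster01-scan.cluster01.example.com 192.9.201.59
```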
cheers
Rahul -
Started a 11.2.0.2.0 Grid Infrastructure installation for a 2-node RAC on HP-UX 11.31 Itanium 64.
Copying the software to the remote node and linking libraries completed successfully (up to 76%), but I got an error while executing root.sh on node 1:
sph1erp:/oracle/11.2.0/grid #sh root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/11.2.0/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'sph1erp'
CRS-2676: Start of 'ora.mdnsd' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'sph1erp'
CRS-2676: Start of 'ora.gpnpd' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'sph1erp'
CRS-2672: Attempting to start 'ora.gipcd' on 'sph1erp'
CRS-2676: Start of 'ora.gipcd' on 'sph1erp' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'sph1erp'
CRS-2672: Attempting to start 'ora.diskmon' on 'sph1erp'
CRS-2676: Start of 'ora.diskmon' on 'sph1erp' succeeded
CRS-2676: Start of 'ora.cssd' on 'sph1erp' succeeded
ASM created and started successfully.
Disk Group OCRVOTE created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk ab847ed2b4f04f2dbfb875226d2bb194.
Successful addition of voting disk 85c05a5b30384f8dbff48cc069de7a7c.
Successful addition of voting disk 649196fbdd614f9cbf26a9a0e6670a6e.
Successful addition of voting disk 8815dfcee2e64f64bf00b9c76626ab41.
Successful addition of voting disk 8ce55fe5534f4f77bfa9f54187592707.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE ab847ed2b4f04f2dbfb875226d2bb194 (/dev/oracle/ocrvote1) [OCRVOTE]
2. ONLINE 85c05a5b30384f8dbff48cc069de7a7c (/dev/oracle/ocrvote2) [OCRVOTE]
3. ONLINE 649196fbdd614f9cbf26a9a0e6670a6e (/dev/oracle/ocrvote3) [OCRVOTE]
4. ONLINE 8815dfcee2e64f64bf00b9c76626ab41 (/dev/oracle/ocrvote4) [OCRVOTE]
5. ONLINE 8ce55fe5534f4f77bfa9f54187592707 (/dev/oracle/ocrvote5) [OCRVOTE]
Located 5 voting disk(s).
Start of resource "ora.cluster_interconnect.haip" failed
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'sph1erp'
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
Start action for HAIP aborted
CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'sph1erp' failed
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'sph1erp'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'sph1erp' succeeded
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Clusterware stack
Failed to start High Availability IP at /oracle/11.2.0/grid/crs/install/crsconfig_lib.pm line 1046.
*/oracle/11.2.0/grid/perl/bin/perl -I/oracle/11.2.0/grid/perl/lib -I/oracle/11.2.0/grid/crs/install /oracle/11.2.0/grid/crs/install/rootcrs.pl execution failed*
sph1erp:/oracle/11.2.0/grid #
Last few lines from CRS Log for node 1, where error came
[ctssd(6467)]CRS-2401:The Cluster Time Synchronization Service started on host sph1erp.
2011-02-25 23:04:16.491
[oracle/11.2.0/grid/bin/orarootagent.bin(6423)]CRS-5818:Aborted command 'start for resource: ora.cluster_interconnect.haip 1 1' for resource 'ora.cluster_int
erconnect.haip'. Details at (:CRSAGF00113:) {0:0:178} in */oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log.*
2011-02-25 23:04:20.521
[ohasd(5513)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.cluster_interconnect.haip'. Details at (:CRSPE00111:) {0:0:178} in
*/oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log.*
Few lines from */oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log.*
=====================================================================================================
2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Starting Probe for ip 169.254.74.54
2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Transitioning to Probe State
2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} [NetHAMain] thread stopping
2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} Thread:[NetHAMain]isRunning is reset to false here
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAMain]stop }
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} thread cleaning up
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} pausing thread
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} posting thread
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} [NetHAWork] thread stopping
2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} Thread:[NetHAWork]isRunning is reset to false here
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
2011-02-25 23:04:17.891: [ora.cluster_interconnect.haip][12] {0:0:178} [start] Start of HAIP aborted
2011-02-25 23:04:17.892: [ AGENT][12] {0:0:178} UserErrorException: Locale is
2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsnUtils::error Exception type=2 string=
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
Start action for HAIP aborted
2011-02-25 23:04:17.893: [ AGFW][12] {0:0:178} sending status msg [CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the foll
owing error:
Start action for HAIP aborted
] for start for resource: ora.cluster_interconnect.haip 1 1
2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsn_agent::start }
2011-02-25 23:04:17.894: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check {
2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check } - 0
2011-02-25 23:04:19.573: [ AGFW][10] {0:0:154} Agent received the message: AGENT_HB[Engine] ID 12293:669
2011-02-25 23:04:20.510: [ora.cluster_interconnect.haip][18] {0:0:178} [start] got lock
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] tryActionLock }
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] abort }
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] clsn_agent::abort }
2011-02-25 23:04:20.511: [ AGFW][18] {0:0:178} Command: start for resource: ora.cluster_interconnect.haip 1 1 completed with status: TIMEDOUT
2011-02-25 23:04:20.512: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init enter {
2011-02-25 23:04:20.513: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init exit }
2011-02-25 23:04:20.517: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
2011-02-25 23:04:20.519: [ USRTHRD][8] {0:0:178} Ocr Context init default level 23886304
2011-02-25 23:04:20.519: [ default][8]clsvactversion:4: Retrieving Active Version from local storage.
[ CLWAL][8]clsw_Initialize: OLR initlevel [70000]
Few lines from */oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log.*
=====================================================================================================
2011-02-25 23:04:21.627: [UiServer][30] {0:0:180} Done for ctx=6000000002604ce0
2011-02-25 23:04:21.642: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:26.139: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:26.139: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
erconnect'
2011-02-25 23:04:26.973: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:26.973: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:26.992: [UiServer][30] {0:0:181} processMessage called
2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending message to PE. ctx= 6000000001b440f0
2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending command to PE: 67
2011-02-25 23:04:26.994: [ CRSPE][29] {0:0:181} Processing PE command id=173. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:26.997: [UiServer][30] {0:0:181} Done for ctx=6000000001b440f0
2011-02-25 23:04:27.012: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:31.135: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:31.135: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
erconnect'
2011-02-25 23:04:32.318: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:32.318: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:32.332: [UiServer][30] {0:0:182} processMessage called
2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending message to PE. ctx= 6000000001b45ef0
2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending command to PE: 68
2011-02-25 23:04:32.334: [ CRSPE][29] {0:0:182} Processing PE command id=174. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:32.338: [UiServer][30] {0:0:182} Done for ctx=6000000001b45ef0
2011-02-25 23:04:32.352: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:36.155: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:36.155: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
erconnect'
2011-02-25 23:04:37.683: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:37.683: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (A
DDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:37.702: [UiServer][30] {0:0:183} processMessage called
2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending message to PE. ctx= 6000000002604ce0
2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending command to PE: 69
2011-02-25 23:04:37.704: [ CRSPE][29] {0:0:183} Processing PE command id=175. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:37.708: [UiServer][30] {0:0:183} Done for ctx=6000000002604ce0
2011-02-25 23:04:37.722: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:41.156: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:41.156: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_int
erconnect'
What could be the issue?
Experts, please help; this setup is for a production environment.
Thanks
Regards,
Manish
Thanks Sebastian for your input.
Yes, my lan2 is used for cluster_interconnect and has netmask 255.255.255.240.
Below are the IPs used for RAC:
Public
Node1: 10.10.1.173/255.255.240.0
Node2: 10.10.1.174/255.255.240.0
Private
Node1: 10.10.16.50/255.255.255.240
Node2: 10.10.16.51/255.255.255.240
Virtual
Node1: 10.10.1.191/255.255.240.0
Node2: 10.10.1.192/255.255.240.0
SCAN (Defined in DNS)
10.10.1.193/255.255.240.0
10.10.1.194/255.255.240.0
10.10.1.195/255.255.240.0
As you said, I will scrap the GI software and try again with 255.255.255.0.
I believe Redundant Interconnect and ora.cluster_interconnect.haip were introduced in version 11.2.0.2.0.
Oracle says:
Redundant Interconnect without any 3rd-party IP failover technology (bond, IPMP or similar) is supported natively by Grid Infrastructure starting from 11.2.0.2. Multiple private network adapters can be defined either during the installation phase or afterward using the oifcfg. Oracle Database, CSS, OCR, CRS, CTSS, and EVM components in 11.2.0.2 employ it automatically.
Grid Infrastructure can activate a maximum of four private network adapters at a time even if more are defined. The ora.cluster_interconnect.haip resource will start one to four link local HAIP on private network adapters for interconnect communication for Oracle RAC, Oracle ASM, and Oracle ACFS etc.
Grid automatically picks link-local addresses from the reserved 169.254.*.* subnet for HAIP, and it will not attempt to use any 169.254.*.* address that is already in use for another purpose. With HAIP, interconnect traffic is by default load-balanced across all active interconnect interfaces, and the corresponding HAIP address fails over transparently to another adapter if one fails or becomes non-communicative.
The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster. If there is only one active private network, Grid will create one; if two, Grid will create two; if more than two, Grid will create four HAIPs. The number of HAIPs won't change even if more private network adapters are activated later; a restart of clusterware on all nodes is required for new adapters to become effective.
In my setup I have NIC teaming for both the public and private interfaces. I am thinking of breaking the NIC teaming, because HAIP looks for the next available NIC and cannot find one, as all four NICs are already in use by OS-level teaming.
My only concern: since I am going to change the subnet of the private network, do I also have to change the private IP addresses?
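If only the netmask/subnet registration changes, the cluster's view of the private network can be updated with oifcfg rather than a reinstall. A possible sequence (an untested sketch; interface name lan2 and the subnets come from this thread, so verify them against your own `oifcfg getif` output first):

```shell
# Show the interfaces the cluster currently has registered:
oifcfg getif

# Register the private interface on the new subnet ...
oifcfg setif -global lan2/10.10.16.0:cluster_interconnect

# ... then remove the old registration:
oifcfg delif -global lan2/10.10.16.48

# A clusterware restart on all nodes is required afterwards.
```

The node's actual private IPs only need to change if they fall outside the new subnet; the cluster cares about the registered subnet, not the specific host addresses.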
Thanks for the Support...
Regards,
Manish -
RAC - Oracle Grid Infrastructure configure failed
Hi, I am trying to install a 2-node RAC on Oracle VMs. During the -preinst check before the installation there were a few issues, which were resolved (e.g. user equivalence). The installation then failed at the step "Configure Oracle Grid Infrastructure for a cluster". After it failed at this step, subsequent steps failed too; I asked OUI to ignore them and ran both post-installation scripts. Then I ran the post crsinst check, which also failed. Pasting below the output of the root.sh script, post crsinst and other checks.
[root@bsfrac01 grid]# sh root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-13 00:11:55: Parsing the host name
2011-02-13 00:11:55: Checking for super user privileges
2011-02-13 00:11:55: User has super user privileges
Using configuration parameter file: /u01/app/11.2/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'bsfrac01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'bsfrac01'
CRS-2676: Start of 'ora.mdnsd' on 'bsfrac01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'bsfrac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'bsfrac01'
CRS-2676: Start of 'ora.gpnpd' on 'bsfrac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'bsfrac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'bsfrac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'bsfrac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'bsfrac01'
CRS-2676: Start of 'ora.diskmon' on 'bsfrac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'bsfrac01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'bsfrac01'
CRS-2676: Start of 'ora.ctssd' on 'bsfrac01' succeeded
ASM created and started successfully.
DiskGroup DATA1 created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'bsfrac01'
CRS-2676: Start of 'ora.crsd' on 'bsfrac01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 0ea2052d8a714fd7bf46d9d5c785483e.
Successfully replaced voting disk group with +DATA1.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 0ea2052d8a714fd7bf46d9d5c785483e (ORCL:DISK1) [DATA1]
Located 1 voting disk(s).
*Failed to rmtcopy "/tmp/filekRIMbG" to "/u01/app/11.2/grid/gpnp/manifest.txt" for nodes {bsfrac01,bsfrac02}, rc=256*
*Failed to rmtcopy "/u01/app/11.2/grid/gpnp/bsfrac01/profiles/peer/profile.xml" to "/u01/app/11.2/grid/gpnp/profiles/peer/profile.xml" for nodes {bsfrac01,bsfrac02}, rc=256*
rmtcopy aborted
Failed to promote local gpnp setup to other cluster nodes
CRS-2673: Attempting to stop 'ora.crsd' on 'bsfrac01'
CRS-2677: Stop of 'ora.crsd' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'bsfrac01'
CRS-2677: Stop of 'ora.asm' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'bsfrac01'
CRS-2677: Stop of 'ora.ctssd' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'bsfrac01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'bsfrac01'
CRS-2677: Stop of 'ora.cssd' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'bsfrac01'
CRS-2677: Stop of 'ora.gpnpd' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'bsfrac01'
CRS-2677: Stop of 'ora.gipcd' on 'bsfrac01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'bsfrac01'
CRS-2677: Stop of 'ora.mdnsd' on 'bsfrac01' succeeded
Initial cluster configuration failed. See /u01/app/11.2/grid/cfgtoollogs/crsconfig/rootcrs_bsfrac01.log for details
[root@bsfrac01 grid]#
[oracle@bsfrac01 bin]$ ./cluvfy stage -post crsinst -n bsfrac01,bsfrac02 -verbose
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "bsfrac01"
Destination Node Reachable?
bsfrac01 yes
bsfrac02 yes
Result: Node reachability check passed from node "bsfrac01"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
bsfrac01 passed
bsfrac02 passed
Result: User equivalence check passed for user "oracle"
ERROR:
PRKC-1094 : Failed to retrieve the active version of crs: {0}
Checking time zone consistency...
Time zone consistency check passed.
ERROR:
PRKC-1093 : Failed to retrieve the version of crs software on node "java.io.IOException: /u01/app/11.2.0/grid/bin/crsctl: not found
" : {1}
ERROR:
Cluster manager integrity check failed
PRVF-5434 : Cannot identify the current CRS software version
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
ERROR:
PRVF-5197 : Failed to retrieve voting disk locationsPRKC-1092 : Failed to retrieve the location of votedisks: java.io.IOException: /u01/app/11.2.0/grid/bin/crsctl: not found
Result: UDev attributes check failed for Voting Disk locations
Check default user file creation mask
Node Name Available Required Comment
bsfrac01 0022 0022 passed
bsfrac02 0022 0022 passed
Result: Default user file creation mask check passed
Checking cluster integrity...
Node Name
bsfrac01
Cluster integrity check failed This check did not run on the following node(s):
bsfrac02
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ERROR:
PRKC-1094 : Failed to retrieve the active version of crs: {0}
ERROR:
PRVF-5300 : Failed to retrieve active version for CRS on this node
OCR integrity check failed
Checking CRS integrity...
ERROR:
PRKC-1094 : Failed to retrieve the active version of crs: {0}
ERROR:
PRVF-5300 : Failed to retrieve active version for CRS on this node
CRS integrity check failed
OCR detected on ASM. Running ACFS Integrity checks...
Starting check to see if ASM is running on all cluster nodes...
PRVF-5137 : Failure while checking ASM status on node "bsfrac01"
PRVF-5137 : Failure while checking ASM status on node "bsfrac02"
Starting Disk Groups check to see if at least one Disk Group configured...
PRVF-5112 : An Exception occurred while checking for Disk Groups
PRVF-5114 : Disk Group check failed. No Disk Groups configured
Task ACFS Integrity check failed
Checking Oracle Cluster Voting Disk configuration...
ERROR:
PRKC-1093 : Failed to retrieve the version of crs software on node "java.io.IOException: /u01/app/11.2.0/grid/bin/crsctl: not found
" : {1}
ERROR:
PRVF-5434 : Cannot identify the current CRS software version
PRVF-5431 : Oracle Cluster Voting Disk configuration check failed
Checking to make sure user "oracle" is not in "root" group
Node Name Status Comment
bsfrac01 does not exist passed
bsfrac02 does not exist passed
Result: User "oracle" is not part of "root" group. Check passed
Post-check for cluster services setup was unsuccessful on all the nodes.
[oracle@bsfrac01 bin]$ /u01/app/11.2/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 408
Available space (kbytes) : 261712
ID : 1671840043
Device/File Name : +DATA1
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
ASM looks to be up and running..
[oracle@bsfrac01 bin]$ /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
DISK6
[oracle@bsfrac01 bin]$ /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
Please help.
Before installation, did you configure the private interconnect on both nodes to the same network adapter?
For example, if the private interconnect is on eth0 on node 1, then node 2 should use eth0 as well.
For the private interconnect, use the host-only option on both nodes in the network configuration page of VMware or VirtualBox,
and the public network can be bridged.
Moreover, if you are installing on a laptop it is better to configure SSH using the OUI rather than doing it manually, as it saves time.
The private and public networks should not have the same range of IP addresses: for example, if the public addresses are like 192.168.2.222/255.255.255.0, then the private addresses have to be different, like 10.10.1.2/255.0.0.0 (this is just an example).
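That subnet point can be sanity-checked with plain shell arithmetic. The addresses below are the example values from this thread; the `net_of` helper is purely illustrative, not part of any Oracle tooling:

```shell
#!/bin/sh
# Illustrative check: public and private interconnect must sit in
# different networks. Addresses are the example values from the post.
net_of() {
  # compute the network address of IP $1 under netmask $2
  IFS=. read -r a b c d <<EOF
$1
EOF
  IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}

pub=$(net_of 192.168.2.222 255.255.255.0)   # 192.168.2.0
priv=$(net_of 10.10.1.2 255.0.0.0)          # 10.0.0.0
if [ "$pub" = "$priv" ]; then
  echo "ERROR: public and private interconnect share network $pub"
else
  echo "OK: public=$pub private=$priv"
fi
```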
You also have to configure NTP.
Anyway, try installing Oracle RAC on VirtualBox following the steps on the website below; they are pretty straightforward...
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php -
11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2
Hi there,
I am trying to create a two node 11g R2 RAC on OEL 5.5 (32-bit) using VMWare virtual machines. I have correctly configured both nodes. The Cluster Verification utility returns the following error [which I believe can be ignored]:
Checking daemon liveness...
Liveness check failed for "ntpd"
Check failed on nodes:
rac2,rac1
PRVF-5415 : Check to see if NTP daemon is running failed
Clock synchronization check using Network Time Protocol(NTP) failed
Pre-check for cluster services setup was unsuccessful on all the nodes.
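For reference, this particular NTP liveness check usually fails because ntpd is either not running or running without clock slewing. A sketch of the conventional fix on OEL/RHEL, assuming the Red Hat-style config path:

```shell
# /etc/sysconfig/ntpd  (Red Hat / OEL; path and the extra flags here are
# the distro defaults, shown as an assumption, not taken from this thread)
# -x makes ntpd slew the clock rather than step it, which is what the
# 11gR2 cluvfy NTP check expects; restart ntpd after changing this line.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```

Alternatively, stopping ntpd and removing its configuration lets the Cluster Time Synchronization Service run in active mode instead of observer mode.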
During the Grid Infrastructure installation (for a Cluster option), things go very smoothly until I run "root.sh" on node# 2. orainstRoot.sh ran OK on both nodes. "root.sh" ran OK on node# 1 and ends with:
Checking swap space: must be greater than 500 MB. Actual 1967 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
*'UpdateNodeList' was successful.*
*[root@rac1 ~]#*
"root.sh" fails on rac2 (2nd node) with following error:
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
Timed out waiting for the CRS stack to start.
*[root@rac2 ~]#*
I know this info may not be enough to figure out what the problem may be. Please let me know what I should look for to find the issue and fix it. It's been almost two weeks now :-(
Regards
Amer
Hi Zheng,
ocssd.log is HUGE. So I am putting few of the last lines in the log file hoping they may give some clue:
2011-07-04 19:49:24.007: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500 cur_ms 36118424 lastalive 36116244
2011-07-04 19:49:26.005: [ CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
2011-07-04 19:49:26.006: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500 cur_ms 36120424 lastalive 36116244
2011-07-04 19:49:27.997: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:49:27.997: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:49:33.001: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:49:33.001: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:49:37.996: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:49:37.996: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:49:43.000: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:49:43.000: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:49:48.004: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:49:48.005: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:12.003: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:12.008: [ CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
2011-07-04 19:50:12.009: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
2011-07-04 19:50:15.796: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500 cur_ms 36170214 lastalive 36168084
2011-07-04 19:50:16.996: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:50:16.996: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:17.826: [ CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
2011-07-04 19:50:17.826: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500 cur_ms 36172244 lastalive 36170674
2011-07-04 19:50:21.999: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:50:21.999: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:26.011: [ CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
2011-07-04 19:50:26.011: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500 cur_ms 36180424 lastalive 36178804
2011-07-04 19:50:27.004: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:50:27.004: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:28.002: [ CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
2011-07-04 19:50:28.002: [ CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500 cur_ms 36182414 lastalive 36180624
2011-07-04 19:50:31.998: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:50:31.998: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
2011-07-04 19:50:37.001: [ CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
2011-07-04 19:50:37.002: [ CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
*<end of log file>*
And the alertrac2.log contains:
*[root@rac2 rac2]# cat alertrac2.log*
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2011-07-02 16:43:51.571
[client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
2011-07-02 16:43:57.125
[client(16134)]CRS-2101:The OLR was formatted using version 3.
2011-07-02 16:44:43.214
[ohasd(16188)]CRS-2112:The OLR service started on node rac2.
2011-07-02 16:45:06.446
[ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
2011-07-02 16:53:30.061
[ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2011-07-02 16:53:55.042
[cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
2011-07-02 16:54:38.334
[cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
[cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
2011-07-02 16:54:38.464
[cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
2011-07-02 16:54:39.174
[ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
2011-07-02 16:55:43.430
[cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
2011-07-02 16:56:02.852
[cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
2011-07-02 16:56:04.061
[cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
2011-07-02 16:56:18.350
[cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
2011-07-02 16:56:29.283
[ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
2011-07-02 16:56:29.551
[ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2011-07-02 16:56:29.615
[ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 16:56:29.616
[ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 16:56:29.641
[ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
[client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(18056)]CRS-10001:ACFS-9322: done.
2011-07-02 17:01:40.963
[ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
[client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(18594)]CRS-10001:ACFS-9322: done.
2011-07-02 17:27:46.385
[ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 17:27:46.385
[ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 17:46:48.717
[crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:46:49.641
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:46:51.459
[crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:46:51.776
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:46:53.928
[crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:46:53.956
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:46:55.834
[crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:46:56.273
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:46:57.762
[crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:46:58.631
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:00.259
[crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:00.968
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:02.513
[crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:03.309
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:05.081
[crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:05.770
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:07.796
[crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:08.257
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:10.733
[crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:11.739
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:13.547
[crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 17:47:14.111
[ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 17:47:14.112
[ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
2011-07-02 17:58:18.459
[ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 17:58:18.459
[ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
[client(26883)]CRS-10001:ACFS-9200: Supported
2011-07-02 18:13:34.627
[ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
2011-07-02 18:13:42.368
[cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
2011-07-02 18:15:13.877
[client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
2011-07-02 18:15:14.011
[client(27222)]CRS-2101:The OLR was formatted using version 3.
2011-07-02 18:15:23.226
[ohasd(27261)]CRS-2112:The OLR service started on node rac2.
2011-07-02 18:15:23.688
[ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2011-07-02 18:15:24.064
[ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
2011-07-02 18:16:29.761
[ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2011-07-02 18:16:30.190
[gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
2011-07-02 18:16:41.561
[cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
2011-07-02 18:16:49.111
[cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
2011-07-02 18:16:49.166
[cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
[cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
2011-07-02 18:17:01.122
[cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
2011-07-02 18:17:06.917
[ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
2011-07-02 18:17:23.602
[mdnsd(28485)]CRS-5602:mDNS service stopping by request.
2011-07-02 18:17:36.217
[gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
2011-07-02 18:17:43.673
[cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
2011-07-02 18:17:49.826
[cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
2011-07-02 18:17:49.865
[cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
2011-07-02 18:18:03.049
[cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
2011-07-02 18:18:06.160
[ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
2011-07-02 18:18:06.220
[ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2011-07-02 18:18:06.238
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 18:18:06.239
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 18:18:06.794
[ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
[client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
[client(28895)]CRS-10001:ACFS-9322: done.
2011-07-02 18:18:33.465
[crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:33.575
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:35.757
[crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:36.129
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:38.596
[crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:39.146
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:41.058
[crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:41.435
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:44.255
[crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:45.165
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:47.013
[crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:47.409
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:50.071
[crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:50.118
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:51.843
[crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:52.373
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:54.361
[crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:54.772
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:56.620
[crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:57.104
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:58.997
[crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
2011-07-02 18:18:59.301
[ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
2011-07-02 18:18:59.302
[ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
2011-07-02 18:49:58.070
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 18:49:58.070
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 19:21:33.362
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 19:21:33.362
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 19:52:05.271
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 19:52:05.271
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 20:22:53.696
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 20:22:53.696
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 20:53:43.949
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 20:53:43.949
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 21:24:32.990
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 21:24:32.990
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 21:55:21.907
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 21:55:21.908
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 22:26:45.752
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 22:26:45.752
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 22:57:54.682
[ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
2011-07-02 22:57:54.683
[ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2011-07-02 23:07:28.603
[cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.020 seconds
2011-07-02 23:07:35.621
[cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval. Removal of this node from cluster in 7.010 seconds
2011-07-02 23:07:39.629
[cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 3.000 seconds
2011-07-02 23:07:42.641
[cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
2011-07-02 23:07:44.751
[cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
2011-07-02 23:07:45.326
[ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
2011-07-04 19:46:26.008
[ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
[ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
*[root@rac2 rac2]#*
This log file starts with a complaint that the OLR is not accessible. Here is what I see (rac2):
-rw------- 1 root oinstall 272756736 Jul 2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
And I guess the rest of the problems start with this. -
Cluster errors on 1 node of a RAC
Hello All,
I Installed Oracle RAC 11.2.0.1.0, on Oracle Enterprise Linux 5.5 32 bit.
the installation and the database creation went fine and no error were generated.
My RAC is 2 nodes (RAC1 and RAC2).
On RAC1 the instance is up and working, but not on RAC2: I am not able to start it, and I am not even able to connect to sqlplus from RAC2.
I issued *crsctl stat res -t* on RAC1 and below is the output:
[root@rac1 ~]# crsctl stat res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE rac1
ora.LISTENER.lsnr
ONLINE OFFLINE rac1
ora.asm
ONLINE ONLINE rac1
ora.eons
ONLINE ONLINE rac1
ora.gsd
OFFLINE OFFLINE rac1
ora.net1.network
ONLINE ONLINE rac1
ora.ons
ONLINE OFFLINE rac1
ora.registry.acfs
ONLINE UNKNOWN rac1 CHECK TIMED OUT
Cluster Resources
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.oc4j
1 OFFLINE OFFLINE
ora.orcl.db
1 ONLINE ONLINE rac1
2 ONLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE rac1
But on RAC2, below is the output:
[root@rac2 ~]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
When I tried to restart CRS on RAC2, below is the output:
[root@rac2 ~]# crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with error(s).
CRS-4000: Command Stop failed, or completed with errors.
When I try to start it:
[root@rac2 ~]# crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@rac2 ~]#
Your help please: what should I do? I am new to RAC administration.
Regards,
Hi,
I applied these steps and below is the output, but I am still not able to communicate with CRS:
[root@rac2 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.diskmon' on 'rac2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 ~]# pgrep -l d.bin
[root@rac2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2 ~]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors. -
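One pattern worth noting in the transcript above: `crsctl start crs` only launches Oracle High Availability Services and returns, while the rest of the stack keeps starting in the background, so an immediate `crsctl stat res -t` can report CRS-4535 even when startup is still in progress (though the earlier logs here suggest CRSD genuinely failing to reach the OCR). A polling sketch; the `wait_for_crs` helper and the GRID_HOME default are illustrative assumptions:

```shell
#!/bin/sh
# Illustrative only: poll 'crsctl check crs' a few times rather than
# checking once immediately after 'crsctl start crs'.
# GRID_HOME default is an assumption, not taken from the post.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}

wait_for_crs() {
  tries=$1
  interval=$2
  while [ "$tries" -gt 0 ]; do
    if "$GRID_HOME/bin/crsctl" check crs >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    sleep "$interval"
    tries=$((tries - 1))
  done
  echo "down"
  return 1
}

# Example: wait up to ~5 minutes (30 polls, 10 s apart)
# wait_for_crs 30 10
```

If the stack still does not come up after a reasonable wait, the ohasd and crsd logs under $GRID_HOME/log/<node>/ are the place to look next.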
Error INS-20802 when installing Oracle RAC 11.2.0.3
I'm trying to install Oracle Grid Infrastructure on AIX 6.1. At the end of the process, when it runs the "Oracle Cluster Verification utility" check, I get
the error INS-20802. Does anyone know what the problem is?
The only service that is not online is "gsd".
$ ./crs_stat -t
Name Type Target State Host
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
ora....N2.lsnr ora....er.type ONLINE ONLINE rac1
ora....N3.lsnr ora....er.type ONLINE ONLINE rac1
ora.OCRVTD.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.cvu ora.cvu.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac2
ora.scan2.vip ora....ip.type ONLINE ONLINE rac1
ora.scan3.vip ora....ip.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....37.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....38.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
This is the output of cluvfy.
$ ./cluvfy stage -post crsinst -n rac1,rac2 -verbose
Performing post-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable?
rac1 yes
rac2 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
rac2 passed
rac1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
rac2 passed
rac1 passed
Verification of the hosts config file successful
Interface information for node "rac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
en0 192.168.255.58 192.168.255.0 192.168.255.58 192.168.255.1 EA:3A:6B:80:0A:51 1500
en1 192.168.171.15 192.168.171.0 192.168.171.15 192.168.255.1 EA:3A:6B:80:0A:97 1500
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
en0 192.168.255.57 192.168.255.0 192.168.255.57 192.168.255.1 62:7E:A7:ED:03:51 1500
en1 192.168.171.14 192.168.171.0 192.168.171.14 192.168.255.1 62:7E:A7:ED:03:97 1500
Check: Node connectivity for interface "en0"
Source Destination Connected?
rac2[192.168.255.58] rac2[192.168.255.58] yes
rac1[192.168.255.57] rac1[192.168.255.57] yes
Result: Node connectivity passed for interface "en0"
Check: TCP connectivity of subnet "192.168.255.0"
Source Destination Connected?
rac1:192.168.255.57 rac2:192.168.255.58 passed
Result: TCP connectivity check passed for subnet "192.168.255.0"
Check: Node connectivity for interface "en1"
Source Destination Connected?
rac2[192.168.171.15] rac2[192.168.171.15] yes
rac1[192.168.171.14] rac1[192.168.171.14] yes
Result: Node connectivity passed for interface "en1"
Check: TCP connectivity of subnet "192.168.171.0"
Source Destination Connected?
rac1:192.168.171.14 rac2:192.168.171.15 passed
Result: TCP connectivity check passed for subnet "192.168.171.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.255.0".
Subnet mask consistency check passed for subnet "192.168.171.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.255.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.255.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.171.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.171.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Time zone consistency
Result: Time zone consistency check passed
Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Checking Cluster manager integrity...
Checking CSS daemon...
Node Name Status
rac2 running
rac1 running
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
Check default user file creation mask
Node Name Available Required Comment
rac2 022 0022 passed
rac1 022 0022 passed
Result: Default user file creation mask check passed
Checking cluster integrity...
Node Name
rac1
rac2
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all specified nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+OCRVTD" available on all the nodes
NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac2"
The Oracle Clusterware is healthy on node "rac1"
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
Node Name Required Running? Comment
rac2 yes yes passed
rac1 yes yes passed
VIP node application check passed
Checking existence of NETWORK node application (required)
Node Name Required Running? Comment
rac2 yes yes passed
rac1 yes yes passed
NETWORK node application check passed
Checking existence of GSD node application (optional)
Node Name Required Running? Comment
rac2 no no exists
rac1 no no exists
GSD node application is offline on nodes "rac2,rac1"
Checking existence of ONS node application (optional)
Node Name Required Running? Comment
rac2 no yes passed
rac1 no yes passed
ONS node application check passed
Checking Single Client Access Name (SCAN)...
SCAN Name Node Running? ListenerName Port Running?
txora11gr202-scan rac2 true LISTENER_SCAN1 1521 true
txora11gr202-scan rac1 true LISTENER_SCAN2 1521 true
txora11gr202-scan rac1 true LISTENER_SCAN3 1521 true
Checking TCP connectivity to SCAN Listeners...
Node ListenerName TCP connectivity?
rac1 LISTENER_SCAN1 yes
rac1 LISTENER_SCAN2 yes
rac1 LISTENER_SCAN3 yes
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "txora11gr202-scan"...
SCAN Name IP Address Status Comment
txora11gr202-scan 192.168.255.62 passed
txora11gr202-scan 192.168.255.61 passed
txora11gr202-scan 192.168.255.63 passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Checking OLR config file...
OLR config file check successful
Checking OLR file attributes...
OLR file check successful
WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
OCR detected on ASM. Running ACFS Integrity checks...
Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured
Task ACFS Integrity check passed
Checking to make sure user "grid" is not in "system" group
Node Name Status Comment
rac2 failed exists
rac1 failed exists
Result: User "grid" is part of group "system". Check failed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status
rac2 passed
rac1 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
Node Name State
rac2 Observer
rac1 Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "xntpd"
Node Name Running?
rac2 yes
rac1 yes
Result: Liveness check passed for "xntpd"
Check for NTP daemon or service alive passed on all nodes
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
Node Name Slewing Option Set?
rac2 yes
rac1 yes
Result:
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/rc.tcpip", for slewing option "-x"
Check: NTP daemon's boot time configuration
Node Name Slewing Option Set?
rac2 yes
rac1 yes
Result:
NTP daemon's boot time configuration check for slewing option passed
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
Node Name Port Open?
rac2 yes
rac1 yes
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was unsuccessful on all the nodes.
Thanks
Hi,
I suggest you close your issue here as answered and then move it to Forum Home » High Availability » RAC, ASM & Clusterware Installation, which is the dedicated RAC forum.
Regards
Helios -
Oracle 11gR2 RAC: Running the script root.sh problem
Folks,
Hello. I am installing Oracle 11gR2 RAC using 2 virtual machines (rac1 and rac2, whose OS is Oracle Linux 5.6) in VMPlayer, following the tutorial
http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
I have completed steps 1 through 9 of 10 of the Grid Infrastructure installation using runInstaller on both VMs rac1 and rac2.
Now, I am running the script root.sh in VM rac1 and rac2 as below:
[root@rac1 /]# /u01/app/grid/root.sh
Output:
CRS-4123: ohasd is starting
FATAL: Module oracleoks not found
FATAL: Module oracleadvm not found
FATAL: Module oracleacfs not found
ACFS-9121: Failed to detect /dev/asm/.asm_ctl_spec
ACFS-9310: ADVM/ACFS installation failed.
ACFS-9311: not all components were detected after installation
'UpdateNodeList' was successful.
[root@rac1 /]#
[root@rac2 /]# /u01/app/grid/root.sh
Output:
CRS-4123: ohasd is starting
FATAL: Module oracleoks not found
FATAL: Module oracleadvm not found
FATAL: Module oracleacfs not found
ACFS-9121: Failed to detect /dev/asm/.asm_ctl_spec
ACFS-9310: ADVM/ACFS installation failed.
ACFS-9311: not all components were detected after installation
Start of resource "ora.asm_init" failed.
Failed to start ASM.
Failed to start Oracle Clusterware Stack.
[root@rac2 /]#
As the output above shows, rac1 and rac2 hit the same problems; in addition, rac2 fails to start ASM and the Clusterware stack. Thus, I have 2 questions:
First, the common problem for rac1 and rac2:
1)Module "oracleoks, oracleadvm, oracleacfs" not found
2)/dev/asm/.asm_ctl_spec not detected
3)ADVM/ACFS installation failed
Will the above 3 factors affect the Grid and Database installation later? If yes, how can these problems be solved?
Second, how can I start ASM and the Clusterware stack on rac2?
Thanks.
You have 2 options:
1. OEL 5.6 comes with 2 kernels, the original Red Hat kernel and the Oracle UEK; you can choose not to use UEK
or the one I would recommend:
2. Upgrade Grid Infrastructure:
*11.2.0.3.1* (patch 13348650, released about a month ago) supports ACFS on UEK (starting with 2.6.32.200 IIRC) -
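As a quick sanity check (a sketch only; the Grid home path below is an example, adjust to your installation), you can verify which kernel is booted and whether the ADVM/ACFS kernel modules are available:

```shell
# Which kernel is currently booted? On OEL 5.6, UEK reports 2.6.32-x,
# the Red Hat-compatible kernel 2.6.18-x
uname -r

# Are the ACFS/ADVM kernel modules loaded?
lsmod | grep -E 'oracleoks|oracleadvm|oracleacfs'

# Oracle ships a helper that reports whether the ACFS drivers are
# supported and installed for the running kernel
/u01/app/11.2.0/grid/bin/acfsdriverstate supported
/u01/app/11.2.0/grid/bin/acfsdriverstate installed
```

If `acfsdriverstate supported` reports the running kernel as unsupported, the two ways forward are exactly the options above: boot the Red Hat-compatible kernel, or apply the GI patch set that adds UEK support.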
Oracle Rac 11.2.0.3 doubts
Hi experts,
Current system info:
server 1 with Red Hat 6.5 and Oracle ASM with SAP ECC 6, GRID 11.2.0.3 standalone installation
Target system info:
server 1 and server 2 running RAC 11.2.0.3 with SAP ECC 6 and Red Hat 6.5, GRID with cluster
We are trying to convert our current system to Oracle RAC but have some doubts.
We are following "Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2.0.2 and Oracle Real Application Clusters 11g Release 2: A Best Practices Guide" so:
On page 29 It says: "Prepare the storage location for storing the shared ORACLE_HOME directory in the cluster. The Oracle RDBMS software should be installed into an empty directory, accessible from all nodes in the cluster" Same thing for ORACLE_BASE for the RDBMS, SAP subdirectories (sapbackup, sapcheck, sapreorg, saptrace, oraarch etc.) and homedirectories for SAP users ora<SID> and <SID>adm to a shared filesystem.
1.-Can we just use NFS for sharing them? or what is the recommended software on REDHAT for doing it?
'cause on note 527843 it says:
You must store the following components in a shared file system (cluster, NFS, or ACFS) here it says we can, but down the note on section linux says:
RAC 11.2.0.3/4 (x86 & x86_64 only):
Oracle Clusterware 11.2.0.3/4 + ASM/ACFS 11.2.0.3/4 (Oracle Linux 5, Oracle Linux 6, RHEL 5, RHEL 6, SLES 10, SLES 11)
Oracle Clusterware 11.2.0.3/4 + NetApp NFS or
Oracle Clusterware 11.2.0.3/4 + EMC Celerra NFS
It does not mention just NFS.
2.-In our system test, we want to backup all oracle configuration files on file systems and then delete Oracle Grid to Install GRID with cluster option, then install RDBMS with rac option and then follow the guide, is that correct?
Regards
Hi Ramon,
1.-Can we just use NFS for sharing them? or what is the recommended software on REDHAT for doing it?
'cause on note 527843 it says:
You must store the following components in a shared file system (cluster, NFS, or ACFS) here it says we can, but down the note on section linux says:
RAC 11.2.0.3/4 (x86 & x86_64 only):
Oracle Clusterware 11.2.0.3/4 + ASM/ACFS 11.2.0.3/4 (Oracle Linux 5, Oracle Linux 6, RHEL 5, RHEL 6, SLES 10, SLES 11)
Oracle Clusterware 11.2.0.3/4 + NetApp NFS or
Oracle Clusterware 11.2.0.3/4 + EMC Celerra NFS
It does not mention just NFS.
An NFS mount, as suggested in the SAP documentation, should work. The use of ACFS always requires a specific Oracle Grid Infrastructure (GI) Patch Set Update (PSU). Oracle Support Note 1369107.1 contains details about which GI PSU is required when you use ACFS with a specific RHEL update, SLES service pack, or Oracle UEK version.
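For illustration, a shared filesystem over NFS is typically mounted with the options Oracle recommends for shared homes (the server name, export path, and mount point below are placeholders; verify the exact options against the SAP/Oracle guidance for your NFS appliance):

```
# /etc/fstab entry (sketch) for a shared ORACLE_HOME over NFS
nfssrv:/export/oracle  /oracle  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```

The `hard` and `actimeo=0` options matter most for shared Oracle homes: soft mounts can silently truncate writes on timeouts, and attribute caching can make one node see stale file metadata written by another.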
2.-In our system test, we want to backup all oracle configuration files on file systems and then delete Oracle Grid to Install GRID with cluster option, then install RDBMS with rac option and then follow the guide, is that correct?
You may perform a DB backup using backup tools and then scrap the existing Grid setup. Configure RAC and then restore the backup into the new configuration as per the SAP guidelines in
Configuration of SAP NetWeaver for Oracle Grid Infrastructure 11.2 with Oracle Real Application Clusters 11g Release 2
Hope this helps.
Regards,
Deepak Kori -
Problem when I extend an oracle rac 10g on new node
Hi everyone
I need to extend an Oracle RAC but I am having problems when adding a new node. My current environment is:
1) Oracle Grid Infrastructure 11gR2 - 11.2.0.3 (upgraded from Clusterware 10gR2 + ASM 10gR2)
2) Oracle RAC Database - 10.2.0.5
(all on one only node)
The first problem came when I executed the script "root.sh" on the new node, because the script referenced the old Clusterware home (/oracle/product/10.2.0/crshome). I edited the file, changed this path to /oracle/gridbase/product/11.2.0/gridhome (the current GI home), and finally executed the script.
Now I am trying to extend the RAC through DBCA, but when I choose the new node and click the "next" button, the following error appears:
"The nodes "[rstatbdbpm02]" are not part of the cluster. Make sure clusterware is active on these nodes before proceeding"
However, when I execute the "crsctl" command to view the status of the cluster, the result looks correct:
[oracle@rstatbdbpm01] /home/oracle > crsctl status res -t
NAME TARGET STATE SERVER STATE_DETAILS
Local Resources
ora.DATA.dg
ONLINE ONLINE rstatbdbpm01
ONLINE ONLINE rstatbdbpm02
ora.LISTENER.lsnr
ONLINE ONLINE rstatbdbpm01
ONLINE ONLINE rstatbdbpm02
ora.asm
ONLINE ONLINE rstatbdbpm01 Started
ONLINE ONLINE rstatbdbpm02 Started
ora.gsd
OFFLINE OFFLINE rstatbdbpm01
OFFLINE OFFLINE rstatbdbpm02
ora.net1.network
ONLINE ONLINE rstatbdbpm01
ONLINE ONLINE rstatbdbpm02
ora.ons
ONLINE ONLINE rstatbdbpm01
ONLINE ONLINE rstatbdbpm02
ora.registry.acfs
ONLINE ONLINE rstatbdbpm01
ONLINE ONLINE rstatbdbpm02
Cluster Resources
ora.BDBPM.BDBPM1.inst
1 ONLINE ONLINE rstatbdbpm01
ora.BDBPM.BPMVEH.BDBPM1.srv
1 ONLINE ONLINE rstatbdbpm01
ora.BDBPM.BPMVEH.cs
1 ONLINE ONLINE rstatbdbpm01
ora.BDBPM.db
1 ONLINE ONLINE rstatbdbpm01
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rstatbdbpm02
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rstatbdbpm02
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rstatbdbpm01
ora.cvu
1 ONLINE ONLINE rstatbdbpm01
ora.oc4j
1 ONLINE ONLINE rstatbdbpm01
ora.rstatbdbpm01.vip
1 ONLINE ONLINE rstatbdbpm01
ora.rstatbdbpm02.vip
1 ONLINE ONLINE rstatbdbpm02
ora.scan1.vip
1 ONLINE ONLINE rstatbdbpm02
ora.scan2.vip
1 ONLINE ONLINE rstatbdbpm02
ora.scan3.vip
1 ONLINE ONLINE rstatbdbpm01
[oracle@rstatbdbpm01] /home/oracle >
Please, any idea about this problem?
Thanks,
Luis
Hi,
Please check the dbca trace logs for further details; they will show which command is being run to check the status of the cluster.
Generally, the first checks should be on the inventory entries for the RDBMS home and the Grid home, and on making sure no ORACLE-related parameter is set in the environment.
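To make the inventory check concrete, here is a sketch (paths and home names are examples only; adjust them to your environment):

```shell
# Locate the central inventory and inspect the node list recorded
# for the 10.2 RDBMS home
INV_DIR=$(awk -F= '/inventory_loc/ {print $2}' /etc/oraInst.loc)
grep -B2 -A4 '10.2.0' "$INV_DIR/ContentsXML/inventory.xml"

# If the new node is missing for that home, update its node list
# (run as the database software owner)
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList \
    ORACLE_HOME=/oracle/product/10.2.0/dbhome \
    "CLUSTER_NODES={rstatbdbpm01,rstatbdbpm02}"
```

DBCA reads the node list from this inventory entry, which is one reason it can claim a node "is not part of the cluster" even though crsctl shows the clusterware running there.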
Regards,
Sharma -
Can GG work in a RAC environment which has no shared storage except ASM
Gurus:
I have a 2 nodes RAC which is on ASM for database data and FRA. However, it has no other shared space on the file systems.
Can I still use GoldenGate? How can I do it.
Please advise.
dz
GoldenGate needs to be able to read the online redo logs for each thread, and running on ASM requires a few extra configuration steps. You need an ASM user to be able to connect to the ASM instance (which involves the listener and tnsnames.ora files). The Extract parameter file uses the TRANLOGOPTIONS parameter. This is covered in the installation guide ("Additional requirements for ASM", plus the section on "Additional requirements for Oracle RAC").
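As an illustration of those extra steps, an Extract parameter file reading redo from ASM might look like this (all names and passwords are placeholders; TRANLOGOPTIONS ASMUSER/ASMPASSWORD are the documented parameters for the ASM connection):

```
EXTRACT ext1
USERID ggadmin, PASSWORD ggpw
-- Connect to the ASM instance to read the online redo logs;
-- "ASM" must resolve via tnsnames.ora to the +ASM instance
TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD asm_pw
EXTTRAIL ./dirdat/lt
TABLE scott.*;
```

Note that the ASM instance usually needs a static registration in listener.ora so that the dedicated connection from Extract succeeds.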
In 11gR2, you can use ACFS for the GoldenGate software and its files.
Oracle GoldenGate High Availability using Oracle Clusterware Technical Whitepaper
http://www.oracle.com/technetwork/middleware/goldengate/overview/ha-goldengate-whitepaper-128197.pdf -
I have installed Oracle RAC on VMware. I am facing a problem with database connections after shutting down any node: it takes 7-8 minutes to make a new connection after a node goes down.
Please find the crs_stat -t output below:
Name Type Target State Host
ora.DATA.dg ora....up.type ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora....acdb.db ora....se.type ONLINE ONLINE rac1
ora.eons ora.eons.type ONLINE ONLINE rac1
ora.formtl.db ora....se.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type ONLINE ONLINE rac1
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac1
ora.orcl.db ora....se.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
One thing I noted is that the instance service is not running in the crs_stat -t output.
After shutting down any node it gives the error TNS-12514: TNS:listener does not currently know of service requested in connect descriptor, and it takes 7-8 minutes to make any new connection.
Kindly suggest how I can resolve this issue.
Edited by: user10505923 on Aug 17, 2011 5:18 AM
Depending on how your service was created you may need to register it with CRS:
srvctl add service -d db_name -s service_name
Can you post your listener info? Output from issuing lsnrctl status, and also your tnsnames or the connect string you are using to connect?
(be sure to change any sensitive data to something generic, but make sure you change it in all places to keep it consistent)
Also, if you use the direct IP address or hostname in your connect string instead of the SCAN, does the slowness go away?
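For the registration suggested above, the full sequence plus a quick verification could look like this (database, service, and instance names are placeholders):

```shell
# Register the service with Clusterware, preferring both instances
srvctl add service -d racdb -s oltp_svc -r "racdb1,racdb2"
srvctl start service -d racdb -s oltp_svc
srvctl status service -d racdb -s oltp_svc

# Confirm that the listener now advertises the service
lsnrctl status | grep -i oltp_svc
```

A CRS-managed service relocates to the surviving instance when a node goes down, so clients connecting through it should not sit on TNS-12514 errors for minutes at a time.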