Startup of Clusterware with missing voting disk

Hello,
in our environment we have a 2 node cluster.
The 2 nodes and 2 SAN storages are in different rooms.
Voting files for Clusterware are in ASM.
Additionally we have a third voting disk on an NFS server (configured as in this description: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf)
The Quorum flag is on the disk that is on NFS.
The diskgroup is with normal redundancy.
Clusterware keeps running when one of the VDs gets lost (e.g. storage failure).
So far so good.
But when I have to restart Clusterware (e.g. after a node reboot) while that voting disk is still missing, Clusterware does not come up.
I did not find any indication of whether this is intended Clusterware behaviour or whether I have missed a detail.
From my point of view, Clusterware should be able to start as long as the majority of the voting disks is available.
Thanks.

Hi,
actually what you see is expected (especially in a stretched cluster environment with 2 failgroups and 1 quorum failgroup).
It has to do with how ASM handles a disk failure and does the mirroring (and the odd requirement that you need a third failgroup for the voting disks).
So before looking at this special case, let's look at how ASM normally treats a diskgroup:
A diskgroup can only be mounted in normal mode if all disks of the diskgroup are online. If a disk is missing, ASM will not allow you to mount the diskgroup "normally" until the error situation is resolved. If a disk is lost whose contents can be mirrored to other disks, ASM will be able to restore full redundancy and will allow you to mount the diskgroup. If this is not possible, ASM expects the user to say what it should do => the administrator can issue an "alter diskgroup ... mount force" to tell ASM to mount with disks missing even though it cannot maintain the required redundancy. This then allows the administrator to correct the error (or replace failed disks/failgroups). While ASM has the diskgroup mounted, the loss of a failgroup will not result in a dismount of the diskgroup.
The same holds true for the diskgroup containing the voting disks. So what you see (the cluster keeps running but cannot restart) is pretty much the same as for a normal diskgroup: if a disk is lost and its contents cannot be relocated (e.g. if the quorum failgroup fails, there is no further failgroup to relocate the third vote to), the cluster will continue to run, but it will not be able to automatically remount the diskgroup in normal mode after a restart.
To bring the cluster back online, manual intervention is required: Start the cluster in exclusive mode:
crsctl start crs -excl
Then connect to ASM and do a
alter diskgroup <dgname> mount force
Then resolve the error (e.g. add another disk to another failgroup, so that the data can be remirrored and the failed disk can be dropped).
After that, a normal startup will be possible again.
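Putting those steps together, a minimal end-to-end sketch (the diskgroup name DATA, the failgroup name FG3, and the disk names are placeholders, not taken from this thread) could look like this:
# 1. On one node, start the stack in exclusive mode (as root):
crsctl start crs -excl
# 2. Force-mount the diskgroup despite the missing failgroup (as the Grid owner):
sqlplus / as sysasm
SQL> alter diskgroup DATA mount force;
# 3. Restore redundancy, e.g. add a replacement disk and drop the failed one:
SQL> alter diskgroup DATA add failgroup FG3 disk '/dev/replacement_disk';
SQL> alter diskgroup DATA drop disk FAILED_DISK force;
# 4. Then return to a normal clustered startup (as root):
crsctl stop crs
crsctl start crs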
Regards
Sebastian

Similar Messages

  • PBG4 with 'missing / disappearing' disk space issue

    My wife's powerbook appears to have 'missing' hard disk space and I can't figure out why. Here is the information from System Profiler about her only hard drive in the computer.
    Capacity: 74.41 GB
    Available: 6.91 GB
    Writable: Yes
    File System: Journaled HFS+
    The issue is that when I click on her hard drive icon in the Finder view I see 4 items:
    - Applications (4.92 GB)
    - Library (2.45 GB)
    - System (1.07 GB)
    - Users (24.17 GB)
    This totals 32.61 GB of used disk space
    The hard drive is 74.41 GB
    So I should have 41.80 GB of free space
    Why do I only have 6.91 GB?
    Where is the other 34.89 GB?
    I have run disk utilities and repaired permissions. I have also run fsck by booting while holding apple-S. I've looked at the logs in /var/logs and nothing is bigger than 2MB.
    Anyone have any ideas?
    Thanks.

    We upgraded the OS to 10.5 and the issue is gone.

  • Minimum number of voting disks?

    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks

    user9202785 wrote:
    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks
    Oracle recommends 3 voting disks, but you can install Clusterware with one voting disk by selecting external redundancy during the Clusterware installation.
    The number of voting disks does not depend on the number of RAC nodes. You can have up to a maximum of 32 voting disks, whether the cluster has 2 nodes or 100.
    The Oracle Clusterware is comprised primarily of two components: the voting disk and the OCR (Oracle Cluster Registry). The voting disk is nothing but a file that contains and manages information about all node memberships, and the OCR is a file that manages the cluster and RAC configuration.
    Refer to how to add/remove/replace a voting disk in the link below:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html.
    If a voting disk fails, refer to:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html.
    Also refer to Oracle® Database 2 Day + Real Application Clusters Guide 10g Release 2 (10.2).
    Read the below; hope this will help you.
    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured. (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. It is recommended to use an odd number, as 4 disks are no more highly available than 3: half of 3 is 1.5, rounded up to 2, while half of 4 is 2. Once we lose 2 disks, the cluster fails with either 4 voting disks or 3.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question answers this one too. (Answered by: Boochi)
    refer: Question on voting disk
    Question on voting disk
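    To make the 2n+1 arithmetic concrete, here is a small illustrative shell loop (my own sketch, not from the thread) that prints the majority and the tolerated failures for a few voting disk counts:
    # majority = more than half = v/2 + 1 (integer division)
    for v in 1 2 3 4 5 32; do
      majority=$(( v / 2 + 1 ))
      echo "$v voting disks: majority = $majority, tolerates $(( v - majority )) failure(s)"
    done
    # e.g. 3 disks tolerate 1 failure, 4 still tolerate only 1, and 32 tolerate 15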

  • Regarding Voting disk recovery scenarios

    Hi,
    For years I have read about RAC and voting disks, and that each node should access more than half of the voting disks, but I never got a chance to work through the scenarios below. If someone has practically done them or has good knowledge, do let me know.
    1) If I have 5 voting disks and 2 of them get corrupted or deleted, will the cluster keep working properly? If I reboot a node or restart the cluster, will it still work fine or not?
    2) The same scenario with 3 voting disks deleted or corrupted - what will happen?
    3) If I have 2 OCRs and one gets deleted or corrupted, will the system run fine?

    Aman,
    During startup the clusterware requires a majority of votes to start the CSS daemon.
    The majority is counted against how many voting disks were configured, not how many remain alive.
    Below is a test using 3 votedisks (11.2.0.3):
    alert.log
    [cssd(26233)]CRS-1705:Found 1 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/app/11.2.0/grid/log/node11g02/cssd/ocssd.log
    ocssd.log
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskVerify: Successful discovery of 1 disks
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteVFDiscovery: Completing voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from discovered to pending disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from pending to configured disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvVerifyCommittedConfigVFs: Insufficient voting files found, found 1 of 3 configured, needed 2 voting files
    With 2 voting files online there is no problem; clusterware starts with a warning in the logs.
    P.S. If a diskgroup (OCR_VOTE) is configured with high redundancy (5 ASM disks, each in its own failgroup), even that diskgroup cannot survive 3 ASM disk failures: the diskgroup goes down, and if the only OCR (no mirror) is on that diskgroup, the whole clusterware goes down.
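    To reproduce such a check yourself, the standard interfaces below show how many configured voting files a node can find and the state of the ASM disks behind them (the output will of course differ per system):
    crsctl query css votedisk
    # and, from the ASM instance:
    sqlplus / as sysasm
    SQL> select name, failgroup, mode_status from v$asm_disk where group_number <> 0;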

  • OCR and Voting Disk

    Hi all,
    What is the difference between the OCR and the voting disk?

    The voting disk is a file that manages information about node membership, and the OCR is a file that manages cluster and RAC database configuration information.
    Voting disk - Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. Basically, all nodes in the RAC cluster register their heartbeat information on these voting disks, and the votes recorded there decide which nodes are counted as active members of the cluster. The voting disks are also used for checking the availability of instances in RAC and for removing unavailable nodes from the cluster. This helps prevent a split-brain condition and keeps database information intact. The split-brain syndrome, its effects, and how it is managed in Oracle are described below.
    For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster. What I could understand about the odd number of voting disks is that a node must see more than half of the voting disks to continue to function; with 2 disks, if a node can see only 1, that is just half and not a majority. I am still trying to find out more about this concept.
    OCR (Oracle Cluster Registry) - resides on shared storage and maintains information about the cluster configuration and the cluster database. The OCR contains information such as which database instances run on which nodes and which services run on which database.
    The Oracle Cluster Registry (OCR) is one such component in 10g RAC, used to store the cluster configuration information. It is a shared disk component, typically located in a shared raw volume that must be accessible to all nodes in the cluster. The CRSD daemon manages the configuration info in the OCR and maintains the changes to the cluster in the registry.
    ocrcheck is the command to check out the OCR.
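    For example, both files can be inspected with the stock Clusterware tools (run as root or the Grid owner):
    ocrcheck                      # OCR version, free space, locations and integrity check
    crsctl query css votedisk     # configured voting disks and their state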

  • AIX 6.1 - Do we need HACMP to store the OCR and Voting Disks on Raw Logical Volumes?

    We are planning to install 11g R1 RAC on AIX 6.1. We will use Clusterware and ASM. We would like to avoid GPFS and HACMP.
    We plan to put the Clusterware OCR and voting disks on raw logical volumes.
    I do not believe we need HACMP; see the passage below from the doc:
    Oracle® Clusterware
    Installation Guide
    11g Release 1 (11.1) for AIX Based Systems
    B28258-05
    Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS
    If you are installing Oracle RAC on an AIX cluster without HACMP or GPFS, then you must use shared raw disk devices for the Oracle Clusterware files. You can also use shared raw disk devices for database file storage. However, Oracle recommends that you use Automatic Storage Management to store database files in this situation.
    This section describes how to configure the shared raw disk devices for Oracle Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk). It also describes how to configure shared raw devices for Oracle ASM and for Database files, if you intend to install Oracle Database, and you need to create new disk devices.
    Question:
    Do we need HACMP to store the OCR and Voting Disks on Raw Logical Volumes? According to the passage above we do not.

    You can archive logs either to a shared location or to a separate location associated with each instance. Oracle recommends using a shared location for all the instances in a RAC configuration. Check the topic "Location of Archived Logs for the Cluster File System Archiving Scheme" in the Clusters Administration and Deployment Guide.

  • Backing up Voting disk in 11.2

    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using:
    dd if=voting_disk_name of=backup_file_name bs=4k
    But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?

    spiral wrote:
    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using:
    dd if=voting_disk_name of=backup_file_name bs=4k
    But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?
    Hi,
    Backing up and restoring voting disks using dd is only supported in versions prior to Oracle Clusterware 11.2.
    With Oracle Clusterware 11.2, voting disks contain more information; hence, one must not operate on voting disks using dd. This holds true regardless of whether the voting disks are stored in ASM, on a cluster file system, or on raw/block devices (which could still be the case after an upgrade).
    Since dd must not be used to back up the voting disks in 11.2 anymore, the voting disks are backed up automatically into the OCR. Since the OCR is backed up automatically every 4 hours, the voting disks are backed up indirectly, too. Any change to the information stored in the voting disks - e.g. due to the addition or removal of nodes in the cluster - triggers a new voting disk backup in the OCR.
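    For example, the automatic OCR backups (which indirectly contain the voting disk data) can be listed with ocrconfig, and lost voting files re-created from them with crsctl; the diskgroup name +CRS below is a placeholder:
    ocrconfig -showbackup          # lists the automatic (and manual) OCR backups
    crsctl replace votedisk +CRS   # re-creates the voting files on a diskgroup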
    Regards,
    Levi Pereira

  • Voting disk and OCR disk in RAC

    Can anyone explain to me in basic terms
    what the voting disk and the OCR are used for in RAC?

    Hi Francisco,
    Thanks for the information.
    But I have a lot of doubts about the voting disk and the types of heartbeats present in Oracle 10g RAC.
    Oracle says CSS uses two heartbeats: 1. a network heartbeat and 2. a disk heartbeat.
    My doubts:
    1. How does Oracle use the voting disk to avoid split-brain syndrome? Or, what is the role of the voting disk in split-brain syndrome?
    2. How does Oracle monitor remote node health?
    3. Oracle says there is one more heartbeat, where the CKPT process keeps writing to the control file every 3 seconds, and if it fails the LMON process will evict the node... Is it true?
    4. Oracle Clusterware will send messages (via a special ping operation) to all nodes configured in the cluster, often called the "heartbeat." If the heartbeat fails for any of the nodes, it checks with the Oracle Clusterware configuration files (voting disk) to distinguish between a real node failure and a network failure... Is it true? If true, then what about the two heartbeats (network and disk) - why are these heartbeats present or useful?
    If possible, please clear up my doubts with some simple examples.
    Awaiting your reply; I am eager to learn and master the Oracle RAC architecture before administering it.
    Regards,
    Yasser.

  • Voting disks

    Voting disks are accessed by every node about once per second, so their performance is not very critical. You shouldn't use 2 voting disks - use either 1 or 3 (5 is also possible but usually doesn't make sense). The reason is that a majority of the voting disks must be available for the cluster to operate. If one voting disk out of three fails, the other 2 (a majority) are still available. If you have two voting disks, the majority is two, and if one voting disk fails the cluster goes down. So with two voting disks availability is actually lower, since it is enough for one of the two disks to fail.
    I did not get the concept of majority here... Anyone have a better idea?

    Actually, there are about 3 IOs per node per second - 2 reads + 1 write.
    Oracle Clusterware requires that a majority of the voting disks be accessible. Majority means more than half. If a majority is not accessible (i.e. half or more are unavailable), then the node leaves the cluster (read: evicts itself).
    Example:
    1 voting disk - if it is not available, the node leaves the cluster.
    2 voting disks - if one is not available (this is half), the node leaves the cluster. This is because the majority of 2 is 2 (more than half, which is 1). I.e. a 2-voting-disk configuration doesn't provide more resiliency than 1.
    3 voting disks - the majority is 2. Thus, if one voting disk is unavailable, the node stays in the cluster. Consequently, you have higher availability than with 1.
    4 voting disks - the majority is 3. Resiliency is the same as with 3 voting disks; i.e. you can sustain the loss of only ONE voting disk out of four, so it doesn't make sense to use the extra voting disk.
    5 voting disks - the majority is 3, and you can lose 2 voting disks without any effect on the cluster. Thus, it's more resilient than 3.
    And so on.

  • Do all voting disks in an n-node RAC have the same data?

    Hi,
    I heard that Oracle/CSS accesses more than half of the voting disks; I just had some questions on that:
    1) Why does Oracle access more than half of the voting disks?
    2) Why do we keep an odd number of voting disks?
    3) Does Oracle keep the same information in all the voting disks? Is it like control file or redo log multiplexing?
    4) Why do we have only 2 OCRs and not more than that?
    Regards
    ID

    Hi,
    1) Why does Oracle access more than half of the voting disks?
    To join the cluster, a node must be able to access more than half of the voting disks. Oracle made this restriction so that even in the worst case all nodes have access to at least one common disk. Let's try to understand this with the simple classical example of a two-node cluster (node1 and node2) with two voting disks (vote1 and vote2), assuming vote1 is accessible to node1 only and vote2 is accessible to node2 only. If Oracle allowed the nodes to join the cluster bypassing this restriction, a conflict would occur, as both nodes would be writing information to different voting disks; with the restriction, Oracle makes sure that one disk is commonly accessible to all the nodes. For example, with three voting disks at least two must be accessible, and if two voting disks are accessible then one of them will be commonly accessible to all the nodes.
    2) Why we keep odd number of voting disks?
    I already answered this indirectly with the answer to your first question. More than half of the voting disks must be accessible, so you could configure three or the next even number, i.e. four, but the number of voting disk failures tolerated stays the same: with three, the failure of one voting disk can be tolerated, and the same holds with four.
    3) Does Oracle keep same information in all the voting disks ? is it like control or redofile multiplexing?
    Yes, Clusterware maintains the same information in all voting disks; each is just a multiplexed copy.
    4) Why we have only 2 ocr and not more than that?
    We can configure up to five OCR locations. Here is an excerpt of my ocr.loc file, followed by a demonstration:
    [root@host01 ~]# cat /etc/oracle/ocr.loc
    #Device/file getting replaced by device /dev/sdb13
    ocrconfig_loc=+DATA
    ocrmirrorconfig_loc=/dev/sdb15
    ocrconfig_loc3=/dev/sdb14
    ocrconfig_loc4=/dev/sdb13
    local_only=false
    [root@host01 ~]# ocrconfig -add /dev/sdb12
    [root@host01 ~]# ocrconfig -add /dev/sdb11
    PROT-27: Cannot configure another Oracle Cluster Registry location because the maximum number of Oracle Cluster Registry locations (5) have been configured
    Thanks

  • How to rename a voting disk in Oracle Clusterware 11gR2

    Hi:
    I need to change the name of the voting disk at the OS level; the original name is /dev/rhdisk20, and I need to rename it to /dev/asmocr_vote01 (AIX). The voting disk is located in ASM diskgroup +OCR.
    The initial voting disk was /dev/rhdisk20 in diskgroup +OCR:
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE a2e6bb7e57044fcabf0d97f40357da18 (/dev/rhdisk20) [OCR]
    I created a new alias for the disk:
    #mknod /dev/asmocr_vote01 c 18 10
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:57 rhdisk20 --> Old name
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 --> alias to the old device; this is the new name.
    After changing the voting disk's OS device name, the cluster doesn't start: the voting disk is not found by CSSD.
    STEPS to start clusterware after changing the OS voting disk name:
    1- stop all nodes:
    #crsctl stop crs -f (every node)
    Work on one node only (node1, +ASM1 instance):
    2- Change asm_diskstring in init+ASM1.ora:
    asm_diskstring = /dev/asm*
    3- change the disk's unix permissions:
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 root system 18, 10 Sep 6 16:59 asmocr_vote01
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 17:37 rhdisk20
    #(root) /dev->chown oracle:asmadmin asmocr_vote01
    #(root) /dev->chown root:system rhdisk20
    #(root) /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 --> the new name, now owned by oracle:asmadmin
    crw-rw---- 1 root system 18, 10 Sep 6 17:37 rhdisk20
    4-start node in exclusive mode:
    # (root) /oracle/GRID/11203/bin->./crsctl start crs -excl
    CRS-4123: Oracle High Availability Services has been started.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.mdnsd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'orarac3intg'
    CRS-2676: Start of 'ora.gpnpd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.gipcd' on 'orarac3intg'
    CRS-2676: Start of 'ora.cssdmonitor' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.diskmon' on 'orarac3intg'
    CRS-2676: Start of 'ora.diskmon' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cssd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'orarac3intg'
    CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2676: Start of 'ora.ctssd' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.drivers.acfs' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'orarac3intg'
    CRS-2676: Start of 'ora.asm' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.crsd' on 'orarac3intg' succeeded
    5-check votedisk:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    Located 0 voting disk(s).
    --> NO VOTING DISK found
    6- mount diskgroup of voting disk (+OCR in this case) in +ASM1 instance:
    SQL> ALTER DISKGROUP OCR mount;
    7- add the votedisk back to diskgroup +OCR:
    # (root) /oracle/GRID/11203/bin->./crsctl replace votedisk +OCR
    Successful addition of voting disk 86d8b12b1c294f5ebfa66f7f482f41ec.
    Successfully replaced voting disk group with +OCR.
    CRS-4266: Voting file(s) successfully replaced
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Located 1 voting disk(s).
    8-stop node:
    #(root) /oracle/GRID/11203/bin->./crsctl stop crs –f
    9- start node:
    #(root) /oracle/GRID/11203/bin->./crsctl start crs
    10- check:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Vicente.
    HP.

    There is no facility to rename a column in Oracle 8i. This is possible from Oracle 9.2 onwards.
    For your task, one example is given below.
    Example:-
    The existing table is ITEMS
    with columns ITID and ITEMNAME.
    But instead of ITID I want ITEMID.
    Solution:-
    step 1 :- create table items_dup
    as select itid itemid, itemname from items;
    step 2 :- drop table items;
    step 3 :- rename items_dup to items;
    Result:-
    ITEMS table contains columns ITEMID, ITEMNAME
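    For completeness, from Oracle 9.2 onwards the copy/drop/rename workaround is unnecessary; a single statement does it:
    SQL> ALTER TABLE items RENAME COLUMN itid TO itemid;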

  • Confusion with OCFS2 file system for OCR and voting disk (RHEL 5, Oracle 11g)

    Dear all,
    I am in the process of installing an Oracle 11g 3-node RAC database.
    The environment on which i have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 GB storage drive.
    SAN
    As of now i am in the process of installing Oracle Clusterware on the 3 nodes.
    I have performed these tasks for the cluster installs.
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure the SSH on all Cluster Nodes (User Equivalence)
    9. Enable the SSH on all Cluster Nodes (User Equivalence)
    10. Install Oracle Cluster File System (OCFS2)
    11.Verify the Installation of Oracle Cluster File System (OCFS2)
    12. Configure the OCFS2 (/etc/ocfs2/cluster.conf)
    13. Configure the O2CB Cluster Stack for OCFS2
    BUT, here I am a little bit confused about how to proceed further. The next steps are to format the disks and mount OCFS2, create the software directories... and so on and so forth.
    I asked my system admin to provide two partitions so that I could format them with the OCFS2 file system.
    He wrote back to me saying:
    *"Is what you want before I do it??*
    */dev/emcpowera1 is 3GB and formatted OCFS2.*
    */dev/emcpowera2 is 3GB and formatted OCFS2.*
    *Are those big enough for you? If not, I can re-size and re-format them*
    *before I mount them on the servers.*
    *the SAN is shared storage. /dev/emcpowera is one of three LUNs on*
    *the shared storage, and it's 214GB. Right now there are only two*
    *partitions on it- the ones I listed below. I can repartition the LUN any*
    *way you want it.*
    *Where do you want these mounted at:*
    */dev/emcpowera1*
    */dev/emcpowera2*
    *I was thinking if this mounting techique would work like so:*
    *emcpowera1: /u01/shared_config/OCR_config*
    *emcpowera2: /u01/shared_config/voting_disk*
    *Let me know how you'd like them mounted."*
    Please recommend what I should convey to him so that I can ask him for exactly what is needed.
    My second question is: as we are using ASM, which I am going to configure after the Clusterware installation, should I install Openfiler?
    Please refer to the environment information I provided above and make recommendations.
    As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think the entire install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK

    Thanks for your reply Mufalani,
    You have managed to solve half of my query. But I am still stuck on what kind of mount points I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he mentioned?
    Let me put forth a few more questions here.
    1. Is 280 MB OK for the OCR and voting disks respectively?
    2. Should I ask the system admin to create 4 voting disk mount points and two for the OCR?
    3. As mentioned by the system admin:
    /u01/shared_config/OCR_config
    /u01/shared_config/voting_disk
    Is this OK for creating the OCR and voting disks?
    4. Can I use the OCFS2 file system to format the disks instead of using them as RAW devices? (See the fstab sketch after this post.)
    5. As you mentioned that Openfiler is not needed for configuring ASM - could you provide the links which will guide me in creating the partition disks, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple, but was unable to identify a suitable one for my environment.
    Regards
    MK
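    Regarding question 4: as a hypothetical sketch (device names and mount points taken from the admin's proposal above; the mount options are the ones commonly documented for Oracle files on OCFS2 1.x, so verify them against your version), the /etc/fstab entries on every node might look like:
    /dev/emcpowera1  /u01/shared_config/OCR_config   ocfs2  _netdev,datavolume,nointr  0 0
    /dev/emcpowera2  /u01/shared_config/voting_disk  ocfs2  _netdev,datavolume,nointr  0 0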

  • Cluster continues with less than half of the voting disks

    I wanted to try voting disk recovery. I have the voting files on a diskgroup with normal redundancy. I corrupted two of the three disks containing the voting disks using the dd command. I was expecting clusterware to go down, as only 1 voting disk (< half of the total of 3) was available, but surprisingly clusterware kept running. I even waited for quite some time, but to no avail. I did not get any messages in the alert log / ocssd log. I would be thankful if experts could give more input on this.
    Regards
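    For reference, the corruption test described above was presumably along these lines (the device names are hypothetical, and this irreversibly wipes the disk headers - only ever run it in a disposable lab):
    dd if=/dev/zero of=/dev/asm-vote02 bs=1M count=10
    dd if=/dev/zero of=/dev/asm-vote03 bs=1M count=10
    # then watch the ASM alert log and ocssd.log for the reaction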

    Hi, you can check the MetaLink note
    OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
    It says:
    For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in the OCR as part of any configuration change.
    The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:
    1. Configuration parameters, for example misscount, have been added or modified.
    2. Voting disk add or delete operations have been performed.
    The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.

  • Can't start up my MacBook Pro - all I get is a grey spinning wheel

    Cannot start up my MacBook Pro. All I get is a grey spinning wheel. Tried all the startup key combinations and nothing works. I reset the NVRAM - nothing. I booted with the startup disk, ran disk repair, and all is OK. Tried to re-install the OS, but when I get to designating a drive it's blank. When I go to Startup Disk there is a "?" mark. When I restart and have Startup Disk search for a drive, it still comes up with nothing. Before I got the grey spinning wheel I had a locked screen - could not move the cursor or click on anything. Finally just shut down. Thank you in advance for any help!

    You shouldn't still have 'beachballing'... if the drive is being found and the system is trying to boot from it but is having problems, I would suggest looking over ds store's user tip - Step by step to fix your Mac.
    It's likely either something corrupt in your system files or a failing hard drive. But follow ds's steps until you find the cause of the problem.
    Good luck,
    Clinton

  • Root.sh hangs at formatting voting disk on OEL32 11gR2 RAC with ocfs2

    Hi,
    I am trying to bring up Oracle 11gR2 RAC on Enterprise Linux x86 (32-bit) version 5.6. I am using OCFS2 1.4 as my cluster file share. Everything went fine until root.sh, which hangs with the message "now formatting voting disk <vdsk path>".
    The logs are mentioned below.
    Checked the alert log:
    {quote}
    cssd(9506)]CRS-1601:CSSD Reconfiguration complete. Active nodes are oel32rac1 .
    2011-08-04 15:58:55.356
    [ctssd(9552)]CRS-2407:The new Cluster Time Synchronization Service reference node is host oel32rac1.
    2011-08-04 15:58:55.917
    [ctssd(9552)]CRS-2401:The Cluster Time Synchronization Service started on host oel32rac1.
    2011-08-04 15:58:56.213
    [client(9567)]CRS-1006:The OCR location /u02/storage/ocr is inaccessible. Details in /u01/app/11.2.0/grid/log/oel32rac1/client/ocrconfig_9567.log.
    2011-08-04 15:58:56.365
    [client(9567)]CRS-1001:The OCR was formatted using version 3.
    2011-08-04 15:58:59.977
    [crsd(9579)]CRS-1012:The OCR service started on node oel32rac1.
    {quote}
    crsctl.log:
    {quote}
    2011-08-04 15:59:00.246: [  CRSCTL][3046184656]crsctl_vformat: obtain cssmode 1
    2011-08-04 15:59:00.247: [  CRSCTL][3046184656]crsctl_vformat: obtain VFListSZ 0
    2011-08-04 15:59:00.258: [  CRSCTL][3046184656]crsctl_vformat: Fails to obtain backuped Lease from CSSD with error code 16
    2011-08-04 15:59:01.857: [  CRSCTL][3046184656]crsctl_vformat: to do clsscfg fmt with lease sz 0
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]NOTE: No asm libraries found in the system
    2011-08-04 15:59:01.910: [    CLSF][3046184656]Allocated CLSF context
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Discovery with str:/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]UFS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Fetching UFS disk :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]OSS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]Handle 0xa6c19f8 from lib :UFS:: for disk :/u02/storage/vdsk:
    2011-08-04 17:10:37.522: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    2011-08-04 17:10:37.526: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    {quote}
    ocrconfig log:
    {quote}
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]proprinit:problem reading the bootblock or superbloc 22
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.365: [  OCRRAW][3046991552]iniconfig:No 92 configuration
    2011-08-04 15:58:56.365: [  OCRAPI][3046991552]a_init:6a: Backend init successful
    2011-08-04 15:58:56.390: [ OCRCONF][3046991552]Initialized DATABASE keys
    2011-08-04 15:58:56.564: [ OCRCONF][3046991552]csetskgfrblock0: output from clsmft: [clsfmt: successfully initialized file /u02/storage/ocr
    2011-08-04 15:58:56.577: [ OCRCONF][3046991552]Successfully set skgfr block 0
    2011-08-04 15:58:56.578: [ OCRCONF][3046991552]Exiting [status=success]...
    {quote}
    ocssd.log:
    {quote}
    2011-08-04 15:59:00.140: [    CSSD][2963602320]clssgmFreeRPCIndex: freeing rpc 23
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:00.234: [    CSSD][2996054928]clssgmExecuteClientRequest: VOTEDISKQUERY recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    {quote}
    Any help here is appreciated.
    Regards
    Amith R

    I did an lsof on the vdisk and it showed:
    >
    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
    crsctl.bi 9589 root 26u REG 8,17 21004288 102980 /u02/storage/vdsk
    [root@oel32rac1 ~]# ps -ef |grep crsctl
    root 9589 7583 0 15:58 pts/1 00:00:00 [crsctl.bin] <defunct>
    >
    Could this be a permission issue?
    --Amith
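    It could be; one quick check (paths taken from the logs above) is to compare the ownership and mode of the two files - on a cluster file system the voting file is typically owned by the Grid software owner and the OCR by root:
    ls -l /u02/storage/vdsk /u02/storage/ocr
    # expected along the lines of:
    #   -rw-r----- 1 oracle oinstall ... /u02/storage/vdsk
    #   -rw-r----- 1 root   oinstall ... /u02/storage/ocr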
