Voting disk external redundancy

Hey,
question:
I've got a database which I haven't installed myself.
A query for the votedisk shows only one configured votedisk for my two-node RAC cluster,
so I presume that external redundancy was set up during installation of the Clusterware.
A few weeks ago I hit an unpublished error where Oracle reports that the voting disks are corrupted, which is a false message.
But anyway, what happens if I hit this error in my new scenario (apart from applying the CRS bundle patch, which I can't apply...)?
How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
or would this give back an error because only one voting disk is configurable?

Hi Christian,
How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this give back an error because only one voting disk is configurable?
I will assume you are using version 11.1 or earlier.
What determines whether your voting disks use external or normal redundancy is simply the number of voting disks your environment has.
If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure; at least Oracle thinks so).
If you have more than 2 voting disks (you need an odd number), you are using normal redundancy.
You can then add more voting disks to your environment without worrying about the previously configured external redundancy.
External or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change from external to normal redundancy.
Make sure you have a fresh backup of your voting disk.
Warning: on UNIX/Linux the voting disk backup is performed manually with the dd command.
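A minimal sketch of such a backup on a pre-11.2 cluster with voting disks on raw devices (the device path, backup path and block size here are only illustrative):
$ crsctl query css votedisk                             # list the configured voting disks first
$ dd if=/dev/raw/raw1 of=/backup/votedisk1.bak bs=4k    # copy the raw voting device to a backup file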
Regards,
Levi Pereira

Similar Messages

  • Only ONE voting disk created on EXTERNAL redundancy

    Hi All,
    I am wondering why, when configuring 5 LUNs and then creating one DG with these 5 LUNs...
    and then creating the OCR/Vote disk, the GRID installer only creates one voting disk?
    Of course, I can always create another voting disk in a separate DG...
    Any ideas?
    Of course, when I try "ocrconfig -add +OCR_DG" I get a message that says the OCR is already configured....
    thanks

    Hi JCGO,
    you are mixing up voting disks and OCR.
    Voting disks are placed on the headers of the ASM disks in a diskgroup. The number of voting disks is based on the redundancy of the diskgroup:
    External (no mirroring) = 1; Normal = 3; High redundancy (triple mirroring) = 5.
    The OCR however is saved as a normal file (like any datafile of a tablespace) inside ASM and therefore "inherits" the redundancy of the diskgroup.
    So while you only see 1 file, the blocks of the file are mirrored correspondingly (either double or triple for High).
    Voting disks can only reside in one diskgroup; you cannot create another voting disk in a separate DG. The command to add voting disks is:
    crsctl add css votedisk <path>
    However, this won't work when they are on ASM; you can only replace them into another DG:
    crsctl replace votedisk +DG
    The OCR on the other hand, since it is a normal file, allows additional OCRs to be created in different diskgroups.
    Since it is the heart of the cluster (and also contains backups of the voting disks) it does make sense to "logically" mirror it additionally (even if it already is based on the mirroring of the DG): if one DG cannot be mounted, maybe another one containing a current OCR can. This is a little better than using the OCR backup (which can be up to 4 hours old).
    Hence in my cluster installations I always put the OCR in every diskgroup I have (up to a maximum of 5).
    And since my clusters normally have 3 diskgroups (1 OCRVOT, 1 DATA and 1 FRA), I end up with 3.
    Having an uneven number of OCRs is slightly superior to having an even number (so 3 or 5 vs. 2 or 4); I will skip the explanation of why.
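    A short sketch of what that looks like in practice, assuming the voting files already live in +OCRVOT and using the diskgroup names from the setup above:
    $ ocrconfig -add +DATA      # second OCR location
    $ ocrconfig -add +FRA       # third OCR location
    $ ocrcheck                  # verify that all three OCR locations are registered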
    Regards
    Sebastian

  • Using external redundancy DG for voting files

    Hi all,
    I'm trying to convince a colleague of mine that having a single voting file/disk on an external redundancy ASM disk group is a bad idea, since a logical corruption would mean a painful outage and recovery operation. There is also the possibility that during certain cluster failures only a single cluster node could remain available, since there is only a single voting disk (so no majority voting to decide on the active cluster members). I would greatly appreciate feedback: further arguments, as well as the validity of my own use of the term 'logical corruption' and of the voting argument for deciding on the active cluster members.
    I appreciate your time. Thanks in advance!

    Hi,
    Oracle's recommendation for the OCR files and vote files/disks is ASM redundancy HIGH, i.e. five-way.
    I think you should create an ASM disk group, for example OCRVOTE, with HIGH redundancy,
    then replace the vote disk into the new ASM diskgroup and change the OCR file location to the new ASM diskgroup.
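    A minimal sketch of those steps, assuming 11gR2 and five available disks (the diskgroup name and disk paths are illustrative):
    SQL> CREATE DISKGROUP OCRVOTE HIGH REDUNDANCY
           FAILGROUP FG1 DISK '/dev/asm-disk1'
           FAILGROUP FG2 DISK '/dev/asm-disk2'
           FAILGROUP FG3 DISK '/dev/asm-disk3'
           FAILGROUP FG4 DISK '/dev/asm-disk4'
           FAILGROUP FG5 DISK '/dev/asm-disk5'
         ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    $ crsctl replace votedisk +OCRVOTE     # five voting files, one per failgroup
    $ ocrconfig -add +OCRVOTE              # add an OCR location on the new diskgroup
    $ ocrconfig -delete +OLD_DG            # then remove the old OCR location (name is a placeholder)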
    Regards
    Mahir M. Quluzade
    www.mahir-quluzade.com

  • OCR and Voting Disk

    Hi all,
    What is the difference between the OCR and the voting disk?

    The voting disk is a file that manages information about node membership, and the OCR is a file that manages cluster and RAC database configuration information.
    Voting disk: Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. Basically, all nodes in the RAC cluster register their heartbeat information on these voting disks, and this information decides which nodes are active members of the cluster. The voting disks are also used to check the availability of instances in RAC and to remove unavailable nodes from the cluster. This helps prevent a split-brain condition and keeps the database information intact.
    For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster. What I could understand about the odd number of voting disks is that a node must be able to see more than half of the voting disks to continue functioning; with 2 voting disks, a node that can see only 1 sees exactly half, not a majority. I am still trying to research this concept further.
    OCR (Oracle Cluster Registry): resides on shared storage and maintains information about the cluster configuration and the cluster database, such as which database instances run on which nodes and which services run on which database.
    The OCR is one such component in 10g RAC used to store the cluster configuration information. It is a shared disk component, typically located in a shared raw volume that must be accessible to all nodes in the cluster. The CRSd daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.
    ocrcheck is the command to check the OCR.
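    For a quick look at both components, these standard checks can be run from the Grid home:
    $ ocrcheck                      # shows OCR version, used/free space, locations and integrity
    $ crsctl query css votedisk     # lists the configured voting disks and their state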

  • Multiplexing Voting Disk in Oracle 11gR2 ??

    Dear all,
    I have a single voting disk residing in diskgroup VDOCRDG with external redundancy. I would like to add two more voting disks. Could you please let me know a way to add two more voting disks to this disk group, or should I create a new disk group with normal redundancy?
    Appreciate if you could provide your answer with SQL commands.
    database: Oracle 11gR2, patchset 1
    os: RHEL 5.5
    RAC: two-node RAC
    ASM: voting disk and OCR reside in ASM
    let me know if you need more information.
    thanks
    P

    Just out of curiosity:
    (Obviously this is a doc question)
    is http://tahiti.oracle.com broken for you, so you must ask others to abstract the documentation for free, or don't you understand the line in the Forums Etiquette post
    reading 'Consulting documentation first is highly recommended'?
    I'm getting a bit annoyed by people trying to outsource as much of the task they get paid for to a forum of volunteers.
    Sybrand Bakker
    Senior Oracle DBA

  • Moving from external redundancy to nfs storage

    Hey, I am running GI 11.2.0.3 on SLES 11.
    Currently my voting files are on one ASM diskgroup called +OCR, configured with external redundancy.
    During some failure tests it turned out that the shared storage can become unavailable for more than 40 seconds.
    The Clusterware stack therefore goes down, due to no accessible voting files.
    Hence, I have to move my voting files.
    I created 3 virtual machines on 3 ESX platforms.
    Each VM presents one voting file via NFS as described in: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    So, which steps need to be performed to switch from my diskgroup +OCR to a new configuration with 3 voting files on 3 servers?
    Or can I stay with this diskgroup and add those 3 voting files as quorum files? That would be the easy way.... But is it possible?
    Chris

    Christian wrote:
    So, which steps need to be performed to switch from my diskgroup +OCR to a new configuration with 3 voting files on 3 servers? Or can I stay with this diskgroup and add those 3 voting files as quorum files?
    Hi Chris,
    I do not understand what you are trying to do.
    You have a poor configuration, placing a voting disk on the local disk of each host. Correct me if I'm wrong; this is what I understood. If this is true, this configuration is not valid even for a testing or educational environment.
    All Clusterware files must be in a shared location outside the hosts that are members of the cluster. As a workaround you can create another NFS server (i.e., a new VM) that is not a member of the cluster for this purpose.
    About quorum:
    A quorum failgroup is required only if you are using extended RAC or more than one storage system in your cluster. A quorum ASM disk is an inexpensive solution to the split-brain problems that arise in environments such as extended RAC, or where the cluster has an even number of storage systems (i.e., 2, 4 and so on).
    If you are just trying to move the voting disks from a diskgroup with external redundancy to a diskgroup with normal redundancy, this step is easy:
    Move the voting disks to a temporary diskgroup; this can be an existing diskgroup, or you can create a temporary diskgroup to hold them.
    Then recreate your diskgroup with normal redundancy and move the voting disks back to this new diskgroup.
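    A sketch of that two-step move (the diskgroup names and disk paths are illustrative):
    $ crsctl replace votedisk +TEMP_DG       # park the voting files on an existing diskgroup
    SQL> DROP DISKGROUP OCR INCLUDING CONTENTS;
    SQL> CREATE DISKGROUP OCR NORMAL REDUNDANCY
           FAILGROUP FG1 DISK '/dev/asm-ocr1'
           FAILGROUP FG2 DISK '/dev/asm-ocr2'
           FAILGROUP FG3 DISK '/dev/asm-ocr3';
    $ crsctl replace votedisk +OCR           # move the voting files back to the recreated diskgroup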
    You also have the option of using only NFS to store the voting disks; you don't need ASM (I don't recommend this for production environments).
    You can read it:
    http://levipereira.wordpress.com/2012/01/11/explaining-how-to-store-ocr-voting-disks-and-asm-spfile-on-asm-diskgroup-rac-or-rac-extended/
    Regards,
    Levi Pereira

  • Adding ocrmirror and voting disk on another diskgroup 11gr2 RAC

    Hi Gurus,
    Can I add an ocrmirror on another diskgroup (external redundancy) and the voting disk on another diskgroup (normal redundancy, via replace) in 11gR2 RAC while the databases are up and running?

    Hi
    In Oracle 11gR2 the voting disks are backed up automatically into the OCR as part of any configuration change,
    and the voting disk data is automatically restored to any voting disk that is added.
    You can migrate voting files from non-ASM to ASM without taking down the cluster.
    To move the voting disks into ASM:
    crsctl replace votedisk +diskgroup
    To add an OCR mirror:
    ocrconfig -add +datagroup2
    Oracle manages up to five redundant OCR locations.
    Hope this helps
    Zekeriya Besiroglu
    http://zekeriyabesiroglu.blogspot.com

  • Max of 15 Voting disk ?

    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    What I understood is that:
    1) It is not possible to put the voting disk in multiple diskgroups, as it is for the OCR.
    2) The voting disk copies will be created according to the redundancy of the diskgroup it is in.
    Now I have a couple of questions based on this:
    1) If I create a disk group with external redundancy, does that mean I will have only one voting disk and cannot add any more?
    2) Oracle says that it is possible to create up to 15 voting disks.
    Does that mean I would have one voting disk on each disk (15 in total), provided I have a disk group with normal or high redundancy and the disk group has 15 disks, each in its own failure group?
    Maybe I am being silly here, but if someone can advise me, please do.
    Regards
    Joe

    Hi Joe,
    Jomon Jacob wrote:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    When the voting disks are on an ASM diskgroup, no add option is available: the number of voting disks is determined by the diskgroup redundancy. If more copies are desired, move the voting disks to a diskgroup with higher redundancy. Likewise, no delete option is available; you can only replace the existing voting disk group with another ASM diskgroup. So when the voting disks are on ASM, the option to use is replace:
    crsctl replace votedisk +NEW_DGVOTE
    1) It is not possible to put the voting disk in multiple diskgroups, as it is for the OCR.
    Yes, because the voting files are placed directly on the ASM disks, not in the diskgroup as a file, although they use the configuration of the diskgroup (failgroups, ASM disks, and so on).
    2) The voting disk copies will be created according to the redundancy of the diskgroup it is in.
    Yes. When you move (replace) the voting files to ASM, Oracle takes the configuration of the diskgroup (i.e., its failgroups) and places one voting file on an ASM disk in each failgroup.
    1) If I create a disk group with external redundancy, does that mean I will have only one voting disk and cannot add any more?
    Yes. With external redundancy there is no failgroup concept, because ASM does no mirroring; it behaves like a single failgroup, so you can have only one voting file in this diskgroup.
    2) Oracle says it is possible to create up to 15 voting disks.
    No. 15 voting files are allowed only if you are not storing the voting files on ASM. If you are using ASM, the maximum number of voting files is 5, because Oracle takes the configuration from the diskgroup.
    A high number of voting disks can be useful when you have a big cluster environment with, for example, 5 storage subsystems and 20 hosts in a single cluster: you must set up a voting file on each storage system. But if you are using only one storage system, 3 voting files are enough.
    Usage Note
    You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than 5 voting disks. The maximum number of voting disks that is supported is 15.
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CHEJDHFH
    See this example:
    I configured 7 ASM disks, but Oracle used only 5 of them.
    SQL> CREATE DISKGROUP DG_VOTE HIGH REDUNDANCY
         FAILGROUP STG1 DISK 'ORCL:DG_VOTE01'
         FAILGROUP STG2 DISK 'ORCL:DG_VOTE02'
         FAILGROUP STG3 DISK 'ORCL:DG_VOTE03'
         FAILGROUP STG4 DISK 'ORCL:DG_VOTE04'
         FAILGROUP STG5 DISK 'ORCL:DG_VOTE05'
         FAILGROUP STG6 DISK 'ORCL:DG_VOTE06'
         FAILGROUP STG7 DISK 'ORCL:DG_VOTE07'
       ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    Diskgroup created.
    SQL> ! srvctl start diskgroup -g DG_VOTE -n lnxora02,lnxora03
    $  crsctl replace votedisk +DG_VOTE
    CRS-4256: Updating the profile
    Successful addition of voting disk 427f38b47ff24f52bf1228978354f1b2.
    Successful addition of voting disk 891c4a40caed4f05bfac445b2fef2e14.
    Successful addition of voting disk 5421865636524f5abf008becb19efe0e.
    Successful addition of voting disk a803232576a44f1bbff65ab626f51c9e.
    Successful addition of voting disk 346142ea30574f93bf870a117bea1a39.
    Successful deletion of voting disk 2166953a27a14fcbbf38dae2c4049fa2.
    Successfully replaced voting disk group with +DG_VOTE.
    $ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    1. ONLINE   427f38b47ff24f52bf1228978354f1b2 (ORCL:DG_VOTE01) [DG_VOTE]
    2. ONLINE   891c4a40caed4f05bfac445b2fef2e14 (ORCL:DG_VOTE02) [DG_VOTE]
    3. ONLINE   5421865636524f5abf008becb19efe0e (ORCL:DG_VOTE03) [DG_VOTE]
    4. ONLINE   a803232576a44f1bbff65ab626f51c9e (ORCL:DG_VOTE04) [DG_VOTE]
    5. ONLINE   346142ea30574f93bf870a117bea1a39 (ORCL:DG_VOTE05) [DG_VOTE]
    SQL>
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
                    FROM V$ASM_DISKGROUP
                    WHERE NAME='DG_VOTE');
    NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    DG_VOTE01  ORCL:DG_VOTE01                 MEMBER               STG1                 REGULAR              Y
    DG_VOTE02  ORCL:DG_VOTE02                 MEMBER               STG2                 REGULAR              Y
    DG_VOTE03  ORCL:DG_VOTE03                 MEMBER               STG3                 REGULAR              Y
    DG_VOTE04  ORCL:DG_VOTE04                 MEMBER               STG4                 REGULAR              Y
    DG_VOTE05  ORCL:DG_VOTE05                 MEMBER               STG5                 REGULAR              Y
    DG_VOTE06  ORCL:DG_VOTE06                 MEMBER               STG6                 REGULAR              N
    DG_VOTE07  ORCL:DG_VOTE07                 MEMBER               STG7                 REGULAR              N
    Regards,
    Levi Pereira

  • Multiple Voting Disks Benefits?

    Are there any benefits to having 3 voting disks versus 1 voting disk?
    Is the sole benefit redundancy?
    Running 4 node RAC Cluster (11gR1) on Windows, currently have 1 voting disk, protected by RAID 1.
    Is there any good reason this should be changed to 3 voting disks?

    have 1 voting disk, protected by RAID 1
    You use external voting redundancy... that's great.
    You don't need to use normal voting redundancy (3 voting disks).
    By the way, I hope you back up the voting disk after the RAC setup is complete and after you delete or add nodes ;)
    On Windows, use "ocopy".
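    A sketch of that manual backup on a pre-11.2 Windows cluster (the device and backup paths are illustrative):
    C:\> crsctl query css votedisk                  # find the voting disk device first
    C:\> ocopy \\.\votedsk1 o:\backup\votedsk1.bak  # copy the raw voting partition to a backup file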

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage systems and I created three LUNs in each of them, so 6 LUNs in total are visible on both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks for OCR_VOTE at the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason why I am asking is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks from storage 1 and the other containing the three disks from the other storage.
    I believe that you will understand the manner and help me on choosing proper way.
    Thank you.

    Hi,
    You have 2 storage systems (hardware).
    You will need to configure a quorum ASM disk to store a voting disk,
    because if you lose half or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e., 1, 3 or 5), one voting disk per ASM disk, so one storage system will hold more voting disks than the other
    (e.g., with 3 voting disks, 2 are stored on stg-A and 1 on stg-B).
    If the storage that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down, no matter whether the other storage is still online.
    It will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same as extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
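    A minimal sketch of such a setup, with a normal redundancy diskgroup spanning both storage systems and the tie-breaking third vote on an NFS quorum disk (disk paths and names are illustrative):
    SQL> CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
           FAILGROUP STG_A DISK '/dev/stgA-disk1'
           FAILGROUP STG_B DISK '/dev/stgB-disk1'
           QUORUM FAILGROUP NFS_Q DISK '/voting_nfs/vote01'
         ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    $ crsctl replace votedisk +OCRVOTE   # one voting file each on stg-A, stg-B and the NFS quorum disk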

  • Diagnosing an ASM space issue for a primary and a standby database instance with external redundancy.

    I've received an alert from Enterprise manager saying "Disk Group DATA_SID requires rebalance because at least one disk
    is low on space". My colleague who I would go to with this question is unavailable, so this is a learning opportunity
    for me. So far google and Oracle documentation have provided lots of information, but nothing that answers my questions.
    I've run the following query on both the primary and standby databases ASM instances:
    select name, disk_number, sector_size,os_mb, total_mb, free_mb, redundancy from v$asm_disk;
    On the primary I get 4810M Free space and 18431M Total Space
    on the standby I get 1248M Free space and 18431M Total Space -- this is the one that complained via OEM
    When I run the following query in the database instance:
    select sum(bytes)/1024/1024 MB from dba_segments;
    I get 3736.75M as a result.
    My questions are:
    1. Will OEM's suggestion to rebalance the disk actually help in this situation since the instance is set up with external redundancy?
    2. If I've got 18G of space and only 3.7G of data, why is OEM complaining?
    3. How can I reclaim what I presume is allocated but unused space in my problem disk group?
    4. How can I determine what extra data the standby has that the primary doesn't since both have the same total space allocation, but different amounts of free space?

    Thank you for the reply. That link is very good.
    We are on version 11.1 of the database. Linux is OEL 5.6.
    So, looking at the portion of the link that refers to 'Add Standby database and Instances to the OCR': if we use SRVCTL to give the STANDBY the role of 'physical_standby' and the start option of 'mount', what effect will that have if the STANDBY becomes our PRIMARY?
    Would these database settings need to be modified manually with SRVCTL each time?
    We understand why the instance is not starting when the node is rebooted; we are looking for a best practice for how this is implemented.
    Thank you.

  • Regarding Voting disk recovery scenarios

    Hi,
    For years I have read about RAC and the voting disk, and it is said that each node should be able to access more than half of the voting disks, but I never got a chance to work through the scenarios below. If someone has practically done them or has good knowledge, please let me know.
    1) If I have 5 voting disks and 2 of them get corrupted or deleted, will the cluster keep working properly? If I reboot my system or restart the cluster, will it still work fine or not?
    2) In the above scenario, what will happen with 3 voting disks deleted or corrupted?
    3) If I have 2 OCRs and one gets deleted or corrupted, will the system run fine?

    Aman,
    During startup, the Clusterware requires a majority of votes to start the CSS daemon.
    The majority is counted against how many voting disks were configured, not how many remain alive.
    Below is a test using 3 voting disks (11.2.0.3).
    alert.log
    [cssd(26233)]CRS-1705:Found 1 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/app/11.2.0/grid/log/node11g02/cssd/ocssd.log
    ocssd.log
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskVerify: Successful discovery of 1 disks
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteVFDiscovery: Completing voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from discovered to pending disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from pending to configured disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvVerifyCommittedConfigVFs: Insufficient voting files found, found 1 of 3 configured, needed 2 voting files
    With 2 voting files online there is no problem; the Clusterware starts with a warning in the logs.
    P.S. If a diskgroup (OCR_VOTE) was configured with high redundancy (5 ASM disks, each in its own failgroup), even that diskgroup cannot survive the failure of 3 ASM disks: the diskgroup goes down, and if the only OCR (no mirror) is on that diskgroup, the whole Clusterware stack goes down.
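    A routine way to verify the voting file states before and after such a test (standard commands, not tied to this particular log):
    $ crsctl query css votedisk     # lists each configured voting file and its ONLINE/OFFLINE state
    $ crsctl check crs              # confirms the Clusterware daemons are healthy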

  • Cluster continues with less than half voting disks

    I wanted to try voting disk recovery. I have the voting files on a diskgroup with normal redundancy. I corrupted two of the disks containing voting files using the dd command. I was expecting the Clusterware to go down, as only 1 voting disk (< half of the total of 3) was available, but surprisingly the Clusterware kept running. I even waited quite some time, but to no avail. I did not get any messages in the alert log or the ocssd log. I would be thankful if experts could give more input on this.
    Regards

    Hi, you can check the MetaLink note
    OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
    It says:
    For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in the OCR as part of any configuration change.
    The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:
    1. Configuration parameters, for example misscount, have been added or modified.
    2. After performing voting disk add or delete operations.
    The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.
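    In practice this means a lost 11.2 voting file kept on ASM can be recreated from the copy held in the OCR simply by replacing it (the diskgroup name is illustrative; if the stack cannot start normally, it may first need to be started in exclusive mode, as discussed in the next message):
    $ crsctl replace votedisk +OCRVOTE   # voting file contents are restored automatically from the OCR backup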

  • Startup of Clusterware with missing voting disk

    Hello,
    in our environment we have a 2 node cluster.
    The 2 nodes and 2 SAN storages are in different rooms.
    Voting files for Clusterware are in ASM.
    Additionally we have a third voting disk on an NFS server (configured as in this description: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf)
    The Quorum flag is on the disk that is on NFS.
    The diskgroup is with normal redundancy.
    Clusterware keeps running when one of the VDs gets lost (e.g. storage failure).
    So far so good.
    But when I have to restart the Clusterware (e.g., after a reboot of a node) while the VD is still missing, the Clusterware does not come up.
    I did not find any indication of whether this is the planned behaviour of the Clusterware or whether I missed a detail.
    From my point of view it should be possible to start the Clusterware as long as the majority of the VDs is available.
    Thanks.

    Hi,
    actually what you see is expected (especially in a stretched cluster environment with 2 failgroups and 1 quorum failgroup).
    It has to do with how ASM handles a disk failure and does the mirroring (and the strange "issue" that you need a third failgroup for the votedisks).
    So before looking at this special case, let's look at how ASM normally treats a diskgroup:
    A diskgroup can only be mounted in normal mode if all disks of the diskgroup are online. If a disk is missing, ASM will not allow you to "normally" mount the diskgroup before the error situation is solved. If a disk whose contents can be mirrored to other disks is lost, then ASM will be able to restore full redundancy and will allow you to mount the diskgroup. If this is not the case, ASM expects the user to say what it should do: the administrator can issue an "alter diskgroup mount force" to tell ASM to mount with disks missing, even though it cannot keep up the required redundancy. This then allows the administrator to correct the error (or replace failed disks/failgroups). While ASM has the diskgroup mounted, the loss of a failgroup will not result in a dismount of the diskgroup.
    The same holds true for the diskgroup containing the voting disks. So what you see (it continues to run, but cannot restart) is pretty much the same as for a normal diskgroup: if a disk is lost and its contents do not get relocated (for example, if the quorum failgroup fails there is no further failgroup to relocate the third vote to), the cluster continues to run, but ASM will not be able to automatically remount the diskgroup in normal mode after a restart.
    To bring the cluster back online, manual intervention is required. Start the cluster in exclusive mode:
    crsctl start crs -excl
    Then connect to ASM and do a
    alter diskgroup <dgname> mount force
    Then resolve the error (for example, add another disk to another failgroup so that the data can be remirrored and the failed disk can be dropped).
    After that, a normal startup will be possible again.
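    Put together, a recovery sketch might look like this (the diskgroup and disk names are illustrative, and the exact repair step depends on which disk failed):
    $ crsctl stop crs -f                         # make sure the stack is fully down
    $ crsctl start crs -excl                     # exclusive mode starts without a votedisk majority
    SQL> ALTER DISKGROUP OCRVOTE MOUNT FORCE;    -- mount despite the missing disk
    SQL> ALTER DISKGROUP OCRVOTE ADD QUORUM FAILGROUP NFS_Q DISK '/voting_nfs/vote02';
    SQL> ALTER DISKGROUP OCRVOTE DROP QUORUM DISK VOTE01 FORCE;  -- drop the failed quorum disk
    $ crsctl stop crs
    $ crsctl start crs                           # normal startup succeeds again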
    Regards
    Sebastian

  • Advantage of having OCR and Voting disk on ASM

    What are the advantages of putting the OCR and voting disk on ASM from 11g onwards?

    Well, other than the sharing aspect, you don't have to go RAIDing an additional shared disk either. If you have properly configured ASM, redundancy is built in as well, either soft or hard.
    I'm not sure what other advantages you may need. There's the I/O aspect of ASM, but that's not really an advantage per se for the OCR and voting files. I may be contradicted by others, but I've never seen a performance hit of any kind attributed to the OCR and voting disk being on non-ASM storage.
