Only ONE voting disk created on EXTERNAL redundancy

Hi All,
I am wondering why, when configuring 5 LUNs, then creating one DG with these 5 LUNs...
then creating the OCR/Vote disk, the GRID installer only creates one voting disk?
Of course, I can always create another voting disk in a separate DG...
any ideas?
of course, when I try "ocrconfig -add +OCR_DG" I get a message that says the OCR is already configured....
thanks

Hi JCGO,
you are mixing up voting disks and OCR.
Voting disks are placed directly on the disks of an ASM diskgroup (their locations are recorded in the disk headers), not as regular ASM files. The number of voting disks is based on the redundancy of the diskgroup:
External (no mirroring) = 1; Normal (two-way mirroring) = 3; High (three-way mirroring) = 5.
The OCR however is saved as a normal file (like any datafile of a tablespace) inside ASM and therefore "inherits" the redundancy of the diskgroup.
So while you only see 1 file, the blocks of the file are mirrored correspondingly (either double or triple (High)).
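The redundancy-to-count mapping can be sketched in a few lines (an illustrative sketch only, not an Oracle API; the names are made up):

```python
# Illustrative sketch (not an Oracle API): how many voting files
# Clusterware keeps for a given ASM diskgroup redundancy (11.2 behavior).
VOTING_FILES_BY_REDUNDANCY = {
    "EXTERNAL": 1,  # no ASM mirroring: a single voting file
    "NORMAL": 3,    # two-way mirroring: three voting files
    "HIGH": 5,      # three-way mirroring: five voting files
}

def voting_files(redundancy: str) -> int:
    """Number of voting files a diskgroup of this redundancy holds."""
    return VOTING_FILES_BY_REDUNDANCY[redundancy.upper()]

print(voting_files("EXTERNAL"))  # 1
print(voting_files("HIGH"))      # 5
```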
Voting disks can only reside in one diskgroup. You cannot create another voting disk in a separate DG. The command to add voting disks is:
crsctl add css votedisk <path>
However, this won't work when the voting disks are on ASM; you can only replace them into another DG:
crsctl replace votedisk +DG
The OCR, on the other hand, being a normal file, allows additional OCR copies to be created in different diskgroups.
Since it is the heart of the cluster (and also contains backups of the voting disks), it does make sense to "logically" mirror it additionally (even if it already benefits from the mirroring of the DG). If one DG cannot be mounted, maybe another one containing a current OCR can. This is a little better than using the OCR backup (which can be up to 4 hours old).
Hence in my cluster installation I always put the OCR in every diskgroup I have (up to a number of 5).
And since my clusters normally have 3 diskgroups (1 OCRVOT, 1 DATA and 1 FRA), I end up with 3.
Having an uneven number of OCRs is a little superior to having an even number (so 3 or 5 vs. 2 or 4). I'll skip the explanation of why.
Regards
Sebastian

Similar Messages

  • Why only one voting disk showing while vote diskgroup has three disks?

    Hi, experts,
    OUr diskgroup OCR_VOTE did have three disks, see below:
    SQL> select group_number, name, path from v$asm_disk;
    GROUP_NUMBER NAME PATH
    2 ASMDISK01 ORCL:ASMDISK01
    2 ASMDISK02 ORCL:ASMDISK02
    2 ASMDISK03 ORCL:ASMDISK03
    2 ASMDISK04 ORCL:ASMDISK04
    2 ASMDISK05 ORCL:ASMDISK05
    2 ASMDISK06 ORCL:ASMDISK06
    3 ASMDISK07 ORCL:ASMDISK07
    3 ASMDISK08 ORCL:ASMDISK08
    3 ASMDISK09 ORCL:ASMDISK09
    1 OCR_VOTE01 ORCL:OCR_VOTE01
    1 OCR_VOTE02 ORCL:OCR_VOTE02
    1 OCR_VOTE03 ORCL:OCR_VOTE03
However, only one votedisk exists:
    +ASM1$ crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 4722e13b38c14fa0bfcce2db1dcb7b84 (ORCL:OCR_VOTE01) [OCR_VOTE]
    Located 1 voting disk(s).
How do I get three votedisks?
    Thanks in advance.

    Thank you for response.
    I checked asmcmd, and under voting diskgroup, there are two folders:
    Y ASMPARAMETERFILE/
    Y OCRFILE/
    ASMCMD> cd ASMPARAMETERFILE/
    ASMPARAMETERFILE UNPROT COARSE FEB 14 2012 Y REGISTRY.253.775250583
    ASMCMD> cd ocrfile
    OCRFILE UNPROT COARSE NOV 28 01:00:00 Y REGISTRY.255.775250583
Our DB spfile, per "show parameter spfile", is stored in the DAT diskgroup instead of the vote diskgroup.
    So my questions here are:
    1). since spfile in database shows:
    SQL> show parameter spfile
    spfile string +DAT/prd/spfileprd.ora
The spfile under the vote diskgroup must be for the ASM instance. How do I move that? It seems I have to bounce ASM to move it somewhere else, and that will cause some disturbance to the services.
    2). is that under OCR_file, the ocr file including voting disks?
    Since OCR files are mirrored like this:
    ocrcheck
    Status of Oracle Cluster Registry is as follows :
    ID : 546691893
    Device/File Name : +OCR_VOTE
    Device/File integrity check succeeded
    Device/File Name : +FRA
    Device/File integrity check succeeded
Does that mean I can delete the one in the vote diskgroup, and the one on FRA will still serve the purpose?
    Thanks again.

  • Can i install ops on my win2000 professional pc with only one hard disk

    hi all:
    i am trying to install oracle parallel server on win2000 pro
    pc just to test. i have only one hard disk.
    there's a primary partition which contains logic drive c: for
    my system (win2000). And i created another extended partition
    for the set up ops. but when i run clustersetup.exe , the error
    'oracle partition not found ' show up.
the question is: does installing OPS need some extra hardware, for example another hard disk? If not, how can I set up the Oracle
partition?
I have tried many times and the documentation from Oracle helped me little.
    regards
    daniel wang

    Hello,
I think you plan to install RAC on virtual machines...
    Take a look on http://www.oracle.com/technology/products/database/clusterware/pdf/oracle_rac_in_oracle_vm_environments.pdf
    Also http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php
    Regards,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • ORA-19025: EXTRACTVALUE returns value of only one nod- When creating index

    Hi,
    My table have 2 columns . Columns are
    Key varchar2(50)
    Attribute XMLType
    Below is my sample XML which is stored in Attribute column.
    <resource>
    <lang>
    <lc>FRE</lc>
    <lc>ARA</lc>
    </lang>
    <site>
    <sp>257204</sp>
    <sp>3664</sp>
    </site>
    <page>
    <pp>64272714</pp>
    <pp>64031775</pp>
    <pp>256635</pp>
    </page>
    <key>
    <kt>1</kt>
    </key>
    </resource>
When I try to create an index on the XML column, I get the above exception. Kindly help me out because I'm not familiar with Oracle.
    Query is
    create index XMLTest_ind on language_resource_wrapper
         (extractValue(attribute, '/resource/site/sp') )
I need to index the sp and pp elements in the above XML.

    Thanks for the suggestion...
The problem is that in one row, information in a data dump about two different things was combined into one, due to a gap in the input file. The starting delimiter for the second thing and the ending delimiter for the first were missing. The result was that many entity tags that should have been unique were duplicated, in that particular row.
    I was able to guard the view against future such errors with occurrences using this
    in the WHERE clause ...
AND NOT ( XMLCLOB LIKE '%<TAG1>%<TAG1>%'
OR XMLCLOB LIKE '%<TAG2>%<TAG2>%'
OR XMLCLOB LIKE '%<TAG3>%<TAG3>%'
/* ... Repeated once for each tag used with extractvalue */
OR XMLCLOB LIKE '%<TAGN>%<TAGN>%' )
It filters out any row with two identical starting tags where only one is expected.
    I am pleased to see that the suggestion got through.

  • Voting disk external redundancy

    Hey,
    question:
I've got a database which I haven't installed myself.
A query for the votedisk shows only one configured votedisk for my 2-node RAC cluster.
So I presume that external redundancy was set up during installation of the clusterware.
A few weeks ago I hit an unpublished error, where Oracle sends a message that the voting disk is corrupted - which is a false message.
But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch - which I can't apply...)?
How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
or would this give back an error, because only one voting disk is configurable?

    Hi Christian,
How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this give back an error, because only one voting disk is configurable?
I will assume you are using version 11.1 or earlier.
What determines whether your voting disk setup is external redundancy or normal redundancy is the number of voting disks your environment has.
If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure. At least Oracle thinks so).
If you have 3 or more voting disks (because you need an odd number), you are using normal redundancy.
Then you can add more voting disks to your environment without worrying about the external redundancy previously configured.
External redundancy or normal redundancy is determined only by the number of voting disks you have; there is no configuration switch to change external redundancy to normal redundancy. As I have said, it is only the number of voting disks you have configured in your environment.
    Make sure you have a fresh backup of your voting disk.
    Warning: The voting disk backup is performed manually in the environment UNIX/Linux with the dd command.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Feb 7, 2011 12:09 PM

  • Max of 15 Voting disk ?

    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    What I understood is that:
    1) It is not possible to put Voting disk in multiple diskgroups as it is for OCR.
    2) The Voting disk copy will be created according to the redundnacy of the Diskgroup where it is in.
    No I have a couple question based on this:
1) If I create a disk group with external redundancy, does that mean that I will have only one voting disk? I cannot add any more?
2) Oracle says that it is possible to create up to 15 voting disks.
So does that mean that I will have one voting disk (total 15) on each disk, provided that I have a disk group with normal or high redundancy and the disk group has 15 disks, each disk in its own failure group?
Maybe I am being silly here, but if someone can advise me, please do.
    Regards
    Joe

    Hi Joe,
    Jomon Jacob wrote:
    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
When the votedisk is on an ASM diskgroup, no add option is available. The number of votedisks is determined by the diskgroup redundancy. If more copies of the votedisk are desired, you can move the votedisk to a diskgroup with higher redundancy.
When the votedisk is on ASM, no delete option is available; you can only replace the existing votedisk group with another ASM diskgroup.
When the votedisk is on ASM, the option to be used is replace:
crsctl replace votedisk +NEW_DGVOTE
    What I understood is that:
1) It is not possible to put the voting disk in multiple diskgroups as it is for the OCR.
Yes... because the voting file is placed directly on the ASM disks, not in the diskgroup as a file, although it uses the configuration of the diskgroup (e.g. failgroups, ASM disks, and so on).
2) The voting disk copies will be created according to the redundancy of the diskgroup it is in.
When you move (replace) the voting file to ASM, Oracle will take the configuration of the diskgroup (i.e. failgroups) and place one voting file on one ASM disk in each failgroup.
Now I have a couple of questions based on this:
1) If I create a disk group with external redundancy, does that mean that I will have only one voting disk? I cannot add any more?
Yes... with external redundancy you don't have the concept of failgroups, because you do not use mirroring by ASM. So it's like one failgroup, and you can have only one voting file in this diskgroup.
2) Oracle says that it is possible to create up to 15 voting disks. So does that mean that I will have one voting disk (total 15) on each disk?
No. 15 voting files are allowed only if you are not storing the voting files on ASM. If you are using ASM, the maximum number of voting files is 5, because Oracle will take the configuration of the diskgroup.
Using a high number of voting disks can be useful when you have a big cluster environment with (e.g.) 5 storage subsystems and 20 hosts in a single cluster. You should set up a voting file in each storage ... but if you're using only one storage, 3 voting files are enough.
    Usage Note
    You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than 5 voting disks. The maximum number of voting disks that is supported is 15.
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CHEJDHFH
    See this example;
    I configured 7 ASM DISK but ORACLE used only 5 ASM DISK.
    SQL> CREATE DISKGROUP DG_VOTE HIGH REDUNDANCY
         FAILGROUP STG1 DISK 'ORCL:DG_VOTE01'
         FAILGROUP STG2 DISK 'ORCL:DG_VOTE02'
         FAILGROUP STG3 DISK 'ORCL:DG_VOTE03'
         FAILGROUP STG4 DISK 'ORCL:DG_VOTE04'
         FAILGROUP STG5 DISK 'ORCL:DG_VOTE05'
         FAILGROUP STG6 DISK 'ORCL:DG_VOTE06'
         FAILGROUP STG7 DISK 'ORCL:DG_VOTE07'
       ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    Diskgroup created.
    SQL> ! srvctl start diskgroup -g DG_VOTE -n lnxora02,lnxora03
    $  crsctl replace votedisk +DG_VOTE
    CRS-4256: Updating the profile
    Successful addition of voting disk 427f38b47ff24f52bf1228978354f1b2.
    Successful addition of voting disk 891c4a40caed4f05bfac445b2fef2e14.
    Successful addition of voting disk 5421865636524f5abf008becb19efe0e.
    Successful addition of voting disk a803232576a44f1bbff65ab626f51c9e.
    Successful addition of voting disk 346142ea30574f93bf870a117bea1a39.
    Successful deletion of voting disk 2166953a27a14fcbbf38dae2c4049fa2.
    Successfully replaced voting disk group with +DG_VOTE.
    $ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    1. ONLINE   427f38b47ff24f52bf1228978354f1b2 (ORCL:DG_VOTE01) [DG_VOTE]
    2. ONLINE   891c4a40caed4f05bfac445b2fef2e14 (ORCL:DG_VOTE02) [DG_VOTE]
    3. ONLINE   5421865636524f5abf008becb19efe0e (ORCL:DG_VOTE03) [DG_VOTE]
    4. ONLINE   a803232576a44f1bbff65ab626f51c9e (ORCL:DG_VOTE04) [DG_VOTE]
    5. ONLINE   346142ea30574f93bf870a117bea1a39 (ORCL:DG_VOTE05) [DG_VOTE]
    SQL >
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
                    FROM V$ASM_DISKGROUP
                    WHERE NAME='DG_VOTE');
    NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    DG_VOTE01  ORCL:DG_VOTE01                 MEMBER               STG1                 REGULAR              Y
    DG_VOTE02  ORCL:DG_VOTE02                 MEMBER               STG2                 REGULAR              Y
    DG_VOTE03  ORCL:DG_VOTE03                 MEMBER               STG3                 REGULAR              Y
    DG_VOTE04  ORCL:DG_VOTE04                 MEMBER               STG4                 REGULAR              Y
    DG_VOTE05  ORCL:DG_VOTE05                 MEMBER               STG5                 REGULAR              Y
    DG_VOTE06  ORCL:DG_VOTE06                 MEMBER               STG6                 REGULAR              N
DG_VOTE07  ORCL:DG_VOTE07                 MEMBER               STG7                 REGULAR              N
Regards,
    Levi Pereira
    Edited by: Levi Pereira on Jan 5, 2012 6:01 PM

  • Minimum number of voting disks?

    Hi,
    Is there is a minimum number of voting disks?
    does that depend on the RAC nodes?
    If I have 4 nodes what is the optimal voting disks?
    Thanks

    user9202785 wrote:
    Hi,
    Is there is a minimum number of voting disks?
    does that depend on the RAC nodes?
    If I have 4 nodes what is the optimal voting disks?
Thanks
Oracle recommends 3 voting disks, but you can install the clusterware with one voting disk by selecting external redundancy during the clusterware installation.
The number of voting disks does not depend on the number of RAC nodes. You can have up to a maximum of 32 voting disks, whether you have 2 or 100 RAC nodes.
    The Oracle Clusterware is comprised primarily of two components: the voting disk and the OCR (Oracle Cluster Registry). The voting disk is nothing but a file that contains and manages information of all the node memberships and the OCR is a file that manages the cluster and RAC configuration.
    refer : how to add/remove/repace voting disk in the below link:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html.
    if voting disk fails: refer
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html.
    also refer Oracle® Database 2 Day + Real Application Clusters Guide 10g Release 2 (10.2)
    read the below ,hope this will help you.
    1) Why do we have to create odd number of voting disk?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured. (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: For best availability, the 3 voting files should be physically separate disks. It is recommended to use an odd number as 4 disks will not be any more highly available than 3 disks, 1/2 of 3 is 1.5...rounded to 2, 1/2 of 4 is 2, once we lose 2 disks, our cluster will fail with both 4 voting disks or 3 voting disks.
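The 2n+1 arithmetic above can be checked with a short sketch (illustrative only; the function names are made up):

```python
def disks_needed(tolerated_failures: int) -> int:
    """Voting disks required to survive n simultaneous disk failures: 2n+1."""
    return 2 * tolerated_failures + 1

def tolerated_failures(disks: int) -> int:
    """Failures survivable with a given number of voting disks:
    a node must still see a strict majority, i.e. more than half."""
    majority = disks // 2 + 1
    return disks - majority

print(disks_needed(1))        # 3
print(tolerated_failures(3))  # 1
print(tolerated_failures(4))  # 1 -> 4 disks are no more available than 3
```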
    2) Does the cluster actually check for the vote count before node eviction? If yes, could you expain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentaion note that says that "each node should be able to see more than half of the voting disks at any time"?
The answer to the first question itself provides the answer to this question too.
(Answered by: Boochi)
refer: Question on voting disk

  • Always ODD number of Voting disks.

I would like to know the answer to a basic question: why do we always configure an ODD number of voting disks in a RAC environment?
Also one more question: why has Oracle not provided a shared undo tablespace between nodes? Why do we need to configure separate undo tablespaces and redo log files for each node?
Please clear up my doubts if possible, as these basic questions are causing me a lot of inconvenience.
    -Yasser

    The odd number of voting disks is important because in order to determine survival, the clusterware has to decide which instances should be evicted. If you had an even number of voting disks (2) then it wouldn't help much if after/during a split each instance can still access only one voting disk - in that case no instance could be more significant than the other, no way to determine which side of the split brain is the 'right' one.
    As for redo: since there is a LGWR process running on each instance, it is just much easier (and performs better) to just give each instance (and LGWR) its own set of redo files. No worries about who can write to these files at any given time. The only other solutions would have been to either
    - transfer to contents of the redo log buffer between instances and have one LGWR be the only one to physically write files (sounds like a lot of work to me)
- synchronize access to one shared set of redo log files (a lot of locking and overhead)
I can't think of a reason why each instance needs its own undo tablespace. But I am sure it makes a lot of sense. Maybe contention when looking for free space? Maybe the fact that an active undo segment is always tied to a specific session on that instance?
    Bjoern

  • Recovering Voting Disk

    Hello,
    This is a general know-how question for Oracle 11.2.0.3 RAC.
    Assuming I have only one voting disk with no mirrors and it has got corrupted. How do I recover it?
    Regards,
    Suddhasatwa

Can you paste here the output of:
crsctl query css votedisk
crsctl stat res -t
If you are sure your vote disk is corrupted, you can use the steps below to recover it.
1. Stop CRS on all the nodes (if it does not stop, kill the ohasd process and retry):
crsctl stop crs -f
2. Start CRS in exclusive mode on one of the nodes (host01):
crsctl start crs -excl
3. Start the ASM instance on host01 using a pfile:
echo INSTANCE_TYPE=ASM >> /u01/app/oracle/init+ASM1.ora
chown grid:oinstall /u01/app/oracle/init+ASM1.ora
SQL> startup pfile='/u01/app/oracle/init+ASM1.ora';
4. Create a new diskgroup votedg.
5. Move the voting disk to the votedg diskgroup – the voting disk is automatically recovered using the latest available backup of the OCR:
crsctl replace votedisk +votedg
6. Stop CRS on host01 (it was running in exclusive mode):
crsctl stop crs
7. Restart CRS on host01:
crsctl start crs
8. Start the cluster on all the nodes and check that it is running:
crsctl start cluster -all
crsctl stat res -t
Regards
    Mahir M. Quluzade
    http://www.mahir-quluzade.com

  • Using external redundancy DG for voting files

    Hi all,
I'm trying to convince a colleague of mine that having a single voting file/disk on an external redundancy ASM disk group is a bad idea, since a logical corruption would mean a painful outage and recovery operation. There is also the possibility that during certain cluster failures only a single cluster node would survive, since there is only a single voting disk (so no majority voting to decide on the active cluster members). I would greatly appreciate some feedback on further arguments, as well as on the validity of my own use of the term 'logical corruption' and of the voting to decide on the active cluster members.
    I appreciate your time. Thanks in advance!

    Hi,
For the OCR and voting files, Oracle recommends an ASM diskgroup with HIGH redundancy, which gives you 5 voting files.
I think you should create an ASM disk group, for example OCRVOTE, with HIGH redundancy.
Then replace the voting disk into the new ASM diskgroup, and change the OCR file location to the new ASM diskgroup.
    Regards
    Mahir M. Quluzade
    www.mahir-quluzade.com

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storages, and I created three LUNs in each storage, so 6 LUNs in total visible to both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks for OCR_VOTE at the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
The reason I am asking is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks from storage 1 and the other containing the remaining three disks from the second storage.
I believe that you will understand what I mean and help me choose the proper way.
    Thank you.

    Hi,
You have 2 storage arrays (H/W).
You will need to configure a quorum ASM disk to store a voting disk,
because if you lose half or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
You must have an odd number of voting disks (i.e. 1, 3, or 5), one voting disk per ASM disk, so one storage will hold more voting disks than the other
(e.g. with 3 voting disks, 2 are stored in stg-A and 1 voting disk is stored in stg-B).
If the storage that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down. It does not matter that you have another storage online.
    It will happen:
    https://forums.oracle.com/message/9681811#9681811
You must configure your Clusterware the same way as an extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
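The failure mode described above (losing the storage that holds the majority of the voting disks) can be simulated with a small sketch (hypothetical, for illustration only):

```python
def cluster_survives(votes_per_storage: dict, failed_storage: str) -> bool:
    """A node must still access a strict majority (more than half)
    of all voting disks for the cluster to stay up."""
    total = sum(votes_per_storage.values())
    remaining = total - votes_per_storage[failed_storage]
    return remaining > total / 2

layout = {"stg-A": 2, "stg-B": 1}          # 3 voting disks across 2 storages
print(cluster_survives(layout, "stg-B"))   # True: 2 of 3 still visible
print(cluster_survives(layout, "stg-A"))   # False: only 1 of 3 left
```

This is exactly why a third quorum vote (e.g. on NFS, as in the linked paper) is needed when you have only two storages.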

  • Cluster continues with less than half voting disks

I wanted to try voting disk recovery. I have the voting file on a diskgroup with normal redundancy. I corrupted two of the disks containing voting disks using the dd command. I was expecting the clusterware to go down, as only 1 voting disk (< half of the total of 3) was available, but surprisingly the clusterware kept running. I even waited for quite some time, but to no avail. I did not have any messages in the alert log or the ocssd log. I would be thankful if experts could give more input on this.
    Regards

Hi, you can check the MOS note:
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
It says:
For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in the OCR as part of any configuration change.
The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:
1. Configuration parameters, for example misscount, have been added or modified.
2. After performing voting disk add or delete operations.
The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.

  • Voting disks

Voting disks are accessed by every node about once per second, so their performance is not very critical. You shouldn't use 2 voting disks - use either 1 or 3 (5 is also possible but usually doesn't make sense). The reason is that a majority of the voting disks must be available for the cluster to operate. If one voting disk out of three fails, the other 2 (a majority) are still available. If you have two voting disks, the majority is two, and if one voting disk fails, the cluster goes down. So with two voting disks availability is even lower, as it's enough for one of the two disks to fail.
I did not get the concept of majority here... Does anyone have a better explanation?

Actually, there are about 3 IOs per node per second - 2 reads + 1 write.
    Oracle Clusterware requires that majority of voting disks to be accessible. Majority means more than a half. If majority is not accessible (i.e. half or more are not available) then the node leaves the cluster (read evicts itself).
    Example:
    1 voting disk - if it not available - node leaves the cluster.
2 voting disks - if one is not available (this is half) then the node leaves the cluster. This is because the majority of 2 is 2 (more than a half, which is one). I.e. a 2-voting-disk configuration doesn't provide more resiliency than 1.
3 voting disks - majority is 2. Thus, if one voting disk is unavailable, the node stays in the cluster. Consequently, you have higher availability than with 1.
4 voting disks - majority is 3. Resiliency is the same as with 3 voting disks. I.e. you can sustain the loss of only ONE voting disk out of four, so it doesn't make sense to use one more voting disk in this case.
5 voting disks - majority is 3 and you can lose 2 voting disks without effect on the cluster. Thus, it's more resilient than 3.
    And so on.
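The list above can be reproduced with a tiny loop (an illustrative sketch of the majority rule, nothing more):

```python
# Sketch: majority = more than half of the voting disks; a node stays
# in the cluster only while it can access a majority of them.
for disks in range(1, 6):
    majority = disks // 2 + 1          # strict majority
    survivable = disks - majority      # disks you can afford to lose
    print(f"{disks} voting disk(s): majority={majority}, can lose {survivable}")
```

Running it shows that 2 disks tolerate no failures (same as 1), and 4 tolerate only one (same as 3), which is why odd counts are recommended.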

  • Does all voting disks in n node rac have same data

    Hi,
    I heard that Oracle/css access more than half of the voting disks, just had a question on that
    1) Why does Oracle access more than half of the voting disks?
    2) Why we keep odd number of voting disks?
    3) Does Oracle keep same information in all the voting disks ? is it like control or redofile multiplexing?
    4) Why we have only 2 ocr and not more than that?
    Regards
    ID

    Hi,
    1) Why does Oracle access more than half of the voting disks?
To join the cluster, more than half of the voting disks must be accessible to the cluster node. Oracle made this restriction so that even in the worst case all nodes have access to one common disk. Let's try to understand with the simple classical example of a two-node cluster, node1 and node2, with two voting disks, vote1 and vote2, and assume vote1 is accessible to node1 only and vote2 is accessible to node2 only. If Oracle allowed the nodes to join the cluster bypassing this restriction, a conflict would occur, as both nodes would be writing information to different voting disks. With this restriction, Oracle makes sure that one disk is commonly accessible to all the nodes. For example, in the case of three voting disks, at least two must be accessible; if two voting disks are accessible, then one disk will be commonly accessible to all the nodes.
    2) Why we keep odd number of voting disks?
I already answered this question indirectly with the answer to your first question. More than half of the voting disks must be accessible, so you could configure three, or the next even number, i.e. four, but the number of voting disk failures you can tolerate remains the same: with three, the failure of one voting disk can be tolerated, and the same holds with four voting disks.
    3) Does Oracle keep same information in all the voting disks ? is it like control or redofile multiplexing?
    Yes, Clusterware maintains same information in all voting disks its just the multiplex copy.
    4) Why we have only 2 ocr and not more than that?
We can configure up to five mirrored OCR locations. Here is an excerpt of my ocr.loc file:
    [root@host01 ~]# cat /etc/oracle/ocr.loc
    #Device/file getting replaced by device /dev/sdb13
    ocrconfig_loc=+DATA
    ocrmirrorconfig_loc=/dev/sdb15
    ocrconfig_loc3=/dev/sdb14
    ocrconfig_loc4=/dev/sdb13
local_only=false
[root@host01 ~]# ocrconfig -add /dev/sdb12
[root@host01 ~]# ocrconfig -add /dev/sdb11
    PROT-27: Cannot configure another Oracle Cluster Registry location because the maximum number of Oracle Cluster Registry locations (5) have been configured
    Thanks

  • Split one message and create N Files on target side based on FieldName

    Hi Experts
    How to split one message into N messages
    I have used BPM and I have put
    One Receiver Step
    One Transformation Step and
    One Send Step.
    In Receiver step I have used Correlation as Field2
    In Tansformation Step I have done One to One Mapping
But on the receiver side only one file is being created.
    Message Structure is
<Main_MT>                      1..1
    <test>01</test>            1..1
    <Sub_MT>                   0..unbounded
        <Field1> </Field1>
        <Field2>123</Field2>
    </Sub_MT>
    <Sub_MT>
        <Field1> </Field1>
        <Field2>234</Field2>
    </Sub_MT>
</Main_MT>
    How to resolve this problem
    Thanks & Regards
    Sowmya

    Hey Jayson,
I didn't search the blogs using the keyword "multiple mapping". I had used the two blogs I gave for creating my first 1:n mapping, so I know their names by heart.
I believe both of the blogs I provided help in creating multiple separate messages on the target end.
Claus's blog covers the mapping logic for the message split in more detail, and Jin's blog covers the ID (Integration Directory) pieces that need to be configured when we want to create multiple messages on the target end.
Pardon me if I have misconstrued your point.
    Thanks,
    Pooja
