Odd number of voting disks

Can anyone tell me the reason for using an odd number of voting disks?

Metalink Note 220970.1, where you can see:
What happens if I lose my voting disk(s)?
If you lose 1/2 or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster. It doesn't threaten database corruption. For this reason we recommend that customers use 3 or more voting disks in 10g Release 2 (always an odd number).

Similar Messages

  • Always ODD number of Voting disks.

    I would like to know the answer to a basic question: why do we always configure an ODD number of voting disks in a RAC environment?
    Also one more question: why has Oracle not provided a shared Undo tablespace between nodes? Why do we need to configure separate Undo tablespaces and Redo log files for each node?
    Please clear my doubts if possible, as these basic questions are causing me a lot of inconvenience.
    -Yasser

    The odd number of voting disks is important because, in order to determine survival, the clusterware has to decide which instances should be evicted. With an even number of voting disks (say 2), a split could leave each instance able to access only one voting disk; neither side would hold a majority, so there would be no way to determine which side of the split brain is the 'right' one.
    As for redo: since there is an LGWR process running on each instance, it is just much easier (and performs better) to give each instance (and its LGWR) its own set of redo files. No worries about who can write to these files at any given time. The only other solutions would have been either to
    - transfer the contents of the redo log buffer between instances and have one LGWR be the only one to physically write the files (sounds like a lot of work to me), or
    - synchronize access to one shared set of redo log files (a lot of locking and overhead).
    I can't think of a reason why each instance needs its own undo tablespace, but I am sure it makes a lot of sense. Maybe contention when looking for free space? Maybe the fact that an active undo segment is always tied to a specific session on that instance?
    Bjoern
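    To make the majority rule concrete, here is a minimal shell sketch (the numbers and the script are purely hypothetical illustrations, not an Oracle utility): a node keeps its place in the cluster only while it can access strictly more than half of the configured voting disks.
    #!/bin/sh
    # Hypothetical illustration of the CSS majority rule - not an Oracle tool.
    TOTAL_VOTING_DISKS=3     # voting disks configured (always an odd number)
    ACCESSIBLE=2             # voting disks this node can currently reach
    # "strictly more than half": integer division keeps the test simple
    if [ "$ACCESSIBLE" -gt $((TOTAL_VOTING_DISKS / 2)) ]; then
        echo "quorum held: node stays in the cluster"
    else
        echo "quorum lost: node evicts itself from the cluster"
    fi
    With 3 disks the test still passes while 2 remain reachable; with only 2 disks, a split that leaves each node one disk fails the test on both sides, which is exactly why an even count buys nothing extra.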

  • Minimum number of voting disks?

    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks

    user9202785 wrote:
    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks
    Oracle recommends 3 voting disks, but you can install the clusterware with just one voting disk by selecting external redundancy during the clusterware installation.
    The number of voting disks does not depend on the number of RAC nodes; you can have up to a maximum of 32 voting disks whether the cluster has 2 or 100 nodes.
    Oracle Clusterware relies primarily on two components: the voting disk and the OCR (Oracle Cluster Registry). The voting disk is a file that contains and manages node membership information, and the OCR is a file that manages the cluster and RAC configuration.
    Refer to the following link for how to add/remove/replace a voting disk, and for what to do if a voting disk fails:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html
    Also refer to the Oracle® Database 2 Day + Real Application Clusters Guide, 10g Release 2 (10.2).
    Read the below; I hope this will help you.
    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. An odd number is recommended because 4 disks are not any more highly available than 3: a node needs strictly more than half of the disks, which is 2 of 3 but 3 of 4, so the cluster fails once 2 disks are lost whether you have 3 or 4 voting disks.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says that "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question provides the answer to this question too. (answered by: Boochi)
    Refer to the thread: Question on voting disk
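    As a rough command-line sketch of the above (the device path is hypothetical, and on 10g Release 2 adding a voting disk is normally done with the clusterware stack down, which is why -force appears here):
    # list the voting disks currently configured
    crsctl query css votedisk
    # add another copy on a shared device (10.2 style, clusterware down on all nodes)
    crsctl add css votedisk /dev/raw/raw3 -force
    # verify
    crsctl query css votedisk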

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building an Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs in each of them, so 6 LUNs in total are visible to both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks that were created for OCR_VOTE during the grid installation, or should I choose only 3 of them and configure mirroring at a later stage?
    The reason I am asking this is that I didn't find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups: one containing the three disks from storage 1 and the other containing the remaining three disks from the second storage.
    I hope you understand what I mean and can help me choose the proper way.
    Thank you.

    Hi,
    You have 2 Storage H/W.
    You will need to configure a Quorum ASMDisk to store a Voting Disk.
    Because if you lose 1/2 or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage will hold more voting disks than the other.
    (e.g. with 3 voting disks, 2 are stored in stg-A and 1 in stg-B)
    If the storage that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down; it does not matter that the other storage is still online.
    This is what will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as an extended RAC, with a third (quorum) voting location.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
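    For a two-storage setup like this, a sketch of the disk group creation could look like the following (the diskgroup name, ASM disk names and the NFS quorum path are only examples; the third, quorum voting file typically sits on a small NFS volume as described in the PDF above):
    SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
           FAILGROUP stg_a DISK 'ORCL:OCRVOTE_A1','ORCL:OCRVOTE_A2'
           FAILGROUP stg_b DISK 'ORCL:OCRVOTE_B1','ORCL:OCRVOTE_B2'
           QUORUM FAILGROUP nfs_q DISK '/voting_nfs/vote_quorum_01'
           ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    With normal redundancy the clusterware then places one voting file in each of the three failgroups, so each storage carries one voting file and the NFS quorum disk carries the third; losing either storage still leaves 2 of 3 voting files reachable.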

  • Question on voting disk

    Hello all,
    I'm aware of the fact that the voting disks manage current cluster/node membership information and that the various nodes constantly check in with the voting disks to register their availability. If a node is unable to ping the voting disk, the cluster evicts that node to avoid a split-brain situation. These are some of the points that I don't understand, as the documentation is not very clear. Could you explain, or post links that have a clear explanation?
    1) Why do we have to create an odd number of voting disks?
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    3) What's the logic behind the documentation note that says that "each node should be able to see more than half of the voting disks at any time"?
    Thanks

    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. An odd number is recommended because 4 disks are not any more highly available than 3: a node needs strictly more than half of the disks, which is 2 of 3 but 3 of 4, so the cluster fails once 2 disks are lost whether you have 3 or 4 voting disks.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says that "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question provides the answer to this question too.

  • Does all voting disks in n node rac have same data

    Hi,
    I heard that Oracle/CSS must access more than half of the voting disks; I just had a few questions on that:
    1) Why does Oracle access more than half of the voting disks?
    2) Why do we keep an odd number of voting disks?
    3) Does Oracle keep the same information in all the voting disks? Is it like controlfile or redo log multiplexing?
    4) Why do we have only 2 OCRs and not more than that?
    Regards
    ID

    Hi,
    1) Why does Oracle access more than half of the voting disks?
    To join the cluster, a node must be able to access more than half of the voting disks. Oracle made this restriction so that even in the worst case all nodes still share access to at least one common disk. Take the classical example of a two-node cluster, node1 and node2, with two voting disks, vote1 and vote2, and assume vote1 is accessible only to node1 and vote2 only to node2. If Oracle allowed the nodes to join the cluster despite this, a conflict would occur, because each node would be writing its information to a different voting disk. With this restriction Oracle makes sure that at least one disk is commonly accessible to all nodes. For example, with three voting disks at least two must be accessible, and if each node can access two of the three, at least one disk is common to all of them.
    2) Why do we keep an odd number of voting disks?
    I already answered this indirectly in the answer to your first question. More than half of the voting disks must be accessible, so whether you configure three or the next even number (four), the number of voting disk failures you can tolerate stays the same: with three voting disks the failure of one can be tolerated, and the same is true with four.
    3) Does Oracle keep the same information in all the voting disks? Is it like controlfile or redo log multiplexing?
    Yes, Clusterware maintains the same information in all voting disks; they are just multiplexed copies.
    4) Why do we have only 2 OCRs and not more than that?
    We can configure up to five OCR locations (mirrors). Here is an excerpt of my ocr.loc file:
    [root@host01 ~]# cat /etc/oracle/ocr.loc
    #Device/file getting replaced by device /dev/sdb13
    ocrconfig_loc=+DATA
    ocrmirrorconfig_loc=/dev/sdb15
    ocrconfig_loc3=/dev/sdb14
    ocrconfig_loc4=/dev/sdb13
    local_only=false
    [root@host01 ~]# ocrconfig -add /dev/sdb12
    [root@host01 ~]# ocrconfig -add /dev/sdb11
    PROT-27: Cannot configure another Oracle Cluster Registry location because the maximum number of Oracle Cluster Registry locations (5) have been configured
    Thanks
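    A location can also be dropped again and the remaining configuration verified; a short sketch following the device names used above:
    [root@host01 ~]# ocrconfig -delete /dev/sdb12
    [root@host01 ~]# ocrcheck
    ocrcheck reports each configured OCR location along with the used space and an integrity check.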

  • How does oracle use voting disk?

    How does Oracle use the voting disk? How does it do health checks and arbitrate cluster ownership among the instances in case of network failures?
    Why must it have an odd number?
    Thanks.

    Did they mention anywhere that there should be an odd number of voting disks? (It is just multiplexing.)
    ~Sameer.

  • OCR and Voting Disk

    Hi all,
    What is the difference between the OCR and the voting disk?

    The voting disk is a file that manages information about node membership and the OCR is a file that manages cluster and RAC database configuration information.
    Voting Disk - Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. Basically, all nodes in the RAC cluster register their heartbeat information on these voting disks, and this registration determines which nodes are currently active members of the cluster. The voting disks are also used to check the availability of instances in RAC and to remove unavailable nodes from the cluster. This helps prevent a split-brain condition and keeps database information intact. The split-brain syndrome, its effects, and how Oracle manages it are described below.
    For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster. What I understand about the odd number of voting disks is that a node must see a majority of the voting disks to continue to function; with 2 disks, a node that can see only 1 sees exactly half, not a majority. I am still trying to find out more about this concept.
    OCR (Oracle Cluster Registry) - resides on shared storage and maintains information about the cluster configuration and the cluster database. The OCR contains information such as which database instances run on which nodes and which services run on which database.
    Oracle Cluster Registry (OCR) is one such component in 10g RAC used to store the cluster configuration information. It is a shared disk component, typically located in a shared raw volume that must be accessible to all nodes in the cluster. The CRSd daemon manages the configuration information in the OCR and records cluster changes in the registry.
    ocrcheck is the command to check out the OCR.
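    A quick way to look at both components on a running cluster (both are standard clusterware commands; the exact output varies by version):
    # cluster registry: configured locations, used space and integrity check
    ocrcheck
    # voting disks: one line per configured voting file
    crsctl query css votedisk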

  • Max of 15 Voting disk ?

    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM #
    What I understood is that:
    1) It is not possible to put the voting disk in multiple diskgroups, as it is for the OCR.
    2) The voting disk copies will be created according to the redundancy of the diskgroup they reside in.
    Now I have a couple of questions based on this:
    1) If I create a diskgroup with external redundancy, does that mean that I will have only one voting disk and cannot add any more?
    2) Oracle says that it is possible to create up to 15 voting disks.
    So does that mean I will have one voting disk on each disk (15 in total), provided I have a diskgroup with normal or high redundancy that contains 15 disks, each in its own failure group?
    Maybe I am being silly here, but please advise me if someone can.
    Regards
    Joe

    Hi Joe,
    Jomon Jacob wrote:
    Hi All,
    Please find the below errors:
    ./crsctl add css votedisk +DATA
    CRS-4671: This command is not supported for ASM diskgroups.
    CRS-4000: Command Add failed, or completed with errors.
    ./crsctl add css votedisk /u01/vote.dsk
    CRS-4258: Addition and deletion of voting files are not allowed because the voting files are on ASM
    When the voting disks are on an ASM diskgroup, there is no add option available. The number of voting disks is determined by the diskgroup redundancy. If more copies of the voting disk are desired, you can move the voting disks to a diskgroup with higher redundancy.
    When the voting disks are on ASM, there is no delete option available either; you can only replace the existing voting disk group with another ASM diskgroup.
    When the voting disks are on ASM, the option to be used is replace:
    crsctl replace votedisk +NEW_DGVOTE
    What I understood is that:
    1) It is not possible to put the voting disk in multiple diskgroups, as it is for the OCR.
    Yes, because the voting files are placed directly on the ASM disks rather than stored as regular files in the diskgroup, although they use the diskgroup configuration (e.g. failgroups, ASM disks, and so on).
    2) The voting disk copies will be created according to the redundancy of the diskgroup they reside in.
    When you move (replace) the voting files to ASM, Oracle takes the configuration of the diskgroup (i.e. the failgroups) and places one voting file on an ASM disk in each of several different failgroups.
    Now I have a couple of questions based on this:
    1) If I create a diskgroup with external redundancy, does that mean that I will have only one voting disk and cannot add any more?
    Yes. With external redundancy you don't have the concept of failgroups, because ASM is not doing the mirroring; it behaves like a single failgroup, so you can have only one voting file in this diskgroup.
    2) Oracle says that it is possible to create up to 15 voting disks.
    So does that mean I will have one voting disk on each disk (15 in total), provided I have a diskgroup with normal or high redundancy that contains 15 disks, each in its own failure group?
    No. 15 voting files are allowed only if you are not storing the voting files on ASM. If you are using ASM, the maximum number of voting files is 5, because Oracle takes the configuration from the diskgroup.
    Using a high number of voting disks can be useful when you have a big cluster environment with, for example, 5 storage subsystems and 20 hosts in a single cluster; you would then set up a voting file on each storage. But if you are using only one storage, 3 voting files are enough.
    Usage Note
    You should have at least three voting disks, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than 5 voting disks. The maximum number of voting disks that is supported is 15.
    http://docs.oracle.com/cd/E11882_01/rac.112/e16794/crsref.htm#CHEJDHFH
    See this example:
    I configured 7 ASM disks but Oracle used only 5 of them.
    SQL> CREATE DISKGROUP DG_VOTE HIGH REDUNDANCY
         FAILGROUP STG1 DISK 'ORCL:DG_VOTE01'
         FAILGROUP STG2 DISK 'ORCL:DG_VOTE02'
         FAILGROUP STG3 DISK 'ORCL:DG_VOTE03'
         FAILGROUP STG4 DISK 'ORCL:DG_VOTE04'
         FAILGROUP STG5 DISK 'ORCL:DG_VOTE05'
         FAILGROUP STG6 DISK 'ORCL:DG_VOTE06'
         FAILGROUP STG7 DISK 'ORCL:DG_VOTE07'
       ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';
    Diskgroup created.
    SQL> ! srvctl start diskgroup -g DG_VOTE -n lnxora02,lnxora03
    $  crsctl replace votedisk +DG_VOTE
    CRS-4256: Updating the profile
    Successful addition of voting disk 427f38b47ff24f52bf1228978354f1b2.
    Successful addition of voting disk 891c4a40caed4f05bfac445b2fef2e14.
    Successful addition of voting disk 5421865636524f5abf008becb19efe0e.
    Successful addition of voting disk a803232576a44f1bbff65ab626f51c9e.
    Successful addition of voting disk 346142ea30574f93bf870a117bea1a39.
    Successful deletion of voting disk 2166953a27a14fcbbf38dae2c4049fa2.
    Successfully replaced voting disk group with +DG_VOTE.
    $ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name Disk group
    1. ONLINE   427f38b47ff24f52bf1228978354f1b2 (ORCL:DG_VOTE01) [DG_VOTE]
    2. ONLINE   891c4a40caed4f05bfac445b2fef2e14 (ORCL:DG_VOTE02) [DG_VOTE]
    3. ONLINE   5421865636524f5abf008becb19efe0e (ORCL:DG_VOTE03) [DG_VOTE]
    4. ONLINE   a803232576a44f1bbff65ab626f51c9e (ORCL:DG_VOTE04) [DG_VOTE]
    5. ONLINE   346142ea30574f93bf870a117bea1a39 (ORCL:DG_VOTE05) [DG_VOTE]
    SQL >
    SET LINESIZE 150
    COL PATH FOR A30
    COL NAME FOR A10
    COL HEADER_STATUS FOR A20
    COL FAILGROUP FOR A20
    COL FAILGROUP_TYPE FOR A20
    COL VOTING_FILE FOR A20
    SELECT NAME,PATH,HEADER_STATUS,FAILGROUP, FAILGROUP_TYPE, VOTING_FILE
    FROM V$ASM_DISK
    WHERE GROUP_NUMBER = ( SELECT GROUP_NUMBER
                    FROM V$ASM_DISKGROUP
                    WHERE NAME='DG_VOTE');
    NAME       PATH                           HEADER_STATUS        FAILGROUP            FAILGROUP_TYPE       VOTING_FILE
    DG_VOTE01  ORCL:DG_VOTE01                 MEMBER               STG1                 REGULAR              Y
    DG_VOTE02  ORCL:DG_VOTE02                 MEMBER               STG2                 REGULAR              Y
    DG_VOTE03  ORCL:DG_VOTE03                 MEMBER               STG3                 REGULAR              Y
    DG_VOTE04  ORCL:DG_VOTE04                 MEMBER               STG4                 REGULAR              Y
    DG_VOTE05  ORCL:DG_VOTE05                 MEMBER               STG5                 REGULAR              Y
    DG_VOTE06  ORCL:DG_VOTE06                 MEMBER               STG6                 REGULAR              N
    DG_VOTE07  ORCL:DG_VOTE07                 MEMBER               STG7                 REGULAR              N
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Jan 5, 2012 6:01 PM

  • Only ONE voting disk created on EXTERNAL redundancy

    Hi All,
    I am wondering why, after configuring 5 LUNs and creating one DG with these 5 LUNs,
    and then creating the OCR/voting disk, the GRID installer creates only one voting disk.
    Of course, I can always create another voting disk in a separate DG...
    Any ideas?
    Of course, when I try "ocrconfig -add +OCR_DG" I get a message that says the OCR is already configured...
    thanks

    Hi JCGO,
    you are mixing up voting disks and OCR.
    Voting disks are placed on the disk header of an ASM diskgroup. The number of voting disks is based on the redundancy of the diskgroup:
    External (no mirroring) = 1; Normal = 3; High redundancy (triple mirroring) = 5;
    The OCR however is saved as a normal file (like any datafile of a tablespace) inside ASM and therefore "inherits" the redundancy of the diskgroup.
    So while you only see 1 file, the blocks of the file are mirrored correspondingly (either double or triple (High)).
    Voting disks can only reside in one diskgroup. You cannot create another voting disk in a separate DG. And the command to add voting disks is:
    crsctl add css votedisk <path>
    However, this won't work when they are on ASM; you can only replace them into another DG:
    crsctl replace votedisk +DG
    The OCR on the other hand, being a normal file, allows additional OCR copies to be created in different diskgroups.
    Since it is the heart of the cluster (and also contains backups of the voting disks), it does make sense to "logically" mirror it additionally (even though it is already protected by the mirroring of the DG): if one DG cannot be mounted, maybe another one containing a current OCR can. This is a little better than relying on the OCR backup (which can be up to 4 hours old).
    Hence in my cluster installations I always put an OCR copy in every diskgroup I have (up to a maximum of 5).
    And since my clusters normally have 3 diskgroups (1 OCRVOT, 1 DATA and 1 FRA), I end up with 3.
    Having an odd number of OCR copies is a little better than having an even number (so 3 or 5 vs. 2 or 4); I'll skip the explanation of why.
    Regards
    Sebastian
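    A short sketch of the two operations described above (the diskgroup names are just examples): OCR copies can be added to further diskgroups one by one, while on ASM the voting files can only be replaced into a different diskgroup as a whole.
    # as root: add OCR copies in other diskgroups (up to 5 locations in total)
    ocrconfig -add +DATA
    ocrconfig -add +FRA
    # voting files on ASM cannot be added individually - only replaced into another diskgroup
    crsctl replace votedisk +OCRVOT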

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail from that node and restart to another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach max number of diskgroups handled by an ASM instance (63 only, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.
    whenever you dismount disks from one node and mount them on another;
    So (if both conditions are true) you might be interested in this story;
    please keep reading for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroup will be added by users, for DATA, REDO, FRA etc of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    The diskgroups of the db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-nodes cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should take over the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work: since ASM has the parameter CLUSTER_DATABASE=true, you cannot mount 90 diskgroups;
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also we found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, the instances here are single-node, not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened an SR with Oracle asking what options we have here, and we were disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files also be placed in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): if the DATA dg gets corrupted, you can restore a full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc)
    About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting disks at all), but we told them that we needed a fail-over solution
    - then they told us to use RAC One Node, which actually has some better features: in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in the case of a running transaction), but we already have a few fail-over clusters and we cannot change them all
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle to confirm that it is supported; they said yes, but it is discouraged (and also doesn't work with the OUI or ASMCA).
    Anyway, that is the simplest approach; this is a non-Prod cluster, we'll start here and if everything is fine, after a while we'll do it on the Prod ones as well.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
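    For reference, the relocation itself (as outlined in Note 428681.1) comes down to a couple of commands; the target paths below are hypothetical placeholders for whatever shared location (block devices or a cluster file system) is visible to all nodes:
    # as root, with the clusterware stack up:
    # move the voting files out of ASM onto the shared location
    crsctl replace votedisk /cfs/grid/vote01 /cfs/grid/vote02 /cfs/grid/vote03
    # add an OCR copy on the shared location, then drop the one stored in ASM
    ocrconfig -add /cfs/grid/ocr01
    ocrconfig -delete +CRS_DATA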

  • Voting disk external redundancy

    Hey,
    question:
    I've got a database which I haven't installed myself.
    A query for the votedisk shows only one configured votedisk for my 2-node RAC cluster.
    So I presume that external redundancy was set up during the installation of the clusterware.
    A few weeks ago I hit an unpublished error, where Oracle reports that the voting disks are corrupted - which is a false message.
    But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch - which I can't apply...)?
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
    or would this give an error because only one voting disk is configurable?

    Hi Christian,
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this give an error because only one voting disk is configurable?
    I will assume you are using version 11.1 or earlier.
    What determines whether your voting disk setup is external redundancy or normal redundancy is simply the number of voting disks your environment has.
    If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure; at least Oracle assumes so).
    If you have more than 2 voting disks (because you need an odd number, that means 3 or more), you are using normal redundancy.
    Then you can add more voting disks to your environment without worrying about the external redundancy previously configured.
    External redundancy or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change external redundancy to normal redundancy. As I said, it is only the number of voting disks configured in your environment.
    Make sure you have a fresh backup of your voting disk.
    Warning: on UNIX/Linux the voting disk backup is performed manually with the dd command.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Feb 7, 2011 12:09 PM
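    A minimal sketch of such a dd backup, with hypothetical device and file names (take it while the clusterware stack is stopped on that node so the copy is consistent):
    # find the current voting disk path(s)
    crsctl query css votedisk
    # back up the raw voting device to a file; the block size is just an example
    dd if=/dev/raw/raw2 of=/backup/votedisk_raw2.bak bs=1M
    # restoring would be the same dd command with if= and of= swapped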

  • Why do we need voting Disk or Voting File

    Hi all
    What is the need for a voting file or disk? I know it maintains the node information and so on. But if the nodes can ping all the other nodes, what is the need for a voting disk?
    Can anyone please explain.
    PS : I read the
    Oracle® Database
    Oracle Clusterware and Oracle Real Application Clusters
    Administration and Deployment Guide
    10g Release 2 (10.2)
    But I could not find how it actually comes into the picture.
    Thanks in anticipation

    Let's take the case of a 2-node RAC - instance 'A' and instance 'B'.
    Let's assume that instance A and instance B are unable to communicate with each other. Now, this might be because the other instance is actually down, or because there is some problem with the interconnect communication between the instances.
    A split-brain occurs when the instances cannot speak to each other due to an interconnect communication problem, and each instance happens to think that it's the only surviving instance and starts updating the database. This obviously is problematic, isn't it?
    To avoid split brain, both instances are required to update the voting disk periodically.
    Now consider the earlier case from the perspective of one of the nodes:
    Case 1) instance B is actually down: here, looking from instance A's perspective, there is no communication over the interconnect with instance B, and instance B is also not updating the voting disk (because it's down). Instance A can safely assume that instance B is down, and it can go on providing the services and updating the database.
    Case 2) the problem is with the interconnect communication: here, looking from instance A's perspective, there is no communication over the interconnect with instance B. However, it can see that instance B is updating the voting disk, which means instance B is not actually down. The same is true from instance B's perspective: it can see that instance A is updating the voting disk, but it cannot speak to instance A over the interconnect. At this point, both instances rush to lock the voting disk, and whoever gets the lock ejects the other one, thus avoiding the split-brain syndrome.
    That's the theory behind using a voting disk.
    When the instances of a RAC boot up, one of the instances is assigned as the master. In the case of a split-brain, the master gets the lock on the voting disk and survives. In reality the instance with the lower node number survives, even if the problem is with the interconnect network card on that node :)
    Cheers.
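    The heartbeats described above are governed by CSS timeouts that can be inspected with crsctl (both parameters exist on recent versions; the exact defaults depend on version and platform):
    # network heartbeat threshold (seconds) before a node is considered failed
    crsctl get css misscount
    # how long a voting disk may stay unreachable before it is taken offline
    crsctl get css disktimeout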

  • Why logical partition is a must for voting disk and OCR

    Hi Guys,
    I just started handling RAC installation jobs, and I have a simple question regarding the setup.
    Why does a logical partition have to be used for the voting disk and OCR?
    I tried partitioning the disks that were provisioned for the voting disk and OCR with primary partitions, but when the OUI tries to recognize the disks, it cannot find the disks that were partitioned with primary partitions.
    Thank you,
    Adhika

    Hello Adhika,
    I found it on this doc http://download.oracle.com/docs/cd/B28359_01/install.111/b28250/storage.htm
    Be aware of the following restrictions for partitions:
    * You cannot use primary partitions for storing Oracle Clusterware files while running the OUI to install Oracle Clusterware as described in Chapter 5, "Installing Oracle Clusterware". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * With 32-bit Windows, you cannot create more than four primary disk partitions for each disk. One of the primary partitions can be an extended partition, which can then be subdivided into multiple logical partitions.
    * You can assign mount points only to primary partitions and logical drives.
    * You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
    * Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention. Therefore, you may prefer to use extended partitions rather than primary partitions.
    For these reasons, you might prefer to use extended partitions for storing Oracle software files and not primary partitions.
    All the best,
    Rodrigo Mufalani
    http://www.mrdba.com.br/mufalani
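    On Windows this usually translates into a diskpart session along these lines (the disk number and size are examples; the logical drive created inside the extended partition is left unformatted so that the OUI/ASM can use it for the Clusterware files):
    C:\> diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> create partition extended
    DISKPART> create partition logical size=1024
    DISKPART> list partition
    DISKPART> exit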

  • How do you print two-sided in In Design when some sections have an odd number of pages?

    I'm new to InDesign, and I'm trying to do something that seems pretty basic, but I'm having trouble figuring out how to do it, and I haven't found anything in the Adobe forum or online that addresses this situation.
    I am working on a document that is broken into sections, with page numbering starting on Page 1 for each section.  I have mirror image layouts for the odd and even master pages - for example, the page number is on the left side on even pages and on the right side for odd pages, as you might generally expect to see on a two-sided printout, where in a book or a binder, the even pages are on the left and the odd pages are on the right.
    I'm printing the document from my computer onto a printer that supports two-sided printing.  I don't see any options in the File / Print window for two sided printing, so I set this option on the printer itself using the [Setup...] button at the bottom of the window.  This works - I am getting the printout on both sides of the sheets of paper.
    At the moment, the first section is 3 pages (two spreads, one with the first page, and one with the other two).  The next section starts with page number 1 on the third spread (one page) and then continues two pages per spread - all as expected.
    Here's the problem - the first page of the second section is printing on the back of the last sheet of the first section, rather than starting on a new front-facing sheet - so when I put this in a binder, the odd pages are on the left and the even pages on the right. Every time I hit the end of another section with an odd number of pages, it flips around again.
    There's a manual workaround I found - scan through the document and add a blank page to the end of every section that has an odd number of pages (and check "Print Blank Pages"). But I'd have to do this every time I print as I add new pages, and add/delete blank pages as the number in each section changes.
    I can't believe that InDesign doesn't provide a way to get this to work properly, as I'm sure many people have had this same situation, and I'm hoping that it's some simple thing that I just haven't been able to figure out...
    HELP!!! (thanks!  B^)

    Well, MS Word does something like this automatically - if you tell it to start a new section on an odd page, and if the section before it ends on an odd page, it will "skip" the even page number in between - and when it prints, it automatically spits out a blank page in between them, so that on a two-sided printer, the odd pages all print on the front of the paper (which is the right-facing sheet when they are put in a binder or book).
    InDesign shows the pages like this on the spreads (odd page numbers always on the right), so it is aware of the desired orientation, and I'd think it would provide a mechanism to ensure that they print out this way as well...
