RAC Voting disk recommended size?

Hi RAC gurus,
What is the recommended size for the voting disk?
I need to pick an appropriate size so that when I back it up with the dd command, it doesn't take very long.
Am I missing something? Can I create a very big one, say 10G, and then back up only the used portion?
Thanks for the help.

You asked if you are missing something, and my initial thought is that what you missed was reading the docs.
First of all, the recommended size is clearly stated in the docs ... so I am not going to hand it to you; I will make you read them instead. I hope everyone else does so too.
Second of all, and most importantly: if you have any concept of what the voting disk is ... why on earth would you think that making it large, or small, has any relevance to its use? Are you planning to build a cluster larger than 128 nodes?
PS: What is 10g ... is this 10.1.0.2 or 10.1.0.3 or 10.1.0.4 or 10.2.0.1 or ....
PPS: Given that you didn't read the docs ... it would be interesting to know, too, your operating system, your cache fusion interconnect solution, your shared storage solution, and how you are planning to define your VIPs. But I will leave that for the future, as no doubt you will stub your toe on each if you don't get to those docs and read them with great care.
My apologies for being a bit harsh here, but your chances of success in RAC are precisely zero if you do not understand these things. And I am really tired of people blaming Oracle and RAC for what they did to themselves.

Similar Messages

  • 2 out of 3 voting disks corrupted

    O/S - Oracle Linux
    DB Version - Oracle 11.2.0.3
    Three-node RAC
    There are 3 voting disks in normal redundancy, of which 2 got corrupted.
    I know the process for recovering the voting disks, but I am not able to stop the cluster services and start them in exclusive mode. I always get "Unable to communicate with CRS services". I rebooted all 3 nodes but the same issue persists.
    Any advice will be appreciated.

    Is ASM used for OCR and voting disk storage?
    This is the recommendation and the default - creating a normal redundancy diskgroup with an additional quorum disk (i.e. 3 disks in total).
    There is an automated backup of the OCR, and no separate backup is needed for voting files. So losing the cluster's OCR and/or voting disks is not a disaster.
    Assuming ASM and a diskgroup is used.
    Anything else on the diskgroup?
    I recently had a similar problem. I used the diskgroup only for the OCR and voting disks, and lost 2 disks. I shut down the RAC, started CRS in exclusive mode on 1 node, launched sqlplus and dropped the diskgroup, then recreated it with 3 disks (including a quorum disk). I restored the OCR backup and confirmed the voting disk config (no need to change it, as the underlying diskgroup did not have a name change). I restarted that node, forced a rebalance of the diskgroup, and then started the remaining cluster nodes.
    There are Oracle Support notes that describe these steps in detail. It should be relatively easy to find the note for fixing an 11gR2 RAC's OCR/voting ASM diskgroup.
    If you have database data on it too - problems. If the remaining working disk is the quorum disk, then both mirror disks are toast. If not, you can try to force the disks online again - assuming the disks are okay and the broken path was fixed. Otherwise, add 2 new disks to replace the 2 broken ones and rebalance. I have never done this myself with a quorum disk, though.
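    For reference, a rough sketch of that recovery sequence on 11.2 (11.2.0.2+ for the -nocrs flag; run as root; the +CRS diskgroup name, disk paths and backup file here are hypothetical - the Oracle Support notes mentioned above have the authoritative steps):
    {quote}
    # stop clusterware on all nodes, then start one node in exclusive mode
    crsctl stop crs -f
    crsctl start crs -excl -nocrs
    # recreate the diskgroup from an ASM instance (sqlplus / as sysasm):
    #   CREATE DISKGROUP CRS NORMAL REDUNDANCY
    #     FAILGROUP fg1 DISK '/dev/disk1'
    #     FAILGROUP fg2 DISK '/dev/disk2'
    #     QUORUM FAILGROUP fg3 DISK '/dev/disk3'
    #     ATTRIBUTE 'compatible.asm' = '11.2';
    # restore the latest automatic OCR backup and recreate the voting files
    ocrconfig -restore /u01/app/11.2.0/grid/cdata/cluster1/backup00.ocr
    crsctl replace votedisk +CRS
    # restart the stack normally on all nodes
    crsctl stop crs -f
    crsctl start crs
    {quote}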

  • Root.sh hangs at formatting voting disk on OEL32 11gR2 RAC with ocfs2

    Hi,
    I am trying to bring up Oracle 11gR2 RAC on Enterprise Linux x86 (32-bit) version 5.6, using OCFS2 1.4 as my cluster file system. Everything went fine until root.sh, which hangs with the message "now formatting voting disk <vdsk path>".
    The logs are mentioned below.
    Checked the alert log:
    {quote}
    cssd(9506)]CRS-1601:CSSD Reconfiguration complete. Active nodes are oel32rac1 .
    2011-08-04 15:58:55.356
    [ctssd(9552)]CRS-2407:The new Cluster Time Synchronization Service reference node is host oel32rac1.
    2011-08-04 15:58:55.917
    [ctssd(9552)]CRS-2401:The Cluster Time Synchronization Service started on host oel32rac1.
    2011-08-04 15:58:56.213
    [client(9567)]CRS-1006:The OCR location /u02/storage/ocr is inaccessible. Details in /u01/app/11.2.0/grid/log/oel32rac1/client/ocrconfig_9567.log.
    2011-08-04 15:58:56.365
    [client(9567)]CRS-1001:The OCR was formatted using version 3.
    2011-08-04 15:58:59.977
    [crsd(9579)]CRS-1012:The OCR service started on node oel32rac1.
    {quote}
    crsctl.log:
    {quote}
    2011-08-04 15:59:00.246: [  CRSCTL][3046184656]crsctl_vformat: obtain cssmode 1
    2011-08-04 15:59:00.247: [  CRSCTL][3046184656]crsctl_vformat: obtain VFListSZ 0
    2011-08-04 15:59:00.258: [  CRSCTL][3046184656]crsctl_vformat: Fails to obtain backuped Lease from CSSD with error code 16
    2011-08-04 15:59:01.857: [  CRSCTL][3046184656]crsctl_vformat: to do clsscfg fmt with lease sz 0
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]NOTE: No asm libraries found in the system
    2011-08-04 15:59:01.910: [    CLSF][3046184656]Allocated CLSF context
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Discovery with str:/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]UFS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Fetching UFS disk :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]OSS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]Handle 0xa6c19f8 from lib :UFS:: for disk :/u02/storage/vdsk:
    2011-08-04 17:10:37.522: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    2011-08-04 17:10:37.526: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    {quote}
    ocrconfig log:
    {quote}
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]proprinit:problem reading the bootblock or superbloc 22
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.365: [  OCRRAW][3046991552]iniconfig:No 92 configuration
    2011-08-04 15:58:56.365: [  OCRAPI][3046991552]a_init:6a: Backend init successful
    2011-08-04 15:58:56.390: [ OCRCONF][3046991552]Initialized DATABASE keys
    2011-08-04 15:58:56.564: [ OCRCONF][3046991552]csetskgfrblock0: output from clsmft: [clsfmt: successfully initialized file /u02/storage/ocr
    2011-08-04 15:58:56.577: [ OCRCONF][3046991552]Successfully set skgfr block 0
    2011-08-04 15:58:56.578: [ OCRCONF][3046991552]Exiting [status=success]...
    {quote}
    ocssd.log:
    {quote}
    2011-08-04 15:59:00.140: [    CSSD][2963602320]clssgmFreeRPCIndex: freeing rpc 23
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:00.234: [    CSSD][2996054928]clssgmExecuteClientRequest: VOTEDISKQUERY recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    {quote}
    Any help here is appreciated.
    Regards
    Amith R

    Did an lsof on vdisk and it showed
    >
    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
    crsctl.bi 9589 root 26u REG 8,17 21004288 102980 /u02/storage/vdsk
    [root@oel32rac1 ~]# ps -ef |grep crsctl
    root 9589 7583 0 15:58 pts/1 00:00:00 [crsctl.bin] <defunct>
    >
    Could this be a permission issue?
    --Amith
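    No resolution was posted in the thread. As a hedged first check of the permission theory (paths taken from the logs above; the expected ownership depends on which user owns the Grid installation):
    {quote}
    # voting and OCR files on OCFS2 must be readable/writable by the Grid owner
    ls -l /u02/storage/vdsk /u02/storage/ocr
    # the io_getevents timeouts above can also point at the OCFS2/o2cb stack,
    # so confirm the mount and the cluster stack are healthy on this node
    mount | grep ocfs2
    /etc/init.d/o2cb status
    {quote}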

  • Oracle RAC 11g. 3rd Voting disk mounting automatically without fstab

    Hi,
    We have a 2-node Oracle 11gR2 extended RAC database on Red Hat Linux. We have a voting disk on each node and a 3rd voting disk on a separate site. The voting disk directories are NFS mounted. We have noticed that there is no entry in fstab (on either RAC node) for the 3rd voting disk location, yet the directory is still mounted automatically on each RAC node at startup.
    Can Oracle manage mounting the disks itself without using fstab? Oracle recommends using fstab for mounting the directories, and I have found nothing about Oracle mounting the directories in any way other than fstab.
    I am completely lost here. We need to do some configuration on the 3rd voting disk location, and I need to find out how the disk is being mounted on the RAC nodes. Any help on this would be greatly appreciated. Thanks.
    Rgs,
    Rob

    Did you check the rc.local file? Perhaps the mount entries are in there.
    HTH
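    A hedged way to track down a mount that is not in fstab (standard RHEL commands; the grep patterns are only illustrative):
    {quote}
    # how is the third voting disk location mounted right now?
    mount | grep nfs
    # boot-time places other than /etc/fstab that can issue the mount
    grep -i mount /etc/rc.d/rc.local
    chkconfig --list autofs          # autofs can mount NFS on demand
    grep -ri vote /etc/auto.* 2>/dev/null
    {quote}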

  • Backup and Restore OCR, Voting Disk and ASM Disks in new SAN - 10g RAC

    Dear Friends,
    I am using a 10g R2 RAC setup on Linux.
    My OCR, voting disk and ASM disks for DBF files are on a SAN box.
    Now I am reorganising the SAN by scrapping it entirely and creating new LUNs (it's a must),
    so please let me know:
    1) How do I take a backup of the OCR and voting disk from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    2) How do I take a backup of the existing databases from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    I will be doing this during planned downtime only.
    Regards,
    DB

    For step 1 you should follow the Metalink doc, as sketched below.
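    A rough sketch of the 10gR2 backup side (run as root; the backup paths are hypothetical, and the restore onto the new LUNs is covered in the note):
    {quote}
    # OCR: check the automatic backups and also take a logical export
    ocrconfig -showbackup
    ocrconfig -export /backup/ocr_export.dmp
    # voting disk: find it, then copy it with dd while CRS is down
    crsctl query css votedisk
    dd if=/dev/raw/raw2 of=/backup/votedisk.bak
    {quote}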
    For step 2, here is a simple backup script.
    I have done this on Windows for you.
    D:\app\ranjit\product\11.2.0\dbhome_1\BIN>rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 8 21:48:47 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    connected to target database: ORCL (DBID=1299593730)
    RMAN> run
    {
    allocate channel c1 device type disk format 'D:\app\ranjit\rman\%U';
    allocate channel c2 device type disk format 'D:\app\ranjit\rman\%U';
    backup database;
    backup current controlfile format 'D:\app\ranjit\rman\%U';
    }
    Regards

  • CRS and Voting Disk for RAC

    Hi Friends,
    How much disk space should be allocated for the CRS (OCR) and voting disk?
    Thanks a lot

    Hi,
    Is it OK if I put them on one LUN, say a 200M /u02 with CRS and Voting directories under it, or is it better to separate them into two LUNs, /u02 and /u03?
    Thanks a lot

  • Voting disk and OCR disk in RAC

    Can anyone explain to me, in basic terms,
    what the voting and OCR disks are used for in RAC?

    Hi Francisco,
    Thanks for the information.
    But I have a lot of doubts about the voting disk and the types of heartbeats present in Oracle 10g RAC.
    Oracle says CSS uses two heartbeats: 1. a network heartbeat and 2. a disk heartbeat.
    My doubts:
    1. How does Oracle use the voting disk to avoid split-brain syndrome? Or, what is the role of the voting disk in split-brain syndrome?
    2. How does Oracle monitor remote node health?
    3. Oracle says there is one more heartbeat, where the ckpt process keeps writing to the control file every 3 seconds, and if it fails the lmon process will evict the node. Is that true?
    4. Oracle Clusterware sends messages (via a special ping operation) to all nodes configured in the cluster, often called the "heartbeat". If the heartbeat fails for any of the nodes, it checks with the Oracle Clusterware configuration files (voting disk) to distinguish between a real node failure and a network failure. Is that true? If so, what about the two heartbeats (network and disk) - why are those heartbeats present or useful?
    If possible, please clear up my doubts with some simple examples.
    Awaiting your reply; I am very eager to learn and master the Oracle RAC architecture before administering it.
    Regards,
    Yasser.

  • Adding ocrmirror and voting disk on another diskgroup 11gr2 RAC

    Hi Gurus,
    Can I add an ocrmirror on another diskgroup (external redundancy), and move (replace) the voting disks to another diskgroup (normal redundancy), in 11gR2 RAC while the databases are up and running?

    Hi,
    In Oracle 11gR2 the voting disks are backed up automatically into the OCR as part of any configuration change,
    and voting disk data is automatically restored to any added voting disk.
    You can migrate the voting disks from non-ASM to ASM without taking down the cluster.
    To move the voting disks into ASM:
    crsctl replace votedisk +diskgroup
    To add an OCR mirror:
    ocrconfig -add +datagroup2
    Oracle manages up to five redundant OCR locations.
    Hope this helps
    Zekeriya Besiroglu
    http://zekeriyabesiroglu.blogspot.com
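    A hedged end-to-end example of the above on a running 11gR2 cluster (run as root; the +VOTE and +OCRMIRROR diskgroup names are hypothetical):
    {quote}
    # move the voting disks into a normal-redundancy diskgroup
    crsctl replace votedisk +VOTE
    crsctl query css votedisk        # confirm the new locations
    # add a second OCR location on an external-redundancy diskgroup
    ocrconfig -add +OCRMIRROR
    ocrcheck                         # confirm both OCR locations are healthy
    {quote}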

  • OCR and VOTING DISK for RAC Database on ASMLib env

    Can I use raw devices for the OCR and voting disk while installing Clusterware? I am planning to use ASM with ASMLib.
    I know I can configure ASM with DBCA when I am using raw devices (to be used for ASM). How do I configure diskgroups when I use ASMLib? Can I configure an ASM diskgroup with DBCA? If yes, please point me to some Metalink notes or other material to study.

    Hi,
    "Can i use raw device for OCR and Voting Disk while installing Clusterware?"
    Yes:
    http://www.oracle.com/technology/pub/notes/technote_singh_crs.html
    "Step 3: Create Raw Devices for the OCR and Voting Disk Before we start the installation"
    "plz forward me some metalink notes to study or any other help."
    I like Madhu Tumma's book because it has step-by-step examples with raw devices:
    http://www.rampant-books.com/book_2004_1_10g_grid.htm
    Hope this helps . . .
    Donald K. Burleson
    http://www.dba-oracle.com

  • Back Up the Voting Disk After RAC Installation

    What would be the dd command to back up the voting disk?
    My voting disk is on /dev/raw/raw2.

    "What would be the dd command to Back Up the Voting Disk? my voting disk is on /dev/raw/raw2"
    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk
    "What is the command to Back Up the root.sh Script?"
    cp root.sh root.sh.bkup
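    Tying this back to the original question about dd taking long: a hedged variant with an explicit block size (the 1M figure is just an illustration) usually speeds up a raw copy considerably, since dd defaults to 512-byte blocks:
    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk bs=1M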

  • Creating voting disk - RAC hp-ux v3

    Hi,
    Is there a document about creating mirrored voting disks on HP-UX v3?
    Thanks,
    B

    Salut Marc,
    I can confirm that the command given (crsctl add css votedisk <path>) works quite well, but sometimes you can hit Bug 4898020 ADDING VOTING DISK ONLINE CRASH THE CRS. This should be fixed in the 10.2.0.4 patchset.
    Brian, any command that just queries information can be run as the oracle user, but anything that alters Oracle Clusterware requires root privileges.
    Note: Shut down Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disks. Determine the current voting disk location using:
    crsctl query css votedisk
    Take a backup of each voting disk:
    dd if=voting_disk_name of=backup_file_name
    Note: Use the UNIX man pages for additional information on the dd command.
    The following can be used to restore a voting disk from the backup file created:
    dd if=backup_file_name of=voting_disk_name
    1. To add a voting disk, provide the full path including the file name:
    crsctl add css votedisk <RAW_LOCATION> -force
    2. To delete a voting disk, provide the full path including the file name:
    crsctl delete css votedisk <RAW_LOCATION> -force
    3. To move a voting disk, provide the full path including the file name:
    crsctl delete css votedisk <OLD_LOCATION> -force
    crsctl add css votedisk <NEW_LOCATION> -force
    After modifying the voting disks, start the Oracle Clusterware stack on all nodes:
    crsctl start crs
    Verify the voting disk locations using:
    crsctl query css votedisk

  • Does all voting disks in n node rac have same data

    Hi,
    I heard that Oracle/CSS must be able to access more than half of the voting disks; I just had some questions on that:
    1) Why does Oracle access more than half of the voting disks?
    2) Why do we keep an odd number of voting disks?
    3) Does Oracle keep the same information in all the voting disks? Is it like controlfile or redo-log multiplexing?
    4) Why do we have only 2 OCRs and not more than that?
    Regards
    ID

    Hi,
    1) Why does Oracle access more than half of the voting disks?
    To join the cluster, a node must be able to access more than half of the voting disks. Oracle made this restriction so that even in the worst case all nodes share access to at least one common disk. Take the classical example of a two-node cluster (node1, node2) with two voting disks (vote1, vote2), where vote1 is accessible only to node1 and vote2 only to node2. If Oracle allowed the nodes to join the cluster anyway, a conflict would occur, with each node writing its information to a different voting disk. With the majority restriction, Oracle makes sure at least one disk is commonly accessible to all nodes. For example, with three voting disks at least two must be accessible to each node, and any two such majorities must overlap in at least one disk that is common to both nodes.
    2) Why do we keep an odd number of voting disks?
    I answered this indirectly above. More than half of the voting disks must be accessible, so you could configure three, or the next number up, i.e. four - but the number of voting disk failures that can be tolerated stays the same: with three disks the failure of one disk can be tolerated, and the same is true with four (see the arithmetic sketch below).
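    To make that majority arithmetic concrete, a small illustrative shell sketch (nothing Oracle-specific, just the counting argument):
    {quote}
    # majority = floor(n/2) + 1; tolerated failures = n - majority
    for n in 1 2 3 4 5; do
      majority=$(( n / 2 + 1 ))
      echo "disks=$n majority=$majority tolerated_failures=$(( n - majority ))"
    done
    # disks=3 and disks=4 both tolerate exactly 1 failure,
    # which is why an odd count is the economical choice
    {quote}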
    3) Does Oracle keep the same information in all the voting disks? Is it like controlfile or redo-log multiplexing?
    Yes, Clusterware maintains the same information in all voting disks; each is just a multiplexed copy.
    4) Why do we have only 2 OCRs and not more than that?
    We can configure up to five mirrored OCR locations. Here is an excerpt of my ocr.loc file:
    [root@host01 ~]# cat /etc/oracle/ocr.loc
    #Device/file getting replaced by device /dev/sdb13
    ocrconfig_loc=+DATA
    ocrmirrorconfig_loc=/dev/sdb15
    ocrconfig_loc3=/dev/sdb14
    ocrconfig_loc4=/dev/sdb13
    local_only=false
    [root@host01 ~]# ocrconfig -add /dev/sdb12
    [root@host01 ~]# ocrconfig -add /dev/sdb11
    PROT-27: Cannot configure another Oracle Cluster Registry location because the maximum number of Oracle Cluster Registry locations (5) have been configured
    Thanks

  • Confusion with OCFS2 file system for OCR and voting disk - RHEL 5, Oracle 11g

    Dear all,
    I am in the process of installing an Oracle 11g 3-node RAC database.
    The environment for this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 Gb of Storage drive.
    SAN
    As of now i am in the process of installing Oracle Clusterware on the 3 nodes.
    I have performed these tasks for the cluster installs.
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure the SSH on all Cluster Nodes (User Equivalence)
    9. Enable the SSH on all Cluster Nodes (User Equivalence)
    10. Install Oracle Cluster File System (OCFS2)
    11.Verify the Installation of Oracle Cluster File System (OCFS2)
    12. Configure the OCFS2 (/etc/ocfs2/cluster.conf)
    13. Configure the O2CB Cluster Stack for OCFS2
    BUT, after this I am a little bit confused about how to proceed further. The next steps are to format the disks and mount OCFS2, create the software directories, and so on.
    I asked my system admin to provide me two partitions so that I could format them with the OCFS2 file system.
    He wrote back to me saying:
    *"Is what you want before I do it??*
    */dev/emcpowera1 is 3GB and formatted OCFS2.*
    */dev/emcpowera2 is 3GB and formatted OCFS2.*
    *Are those big enough for you? If not, I can re-size and re-format them*
    *before I mount them on the servers.*
    *the SAN is shared storage. /dev/emcpowera is one of three LUNs on*
    *the shared storage, and it's 214GB. Right now there are only two*
    *partitions on it- the ones I listed below. I can repartition the LUN any*
    *way you want it.*
    *Where do you want these mounted at:*
    */dev/emcpowera1*
    */dev/emcpowera2*
    *I was thinking if this mounting techique would work like so:*
    *emcpowera1: /u01/shared_config/OCR_config*
    *emcpowera2: /u01/shared_config/voting_disk*
    *Let me know how you'd like them mounted."*
    Please recommend what I should convey to him.
    My second question: as we are using ASM, which I am going to configure after the Clusterware installation, should I install Openfiler?
    Please refer to the environment information I provided above when making recommendations.
    As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think that install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK
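    For the formatting/mounting step the poster is stuck on, a hedged sketch (device names from the admin's mail; block/cluster sizes, node-slot count, labels and mount options are illustrative - check the OCFS2 1.4 user's guide):
    {quote}
    # format each partition with OCFS2; -N is the number of node slots (>= 3 here)
    mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocr_config /dev/emcpowera1
    mkfs.ocfs2 -b 4K -C 32K -N 4 -L voting_disk /dev/emcpowera2
    # mount on every node; datavolume/nointr are the options documented for
    # Oracle files on OCFS2, and _netdev delays the mount until networking is up
    mkdir -p /u01/shared_config/OCR_config /u01/shared_config/voting_disk
    mount -t ocfs2 -o _netdev,datavolume,nointr /dev/emcpowera1 /u01/shared_config/OCR_config
    mount -t ocfs2 -o _netdev,datavolume,nointr /dev/emcpowera2 /u01/shared_config/voting_disk
    {quote}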

    Thanks for your reply, Mufalani.
    You have managed to solve half of my query, but I am still stuck on what kind of mount points I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he mentioned?
    Let me put forth a few more questions here.
    1. Is 280 MB each OK for the OCR and voting disks, respectively?
    2. Should I ask the system admin to create 4 voting disk mount points and two for the OCR?
    3. As mentioned by the system admin:
    /u01/shared_config/OCR_config
    /u01/shared_config/voting_disk
    Is this OK for creating the OCR and voting disks?
    4. Can I use the OCFS2 file system for formatting the disks instead of using them as raw devices?
    5. As you mentioned, Openfiler is not needed for configuring ASM. Could you provide links that will guide me in creating the partition disks, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple, but was unable to identify one suitable for my environment.
    Regards
    MK

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you:
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listeners, their disks and so on (which are started only on one node at a time,
    and can fail over from that node and restart on another);
    - have put the OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had), because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially with multipath) and find fake CRS resources, etc.,
    whenever you dismount diskgroups from one node and mount them on another.
    So, if both conditions are true, you might be interested in this story;
    please keep reading for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then additional diskgroups will be added by users, for DATA, REDO, FRA etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In the case of a fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    The diskgroups of the db instances don't need to be mounted in Shared Mode,
    because they are used by only one instance at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you take care that they are mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    The first consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc.);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups per node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should take over the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other with 30 diskgroups (10 instances);
    - in case two nodes failed, the sole surviving node would not be able to mount the additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work: since ASM has the parameter CLUSTER_DATABASE=true, you cannot mount 90 diskgroups;
    you can only mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only about 21 diskgroups (about 7 instances) on each node.
    The second consequence is that every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    We also found in the CRS log that whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances here are single-node, not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we were disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - Best practices require that online redo log files are also in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): if the DATA dg gets corrupted, you can restore the full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - In the case of a fail-over (active-passive) cluster, Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc.).
    About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting disks at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features - in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not for a running transaction) - but we already have a few fail-over clusters, and we cannot change them all.
    So we plan to move the OCR and voting disks onto block devices (we think the other solution, which needs a shared file system, would take longer).
    Thanks, Marko, for pointing us to the OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but that it is discouraged (and also doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here, and if everything is fine, after a while we'll do it on the Prod ones too.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    - "Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
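    A hedged outline of the move described above (11.2 syntax; the device paths and the +CRS_DATA name are hypothetical, and whether non-ASM voting/OCR locations are supported in a given config is exactly what the SR was about - Note 428681.1 has the authoritative procedure):
    {quote}
    # add OCR locations outside ASM, then drop the ASM one (run as root)
    ocrconfig -add /dev/mapper/ocr1
    ocrconfig -add /dev/mapper/ocr2
    ocrconfig -delete +CRS_DATA
    ocrcheck
    # replace the voting files with copies outside ASM
    crsctl replace votedisk /dev/mapper/vote1 /dev/mapper/vote2 /dev/mapper/vote3
    crsctl query css votedisk
    # only after the OCR and voting files are out of ASM can the ASM instances be
    # restarted with CLUSTER_DATABASE=false, so diskgroups mount in exclusive mode
    {quote}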

  • OCR and Voting Disk Mirror

    Hi,
    We have an Oracle 10.2.0.4 RAC database. We are planning to mirror the OCR and voting disk. Currently we are using a SAN mirror copy, and the SAN admin assures us that we don't need another mirror as it is already mirrored at the disk level. Can anyone please advise whether we need to mirror these ourselves, and if so, what advantage that gives over the regular SAN copy?

    If you use a SAN or high-availability storage with RAID, you don't need to mirror the OCR/voting disk.
    Anyway, if you can do it, mirroring the OCR and multiplexing the voting disk is a good thing that Oracle recommends.
    By the way, if you cannot mirror the OCR and multiplex the voting disk on 10g, you should back up the voting disk (with the Linux "dd" command) after setup completes and after any node addition/deletion.
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b28759/adminoc.htm#sthref148
    And check the OCR automatic backups:
    $ ocrconfig -showbackup
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/ocrsyntax.htm
