Multiplexing Voting Disk in Oracle 11gR2 ??

Dear all,
I have a single voting disk residing in disk group VDOCRDG, which has external redundancy. I would like to add two more voting disks. Could you please let me know a way to add two more voting disks to this disk group, or should I create a new disk group with normal redundancy?
I would appreciate it if you could provide your answer with the SQL commands.
database: oracle 11g r2 patch-1
os: RHEL 5.5
RAC: two-node RAC
ASM: voting disk and OCR reside in ASM
let me know if you need more information.
thanks
P

Just out of curiosity:
(Obviously this is a doc question)
is http://tahiti.oracle.com broken for you, so you must ask others to abstract the documentation for free? Or don't you understand the line in the Forums Etiquette post
reading 'Consulting documentation first is highly recommended'?
I'm getting a bit annoyed by people trying to outsource as much of the task they get paid for to a forum of volunteers.
Sybrand Bakker
Senior Oracle DBA
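
For anyone landing here later: in 11gR2, voting files stored in ASM cannot be added one at a time; their number follows the redundancy of the disk group that holds them (external = 1, normal = 3, high = 5), so an external-redundancy group such as VDOCRDG can only ever hold a single voting file. A minimal sketch of the usual approach, assuming a new normal-redundancy disk group named VOTE with three candidate disks (the ORCL:VOTE* disk names and the group name are placeholders, not taken from this thread):

SQL> CREATE DISKGROUP VOTE NORMAL REDUNDANCY
     FAILGROUP fg1 DISK 'ORCL:VOTE1'
     FAILGROUP fg2 DISK 'ORCL:VOTE2'
     FAILGROUP fg3 DISK 'ORCL:VOTE3'
     ATTRIBUTE 'compatible.asm' = '11.2';
Then, as root:
# crsctl replace votedisk +VOTE
# crsctl query css votedisk

ASM places one voting file in each failure group, so the query should report three voting files afterwards.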

Similar Messages

  • Create raw devices for OCR & Voting Disk for Oracle 10g R2 RAC (Linux 64 bit)

    Hi Friends,
    Please let me know the document for creating raw disks for the OCR and voting disks (the RPMs required, the process to create the raw disks, etc.) in Oracle 10g R2 on Linux (64 bit).
    Regards,
    DB

    http://docs.oracle.com/cd/B19306_01/install.102/b14203/storage.htm#BABFFBBA
    and
    Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 465001.1]
    Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1]
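    In case it helps while working through those notes, here is a rough sketch of the manual binding step only (device names, ownership and modes below are illustrative; the persistent setup across reboots is exactly what the two notes above describe):
    # as root: bind the block devices to raw devices
    raw /dev/raw/raw1 /dev/sdb1      # intended for the OCR
    raw /dev/raw/raw2 /dev/sdc1      # intended for the voting disk
    # typical 10gR2 ownership (verify against the notes above before relying on it)
    chown root:oinstall /dev/raw/raw1 && chmod 640 /dev/raw/raw1
    chown oracle:oinstall /dev/raw/raw2 && chmod 660 /dev/raw/raw2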

  • Root.sh hangs at formatting voting disk on OEL32 11gR2 RAC with ocfs2

    Hi,
    I am trying to bring up Oracle 11gR2 RAC on Enterprise Linux x86 (32-bit) version 5.6, using OCFS2 1.4 as the shared cluster filesystem. Everything went fine until root.sh, which hangs with the message "now formatting voting disk <vdsk path>".
    The logs are mentioned below.
    Checked the alert log:
    {quote}
    cssd(9506)]CRS-1601:CSSD Reconfiguration complete. Active nodes are oel32rac1 .
    2011-08-04 15:58:55.356
    [ctssd(9552)]CRS-2407:The new Cluster Time Synchronization Service reference node is host oel32rac1.
    2011-08-04 15:58:55.917
    [ctssd(9552)]CRS-2401:The Cluster Time Synchronization Service started on host oel32rac1.
    2011-08-04 15:58:56.213
    [client(9567)]CRS-1006:The OCR location /u02/storage/ocr is inaccessible. Details in /u01/app/11.2.0/grid/log/oel32rac1/client/ocrconfig_9567.log.
    2011-08-04 15:58:56.365
    [client(9567)]CRS-1001:The OCR was formatted using version 3.
    2011-08-04 15:58:59.977
    [crsd(9579)]CRS-1012:The OCR service started on node oel32rac1.
    {quote}
    crsctl.log:
    {quote}
    2011-08-04 15:59:00.246: [  CRSCTL][3046184656]crsctl_vformat: obtain cssmode 1
    2011-08-04 15:59:00.247: [  CRSCTL][3046184656]crsctl_vformat: obtain VFListSZ 0
    2011-08-04 15:59:00.258: [  CRSCTL][3046184656]crsctl_vformat: Fails to obtain backuped Lease from CSSD with error code 16
    2011-08-04 15:59:01.857: [  CRSCTL][3046184656]crsctl_vformat: to do clsscfg fmt with lease sz 0
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]NOTE: No asm libraries found in the system
    2011-08-04 15:59:01.910: [    CLSF][3046184656]Allocated CLSF context
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Discovery with str:/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]UFS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.910: [   SKGFD][3046184656]Fetching UFS disk :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]OSS discovery with :/u02/storage/vdsk:
    2011-08-04 15:59:01.911: [   SKGFD][3046184656]Handle 0xa6c19f8 from lib :UFS:: for disk :/u02/storage/vdsk:
    2011-08-04 17:10:37.522: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    2011-08-04 17:10:37.526: [   SKGFD][3046184656]WARNING:io_getevents timed out 618 sec
    {quote}
    ocrconfig log:
    {quote}
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]ibctx: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]proprinit:problem reading the bootblock or superbloc 22
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 0
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 1
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 2
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 3
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 4
    2011-08-04 15:58:56.214: [  OCROSD][3046991552]utread:3: Problem reading buffer 8532000 buflen 4096 retval 0 phy_offset 102400 retry 5
    2011-08-04 15:58:56.214: [  OCRRAW][3046991552]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
    2011-08-04 15:58:56.365: [  OCRRAW][3046991552]iniconfig:No 92 configuration
    2011-08-04 15:58:56.365: [  OCRAPI][3046991552]a_init:6a: Backend init successful
    2011-08-04 15:58:56.390: [ OCRCONF][3046991552]Initialized DATABASE keys
    2011-08-04 15:58:56.564: [ OCRCONF][3046991552]csetskgfrblock0: output from clsmft: [clsfmt: successfully initialized file /u02/storage/ocr
    2011-08-04 15:58:56.577: [ OCRCONF][3046991552]Successfully set skgfr block 0
    2011-08-04 15:58:56.578: [ OCRCONF][3046991552]Exiting [status=success]...
    {quote}
    ocssd.log:
    {quote}
    2011-08-04 15:59:00.140: [    CSSD][2963602320]clssgmFreeRPCIndex: freeing rpc 23
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.228: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:00.234: [    CSSD][2996054928]clssgmExecuteClientRequest: VOTEDISKQUERY recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmExecuteClientRequest: CONFIG recvd from proc 6 (0xb35f7438)
    2011-08-04 15:59:00.247: [    CSSD][2996054928]clssgmConfig: type(1)
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:03.039: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:07.047: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:11.057: [    CSSD][2942622608]clssnmSendingThread: sent 4 status msgs to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:16.068: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sending status msg to all nodes
    2011-08-04 15:59:21.079: [    CSSD][2942622608]clssnmSendingThread: sent 5 status msgs to all nodes
    {quote}
    Any help here is appreciated.
    Regards
    Amith R
    Edited by: Mithzz on Aug 4, 2011 4:58 AM

    Did an lsof on vdisk and it showed
    >
    COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
    crsctl.bi 9589 root 26u REG 8,17 21004288 102980 /u02/storage/vdsk
    [root@oel32rac1 ~]# ps -ef |grep crsctl
    root 9589 7583 0 15:58 pts/1 00:00:00 [crsctl.bin] <defunct>
    >
    Could this be a permission issue ?
    --Amith
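    Not an answer from this thread, but a few quick checks that would narrow down the "permission issue?" question (paths are the ones from the post; the commands are standard coreutils/ocfs2-tools):
    # ownership and mode of the voting disk and OCR files on the OCFS2 volume
    ls -l /u02/storage/vdsk /u02/storage/ocr
    # is the O2CB cluster stack up, and which nodes have the volume mounted?
    /etc/init.d/o2cb status
    mounted.ocfs2 -f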

  • Question on voting disk

    Hello all,
    I'm aware of the fact that the voting disks manage current cluster/node membership information and that the various nodes constantly check in with the voting disk to register their availability. If a node is unable to ping the voting disk, the cluster evicts that node to avoid a split-brain situation. These are some of the points that I don't understand, as the documentation is not very clear. Could you explain, or post links that have a clear explanation?
    1) Why do we have to create an odd number of voting disks?
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    3) What's the logic behind the documentation note that says "each node should be able to see more than half of the voting disks at any time"?
    Thanks

    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. An odd number is recommended because 4 disks are no more highly available than 3: half of 3 is 1.5, rounded up to 2, and half of 4 is 2, so once we lose 2 disks the cluster fails whether we have 3 or 4 voting disks.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question covers this one too.

  • Minimum number of voting disks?

    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks

    user9202785 wrote:
    Hi,
    Is there a minimum number of voting disks?
    Does that depend on the number of RAC nodes?
    If I have 4 nodes, what is the optimal number of voting disks?
    Thanks
    Oracle recommends 3 voting disks, but you can install Clusterware with one voting disk by selecting external redundancy during the Clusterware installation.
    The number of voting disks does not depend on the number of RAC nodes; you can have up to a maximum of 32 voting disks, whether you have 2 or 100 RAC nodes.
    The Oracle Clusterware is comprised primarily of two components: the voting disk and the OCR (Oracle Cluster Registry). The voting disk is nothing but a file that contains and manages information of all the node memberships and the OCR is a file that manages the cluster and RAC configuration.
    Refer to how to add/remove/replace a voting disk in the link below:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html
    If a voting disk fails, refer to:
    http://oracleinstance.blogspot.com/2009/12/how-to-addremovereplacemove-oracle.html
    Also refer to the Oracle® Database 2 Day + Real Application Clusters Guide 10g Release 2 (10.2).
    Read the below; hope it will help you.
    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. An odd number is recommended because 4 disks are no more highly available than 3: half of 3 is 1.5, rounded up to 2, and half of 4 is 2, so once we lose 2 disks the cluster fails whether we have 3 or 4 voting disks.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question covers this one too. (Answered by: Boochi)
    Refer: Question on voting disk
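    For completeness, the 10g-style multiplexing commands look roughly like this (the raw device paths are placeholders; in 10gR2 the -force flag is used because CRS must be stopped on all nodes while voting disks are added):
    # as root, with Clusterware down on every node
    crsctl add css votedisk /dev/raw/raw4 -force
    crsctl add css votedisk /dev/raw/raw5 -force
    crsctl query css votedisk      # should now report an odd number of voting disks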

  • How to rename voting disk name in oracle clusterware 11gr2

    Hi:
    I need to change the name of the voting disk at the OS level. The original name is /dev/rhdisk20 and I need to rename it to /dev/asmocr_vote1 (UNIX AIX). The voting disk is located in ASM disk group +OCR.
    Initial voting disk was: /dev/rhdisk20 in diskgroup +OCR
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE a2e6bb7e57044fcabf0d97f40357da18 (/dev/rhdisk20) [OCR]
    I created a new alias device name:
    #mknod /dev/asmocr_vote01 c 18 10
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:57 rhdisk20 --> Old name
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 ---> alias to old name, the new name.
    After changing the voting disk's UNIX name, the cluster doesn't start; the voting disk is not found by CSSD.
    The steps to start the clusterware after changing the OS voting disk name are:
    1- stop all nodes:
    #crsctl stop crs -f (every node)
    Work on one node only (node1, +ASM1 instance):
    2- Change asm_diskstring in init+ASM1.ora:
    asm_diskstring = /dev/asm*
    3- change the disk's UNIX permissions:
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 root system 18, 10 Sep 6 16:59 asmocr_vote01
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 17:37 rhdisk20
    #(root) /dev->chown oracle:asmadmin asmocr_vote01
    #(root) /dev->chown root:system rhdisk20
    #(root) /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 --> new name, now owned by oracle:asmadmin
    crw-rw---- 1 root system 18, 10 Sep 6 17:37 rhdisk20
    4-start node in exclusive mode:
    # (root) /oracle/GRID/11203/bin->./crsctl start crs -excl
    CRS-4123: Oracle High Availability Services has been started.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.mdnsd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'orarac3intg'
    CRS-2676: Start of 'ora.gpnpd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.gipcd' on 'orarac3intg'
    CRS-2676: Start of 'ora.cssdmonitor' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.diskmon' on 'orarac3intg'
    CRS-2676: Start of 'ora.diskmon' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cssd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'orarac3intg'
    CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2676: Start of 'ora.ctssd' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.drivers.acfs' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'orarac3intg'
    CRS-2676: Start of 'ora.asm' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.crsd' on 'orarac3intg' succeeded
    5-check votedisk:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    Located 0 voting disk(s).
    --> NO VOTING DISK found
    6- mount the disk group holding the voting disk (+OCR in this case) in the +ASM1 instance:
    SQL> ALTER DISKGROUP OCR mount;
    7- add the voting disk belonging to disk group +OCR:
    # (root) /oracle/GRID/11203/bin->./crsctl replace votedisk +OCR
    Successful addition of voting disk 86d8b12b1c294f5ebfa66f7f482f41ec.
    Successfully replaced voting disk group with +OCR.
    CRS-4266: Voting file(s) successfully replaced
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Located 1 voting disk(s).
    8- stop the node:
    #(root) /oracle/GRID/11203/bin->./crsctl stop crs -f
    9- start the node:
    #(root) /oracle/GRID/11203/bin->./crsctl start crs
    10- check:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Vicente.
    HP.
    Edited by: 957649 on 07-sep-2012 13:11

    There is no facility to rename a column in Oracle 8i. This is possible from Oracle 9.2 onwards.
    For your task, one example is given below.
    Example:
    The existing table is ITEMS, with columns ITID and ITEMNAME.
    Instead of ITID I want ITEMID.
    Solution:
    Step 1: CREATE TABLE items_dup AS SELECT itid itemid, itemname FROM items;
    Step 2: DROP TABLE items;
    Step 3: RENAME items_dup TO items;
    Result:
    The ITEMS table now contains columns ITEMID and ITEMNAME.
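    For reference, from Oracle 9.2 onwards the same result takes a single statement (table and column names reuse the example above):
    SQL> ALTER TABLE items RENAME COLUMN itid TO itemid;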

  • Oracle RAC 11g. 3rd Voting disk mounting automatically without fstab

    Hi,
    We have a 2-node Oracle 11gR2 extended RAC database on Red Hat Linux. We have a voting disk on each node and a 3rd voting disk on a separate site. The voting disk directories are NFS mounted. We have noticed that there is no entry in fstab (on either RAC node) for the 3rd voting disk location, yet the directory is still mounted automatically on each RAC node at startup.
    Can Oracle manage mounting the disks itself without using fstab? Oracle recommends using fstab for mounting the directories, and I have found nothing on Oracle mounting the directories any other way than fstab.
    I am completely lost here. We need to do some configuration on the 3rd voting disk location, and I need to find out how the disk is being mounted on the RAC nodes. Any help on this would be greatly appreciated. Thanks.
    Rgs,
    Rob

    Did you check the rc.local file? Perhaps the mount entries are in there.
    HTH
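    A quick way to check (the grep patterns and mount point are only examples):
    # look for mount commands issued outside /etc/fstab at boot
    grep -n mount /etc/rc.d/rc.local
    # and confirm what is currently mounted for that path
    mount | grep -i vote
    grep nfs /proc/mounts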

  • Adding ocrmirror and voting disk on another diskgroup 11gr2 RAC

    Hi Gurus,
    Can I add an OCR mirror on another disk group (external redundancy) and move (replace) the voting disks to another disk group (normal redundancy) in 11gR2 RAC while the databases are up and running?

    Hi,
    In Oracle 11gR2, the voting disks are backed up automatically into the OCR as part of any configuration change.
    Voting disk data is automatically restored to any added voting disk.
    You can migrate the voting disks from non-ASM to ASM without taking down the cluster.
    To move the voting disks to ASM:
    crsctl replace votedisk +diskgroup
    To add an OCR mirror:
    ocrconfig -add +datagroup2
    Oracle manages up to five redundant OCR locations.
    Hope this helps
    Zekeriya Besiroglu
    http://zekeriyabesiroglu.blogspot.com
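    Putting that together, a minimal sketch (the disk group names +OCRMIRROR and +VOTE are placeholders; run the commands as root with the cluster stack up):
    # add a second OCR location on another disk group, then verify
    ocrconfig -add +OCRMIRROR
    ocrcheck
    # move (replace) the voting files onto another disk group, then verify
    crsctl replace votedisk +VOTE
    crsctl query css votedisk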

  • How does oracle use voting disk?

    How does Oracle use the voting disk? How does it do health checks and arbitrate cluster ownership among the instances in case of network failures?
    Why must it have an odd number?
    Thanks.

    Did they mention anywhere that there should be an odd number of voting disks? (It is just multiplexing.)
    ~Sameer.

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs in each, so 6 LUNs in total are visible from both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks created for OCR_VOTE at the grid installation, or should I choose only 3 of them and configure mirroring at a later stage?
    The reason I am asking is that I didn't find any place to create the ASM disk groups for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups: one containing the three disks from storage 1 and the other containing the remaining three disks from the second storage.
    I hope you understand what I mean and can help me choose the proper way.
    Thank you.

    Hi,
    You have 2 storage arrays.
    You will need to configure a quorum ASM disk to store a voting disk.
    This is because if you lose half or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage array will hold more voting disks than the other
    (e.g. with 3 voting disks, 2 are stored on stg-A and 1 on stg-B).
    If the storage array that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down, no matter whether the other storage array is online.
    It will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your clusterware the same way as an extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
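    A minimal sketch of the disk group layout described above, with one failure group per storage array plus an NFS quorum failure group for the third voting file (all disk and path names are placeholders):
    SQL> CREATE DISKGROUP OCR_VOTE NORMAL REDUNDANCY
         FAILGROUP stg_a DISK 'ORCL:OCRVOTE_A'
         FAILGROUP stg_b DISK 'ORCL:OCRVOTE_B'
         QUORUM FAILGROUP nfs_q DISK '/voting_nfs/vote3'
         ATTRIBUTE 'compatible.asm' = '11.2';
    Then, as root, crsctl replace votedisk +OCR_VOTE places one voting file in each failure group, including the quorum one.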

  • Oracle 11gR2 RAC Installation - ASM Disks - Need advice on configurations

    Hi Guys
    How many disks are needed for voting, data, logs and failover? Should each be in a separate disk group, or can they all be in a single disk group? Please advise. Thanks.

    Hi Friend,
    Option 1 :
    If Oracle Clusterware is used with normal redundancy, we require the following (also for failover purposes):
    1. Two OCR files - 280 MB each
    2. Three Voting Disks - 280 MB each
    Total - 1.4 GB approx
    Option 2 :
    Oracle recommends that the disks used for the file system be on a RAID. When you use external redundancy, the minimum requirement is one OCR and one voting disk of 280 MB each.
    Choices :
    1. External Redundancy - > Minimum No Of Disks (1) - > One OCR (280 MB) and One Voting Disk (280 MB) -> 580 MB
    2. Normal Redundancy - > Minimum No Of Disks (3) - > Two OCR (560 MB) and Three Voting Disk (840 MB) -> 1.4 GB
    3. High Redundancy - > Minimum No Of Disks (5) - > Three OCR (840 MB) and Five Voting Disk (1.4 GB) -> 2.3 GB
    So, Choose based on redundancy...
    Hope it helps..
    Note : Use ASM for storing above files..
    Thanks
    LaserSoft

  • Installing Oracle 11gR2 RAC Problem: ASM disks

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using 2 Virtual Machines (rac1 and rac2 whose OS are Oracle Linux 5.6) in VMPlayer and according to the tutorial
    http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    The first time, I create VM rac1. While its OS, Oracle Linux 5.6, is booting, the ASMLib driver initializes OK. I create 5 ASM disks successfully using the command:
    [root@rac1 /]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/sdb1
    Output: Marking disk "ASMDISK1" as an ASM disk: OK
    Because rac1's hard disk space is not enough to install the database, I create VM rac1 again with a bigger hard disk (30GB) and do everything the same as the first time. But this time, while Oracle Linux 5.6 is booting, initializing the ASMLib driver fails. I create the 5 ASM disks using the same command:
    [root@rac1 /]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/sdb1
    Output: Marking disk "ASMDISK1" as an ASM disk: failed
    But when I use the command:
    [root@rac1 /]# /etc/init.d/oracleasm listdisks
    Output: ASMDISK1 ASMDISK2 ASMDISK3 ASMDISK4 ASMDISK5
    My questions are:
    First, can the 5 disks "ASMDISK1 ASMDISK2 ASMDISK3 ASMDISK4 ASMDISK5" be used correctly in spite of "Marking disk ASMDISK1 (2, 3, 4, 5) as an ASM disk" having failed?
    Second, how can I fix Oracle Linux 5.6 so that the ASMLib driver initializes OK while booting?
    Thanks.

    Folks,
    Hello. I solved the issue myself. Thanks.
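    For anyone who hits the same thing, these are the usual ASMLib checks (not from this thread; all subcommands come from the standard oracleasm init script, run as root):
    /etc/init.d/oracleasm status      # is the kernel driver loaded and /dev/oracleasm mounted?
    /etc/init.d/oracleasm configure   # re-answer the user/group/start-on-boot prompts
    /etc/init.d/oracleasm scandisks   # re-read the disk headers afterwards
    /etc/init.d/oracleasm listdisks   # the five ASMDISKn labels should be listed again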

  • Reinstalling Oracle 11gR2 RAC Grid Problem - ASM Disks Group

    Folks,
    Hello.
    I am installing Oracle 11gR2 RAC using 2 Virtual Machines (rac1 and rac2 whose OS are Oracle Linux 5.6) in VMPlayer.
    I have been installing Grid Infrastructure using runInstaller in the first VM rac1 from step 1 to step 9 of 10.
    On step 9 of 10 in the wizard, I accidentally touched the mouse and the wizard was gone.
    The directory for installing Grid in the 2 VMs is the same: /u01
    In order to make sure everything was correct, I deleted the entire /u01 directory in the 2 VMs and installed Grid in rac1 again.
    I understand now that deleting /u01 is not the right way. The right way is to follow the tutorial
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/rem_orcl.htm#CBHEFHAC
    But I have already deleted /u01 and need to fix things one by one. I install Grid again and get the following error message on step 5 of 9:
    [INS - 30516] Please specify unique disk groups.
    [INS-3050] Empty ASM disk group.
    Cause - Installer has detected the disk group name provided already exists on the system.
    Action - Specify different disk group.
    In the wizard, the previous disk group name is "DATA" and its candidate disks (5 ASM disks) are gone. I tried a different name, "DATA2", but no ASM disks come up under "Candidate Disks", and under "All Disks" none of the ASM disks can be selected.
    I want to use the same ASM disk group "DATA" and don't want to create a new disk group.
    My question is:
    How can I get the previous ASM disks and their group "DATA" to come up under "Candidate Disks" so that I can use them again?
    Thanks.

    Hi, in case this helps anyone else: I got this INS-30516 error too and was stumped for a little while. I have 2 x 2-node RACs hitting the same SAN. The first-built RAC has a DATA disk group. When I went to build the second RAC on new ASM disks with a new disk group (but the same disk group name, DATA), I got INS-30516 about the disk group name already being in use. I finally figured out that all that was required was to restrict the disk string, using the button in the installer, to retrieve only the LUNs for this RAC (this was quick and dirty; all LUNs for both RACs were being presented to both RACs). Once the disk string only searched for the LUNs required for this RAC, e.g.
    ORCL:DATA_P* (for DATA_PD and FRA_PD)
    the error went away.
    I also have DATA_DR and FRA_DR presented to both RACs. Apparently the installer scans the disk headers, and if it finds a disk group name that is already in use based on the disk string scan, it will not allow reuse of that disk group name, since it has no way of knowing that the other ASM disks belong to a different RAC.
    HTH
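    For what it's worth, after installation the same restriction can be expressed through the ASM discovery string (the pattern reuses the one quoted above; this is a suggestion, not something the poster did):
    SQL> -- connected to the ASM instance as SYSASM
    SQL> ALTER SYSTEM SET asm_diskstring = 'ORCL:DATA_P*' SCOPE=BOTH;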

  • Oracle 11gR2 RAC Grid installation - Configuration of ASM failed,

    Hi guys,
    I am trying to install/configure Oracle 11gR2 Grid for a 2-node RAC on SUSE 10, and root.sh failed with this error:
    2010-08-19 15:30:47: Querying for existing CSS voting disks
    2010-08-19 15:30:47: Performing initial configuration for cluster
    2010-08-19 15:30:48: Start of resource "ora.ctssd -init" Succeeded
    2010-08-19 15:30:48: Configuring ASM via ASMCA
    2010-08-19 15:30:48: Executing as oracle: /u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM
    2010-08-19 15:30:48: Running as user oracle: /u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM
    2010-08-19 15:30:48: Invoking "/u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM" as user "oracle"
    2010-08-19 15:30:51: Configuration of ASM failed, see logs for details
    2010-08-19 15:30:51: Did not succssfully configure and start ASM
    2010-08-19 15:30:51: Exiting exclusive mode
    2010-08-19 15:30:51: Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
    2010-08-19 15:30:51: Stop of resource "ora.crsd -init" failed
    2010-08-19 15:30:51: Failed to stop CRSD
    2010-08-19 15:30:51: Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
    2010-08-19 15:30:51: Stop of resource "ora.asm -init" failed
    2010-08-19 15:30:51: Failed to stop ASM
    2010-08-19 15:31:12: Initial cluster configuration failed. See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_oracle-node2.log for details
    oracle-node2:/u01/app/11.2.0/grid/cfgtoollogs/crsconfig #
    Can anyone please help/assist/guide me on how to move forward?
    Thanks in advance.

    It failed again with the same error:
    2010-08-19 16:52:05: Start of resource "ora.ctssd -init" Succeeded
    2010-08-19 16:52:05: Configuring ASM via ASMCA
    2010-08-19 16:52:06: Executing as oracle: /u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM
    2010-08-19 16:52:06: Running as user oracle: /u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM
    2010-08-19 16:52:06: Invoking "/u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList ORCL:DATA1,ORCL:DATA2 -redundancy EXTERNAL -configureLocalASM" as user "oracle"
    2010-08-19 16:52:08: Configuration of ASM failed, see logs for details
    2010-08-19 16:52:08: Did not succssfully configure and start ASM
    2010-08-19 16:52:08: Exiting exclusive mode
    2010-08-19 16:52:08: Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
    2010-08-19 16:52:08: Stop of resource "ora.crsd -init" failed
    2010-08-19 16:52:08: Failed to stop CRSD
    2010-08-19 16:52:08: Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
    2010-08-19 16:52:08: Stop of resource "ora.asm -init" failed
    2010-08-19 16:52:08: Failed to stop ASM
    2010-08-19 16:52:30: Initial cluster configuration failed. See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_oracle-node1.log for details
    oracle-node1:/u01/app/11.2.0/grid/cfgtoollogs/crsconfig #

  • Oracle 11gR2 RAC on Linux 5

    Hi,
    I am trying to set up Oracle 11gR2 RAC on a Linux 5.4 64-bit machine. I have used OCFS2 for creating the shared disks. I encountered the following error messages while installing Grid Infrastructure:
    [INS-30014] Unable to check whether the location specified is on CFS
    [INS-40103] The installer has detected that the software location specified is on an OCFS2 partition.
    [INS-41312] One or more locations specified for Oracle Cluster Registry (OCR) cannot be used.
    [INS-41514] One or more locations specified for placing voting disks cannot be used.
    -- At configuration time I created 2 shared disks: 1) for the Oracle software, 2) for the voting disk and OCR.
    I provided the same details at the time of the Clusterware installation.
    Please help me.
    Thanks,

    "Actually for both the Oracle software and the OCR I have used OCFS2, because earlier with Oracle 11gR1 RAC I did the same and was able to do a smooth installation." Well, you can no longer do that with 11gR2; it's clearly stated in the 11gR2 error messages documentation that configuration of the Grid Infrastructure software for a cluster is not supported on an OCFS2 partition. You'll need to specify a location that is not an OCFS2 partition.
    Search for the INS-40103 error message at http://download.oracle.com/docs/cd/E11882_01/server.112/e10880/giinstaller_errormessages.htm
    Cheers.
