Test to kill the voting disks

Hello -
I am running RAC 11.1.0.7 and I want to test what happens when I corrupt the voting disks. When I run the dd unix command, the node does not go down. I am also not seeing anything reported in the Clusterware log. Is it correct that Oracle automatically fixes corrupt voting disks? I ran the dd command on all three of them pretty quickly and I am still seeing it report they are healthy.
Any insights are greatly appreciated!
Thanks!
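For reference, a corruption test of this kind usually looks something like the sketch below; the device paths, sizes and log location are illustrative assumptions, not details taken from this post.
crsctl query css votedisk                                 # list the configured voting disks first
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=10           # overwrite the start of each one (destructive!)
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=10
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=10
tail -f $ORA_CRS_HOME/log/`hostname -s`/cssd/ocssd.log    # watch whether CSSD notices the corruption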

Hello,
this seems to be a duplicate of this thread:
How to simulate loss of voting disks?
Thanks,
Markus

Similar Messages

  • Back Up the Voting Disk After RAC Installation

    What would be the dd command to back up the voting disk?
    My voting disk is on /dev/raw/raw2.

    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk
    What is the command to back up the root.sh script?
    cp root.sh root.sh.bkup
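    For completeness, a fuller backup/restore sketch (the block size and output path are illustrative assumptions):
    dd if=/dev/raw/raw2 of=/home/backupuser/backup_voting_disk bs=4k    # back up the voting disk
    dd if=/home/backupuser/backup_voting_disk of=/dev/raw/raw2 bs=4k    # restore it, with the clusterware stack down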

  • Recovery scenario - Voting disk  does not match with the cluster guid

    Hi all,
    Imagine you cannot start your guest VMs because the system.img root image is corrupted, and assume each guest contains 5 physical disks (all created by the RAC template) with ASM on them.
    What is the simplest recovery scenario for the guest VMs (RAC)?
    Would the following be a feasible scenario for recovering availability? (Assume both RAC system images are corrupted and we prefer not to do a system-level recovery via backup/restore.)
    1. Create 2 RAC instances using the same networking and hostname details as the corrupted ones, using 5 different new disks.
    2. Shut down the newly created instances. Drop the disks from the newly created instances using VM Manager.
    3. Add the old disks, whose system image cannot be recovered but whose ASM disks are still usable, to the newly created instances (using VM Manager).
    4. Open the newly created instances.
    Can we expect ASM and CRS to initialize and open without a problem?
    When I try this scenario I get the following errors from cssd/crsd:
    - Cluster guid 9112ddc0824fefd5ff2b7f9f7be8f048 found in voting disk does not match with the cluster guid a3eec66a2854ff0bffe784260856f92a obtained from the GPnP profile.
    - Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity.
    What would be the simplest way to recover a virtual machine that has healthy ASM disks but a corrupted system image?
    Thank you

    Hi,
    You have a similar problem when trying to clone databases with 11.2.
    The problem is that a cluster is uniquely identified, and this information is held in the OCR and the voting disks, so exactly these two must not be cloned.
    To achieve what you want, simply set up your system so that you have a separate diskgroup for the OCR and voting disks (and the ASM spfile), which is not restored in this kind of scenario.
    Only the database files in ASM are then exchanged later.
    Then what you want can be achieved.
    However, I am not sure that the RAC templates offer the option to install the OCR and voting disks into a separate diskgroup.
    Regards
    Sebastian
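    On 11.2, moving the OCR and voting files into such a dedicated diskgroup is done roughly as sketched below (run as root from the Grid home); the diskgroup names +OCRVOTE and +DATA are assumptions, and the new diskgroup must already exist with the redundancy you want:
    crsctl replace votedisk +OCRVOTE     # move the voting files into the new diskgroup
    ocrconfig -add +OCRVOTE              # add an OCR location in the new diskgroup
    ocrconfig -delete +DATA              # then drop the old OCR location
    crsctl query css votedisk            # verify
    ocrcheck                             # verify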

  • Regarding Voting disk recovery scenarios

    Hi,
    For years I have read about RAC and voting disks, and it is said that each node should be able to access more than half of the voting disks, but I never got a chance to work through the scenarios below. If someone has actually done them, or has good knowledge of them, please let me know.
    1) If I have 5 voting disks and 2 of them get corrupted or deleted, will the cluster keep working properly? If I reboot the system or restart the cluster, will it still work?
    2) What happens in the same scenario with 3 voting disks deleted or corrupted?
    3) If I have 2 OCRs and one of them gets deleted or corrupted, will the system run fine?

    Aman,
    During startup, the clusterware requires a majority of votes to start the CSS daemon.
    The majority is counted against how many voting disks were configured, not how many remain alive.
    Below is a test using 3 voting disks (11.2.0.3).
    alert.log
    [cssd(26233)]CRS-1705:Found 1 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/app/11.2.0/grid/log/node11g02/cssd/ocssd.log
    ocssd.log
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskVerify: Successful discovery of 1 disks
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmCompleteVFDiscovery: Completing voting file discovery
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from discovered to pending disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvDiskStateChange: state from pending to configured disk /dev/asm-ocr-vote01
    2014-11-14 05:20:05.126: [    CSSD][3021551360]clssnmvVerifyCommittedConfigVFs: Insufficient voting files found, found 1 of 3 configured, needed 2 voting files
    With 2 voting files online there is no problem; the clusterware starts, with a warning in the logs.
    PS: If a diskgroup (OCR_VOTE) was configured with high redundancy (5 ASM disks, each in its own failgroup), even that diskgroup cannot survive the failure of 3 ASM disks: the diskgroup goes down, and if the only OCR (no mirror) is on that diskgroup, the whole clusterware goes down.
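    A small sketch of that majority arithmetic (plain shell, just to make the CRS-1705 numbers above concrete):
    CONFIGURED=3
    echo "votes required: $(( CONFIGURED / 2 + 1 ))"    # prints 2, matching the CRS-1705 message above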

  • Voting disk external redundancy

    Hey,
    question:
    I've got a database which I haven't installed myself.
    A query for the votedisk shows only one configured votedisk for my 2-node RAC cluster.
    So I presume that external redundancy was set up during installation of the clusterware.
    A few weeks ago I hit an unpublished bug where Oracle reports that the voting disk is corrupted, which is a false message.
    But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch, which I can't apply...)?
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
    or would this return an error because only one voting disk is configurable?

    Hi Christian,
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this return an error because only one voting disk is configurable?
    I will assume you are using version 11.1 or earlier.
    What determines whether your voting disk uses external redundancy or normal redundancy is the number of voting disks your environment has.
    If you have only one voting disk, you have external redundancy (the customer guarantees recovery of the voting disk in case of failure; at least Oracle thinks so).
    If you have more than 2 voting disks (because you need an odd number), you are using normal redundancy.
    You can then add more voting disks to your environment without worrying about the external redundancy previously configured.
    External or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change external redundancy to normal redundancy. As I have said, it is only the number of voting disks configured in your environment.
    Make sure you have a fresh backup of your voting disk.
    Warning: on UNIX/Linux the voting disk backup is performed manually with the dd command.
    Regards,
    Levi Pereira
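    A rough sketch of what going from one voting disk to three could look like on 10.2/11.1; the device paths are placeholders, and the exact procedure (online vs. with the CRS stack stopped and -force) depends on the exact version, so check MOS note 428681.1 first:
    crsctl query css votedisk                      # what is configured now
    dd if=/dev/raw/raw2 of=/backup/votedisk.bak    # back up the existing voting disk first
    crsctl add css votedisk /dev/raw/raw3
    crsctl add css votedisk /dev/raw/raw4
    crsctl query css votedisk                      # should now list three voting disks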

  • How to rename voting disk name in oracle clusterware 11gr2

    Hi:
    I need to change the name of the voting disk at the OS level. The original name is /dev/rhdisk20 and I need to rename it to /dev/asmocr_vote01 (AIX). The voting disk is located in ASM diskgroup +OCR.
    Initial voting disk was: /dev/rhdisk20 in diskgroup +OCR
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE a2e6bb7e57044fcabf0d97f40357da18 (/dev/rhdisk20) [OCR]
    I created a new alias device name:
    #mknod /dev/asmocr_vote01 c 18 10
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:57 rhdisk20 --> Old name
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 ---> alias to old name, the new name.
    After changing the voting disk's UNIX name, the cluster does not start; the voting disk is not found by CSSD.
    Steps to start the clusterware after changing the OS voting disk name:
    1. Stop all nodes:
    #crsctl stop crs -f (every node)
    Work on one node only (node1, +ASM1 instance):
    2. Change asm_diskstring in init+ASM1.ora:
    asm_diskstring = /dev/asm*
    3. Change the disk UNIX ownership/permissions:
    # /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 root system 18, 10 Sep 6 16:59 asmocr_vote01
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 17:37 rhdisk20
    #(root) /dev->chown oracle:asmadmin asmocr_vote01
    #(root) /dev->chown root:system rhdisk20
    #(root) /dev->ls -lrt|grep "18, 10"
    brw------- 1 root system 18, 10 Aug 27 13:15 hdisk20
    crw-rw---- 1 oracle asmadmin 18, 10 Sep 6 16:59 asmocr_vote01 --> new name, now owned by oracle:asmadmin
    crw-rw---- 1 root system 18, 10 Sep 6 17:37 rhdisk20
    4. Start the node in exclusive mode:
    # (root) /oracle/GRID/11203/bin->./crsctl start crs -excl
    CRS-4123: Oracle High Availability Services has been started.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.mdnsd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'orarac3intg'
    CRS-2676: Start of 'ora.gpnpd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.gipcd' on 'orarac3intg'
    CRS-2676: Start of 'ora.cssdmonitor' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.diskmon' on 'orarac3intg'
    CRS-2676: Start of 'ora.diskmon' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cssd' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'orarac3intg'
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'orarac3intg'
    CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'orarac3intg'
    CRS-2676: Start of 'ora.ctssd' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.drivers.acfs' on 'orarac3intg' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'orarac3intg'
    CRS-2676: Start of 'ora.asm' on 'orarac3intg' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'orarac3intg'
    CRS-2676: Start of 'ora.crsd' on 'orarac3intg' succeeded
    5. Check the votedisk:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    Located 0 voting disk(s).
    --> NO VOTING DISK found
    6. Mount the diskgroup containing the voting disk (+OCR in this case) in the +ASM1 instance:
    SQL> ALTER DISKGROUP OCR mount;
    7. Add the votedisk back into diskgroup +OCR:
    # (root) /oracle/GRID/11203/bin->./crsctl replace votedisk +OCR
    Successful addition of voting disk 86d8b12b1c294f5ebfa66f7f482f41ec.
    Successfully replaced voting disk group with +OCR.
    CRS-4266: Voting file(s) successfully replaced
    #(root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Located 1 voting disk(s).
    8. Stop the node:
    #(root) /oracle/GRID/11203/bin->./crsctl stop crs -f
    9. Start the node:
    #(root) /oracle/GRID/11203/bin->./crsctl start crs
    10. Check:
    # (root) /oracle/GRID/11203/bin->./crsctl query css votedisk
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 86d8b12b1c294f5ebfa66f7f482f41ec (/dev/asmocr_vote01) [OCR]
    Vicente.
    HP.

    There is no facility to rename a column in Oracle 8i. This is possible from Oracle 9.2 onwards.
    For your task, one example is given below.
    Example:-
    Already existed table is ITEMS
    columns in ITEMS are ITID, ITEMNAME.
    But instead of ITID I want ITEMID.
    Solution:-
    step 1 :- create table items_dup
    as select itid itemid, itemname from items;
    step 2 :- drop table items;
    step 3 :- rename items_dup to items;
    Result:-
    ITEMS table contains columns ITEMID, ITEMNAME

  • OCR and Voting Disk

    Hi all,
    What is the difference between the OCR and the voting disk?

    The voting disk is a file that manages information about node membership and the OCR is a file that manages cluster and RAC database configuration information.
    Voting Disk - Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. Basically, all nodes in the RAC cluster register their heartbeat information on these voting disks, and the number of registered heartbeats determines the number of active nodes in the cluster. The voting disks are also used for checking the availability of instances in RAC and removing unavailable nodes from the cluster. This helps prevent a split-brain condition and keeps database information intact. The split-brain syndrome, its effects, and how Oracle manages it are described below.
    For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster. What I understand about using an odd number of voting disks is that a node must see a majority of the voting disks to continue to function; with 2 disks, a node that can see only 1 sees just half, not a majority. I am still trying to research this concept further.
    OCR (Oracle Cluster Registry) - resides on shared storage and maintains information about the cluster configuration and the cluster database. The OCR contains information such as which database instances run on which nodes and which services run on which database.
    The Oracle Cluster Registry (OCR) is one such component in 10g RAC used to store the cluster configuration information. It is a shared disk component, typically located in a shared raw volume, that must be accessible to all nodes in the cluster. The CRSd daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.
    ocrcheck is the command to check the OCR.
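    For anyone who wants to look at both components on a running cluster, a quick sketch (run from the clusterware home, some commands as root):
    ocrcheck                       # OCR: location, size and integrity check
    crsctl query css votedisk      # voting disks: paths and state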

  • Why do we need voting Disk or Voting File

    Hi all
    What is the need for a voting file or disk? I know it maintains node membership information and so on, but if the nodes can ping all the other nodes, what is the need for a voting disk?
    Can anyone please explain.
    PS : I read the
    Oracle® Database
    Oracle Clusterware and Oracle Real Application Clusters
    Administration and Deployment Guide
    10g Release 2 (10.2)
    But could not find how does it actually come into picture.
    Thanks in anticipation

    Let's take the case of a 2-node RAC: instance 'A' and instance 'B'.
    Let's assume that instance A and instance B are unable to communicate with each other. This might be because the other instance is actually down, or because there is a problem with the interconnect communication between the instances.
    A split-brain occurs when the instances cannot speak to each other due to an interconnect communication problem and each instance happens to think that it is the only surviving instance and starts updating the database. This obviously is problematic, isn't it?
    To avoid split-brain, both instances are required to update the voting disk periodically.
    Now consider the earlier case from the perspective of one of the nodes:
    Case 1) Instance B is actually down: looking from instance A's perspective, there is no communication over the interconnect with instance B and instance B is also not updating the voting disk (because it is down). Instance A can safely assume that instance B is down, and it can go on providing the services and updating the database.
    Case 2) The problem is with the interconnect communication: looking from instance A's perspective, there is no communication over the interconnect with instance B. However, it can see that instance B is updating the voting disk, which means instance B is not actually down. The same is true from instance B's perspective: it can see that instance A is updating the voting disk but it cannot speak to instance A over the interconnect. At this point, both instances rush to lock the voting disk, and whoever gets the lock ejects the other one, thus avoiding the split-brain syndrome.
    That's the theory behind using a voting disk.
    When the instances of a RAC boot up, one of them is assigned as the master. In the case of a split-brain, the master gets the lock on the voting disk and survives. In reality, the instance with the lower node number survives, even if the problem is with the interconnect network card on that node :)
    Cheers.

  • Voting disk issue in Windows 2003 server

    I installed oracle RAC 10g with ASM on windows server 2003 R2. I was able to successfully install clusterware as well as ASM with database. Prior to the install, I ran diskpart and then set automount enable. I was able to configure the ocr, ocrmirror and votedisk without any problem. After completion of the installation and then restarting both the nodes, looking at the Disk Management from the Computer Management, my first node is showing the partition for votedisk as raw disk but on the second node, I am seeing drive letters configured for the votedisks. I ran crsctl query css votedisk on both the nodes and both the nodes are pointing to the same location, \\.\votedsk1, \\.\votedsk2 and \\.\votedsk3.
    Now my question is why am I seeing the drive letter(s) (G:) for the drives that were configured to votedsk from the second node and not seeing the drive letters from the first node?
    Can I change the drive letter that were configured for voting disks from the second node?
    I did take a backup of the votedsk using ocopy, but can this be a valid backup if I clear the drive letter on second node and then restore from the backup? I appreciate any help to correct this problem. Thanks

    I can delete and add another voting disk, but what about the drive letter that shows on the second node and not on the first? Do I need to remove the drive letter on node 2 while the voting disk is still present, before performing the delete and add of another voting disk? Thanks for your help.

  • Hardware Mirroring for Voting Disk

    With Oracle 10g R2, the clusterware installation now allows you to use hardware mirroring for the voting disk. My question is: does anyone see anything wrong with using hardware mirroring?

    Nope, nothing wrong with it. I also wouldn't know why pre-10gR2 would not allow this, since Oracle only sees a 'device' anyway and this should be transparent to the clusterware. Definitely do this, especially if you only have 1 voting disk; otherwise, failure of one disk would bring down your whole RAC.

  • Cluster continues with less than half voting disks

    I wanted to try voting disk recovery. I have the voting files in a diskgroup with normal redundancy. I corrupted two of the disks containing the voting disk using the dd command. I was expecting the clusterware to go down, as only 1 voting disk (less than half of the total of 3) was available, but surprisingly the clusterware kept running. I even waited quite some time, to no avail. There were no messages in the alert log or the ocssd log. I would be thankful if experts could give more input on this.
    Regards

    Hi, you can check the MetaLink note
    OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
    it says
    For 11.2+, it is no longer required to back up the voting disk. The voting disk data is automatically backed up in OCR as part of any configuration change.
    The voting disk files are backed up automatically by Oracle Clusterware if the contents of the files have changed in the following ways:
    1. Configuration parameters, for example misscount, have been added or modified.
    2. After performing voting disk add or delete operations.
    The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.
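    A couple of quick checks for a test like the one described (11.2; the diskgroup name OCR_VOTE and the grid user are assumptions):
    crsctl query css votedisk                      # are the voting files still ONLINE?
    su - grid -c "asmcmd lsdsk -k -G OCR_VOTE"     # ASM's view of the member disks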

  • Question on voting disk

    Hello all,
    I'm aware of the fact that the voting disks manage current cluster/node membership information and that the various nodes constantly check in with the voting disk to register their availability. If a node is unable to ping the voting disk, the cluster evicts that node to avoid a split-brain situation. These are some of the points that I don't understand, as the documentation is not very clear. Could you explain, or post links that have a clear explanation?
    1) Why do we have to create an odd number of voting disks?
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    3) What's the logic behind the documentation note that says that "each node should be able to see more than half of the voting disks at any time"?
    Thanks

    1) Why do we have to create an odd number of voting disks?
    As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
    Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. It is recommended to use an odd number, as 4 disks are not any more highly available than 3: half of 3 is 1.5, rounded up to 2, and half of 4 is 2, so once we lose 2 disks the cluster will fail whether we have 4 voting disks or 3.
    2) Does the cluster actually check the vote count before node eviction? If yes, could you explain this process briefly?
    Yes. If you lose half or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    3) What's the logic behind the documentation note that says that "each node should be able to see more than half of the voting disks at any time"?
    The answer to the first question answers this one too.
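    For context, the failure tolerance follows mechanically from the majority rule above; a small shell sketch of the arithmetic (not from the original reply):
    for D in 1 3 5 32; do
      echo "$D voting disks -> tolerates $(( D - (D/2 + 1) )) failures"
    done
    # prints 0, 1, 2 and 15 failures respectively, matching the numbers quoted above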

  • Startup of Clusterware with missing voting disk

    Hello,
    in our environment we have a 2 node cluster.
    The 2 nodes and 2 SAN storages are in different rooms.
    Voting files for Clusterware are in ASM.
    Additionally we have a third voting disk on an NFS server (configured as in this description: http://www.oracle.com/technetwork/products/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf)
    The Quorum flag is on the disk that is on NFS.
    The diskgroup is with normal redundancy.
    Clusterware keeps running when one of the VDs gets lost (e.g. storage failure).
    So far so good.
    But when I have to restart Clusterware (e.g. reboot of a node) while the VD is still missing, then clusterware does not come up.
    I did not find any indication whether this is the intended behaviour of Clusterware, or whether I missed a detail.
    From my point of view it should work to start Clusterware as long as the majority of VDs are available.
    Thanks.

    Hi,
    actually what you see is expected (especially in a stretched cluster environment, with 2 failgroups and 1 quorum failgroup).
    It has to do with how ASM handles a disk failure and does the mirroring (and the strange "issue" that you need a third failgroup for the votedisks).
    So before looking at this special case, let's look at how ASM normally treats a diskgroup:
    A diskgroup can only be mounted in normal mode if all disks of the diskgroup are online. If a disk is missing, ASM will not allow you to mount the diskgroup "normally" before the error situation is resolved. If a disk whose contents can be mirrored to other disks is lost, then ASM will be able to restore full redundancy and will allow you to mount the diskgroup. If that is not the case, ASM expects the user to tell it what to do => the administrator can issue an "alter diskgroup ... mount force" to tell ASM that, even though it cannot maintain the required redundancy, it should mount with disks missing. This then allows the administrator to correct the error (or replace failed disks/failgroups). While ASM has the diskgroup mounted, the loss of a failgroup will not result in a dismount of the diskgroup.
    The same holds true for the diskgroup containing the voting disks. So what you see (the cluster continues to run but cannot restart) is pretty much the same as for a normal diskgroup: if a disk is lost and its contents do not get relocated (for example, if the quorum failgroup fails there is nowhere to relocate the third vote to, since there are no more failgroups), the cluster will continue to run, but it will not be able to automatically remount the diskgroup in normal mode after a restart.
    To bring the cluster back online, manual intervention is required. Start the cluster in exclusive mode:
    crsctl start crs -excl
    Then connect to ASM and do a
    alter diskgroup <dgname> mount force
    Then resolve the error (for example, add another disk to another failgroup so the data can be remirrored and the failed disk can be dropped).
    After that, a normal startup will be possible again.
    Regards
    Sebastian
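    Pulled together, the recovery sequence described above looks roughly like this sketch (the diskgroup name OCRVOTE and the replacement disk path are assumptions, not details from the thread):
    crsctl start crs -excl                       # as root: start the stack in exclusive mode
    sqlplus / as sysasm
    SQL> alter diskgroup OCRVOTE mount force;
    SQL> -- add a replacement disk/failgroup so redundancy can be restored, e.g.:
    SQL> -- alter diskgroup OCRVOTE add quorum failgroup fg3 disk '/nfs/vote/vote3';
    SQL> exit
    crsctl stop crs -f                           # stop the exclusive-mode stack
    crsctl start crs                             # a normal restart should now succeed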

  • RAC Voting disk recommended size?

    Hi RAC gurus
    What is the recommended voting disk size?
    I need to choose an appropriate size so that when I back it up using the dd command, it doesn't take very long.
    Am I missing something? Can I create a very big one, say 10G, and then back up only the used portion?
    Thanks for help

    You asked if you are missing something and my initial thought is that what you missed was reading the docs.
    First of all the recommended size is clearly stated in the docs ... so I am not going to hand it to you and force you to read them. I hope everyone else does so too.
    Second of all, and most importantly, if you have any concept of what the Voting Disk is ... why on earth would you think that making a large one, or a small one, has any relevance to its use? Are you planning to build a cluster larger than 128 nodes?
    PS: What is 10g ... is this 10.1.0.2 or 10.1.0.3 or 10.1.0.4 or 10.2.0.1 or ....
    PPS: Given that you didn't read the docs ... it would be interesting to know, too, your operating system, your cache fusion interconnect solution, your shared storage solution, and how you are planning to define your VIPs. But I will leave that for the future as no doubt you will stub your toe on each if you don't get to those docs and read them with great care.
    My apology for being a bit harsh here but your chances of success, in RAC, are precisely zero if you do not understand these things. And I am really tired of people blaming Oracle and RAC for what they did to themselves.

  • Backing up Voting disk in 11.2

    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using
    dd if=voting_disk_name of=backup_file_name bs=4k
    But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?

    spiral wrote:
    Grid Version : 11.2
    In 10.2 and 11.1, the voting disk can be backed up using dd if=voting_disk_name of=backup_file_name bs=4k. But in 11.2, the voting disk is in an ASM disk group. Is there a way to make sure that the voting disk is separately backed up?
    Hi,
    Backing up and restoring voting disks using DD is only supported in versions prior to Oracle Clusterware 11.2.
    With Oracle Clusterware 11.2 voting disks contain more information. Hence, one must not operate on voting disks using DD. This holds true regardless of whether or not the voting disks are stored in ASM, on a cluster file system, or on raw / block devices (which could still be the case after an upgrade).
    Since DD must not be used to backup the voting disks with 11.2 anymore, the voting disks are backed up automatically into OCR. Since the OCR is backed up every 4 hours automatically, the voting disks are backed up indirectly, too. Any change to the information stored in the voting disks - e.g. due to the addition or removal of nodes in the cluster - triggers a new voting disk backup in the OCR.
    Regards,
    Levi Pereira
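    A small companion sketch of what that means operationally on 11.2 (run as root from the Grid home; the diskgroup name +DATA is an assumption):
    ocrconfig -manualbackup        # force an on-demand OCR backup
    ocrconfig -showbackup          # list the automatic (every 4 hours) and manual backups
    crsctl replace votedisk +DATA  # if the voting files are ever lost, recreate them in a
                                   # diskgroup instead of restoring them with dd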
