2 out of 3 voting disks get corrupted

O/S - Oracle Linux
DB Version - Oracle 11.2.0.3
Three node RAC
There are 3 voting disks in a normal redundancy diskgroup, and 2 of them are corrupted.
I know the process for recovering the voting disks, but I am not able to stop the cluster services and start them in exclusive mode. I always get "Unable to communicate with CRS services". I rebooted all 3 nodes but the same issue persists.
Any advice will be appreciated.

Is ASM used for OCR and voting disk storage?
This is the recommendation and the default - creating a normal redundancy diskgroup with an additional quorum disk (i.e. 3 disks in total).
There is an automated backup of the OCR. No backup is needed for voting files. So it is not a disaster losing the cluster's OCR and/or voting disks.
Assuming ASM and a diskgroup is used.
Anything else on the diskgroup?
I recently had a similar problem. I used the diskgroup only for OCR and voting disks. Lost 2 disks. I shut down the RAC. Started CRS in exclusive mode on 1 node. Launched sqlplus and dropped the diskgroup and recreated it with 3 disks (and a quorum). Restored the OCR backup. Confirmed the voting disk config (no need to change it as the underlying diskgroup did not have a name change). Restarted that node. Forced a rebalance of the diskgroup. Then started the remaining cluster nodes (a rough command sketch follows below).
There are Oracle Support notes that describe these steps in detail. It should be relatively easy to find the note for fixing an 11gR2 RAC's OCR/voting ASM diskgroup.
If you have database data on it too, you have problems. If the remaining working disk is the quorum disk, then both mirror disks are toast. If not, you can try to force the disks online again, assuming the disks are okay and the broken path was fixed. Otherwise add 2 new disks to replace the 2 broken ones and rebalance. Never done this myself with a quorum disk, though.
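
For reference, a rough sketch of that recovery sequence on 11.2, assuming the diskgroup held only OCR and voting files. The diskgroup name (CRS), disk paths and OCR backup path below are placeholders, and the exact procedure should be verified against the relevant Oracle Support note before running anything:

# as root: stop the clusterware on ALL nodes (ignore errors if it is already down)
crsctl stop crs -f

# on ONE node only, start the stack in exclusive mode without CRSD (11.2.0.2 and later)
crsctl start crs -excl -nocrs

# as the grid owner, recreate the diskgroup that held OCR and voting files (placeholder name/paths)
sqlplus / as sysasm
CREATE DISKGROUP CRS NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-crs1'
  FAILGROUP fg2 DISK '/dev/asm-crs2'
  QUORUM FAILGROUP fg3 DISK '/dev/asm-crs3'
  ATTRIBUTE 'compatible.asm' = '11.2';

# back as root: restore the newest automatic OCR backup and move the voting files back
# (the backup path is a placeholder; use the one listed by -showbackup)
ocrconfig -showbackup
ocrconfig -restore /u01/app/11.2.0/grid/cdata/<cluster_name>/backup00.ocr
crsctl replace votedisk +CRS

# restart this node normally, then start the remaining nodes
crsctl stop crs -f
crsctl start crs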

Similar Messages

  • Disk gets corrupted every time I connect to my MBA

    I connected my previous hard disk to my laptop, and it got corrupted. I purchased a new hard disk and copied everything from the old disk to the new one. After a few days, my new hard disk also got corrupted.
    A few days later, I connected my friend's hard disk. Now his hard disk is corrupted too. My guess is it is because of my MacBook Air.
    Any ideas how to fix it?
    Screenshots (not included here): after this, when I hit repair, it says "Copy everything from this hard-disk"

    Try here:
    iTunes for Windows Vista or Windows 7: Troubleshooting unexpected quits, freezes, or launch issues
    It contains a link for XP

  • Voting disk external redundancy

    Hey,
    question:
    I've got a database which I haven't installed myself.
    A query for the votedisk shows only one configured votedisk for my 2 node rac cluster.
    So I presume, that external redundancy was setup during installation of clusterware.
    A few weeks ago I hit an unpublished bug where Oracle reports that the voting disk is corrupted - which is a false message.
    But anyway, what happens when I hit this error in my new scenario (apart from applying the CRS bundle patch - which I can't apply...)?
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#),
    or would this give back an error, because only one voting disk is configurable?

    Hi Christian,
    How can I make sure that a new voting disk is added with external redundancy as well (crsctl add css votedisk /dev/raw/raw#), or would this give back an error, because only one voting disk is configurable?
    I will assume you are using version 11.1 or earlier.
    What determines whether your voting disk setup is external redundancy or normal redundancy is the number of voting disks your environment has.
    If you have only one voting disk, you have external redundancy (the customer guarantees the recovery of the voting disk in case of failure - at least Oracle thinks so).
    If you have more than 2 voting disks (you need an odd number), you are using normal redundancy.
    Then you can add more voting disks to your environment without worrying about the external redundancy previously configured.
    External redundancy or normal redundancy is determined only by the number of voting disks you have; there is no configuration setting to change external redundancy to normal redundancy. As I said, it is only the number of voting disks you have configured in your environment.
    Make sure you have a fresh backup of your voting disk.
    Warning: in UNIX/Linux environments the voting disk backup is performed manually with the dd command.
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Feb 7, 2011 12:09 PM
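
    For example, a minimal sketch of that manual backup on 10g/11.1, assuming a raw voting device at /dev/raw/raw1 (placeholder path) and run as root with the clusterware stopped; check the size and block size of your own device first:

    # back up the voting device to a file (paths are placeholders)
    dd if=/dev/raw/raw1 of=/backup/votedisk_raw1.bak bs=4k

    # to restore it later, reverse if and of
    dd if=/backup/votedisk_raw1.bak of=/dev/raw/raw1 bs=4k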

  • 10gR1 Voting Disk

    Hallo,
    is there a way to recreate a voting disk (not backed up) in 10gR1 without reinstalling CRS?

    In later releases it can be easily recreated!
    Not sure. AFAIK, in 11gR2 the voting disks get automatically backed up, therefore we don't have to manually back them up. But I believe we can't "easily recreate" the voting devices without backups in any release (someone will correct me if I am wrong though!). In releases prior to 11gR2, "dd" is used to back up voting devices, but in 11gR2 using dd is no longer supported.
    HTH
    Thanks
    Chandra

  • Just finished using iTunes, closed out and then tried to get back in.  Got this message "The iTunes library .itl file is locked, on a locked disk, or you do not have write permission for this file."  How can I get back into iTunes?

    I just finished using iTunes, closed out and then tried to get back in.  Got this message "The iTunes library .itl file is locked, on a locked disk, or you do not have write permission for this file."  How can I get back into iTunes?

    I actually figured it out... I had to go to the iTunes Library Extras.itdb file and give myself permission to have full control.  THEN, I could go and restore a previous version.  Once I had done this, I got the same message for iTunes Library Genius.itdb... I did the same thing with it and voilà!
    Hope this helps...
    SVT

  • I am having an issue in PS CS6 (on a Mac) where in the middle of working on a file, it gets corrupted by filling in a grid of squares in my layer masks as well as deleting out those same sections of misc layers throughout the file. I have reset the prefer

    I am having an issue in PS CS6 (on a Mac) where in the middle of working on a file, it gets corrupted by filling in a grid of squares in my layer masks as well as deleting out those same sections of misc layers throughout the file. I have reset the preferences, cleaned up the file. Renamed it to another file and 2 weeks later it is doing the same thing. Luckily I had my main subjects as smart objects and it saved them. This second time I just closed it out and opened it again and it is fine. Anybody else have an issue with this? Also it runs really slow these days on my MacBook Pro Retina 2.6 Core i7 with 16gb Ram.

    Hello APVzW, we absolutely want the best path to resolution. My apologies for multiple attempts of replacing the device. We'd like to verify the order information and see if we can locate the tracking number. Please send a direct message with the order number so we can dive deeper. Here are the steps to send a direct message: http://vz.to/1b8XnPy We look forward to hearing from you soon.
    WiltonA_VZW
    VZW Support
    Follow us on twitter @VZWSupport

  • After Patch  10.2.0.4, got "voting disk  corrupted" error.

    Dear all,
    My setting is:
    OS:RHEL4.8 U8 x86
    DB:10.2.0.1
    CRS:10.2.0.1
    After I installed Patchset 10.2.0.4 and executed $CRS_HOME/install/root102.sh, I couldn't start the clusterware anymore. After checking ocssd.log, I found messages like this:
    [    CSSD]2010-01-07 15:53:06.519 [3086919360] >TRACE: clssscmain: local-only set to false
    [    CSSD]2010-01-07 15:53:06.562 [3086919360] >TRACE: clssnmReadNodeInfo: added node 1 (x101) to cluster
    [    CSSD]2010-01-07 15:53:06.578 [3086919360] >TRACE: clssnmReadNodeInfo: added node 2 (x102) to cluster
    [    CSSD]2010-01-07 15:53:06.580 [3086919360] >TRACE: clssnmInitNMInfo: Initialized with unique 1262850786
    [    CSSD]2010-01-07 15:53:06.586 [3086919360] >TRACE: clssNMInitialize: Initializing with OCR id (33483723)
    [    CSSD]2010-01-07 15:53:06.650 [84294560] >TRACE: clssnm_skgxninit: Compatible vendor clusterware not in use
    [    CSSD]2010-01-07 15:53:06.650 [84294560] >TRACE: clssnm_skgxnmon: skgxn init failed
    [    CSSD]2010-01-07 15:53:06.657 [3086919360] >TRACE: clssnmNMInitialize: misscount set to (60), impending reconfig threshold set to (56000)
    [    CSSD]2010-01-07 15:53:06.659 [3086919360] >TRACE: clssnmNMInitialize: Network heartbeat thresholds are: impending reconfig 30000 ms, reconfig start (misscount) 60000 ms
    [    CSSD]2010-01-07 15:53:06.669 [3086919360] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:06.669 [84294560] >TRACE: clssnmvDPT: spawned for disk 0 (/ocfs/voting)
    [    CSSD]2010-01-07 15:53:08.755 [84294560] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:08.821 [113339296] >TRACE: clssnmvKillBlockThread: spawned for disk 0 (/ocfs/voting) initial sleep interval (1000)ms
    [    CSSD]2010-01-07 15:53:08.889 [3086919360] >TRACE: clssnmFatalInit: fatal mode enabled
    [    CSSD]2010-01-07 15:53:08.999 [94784416] >TRACE: clssnmClusterListener: Spawned
    [    CSSD]2010-01-07 15:53:09.002 [94784416] >TRACE: clssnmClusterListener: Listening on (ADDRESS=(PROTOCOL=tcp)(HOST=x102-priv)(PORT=49895))
    [    CSSD]2010-01-07 15:53:09.002 [94784416] >TRACE: clssnmconnect: connecting to node(1), con(0x81bc480), flags 0x0003
    [    CSSD]2010-01-07 15:53:09.069 [3083459488] >TRACE: clssgmclientlsnr: Spawned
    [    CSSD]2010-01-07 15:53:09.072 [113339296] >ERROR: clssnmvDiskKillCheck: voting disk corrupted (0x00000000,0x00000000) (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [113339296] >TRACE: clssnmDiskStateChange: state from 4 to 3 disk (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >TRACE: clssnmDiskPMT: disk offline (0//ocfs/voting)
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: clssnmDiskPMT: Aborting, 1 of 1 voting disks unavailable
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: ###################################
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: clssscExit: CSSD aborting
    [    CSSD]2010-01-07 15:53:09.072 [130767776] >ERROR: ###################################
    [    CSSD]--- DUMP GROCK STATE DB ---
    I tried to recreate my voting disk like this:
    $CRS_HOME/bin/crsctl add css votedisk /ocfs/voting1 -force
    $CRS_HOME/bin/crsctl delete css votedisk /ocfs/voting -force
    $CRS_HOME/bin/crsctl add css votedisk /ocfs/voting -force
    $CRS_HOME/bin/crsctl delete css votedisk /ocfs/voting1 -force
    Then I ran root102.sh again but still got the same error message about "voting disk corrupted".
    I was totally confused and didn't know how to fix it.
    Please, any suggestion would be appreciated.

    Hello,
    Did you try restoring the OCR?
    http://download-west.oracle.com/docs/cd/B19306_01/rac.102/b14197/votocr.htm#i1012456
    Were the steps to apply the RAC patch followed correctly?
    See a patchset roadmap at http://mufalani.blogspot.com/2009/06/applying-10204-patch-on-rac-systems.html
    Best Regards,
    Rodrigo Mufalani
    http://www.mrdba.com.br/mufalani
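
    As a hedged outline of what that recovery could look like on 10.2 (the /ocfs/voting path is the one from the post, the backup paths are placeholders, and the steps should be checked against the OCR/voting note linked above before running anything):

    # run as root, with CRS stopped on ALL nodes first
    crsctl stop crs

    # if a dd backup of the voting file exists, put it back (placeholder backup path)
    dd if=/backup/ocfs_voting.bak of=/ocfs/voting

    # otherwise, with the whole cluster down, drop and re-add the voting file;
    # the -force options only behave properly while CRS is down everywhere
    crsctl delete css votedisk /ocfs/voting -force
    crsctl add css votedisk /ocfs/voting -force

    # if the OCR also needs attention, list and restore an automatic backup
    ocrconfig -showbackup
    ocrconfig -restore <backup_file_reported_by_showbackup>

    # then bring CRS back up on each node
    crsctl start crs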

  • Dock gets corrupted on Intel machines/10.4 after restoring from disk image

    Hi all,
    I run a script at home that creates an image of my boot disk every night (it reboots into another volume, runs SuperDuper! and boots back into the boot volume). I also use master disk images to deploy Macs at work. Every time an image is restored the dock becomes a mess; it looks something like this:
    http://a-world.net/files/dockmess1.jpg
    ...random icons pointing to random files. For example, this is what used to be a Firefox icon, now:
    http://a-world.net/files/dockmess2.jpg
    This only happens on Intel Macs and started to happen recently after one the system updates. It wasn't happening under 10.4.6 I think but it started some time after.
    What's even weirder is that restoring com.apple.dock.plist from backup (Retrospect) doesn't seem to help; the prefs get corrupted anyway. Is there any other prefs file responsible for maintaining links between items in the Dock and their originals? By the way, desktop shortcuts do not get messed up, only the dock.
    Any ideas? This is very annoying. Thanks.

    This behavior is happening to quite a few of our computers. What they all have in common is that they are Intel Macs that have been upgraded to 10.4.10. I have verified on our test boxes that the problem does not occur with 10.4.9, then appears in 10.4.10. Don't know why.

  • Something gets corrupted then email won't send out of Apple mail

    I believe something gets corrupted in Mail and then it won't allow email to be sent.  I had first noticed this problem several months ago on my sister-in-law's MacBook.  I had created a new mail file and noticed I had this problem with sending mail.  So I deleted that one and created a new mail file.  Same problem, so I started over again.  This time it seemed to work.  Well, that was until around April.  She said she was getting the message that mail could not be sent out of Mail several months ago.  Checking her Outbox, she has messages that go back to April that never sent.
    I don't know what keeps happening to Mail and why, after months of functioning correctly, it will suddenly just stop working, that is, mail will no longer send.  She has no problems receiving it.  Only sending.  Her ISP is Comcast so I wouldn't think it would be their problem, but I don't know what else it could be.
    Looking for ideas!
    Thanks,
    Jeff

    Here's the thing. I have all the correct settings from Comcast and every time I've set up Mail on her MacBook I've entered them correctly.  Like I said, it had been working, up until April.  Oh, when I say I created a new mail file I mean that I created a new account rather than changed the settings in the existing account.  That will create a whole new mail file.
    The message she gets is:
    The server “smtp.comcast.net” cannot be contacted on the default ports.
    Select a different outgoing mail server from the list below or click Try Later to leave the message in your Outbox until it can be sent.
    I believe that is her SMTP server.  I don't have her laptop in front of me, since she lives 300+ miles away.  I don't understand how something that works for months at a time would suddenly just stop working altogether.  She had been able to send mail from last Christmas until April.  That's why I assumed something on her system had gotten corrupted.
    Thanks,
    Jeff

  • Why do disk permissions keep getting corrupted?

    The disk permissions on my MacBook Pro keep getting corrupted and I have to run Disk Utility to repair them.  What is likely causing this issue?

    If the same permissions appear to need repair again, they could be among those that can safely be ignored:
    Mac OS X: Disk Utility's Repair Disk Permissions messages that you can safely ignore

  • Odd number of voting disks

    Can anyone tell me the reason for the odd number of voting disks?

    See Metalink Note 220970.1, where you can read:
    What happens if I lose my voting disk(s)?
    If you lose 1/2 or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster. It doesn't threaten database corruption. For this reason we recommend that customers use 3 or more voting disks in 10g Release 2 (always an odd number).
    In short, a node must be able to access more than half of the voting disks to stay in the cluster, and an even count adds no extra fault tolerance over the next lower odd count (with 4 disks you can still only lose 1, just as with 3), which is why an odd number is recommended.

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you:
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listeners, their disks and so on (which are started only on one node at a time,
    can fail over from that node and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially in case of multipath) and find fake CRS resources, etc.,
    whenever you dismount disks from one node and mount them on another;
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should take over the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances);
    But it didn't work, since ASM has the parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups,
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and other diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our CRS handmade scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also we found inside the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redolog files are also in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore a full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc)
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features: in case of planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters and we cannot change them all.
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a shared file system, will take longer); a rough command outline is sketched below.
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but it is discouraged (and also, it doesn't work with OUI nor ASMCA).
    Anyway that's the simplest approach; this is a non-Prod cluster, so we'll start here and, if everything is fine, after a while we'll do it also on the Prod ones.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
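
    For what it's worth, here is a rough outline of such a move on 11.2; the target paths and the +CRS_DATA diskgroup name are placeholders, and whether block devices (as opposed to a cluster file system) are still an acceptable new location for voting files on your exact version should be checked against the notes listed above:

    # as root, with the clusterware stack up:
    # add a new OCR location outside ASM, then drop the old one (placeholder paths)
    ocrconfig -add /cluster_fs/ocr/ocr_file
    ocrconfig -delete +CRS_DATA

    # move all voting files to the new location in one step, then verify
    crsctl replace votedisk /cluster_fs/vote/vote_file
    crsctl query css votedisk

    # changing CLUSTER_DATABASE on the ASM instances, as described above, is a separate step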

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays and I created three LUNs in each, so 6 LUNs in total are visible on both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks created for OCR_VOTE at the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason why I am asking is that I didn't find any place to create an ASM disk group for OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups: one containing the three disks from storage 1 and the other containing the remaining three disks from the second storage.
    I hope you understand what I mean and can help me choose the proper way.
    Thank you.

    Hi,
    You have 2 storage arrays.
    You will need to configure a quorum ASM disk to store a voting disk,
    because if you lose 1/2 or more of all of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage will hold more voting disks than the other
    (e.g. with 3 voting disks, 2 are stored on stg-A and 1 on stg-B).
    If the storage that holds the majority of the voting disks fails, the whole cluster (all nodes) goes down, no matter whether the other storage is still online.
    This is what will happen:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as an extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
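
    As a minimal sketch of that layout, assuming the OCR_VOTE diskgroup was created at installation with the disks from both storage arrays and that an NFS file serves as the quorum disk as in the white paper above (names and paths are placeholders):

    # as the grid owner, from one node
    sqlplus / as sysasm
    ALTER DISKGROUP OCR_VOTE ADD QUORUM FAILGROUP nfs_quorum DISK '/voting_nfs/vote_ocrvote_file';

    # voting file placement inside the diskgroup is handled automatically; verify with
    crsctl query css votedisk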

  • Voting disk problem

    I had a problem with my voting disks:
    First CSS would not start. The CSS log said voting disks were missing:
    C:\oracle\product\10.2.0>crsctl query css votedisk
    0. 0 \\.\votedsk1
    1. 0
    2. 0
    located 3 votedisk(s).
    I stopped the cluster and could add my voting disks:
    Result:
    C:\oracle\product\10.2.0>crsctl query css votedisk
    0. 0 \\.\votedsk1
    1. 0
    2. 0
    3. 0 \\.\votedsk3
    4. 0 \\.\votedsk2
    located 5 votedisk(s).
    How can I get rid of the "empty" values?

    I ran into the same thing today and figured out how to get rid of them, at least on a Win 2003 SP2 R2 cluster. Mine were showing 0s for the path as well, so after a lot of trial and error I finally just tried:
    crsctl delete css votedisk 0
    And it worked... What happens is, when you have to use -force to create extra votedisks, the cluster has to be stopped on all nodes, or this will happen. We were doing a totally fresh install so we played with it a little to determine that this is what happened. When we stopped everything before adding votedisks using -force and cleaning up the bad ones, everything went fine (roughly the sequence sketched below).
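
    In other words, a hedged outline of the sequence that worked, reusing the device names from the original post (run from an elevated prompt on each node; the -force options are only safe while CRS is down everywhere):

    rem stop CRS on every node first
    crsctl stop crs

    rem with the whole cluster down, remove the bogus entries (repeat per bad entry) and re-add the real devices
    crsctl delete css votedisk 0
    crsctl add css votedisk \\.\votedsk2 -force
    crsctl add css votedisk \\.\votedsk3 -force

    rem verify the list, then restart CRS on each node
    crsctl query css votedisk
    crsctl start crs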

  • Recovery scenario - Voting disk  does not match with the cluster guid

    Hi all,
    Imagine that you cannot start your guest VMs because the root image (system.img) is corrupted. And assume the setup contains 5 physical disks (all created by the RAC template), hence ASM on them.
    What is the simplest recovery scenario for the guest VMs (RAC)?
    Would the following be a feasible scenario for recovering availability? (Assume both of the RAC system images are corrupted and we would rather not do a system-level recovery via backup / restore.)
    1. Create 2 RAC instances using the same networking and hostname details as the ones that are corrupted. - Use 5 different new disks.
    2. Shut down the newly created instances. Drop the disks from the newly created instances using VM Manager.
    3. Add the old disks (whose system image cannot be recovered but whose ASM disks are still intact) to the newly created instances using VM Manager.
    4. Open the newly created instances
    Can we expect that ASM and CRS would initialize and open without a problem?
    When I try this scenario I get the following errors from cssd/crsd:
    - Cluster guid 9112ddc0824fefd5ff2b7f9f7be8f048 found in voting disk does not match with the cluster guid a3eec66a2854ff0bffe784260856f92a obtained from the GPnP profile.
    - Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity.
    What could be the simplest way to recover a virtual machine that has healthy ASM disks but a corrupted system image?
    Thank you

    Hi,
    you have a similar problem when trying to clone databases with 11.2.
    The problem is that a cluster is uniquely identified, and this information is held in the OCR and the voting disks. So exactly these 2 are not to be cloned.
    To achieve what you want, simply set up your system in such a way that you have a separate diskgroup for OCR and voting (and the ASM spfile), which is not restored in this kind of scenario.
    Only the database files in ASM will then be exchanged later.
    Then what you want can be achieved.
    However I am not sure that the RAC templates have the option to install OCR and voting into a separate diskgroup.
    Regards
    Sebastian
