Storage level redundancy - DR solution

Database: Oracle 10g Enterprise Edition Release 2 with Partitioning option, 64-bit
Type: OLTP (high transaction volume - 3000 SQL statements per second)
For database high availability, Oracle provides Data Guard (physical/logical standby).
Please validate the following scenario for implementing the DR solution using storage-level replication for this OLTP database (3000 SQL statements per second):
1. Two database servers - primary and standby
2. Two Sun 6140 storage arrays - primary and standby
3. Each database server connected to its own storage - primary database server to primary storage, standby database server to standby storage
4. Primary database is up and running; standby database is down.
5. Standby storage is synchronized from primary storage either online or with a 15-minute delay
6. In case of failure of the primary server, the standby database is started manually
Is there any Oracle-certified option for storage-level replication on the Sun platform? Online transactions shouldn't be affected during synchronization.
Please reply
Thanks
Yogendra

One big advantage of Data Guard over storage replication is that you transfer far less data with Data Guard (only the redo change vectors) than with storage replication, which ships every write that hits the storage.
This can be very helpful if the connection between your primary and standby does not have a large amount of bandwidth.
Your Standby is in another datacenter, right?
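As a rough illustration of the difference (a back-of-envelope sketch only; the per-statement redo and block figures below are assumptions, not measurements from your system):

    # Back-of-envelope comparison: redo shipping (Data Guard) vs. shipping the
    # dirtied blocks (storage-level replication). Per-statement figures are
    # assumptions for illustration only.
    SQL_PER_SECOND = 3000            # workload stated in the original post
    REDO_BYTES_PER_SQL = 500         # assumed average redo per statement
    BLOCKS_DIRTIED_PER_SQL = 2       # assumed data/undo/index blocks touched
    DB_BLOCK_SIZE = 8192             # typical Oracle block size in bytes

    redo_mb_s = SQL_PER_SECOND * REDO_BYTES_PER_SQL / 1e6
    block_mb_s = SQL_PER_SECOND * BLOCKS_DIRTIED_PER_SQL * DB_BLOCK_SIZE / 1e6

    print(f"Redo shipping (Data Guard)    : {redo_mb_s:5.1f} MB/s")
    print(f"Block shipping (storage level): {block_mb_s:5.1f} MB/s")

Real storage replication coalesces repeated writes to the same block, so the gap is usually smaller than this worst case, but the direction of the comparison holds.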

Similar Messages

  • SAP file systems are being updated at storage level and in Trash as well

    Hi Friends,
    We are facing a strange but serious issue with our Linux system. We have multiple instances installed on it, but one instance's file systems are visible in Trash.
    The exact issue is this:
    1. We have DB2 installed on Linux. One of our instance's mount points is also visible in the Linux Trash: if I create a file at storage level, e.g. touch /db2/SID/log_dir/test, it is dynamically updated in the Trash and created there as well.
    2. This cannot be normal behaviour for any OS.
    3. If I delete any file from the Trash belonging to this particular SID (instance), the file is deleted from the actual location as well.
    I know this is not related to SAP configuration, but I want to do a root cause analysis. If any Linux expert can help with this issue, I am waiting for an early reply.
    Regards,

    Hi Nelis,
    I think you have misinterpreted this issue; let me explain in detail. We have the following mount points on storage, with SAP installed on them:
    /db2/BID
    /db2/BID/log_dir
    /db2/BID/log_dir2
    /db2/BID/log_archive
    /db2/BID/db2dump
    /db2/BID/saptemp1
    /db2/BID/sapdata1
    /db2/BID/sapdata2
    /db2/BID/sapdata3
    /db2/BID/sapdata4
    Now I can see the same mount points in the Linux Trash, and if I create a folder or file in any of the above mount points, it is dynamically updated in the Trash; if I delete something at storage/OS level, the same is deleted from the Trash, and vice versa.
    I have checked everything and no symlink exists anywhere, but I am not sure about the storage/OS level; that is what I want to find out (one way to check for bind mounts at the OS level is sketched below).
    Regards,
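    For reference, one way to check at the OS level whether the Trash entries are just another view of the same filesystem (e.g. a bind mount) rather than copies of the data - a hedged diagnostic sketch, nothing SAP- or DB2-specific:

        # Sketch: group mount points by backing device and root to spot bind
        # mounts. Two mount points sharing the same device+root are the same
        # data seen twice, which would explain the "updates appear in Trash"
        # behaviour described above.
        from collections import defaultdict

        mounts = defaultdict(list)
        with open("/proc/self/mountinfo") as f:
            for line in f:
                fields = line.split()
                dev, root, mount_point = fields[2], fields[3], fields[4]
                mounts[(dev, root)].append(mount_point)

        for (dev, root), points in mounts.items():
            if len(points) > 1:
                print(f"device {dev} (root {root}) appears at: {points}")

    If two paths show up for the same device and root, the "Trash" entry is the same data, not a duplicate, which would explain why deleting it removes the original.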

  • How does GSS provide ISP-level redundancy?

    Hi,
    We are in the process of deploying GSS at our data center. Here at the data center we have our web server, which has one public address from ISP1's IP pool (say a.b.c.d). We plan to have ISP2 also provide an IP address for accessing our website, as failover protection. If the ISP1 link to the data center fails, ISP2 should route the traffic for our web server. But when the ISP1 link is down, the address a.b.c.d is not reachable, and obviously ISP2 does not route a.b.c.d over their link. Can GSS solve this problem? I think we need some configuration at the authoritative DNS level so that if the first IP (a.b.c.d, backed by a second IP from ISP2) is not reachable, users can still reach our website via the failover IP provided by ISP2. I may be wrong on that last point, of course - I am no DNS expert!
    Please share your experience with ISP-level redundancy. I appreciate it.
    Also, any link on cisco.com or any other URL would be highly appreciated.
    Subodh

    If you want to limit the number of IPs consumed in the zone file, then your design will have to include anycast, or the GSS will need to be behind a VIP.
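    To make the failover idea concrete, here is a minimal, hedged sketch of what a GSS-style authoritative answer does: health-check the candidate addresses and hand out the first one that responds (the addresses below are documentation placeholders for the ISP1 and ISP2 IPs):

        # Minimal sketch of health-check based answering (what a GSS does).
        # 192.0.2.10 / 198.51.100.10 are placeholder addresses, not real ones.
        import socket

        CANDIDATES = ["192.0.2.10",     # web server address via ISP1 (a.b.c.d)
                      "198.51.100.10"]  # failover address via ISP2

        def pick_answer(addresses, port=80, timeout=2.0):
            for addr in addresses:
                try:
                    with socket.create_connection((addr, port), timeout=timeout):
                        return addr          # reachable - answer with this IP
                except OSError:
                    continue                 # link down - try the next ISP
            return None

        print("A record to hand out:", pick_answer(CANDIDATES))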

  • Prosumer video storage: sub $1000 firewire solutions

    This question has been asked before [1], but that thread went in a different direction.
    I want to edit and manage about 0.7 to 1.0 TB of family video on hard drives, perhaps using FCPX. My home backup and data management won't scale to this level.
    I'd like to hear what other prosumers do for $1000 or less (I assume professional setups are different.)
    I'm assuming something like this:
    3 1TB drives
    RAID 0 setup with two drives (OS X RAID?)
    Periodically take one drive offsite, the mirror rebuilds. (So only real backup is offsite backup, every year retire one drive and add another)
    Mac formatted drives (so presumably not SAN/Drobo, though low-end DROBO is in my price range)
    I assume over time, as drive capacity grows, I'll integrate my video storage with my other data use/backup. For a few years though they'll be different. For example: Video work would kill my Time Capsule Wifi backup.
    Any suggestions? What vendors do you like?
    [1] https://discussions.apple.com/thread/3664396?start=0&tstart=0

    Sorry, got my RAIDs switched! I do want redundancy.
    I use CCC as a Time Capsule complement for routine backup, so I could go that route rather than RAID.
    I would like a solution where I don't have to deal with multiple power cords and multiple data connectors.
    What do you think of the LaCie d2 or 2big solutions? They have a new Thunderbolt series out with Mac OS RAID (so presumably FCPX-compatible).
    http://www.lacie.com/us/products/product.htm?id=10573
    The LaCie 2Big 4TB solution is $650. With a 3rd drive to swap offsite I'd be in budget.
    The Pegasus R4 Thunderbolt solution is a bit higher end, but is sold through the Apple store:
    http://www.promise.com/storage/raid_series.aspx?m=192&region=en-global&rsn1=40&rsn3=47
    http://store.apple.com/us/product/H5184VC/A/Thunderbolt
    At $1150 it's over my budget, but it's something to think about ...

  • Solution and Service level Reporting in Solution Manager

    Hi All,
    I have a requirement; can anyone please provide a link/blog regarding Solution and Service Level Reporting?
    Thanks,
    vk

    Hi,
    I got confused, since previously you said it was an Enterprise Service Report:
    https://websmp207.sap-ag.de/~form/sapnet?_SCENARIO=01100035870000000202&_SHORTKEY=01100035870000742027&_OBJECT=011000358700000999072011E
    and the following SAP Notes:
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1442799
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1250529
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1391968
    Thanks
    Jansi

  • Regarding Cisco storage level certification

    Hello,
    I am planning to take the Cisco storage certification exams,
    so I would like to know at what level of exam I need to start, and what resources (books, documents) I should refer to when preparing for my exams.
    I appreciate your help.
    Thanks

    You should start by reading the information at www.cisco.com/go/certification
    Those pages explain recommended resources for the storage design and support exams as well as the ccie storage exams.
    www.ciscopress.com makes a few books that are about storage networks.
    I would recommend reading the MDS Cookbook as well as the MDS CLI Guide for sure. It's also important to be familiar with all Gen1/Gen2/Gen3 MDS products and capabilities.
    Brian

  • Storage for FCP - temporary solution

    Hi,
    I'm buying a new Mac Pro - 2x quad-core Xeon 2.26 (Nehalem). It will be used for editing in FCP. Right now I cannot afford external storage, so I would like to build an internal array using Apple's RAID card. Is that a good idea? If I use the first disk as the system disk, can I use the 3 remaining disks for RAID 5, or is it possible to install additional disks in the case?
    Kal800

    Hi kal800;
    Since you admit that this solution is temporary, I think spending $700 for the Apple RAID card is an expensive way to go. Instead I would suggest that you consider installing three drives with the same specs in the empty slots and configuring them as RAID 0. This should give very quick access to your data, which is what I assume you are after. The only thing is that any data on the RAID 0 array should be backed up for its protection. I think you can find all of that hardware for less than what the card is going to cost you.
    Allan
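    If it helps, the three-drive stripe described above can be built with the software RAID already built into OS X rather than the RAID card; a hedged sketch with placeholder disk identifiers - check "diskutil list" first, since creating the set erases the member disks:

        # Sketch: create a 3-disk RAID 0 (stripe) set with OS X software RAID.
        # disk1/disk2/disk3 are example identifiers only - verify with
        # `diskutil list` before doing anything destructive.
        import subprocess

        members = ["disk1", "disk2", "disk3"]
        cmd = ["diskutil", "appleRAID", "create", "stripe",
               "ScratchRAID",   # example volume name
               "JHFS+",         # journaled HFS+ format
               *members]
        print("Would run:", " ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment only after double-checking the disks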

  • iCloud and Photo Stream storage levels

    I have 50 GB of photos in my Photo Stream; why can't I upload them to iCloud? I thought Photo Stream doesn't count against the storage limit?

    Sorry, I accidentally clicked the "This solved my question" thing and I couldn't see how to remove it.
    You cannot remove the "Solved" mark, sorry.
    Post your question again, with a slightly different title and mention everything you already have done to solve the problem.
    When you select the Photo Stream in the "Web" section, what do you see? I see the message "You are signed in as [email protected]". Don't you see this message? You said that you are signed in to iCloud.
    Maybe these links can help:
    iCloud: Photo Stream FAQ
    iCloud: Photo Stream Troubleshooting

  • Auditing failed access to files and folders in Windows Storage Server 2008 R2

    Hello,
    I've been trying to figure out why I cannot audit the failed access to files and folders on my server.  I'm trying to replace a unix-based NAS with a Windows Storage Server 2008 R2 solution so I can use my current audit tools (the 'nix NAS
    has basically none).  I'm looking for a solution for a small remote office with 5-10 users and am looking at Windows Storage Server 2008 R2 (no props yet, but on a Buffalo appliance).  I specifically need to audit the failure of a user to access
    folders and files they are not supposed to view, but on this appliance it never shows.  I have:
    Enabled audit Object access for File system, File share and Detailed file share
    Set the security of the top-level share to everyone full control
    Used NTFS file permissions to set who can/cannot see particular folders
    On those folders (and letting those permissions flow down) I've set the auditing tab to "Fail - Everyone - Full Control - This folder, subfolders and files"
    In the audit log I only see "Audit Success" messages for items like "A network share object was checked to see whether client can be granted desired access" (Event 5145), but never a failure audit (even though this user was denied access by the NTFS permissions).
    I've done this successfully with Windows Server 2008 R2 x64 w/SP1 and am wondering if anybody has tried this with the Windows Storage Server version (with success, of course).  My customer wants an inexpensive "appliance" and I thought this new
    variant of 2008 was the ticket, but I can't use it if it won't provide this auditing.
    Any thoughts? Have any of you had luck with this?  I am using the WSS "Workgroup" flavor (due to the fact I bought this appliance out of my own pocket) and am wondering whether this feature has been stripped from the Workgroup edition of WSS.
    TIA,
    --Jeffrey

    Hi Jeffrey,
    The steps to set up auditing on a WSS system should be the same as on a standard version of Windows Server, so please redo the steps listed below to see if the issue still exists:
    Enabling file auditing is a 2-step process.
    [1] Configure "audit object access" in AD Group Policy or on the server's local GPO. This setting is located under Computer Configuration-->Windows Settings-->Security Settings-->Local Policies-->Audit Policies. Enable success/failure auditing
    for "Audit object access."
    [2] Configure an audit entry on the specific folder(s) that you wish to audit. Right-click on the folder-->Properties-->Advanced. From the Auditing tab, click Add, then enter the users/groups whom you wish to audit and what actions you wish to audit
    - auditing Full Control will create an audit entry every time anyone opens/changes/closes/deletes a file, or you can just audit for Delete operations.
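    If it is easier to check from a command line, here is a hedged sketch of the same two checks using built-in Windows tools (run elevated; the keyword mask below is the commonly used "Audit Failure" filter):

        # Sketch: verify the audit policy and list recent failure audits using
        # the built-in auditpol and wevtutil tools, driven from Python.
        import subprocess

        # Step [1] check: is failure auditing enabled for the File System subcategory?
        subprocess.run(["auditpol", "/get", "/subcategory:File System"], check=True)

        # Step [2] check: pull the 10 newest Audit Failure events from the Security
        # log (4503599627370496 = 0x10000000000000, the audit-failure keyword bit).
        query = "*[System[band(Keywords,4503599627370496)]]"
        subprocess.run(["wevtutil", "qe", "Security", "/q:" + query,
                        "/c:10", "/rd:true", "/f:text"], check=True)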
    A similar thread:
    http://social.technet.microsoft.com/Forums/en-US/winserverfiles/thread/da689e43-d51d-4005-bc48-26d3c387e859

  • Azure Site Recovery to Azure - cost for data transfer and storage

    Hello,
    I send you this message on behalf of a small firm in Greece interested to implement Azure Site Recovery to Azure.
    We have one VM (Windows 2008 R2 Small Business Server) with 2 VHDs (100GB VHD for OS and 550GB VHD for Data) on a Windows 2012 server Std Edition.
    I would like to ask you a few questions about the cost of the data transfer and the storage 
    First: About the initial replication of the VHDs to Azure - it will be 650 GB. Is it free as inbound traffic? If not, the Azure pricing calculator shows about €57. But there is also the import/export option, which costs about the same:
    https://azure.microsoft.com/en-us/pricing/details/storage-import-export/
    What would be the best solution for our case? Please advise.
    Second: What kind of storage is required for the VHDs of the VM (650 GB)? My guess is blob storage. For locally redundant storage, the cost would be about €12-13/month. Please verify.
    Third: Is the bandwidth for the replication of our VM to Azure free?
    That's all for now.
    Thank you in advance.
    Kind regards
    Harry Arsenidis 

    Hi Harry,
    1st question response: ASR doesn't support Storage Import/Export for seeding the initial replication storage. ASR pricing can be found
    here, which details that about 100 GB of Azure replication & storage per VM is included with the purchase of the ASR to Azure subscription SKU through the Microsoft Enterprise Agreement.
    Data transfer pricing
    here indicates that inbound data transfers are free.
    As of now, the only option will be online replication. What is the current network link type & bandwidth to Azure? Can you vote for the feature & update requirements here?
    2nd question response: A storage account with geo-redundancy is required. But, as mentioned earlier, with a Microsoft Enterprise Agreement you will get 100 GB of Azure replication & storage per VM included with ASR.
    3rd question response: Covered as part of the earlier queries.
    Regards, Anoob

  • Can I design a SOFS cluster using non-clustered storage?

    Hi.
    I've been trying to figure out if I can build a SOFS-cluster with 4 nodes and 4 JBOD cabinets per node for a total of 16 cabinets, like this:
    I haven't seen this design though so I'm not sure if it's even possible and if it is, what features do I lose (enclosure awareness, etc)?
    Thanks.

    Yeah, I was in a hurry when I posted my initial question and didn't explain my thought process clearly enough.
    Are you saying that you can't build a unified CSV namespace on top of multiple SOFS-clusters, despite MSFT proposing this exact design at multiple occasions?
    As for building one-node clusters; it's certainly possible, albeit a bit pointless I suppose unless you want to cheat a bit like I did. :)
    The reason I'm asking about this particular design is that the hardware vendor that the customer wants to use for their Storage Spaces design only supports cascading up to 4 JBOD-cabinets in one SAS-chain.
    As their cabinets support at most 48 TB per cabinet, and the customer wants roughly 220 TB of usable space in a mirror config, that gives us 10 cabinets. On top of this we want to use tiering to SSD, and with all those limitations taken into consideration
    we end up with 16 cabinets.
    This results in 8 server nodes (2 per 4 cabinets), which is quite a lot of servers for 220 TB of usable disk space and hard to justify compared to a traditional FC-based storage solution.
    Perhaps not the cost, pizza boxes are quite cheap, but the rack space for 8 1U servers and 16 2U cabinets is quite a lot.
    I'll put together a design based on these numbers and see what the cost is though, perhaps it's cheap enough for the customer to consider. :)
    Thanks for the feedback.
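    For what it's worth, the cabinet count above checks out with a quick calculation using the figures already in the post:

        # Quick check of the cabinet math from the post.
        usable_tb_wanted = 220   # customer requirement
        mirror_copies = 2        # two-way mirrored Storage Spaces
        tb_per_cabinet = 48      # vendor limit per JBOD cabinet

        raw_tb_needed = usable_tb_wanted * mirror_copies        # 440 TB raw
        cabinets = -(-raw_tb_needed // tb_per_cabinet)          # ceiling division -> 10
        print(cabinets, "cabinets before adding SSD tiering capacity")

    Adding the SSD tier and the 4-cabinets-per-SAS-chain limit is what then pushes the design to 16 cabinets and 8 nodes, as described above.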
    1) I'm saying we never managed to get a unified namespace from multiple SOFS clusters with no shared block storage between all of them; we did not find any references from MSFT on how to do this, and we did not find anyone who had done it either. If you search
    this particular forum you'll see this question asked many times but never answered (we asked as well). If you manage to do this and can share some information on how, I'd appreciate it, as we're still interested. See:
    SoFS Scaling
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/20e0e320-ee90-4edf-a6df-4f91b1ef8531/scaling-the-cluster-2012-r2
    SoFS Architecture
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/dc0213da-5ba1-4fad-a5e4-091e047f06b9/clustered-storage-spaces-architecture
    Answer from the guy who was presenting this "picture" to the public at TechEd:
    "In this specific example, I started by building up an at-scale Spaces deployment - comprising "units" of 2-4 servers attached to 4 SAS JBODs, for a total of 240 disks. As scaling beyond those 240 disks with the 2-4 existing servers would become impractical
    due to either port connectivity limitations of the JBOD units themselves, or PCI-E or HBA limitations of the servers, further scale is achieved by adding more units to the cluster.
    These additional units would likewise comprise servers and JBODs, but the underlying storage connectivity (shared SAS) exists only between servers and JBODs within individual units. This means that each unit would have its own storage pool,
    and its own collection of provisioned virtual disks. Resiliency of data and creation of virtual disks occurs within each unit.
    As there can be multiple units with no physical SAS connectivity between them, Ethernet connectivity between all the cluster nodes and cluster shared volumes (CSV) presents the means to unify the data access namespace between all the cluster nodes regardless
    of physical connectivity at the underlying storage level - making the logical storage architecture from a client and cluster point of view completely flat, regardless of how it's actually physically organized. Further, as you are using scale-out file server
    atop of CSV (with R2 and SMB 3.0) client connections to the file server cluster will automatically connect to the correct cluster nodes which are attached to the clients’ data.
    Data availability and resiliency occurs at the unit level, and these can be extended across units through a workload replication mechanism such as Hyper-V replica, or data replication mechanisms such as DFSR.
    I hope this helps clear up any confusion on the above, and let me know if you have any further questions!
    Bryan"
    2) Sure there's no point as single node cluster is not fault tolerant which compromises a bit whole idea of having a cluster :) Consensus!
    3) Idea is nice except I don't know how to implement it w/o third-party software *or* SAS switches limiting bandwidth, and increasing cost and complexity :(

  • Directly connect UCS C-series to External storage

    Hi,
    I have a customer that is looking for an entry level data centre solution.
    I have spec'd up 2 UCS C240 servers (with 10 Gb or 8 Gb FC) and I was wondering whether we can attach directly to external storage from the servers for now and buy a switch in the future.
    As replication of their database is an issue, they would like to use external storage. The customer is looking at an IBM V7000. In the following URL -
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.html - the V7000 is listed as compatible with the B-Series, but it is not listed under the C-Series. Has anyone used the V7000 with the C-Series? Is there a better option for an entry-level external storage array?
    Any pointers or help would be much appreciated.
    Cheers,
    Georgia

    Hello Georgia,
    Which HBA is installed on the server?
    The matrix lists supported arrays only for the Cisco CNA.
    Padma

  • Virtualization for SAP solutions

    Hi all,
    as I recently wrote a blog about 'Virtualization and enterprise SOA', and as we will have many virtualization-related activities in 2007, I think it's a good idea to start a thread in this forum for general discussion about virtualization and its use in SAP landscapes.
    As many of you know, SAP offers application virtualization capabilities in SAP NetWeaver with Adaptive Computing. This was introduced at SAPPHIRE 2004.
    Meanwhile, virtualization concepts, technologies and products cover a much broader part of an SAP landscape, starting at network virtualization and going up to desktop virtualization. I am sure storage virtualization with SAN or NAS is almost a standard IT scenario, but some of the newer virtualization technologies are still in their early stages.
    How are you using virtualization technologies today, and which concepts/products/... do you consider to be most important for your environment ?
    Best regards
    Roland

    Hi Kwee,
    my name is Nils and I am topic owner of the virtualization topic within SAP Consulting. You ask:
    1. How will SAP system maintenance and administration be affected now that we have tools in VM that can handle copying, backup and recovery?
    Generally speaking, SAP should react to this when it comes to cloning instances. Changing an SID is still not supported, so I would expect the SAP Virtualization Lab to provide a solution for this demand as soon as possible.
    In the VMware context, our Consulting team developed an approach that lets you run SAPINST in silent mode so that no user interaction is needed any more for installing application servers, central instances and so on. You create a master template for sapinst, and this template can then be modified via script to deploy your dialog instances in the background. Siemens implemented our solution, as did MAN Diesel in Copenhagen.
    If you have customers who deal with this topic, ask them to engage me for an implementation. Backup and recovery should still be performed at database or storage level (NetApp). If you ask the question more precisely, I will give you a more detailed answer.
    2. What about SAP high availability and disaster recovery scenarios? How useful is clustering now that we have tools in VM that can offer an alternative solution?
    Clustering still remains one of the operational keys. VMware can never fully replace a cluster, although the fancy VMware slides would like to make us think so. When it comes to disaster recovery, VMware HA is a very fancy solution, but not every customer has the infrastructure and the money to implement VMware HA. Also from a performance point of view, clustering remains a valid approach. Or did you ever try an HA setup for an 8 TB solution with VMware? A valid alternative to clustering can be Adaptive Computing, which gives you enhanced availability (not high availability); customers' downtime windows sometimes allow this, and it is more stable than most cluster solutions around. You should also keep your eyes on Windows Server 2008! I spoke to Microsoft this week, and they put live migration of virtual machines on their hot topic list again (after it had been removed). Beginning in 2009, Windows Server will be able to migrate virtual machines live and also provide HA functionality for virtual machines. VMware is still a little ahead of MS, but I saw the newest tools from MS, and that advantage is shrinking.
    3. How will hardware sizing be handled for SAP upgrades and rollout of new functions in a VM architecture?
    From a sizing perspective, calculate an overhead of 20% for your hardware to be on the safe side. Sizing is a task to be done by the customer together with the hardware vendor; the customer needs to tell the vendor that they want to operate SAP in a virtualized environment. The hard limit for this is the I/O, depending on your approach to hosting the virtual systems (VMFS-based with central FC storage, or container-based). The hard limit is 6000 SAPS at the moment; for larger implementations a lot of money has to be spent, and technically more than 18,000 SAPS are possible (but do not make sense).
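    As a hedged illustration of that 20% rule of thumb (the workload figure below is made up for the example):

        # Illustration of the 20% virtualization headroom rule of thumb.
        workload_saps = 4000                      # made-up example workload
        overhead = 0.20                           # headroom suggested above
        required = workload_saps * (1 + overhead)
        print(f"Size the virtualized host for at least {required:.0f} SAPS")
        # Per the post, a single virtualized SAP system should stay below ~6000 SAPS.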
    4. How can performance be optimized in a VM environment?
    SAP-Performance? IO-Performance? There are some guidelines when it comes to the setup of virtual machines with SAP. You can find them here:
    http://www.vmware.com/partners/alliances/technology/sap-whitepapers.html
    Best Practice Guidelines for SAP Solutions on VMware® Infrastructure
    Is there an SAP training that specifically deals with such topics?
    No, but if you can take care of an IO, I could come over and teach you and your colleagues about those topics. I developed a one-day workshop that I present to customers covering all of the topics mentioned here.
    Best regards,
    Nils, Technical Solution Architect, SAP Consulting

  • How many disks for redundancy on a Grid Infrastructure 11gR2 installation?

    I would like to know: what is the minimum number of disks for normal and high redundancy?
    And please explain the structure of normal redundancy and high redundancy.
    Edited by: user13049841 on Mar 15, 2011 2:37 AM

    Redundancy Level      Minimum Number of Disks
    External              1
    Normal                2
    High                  3
    If you only have one disk, it means you will not be able to implement normal/high redundancy. With external redundancy, ASM presumes that data duplication is taken care of at the storage level (e.g. a mirrored SAN).
    As for the structure of normal and high redundancy: redundancy is essentially data duplication, so if a disk in a disk group fails, ASM continues operating from the redundant copies held in another failure group.
    A must-read note for you:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28301/asm002.htm#ADMQS12079
    Extractions from the link..
    Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if 2-way mirroring is specified for a file, ASM automatically stores redundant copies of the file's extents in separate failure groups.
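    A small, hedged sketch tying the minimum-disk table and the failure-group idea together (capacities are illustrative and ignore ASM metadata overhead):

        # Sketch: minimum disks and rough usable capacity per ASM redundancy level.
        MIN_DISKS = {"EXTERNAL": 1, "NORMAL": 2, "HIGH": 3}
        COPIES    = {"EXTERNAL": 1, "NORMAL": 2, "HIGH": 3}   # extent copies kept

        def usable_gb(redundancy, disk_sizes_gb):
            """Very rough usable space: raw capacity divided by the number of copies."""
            if len(disk_sizes_gb) < MIN_DISKS[redundancy]:
                raise ValueError(f"{redundancy} needs at least {MIN_DISKS[redundancy]} disks")
            return sum(disk_sizes_gb) / COPIES[redundancy]

        # Example: four 100 GB disks with normal redundancy (2-way mirrored extents)
        print(usable_gb("NORMAL", [100, 100, 100, 100]), "GB usable")   # ~200 GB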

  • Questions on redundancy for RAC

    Grid Infra version: 11.2.0.4
    Platform: Oracle Linux
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG13711
    Question1.
    We use enterprise-class storage from Hitachi Data Systems (VSP) for our disk group LUNs. It provides frame (hardware) level RAID protection. The Hitachi SAN admins confirmed that the LUNs they allocate are created from disks in different failure groups. In this case, do we need to use High or Normal redundancy?
    Question2.
    We are going to build a 3-node RAC cluster for a very critical application. For the +OCR_VOTE disk group we are going to use LUNs from the Hitachi VSP; the SAN admins are going to allocate 1 GB LUNs for this. In addition to the hardware-level protection provided by Hitachi, we would like to have ASM-level redundancy as well, and we prefer High redundancy for the +OCR_VOTE disk group. So how many 1 GB LUNs will be required, and how is this calculated? The Oracle documentation above just says "three-way mirroring" for High redundancy; does this mean that to protect the data in one 1 GB LUN we need another three 1 GB LUNs (four 1 GB LUNs in total) for High redundancy?

    Thanks, Billy.
    It looks like the +OCR_VOTE disk group is a special case, in which
    the number of disks for Normal redundancy is 3
    the number of disks for High redundancy is 5
    Using three 1 GB LUNs, I tried to create the +OCR_VOTE disk group with High redundancy, but it errored out saying
    INS-30510 Insufficient number of ASM disks selected
    Action:    For a disk group with redundancy level 'High', at least '5' disks are recommended.
    So I had to allocate five 1 GB LUNs in total to create a High redundancy +OCR_VOTE disk group.
    Confirmed from ML Note ID 428681.1:
    For voting disks (never use an even number of voting disks):
    External redundancy requires a minimum of 1 voting disk (or 1 failure group)
    Normal redundancy requires a minimum of 3 voting disks (or 3 failure groups)
    High redundancy requires a minimum of 5 voting disks (or 5 failure groups)
    Any idea why we need 5 disks instead of 3 for High redundancy when it comes to the OCR_VOTE disk group?
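    Regarding that last question, the usual explanation is that the cluster must always see a strict majority of the voting disks online, and High redundancy is expected to survive two simultaneous failures; a small worked example:

        # Worked example: a node must see a strict majority of voting disks.
        def cluster_survives(total_disks, failed_disks):
            remaining = total_disks - failed_disks
            return remaining > total_disks // 2     # strict majority still visible?

        for total in (3, 5):
            for failed in (1, 2):
                outcome = "keeps running" if cluster_survives(total, failed) else "node eviction"
                print(f"{total} voting disks, {failed} failed -> {outcome}")
        # 3 disks tolerate only 1 failure; surviving the 2 failures that High
        # redundancy promises requires 5 voting disks.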
