Failover Cluster Hyper-V Storage Choice

I am trying to deploy a 2-node Hyper-V failover cluster in a closed environment.  My current setup is 2 servers as hypervisors and 1 server as AD DC + storage server.  All 3 are running Windows Server 2012 R2.
Since everything runs over Ethernet, my choice of storage is between iSCSI and SMB 3.0.
I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster.  However, I only have budget for 1 storage server.  Is it a good idea to choose SMB over iSCSI
in this scenario?  ( where there is only 1 storage server for the Hyper-V cluster ).
What do I need to pay attention to in this setup, apart from the unavoidable single points of failure?
In the SMB 3.0 file server cluster scenario that I mentioned above, they had to use SAS drives for the file server cluster (CSV).  I am guessing that in my scenario, SATA drives should be fine, right?
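If you go the SMB 3.0 route with a single file server, the main thing to get right is permissions: the Hyper-V hosts' computer accounts (and the cluster account) need Full Control on both the share and the NTFS ACL. A minimal sketch, assuming hypothetical host names HV1/HV2 and a cluster HVC1 in a CONTOSO domain:

```powershell
# On the storage server: create the folder and share for the VM files.
New-Item -Path D:\VMStore -ItemType Directory
New-SmbShare -Name VMS1 -Path D:\VMStore `
    -FullAccess "CONTOSO\HV1$", "CONTOSO\HV2$", "CONTOSO\HVC1$", "CONTOSO\Domain Admins"

# Mirror the share permissions onto the NTFS ACL of the folder.
Set-SmbPathAcl -ShareName VMS1
```

As for the drives: the SAS requirement applies to clustered file servers built on shared Storage Spaces JBODs; a single stand-alone file server can use SATA.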

"I suspect that Starwind solution achieves FT by running shadow copies of VMs on the partner Hypervisor"
No, it does not run shadow VMs on the partner hypervisor.  Starwind is a product in a family known as 'software defined storage'.  There are a number of solutions on the market.  They all provide a similar service in that they allow for the
use of local storage, also known as Direct Attached Storage (DAS), instead of external, shared storage for clustering.  Each of these products provides some method to mirror or 'RAID' the storage among the nodes in the software defined storage. 
So, yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node.  Products like Starwind, Datacore, and others are nice entry points to HA without the expense of purchasing an external storage
shelf/array of some sort because DAS is used instead.
1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require actual hardware to run on. Say Nexenta claims they do SDS and they need a separate physical servers running Solaris and their (Nexenta) storage app. Microsoft
we all love so much because they give us infrastructure we use to make our living also has Clustered Storage Spaces MSFT tells is a "Software Defined Storage" but they need physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware
solutions. More pure ones don't need any hardware but they still share actual server hardware with hypervisor (HP VSA, VMware Virtual SAN, oh, BTW, it does require flash to operate so it's again not pure software thing). 
2) Yes, there are a number of solutions, but the devil is in the details. Technically, the whole virtualization world is sliding away from the ancient model of VM-hosted storage virtualization stacks toward stacks that are part of the hypervisor (VMware Virtual Storage Appliance being replaced
by VMware Virtual SAN is an excellent example). So, talking about Hyper-V, there are not many companies who have implemented VM-less solutions. Except for the ones you've named, there's also SteelEye, and that's probably all (Double-Take cannot replicate running
VMs effectively, so it cannot be counted). Running the storage virtualization stack as part of Hyper-V has many benefits compared to VM-hosted stacks:
- Performance. Obviously, kernel-space DMA engines (StarWind) and a polling driver model (DataCore) are faster in terms of latency and IOPS than VM-hosted I/O that is all routed over VMBus and emulated storage and network hardware.
- Simplicity. With native apps it's click and install. With VMs it's a UNIX management burden (BTW, who will update the forked-out Solaris your VSA is running on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) and there's always a "chicken and
egg" issue: the cluster starts and needs access to shared storage to spawn VMs, but the VMs live inside the VM-based VSA, which itself needs to be spawned. So first you start the storage VMs, then let them sync (a few terabytes, maybe a couple of hours to check access bitmaps for the volumes),
and only after that can you start your other production VMs. Very nice!
- Scenario limitations. You want to implement CSVs for Scale-Out File Servers? You cannot use HP VSA or StorMagic, because the SoFS and Hyper-V roles cannot mix on the same hardware. To surf the SMB 3.0 tide you need native apps or physical hardware behind it.
That's why the current virtualization leader, VMware, has clearly pointed out where these types of things need to run: side-by-side with the hypervisor kernel.
3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there's no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company
we provide Tier 2 infrastructure for runs cell phone stats on EMC, $1M for a few terabytes; the reason is that the EMC people keep FOUR units like that marked as "spare" and are required to replace a failed one in less than 15 minutes), there's no point in deploying a hardware
SAN / NAS for shared storage. SAN / NAS is a sustaining innovation and Virtual SAN is disruptive. The disruptive comes to replace the sustaining for 80-90% of business cases, leaving the sustaining to live on in niche deployments. Clayton Christensen's "Innovator's Dilemma".
Classic. More here:
Disruptive Innovation
http://en.wikipedia.org/wiki/Disruptive_innovation
So I would not consider Software Defined Storage a poor man's HA, or usable for test & development only. The thing has been ready for prime time for a long time. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell into SMB
& ROBO deployments last year?
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • Failover Cluster, Hyper-V, Virtualised PDC and Time Sync

    Hi,
    I wonder if anyone can clear something up for me.
    We have two hosts running Failover Clustering and Hyper-V. On these hosts we run two virtual DCs (one of which is our PDC emulator) as well as a number of member servers. In Hyper-V Integration Services we have set the time synchronisation option for each server, including
    the DCs. Each host is joined to the domain so that Failover Clustering works. We also have two physical domain controllers.
    When we run w32tm /query /source we can see that the VMs, including the virtual DCs, are getting their time from the hosts (VM IC Time Synchronization Provider), and when we run w32tm /query /source on the hosts we find that they are getting their time from
    each other (host1 has host2 and host2 has host1).
    We are experiencing some time drift, and I wonder whether this is due to the PDC getting its time from the host, instead of having its w32time type parameter set to NTP instead of NT5DS?
    What's the best practice for a situation like this??
    Thanks in advance!
    Stephen

    Yes, I was actually just digging the article out to post back, but it looks like you found it already.
    I'm not aware of the snapshot issues that you mention; it's not something I've experienced. But if I were snapshotting a DC, I would make sure it was running on 2012 Hyper-V and the DC was also 2012, with the PDC being 2012 too; otherwise snapshotting a DC could cause
    issues.
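    A commonly recommended configuration for this situation, sketched below (the VM names DC1/DC2 are hypothetical), is to stop the virtualised DCs taking time from the hosts and to point the PDC emulator at an external NTP source:

    ```powershell
    # On the Hyper-V hosts: disable the time sync integration service for the DC VMs.
    Get-VM DC1, DC2 | Get-VMIntegrationService -Name "Time Synchronization" |
        Disable-VMIntegrationService

    # On the PDC emulator guest: sync from external NTP instead of the host/domain.
    w32tm /config /manualpeerlist:"pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
    Restart-Service w32time
    w32tm /resync
    ```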
    Regards,
    Denis Cooper
    MCITP EA - MCT
    Help keep the forums tidy, if this has helped please mark it as an answer

  • Windows Server 2012 Failover Cluster (Hyper-V) Event Id 1196

    Hi All,
    I just installed Failover Clustering for Hyper-V on Windows Server 2012 with 2 nodes. I got the following error, event ID 1196.
    I re-created / deleted the cluster host A record in DNS and nothing happened.
    Any suggestions?
    There is a similar topic, but it didn't help:
    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/2ad0afaf-8d86-4f16-b748-49bf9ac447a3/
    Regards

    Hi,
    You may refer to this article to troubleshoot this issue:
    Event ID 1196 — Network Name Resource Availability
    http://technet.microsoft.com/en-us/library/dd354022(v=WS.10).aspx
    Check the following:
    Check that on the DNS server, the record for the Network Name resource still exists. If the record was accidentally deleted, or was scavenged by the DNS server, create it again, or arrange to have a network administrator create it.
    Ensure that a valid, accessible DNS server has been specified for the indicated network adapter or adapters in the cluster.
    Check the system event log for Netlogon or DNS events that occurred near the time of the failover cluster event. Troubleshooting these events might solve the problem that prevented the clustered Network Name resource from registering the DNS name.
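    The checks above can be scripted; a rough sketch using the default resource name (adjust it for your cluster):

    ```powershell
    # Inspect the Network Name resource and the DNS name it should register.
    Get-ClusterResource "Cluster Name" | Get-ClusterParameter

    # Confirm which DNS servers the node's network adapters are using.
    Get-DnsClientServerAddress

    # Once the record / server list is fixed, retry the DNS registration.
    Get-ClusterResource "Cluster Name" | Update-ClusterNetworkNameResource
    ```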
    For more information please refer to following MS articles:
    DNS Registration with the Network Name Resource
    http://blogs.msdn.com/b/clustering/archive/2009/07/17/9836756.aspx
    Lawrence
    TechNet Community Support

  • Gentoo Linux and Microsoft Failover Clusters / Hyper-V

    Hello,
    Hoping there are a few people on the boards familiar with running Gentoo Linux guests under Microsoft FailOver Cluster / Hyper-V hosts.
    I have four Gentoo Linux guest VMs (running kernel 3.12.21-r1) running under the Microsoft Failover Cluster system with Hyper-V as the host. All of the Hyper-V drivers are built into the kernel (including the utilities and balloon drivers) and generally they
    run without issue.
    For several months now, however, I have been having strange issues with them. Essentially they stop responding to network requests after random intervals. However, these intervals aren't a few minutes or hours from each other; more like days or even weeks before
    one of them will stop responding on the network side.
    The funny thing is that the VMs themselves on the console side still responds. However, if I issue a reboot command on the externally non-responsive VM, the system will eventually get to a stage where all of the services are stopped and then hangs right after
    the "mounting remaining system ro" line (or something like that).
    The Failover Cluster Manager then reports that the system is "Stopping" but the system never reboots.
    I have to completely restart the HOST system so that either (A) the VM in question transfers to another host and starts responding again or (B) when the HOST comes back up I can work with the VM again.
    This *ONLY* happens on the Gentoo Linux guest VMs and not my Windows VMs.
    Wondering if anyone has hints on this.
    Thank you for your time.
    Regards, Christopher K.

    Hi ckoeber,
    Hyper-V supports most Linux distributions, but not all of them, as there are so many; so far, Gentoo Linux is not on the supported list. You can refer to the following
    article for more details:
    Linux Virtual Machines on Hyper-V
    http://technet.microsoft.com/en-US/library/dn531030
    Hope this helps.
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Failover cluster without replication

    Hello,
    This might be a basic question to many, but I couldn't find a straight answer so ..
    is it possible to create a failover cluster with shared storage and without any replication/copies of the databases?
    i.e.:
    Create two exchange nodes with two shared luns, make each an owner of a lun and have its database stored in it.
    If node1 failed, it's lun and database gets mounted on node2, making node2 the host of both databases until node1 is back online.
    if the answer is no, was it possible in 2010?
    Thanks.

    Nothing in Exchange does that. Anything that did would be a 3rd party solution and not supported by Microsoft.
    Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • 3 Node Failover Cluster With iSCSI

    Is there any information available on the steps to create a 3 node failover cluster with iSCSI storage?  Is there a step-by-step guide?  I looked around but couldn't find much.  Thanks!

    Hi SCPSTech,
    The steps to create a 3-node cluster are the same as for a 2-node cluster; you can follow the step-by-step white paper below to create the 2-node cluster, then add the other node to the cluster.
     Configuring Failover Clusters with Windows Storage Server 2008
    http://blogs.technet.com/b/storageserver/archive/2009/12/17/configuring-failover-clusters-with-windows-storage-server-2008.aspx
    Add a Server to a Failover Cluster
    http://technet.microsoft.com/en-us/library/cc730998.aspx
    More information:
    Add or Remove Nodes in a SQL Server Failover Cluster (Setup)
    http://msdn.microsoft.com/en-us/library/ms191545.aspx
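    The same process can be sketched in PowerShell (the node names and cluster IP below are hypothetical); connect the iSCSI LUNs to all three nodes first:

    ```powershell
    # Validate the intended configuration across all three nodes.
    Test-Cluster -Node Node1, Node2, Node3

    # Create the cluster with all three nodes at once...
    New-Cluster -Name HVC1 -Node Node1, Node2, Node3 -StaticAddress 192.168.1.50

    # ...or build it with two nodes and add the third later.
    Add-ClusterNode -Cluster HVC1 -Name Node3
    ```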
    Hope this helps.

  • Server 2008 Hyper-V Failover Cluster Error on Domain Controller Reboot

    I am pretty new to Hyper-V virtualization, but I have 2 Hyper-V clusters, each with 2 nodes and a SAN, 1 physical domain controller for failover cluster management, and 1 virtual domain controller as backup.  All is running well, no issues.  I installed
    Windows updates on the physical DC and, upon reboot, got an error 5120 on cluster 2 that says "Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_CONNECTION_DISCONNECTED(c000020c)'.  All I/O will
    temporarily be queued until a path to the volume is reestablished."  It pointed to the 2nd node in that cluster as being the issue, but when I look at it, it is online and all healthy, so I don't understand why the error was triggered, and whether, if the DC
    went down for a failure, that node would permanently lose access to the CSV.
    Appreciate any help anyone can provide.

    Hi mtnbikediver,
    In theory, if the cluster is configured correctly, a DC restart will not take the CSV down. Is your shared storage installed on your DC? Did you run
    the cluster validation before you installed the cluster? We strongly recommend you run the cluster validation before you build the cluster; at the same time, please install the recommended updates for the 2008 cluster first.
    Recommended hotfixes for Windows Server 2008-based server clusters
    http://support.microsoft.com/kb/957311
    I found a similar scenario where a DC restart takes the cluster network name resource offline, but it is for 2008 R2:
    Cluster network name resource cannot be brought online when one of the domain controllers is partly down in Windows Server 2008 R2
    http://support2.microsoft.com/?id=2860142
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Exchange 2013 MBX in DAG along with Hyper-V and Failover Cluster

    Hi Guys! I've tried to find an answer to my question, or some kind of solution, but with no luck; that's why I am writing here. The case is as follows. I have two powerful server boxes and iSCSI storage, and I have to design a high-availability
    solution which includes SCOM 2012, SC DPM 2012 and Exchange 2013 (two CAS+HUB servers and two MBX servers).
    Let me tell you how I plan to do that and you will correct me if proposed solution is wrong.
    1. On both hosts - add Hyper-V role.
    2. On both hosts - add failover clustering role.
    3. Create 2 VMs through Failover Cluster Manager; the VMs will be stored on an iSCSI LUN, the first VM for SCOM 2012 and the second one for SC DPM 2012. Both VMs will be added as failover resources.
    4. Create 4 VMs - 2 for the CAS+HUB role and 2 for the MBX role; these VMs will be stored on an iSCSI LUN as well.
    5. Create a DAG within the two MBX servers.
    In general, that's all. What I wonder is whether I can use failover clustering to achieve high availability for the first 2 VMs and, at the same time, create a DAG between the MBX servers and NLB between the CAS servers?
    Excuse me for this question, but I am not proficient in this matter.

    Hi,
    As far as I know, it is supported to create a DAG for Mailbox servers installed in Hyper-V VMs.
    And since load balancing changed for CAS in Exchange 2013, DNS round robin is often a better fit than NLB. However, you can still use NLB with Exchange 2013.
    For more information, you can refer to the following article:
    http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
    If you have any question, please feel free to let me know.
    Thanks,
    Angela Shi
    TechNet Community Support

  • How to assign SMB storage to CSV in HV failover cluster?

    I have a Hyper-V Cluster that looks like this:
    Clustered-Hyper-V-Diagram
    2012 R2 Failover Cluster
    2 Hyper-V nodes
    iSCSI Disk Witness on isolated "Cluster Only" Network
    "Cluster and Client" Network with nic-team connectivity to 2012 R2 File Server
    Share configured using: server manager > file and storage services > shares > tasks > new share > SMB Share - Applications > my RAID 1 volume.
    My question is this: how do I configure a Clustered Shared Volume?  How do I present the Shared Folder to the cluster?
    I can create/add VMs from Cluster Manager > Roles > Virtual Machines using \\SMB\Share for the location of the vhd...  but how do I use a CSV with this config?  Am I missing something?

    "Right click one of the disks that you assigned to the cluster as available storage"
    I don't yet have any disks assigned to the cluster as available storage.
    Just for grins, I added an 8Gb iSCSI lun and added it to a CSV:
    PS C:\> Get-ClusterResource
    Name                State   OwnerGroup     ResourceType
    ----                -----   ----------     ------------
    Cluster IP Address  Online  Cluster Group  IP Address
    Cluster Name        Online  Cluster Group  Network Name
    witness             Online  Cluster Group  Physical Disk
    PS C:\> Get-ClusterSharedVolume
    Name      State   Node
    ----      -----   ----
    test8Gb   Online  CLUSTERNODE01
    All well and good, but from what I've read elsewhere...
    SMB 3.0 via a 2012 File server can only be added to a Hyper-V CSV cluster using the VMM component of System Center 2012.  That is the only way to import an SMB 3 share for CSV storage usage.
    http://community.spiceworks.com/topic/439383-hyper-v-2012-and-smb-in-a-csv
    http://technet.microsoft.com/en-us/library/jj614620.aspx
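    In other words, an SMB 3.0 share never shows up as a cluster disk or CSV; the cluster simply stores the VM files at the UNC path. A sketch of a clustered VM living directly on the share (the path and names are hypothetical):

    ```powershell
    # Create the VM with its configuration and VHDX on the SMB share...
    New-VM -Name App1 -MemoryStartupBytes 2GB -Path \\fs01\vms `
        -NewVHDPath \\fs01\vms\App1\App1.vhdx -NewVHDSizeBytes 60GB

    # ...then make it highly available; no CSV is involved.
    Add-ClusterVirtualMachineRole -VMName App1
    ```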

  • Hyper-V Failover Cluster virtual guests suddenly reboots

    The environment is Server 2012 R2 using dual clusters--a Hyper-V Failover Cluster running guest application virtual machines and a Scale-Out File Server Cluster using Tiered Storage Spaces which are used to supply SMB3 shares
    for Quorum and CSV. Has anyone had this problem?

    Anything relevant in the host or guest event logs? I would also check the cluster event logs to see if there are any indications there as well.
    Does the guest go down hard or gracefully reboot?
    Need more info.
    Andy Syrewicze
    Come talk more about Hyper-V and the Microsoft Server Stack at
    Syrewiczeit.com and the Altaro Hyper-V Hub!
    Posts are my own and in no way reflect the views of my employer or any other entity for which I produce technical content.

  • Hyper-V Failover Cluster Configuration Confirmation

    Dear All,
    I have created a Hyper-V failover cluster, and I want you to confirm that the configuration I have done is okay and that I have not missed
    anything that is mandatory for a Hyper-V failover cluster to work.  My configuration is below:
    1. Presented Disks to servers, formatted and taken offline
    2. Installed necessary features, such as failover clustering
    3. Configured NIC Teaming
    4. Created cluster, not adding storage at the time of creation
     - Added disks to the cluster
     - Added disks as CSV
     - Renamed disks to represent respective CSV volumes
     - Assigning each node a CSV volume
     - Configured quorum automatically which configured the disk witness
     - There were two networks so renamed them to Management and Cluster Communication
     - Exposed Management Network to Cluster and Clients
     - Exposed Cluster Communication Network to Cluster only
    5. Installed Hyper-V
     - Changed Virtual Disks, Configuration and Snapshots Location
     - Assigned one CSV volume to each node
     - Configured External switch with allow management option checked
    1. For a minimum configuration, is this enough?
    2. If I create a virtual machine and make it highly available from the Hyper-V console, would it be highly available, and would it live
    migrate, etc.?
    3. Are there any configuration changes required?
    4. Please suggest how it can be made better.
    Thanks in advance

    Hi,
    Please refer to the following steps to build a Hyper-V failover cluster:
    Step 1: Connect both physical computers to the networks and storage
    Step 2: Install Hyper-V and Failover Clustering on both physical computers
    Step 3: Create a virtual switch
    Step 4: Validate the cluster configuration
    Step 5: Create the cluster
    Step 6: Add a disk as CSV to store virtual machine data
    Step 7: Create a highly available virtual machine 
    Step 8: Install the guest operating system on the virtual machine
    Step 9: Test a planned failover
    Step 10: Test an unplanned failover
    Step 11: Modify the settings of a virtual machine
    Step 12: Remove a virtual machine from a cluster
    For details please refer to following link:
    http://technet.microsoft.com/en-us//library/jj863389.aspx
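    Roughly, steps 4-7 of the above correspond to the following sketch (the node, cluster, and disk names are hypothetical):

    ```powershell
    # Validate and create the cluster.
    Test-Cluster -Node HV1, HV2
    New-Cluster -Name HVC1 -Node HV1, HV2 -StaticAddress 10.0.0.50

    # Add the available disks, then promote one to a Cluster Shared Volume.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # Create a VM on the CSV and make it highly available.
    New-VM -Name VM1 -Path C:\ClusterStorage\Volume1 -MemoryStartupBytes 1GB
    Add-ClusterVirtualMachineRole -VMName VM1
    ```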
    Hope it helps
    Best Regards
    Elton Ji

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped into this problem:
    I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster, but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate or shutdown-->move-->start one
    of the guest nodes on another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not
    been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
    We have a ticket logged with Microsoft, more of a checkbox exercise to reassure the business that we're doing the needful.  Anyway, they had us...
    Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0 to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
    There was no change: move the guest to another Hyper-V node and the failover cluster, well, fails, with the following event IDs on the node that fails...
    1564 - File share witness resource 'xxxx' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster.
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed.
    1573 - Node xxxx failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available.
    The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails fail over to the remaining node...
    So we then:
    - Removed 3 of the NICs in one of the 4-NIC teams, leaving a single NIC in the team (no change)
    - Removed one NIC from the LACP group on each Hyper-V host
    - Created a new virtual switch using this simple trunk-port NIC on each Hyper-V host
    - Moved the DAG nodes to this vSwitch
    The failover cluster now works as expected, with guest VMs running on separate Hyper-V hosts, when on this vSwitch with a single NIC.
    So Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue resolved once we are able to find the cause of the above mentioned issue", which we have now done, as in, teaming is the cause... argh.
    But after talking, they are now escalating internally.
    The other thing we are doing is building Server 2012 guests and installing Exchange 2010 SP3, to get an Exchange 2010 DAG running on Server 2012 and see if it has the same issue, as people indicate that this combination perhaps does not have the same problem.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
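    The workaround described above (a plain vSwitch on a single non-teamed NIC for the DAG nodes) might be sketched like this on each host; the adapter, switch, and VM names are hypothetical:

    ```powershell
    # Pull one NIC out of the LACP team and build a dedicated external switch on it.
    Remove-NetLbfoTeamMember -Team "Virtual Machine Network 1" -Name "Ethernet 12"
    New-VMSwitch -Name "DAG-Switch" -NetAdapterName "Ethernet 12" -AllowManagementOS $false

    # Reconnect the DAG guest to the new switch.
    Get-VM MBX1 | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName "DAG-Switch"
    ```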
    A Cloud Mechanic.

  • I can't find failover cluster management after creating hyper-v cluster on SCVMM 2012 R2

    I've created a Hyper-V cluster on SCVMM 2012 R2, but I can't find the Failover Cluster Manager to move storage resources. All hosts show the Hyper-V role and the Failover Clustering feature installed. The disk witness in quorum is good, same as for the
    other CSV LUN. Please help me, Microsoft. Thank you.

    The management consoles do not get installed when building the cluster through SCVMM. However, it's not mandatory to have the management tools on the server. You can use a different machine with the management tools installed and connect to
    this cluster remotely.
    Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.
    InsideVirtualization.com

  • Can a SQL Server Failover Cluster Instance (FCI) be Implemented Between Two Hyper-V Hosted Virtual Machines?

    I haven't had the opportunity to implement a SQL Server Failover Cluster Instance (FCI) for over 10 years and that was done with two physical, identical database servers way back in the day of Windows Server 2003 and SQL Server 2000 (old school).
    Can a SQL Server 2008 R2 Failover Cluster Instance (FCI) be implemented between two Hyper-V hosted virtual machines? The environment in question already has Windows Server 2012 R2 Hyper-V hosts in place, so I'm just looking to see if this is even
    possible and/or supported when utilizing virtual machines.
    The client in question is currently using SQL Server 2008 R2 instances running on Win2008R2, Win2012, and Win2012R2, but I'd also be interested how this can be done or not with SQL Server 2012 or 2014 as well. Thanks in advance.
    Bill Thacker

    Yes, it can be done with Hyper-V guests. In fact, with Windows Server 2012 R2 Hyper-V, guests can use the Shared VHDX feature for shared storage used by Windows clusters. The guests can run Windows Server 2008 and higher provided that the Hyper-V Integration
    Services are installed to support Shared VHDX. The only challenge here is making the Hyper-V hosts highly available as well, running it on WSFC.
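    On 2012 R2, the shared disk for the guest FCI can be a Shared VHDX attached to both guest nodes; a sketch follows (the paths and VM names are hypothetical, and the VHDX must sit on a CSV or an SoFS share):

    ```powershell
    # Create the shared data disk on a Cluster Shared Volume.
    New-VHD -Path C:\ClusterStorage\Volume1\SQLData.vhdx -Fixed -SizeBytes 100GB

    # Attach it to both SQL guest nodes with sharing (persistent reservations) enabled.
    Add-VMHardDiskDrive -VMName SQLNode1 -ControllerType SCSI `
        -Path C:\ClusterStorage\Volume1\SQLData.vhdx -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName SQLNode2 -ControllerType SCSI `
        -Path C:\ClusterStorage\Volume1\SQLData.vhdx -SupportPersistentReservations
    ```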
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recover Deep Dive Course

  • 2 Hyper-V Servers with Failover Cluster and a single File Server and .VHDs stored on a SMB 3 Share

    I have 2 x M600 Dell blades (100 GB local storage and 2 NICs) and a single R720 file server (2.5 TB local SAS storage and 6 NICs).  I'm planning a lab/developer environment using 2 Hyper-V servers with Failover Clustering and a single file server, putting
    all the .VHDs on an SMB 3 share on the file server.
    The idea is to have an HA solution, live migration, etc., storing the .VHDs on an SMB 3 share:
    \\fileserver\shareforVHDs
    Is it possible? How will the cluster understand
    \\fileserver\shareforVHDs as a cluster disk and offer HA on it?
    Or will I have to "re-think", forget about VHDs on an SMB 3 share, and deploy using iSCSI?
    Does Storage Spaces make a difference in this case?
    All based on Windows 2012 R2 Std English version.

    You can do what you want to do just fine. Hyper-V / Windows Server 2012 R2 can use an SMB 3.0 share instead of block storage (iSCSI/FC/etc). See:
    Deploy Hyper-V over SMB
    http://technet.microsoft.com/en-us/library/jj134187.aspx
    There would be no shared disk and no CSV, just an SMB 3.0 folder both hypervisor hosts have access to. Much simpler to use. See:
    Hyper-V recommends SMB or CSV ?
    http://social.technet.microsoft.com/Forums/en-US/d6e06d59-bef3-42ba-82f1-5043713b5552/hyperv-recommends-smb-or-csv-
    You'll have a limited solution, however, as your single physical file server is a single point of failure.
    You can use Storage Spaces just fine, but you cannot use Clustered Storage Spaces, as in that case you would have to take the SAS spindles out of your R720 box and mount them in a SAS JBOD (make sure it's certified). That way you get rid of the active components
    (CPU, RAM) and keep a more robust all-passive SAS JBOD as your physical shared storage. Better than a single Windows-running server, but for true fault tolerance you'd need 3 SAS JBODs. Not exactly cheap :) See:
    Deploy Clustered Storage Spaces
    http://technet.microsoft.com/en-us/library/jj822937.aspx
    Storage Spaces,
    JBODs, and Failover Clustering – A Recipe for Cost-Effective, Highly Available Storage
    http://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    Using
    Storage Spaces for Storage Subsystem Performance
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn567634.aspx#enclosure
    Storage
    Spaces FAQ
    https://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx
    An alternative way would be using a Virtual SAN similar to VMware VSAN; in this case you can get rid of physical shared storage entirely and use cheap high-capacity SATA spindles (and SATA SSDs!) instead of expensive SAS.
    Hope this helped :)
