Hyper-V internal network (1 of 2, NOT the cluster network) unavailable in Failover Cluster Manager 2008 R2

Hi all,
I had a very strange situation in my two-node Hyper-V cluster:
I have one network for heartbeat only (10.0.0.0/24) and a second one for Hyper-V internal networking for the virtual machines (in its properties marked "Do not allow cluster network communication").
The machines were working properly and so was every migration.
One day my second node, HyperV2, was marked red in the Failover Cluster Manager MMC. I discovered that the Hyper-V LAN was shown as unavailable on this second node. BUT everything was actually working properly: the HyperV2 node was on the internet, communicated with the AD domain, and could even run any
virtual machine...
I checked the configuration several times, also checked the TMG configuration (I wondered whether a network access rule could be set wrong), and tried restarting the host. No result; the network was still shown as unavailable.
After about an hour I found the resolution:
On my second Hyper-V node, disable and re-enable the Local Area Connection network adapter connected to the Hyper-V LAN in the Network Connections control panel!
Hope this helps somebody ;)
Marian, just trying to help you

Resolution:
On the affected Hyper-V node, disable and re-enable the Local Area Connection network adapter connected to the Hyper-V LAN in the Network Connections control panel.
I guess something got flushed in the network configuration, possibly in combination with the network adapter driver.
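A minimal PowerShell sketch of the same adapter bounce, assuming the connection name (the name "Local Area Connection 3" is a placeholder) and that the FailoverClusters module is available on the node:

# Bounce the Hyper-V LAN adapter on the affected node (connection name is an assumption).
# Uses WMI so it also works on 2008 R2 / PowerShell 2.0; on 2012+ Disable-NetAdapter / Enable-NetAdapter do the same job.
$nic = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'Local Area Connection 3'"
$nic.Disable() | Out-Null
Start-Sleep -Seconds 5
$nic.Enable() | Out-Null

# Confirm the cluster sees the network again
Import-Module FailoverClusters
Get-ClusterNetwork | Format-Table Name, State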
Marian, just trying to help you

Similar Messages

  • Network DR test causes Exchange DAG network to fail (Failover Cluster Manager reports comms errors)

We have a DAG configured between 2 mailbox servers, one in each of our main data centres. Our comms team recently performed a DR test between our 2 data centres, switching from the main production link to the backup link. During this outage the Failover
Cluster Manager reported errors, with each mailbox server reporting the other as uncontactable. The events that were logged include the following:
    Isatap interface isatap.{02ADE20A-D5D4-437F-AD00-E6601F7E7A9D} is no longer active. (EventID 4201)
    Cluster node 'MAILBOX_SERVER' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run the
    Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node is
    connected such as hubs, switches, or bridges. (EventID 1135)
    File share witness resource 'File Share Witness (\\WITNESS_SERVER\SHARE_NAME)' failed to arbitrate for the file share '\\WITNESS_SERVER\SHARE_NAME'. Please ensure that file share '\\WITNESS_SERVER\SHARE_NAME' exists and is accessible by the cluster. (EventID
    1564)
    Cluster resource 'File Share Witness (\\\WITNESS_SERVER\SHARE_NAME)' in clustered service or application 'Cluster Group' failed. (EventID 1069)
    The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk. Run the Validate a Configuration wizard to check your network
    configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges. (EventID 1177)
    The Cluster Service service terminated with service-specific error A quorum of cluster nodes was not present to form a cluster. (EventID 7024)
    The Microsoft Exchange Information Store service terminated unexpectedly.  It has done this 1 time(s).  The following corrective action will be taken in 5000 milliseconds: Restart the service. (EventID 7031)
Looking at the Cluster Events in the Failover Cluster Manager snap-in I see a heap of Event ID 47 (cannot activate the DAG databases as the server is not up according to the Windows Failover Cluster service) and:
    Node status could not be recorded. This could prevent some network failure logic from functioning correctly. NodeStatus:IsHealthy=True,HasADAccess=True,ClusterErrorOverrideFalse,LastUpdate=5/2/2011 8:25:42 AMUTC Failure:An Active Manager operation failed.
    Error: An error occurred while attempting a cluster operation. Error: Cluster API '"ClusterRegSetValue() failed with 0x6be. Error: The remote procedure call failed"' failed.. (EventID 184)
    Forcefully dismounting all the locally mounted databases on server 'BACKUP_MAILBOX_SERVER. (EventID 307).
Our comms team doesn't believe it is a comms issue, as they did not log any network communication errors between the servers in the two sites (using ICMP). So if it is not a comms issue, how can I configure the failover cluster to be resilient to
this type of network failover event?
    Thanks
    Dan

Isn't it also true that in a stretched DAG with an even number of nodes, the PAM needs to be in the same site as the active DAG node? If the connection between both nodes goes down and the PAM is in the "passive" site, the primary node will
dismount the databases, since it can't check with the PAM to make sure it's safe for it to be up.
In an even-numbered stretched DAG, the PAM changes to the DR/passive site every time a failover occurs, but doesn't automatically switch back when you reactivate the primary node.
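For reference, the heartbeat tolerances that decide how quickly the cluster evicts a node during such an outage can be inspected and relaxed with the FailoverClusters PowerShell module. A sketch; the values shown are illustrative assumptions, not recommendations:

# Show the current heartbeat delay/threshold settings
Import-Module FailoverClusters
Get-Cluster | Format-List *SubnetDelay*, *SubnetThreshold*

# Relax the cross-site tolerances so a brief WAN switchover is less likely to drop quorum
(Get-Cluster).CrossSubnetDelay     = 2000   # milliseconds between heartbeats
(Get-Cluster).CrossSubnetThreshold = 10     # missed heartbeats before a node is declared down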

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current Configuration:
3 node cluster, using clustered shared storage and about 22 VMs. The host servers are running 2012 Datacenter while all guests are running 2012 Standard. The SAN is EqualLogic and we are using HIT Kit 4.5.
I have a CSV that is running out of space, so I created another CSV so that I could move some of the VMs to a new home. I tested this by creating a test VM and moving it successfully 3 times. I then moved an actual
LIVE VM, and while it seemed to move OK, it will now not start. The message is "a disk read error occurred Press ctrl+alt+del to restart". I moved the test VM and it failed as well.
I have read several things about this, but nothing seems to relate to my specific issue. I have verified that VSS is working and free of errors as well. From the Settings menu for the VM, if I select "Inspect" on the drive,
the properties all look fine. It is a VHDX and both the current file size and maximum disk size seem correct.
The VMs were moved using the "Move - Virtual Machine Storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.

Let's see if I can answer all of those; I appreciate the brainstorming. This really needs to work correctly.
1. The storage is moving.
2. VMs and SAN are on the same device.
3. No, my Cluster Shared Volume (CSV) is out of room (more on that later).
4. No, I actually have 2 SANs grouped together. However, I'm moving the VMs from one CSV to another CSV on the same SAN. The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, an EqualLogic PS6010, is not involved in any way except
that it is in the same SAN group.
5. No errors during the move; it took about 5-10 minutes with no error messages. Note that I did a test and it worked GREAT 3 times. Now both a live VM and the test VM are doing the same thing.
6. No, the machine is not too large. The test machine was a 50 GB drive, just 2012 Standard installed with updates. The live VM was a 75 GB VM that was my Trend Micro server, our anti-virus host.
7. Expand the existing CSV? Yes, I should be able to, but there is an issue there. The volume was expanded correctly: EqualLogic sees the added space, Failover Cluster Manager sees the added space, however Disk Management only
sort of does. When looking at Disk Management, there are 2 areas that tell you a little bit about the drive, the top part and the bottom part. The top part only shows 500 GB, the original size, while the bottom part
says that it is 1 TB in size. I called Dell's technical support and after they looked at it I was told by the technician that they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled
CSV. I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance. The Trend VM was running on CSV-1 and working fine.
I must admit that the test VM was on CSV-2. I moved the test VM from CSV-2 to CSV-3 back and forth several times with no errors. The Trend server was on CSV-1 and was moved to CSV-3, and it failed. Again, I then moved
the test VM from CSV-2 to CSV-3 and it failed the same way. I could not test the "TEST" VM on CSV-1 due to CSV-1 not having enough space.
8. I did disable the network on the VM to see if that mattered; it did not.
9. I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully. Once I am able to test that suggestion I will post the results as well.
Again, thanks for all the suggestions and comments, as I'd rather have lots to look at and try. I hope I answered them well enough.
    Kenny
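For reference, the FCM "Move - Virtual Machine Storage" action used here corresponds roughly to the Move-VMStorage cmdlet. A sketch, run on the node that currently owns the VM; the VM name and destination CSV path are assumptions:

# Move a VM's VHDX, checkpoints and paging files to another CSV
Move-VMStorage -VMName "TestVM" -DestinationStoragePath "C:\ClusterStorage\Volume3\TestVM"

# Afterwards, confirm which paths the VM's disks now point at
Get-VMHardDiskDrive -VMName "TestVM" | Format-Table VMName, Path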

  • Hyper-v Failover Cluster management via powershell

    Hi
We are looking at having a management server act as a proxy for managing a couple of Hyper-V clusters using CSV. We plan to do the management using PowerShell commands.
We create a session to one of the hosts in the cluster and execute commands using Invoke-Command. The cluster cmdlets seem to fail with the following warning:
    WARNING: If you are running Windows PowerShell remotely, note that some failover clustering cmdlets do not
    work remotely. When possible, run the cmdlet locally and specify a remote computer as the target. To run the
     cmdlet remotely, try using the Credential Security Service Provider (CredSSP). All additional errors or
    warnings from this cmdlet might be caused by running it remotely.
What is the recommended way to set this up for using the FailoverClusters module? We want to have a single management server that acts as a proxy for all servers, clustered or not.
Also, is there a document that describes the various operations done via Failover Cluster Manager and the corresponding PowerShell commands (or sets of commands)?
    Thanks
    /Jd
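For reference, the CredSSP double-hop setup the warning points to is usually configured along these lines. A sketch; the server names are assumptions:

# On the management server (the client side of the second hop)
Enable-WSManCredSSP -Role Client -DelegateComputer "hv-node1.contoso.local" -Force

# On the cluster node that will receive the delegated credentials
Enable-WSManCredSSP -Role Server -Force

# Back on the management server, run the cluster cmdlets in a CredSSP-authenticated session
Invoke-Command -ComputerName "hv-node1.contoso.local" -Authentication Credssp -Credential (Get-Credential) -ScriptBlock {
    Import-Module FailoverClusters
    Get-ClusterGroup
    Get-ClusterSharedVolume
}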

    Regarding the Stop action from Failover Cluster Manager, Eric, I understand your point. But when I do shutdown from Failover Cluster Manager, the VM shuts down as expected even when the setting is set to Save.
    I was very specifically talking about the Stop-ClusterGroup cmdlet, not any command issued in Failover Cluster Manager. But, well, yeah, if you tell a VM to shut down, it shuts down. I don't know why you'd expect anything different to happen. If you're looking
    for the equivalent to Stop-ClusterGroup inside Failover Cluster Manager, it's not called "Shut Down". You can use "Stop Role" on the "More Actions" menu for the VM. You can also find the configuration object (usually named in the format of "Virtual Machine
    Configuration XXX") and take it offline.
    I tested a number of times after your first post, and Stop-ClusterGroup does what the Cluster-Controlled Action is set to every single time for me.
    I could only make educated guesses at the underlying mechanics of FCM and PowerShell's cluster cmdlets, but the stand-out difference is that FCM has no method to operate in a double-hop situation at all, while PowerShell does. You only encounter these difficulties
    with PowerShell in that second hop. The question you're asking: "it would be great to know how Failover Cluster Manager works without this setup ?" is an apples-to-oranges comparison.
    This particular sentence of yours sort of changes the overall parameter of your question:
    "... so our automation works..."
    I was under the impression you were setting up this double-hop because you wanted admins to manually execute PowerShell cmdlets against your cluster from a single controlled location.
    If automation is your goal, do it right from the cluster. I obviously don't know your entire wishlist and it's none of my business, but this double-hop situation may not be ideal.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • LUN can't be accessed after move it to another Hyper-V Failover Cluster without "remove from cluster shared volumes" on the original cluster

    Hi all,
I have an old cluster, let's call it cluster01, and a new cluster, cluster02. There is a LUN attached to cluster01 as a CSV volume. I forgot to "Remove from Cluster Shared Volumes" in the Failover Cluster console, then powered off cluster01 and
attached the LUN to cluster02. Now the LUN can't be accessed in cluster02; it shows as a RAW disk. I tried to attach the LUN back to its original cluster, cluster01, but it can't be read there either.
Is there any way to get it back?

    Hi Zephyrhu,
Can you run the Clear-ClusterDiskReservation PowerShell cmdlet and see if that helps?
    http://technet.microsoft.com/en-us/library/ee461016(WS.10).aspx
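A minimal usage sketch, run on one of the cluster02 nodes; the node name and disk number are assumptions. Note that this only clears the persistent SCSI reservations left behind by the old cluster:

Import-Module FailoverClusters
Clear-ClusterDiskReservation -Node "cluster02-node1" -Disk 1 -Force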
    Thanks,
    Umesh.S.K

  • Can I change which nic is used for a cluster network when more than one nic on the node is on same subnet?

This cluster has been up and working for maybe a year and a half the way it is. There are two nodes, running Server 2012. In addition to a couple of network interfaces devoted to VM traffic, each node has:
    Management Interface: 192.168.1.0/24
    iSCSI Interface: 192.168.1.0/24
    Internal Cluster Interface: 192.168.99.0/24
The iSCSI interfaces have to be on the same subnet as the management interfaces due to limitations in the shared storage. Basically, if I segregated it I wouldn't be able to access the shared storage itself for any kind of management or maintenance tasks.
    I have restricted the iSCSI traffic to only use the one interface on each cluster node but I noticed that one of the cluster networks is connecting the management interface on one cluster node member with the iSCSI interface on the other cluster node member. 
    I would like for the cluster network to be using the management interface on both cluster node members so as not to interfere with iSCSI traffic.  Can I change this?
    Binding order of interfaces is the same on both boxes but maybe I did that after I created the cluster, not sure. 

    Hi MnM Show,
Tim is correct: if you are using iSCSI storage and using the network to get to it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This
network should be disabled for cluster communications so that it is dedicated to storage-related traffic only.
This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During the creation of the cluster, iSCSI traffic will be detected and the network
will be disabled from cluster use. This network should be set to lowest in the binding order.
    The related article:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
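For reference, the cluster network roles described in that article can be checked and changed from PowerShell. A sketch; the network name is an assumption (Role 0 = not used by the cluster, 1 = cluster only, 3 = cluster and client):

# List the cluster networks, their roles and subnets
Import-Module FailoverClusters
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask

# Take the iSCSI network out of cluster use entirely
(Get-ClusterNetwork "Cluster Network 2").Role = 0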
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Cluster Network 2 is missing from Failover cluster Manager

I have a two-node Windows 2008 R2 SP1 MS SQL cluster. There was an issue with one node's NIC card, which was replaced, and since then "Cluster Network 2" is not visible in Failover Cluster Manager; the heartbeat and public IP NICs of both nodes
are appearing in the "Cluster Network 1" container. Even though the public and heartbeat networks are on different subnets, the cluster detects them as one subnet. I need to know how to bring back "Cluster Network 2".

After replacing the NIC, the cluster worked fine on the other node for a couple of days, and then the cluster service started terminating on both nodes.
Checking the cluster logs, I found the error below:
"ERR   [CORE] Node 2: exception caught AlreadyExists(183)' because of 'already exists'(AODBP2DB1 - Local Area Connection)"
To correct this problem I renamed the NICs on both nodes to Public and Private; the cluster started and failover tested fine. But since then I can only see "Cluster Network 1", and all NICs are listed under this container.
The cluster validation report, however, shows both Cluster Network 1 and Cluster Network 2 entries.
Network section pasted from the cluster validation report:
Network: Cluster Network 1
DHCP Enabled: False
Network Role: Enabled
Prefix: 10.100.18.0, Prefix Length: 25
Network Interface: Node1 - Public   (DHCP Enabled: False, IP Address: 10.100.18.11, Prefix Length: 25)
Network Interface: Node2 - Public   (DHCP Enabled: False, IP Address: 10.100.18.16, Prefix Length: 25)
Network: Cluster Network 2
DHCP Enabled: False
Network Role: Internal
Prefix: 10.101.130.0, Prefix Length: 25
Network Interface: Node1 - Private  (DHCP Enabled: False, IP Address: 10.101.130.11, Prefix Length: 25)
Network Interface: Node2 - Private1 (DHCP Enabled: False, IP Address: 10.101.130.13, Prefix Length: 25)
    Verifying that each cluster network interface within a cluster network is configured with the same IP subnets.
    Examining network Cluster Network 1.
    Network interface Node1- Public has addresses on all the subnet prefixes of network Cluster Network 1.
    Network interface Node2- Public has addresses on all the subnet prefixes of network Cluster Network 1.
    Examining network Cluster Network 2.
    Network interface Node1- Private has addresses on all the subnet prefixes of network Cluster Network 2.
    Network interface Node2- Private1 has addresses on all the subnet prefixes of network Cluster Network 2.
    Verifying that, for each cluster network, all adapters are consistently configured with either DHCP or static IP addresses.
    Checking DHCP consistency for network: Cluster Network 1. Network DHCP status is disabled.
    DHCP status (disabled) for network interface Node1- Public matches network Cluster Network 1.
    DHCP status (disabled) for network interface Node2- Public matches network Cluster Network 1.
    Checking DHCP consistency for network: Cluster Network 2. Network DHCP status is disabled.
    DHCP status (disabled) for network interface Node1- Private matches network Cluster Network 2.
    DHCP status (disabled) for network interface Node2- Private1 matches network Cluster Network 2.
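For reference, you can compare what the running cluster currently sees against the validation report above with the FailoverClusters PowerShell module (a sketch; run on either node):

Import-Module FailoverClusters
Get-ClusterNetwork          | Format-Table Name, Role, Address, AddressMask
Get-ClusterNetworkInterface | Format-Table Node, Name, Network, Address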

  • Upgrading from SQL Server 2012 Standard to SQL Server 2014 Standard Failover Cluster

    Goal: To upgrade my default instance from SQL Server 2012 to SQL Server 2014 in a failover cluster.
    Given:
    1) Operating System Windows 2012 R2
2) 2 virtual machines in a cluster with SQL Server as a guest cluster resource. The two VMs are called APPS08 and APPS09. They are our development environment, set up similarly to our production environment.
Problem: When running the SQL Server 2014 upgrade, I started on the VM that was not running the instance. I then moved on to upgrading the node that was running the instance. As soon as the install attempted to fail over the running instance,
an install error occurred saying it could not fail over. After many install attempts I consistently received the error
"The SQL Server failover cluster instance name 'Dev01'
already exists as a cluster resource." Opening Failover Cluster Manager, there is no record of a DEV01.
New strategy: create a SQL cluster called DEV07. At the end of the install I get "Resource for instance 'MSSQLSERVER' should not exist." Neither I nor my Windows 2012 guy understands what resource the install may be referring to.
We do not see anything out of the ordinary.
Any suggestions as to what resource may be holding the default instance would be greatly appreciated.

    Hi PSCSQLDBA,
From your description, you want to upgrade the default instance in a SQL Server cluster.
    >> 'SQL Server failover cluster instance name 'Dev01' already exists as cluster resource'
This error can occur when a previously used instance name has not been removed completely.
To work around the issue, please use one of the approaches below.
1. At a command prompt, type cluster res. This command will list all the resources, including orphan resources. To delete an orphan resource, type cluster res "<resource name>" /delete.
    For more information about the process, please refer to the article:
    http://gemanjyothisqlserver.blogspot.in/2012/12/sql-2008r22012-cluster-installation.html
    2. Delete DNS entries, and force a replication of DNS to the other DNS servers.
    For more information about the process, please refer to the article:
    http://jon.netdork.net/2011/06/07/failed-cluster-install-and-name-already-exists/#solution
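For reference, the orphan-resource check in step 1 can also be done with the FailoverClusters PowerShell module. A sketch; the resource name shown is an assumption:

# List every resource the cluster knows about, including orphans
Import-Module FailoverClusters
Get-ClusterResource | Format-Table Name, ResourceType, OwnerGroup, State

# Remove an orphaned resource once you have identified it
Remove-ClusterResource -Name "SQL Network Name (DEV01)" -Force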
    >> 'Resource for instance 'MSSQLSERVER' should not exist'
This error can occur when you already have MSSQLSERVER as a resource in the cluster which has not been removed completely. To work around the issue, you could rebuild the SQL Server cluster node.
    Regards,
    Michelle Li

  • Can we setup FILESTREAM on Failover Cluster

    I saw following point on Technet article about RBS.
    The local FILESTREAM provider is supported only when it is used on local hard disk drives or an attached Internet Small Computer System Interface (iSCSI) device. You cannot use the local RBS FILESTREAM provider on remote storage devices such as network attached storage (NAS).
It looks like we cannot use FILESTREAM on a failover cluster, because to set up a failover cluster we need shared storage. But then that storage is made available locally to the failover cluster, so FILESTREAM should work, right?
I found another article which talks about setting up FILESTREAM on a failover cluster, so I am a bit confused.
    https://msdn.microsoft.com/en-us/library/cc645886.aspx

    Hi Frank,
As mentioned in the other post, we can set up FILESTREAM on a failover cluster.
However, FILESTREAM can't live on a network-attached storage (NAS) device unless the NAS device is presented as a local NTFS volume via iSCSI; presented over iSCSI, it is supported by the
FILESTREAM provider.
    Reference:
    Description of support for network database files in SQL Server
    Programming with FileStreams in SQL Server 2008
    Thanks,
Lydia Zhang
    TechNet Community Support

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped in to this problem:
I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster, but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate, or shut down --> move --> start, one
of the guest nodes to another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not
been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
We have a ticket logged with Microsoft, more of a check-box exercise to reassure the business we're doing the needful. Anyway, they had us...
Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0 to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
There was no change: move the guest to another Hyper-V node and the failover cluster still fails, with the following event IDs on the node that fails...
    1564 -File share witness resource 'xxxx)' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster..
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed
    1573 - Node xxxx  failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available
The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails, fail over to the remaining node...
So we then:
Removed 3 NICs from one of the 4-NIC teams, leaving a single NIC in the team (no change)
Removed one NIC from the LACP group on each Hyper-V host
Created a new virtual switch using this simple trunk-port NIC on each Hyper-V host
Moved the DAG nodes to this vSwitch
The failover cluster now works as expected, with guest VMs running on separate Hyper-V hosts, when on this vSwitch with a single NIC.
So Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue
resolved once we are able to find the cause of the above mentioned issue", which we have now done, as in, teaming is the cause... argh.
    But after talking, they are now escalating internally.
The other thing we are doing is building Server 2010 guests and installing Exchange 2010 SP3, to get an Exchange 2010 DAG running on Server 2010 and see if it has the same issue, as people indicate that it perhaps does not have the same problem.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
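For reference, the working configuration described above (one NIC pulled out of the team, a plain external vSwitch built on it, DAG guests re-homed) can be sketched in PowerShell roughly like this; team, NIC, switch and VM names are assumptions:

# Pull one physical NIC out of the LACP team
Remove-NetLbfoTeamMember -Team "Virtual Machine Network 1" -Name "Ethernet 12" -Confirm:$false

# Build a plain, non-teamed external switch on that NIC (repeat on each Hyper-V host)
New-VMSwitch -Name "DAG-Test" -NetAdapterName "Ethernet 12" -AllowManagementOS $false

# Re-home the DAG guest's network adapter onto the new switch
Get-VMNetworkAdapter -VMName "DAG-NODE-1" | Connect-VMNetworkAdapter -SwitchName "DAG-Test"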
    A Cloud Mechanic.

  • Failover Cluster Hyper-V Storage Choice

    I am trying to deploy a 2 nodes Hyper-V Failover cluster in a closed environment.  My current setup is 2 servers as hypervisors and 1 server as AD DC + Storage server.  All 3 are running Windows Server 2012 R2.
Since everything is running on Ethernet, my choice of storage is between iSCSI and SMB 3.0.
I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster. However, I only have budget for one storage server. Is it a good idea to choose SMB over iSCSI
in this scenario (where there is only one storage server for the Hyper-V cluster)?
What do I need to pay attention to in this setup, apart from some unavoidable single points of failure?
In the SMB 3.0 file server cluster scenario that I mentioned above, they had to use SAS drives for the file server cluster (CSV). I am guessing that in my scenario SATA drives should be fine, right?

    "I suspect that Starwind solution achieves FT by running shadow copies of VMs on the partner Hypervisor"
    No, it does not run shadow VMs on the partner hypervisor.  Starwind is a product in a family known as 'software defined storage'.  There are a number of solutions on the market.  They all provide a similar service in that they allow for the
    use of local storage, also known as Direct Attached Storage (DAS), instead of external, shared storage for clustering.  Each of these products provides some method to mirror or 'RAID' the storage among the nodes in the software defined storage. 
    So, yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node.  Products like Starwind, Datacore, and others are nice entry points to HA without the expense of purchasing an external storage
    shelf/array of some sort because DAS is used instead.
    1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require actual hardware to run on. Say Nexenta claims they do SDS and they need a separate physical servers running Solaris and their (Nexenta) storage app. Microsoft
    we all love so much because they give us infrastructure we use to make our living also has Clustered Storage Spaces MSFT tells is a "Software Defined Storage" but they need physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware
    solutions. More pure ones don't need any hardware but they still share actual server hardware with hypervisor (HP VSA, VMware Virtual SAN, oh, BTW, it does require flash to operate so it's again not pure software thing). 
2) Yes, there are a number of solutions, but the devil is in the details. Technically the whole virtualization world is sliding away from the old approach of VM-based storage virtualization stacks to ones that are part of the hypervisor (VMware Virtual Storage Appliance being replaced
with VMware Virtual SAN is an excellent example). So, talking about Hyper-V, there are not many companies who have implemented VM-less solutions. Except for the ones you've named it's also SteelEye, and that's probably all (Double-Take cannot replicate running
VMs effectively, so it cannot be counted). Running the storage virtualization stack as part of Hyper-V has many benefits compared to VM-based solutions:
- Performance. Obviously kernel-space DMA engines (StarWind) and a polling driver model (DataCore) are faster in terms of latency and IOPS compared to VM-based I/O that is all routed over VMBus and emulated storage and network hardware.
- Simplicity. With native apps it's click and install. With VMs it's a UNIX management burden (BTW, who will update the forked Solaris the VSA is running on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) and always a chicken-and-egg
issue: the cluster starts, it needs access to shared storage to spawn VMs, but the storage is inside a VSA VM that itself needs to be spawned. So first you start the storage VMs, then make them sync (a few terabytes, maybe a couple of hours to check access bitmaps for the volumes),
and only after that can you start your other production VMs. Very nice!
- Scenario limitations. You want to implement a CSV for Scale-Out File Servers? You cannot use HP VSA or StorMagic because SoFS and Hyper-V roles cannot mix on the same hardware. To surf the SMB 3.0 tide you need native apps or physical hardware behind it.
That's why the current virtualization leader, VMware, has clearly pointed out where these types of things need to run: side-by-side with the hypervisor kernel.
3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there's no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company
we provide Tier 2 infrastructure for runs cell phone stats on EMC, $1M for a few terabytes; reason: EMC people have FOUR units like that marked as "spare" and a requirement to replace a failed one in less than 15 minutes), there's no point in deploying a hardware
SAN / NAS for shared storage. SAN / NAS is a sustaining innovation and Virtual SAN is disruptive. Disruptive comes to replace sustaining for 80-90% of business cases and leaves sustaining to live on in niche deployments. Clayton Christensen's "The Innovator's Dilemma".
Classic. More here:
Disruptive Innovation
http://en.wikipedia.org/wiki/Disruptive_innovation
So I would not consider Software Defined Storage a poor man's HA or usable for test & development only. The technology has been ready for prime time for a long time. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell to SMB
& ROBO deployments last year?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • 2 Hyper-V Servers with Failover Cluster and a single File Server and .VHDs stored on a SMB 3 Share

I have 2 x M600 Dell blades (100 GB local storage and 2 NICs) and a single R720 file server (2.5 TB local SAS storage and 6 NICs). I'm planning a lab/developer environment using 2 Hyper-V servers with failover clustering and a single file server, putting
all the .VHDs on an SMB 3 share on the file server.
The idea is to have an HA solution, live migration, etc., storing the .VHDs on an SMB 3 share:
\\fileserver\shareforVHDs
Is this possible? How will the cluster understand
\\fileserver\shareforVHDs as a cluster disk and offer HA on it?
Or will I have to "re-think", forget about VHDs on an SMB 3 share, and deploy using iSCSI?
Does Storage Spaces make a difference in this case?
All based on Windows 2012 R2 Standard, English version.

You can do what you want to do just fine. Hyper-V / Windows Server 2012 R2 can use an SMB 3.0 share instead of block storage (iSCSI/FC/etc.). See:
    Deploy Hyper-V over SMB
    http://technet.microsoft.com/en-us/library/jj134187.aspx
There would be no shared disk and no CSV, just an SMB 3.0 folder both hypervisor hosts would have access to. Much simpler to use. See:
    Hyper-V recommends SMB or CSV ?
    http://social.technet.microsoft.com/Forums/en-US/d6e06d59-bef3-42ba-82f1-5043713b5552/hyperv-recommends-smb-or-csv-
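A minimal sketch of that setup; the share name, paths and the Hyper-V hosts' computer accounts are assumptions (the NTFS ACLs on the folder must also grant those accounts access):

# On the file server: create the share and grant the Hyper-V hosts' computer accounts full control
New-SmbShare -Name "shareforVHDs" -Path "D:\Shares\shareforVHDs" -FullAccess 'DOMAIN\HV1$', 'DOMAIN\HV2$', 'DOMAIN\Hyper-V Admins'

# On a Hyper-V host: create a VM directly on the UNC path
New-VM -Name "LabVM01" -MemoryStartupBytes 2GB -Path "\\fileserver\shareforVHDs\LabVM01" -NewVHDPath "\\fileserver\shareforVHDs\LabVM01\LabVM01.vhdx" -NewVHDSizeBytes 60GB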
You'll have, however, a limited solution, as your single physical server acting as a file server will be a single point of failure.
You can use Storage Spaces just fine, but you cannot use Clustered Storage Spaces, as in that case you'd have to take the SAS spindles out of your R720 box and mount them in a SAS JBOD (make sure it's certified). That way you get rid of the active components
(CPU, RAM) and keep a more robust, all-passive SAS JBOD as your physical shared storage. Better than a single Windows-running server, but for true fault tolerance you'd have to have 3 SAS JBODs. Not exactly cheap :) See:
    Deploy Clustered Storage Spaces
    http://technet.microsoft.com/en-us/library/jj822937.aspx
Storage Spaces, JBODs, and Failover Clustering – A Recipe for Cost-Effective, Highly Available Storage
http://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
Using Storage Spaces for Storage Subsystem Performance
http://msdn.microsoft.com/en-us/library/windows/hardware/dn567634.aspx#enclosure
Storage Spaces FAQ
https://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx
An alternative would be to use a Virtual SAN, similar to VMware VSAN; in that case you can get rid of physical shared storage altogether and use cheap high-capacity SATA spindles (and SATA SSDs!) instead of expensive SAS.
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Heartbeat Network inside Hyper-V Failover Cluster

    Dear All,
I need to build a SQL Server 2012 SP1 two-node cluster inside a Hyper-V failover cluster. There is only one virtual switch in the Hyper-V failover cluster environment, which is used
for communication with the outside world using VLAN tagging. Since a SQL Server 2012 failover cluster requires a heartbeat network as well, and although it is possible to assign a VLAN tag to the heartbeat adapter and use that for heartbeat, there
would not be any gateway on the heartbeat network, which would render the VLAN tagging useless. So is the following plan good enough:
1. Create an internal virtual switch on all nodes of the cluster with the same name
2. Link the heartbeat adapter of the virtual machines to the internal virtual switch
Is it good enough, or is there a better way?
Thanks in advance.

    I am not an expert on this, but here is what I would do.
Create the VLAN and use that, to keep the network setup as easy to understand as possible. Our Hyper-V cluster is only using IPv6 on the heartbeat network, and that is working like a charm. I would do the same inside the Hyper-V hosts if I needed to build
a virtual cluster.
    What is the reason for deploying a failover cluster for SQL inside Hyper-V? Wouldn't a log shipping "cluster" provide a more secure solution for your SQL?
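For reference, the VLAN-tagged heartbeat approach suggested above can be sketched like this from the Hyper-V hosts; the VM names, switch name, adapter name and VLAN ID are assumptions:

# Give each guest node a dedicated heartbeat adapter on the existing external switch
Add-VMNetworkAdapter -VMName "SQLNODE1" -SwitchName "External-vSwitch" -Name "Heartbeat"
Add-VMNetworkAdapter -VMName "SQLNODE2" -SwitchName "External-vSwitch" -Name "Heartbeat"

# Tag those adapters onto their own heartbeat VLAN
Set-VMNetworkAdapterVlan -VMName "SQLNODE1" -VMNetworkAdapterName "Heartbeat" -Access -VlanId 99
Set-VMNetworkAdapterVlan -VMName "SQLNODE2" -VMNetworkAdapterName "Heartbeat" -Access -VlanId 99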
    /Martin
    Exchange is a passion not just a collaboration software.

  • Very Strange Network Issue With Two Guests on 2012 R2 Hyper-V Failover Cluster

Hi all. We're having an odd issue with two guests on our 2012 R2 failover cluster.
In a nutshell, if we shut down a particular server (I'll call it Server A), another totally different server (Server B) on the same node loses its network connectivity to the domain. If we start Server A back up, network connectivity returns on Server B.
At first I thought Server A might be running a service that was somehow linked to Server B, so I decided to disable Server A's NIC. Interestingly, that had no effect on Server B's connectivity.
The next step I tried was pausing Server A and again, no adverse effect on Server B's connectivity.
The next step was to live migrate Server A to another node. This action did cause Server B to lose its network connection.
One other clue is that if I ping Server B from either of the Hyper-V hosts in the cluster, I never lose the network connection to Server B.
So I suspect this is some network issue on the cluster, but I'm kind of at a loss where to go from here.
Has anyone seen this behavior before, or does anyone have any troubleshooting suggestions I can try?
Thanks!
    George Moore

Hi Sir,
I've never seen this before.
>> Next step was to live migrate server A to another node. This action did cause server B to lose its network connection.
Are they connected to the same virtual switch?
First, please run cluster validation to check whether there are any errors.
If it is OK, please try the following items for troubleshooting:
1. Shut down Server A and Server B
2. Add another virtual NIC to Server B
3. Start Server B and check whether the issue happens on both the "old" and the "new" virtual NIC
In addition, you can live migrate both A and B to another node, then try to live migrate A back to the original node.
If the issue persists, I would suggest removing that virtual switch on both nodes and re-creating it.
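A rough PowerShell sketch of steps 1-3 and the validation check; the VM, switch and node names are assumptions:

# Steps 1-3: add a second virtual NIC to Server B and test with it
Stop-VM -Name "ServerA", "ServerB"
Add-VMNetworkAdapter -VMName "ServerB" -SwitchName "VM-Switch" -Name "Test NIC"
Start-VM -Name "ServerB"

# Cluster validation, networking tests only, from either host
Test-Cluster -Node "HV-NODE1", "HV-NODE2" -Include "Network"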
    Best Regards,
    Elton Ji
    If it is not the answer please unmark it to continue
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Server 2008 Hyper-V Failover Cluster Error on Domain Controller Reboot

I am pretty new to Hyper-V virtualization, but I have 2 Hyper-V clusters, each with 2 nodes and a SAN, 1 physical domain controller for failover cluster management and 1 virtual domain controller as backup. All is running well, no issues. I installed
Windows updates on the physical DC and upon reboot got an error 5120 on cluster 2 that says "Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_CONNECTION_DISCONNECTED(c000020c)'. All I/O will
temporarily be queued until a path to the volume is reestablished." It pointed to the 2nd node in that cluster as being the issue, but when I look at it, it is online and healthy, so I don't understand why the error was triggered, and if the DC went
down because of a failure, would that node permanently lose access to the CSV?
    Appreciate any help anyone can provide.

    Hi mtnbikediver,
In theory, if the cluster is configured correctly, a DC restart will not take the CSV down. Is your shared storage installed on your DC? Did you run
cluster validation before you installed the cluster? We strongly recommend running cluster validation before you build the cluster, and at the same time please install the recommended updates for the 2008 cluster first.
    Recommended hotfixes for Windows Server 2008-based server clusters
    http://support.microsoft.com/kb/957311
I found a similar scenario where a DC restart takes the cluster network name resource offline, but it is for 2008 R2.
    Cluster network name resource cannot be brought online when one of the domain controllers is partly down in Windows Server 2008 R2
    http://support2.microsoft.com/?id=2860142
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
