Hyper-V clustering

Good day to all!
I have a simple question about the following scenario: we have an IBM database server configured as RAID 1 (mirroring) with two 1 TB hard disk drives. I was wondering what will happen if the primary disk fails. Will our server continue to run, and can I simply swap the defective Drive 0 for a new one, or is there anything else I need to do?


Similar Messages

  • Using single SMB share with multiple Hyper-V clusters

    Hello,
    I'm trying to find out if I can use a single SMB share with multiple Hyper-V Clusters. Looking at:
    How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM
    I think it's possible. Since the File Server is going to handle the file locking it shouldn't be a problem.
    Has anyone tried that?
    Thank you in advance!

    Hello,
    I'm not sure that's possible; I base that on this statement: "Assign the share—Assign the share to a virtual machine host or cluster."
    Even if it worked, I wouldn't do it. Why don't you just create multiple shares?
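    Following that suggestion, here is a minimal sketch of creating one share per cluster (all names and accounts are hypothetical; Hyper-V over SMB needs Full Control for the host and cluster computer accounts):
        # One share per Hyper-V cluster; grant the host and cluster
        # computer accounts Full Control on the share.
        New-Item -Path "C:\Shares\Cluster1VMs" -ItemType Directory
        New-SmbShare -Name "Cluster1VMs" -Path "C:\Shares\Cluster1VMs" `
            -FullAccess "CONTOSO\HV-Node1$","CONTOSO\HV-Node2$","CONTOSO\HV-Cluster1$"
        # Mirror the share permissions onto the NTFS ACL (newer SmbShare module):
        Set-SmbPathAcl -ShareName "Cluster1VMs"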

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72 GB RAM, dual CPU, etc.)
    1x Dell MD3220i iSCSI with dual 1 Gb connections to each server (24x 146 GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 expansion array with 12x 2 TB 7.2k drives in RAID 10 - Tier 2 storage for large VMs, files, etc.
    ~25 VMs running all manner of workloads: SQL, Exchange, WSUS, Linux web servers, etc.
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125 MB/s over the dual 1 Gb iSCSI connections to each physical server (we've tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6 TB of 12 TB RAID 10 space).
    We are migrating to 10 Gb server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2.
    Install a dual-port 10 Gb NIC team in each server and converge cluster, live migration, VM and management traffic onto it (with QoS, of course).
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities. Use our existing 2 TB drives for capacity and purchase sufficient SSDs to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity (RDMA, extra 10 Gb NICs, etc.) are out of our reach.
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected I/O for all cluster nodes not currently active. Is that right?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster. In production, how do you balance VMs across 3 CSVs without manually moving them to keep a third of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management, etc.).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected I/O?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance to anyone who can be bothered to read through all that and help me out! I'm sure there are more questions I've forgotten, but those will certainly get us started.
    Lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can of course use a direct SAS connection with a 3-node cluster (or 4-node, 5-node, etc.). It would also be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes stay local, travelling down the SAS fabric, whereas with an SoFS layer added you'd have the same amount of I/O targeting SAS plus Ethernet, with its huge latency compared to SAS, sitting between the requestor and your data on the SAS spindles (I/O wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS boundary). The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you'd have a pair (at least) of SoFS boxes acting as a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favour of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you'd have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from multiple hypervisor nodes goes through the storage fabric (in your case, SAS), and only metadata updates go through the coordinator node over Ethernet. Redirected I/O is used in two cases only: a) there is no SAS connectivity from the hypervisor node (but Ethernet is still present), and b) broken-by-implementation backup software keeps access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for reference:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases where you'd still do this. For example, if you had all-flash and combined spindle/flash LUNs and you knew for sure you wanted some VMs to sit on flash and others (not so I/O-hungry) to stay on "spinning rust". Another case is a many-node cluster: there, multiple nodes basically fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even if ODX is present it's not going to help). Again, this is where SoFS "helps": having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts for only the two SoFS nodes instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (basically feeding DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it does NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly in use.
    4) DPM will put your cluster into redirected mode only for a very short period of time, if at all; Microsoft says redirection is no longer used in 2012. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so data transfer would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS
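    As a concrete illustration of the tiered Storage Spaces plan discussed above, here is a hedged sketch (hypothetical pool name, tier sizes and resiliency; Windows Server 2012 R2 syntax):
        # Pool the JBOD disks, define SSD and HDD tiers, then carve a tiered,
        # mirrored virtual disk to become a CSV. Sizes are illustrative only.
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
        $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
        $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
        New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" `
            -StorageTiers $ssd,$hdd -StorageTierSizes 400GB,6TB `
            -ResiliencySettingName Mirror -WriteCacheSize 10GB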

  • P2V SQL Server 2008 R2 HYPER-V Clustering

    Hi all,
    I have a current SQL Server 2008 environment with the following characteristics:
    1. SERVER A with a 1-node failover cluster (physical server)
    2. 4 shared disks on storage (SAN)
    3. Connectivity is via HBA
    I am supposed to migrate the existing setup into a Hyper-V environment with the following details:
    1. Provision a CSV on the same storage as the existing 4 shared disks
    2. SERVER B (Hyper-V, which will be deployed for a future SQL Server cluster)
    The scenario is that we want to migrate Server A to Server B.
    Please correct my plan, detailed as follows:
    - We want to P2V the existing server into SERVER B (Hyper-V).
    Here are my questions:
    1. Is it possible to attach the VHD produced by the P2V process to CSV storage?
    2. Is it possible to re-map the shared storage volumes from the previous environment into Hyper-V?
    Sorry if my thread is disorganized; I am a novice in this area and need your kind advice on the best-practice way to solve this.
    Please tell me if anything is unclear.
    Best Regards,
    ari

    First question: why do you want to P2V? It does not take that much effort to create a new VM and then use SQL tools to back up and restore the databases into the new environment. That way you are using known and proven tools instead of trying to make everything work from a P2V.
    When you P2V, the process will create a virtual hard drive from each physical drive. If you want to turn your SQL Server into a VM, it does not make sense to keep the storage on pass-through disks. It is better to use VHDs for the storage of a VM.
    Having single node clusters does not make much sense, unless the idea is to immediately add a second node when available. 
    . : | : . : | : . tim
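    As a hedged illustration of the backup/restore route described above (hypothetical instance, database and share names; requires the SQLPS/SqlServer PowerShell module from SQL Server 2012 or later management tools):
        # Back up on the old physical server...
        Backup-SqlDatabase -ServerInstance "SERVERA\SQL2008" -Database "AppDB" -BackupFile "\\fileshare\migration\AppDB.bak"
        # ...then restore into the new VM.
        Restore-SqlDatabase -ServerInstance "SQLVM01\SQL2008" -Database "AppDB" -BackupFile "\\fileshare\migration\AppDB.bak"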

  • Server 2012 Clustered Hosts - How can I place a Hyper-V Guest in the DMZ?

    I have 3 Server 2012 Hyper-V clustered hosts. I've recently been asked to create a VM on which external parties will have local admin rights. I've been resisting this for a variety of (IMO valid) security reasons. What I'm trying to understand is whether it would be possible to build a guest VM that is not part of our domain and put it in our firewall's DMZ zone. That way these folks could be local admins, but there'd be no connection to the internal network.
    In a single-host environment, if I understand things correctly, I'd create an external virtual switch, connect it to a specific physical network card on my host and then connect that card to my switch's DMZ port. But my environment is clustered... does that mean I'd designate a physical network card on all 3 hosts, connect them all to an identically named external virtual switch and plug all 3 into DMZ ports on my firewall? Could I instead plug all 3 into some little rinky-dink 4-port gigabit switch and then plug that into my firewall's DMZ port?

    Hi,
    When your guest VM uses an external vSwitch it behaves like a physical host on that network, so in a DMZ we usually create a dedicated subnet for security reasons. A dedicated NIC is therefore needed; it will be used for the Hyper-V host VLAN settings.
    When considering Hyper-V for server consolidation in a DMZ, it is recommended not to run VMs of vastly differing trust levels on the same physical host in production environments (i.e. do not consolidate all DMZ boxes onto one physical host).
    Instead, the recommendation is to consolidate all the front-end boxes on one physical server and do the same for the back-end, depending on the workloads.
    More information:
    Hyper-V 2008 R2: Virtual Networking Survival Guide
    http://social.technet.microsoft.com/wiki/contents/articles/151.hyper-v-2008-r2-virtual-networking-survival-guide.aspx
    Hyper-V: What are the uses for different types of virtual networks?
     http://blogs.technet.com/jhoward/archive/2008/06/17/hyper-v-what-are-the-uses-for-different-types-of-virtual-networks.aspx
    Understanding Networking with Hyper-V
     http://www.microsoft.com/downloads/details.aspx?FamilyID=3FAC6D40-D6B5-4658-BC54-62B925ED7EEA&displaylang=en&displaylang=en
    VLAN Settings and Hyper-V
    http://blogs.msdn.com/virtual_pc_guy/archive/2008/03/10/vlan-settings-and-hyper-v.aspx
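    On the clustering part of the question: yes, the usual approach is an identically named external vSwitch on each host (identical names are required for live migration), each bound to the NIC cabled to the DMZ. A minimal sketch, with a hypothetical NIC name, run on each of the 3 hosts:
        # Bind an external vSwitch to the DMZ-facing NIC; keep the management
        # OS off this switch so the host itself stays out of the DMZ.
        New-VMSwitch -Name "DMZ-External" -NetAdapterName "NIC4" -AllowManagementOS $false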
    Hope this helps.

  • Storage Spaces + Hyper-V with multiple 1 GbE NICs for storage?

    Hi guys!
    So I just got my private cloud hardware. I actually put in the order before summer, but due to firmware and certification issues on my desired SuperMicro JBODs, delivery was seriously delayed. So much so that I've completely forgotten my networking ideas. I need help/verification. Or at least a URL - most described setups are 10 GbE nowadays... Or even a "not gonna work" :-)
    My setup is supposed to be a 3-JBOD, 2-head-node Storage Spaces/SOFS cluster providing storage to a 4-node Hyper-V cluster. I didn't have the budget for a 10 GbE setup, but got a great price on a lot of 1 GbE NICs. After allocating management, Hyper-V, etc., I have 3x 1 GbE ports left on all Hyper-V and storage servers.
    I think my original plan was to create three subnets and add one NIC from each server to each. And then I guess I imagined some kind of SMB3 magic discovering these paths between Hyper-V and storage, aggregating bandwidth and providing fault tolerance by sprinkling fairy dust. Must have been the heat...
    So now I'm "replanning", and I realize that I'm going to create a failover cluster at the storage level, providing a cluster name and IP. I'm thinking the management subnet where the domain info resides is appropriate, but then what about the other three subnets? I don't want to flood my management subnet with storage traffic, but I do want bandwidth and resilience. Did I make a design error, and how do I make the best of the situation?
    Disclaimer: my previous experience with virtualization clusters is iSCSI SAN and 2008 R2 Hyper-V clusters. Storage Spaces is completely new to me :-)
    And due to overlapping technologies I struggled a bit with placing this thread. Hope I got it right.

    Hello,
    I did not understand how many NICs you have in each host. A Hyper-V cluster with 1 GbE NICs works, as long as you know that it is not 10 GbE.
    This article lays out a complete Hyper-V cluster design in checklist form. I think you should work through this list for some further ideas:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    Sorry that I can't give a better answer, but I lack information about your environment.
    Regards,
    Thomas
    Thomas Hanrath [MCT | Regional Lead Germany] |
    http://www.hanrath.de
    Microsoft Learning Blog |
    http://blog.microsoftlearning.de
    MCSE | Private Cloud
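    For what it's worth, the "SMB3 magic" in the original plan is a real feature: SMB Multichannel discovers the per-subnet paths between file server and Hyper-V hosts automatically and aggregates them for bandwidth and fault tolerance, with no NIC teaming required. A hedged sketch with hypothetical interface names and addresses:
        # Give each of the three storage NICs an address in its own subnet...
        New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 10.10.1.11 -PrefixLength 24
        New-NetIPAddress -InterfaceAlias "Storage2" -IPAddress 10.10.2.11 -PrefixLength 24
        New-NetIPAddress -InterfaceAlias "Storage3" -IPAddress 10.10.3.11 -PrefixLength 24
        # ...then, after driving some SMB traffic, confirm all paths are in use:
        Get-SmbMultichannelConnection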

  • Server 2008 Hyper-V Failover Cluster Error on Domain Controller Reboot

    I am pretty new to Hyper-V virtualization, but I have 2 Hyper-V clusters, each with 2 nodes and a SAN, plus 1 physical domain controller for failover cluster management and 1 virtual domain controller as backup. All was running well, no issues. I installed Windows updates on the physical DC and, upon reboot, got error 5120 on cluster 2 that says "Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_CONNECTION_DISCONNECTED(c000020c)'. All I/O will temporarily be queued until a path to the volume is reestablished." It pointed to the 2nd node in that cluster as being the issue, but when I look at that node it is online and healthy, so I don't understand why the error was triggered, and whether, if the DC went down in a failure, that node would permanently lose access to the CSV.
    Appreciate any help anyone can provide.

    Hi mtnbikediver,
    In theory, if the cluster is configured correctly, a DC restart will not take the CSV down. Is your shared storage installed on your DC? Did you run cluster validation before you installed the cluster? We strongly recommend running cluster validation before you build the cluster, and please install the recommended updates for 2008 clusters first.
    Recommended hotfixes for Windows Server 2008-based server clusters
    http://support.microsoft.com/kb/957311
    I found a similar scenario where a DC restart takes the cluster network name resource offline, but it is for 2008 R2.
    Cluster network name resource cannot be brought online when one of the domain controllers is partly down in Windows Server 2008 R2
    http://support2.microsoft.com/?id=2860142
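    The recommended validation can also be run from PowerShell; a minimal sketch with hypothetical node names:
        # Validate the full cluster configuration (storage, network, AD, etc.)
        # from any node with the Failover Clustering tools installed:
        Test-Cluster -Node "HVNode1","HVNode2"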
    I’m glad to be of help to you!

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now, every time I attempt to use SCVMM to live migrate a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine <virtual machine name> which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error appeared because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager). However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of my VM or on its replication. Live migrations still occur successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day occur and break replication of my VMs between both clusters.
    I have searched, and searched, and searched, and I cannot find any mention, in official or unofficial Microsoft channels, of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live Migration together with Hyper-V Replica is supported on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because the Hyper-V Replica Broker role (in a cluster) is able to detect, receive and keep track of the VMs and their synchronization. The replication configuration of a VM follows the VM itself.
    If you live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should really be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM, and they can confirm that live migration is supported - and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Choosing the correct clustering method

    Hi, I currently have a small 3-server environment (Win 2012): 1 PDC and 2 servers running Hyper-V. I have 8 or so Hyper-V machines on one of the servers and use Hyper-V Replica to replicate all of them to the other server. I would like to upgrade this to something that provides hot migration/failover for the VMs. I was considering having a storage cluster with 2 machines for storage and a VM cluster with 2 machines for migration. My confusion is what to cluster. Do I cluster the physical machines, the VMs, or both? Note that SQL Server is running on one of the VMs.
    Lee

    "if I take 2 servers with lots of storage I can cluster the to provide highly available file storage that I could then provide as shared storage for the Hyper-V machines"
    Yes, that can be done with third party software (slog, datacore, starwind). It will mirror the storage between the two Hyper-V hosts and present the storage back to the Hyper-V hosts as shared storage for a cluster.  Normally, the storage is totally
    separate from either machine - SAN or shared external storage - but these companies have provided a niche solution that will allow for a supported Hyper-V clustering solution.
    "would the host Hyper-V machines need to be clustered, or just the VMs or both"
    Depends on what you want to do. Any of those is possible. Clustering at the Hyper-V level ensures that if a host fails, the VM is restarted on another node in the cluster; think of this as HA at the operating-system level. Clustering at the VM level ensures that should the VM or the application on it fail, the application is automatically moved to another node in the cluster; think of this as HA at the application level. Clustering at both levels provides both kinds of HA. Clustering at the VM (application) layer provides an environment where recovery of the application is faster. When an application such as SQL is clustered, it is actually running on two or more nodes in the cluster. Should something happen on the active SQL node, database ownership is simply transferred to another SQL instance in the cluster that is already running, providing a relatively quick recovery. If the Hyper-V hosts are clustered and you have a single VM running SQL, the VM will restart on another node in the cluster. Since the operating system and SQL have to start (a reboot), it will take a bit longer for the SQL instance to come back to life. That's why I say you need to figure out what sort of recovery times you need in your environment and then provide the level of availability that you need.
    .:|:.:|:. tim

  • Hyper-v Failover Cluster management via powershell

    Hi
    We are looking at having a management server act as a proxy for managing a couple of Hyper-V clusters using CSV. We plan to do management using PowerShell commands.
    We create a session to one of the hosts in the cluster and execute commands using Invoke-Command. The cluster verbs fail with the following warning.
    WARNING: If you are running Windows PowerShell remotely, note that some failover clustering cmdlets do not
    work remotely. When possible, run the cmdlet locally and specify a remote computer as the target. To run the
     cmdlet remotely, try using the Credential Security Service Provider (CredSSP). All additional errors or
    warnings from this cmdlet might be caused by running it remotely.
    What is the recommended setup for using the FailoverClusters cmdlets this way? We want a single management server that acts as a proxy for all servers, clustered or not.
    Also, is there a document that describes the various operations done via Failover Cluster Manager and the corresponding PowerShell commands (or sets of commands)?
    Thanks
    /Jd

    Regarding the Stop action from Failover Cluster Manager, Eric, I understand your point. But when I do shutdown from Failover Cluster Manager, the VM shuts down as expected even when the setting is set to Save.
    I was very specifically talking about the Stop-ClusterGroup cmdlet, not any command issued in Failover Cluster Manager. But, well, yeah, if you tell a VM to shut down, it shuts down. I don't know why you'd expect anything different to happen. If you're looking
    for the equivalent to Stop-ClusterGroup inside Failover Cluster Manager, it's not called "Shut Down". You can use "Stop Role" on the "More Actions" menu for the VM. You can also find the configuration object (usually named in the format of "Virtual Machine
    Configuration XXX") and take it offline.
    I tested a number of times after your first post, and Stop-ClusterGroup does what the Cluster-Controlled Action is set to every single time for me.
    I could only make educated guesses at the underlying mechanics of FCM and PowerShell's cluster cmdlets, but the stand-out difference is that FCM has no method to operate in a double-hop situation at all, while PowerShell does. You only encounter these difficulties
    with PowerShell in that second hop. The question you're asking: "it would be great to know how Failover Cluster Manager works without this setup ?" is an apples-to-oranges comparison.
    This particular sentence of yours sort of changes the overall parameter of your question:
    "... so our automation works..."
    I was under the impression you were setting up this double-hop because you wanted admins to manually execute PowerShell cmdlets against your cluster from a single controlled location.
    If automation is your goal, do it right from the cluster. I obviously don't know your entire wishlist and it's none of my business, but this double-hop situation may not be ideal.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped into this problem:
    I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate, or shut down, move and start, one of the guest nodes on another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
    We have a ticket logged with Microsoft, more of a box-ticking exercise to reassure the business we're doing the needful. Anyway, they had us...
    Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0 to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
    There was no change in the error: move a guest to another Hyper-V node and the failover cluster, well, fails, with the following event IDs on the node that fails...
    1564 - File share witness resource 'xxxx' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster.
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed.
    1573 - Node xxxx failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available.
    The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails fail over to the remaining node...
    So we then:
    Removed 3 of the NICs from one of the 4-NIC teams, leaving a single NIC in the team (no change)
    Removed one NIC from the LACP group on each Hyper-V host
    Created a new virtual switch using this simple trunk-port NIC on each Hyper-V host
    Moved the DAG nodes to this vSwitch
    The failover cluster then works as expected, with guest VMs running on separate Hyper-V hosts, when on this vSwitch with a single NIC.
    Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue resolved once we are able to find the cause of the above mentioned issue", which we have now done, as in: teaming is the cause... argh.
    But after talking, they are now escalating internally.
    The other thing we are doing is building new guests and installing Exchange 2010 SP3, to see whether an Exchange 2010 SP3 DAG has the same issue, as people indicate that it perhaps does not have the same problem.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    A Cloud Mechanic.
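    A hedged follow-up to the workaround above: since the single-NIC trunk vSwitch was stable, a possible next test is recreating the VM-facing team without LACP, e.g. switch-independent teaming with Hyper-V port load balancing (member names taken from the listing above; this is a suggestion, not a confirmed fix):
        # Recreate the VM team as SwitchIndependent instead of LACP and
        # re-test guest cluster failover across hosts.
        Remove-NetLbfoTeam -Name "Virtual Machine Network 1" -Confirm:$false
        New-NetLbfoTeam -Name "Virtual Machine Network 1" `
            -TeamMembers "Ethernet","Ethernet 9","Ethernet 7","Ethernet 12" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort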

  • Guest Cluster error in Hyper-V Cluster

    Hello everybody,
    in my environment I have an issue with guest failover clusters (Exchange, file server) when performing a live migration of one virtual cluster node: the cluster group goes offline.
    The environment is the following:
    2x Hyper-V Clusters: Hyper-V-Cluster1 and Hyper-V-Cluster2 (Windows Server 2012 R2) with 5 Nodes per Cluster
    1x Scaleout Fileserver (Windows Server 2012 R2) with 2 Nodes
    1x Exchange Cluster (Windows Server 2012 R2) with EX01 VM running on Hyper-V-Cluster1 and EX02 VM running on Hyper-V-Cluster2
    1x Fileserver Failover Cluster (Windows Server 2012 R2) with FS01 VM running on Hyper-V-Cluster1 and FS02 VM running on Hyper-V-Cluster2
    The physical networks on the Hyper-V nodes are redundant, with 2x 10 Gb/s uplinks to 2x physical switches for VMs in an LBFO team:
    New-NetLbfoTeam -Name 10Gbit_TEAM -TeamMembers 10Gbit_01,10Gbit_02 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
    The SMB 3 traffic runs on 2x 10 Gb/s NICs without NIC teaming (SMB Multichannel).
    SMB is used for live migrations.
    The VMs for clustering were installed according to the technet guideline:
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Because my Hyper-V uplinks are already redundant, I am using one NIC inside the VM.
    As I understand it, there is no advantage to using two NICs inside the VM as long as they are connected to the same vSwitch.
    Now, when I want to perform hardware maintenance, I have to live migrate the EX01 VM from Hyper-V-Cluster1-Node-1 to Hyper-V-Cluster1-Node-2.
    EX02 VM still runs untouched on Hyper-V-Cluster2-Node-1.
    At the end of the live migration I see error 1135 (source: FailoverClustering) on the EX01 VM, which says that EX02 was removed from the failover cluster and that I should check my network.
    The Exchange cluster group is offline after that event, and I have to bring it online again manually.
    Any ideas what can cause this behavior?
    Thanks.
    Greetings,
    torsten

    Hello again,
    I found the cause and the solution :-)
    In the article here: http://technet.microsoft.com/en-us/library/dn440540.aspx
    is the description of my cluster failure:
    ########## relevant part from article #######################
    Protect against short-term network interruptions
    Failover cluster nodes use the network to send heartbeat packets to other nodes of the cluster. If a node does not receive a response from another node for a specified period of time, the cluster removes the node from cluster membership. By default, a guest
    cluster node is considered down if it does not respond within 5 seconds. Other nodes that are members of the cluster will take over any clustered roles that were running on the removed node.
    Typically, during the live migration of a virtual machine there is a fast final transition when the virtual machine is stopped on the source node and is running on the destination node. However, if something causes the final transition to take longer than
    the configured heartbeat threshold settings, the guest cluster considers the node to be down even though the live migration eventually succeeds. If the live migration final transition is completed within the TCP time-out interval (typically around 20 seconds),
    clients that are connected through the network to the virtual machine seamlessly reconnect.
    To make the cluster heartbeat time-out more consistent with the TCP time-out interval, you can change the
    SameSubnetThreshold and CrossSubnetThreshold cluster properties from the default of 5 seconds to 20 seconds. By default, the cluster sends a heartbeat every 1 second. The threshold specifies how many heartbeats to miss in succession
    before the cluster considers the cluster node to be down.
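    A minimal sketch of that change (run once against the guest cluster; 20 missed heartbeats at the default 1-second interval roughly matches the 20-second TCP time-out):
        # Raise the missed-heartbeat thresholds from the default of 5 to 20:
        (Get-Cluster).SameSubnetThreshold = 20
        (Get-Cluster).CrossSubnetThreshold = 20
        # Verify the heartbeat settings:
        Get-Cluster | Format-List *SubnetDelay,*SubnetThreshold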
    After changing both parameters in the failover cluster as described, the error is gone.
    Greetings,
    torsten

  • Assign management ip address with SCVMM 2012 R2 for hyper-v converged network?

    Hi,
    I am setting up a converged network for our Hyper-V clusters using vNICs for the different kinds of network traffic, including management, live migration, cluster/CSV, Hyper-V, etc.
    The problem is: how do I assign the Hyper-V hosts a management IP address? They need a connection on the management network for SCVMM to manage them in the first place. How do I take the existing management IP address that is directly assigned to the host and transfer it to the new vNIC so that SCVMM keeps management of it? Kind of a chicken-and-egg situation. I thought about assigning a temporary IP address to the host initially, but I'm worried that this will cause problems, as the host would then have 2 default gateways configured. How have others managed this scenario?
    Thanks
    Microsoft Partner

    Rule of thumb: Use one connected network for your Fabric networks (read the whitepaper), and use VLAN based networks for your tenant VMs when you want to associate VM Networks with each VLAN.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
    We don't have tenants as such, as this is an environment on a private company LAN for the sole use of the company's virtual machines.
    What I have so far:
    I created "one connected network" for Hyper-V-Virtual-Machine traffic.
    Unchecked "Allow new VM networks created on this logical switch to use network virtualization"
    Checked "Create a VM network with the same name to allow vms to access this logical network directly"
    This logical network has one site called UK.
    Within this site I have defined all of the different VLANS for this site.
    Created IP pools for each VLAN subnet range.
    I hope I understand this correctly. Started reading the whitepaper from cover to cover now.
    Microsoft Partner
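    For the chicken-and-egg problem itself, one common approach is to do the cutover locally on the host (console/iLO rather than over the network, so a connectivity blip can't cut you off) before handing the host to VMM. A hedged sketch; all switch, VLAN and address values are hypothetical:
        # Build the converged switch without a default management vNIC...
        New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "TeamNIC" -AllowManagementOS $false -MinimumBandwidthMode Weight
        # ...add a dedicated management vNIC and tag its VLAN...
        Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
        # ...then move the host's existing management IP onto it, keeping a single default gateway:
        New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.0.10.21 -PrefixLength 24 -DefaultGateway 10.0.10.1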

  • Hyper-V DR Scenario

    I have a customer who has a production Hyper-V cluster managed by VMM (all good). Their DR site has separate infrastructure which is not managed by VMM in any way.
    Since the storage between the live and DR sites is replicated at the storage level, can we use Hyper-V Replica, for example, to replicate just the configuration to the other cluster rather than the full VHDX?
    I can achieve this using Orchestrator, but reliance on another product is not ideal.
    Please don't forget to mark posts as helpful or answers.
    Inframon Blogs |
    All Things ConfigMgr

    Yes, it did a little, thanks :)
    What if I had both Hyper-V clusters managed by a single VMM instance? Could we configure failover from one cluster to another?
    Please don't forget to mark posts as helpful or answers.
    Inframon Blogs |
    All Things ConfigMgr
    You don't need a single VMM. Yes, you can replicate a VM from one cluster to another. See for reference:
    Why is the "Hyper-V Replica Broker" required?
    http://blogs.technet.com/b/virtualization/archive/2012/03/27/why-is-the-quot-hyper-v-replica-broker-quot-required.aspx
    The following example will be used through the rest of the article:
    Cluster-P – Failover Cluster in city 1
    P1, P2, P3 (.contoso.com) – names of the cluster nodes on a cluster Cluster-P
    P-Broker-CAP.contoso.com – the client access point of the broker on Cluster-P
    VirtualMachine_Workload – the name of the virtual machine running on Cluster-P         
    Cluster-R – Failover Cluster in city 2
    R1, R2 (.contoso.com) – names of the cluster nodes on the Cluster-R
    R-Broker-CAP.contoso.com – the client access point of the broker on Cluster-R
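    Using the article's example names, enabling replication for a clustered VM towards the other cluster's broker looks roughly like this (a sketch; Kerberos over port 80 assumed):
        # Point the VM at the destination cluster's Replica Broker client access point:
        Enable-VMReplication -VMName "VirtualMachine_Workload" -ReplicaServerName "R-Broker-CAP.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
        Start-VMInitialReplication -VMName "VirtualMachine_Workload"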
    Good luck :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Can't change Run as account in the Host Access tab for a clustered host

    Gentlemen, we have a bunch of Hyper-V clusters added to VMM, and most of them have a Run As account assigned to them which we don't want to keep (they were added with a domain admin user).
    However, I'm not able to change it on the clustered host nodes. I can change it on non-clustered hosts, no issue.
    I've tried it and, of course, removing and re-adding the clusters with another account also fixes the issue, but there is a good number of clusters and it is all production.
    Any other ideas?
    I could also rename the Run As account and make sure it has local admin rights on all nodes. Is that safe? Any gotchas?
    I've tried PowerShell, but I couldn't find a command to change the Run As account for a clustered host (it's read-only).
    Thank you,
    JF
    MCITP, MCSE, MCTS

    You're right: once the hosts are clustered, this option is greyed out.
    The only option is to remove the cluster from VMM and add it again with the right Run As account.
    I have not tested renaming the Run As account.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
