Hyper-V Cluster Migration - Options

Hello,
I have a Server 2012 4-node cluster that I need to migrate to new hardware along with all the VMs. The new hardware will be a 2-node Server 2012 R2 cluster. Can I please have some advice on my migration options? I am using iSCSI SAN storage, but I have limited capacity available to present to the new cluster, as it is all being used by the current one.
I was hoping I could add the new hosts to the existing cluster, but mixing 2012 and 2012 R2 nodes does not appear to be supported. So I think my only options now are either the Copy Cluster Roles Wizard or the export/import process, described here: http://technet.microsoft.com/en-us/library/dn486792.aspx
If I use the Copy Cluster Roles Wizard, should I set up the storage on the new hosts (i.e. make it available via the iSCSI initiator) before I create the cluster? I just have concerns about giving access to the same CSV storage that is currently live on the existing cluster. I am using SCVMM 2012 for management. Many thanks. Carl.

Hi Alexey, thanks for your reply and sorry for my delay. I ended up setting up the new cluster, giving it some storage and live migrating the VMs. This was taking a long time, so I basically completed what you explained above: I used the Copy Cluster Roles Wizard, turned off the VMs on the old cluster, brought the CSV online on the new cluster, then turned the machines on. It was actually pretty much issue free, which was a nice surprise. Yes, there was also a chunk of downtime, but luckily I managed to get approval.
Thanks, Carl.
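For reference, the cutover Carl describes maps roughly onto the FailoverClusters and Hyper-V cmdlets as below. This is only a sketch: the cluster, disk and VM names are placeholders, not taken from the thread.

    Import-Module FailoverClusters, Hyper-V
    # Shut down the clustered VMs on the old cluster
    Get-ClusterGroup -Cluster "OldCluster" |
        Where-Object { $_.GroupType -eq "VirtualMachine" } |
        Stop-ClusterGroup
    # Bring the copied CSV disk online on the new cluster
    Start-ClusterResource -Cluster "NewCluster" -Name "Cluster Disk 1"
    # Power a VM back on from a node of the new cluster
    Start-VM -ComputerName "NewNode1" -Name "VM01"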

Similar Messages

  • Server 2012 R2 Hyper-V Cluster, VM blue screens after migration between nodes.

    I currently have a two-node Server 2012 R2 Hyper-V cluster (fully patched) with a Windows Server 2012 R2 iSCSI target.
    The VMs run fine all day long, but when I try to do a live/quick migration, the VM blue screens after about 20 minutes. The blue screen reports a "Critical_Structure_Corruption".
    I'm beginning to think it might be down to the CPUs, as one system has an E5-2640 v2 and the other has an E5-2670 v3. Should I be able to migrate between these two systems with these types of CPUs?
    Tim

    Sorry Tim, is that all 50 blue screening if live migrated?
    Are they all on the latest Integration Services? Does a cluster validation complete successfully? Are the hosts patched to the same level?
    The fact that they boot fine if you power them off and then migrate them does point to a processor incompatibility: the memory BIN file is not accepted on the new host.
    It's a bit of a long shot, but the only other thing I can think of off the top of my head, if the compatibility option is checked, is to check the location of the BIN file while the VM is running, to make sure it is in the same place as the VHD/VHDX in the CSV storage where the VM is located, and not somewhere on the local host like C:\ProgramData\..., which would stop it being migrated when the VM is live migrated.
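    If mismatched CPU generations turn out to be the cause, enabling processor compatibility mode should allow the live migration; a minimal sketch ("VM01" is a placeholder):

        Import-Module Hyper-V
        Stop-VM -Name "VM01"    # the VM must be powered off to change the setting
        Set-VMProcessor -VMName "VM01" -CompatibilityForMigrationEnabled $true
        Start-VM -Name "VM01"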
    Kind Regards
    Michael Coutanche

  • Hyper-V Failover Cluster Migration

    We currently have two Dell VRTX boxes, one in production and one not (yet).  The new one is running Server 2012 R2 and the production system is running Server 2012.  I have read all of the documentation on migrating the cluster to the new servers, but what I want to do is a little different: I want to upgrade the existing blade servers to R2, and I'm not actually worried about migrating the VMs themselves.  Does anyone know if it is possible to move the cluster roles to the new server, upgrade the existing servers to R2, then move the roles back and start up the VMs again?  The VM storage wouldn't change at all, and we would obviously be down while we upgrade the current servers to R2.  The goal is to avoid moving the terabytes of data to a temporary location just to move it back; it would take longer to transfer the data than to install the new OS.
    Any thoughts or comments would be appreciated.  Thank you.

    Hi RTat10,
    You can migrate your 2012 cluster VMs to the 2012 R2 cluster, then upgrade your current 2012 nodes and add them to the 2012 R2 cluster, and finally evict the temporary 2012 R2 nodes (see the sketch below). You can refer to the following article to perform the migration:
     Migrate Cluster Roles to Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn530779.aspx
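    The node swap itself would look roughly like this with the FailoverClusters cmdlets (a sketch only; node and cluster names are placeholders):

        Import-Module FailoverClusters
        # After rebuilding an old node with Server 2012 R2, join it to the new cluster
        Add-ClusterNode -Cluster "NewCluster" -Name "RebuiltNode1"
        # Once the roles are moved back, evict a temporary node
        Remove-ClusterNode -Cluster "NewCluster" -Name "TempNode1"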
    More information:
    1. TechEd session - http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B331#fbid=?fbid
    2. “Hyper-V Cluster Using Cluster Shared Volumes (CSV) Migration” - http://technet.microsoft.com/en-US/library/dn486822.aspx
    3. “PowerShell Script for Automated Install of Hyper-V Integration Services” -
    http://social.technet.microsoft.com/wiki/contents/articles/14295.powershell-script-for-automated-install-of-hyper-v-integration-services-in-a-vm-running-on-windows-server-2012-with-hyper-v-v-3-0-role-using-powershell-remoting.aspx
    I’m glad to be of help to you!

  • Windows Server 2012 Datacenter Hyper-V Cluster -- Failed to validate Operating System Installation Option?

    Hi, I have a 4-node Windows Server 2012 Hyper-V cluster. When I try to run a cluster validation report, everything else is fine but it fails at the "Validate Operating System Installation Option" step. I did some research but couldn't really find any solution. Does anyone know how to pass this test? Thanks.
    Here's the error I get when I run the test:
    An error occurred while executing the test.
    The operation has failed. An error occurred while getting the operating system installation option for node "server1"

    Hi JasonLiu2002,
    Please post the original error information; the current description is too general to determine where the issue may be. Please also provide more information about your server configuration. You can refer to the following article to prepare your cluster environment first.
    Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    When preparing the new cluster on Server 2012, please also install the recommended hotfixes and updates for Windows Server 2012-based failover clusters:
    http://support.microsoft.com/kb/2784261
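    As a workaround while you investigate, validation can be rerun with the failing test skipped, for example (node names are placeholders):

        Import-Module FailoverClusters
        Test-Cluster -Node "server1","server2","server3","server4" `
            -Ignore "Validate Operating System Installation Option"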
    I’m glad to be of help to you!

  • Microsoft Virtual Machine Converter VMware To Hyper-V Cluster

    I'm not sure if this should technically be in the clustering section, but I have just moved from SCVMM 2012 SP1 to 2012 R2 and I kind of miss the built-in converter tool. What I used to do when converting VMware to Hyper-V was uninstall VMware Tools and then do a physical-to-virtual conversion of the VMware virtual machine; SCVMM would handle copying the virtual machine while it was online and register it in our Hyper-V cluster. Now, the only thing I could come up with is the Microsoft Virtual Machine Converter, but it seems rather limited and doesn't appear to have an option to import to a cluster. So is the only option to convert it over to Hyper-V as if it were a local machine and then run another export/import process to get it into the cluster? I tried to point it at a CSV, and while it copied the disk over, it registered the virtual machine in ProgramData (the default location). This obviously causes issues when trying to make the VM highly available. Does anyone have a suggested process for the best way to go about this? Thank you in advance for your time!

    So no matter what option I choose, V2V always shuts down the source VM during the conversion process. On the other hand, if I use the old method of uninstalling VMware Tools manually and then doing a P2V instead, the source VM gets to stay online for that type of process. Then I just have to migrate it to become highly available, and that seems to accomplish what I want. The only annoying part is that I couldn't run it on my Windows 8.1 Pro workstation, as it requires the BITS feature to be installed and that only appears to be available on server editions (correct me if there is a way to get it on 8.1). I think the documentation says it should run on server editions only, but V2V runs fine from 8.1 since it doesn't need BITS, being an offline process.
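    One way to avoid the ProgramData registration is to import the converted VM in place on the CSV and then cluster it; a sketch, assuming the converted files already sit on the CSV (the path, VM and cluster names are placeholders):

        Import-Module Hyper-V, FailoverClusters
        # Register the VM in place from its configuration file on the CSV
        Import-VM -Path "C:\ClusterStorage\Volume1\VM01\Virtual Machines\<GUID>.xml" -Register
        # Make the registered VM highly available
        Add-ClusterVirtualMachineRole -VMName "VM01" -Cluster "HVCluster"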

  • Hyper-V cluster: Unable to fail VM over to secondary host

    I am working on a Server 2012 Hyper-V Cluster. I am unable to fail my VMs from one node to the other using either LIVE or Quick migration.
    A force shutdown of VMHost01 will force a migration to VMHost02. And once we are on VMHost02 we can migrate back to VMHost01, but once that is done we can't move the VMs back to VMHost02 without a force shutdown.
    The following error pops up:
    Event ID: 21502 The Virtual Machine Management Service failed to establish a connection for a Virtual machine migration with host.... The connection attempt failed because the connected party did not properly respond after a period of time, or the established
    connection failed because connected host has failed to respond (0X8007274C)
    Here's what I noticed:
    VMMS.exe is running on VMHost02; however, it is not listening on port 6600. I confirmed this after a reboot by running netstat -a. We have tried setting this service to delayed start.
    I have checked the firewall rules and anti-virus exclusions, and they are correct. I have not run the cluster validation test yet, because I'll need to schedule a period of downtime to do so.
    We can start/stop the VMMS.exe service just fine and without errors, but I am puzzled as to why it will not listen on port 6600 anywhere. Does anyone have any suggestions on how to troubleshoot this particular issue?
    Thanks,
    Tho H. Le

    Just ran into the same issue in a 16-node cluster being managed by VMM. When trying to live migrate VMs using the VMM console, the migration would fail with Error 10698, and Failover Cluster Manager would report error code 0x8007274C.
    + Validated the Live Migration and cluster networks. Everything checked out.
    + Looked in Hyper-V Manager: migrations are enabled and the correct networks are displayed.
    + Found this blog post, which mentions the Virtual Machine Management Service not listening on port 6600:
    http://blogs.technet.com/b/roplatforms/archive/2012/10/16/shared-nothing-migration-fails-0x8007274c.aspx
    Ran the following from an elevated command line:
    netstat -ano | findstr 6600
    Node 2 did not return anything.
    Node 1 returned the correct output:
    TCP    10.xxx.251.xxx:6600    0.0.0.0:0    LISTENING    4540
    TCP    10.xxx.252.xxx:6600    0.0.0.0:0    LISTENING    4560
    Set the Hyper-V Virtual Machine Management service to delayed start and restarted the service: no change.
    Checked the event logs for Hyper-V VMMS and noted the following events: the VMMS listener started for the Live Migration networks, and then shortly afterwards the listener stopped.
    Removed the system from the cluster and restarted: no change.
    Checked this host by running gpedit.msc: could not open the console (permission error).
    Tried to run a GPO refresh (gpupdate /force), but an error was returned saying LocalGPO could not apply registry settings, and that Group Policy processing would not continue until this was resolved.
    Checked the local group policy folder on node 2 and it was corrupt: C:\Windows\System32\GroupPolicy\Machine\reg.pol showed 0 KB for the size.
    Copied the local policy folders from node 1 to node 2, and was then able to refresh the GPOs. Restarting the VMMS service still did not change the status of the ports.
    Restarted the server, added the Live Migration networks back into Hyper-V Manager, and now netstat reports that the VMMS service is listening on 6600.
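    On 2012 R2 hosts the same check can also be done from PowerShell; a sketch (run elevated on each node):

        Get-NetTCPConnection -LocalPort 6600 -State Listen -ErrorAction SilentlyContinue |
            Select-Object LocalAddress, LocalPort, OwningProcess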

  • Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

    Hello!
    Please excuse me if you think my question is silly, but before deploying something in a production environment I'd like to dot the i's and cross the t's.
    1) Suppose there's a two-node cluster with the Hyper-V role that hosts a number of highly available VMs.
    If both cluster nodes are up and running, an administrator can initiate a planned failover, which will transfer all VMs, including their system state, to the other node without downtime.
    If a cluster node goes down unexpectedly, an unplanned failover fires that transfers all VMs to the other node WITHOUT their system state. As far as I understand, this can lead to some data loss.
    http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
    If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
    2) Suppose there's a two-node cluster with a clustered Exchange installation: in case one node crashes, the other takes over without any data loss.
    Conclusion: it's more disaster resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in a Hyper-V cluster.
    Is that correct?
    Thank you in advance,
    Michael

    "And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
    First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs.  By default, Hyper-V is set to move a maximum of 2 VMs at a time.  You can change that, but it would be foolish to increase that value if
    all you have is a single 1GE network.  The other VMs will be queued.
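    For reference, that per-host limit can be inspected and adjusted; a sketch (only raise it if the migration network has the bandwidth to carry more):

        Import-Module Hyper-V
        Get-VMHost | Select-Object Name, MaximumVirtualMachineMigrations
        Set-VMHost -MaximumVirtualMachineMigrations 2   # 2 is the default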
    Secondly, you are getting that amount of time confused with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate.  (Highly unlikely, even
    on a 1 GE NIC).  During that 13 minutes the VM takes to live migrate, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
    on the original host.
    Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster.  The VM is doing its work reading and writing to its data files.  At that instance in time when the host
    fails, the VM may have some unwritten data buffers in memory.  Since the host fails, the VM crashes, losing whatever it had in memory at the instant in time.  It is not going to lose any 13 minutes of data.  In fact, if you have an application
    that is processing data at this volume, you most likely have something like SQL running.  When the VM goes down, the cluster will automatically restart the VM on another node of the cluster.  SQL will automatically replay transaction logs to recover
    to the best of its ability.
    Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
    The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
    "Clustered applications DO NOT STOP working during unplanned failover (so there is no recovery time)."
    Not exactly true.  Let's use SQL as an example again.  When SQL is installed in a cluster, you install at a minimum one instance, but you can have multiple instances.  When the node on which the active instance is running fails, there is
    a brief pause in service while the instance starts on the other node.  Depending on outstanding transactions, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
    Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time. 
    And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
    clients may be unable to access for up to a minute or so.
    However, the amount of data potentially lost is quite dependent upon the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential writing applications will be dependent upon things
    like disk cache held in memory - large caches means higher probability of losing data.  No disk cache means there is not likely to be any loss of data.
    .:|:.:|:. tim

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in the Fail Over Cluster Manager on both clusters and started replicating some VMs using Hyper-V
    Replica.
    Now, every time I attempt to use SCVMM to do a live migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine <virtual machine name> which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance mode, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error was because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of my VM or on its replication; live migrations still complete successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day break replication of my VMs between both clusters.
    I have searched, and searched, and searched, and I cannot find any mention in official or unofficial Microsoft channels of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live migration of a replica-enabled VM is supported on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because the Hyper-V Replica Broker role (in a cluster) is able to detect, receive and keep track of the VMs and their synchronizations. The replication configuration follows the VMs themselves.
    If you live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will, as you can see, give you an error, though it should rather be an informative message.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM and they can confirm that live migration is supported and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
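    If you want to double-check after a live migration, the replication state and health can be read per VM; a sketch ("VM01" is a placeholder):

        Import-Module Hyper-V
        Get-VMReplication -VMName "VM01"       # replication configuration and state
        Measure-VMReplication -VMName "VM01"   # current replication health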
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • 2012 R2 Hyper-V Cluster two-node design with an abundance of 1 Gbps NICs and FC storage

    Hello All,
    First post so please be gentle!
    We are currently in the process of building/testing an upgrade to a two-node 2012 R2 Hyper-V cluster.
    Two hosts built with Datacenter 2012 R2 will host approx. 30 VMs.
    Shared storage will be a fault-tolerant FC connection.
    Ten (yes, ten!) 1 Gbps NICs are available, Intel i350.
    I'm trying to decide between teaming interfaces using native LBFO in the 2008 'style' of un-converged networking, or teaming up most interfaces and using QoS. I can find numerous examples of using 10 Gbps NICs and converged networking; however, 10 Gbps networking isn't an option right now.
    Recommendations appreciated.
    Thanks

    Hi Sir,
    >> Trying to decide between teaming interfaces using native LBFO in the 2008 'style' of un-converged networking, or teaming up most interfaces and using QoS.
    The following link details the teaming configurations and their applicable scenarios (Server 2012):
    http://www.aidanfinn.com/?p=14039
    Also please refer to this document for 2012 R2 LBFO:
    http://www.microsoft.com/en-us/download/details.aspx?id=40319
    In Server 2012 R2 there is a new "Dynamic" setting for the load balancing mode, and it is the recommended choice.
    If you can accept a 1 Gbps maximum bandwidth for each VM, I would suggest the LBFO mode Switch Independent / Dynamic / None (all adapters active); a sketch follows below.
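    A sketch of creating such a team (the team and NIC names are placeholders):

        New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4","NIC5","NIC6" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic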
    Best Regards,
    Elton Ji

  • Hyper-V cluster with core switch downtime... what to do?

    Is there a way to essentially "pause" the Hyper-V cluster and keep things running, but NOT attempt to fail anything over for any reason?
    We have one ProCurve 5412zl core switch with two c7000 enclosures. In each c7000 enclosure there are two switches that connect all the blade servers within the enclosure. Those two switches are interconnected internally, so they can communicate within the enclosure.
    So if the core switch goes down, the Hyper-V servers in the same c7000 enclosure can still communicate, but they will be separated from the others in the other enclosure.
    We have 4 Hyper-V servers in one enclosure and 3 in another. I'm wondering what will happen if I disconnect the core switch (I need to reboot it).
    How can I avoid having to shut everything down, and just tell the Hyper-V cluster not to do anything when the network is lost?

    Hi Quadrantids,
    "to essentially "pause" the hyper-v cluster and keep things running but do NOT attempt to failover anything for any reason"
    Based on my understanding, you need to keep the cluster running within one c7000 enclosure. In other words, before you cut the connection between the c7000 enclosures, you can migrate the VMs into the same enclosure to keep them running (I assume the storage will not be affected by the restart); see the sketch below.
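    A sketch of draining and pausing the nodes in the other enclosure before the switch reboot (the node name is a placeholder):

        Import-Module FailoverClusters
        Suspend-ClusterNode -Name "HV05" -Drain -Wait    # pause the node and live migrate its VMs off
        # ...after the core switch is back up:
        Resume-ClusterNode -Name "HV05" -Failback Immediate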
    Best Regards
    Elton Ji

  • Hyper-V Cluster Name offline

    We have a 2012 Hyper-V cluster whose cluster name is not online, and we can't migrate VMs to the other Hyper-V host.  We see these event errors in Failover Cluster Manager:
    The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Name
    Cluster Group
    Network Name
    The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Group
    The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    ACMAIL
    3604536
    Any help or info is appreciated.
    Thank you!
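    A first check worth running on one of the nodes is the state of the core Network Name resource, plus a fresh cluster log; a sketch:

        Import-Module FailoverClusters
        Get-ClusterResource "Cluster Name"           # state of the core name resource
        Start-ClusterResource "Cluster Name"         # attempt to bring it online
        Get-ClusterLog -TimeSpan 15 -Destination .   # dump the last 15 minutes of cluster log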

    Here is the network validation.  Any thoughts?
    Failover Cluster Validation Report
          Node: ACHV01.AshtaChemicals.local - Validated
          Node: ACHV02.AshtaChemicals.local - Validated
          Started:   8/6/2014 5:04:47 PM
          Completed: 8/6/2014 5:05:22 PM
    The Validate a Configuration Wizard must be run after any change is made to the
    configuration of the cluster or hardware.
    Results by Category
          Name       Result Summary
          Network    Warning
    Network
          Name                                         Result
          List Network Binding Order                   Success
          Validate Cluster Network Configuration       Success
          Validate IP Configuration                    Warning
          Validate Multiple Subnet Properties          Success
          Validate Network Communication               Success
          Validate Windows Firewall Configuration      Success
    Overall Result
      Testing has completed for the tests you selected. You should review the
      warnings in the report. A cluster solution is supported by Microsoft only if
      it passes all cluster validation tests.
    List Network Binding Order
      Description: List the order in which networks are bound to the adapters on
      each node.
      ACHV01.AshtaChemicals.local
            Binding Order        Adapter                                                     Speed
            iSCSI3               Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3         1000 Mbit/s
            Ethernet 3           Intel(R) PRO/1000 PT Quad Port LP Server Adapter            Unavailable
            Mgt - Heartbeat      Microsoft Network Adapter Multiplexor Driver #4             2000 Mbit/s
            Mgt - LiveMigration  Microsoft Network Adapter Multiplexor Driver #3             2000 Mbit/s
            Mgt                  Microsoft Network Adapter Multiplexor Driver                2000 Mbit/s
            iSCSI2               Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37   1000 Mbit/s
            3                    Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client)       Unavailable
      ACHV02.AshtaChemicals.local
            Binding Order        Adapter                                                     Speed
            Mgt - Heartbeat      Microsoft Network Adapter Multiplexor Driver #4             2000 Mbit/s
            Mgt - LiveMigration  Microsoft Network Adapter Multiplexor Driver #3             2000 Mbit/s
            Mgt                  Microsoft Network Adapter Multiplexor Driver #2             2000 Mbit/s
            iSCSI1               Broadcom NetXtreme Gigabit Ethernet #7                      1000 Mbit/s
            NIC2                 Broadcom NetXtreme Gigabit Ethernet                         Unavailable
            SLOT 5 2             Broadcom NetXtreme Gigabit Ethernet                         Unavailable
            iSCSI2               Broadcom NetXtreme Gigabit Ethernet                         1000 Mbit/s
    Validate Cluster Network Configuration
      Description: Validate the cluster networks that would be created for these
      servers.
      Network: Cluster Network 1 (prefix 192.168.131.0/24)
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This
      network will not be used for cluster communication.
            ACHV01.AshtaChemicals.local - iSCSI3: 192.168.131.113/24, DHCP disabled, connected to iSCSI target
            ACHV02.AshtaChemicals.local - iSCSI2: 192.168.131.121/24, DHCP disabled, connected to iSCSI target
      Network: Cluster Network 2 (prefix 192.168.141.0/24)
      DHCP Enabled: False
      Network Role: Internal
            ACHV01.AshtaChemicals.local - Mgt - Heartbeat: 192.168.141.10/24, DHCP disabled, not connected to iSCSI target
            ACHV02.AshtaChemicals.local - Mgt - Heartbeat: 192.168.141.12/24, DHCP disabled, not connected to iSCSI target
      Network: Cluster Network 3 (prefix 192.168.140.0/24)
      DHCP Enabled: False
      Network Role: Internal
            ACHV01.AshtaChemicals.local - Mgt - LiveMigration: 192.168.140.10/24, DHCP disabled, not connected to iSCSI target
            ACHV02.AshtaChemicals.local - Mgt - LiveMigration: 192.168.140.12/24, DHCP disabled, not connected to iSCSI target
      Network: Cluster Network 4 (prefix 10.1.1.0/24)
      DHCP Enabled: False
      Network Role: Enabled
            ACHV01.AshtaChemicals.local - Mgt: 10.1.1.4/24, DHCP disabled, not connected to iSCSI target
            ACHV02.AshtaChemicals.local - Mgt: 10.1.1.5/24, DHCP disabled, not connected to iSCSI target
      Network: Cluster Network 5 (prefix 192.168.130.0/24)
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This
      network will not be used for cluster communication.
            ACHV01.AshtaChemicals.local - iSCSI2: 192.168.130.112/24, DHCP disabled, connected to iSCSI target
            ACHV02.AshtaChemicals.local - iSCSI1: 192.168.130.121/24, DHCP disabled, connected to iSCSI target
      Verifying that each cluster network interface within a cluster network is
      configured with the same IP subnets.
      Examining network Cluster Network 1.
      Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Examining network Cluster Network 2.
      Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Examining network Cluster Network 3.
      Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Examining network Cluster Network 4.
      Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Examining network Cluster Network 5.
      Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Verifying that, for each cluster network, all adapters are consistently
      configured with either DHCP or static IP addresses.
      Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI3 matches network Cluster Network 1.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 1.
      Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 5.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI1 matches network Cluster Network 5.
    Validate IP Configuration
      Description: Validate that IP addresses are unique and subnets configured
      correctly.
      ACHV01.AshtaChemicals.local
            iSCSI3 (Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3)
                  Physical Address: 00-26-55-DB-CF-73, Status: Operational
                  DNS Servers: (none), IP Address: 192.168.131.113/24
            Mgt - Heartbeat (Microsoft Network Adapter Multiplexor Driver #4)
                  Physical Address: 78-2B-CB-3C-DC-F5, Status: Operational
                  DNS Servers: 10.1.1.2, 10.1.1.8, IP Address: 192.168.141.10/24
            Mgt - LiveMigration (Microsoft Network Adapter Multiplexor Driver #3)
                  Physical Address: 78-2B-CB-3C-DC-F5, Status: Operational
                  DNS Servers: 10.1.1.2, 10.1.1.8, IP Address: 192.168.140.10/24
            Mgt (Microsoft Network Adapter Multiplexor Driver)
                  Physical Address: 78-2B-CB-3C-DC-F5, Status: Operational
                  DNS Servers: 10.1.1.2, 10.1.1.8, IP Address: 10.1.1.4/24
            iSCSI2 (Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37)
                  Physical Address: 78-2B-CB-3C-DC-F7, Status: Operational
                  DNS Servers: (none), IP Address: 192.168.130.112/24
            Local Area Connection* 12 (Microsoft Failover Cluster Virtual Adapter)
                  Physical Address: 02-61-1E-49-32-8F, Status: Operational
                  DNS Servers: (none), IP Addresses: fe80::cc2f:d769:fe24:3d04%23/64, 169.254.2.195/16
            Loopback Pseudo-Interface 1 (Software Loopback Interface 1)
                  Status: Operational, IP Addresses: ::1/128, 127.0.0.1/8
            isatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8} (Microsoft ISATAP Adapter)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.131.113%16/128
            isatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E} (Microsoft ISATAP Adapter #3)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.130.112%18/128
            isatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE} (Microsoft ISATAP Adapter #4)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.141.10%22/128
            isatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1} (Microsoft ISATAP Adapter #5)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:10.1.1.4%27/128
            isatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42} (Microsoft ISATAP Adapter #6)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.140.10%29/128
            isatap.{7705D42A-1988-463E-9DA3-98D8BD74337E} (Microsoft ISATAP Adapter #7)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:169.254.2.195%30/128
      ACHV02.AshtaChemicals.local
            Mgt - Heartbeat (Microsoft Network Adapter Multiplexor Driver #4)
                  Physical Address: 74-86-7A-D4-C9-8B, Status: Operational
                  DNS Servers: 10.1.1.8, 10.1.1.2, IP Address: 192.168.141.12/24
            Mgt - LiveMigration (Microsoft Network Adapter Multiplexor Driver #3)
                  Physical Address: 74-86-7A-D4-C9-8B, Status: Operational
                  DNS Servers: 10.1.1.8, 10.1.1.2, IP Address: 192.168.140.12/24
            Mgt (Microsoft Network Adapter Multiplexor Driver #2)
                  Physical Address: 74-86-7A-D4-C9-8B, Status: Operational
                  DNS Servers: 10.1.1.8, 10.1.1.2, IP Addresses: 10.1.1.5/24, 10.1.1.248/24
            iSCSI1 (Broadcom NetXtreme Gigabit Ethernet #7)
                  Physical Address: 74-86-7A-D4-C9-8A, Status: Operational
                  DNS Servers: (none), IP Address: 192.168.130.121/24
            iSCSI2 (Broadcom NetXtreme Gigabit Ethernet)
                  Physical Address: 00-10-18-F5-08-9C, Status: Operational
                  DNS Servers: (none), IP Address: 192.168.131.121/24
            Local Area Connection* 11 (Microsoft Failover Cluster Virtual Adapter)
                  Physical Address: 02-8F-46-67-27-51, Status: Operational
                  DNS Servers: (none), IP Addresses: fe80::3471:c9bf:29ad:99db%25/64, 169.254.1.193/16
            Loopback Pseudo-Interface 1 (Software Loopback Interface 1)
                  Status: Operational, IP Addresses: ::1/128, 127.0.0.1/8
            isatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2} (Microsoft ISATAP Adapter #2)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.131.121%21/128
            isatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0} (Microsoft ISATAP Adapter #3)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.130.121%22/128
            isatap.{5A315B7D-D94E-492B-8065-D760234BA42E} (Microsoft ISATAP Adapter #4)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.141.12%23/128
            isatap.{2182B37C-B674-4E65-9F78-19D93E78FECB} (Microsoft ISATAP Adapter #5)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:192.168.140.12%24/128
            isatap.{104DC629-D13A-4A36-8845-0726AC9AE25E} (Microsoft ISATAP Adapter #6)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:10.1.1.5%33/128
            isatap.{483266DF-7620-4427-BE5D-3585C8D92A12} (Microsoft ISATAP Adapter #7)
                  Physical Address: 00-00-00-00-00-00-00-E0, Status: Not Operational
                  IP Address: fe80::5efe:169.254.1.193%34/128
      Verifying that a node does not have multiple adapters connected to the same
      subnet.
      Verifying that each node has at least one adapter with a defined default
      gateway.
      Verifying that there are no node adapters with the same MAC physical address.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Verifying that there are no duplicate IP addresses between any pair of nodes.
      Checking that nodes are consistently configured with IPv4 and/or IPv6
      addresses.
      Verifying that all nodes IPv4 networks are not configured using Automatic
      Private IP Addresses (APIPA).
    Validate Multiple Subnet Properties
      Description: For clusters using multiple subnets, validate the network
      properties.
      Testing that the HostRecordTTL property for network name "Cluster1" is set
      to the optimal value for the current cluster configuration.
      The HostRecordTTL property for network name "Cluster1" has a value of 1200.
      Testing that the RegisterAllProvidersIP property for network name "Cluster1"
      is set to the optimal value for the current cluster configuration.
      The RegisterAllProvidersIP property for network name "Cluster1" has a value
      of 0.
      Testing that the PublishPTRRecords property for network name "Cluster1" is
      set to the optimal value for the current cluster configuration.
      The PublishPTRRecords property forces the network name to register a PTR
      record in DNS reverse lookup (IP address to name mapping).
    Validate Network Communication
      Description: Validate that servers can communicate, with acceptable latency,
      on all networks.
      Analyzing connectivity results ...
      Multiple communication paths were detected between each pair of nodes.
    Validate Windows Firewall Configuration
      Description: Validate that the Windows Firewall is properly configured to
      allow failover cluster network communication.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI3'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI1'.
      The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.

  • Add Node to Hyper-V Cluster running Server 2012 R2

    Hi All,
    I am in the process of upgrading our Hyper-V cluster to Server 2012 R2, but I am not sure about the required validation test.
    The situation at the moment: a one-node cluster running Server 2012 R2 with 2 CSVs and a quorum disk, plus an additional server prepared to add to the cluster. One CSV is empty and could be used for the validation test; on the other CSV, 10 VMs are running in production.
    When I start the validation wizard I can select specific CSVs to test, which makes sense ;-) But the warning message is not clear to me: "TO AVOID ROLE FAILURES, IT IS RECOMMENDED THAT ALL ROLES USING CLUSTER SHARED VOLUMES BE STOPPED BEFORE THE STORAGE IS VALIDATED". Does it mean that ALL CSVs will be tested and switched offline during the test, or just the CSV that I selected in the options? I definitely have to avoid the CSV where all the VMs are running being switched offline, and also the configuration being corrupted after losing the CSV the VMs are running on.
    Can someone confirm that ONLY the selected CSV will be used for the validation test?
    Many thanks
    Markus

    Hi,
    The validation will test the selected CSV storage; if you have guest VMs running on that CSV, they must be shut down or saved before you validate it.
    Several tests will actually trigger failovers and move the disks and groups to different cluster nodes, which will cause downtime. These include Validate Disk Arbitration, Validate Disk Failover, Validate Multiple Arbitration, Validate SCSI-3 Persistent Reservation, and Validate Simultaneous Failover.
    So if you want to test the majority of your cluster's functionality without impacting availability, exclude these tests; a sketch follows the link below.
    The related information:
    Validating a Cluster with Zero Downtime
    http://blogs.msdn.com/b/clustering/archive/2011/06/28/10180803.aspx
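    A sketch of such a targeted run, limiting the storage tests to the empty CSV (the disk name is a placeholder):

        Import-Module FailoverClusters
        Test-Cluster -Disk "Cluster Disk 2" -Include "Storage"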
    Hope this helps.

  • Best design for HA Fileshare on existing Hyper-V Cluster?

    We have a three-node 2012 R2 Hyper-V cluster. The storage is an HP MSA 2000 G3 SAS block storage array with CSVs.
    We have a file server for all users running as a VM on the cluster. File server availability is important, and it's difficult to take this file server down for the monthly patching, so we want to make these file services HA. Nearly all clients are Windows 8.1, so SMB 3 can be used.
    What is the best way to make these file services HA?
    1. The easiest way would probably be to migrate the file server resources to a dedicated LUN on the MSA 2000 and add a General File Server role to the existing Hyper-V cluster. But is it supported, and a good solution, to provide Hyper-V VMs and HA file services on the same cluster (even when the performance requirements for file services are not high)? Or does this configuration affect Hyper-V VM performance too much?
    2. Is it better to create a two-node guest cluster with shared VHDX for the file services? I'm not sure this would even work, because we had "Persistent Reservation" warnings when creating the Hyper-V cluster with the MSA 2000. According to http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx, these warnings are normal with block storage and can be ignored as long as we never want to create Windows storage pools or Storage Spaces. But the Hyper-V MMC shows that shared VHDX works with persistent reservations.
    3. Are there other possibilities to provide HA file services with this configuration without buying new hardware? (Remark: DFSR with two independent file servers is probably not a good solution; we have a lot of data that changes frequently.)
    Thank you in advance for any advice and recommendations!
    Franz

    Hi Franz,
    If you are not going to be using Storage Spaces in the cluster, this is a warning that you can safely ignore; it passes the normal SCSI-3 Persistent Reservation tests, so you are good there. Additionally, you can enable Cluster-Aware Updating (CAU) on the cluster, which will automatically install the cluster updates.
    The related KB:
    Requirements and Best Practices for Cluster-Aware Updating
    https://technet.microsoft.com/en-us/library/jj134234.aspx
    Cluster-Aware Updating: Frequently Asked Questions
    https://technet.microsoft.com/en-us/library/hh831367.aspx
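    Once CAU is configured, an updating run can also be kicked off on demand; a sketch (the cluster name is a placeholder):

        Import-Module ClusterAwareUpdating
        Invoke-CauRun -ClusterName "HVCluster" -Force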
    I’m glad to be of help to you!

  • DPM 2012 R2 Unable to create protection groups after Cluster Migration

    Hi Team,
    we have an issue relating to DPM 2012 R2. In our environment we have a Hyper-V cluster, and we recently did a cluster migration when the Hyper-V cluster was upgraded to Windows Server 2012 R2.
    Before the upgrade, the Windows Server 2008 R2 cluster was protected by the DPM environment. After the upgrade, we are unable to create new protection groups for the new cluster: when we select VMs, no members are shown.
    Please let us know how we can solve this issue. Since the cluster nodes and the cluster name are different from the previous cluster, we were expecting that DPM would allow us to create new protection groups while preserving the old protection groups for later recovery if needed.

    Hi,
    you need to stop protection of the old Hyper-V data sources, clear the cache of the protection group, and re-add the new VMs to the protection group.
    Seidl Michael | http://www.techguy.at | twitter.com/techguyat |
    facebook.com/techguyat |
    youtube.com/techguyat

  • How to monitor the performance of VMs & Hyper-V cluster host nodes running on an SCVMM cluster

    Hello,
    how can we monitor the performance of VMs and Hyper-V cluster host nodes running under SCVMM from SCOM, so that we can:
    1. identify the highest-utilized (CPU and memory) VM on a clustered Hyper-V host;
    2. identify the lowest-utilized (CPU and memory) Hyper-V host in the cluster;
    3. then migrate the highest-utilized VM to the lowest-utilized Hyper-V cluster host?
    What do I need to configure on SCOM, and which management packs do I need to install, to implement the above?
    Thanks
    RICHA KM

