Live migration with zones

Hi all,
I have been reading Oracle's "SPARC Private Cloud" whitepapers on building private IaaS with LDoms. One thing in the text really pops out and confuses me.
From pages 8 and 10:
"VMs may also be securely live migrated or automatically started or restarted across any servers in their respective pools. *Zones are cold migrated*"
"Secure live migration—Move domains off of servers that are undergoing planned maintenance. *Zones are cold-migrated*."
Does this really mean that if I have zones inside an LDom guest, I can live migrate the LDom guest but not the zones? In other words, will the zones go down if I do this? If so, what's the reason behind it? It's hard to grasp that the OS itself can be live migrated but not the zones inside it, which use the same kernel, binaries, etc. from it.
Links:
https://blogs.oracle.com/infrared/entry/building_private_iaas_with_sparc
http://www.oracle.com/us/groups/public/@otn/documents/webcontent/1659149.pdf
- Jukka

Lumi, I'm pretty sure they are comparing LDOMs with zones on a standalone system (i.e. no LDOMs).
When you migrate a domain, everything the guest kernel is doing should emerge as it was before.
Migration might take a bit longer than for the GZ alone, since you're using more virtual memory.
To move an NGZ between standalone GZs, you would indeed have to halt, detach, attach, and boot it, roughly as sketched below.
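For reference, that sequence looks roughly like this (zone name, zonepath, and the copy step are illustrative only):

    # on the source GZ:
    zoneadm -z myzone halt
    zoneadm -z myzone detach
    # copy the zonepath to the target GZ (zfs send/recv, tar over ssh, shared storage...)
    # on the target GZ:
    zonecfg -z myzone create -a /zones/myzone
    zoneadm -z myzone attach
    zoneadm -z myzone boot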
But please don't take my word for it... feel free to try both methods for yourself. =-)
The only limitation for zones in LDOMs that I'm aware of: You cannot currently set elastic power policy.
Other than that, I don't see why you couldn't keep zones running inside your guest as it moves around.
Hope that helps... -cheers, CSB

Similar Messages

  • Live Upgrade with Zones - still not working?

    Hi Guys,
    I'm trying to do a Live Upgrade from Solaris 10 Update 3 to Update 4 with a non-global zone installed. It's driving me crazy now.
    I did everything as described in the documentation: installed SUNWlucfg, supposedly updated SUNWluu and SUNWlur (supposedly, because they are exactly the same as they were in Update 3), both from packages and with the script from the Update 4 DVD, and installed all the patches mentioned in 72099. But lucreate still complains about missing patches, and I've checked five times that they're installed. They are. It doesn't even let me create the second BE. Once I detached the zone, everything went smoothly, but I was under the impression that Live Upgrade with zones would work in Update 4.
    It did create the second BE before SUNWlucfg was installed, but failed at the upgrade stage with exactly the same message: install patches according to 72099. After installation of SUNWlucfg, the Live Upgrade process fails instantly; that's real progress, I must admit.
    Is it still "mission impossible" to Live Upgrade with non-global zones installed? Or am I missing something?
    Any ideas or success stories are greatly appreciated. Thanks.

    I upgraded from u3 to u5.
    The upgrade went fine and the zones boot up, but there are problems.
    sshd doesn't work.
    svcs -xv prints out this:
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Apr 18 09:54:33 2008
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 8 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/system/webconsole:console
    svc:/milestone/multi-user-server:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/network/inetd:default (inetd)
    State: maintenance since Fri Apr 18 09:54:41 2008
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/share/man -s 1M inetd
    See: /var/svc/log/network-inetd:default.log
    Impact: This service is not running.
    It seems as though the container was not upgraded.
    more /etc/release in the container shows this:
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    How do I fix the inetd service?
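    Judging from the output above, inetd is in maintenance ("Restarting too quickly") and gss is uninitialized only because its restarter is down. A hedged first step, assuming the upgrade left nothing else broken:

        # read the log the message points at, then clear the maintenance state:
        tail /var/svc/log/network-inetd:default.log
        svcadm clear svc:/network/inetd:default
        # once inetd is back online, re-check the dependent services:
        svcs -xv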

  • Live migration with Fault Tolerance (FT)?

    Does the new version of VM Server for SPARC (2.1) have an option for live migration with Fault Tolerance (similar to VMware FT)? While live migration is great, it does not protect against a server failure, whereas something like live migration with FT would make that seamless.
    Thanks,
    -Eli

    Hi,
    Thank you very much.
    I tried the SiteSurvey plugin, and now I know why it doesn't work:
    These ESX hosts are not compatible with FT, but may contain VMs that are:
    esxi02.vclass.local
    CPU type Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz is not supported by FT.

  • Live Migration with Different CPU versions on the hosts, win 2012R2 Datacenter

    Hello
    This question has been asked in different forums, but when I read the threads I feel I get mixed answers.
    And most answers date from 2012 (Win 2008 R2); I don't know if they are still correct for Win 2012 R2.
    So now I'll ask the question myself and hope to get a clear answer :)
    We are in the process of installing a new Hyper-V cluster using Win srv 2012 R2 Datacenter as OS.
    I'm planning to re-use some of the "old" servers from our current Hyper-V 2008 R2 cluster, removing them from the cluster and doing a clean installation of 2012 R2 Datacenter.
    But I will need to buy two new servers to manage this, with a newer version of CPU from the same brand (AMD).
    Old server: AMD Opteron(tm) Processor 6172 (12 Cores)
    New server:
    AMD Opteron™ 6344 (12-core)
    Now my question:
    Will Live Migration work between these servers in my new cluster without any special settings in Hyper-V or in the VMs? If not, what do I need to do to get this to work?
    /Anders

    Hi,
    It is important that all the hardware supporting Windows Server 2012 failover clusters be certified to work with Windows Server 2012.
    In a cluster where all the nodes are exactly the same, migration is fairly straightforward: there are no concerns about differences in hardware, and especially none about different capabilities of the CPUs. With different CPU generations from the same vendor, as in your case, you can enable Processor Compatibility Mode on the VMs; see the sketch after the link below.
    More information:
    When to Use Processor Compatibility Mode to Migrate Virtual Machines
    http://technet.microsoft.com/en-us/magazine/gg299590.aspx
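    In PowerShell terms (2012 R2; the VM name is a placeholder), enabling Processor Compatibility Mode per VM looks something like this:

        # run while the VM is off; hides newer CPU features so the VM can
        # live migrate between different CPU generations of the same vendor
        Set-VMProcessor -VMName "MyVM" -CompatibilityForMigrationEnabled $true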
    Hope this helps.

  • Live upgrade with zones

    Hi,
    While trying to create a new boot environment in Solaris 10 Update 6, I'm getting the following errors for my zone:
    Updating compare databases on boot environment <zfsBE>.
    Making boot environment <zfsBE> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'OTM1_wa_lab': "/usr/lib/fs/lofs/mount -o ro /.alt.tmp.b-AKc.mnt/swdump /zones/app/OTM1_wa_lab-zfsBE/lu/a/swdump" failed with exit code 33
    zoneadm: zone 'OTM1_wa_lab': call to zoneadmd failed
    ERROR: unable to mount zone <OTM1_wa_lab> in </.alt.tmp.b-AKc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
    ERROR: Unable to remount ABE <zfsBE>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device <rootpool/ROOT/zfsBE>
    Making the ABE <zfsBE> bootable FAILED.
    Although my zone is running fine:
    zoneadm -z OTM1_wa_lab list -v
    ID NAME STATUS PATH BRAND IP
    3 OTM1_wa_lab running /zones/app/OTM1_wa_lab native shared
    Does anybody know what could be the reason for this?

    http://opensolaris.org/jive/thread.jspa?messageID=322728
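    The linked thread discusses lofs mounts configured in the zone (like the /swdump mount in the error above) tripping up lucreate. A first check, using the zone name from the post (the workaround itself is an assumption here, so verify it against the thread):

        # list the fs resources configured for the zone:
        zonecfg -z OTM1_wa_lab info fs
        # if /swdump shows up as a lofs fs, one commonly cited workaround is to
        # remove it before lucreate and add it back afterwards:
        #   zonecfg -z OTM1_wa_lab "remove fs dir=/swdump"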

  • Live Migration Fails with error "Synthetic FibreChannel Port: Failed to finish reserving resources" on a VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fibre Channel adapters, with a Virtual SAN configured on the Hyper-V hosts.
    I have read several articles about this issue, like these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But I haven't been able to fix my issue.
    The Virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has 2 Fibre Channel adapters configured.
    All the World Wide Names are configured both on the FC switch and on the FC SAN.
    All the drivers for the FC adapters in the Hyper-V hosts have been updated to their latest versions.
    The strange thing is that the issue is not affecting all of the VMs: some VMs with FC adapters configured live migrate just fine, while others get this error.
    Quick migration works without problems.
    We even tried removing and creating new FC adapters on a problematic VM; we had to configure the switch and SAN with the new WWNs and all, but ended up with the same problem.
    At first we thought it was related to the hosts, but since some VMs with FC adapters do live migrate, we tried them on every host and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
    Any ideas on how to solve this are deeply appreciated.
    Thank you!
    Eduardo Rojas

    Hi Eduardo,
    How are things going?
    Best Regards
    Elton Ji

  • Server 2012 cluster - virtual machine live migration does not work

    Hi,
    We have a hyper-v cluster with two nodes running Windows Server 2012. All the configurations are identical.
    When I try to do a live migration from one node to the other, I get an error message saying:
    Live migration of 'Virtual Machine XXXXXX' failed.
    I get no other error messages, not even in Event Viewer. The same happens with all of our virtual machines.
    A normal quick migration works just fine for all of the virtual machines, so network configuration should not be an issue.
    The above error message does not provide much information.

    Hi,
    Please check whether your configuration meets the live migration requirements:
    Two (or more) servers running Hyper-V that:
    Support hardware virtualization.
    Yes, they support virtualization.
    Are using processors from the same manufacturer (for example, all AMD or all Intel).
    Both servers are identical, brand-new Fujitsu-Siemens RX300 S7 with the same kind of processor (Xeon E5-2620).
    Belong to either the same Active Directory domain, or to domains that trust each other.
    Both nodes are in the same domain.
    Virtual machines must be configured to use virtual hard disks or virtual Fibre Channel disks (no physical disks).
    All of the virtual machines have virtual hard disks.
    Use of a private network is recommended for live migration network traffic.
    Have tried this, but it does not help.
    Requirements for live migration in a cluster:
    Windows Failover Clustering is enabled and configured.
    Yes
    Cluster Shared Volume (CSV) storage in the cluster is enabled.
    Yes
    Requirements for live migration using shared storage:
    All files that comprise a virtual machine (for example, virtual hard disks, snapshots, and configuration) are stored on an SMB share.
    They are all on the same CSV.
    Permissions on the SMB share have been configured to grant access to the computer accounts of all servers running Hyper-V.
    Requirements for live migration with no shared infrastructure:
    No extra requirements exist.
    Also please refer to this article to check whether you have finished all preparation works for live migration:
    Virtual Machine Live Migration Overview
    http://technet.microsoft.com/en-us/library/hh831435.aspx
    Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/dd446679(v=WS.10).aspx
    Configure and Use Live Migration on Non-clustered Virtual Machines
    http://technet.microsoft.com/en-us/library/jj134199.aspx
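    If the FCM message is all you get, the Hyper-V operational logs usually carry the real reason. A hedged place to look (log names quoted from memory; confirm them with Get-WinEvent -ListLog *Hyper-V*):

        # run on both the source and destination nodes:
        Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 20
        Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-High-Availability-Admin" -MaxEvents 20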
    Hope this helps!
    Lawrence
    TechNet Community Support
    I have also read all of the TechNet articles but can't find anything that could help.

  • Hyper-V Failover Cluster Live migration over Virtual Adapter

    Hello,
    Currently we have a test environment with 3 Hyper-V hosts, each with 2 NIC teams: a LAN team and a Live/Mngt team.
    In Hyper-V we created a virtual switch (which is on NIC team Live/Mngt).
    We want to separate Mngt and Live with VLANs. To do this we created 2 virtual adapters, Mngt and Live, and assigned IP addresses and 2 VLANs (Mngt 10, Live 20).
    Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch which both are on. Meaning live migration simply uses the vSwitch instead of the virtual adapter.
    Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to bind live migration to a virtual adapter?
    Greetings Selmer

    It can be done in PowerShell, but it's not intuitive.
    In Failover Cluster Manager, right-click Networks and open Live Migration Settings: checked networks are allowed to carry live migration traffic, and networks higher in the list are preferred.
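    In PowerShell, that list lives on the "Virtual Machine" cluster resource type; a sketch that permits only the network named Live (the name comes from the post above, the rest is assumption):

        # exclude every cluster network except "Live" from live migration traffic
        Get-ClusterResourceType -Name "Virtual Machine" |
            Set-ClusterParameter -Name MigrationExcludeNetworks -Value `
            ([String]::Join(";", (Get-ClusterNetwork |
                Where-Object { $_.Name -ne "Live" }).ID))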
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • Migrate a Zone ?

    Hi All,
    Can one migrate a zone from one system to another, without halting the zone, through Ops Center?
    Thanks,
    Shawn

    Hi Shawn,
    Right now live migration of zones isn't supported in Solaris, so unfortunately you can't do it in Ops Center either. Ops Center can automate all the tasks for you (moving a local zone from one physical system to another is point-and-click in your browser), but the zone will be halted while it's moved.
    Regards,
    [email protected]

  • Server 2012 r2 live migration fails with hardware error

    Hello all. We just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had replication set up between it and another box on the network which was also running Server 2012. After installing Server 2012 R2, when a live migration is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The option under Processor for "migrate to a physical computer using a different processor" is already checked, and this same VM was successfully being replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiping out the installation of Server 2012 and installing Server 2012 R2; this was not an in-place upgrade.

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation, you can use the same procedure for the VMs in question: back them up and restore them on the new server.
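    A quick, hedged way to test the switch-name theory is to compare the output of this on both hosts:

        # virtual switch names must match exactly between source and destination
        Get-VMSwitch | Select-Object Name, SwitchType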
    Kind regards, Leonardo.

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now every time I attempt to use SCVMM to do a live migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine "virtual machine name" which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully, as the info above suggests. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error appeared because I attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager). However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of my VM or on its replication: live migrations still occur successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day break replication of my VMs between both clusters.
    I have searched and searched, and I cannot find any mention, in official or unofficial Microsoft channels, of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live Migration and Hyper-V Replica are supported together on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive and keep track of the VMs and their synchronizations. The replication configuration follows the VMs themselves.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM and they confirm that live migration is supported and should work in this context.
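    If you want to reassure yourself after a migration, replication health can be checked in one line (VM name is a placeholder; the default output includes State and Health columns):

        # run after the move; State should stay Replicating and Health Normal
        Get-VMReplication -VMName "MyVM"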
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Hyper-v Live Migration not completing when using VM with large RAM

    hi,
    I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100GB CSV and has 128GB RAM across 2 physical CPUs (approx. 7.1GB used when the VM is not booted), and 1 VM running Windows 7 which has 64GB RAM assigned. The VHD size is around 21GB and the BIN file is 64GB (by the way, do we have to have that? Can we get rid of the BIN file?).
    NUMA is enabled on both servers. When I attempt to live migrate, I get event 1155 in the cluster events; the LM starts and gets to 60-something percent but then fails. The event details are "The pending move for the role 'New Virtual Machine' did not complete."
    However, when I lower the amount of RAM assigned to the VM to around 56GB (56+7 = 63GB), the LM works, and any amount of RAM below this allows LM to succeed. It seems that if the total used RAM on the physical server (including that used for the VMs) is 64GB or above, the LM fails... a coincidence, since the server has 64GB per CPU?
    Why would this be?
    many thanks
    Steve

    Hi,
    I turned NUMA spanning off on both servers in the cluster. I assigned 62GB, 64GB and 88GB, and each time the VM started up with no problems. With 62GB the LM completed, but I can't get LM to complete with 64GB+.
    My server is an HP DL380 G8 with the latest BIOS (I just updated it today as it was a couple of months behind). I can't see any settings in the BIOS relating to NUMA, so I'm guessing it is enabled and can't be changed.
    If I run the cmdlet as admin I get ProcessorsAvailability : {0, 0, 0, 0...}; if I run it as a standard user I just get ProcessorsAvailability.
    My memory and CPU config are as follows; hyper-threading is enabled for the CPU, but I don't think that would make a difference?
    Processor 1 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 and Processor 2 (identical):
    Processor Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Processor Status: OK
    Processor Speed: 2400 MHz
    Execution Technology: 12/12 cores; 24 threads
    Memory Technology: 64-bit Capable
    Internal L1 cache: 384 KB
    Internal L2 cache: 3072 KB
    Internal L3 cache: 30720 KB
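    For what it's worth, NUMA spanning is a Hyper-V host setting rather than a BIOS one, so PowerShell is the place to check it; a sketch (the last two lines also address the .bin question above and assume the VM's Automatic Stop Action is currently Save):

        # show the host's NUMA topology as Hyper-V sees it
        Get-VMHostNumaNode
        # toggle NUMA spanning; the Hyper-V management service must restart to apply
        Set-VMHost -NumaSpanningEnabled $false
        Restart-Service vmms
        # the 64GB .bin file reserves disk space for saved state; switching the
        # Automatic Stop Action away from Save removes it
        Set-VM -Name "New Virtual Machine" -AutomaticStopAction ShutDown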
    thanks
    Steve

  • Live migration of domains with OVMM

    Can anybody point out the reference for performing live migration of domains through OVM Manager instead of the CLI? I could not find it in the User's Manual for the OVM Server 2.2 release.

    Does the same thing happen if you initiate the live migration from within SCVMM? Generally speaking, if SCVMM is in use, you should first try to perform the operation from SCVMM before trying the Failover Cluster Manager or Hyper-V Manager. It should not make any difference, but it would be another point of information.
    What is so strange is that a live migration within the cluster does not do anything with the disks. All it does is move the executing VM from the memory of one cluster node to another. The virtual hard disks, .vhd or .vhdx, are not touched, unless by 'live migration' you mean 'live storage migration'. Very strange that it is only happening on the VMs with .vhd.
    Another thing you could try: convert the .vhd on one of the problematic VMs to a .vhdx file and see if that fixes the issue.
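    In PowerShell, that conversion is a single cmdlet (paths and controller details are placeholders; the VM must be shut down first):

        # convert the disk, then repoint the VM at the new file
        Convert-VHD -Path "C:\VMs\problem.vhd" -DestinationPath "C:\VMs\problem.vhdx"
        Set-VMHardDiskDrive -VMName "ProblemVM" -ControllerType IDE `
            -ControllerNumber 0 -ControllerLocation 0 -Path "C:\VMs\problem.vhdx"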
    . : | : . : | : . tim

  • Configure Live Migration Multi-channel with SW QoS

    I'm a bit confused about what is needed to properly configure live migration using SMB multichannel with QoS, using an SCVMM logical switch. Where I'm confused, coming from a VMware background, is how many and what type of vNICs I need. I'm thinking I need two virtual NICs of some type, each with a unique IP address, which SMB multichannel for LM would use. I would then somehow tie a software QoS policy to those vNICs to ensure LM traffic doesn't stomp all over other network traffic types.
    Any explanations would be most welcomed.
    Blog: www.derekseaman.com, VMware vExpert 2012/2013

    Hi there.
    Hopefully this guide will give you a detailed explanation of the configuration of this setup, explaining logical switches, port profiles (virtual and uplink) and QoS:
    http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a
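    In rough PowerShell terms (switch name, VLAN ID and weight below are placeholders), the plumbing that guide describes comes down to two host vNICs plus a QoS policy:

        # two management-OS vNICs on the logical switch, one IP each, so SMB
        # multichannel has two paths to carry live migration traffic
        Add-VMNetworkAdapter -ManagementOS -Name "LM1" -SwitchName "LogicalSwitch"
        Add-VMNetworkAdapter -ManagementOS -Name "LM2" -SwitchName "LogicalSwitch"
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LM1" -Access -VlanId 20
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LM2" -Access -VlanId 20
        # make live migration use SMB, and weight its traffic with a software QoS policy
        Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
        New-NetQosPolicy "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30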
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Live Migration between two WS2012 R2 Clusters with SCVMM 2012 R2 creates multiple objects on Cluster

    Hi,
    I'm seeing an issue when migrating VMs between two 2012 R2 Hyper-V clusters, using VMM 2012 R2, that have their storage provided by a 4-node Scale-Out File Server cluster shared by the two clusters.
    A migration between the two clusters succeeds and the VM is operational, but I'm left with two roles added to the cluster the VM has moved to, instead of the expected one.
    For example: say I have a VM that was created on cluster A with SCVMM, resulting in a role name of "SCVMM Test-01 Resources".
    I then do a live migration to cluster B, which has access to the same storage, and I end up with two new roles instead of one:
    "SCVMM abw-app-fl-01 Resources" and "abw-app-fl-01"
    The "SCVMM abw-app-fl-01 Resources" role is left in an unknown state and "abw-app-fl-01" is operational.
    I can safely delete "SCVMM abw-app-fl-01 Resources" and everything still works, but it looks like something is failing during the process.
    Has anyone else seen this?
    I'll probably have one of my guys open a support ticket in the new year but was wondering if anyone else is seeing this.
    Kind regards,
    Jas :)

    In my case the VMs were created in VMM in my one and only Hyper-V cluster (which was created and is managed by VMM).
    All highly available VMs have an FCM role named "SCVMM vmname", where vmname is the name of the VM in VMM. On top of that, a lot of VMs, but not all, have a second role named vmname. Lots of names in that sentence.
    All VMs that have duplicates are using the role named vmname.
    I thought it had to do with whether a VM had been migrated, so I took one that never had been migrated and migrated it. It did not get a duplicate.
    Is there any progress on this?
