LDOMs, Solaris zones and Live Migration

Hi all,
If you are planning to use Solaris zones inside an LDOM, with an external zpool as the zone disk, wouldn't this break one of the requirements for being able to do a Live Migration? If so, do you have any ideas on how to use Solaris zones inside an LDOM while still being able to do a Live Migration, or is it impossible? I know this may sound like a bad idea, but I would very much like to know if it is doable.

Thanks,
By external pool I mean the way you are probably doing it: separate LUNs, coming from two separate I/O/service domains, mirrored in a zpool used for the zones. So even if this zpool exists inside the LDOM as zone storage, it will not prevent LM? That's good news. The requirement "no zpool if Live Migration" must then only apply to the LDOM's own storage, not to storage attached to the running LDOM. I am also worried about a possible performance penalty from the extra layer of virtualisation. Have you done any tests on this?
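For what it's worth, the mirrored zone pool described above can be sketched roughly like this from inside the guest domain. The device names, pool name and zone name are all hypothetical; check your own virtual disks with `format`:

```shell
# Inside the guest LDOM: mirror the two virtual disks, one served
# by each service domain, into a pool used only for zone storage.
zpool create zonepool mirror c0d1 c0d2

# Root a zone on that pool (paths and names illustrative).
zfs create zonepool/zone1
chmod 700 /zonepool/zone1
zonecfg -z zone1 <<'EOF'
create
set zonepath=/zonepool/zone1
set autoboot=true
commit
EOF
zoneadm -z zone1 install
```

The point being: this pool lives entirely inside the guest, so it is guest-attached storage rather than the domain's own backend disk.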

Similar Messages

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail, and I am very surprised at their level of expertise: they had no idea what a converged network even was. I had way more experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
I have servers with 5 NICs and want to use 3 of them for a team, then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks, a port profile with the uplink defined as the team and the networks selected, and a logical switch with the port profile associated. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
I finally decided to do this on the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
Are you sure you are teaming the correct physical adapters, i.e. the ones where the VLANs are trunked through the connected ports?
Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • 2012 R2 Cluster and Live Migration

6-node cluster with Server 2012 R2; all VMs are Server 2012 R2
4 fibre SANs
Moving and Live Migration worked fine in Failover Cluster Manager
But this time we were trying to do it with SCVMM 2012 R2 and just move one VM (Gen2)
Of course it failed at 99%
    Error (12711)
    VMM cannot complete the WMI operation on the server (whatever) because of an error: [MSCluster_Resource.Name="SCVMM VMHost"] The cluster resource could not be found.
    The cluster resource could not be found (0x138F)
    Recommended Action
    Resolve the issue and then try the operation again.
How do I fix this? The VM is still running. The two VHDX files it was moving are smaller than the originals, but it changed the configuration file to point to the new ones, which are bad.
It says I can Repair it... Redo or Undo... of course neither of those options works.
    Wait for the object to be updated automatically by the next periodic Live migrate storage of virtual machine vmhost from whatever to whatever job.
    ID: 1708
The cluster has no errors, the SANs have no errors, the CSVs have no errors. The machine running SCVMM is a VM running on the cluster.

How did you create this VM? If it was created outside of VMM, I recommend doing a manual refresh of the VM first to ensure that VMM can read its attributes, then retrying the operation.
Btw, are the VMs using differencing disks? Any checkpoints associated with them?
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Best Configuration for T4-4 and live migration

    I have several T4-4 servers with Solaris 11.1
What settings do I need for production environments with OVM live migration?
Should I use OVM Manager for rugged environments?
Should I set up two I/O domains on each server?
Considering that I will do live migration, should I configure (publish) one big LUN to all servers for the guest LDOM system disks, or a separate LUN for each LDOM guest?

For the first questions: it's best to have separate LUNs or other backend disks for each guest; otherwise, how would you plan to share them across different guests and systems? Separate LUNs are also good for performance, by encouraging parallel I/O. The choice of using an additional I/O domain is up to you, but it's very typical for production users because of the resiliency it adds.
For advice on live migration, see https://blogs.oracle.com/jsavit/entry/best_practices_live_migration
    Other blog posts after that discuss availability and performance.
    Hope that helps, Jeff
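To illustrate the per-guest-LUN advice above, a minimal sketch from the control domain might look like this. The LUN device path, volume name and guest name are all made up:

```shell
# Export one dedicated LUN per guest domain through the virtual
# disk server, then hand it to the guest as its system disk.
ldm add-vdsdev /dev/dsk/c0t600144F0AAAA0001d0s2 guest1-boot@primary-vds0
ldm add-vdisk vdisk0 guest1-boot@primary-vds0 guest1

# For live migration, present the same LUN to the target machine and
# export it there under the same service/volume name before migrating.
```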

  • Hyper V Lab and Live Migration

    Hi Guys,
    I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
The problem I have is that my shared storage is a bit of a cheat: I have one disk assigned in each host, and each host has StarWind Virtual SAN installed. Host A has an iSCSI connection to host B's storage and vice versa.
The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing etc. I can recover from it but it takes time. I tinkered with the HA settings and the VM settings so they restarted/didn't restart etc., but with no success.
My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do Shared Nothing Live Migrations, but this takes time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) ) Or if shared storage is the only option to do this cleanly, what would people recommend bearing in mind I have SSDs in the hyper V hosts.
    Hope all that makes sense

    Hi Sir,
    >>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); then I realized that I needed one or more extra hosts to simulate a more production-like scenario.
But if you have more physical computers, you may try other projects.
    Also please refer to this thread :
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
    Best Regards
    Elton Ji
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place.

  • Oracle VM (XEN) and Live Migration throws PCI error

    First off, anyone using a HS21 XM blade with Oracle VM? Anyone attempted a live-migration, does it work?
    When attempting a live migration on a HS21 XM blade we get the following errors and the host hangs:
    102 I Blade_02 08/10/08, 08:46:38 (CADCOVMA02) Blade reboot
    103 E Blade_02 08/10/08, 08:46:26 (CADCOVMA02) Software critical interrupt.
104 E Blade_02 08/10/08, 08:46:21 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=07000300 Vend=8086 Dev=350C Status=8000
    105 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
    106 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Statu
    107 E Blade_02 08/10/08, 08:46:18 (CADCOVMA02) PCI PERR: parity error.
    108 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
109 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=07000300 Vend=8086 Dev=350C Status=4110
    110 E Blade_02 08/10/08, 08:46:16 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Status=4110
    111 E Blade_02 08/10/08, 08:46:14 (CADCOVMA02) PCI system error.
Any ideas? I've called IBM support but their options are to reseat the hardware or replace it. This happens on more than one blade, so we're assuming it has something to do with Oracle VM. Thanks!

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Non-Global zones and Live Upgrade

    Good afternoon,
    Trying to find an answer for a question that I have.
Currently we have (2) T5140 servers. One of them is our production Sun Messaging Server and the other is the backup. The zones are on SAN-attached disks (currently running on the production server) and each server is "aware" of them; they are only mounted on one server at a time. My question is: can I do a Live Upgrade on the backup server (from Solaris 10 u10 to Solaris 10 u11), then detach/export the NGZs from the production system and use "update on attach" to upgrade the NGZs to Solaris 10 u11? If I don't upgrade the production box (global zone) to u11 and have to move my NGZs back to it, will "update on attach" roll the NGZs back to u10?
We have a test system that we will use to test Live Upgrade without detaching the zones, but I wanted to check the feasibility of doing it the way I describe in the above paragraph.
    Thanks in advance for your help!!
    Doug

Found my answer: BigAdmin Feature Article: The Zones Update on Attach Feature and Patching in the Solaris 10 OS
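The update-on-attach flow discussed above boils down to something like this (the zone name is hypothetical):

```shell
# On the production (u10) global zone: stop and detach the NGZ.
zoneadm -z mailzone halt
zoneadm -z mailzone detach

# Present the SAN disks / zonepath to the backup (u11) server, then
# attach with -u so the zone's packages are updated to the new
# global zone's level, and boot it.
zoneadm -z mailzone attach -u
zoneadm -z mailzone boot
```

Note that, as far as I know, update on attach only moves a zone forward: attaching the u11-updated zone back to a u10 global zone should fail validation rather than roll the zone back, which bears directly on the rollback question above.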

  • Hyper-V host crash and live migration - what happens?

    Hello there,
it would be great if you could help me understand how live migration works on a sudden host crash of a cluster node.
    I know that the cluster functionality mounts the vhd file on another host and runs the VM.
    But what happens to the VM itself? Does it crash too and is simply restarted on another host?
To my knowledge the RAM of the VM lives in the RAM of the host, so I would assume that it is all lost on a host crash?
    Thanks for any help understanding this,
    Best regards, Marcus

    In a sudden crash, there is no time for the state of any running VMs to move to another system.  Therefore, the VM is restarted on another node of the cluster.  Think of the host crashing as being the same as pulling the power cable on the VM.
    . : | : . : | : . tim

  • Solaris Zones and NFS mounts

    Hi all,
Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. It seems a perfect opportunity to implement zoning, and it looks quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
    Cheers

Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT /zone/zonename/root/etc/dfs/dfstab; notice you don't use the root dir), then from the global zone do a shareall and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or if the NFS share is on ZFS, but they can be overcome rather easily.
EDIT: I believe you have to be running TX for this to work. I'll double-check.
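On the original question of mounting the NAS share in each zone at boot, one simple approach (assuming the mount is done inside each non-global zone; the server name and paths are invented) is a vfstab entry per zone:

```shell
# Inside each non-global zone: add the NAS export to the zone's
# /etc/vfstab so it mounts automatically at zone boot.
mkdir -p /content
cat >> /etc/vfstab <<'EOF'
nas01:/export/content  -  /content  nfs  -  yes  rw,bg,soft
EOF
mount /content
```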

  • Solaris zone and IBM DB2

We have a container on a T3-1 in which IBM DB2 is running. Recently we migrated the container to a T4-1 server. The container is up and running, but we are unable to start DB2. The container configuration is the same as on the T3-1. Has anyone faced a similar issue while running DB2 on a T4-1 server?

    You can refer
    App Server 9.0 developer guide
    http://docs.sun.com/app/docs/doc/819-3659
    making driver .jar files accessible :
    http://docs.sun.com/app/docs/doc/819-3659/6n5s6m5bk?a=view#beamn
    IBM DB2 8.2 datasource configuration
    http://docs.sun.com/app/docs/doc/819-3658/6n5s5nklk?a=view#beanc
    If you are still not able to setup:
    can you post
    1) con pool configuration from domain.xml
2) the error message that you get in domains/<domainname>/logs/server.log
    Thanks,
    -Jagadish

  • Live migration with zones

    Hi all,
I have been reading Oracle's "SPARC Private Cloud" whitepapers on LDOMs. One thing in the text really pops out and confuses me:
    from Page 8 and 10:
    "VMs may also be securely live migrated or automatically started or restarted across any servers in their respective pools. *Zones are cold migrated*"
    "Secure live migration—Move domains off of servers that are undergoing planned maintenance. *Zones are cold-migrated*."
Does this really mean that if I have zones inside an LDOM guest, I can live migrate the LDOM guest but not the zones? Hence the zones will go down if I do this? If so, what's the reason behind it? It's hard to grasp that the OS itself can be live migrated, but not the zones inside it, which use the same kernel, binaries etc.
    Links:
    https://blogs.oracle.com/infrared/entry/building_private_iaas_with_sparc
    http://www.oracle.com/us/groups/public/@otn/documents/webcontent/1659149.pdf
    - Jukka

    Lumi, I'm pretty sure they are comparing LDOMs with zones on a standalone system (i.e. no LDOMs).
    When you migrate a domain, everything the guest kernel is doing should emerge as it was before.
    Migration might take a bit longer than for the GZ alone, since you're using more virtual memory.
To move an NGZ between standalone GZs, you would indeed have to halt, detach, attach, and boot it.
    But please don't take my word for it... feel free to try both methods for yourself. =-)
    The only limitation for zones in LDOMs that I'm aware of: You cannot currently set elastic power policy.
    Other than that, I don't see why you couldn't keep zones running inside your guest as it moves around.
    Hope that helps... -cheers, CSB
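So in practice the two cases described above look something like this (domain, zone and host names are hypothetical):

```shell
# Live migration of the whole guest domain, zones still running
# inside it; issued from the source control domain.
ldm migrate-domain guest1 root@target-control-domain

# Cold migration of a single zone between standalone global zones:
zoneadm -z zone1 halt
zoneadm -z zone1 detach
# ...re-present the zone's storage on the target host, then:
zoneadm -z zone1 attach
zoneadm -z zone1 boot
```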

  • Live Migration failed using virtual HBA's and Guest Clustering

    Hi,
    We have a Guest Cluster Configuration on top of an Hyper-V Cluster. We are using Windows 2012 and Fiber Channel shared storage.
The problem is regarding Live Migration. Sometimes when we move a virtual machine from node A to node B everything goes well, but when we try to move it back to node A, Live Migration fails. What we can see is that when we move the VM from node A to B and Live Migration completes successfully, the virtual ports remain active on node A, so when we try to move back from B to A, Live Migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between Host Cluster Hyper-V and the SAN, the mapping between physical HBA's and the vSAN's on the Hyper-V and everything is ok.
Our doubt is: what is the best practice for zoning the vHBAs on the VMs and our fabric? We set up our zoning using one alias for vHBA 1 with both of its WWNs (A and B) in the same object, and one alias for vHBA 2 with its corresponding WWNs (A and B). Is it better to create an alias for vHBA 1 -> A (with WWN A) and another alias for vHBA 1 -> B (with WWN B)?
The guest cluster VMs have 98 GB of RAM each. Could it be a timeout issue when Live Migration happens and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there with vHBA WWN B. On the source node, the virtual ports should be removed automatically when the Live Migration completes. And that is the issue: sometimes the virtual ports (WWN A) stay active on the source node, and when we try to move the VM back, Live Migration fails.
    I hope You may understand the issue.
    Regards,
    Carlos Monteiro.

    Hi ,
    Hope the following link may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.

  • Host server live migration causing Guest Cluster node goes down

    Hi 
I have a two-node Hyper-V host cluster. I'm using a converged network for host management, live migration and the cluster network, and separate NICs for iSCSI multi-pathing. When I live migrate a guest node from one host to another, within the guest cluster that node goes down. I have increased the cluster threshold and delay values. Guest nodes connect to the iSCSI network directly from the iSCSI initiator on Server 2012.
The converged networks for management, cluster and live migration are built on top of a NIC team in switch-independent mode with Hyper-V Port load balancing.
I have VMQ enabled on the converged fabric and jumbo frames enabled on iSCSI.
Can anyone guess why live migration would cause a failure on the guest node?
    thanks
    mumtaz 

    Repost here: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/threads
    in the Hyper-V forum.  You'll get a lot more help there.
    This forum is for Virtual Server 2005.

  • Live migration Vnic on hosts randomly losing connectivity HELP

    Hello Everyone,
I am building out a new 2012 R2 cluster using VMM with a converged network configuration. I have 5 physical NICs and am teaming 3 of them using dynamic load balancing. I have configured 3 virtual network adapters on the host, for management, cluster and live migration. The live migration NIC loses connectivity randomly and fails migrations 50% of the time.
Hardware is IBM blades (HS22) with Broadcom NetXtreme II NICs. I have updated firmware and drivers to the latest versions. I found a forum post with something that looks very similar, but that was back in November so I'm guessing there is a fix.
    http://www.hyper-v.nu/archives/mvaneijk/2013/11/vnics-and-vms-loose-connectivity-at-random-on-windows-server-2012-r2/
    Really need help with this.
    Thanks

    Hi,
Does your cluster pass the cluster validation test? Please install the recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters first, then monitor again.
    More information:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
    Hope this helps.

  • Live migration

    Hello,
I am using an HA infrastructure with OCFS2, and live migration doesn't work (and HA doesn't either, but I believe the two are connected!).
I am also using the xm migrate command.
It just hangs indefinitely...
    Any ideas?
    Boris

user614980 wrote:
I am using HA infrastructure with OCFS2, and live migration doesn't work (and also HA, but i believe it is connected!)
Can all the servers in your pool see the same shared storage? If you run xentop on the source/destination servers, do you see the DomU being migrated on both (you should)? How long do you wait for the migration to complete?
