Live Migration to Best Possible Node

Hi,
I have a 20-node cluster with the Virtual Machine role.
I would like to know the equivalent PowerShell command for migrating VMs to the best possible node.
When I right-click a VM in Failover Cluster Manager, I see the option below for live migration to the best possible node. I would like to achieve the same thing through PowerShell.
Thanks in advance.
Thanks, Krishna

Well, you're asking the cluster to make its best determination on where the VMs should go, so I don't really know that I can second-guess its behavior.
You could ensure that all highly available VMs are moved off a specific node by using
Suspend-ClusterNode:
Suspend-ClusterNode -Name "node1" -Drain
Then when you want to put the roles back, use
Resume-ClusterNode:
Resume-ClusterNode -Name "node1" -Failback Immediate
You can enter multiple node names at once, if you want.
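If the goal is specifically the GUI's "Best Possible Node" action, the closest PowerShell equivalent should be Move-ClusterVirtualMachineRole from the FailoverClusters module; as far as I know, omitting -Node lets the cluster choose the destination for you. A minimal sketch (the VM role name is a placeholder):

Import-Module FailoverClusters

# Live migrate one clustered VM role and let the cluster pick the target node.
Move-ClusterVirtualMachineRole -Name "SQLVM01" -MigrationType Live

# Or live migrate every clustered VM role, again letting the cluster decide.
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' } |
    Move-ClusterVirtualMachineRole -MigrationType Live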
But if stress testing a network device is your aim, I would look at actual test tools, like
IOMeter.
Eric Siron Altaro Hyper-V Blog
I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
"Every relationship you have is in worse shape than you think."

Similar Messages

  • Host server live migration causing Guest Cluster node goes down

    Hi 
    I have a two-node Hyper-V host cluster. I'm using a converged network for host management, Live Migration and the cluster network, and separate NICs for iSCSI multipathing. When I live migrate a guest node from one host to another, that node goes down within the guest cluster. I have increased the cluster heartbeat threshold and delay values (see the sketch below). The guest nodes connect to the iSCSI network directly from the iSCSI initiator in Server 2012.
    The converged networks for management, cluster and Live Migration are built on top of a NIC team in switch-independent mode with Hyper-V Port load balancing.
    I have VMQ enabled on the converged fabric and jumbo frames enabled on iSCSI.
    Can anyone guess why live migration would cause a failure on the guest node?
    thanks
    mumtaz 
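    For reference, the heartbeat thresholds mentioned above can be raised on the guest cluster with something like this (a sketch; the values are examples only, run inside one of the guest cluster nodes):

    Import-Module FailoverClusters
    $cluster = Get-Cluster
    $cluster.SameSubnetDelay      = 2000   # ms between heartbeats
    $cluster.SameSubnetThreshold  = 10     # missed heartbeats before the node is declared down
    $cluster.CrossSubnetDelay     = 4000
    $cluster.CrossSubnetThreshold = 10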

    Repost here: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/threads
    in the Hyper-V forum.  You'll get a lot more help there.
    This forum is for Virtual Server 2005.

  • Unable to live migrate VM (error 21502)

    Hi,
    I have a four-node Hyper-V cluster built on Windows Server 2012. I've found an issue where one virtual machine is unable to live migrate to another cluster node, with the following error:
    Live migration of 'Virtual Machine VM' failed.
    Virtual machine migration operation for 'VM' failed at migration destination 'HYPERV2'. (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM' Microsoft Emulated IDE Controller (Instance ID 83F8638B-8DCA-4152-9EDA-2CA8B33039B4): Failed to restore with Error 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    It's possible to migrate the VM in the Stopped state, but then the VM cannot start on the new host, with the following error:
    'Virtual Machine VM' failed to start.
    'VM' failed to start. (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM' Microsoft Emulated IDE Controller (Instance ID 83F8638B-8DCA-4152-9EDA-2CA8B33039B4): Failed to Power on with Error 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
    Live storage migration works fine. When I migrate the VM back to the original node, the VM starts correctly.
    Thanks for any response.

    Hi, Daniel,
    Sometimes live migration fails because the virtual switches are named differently on each host. So the first thing to do is make sure that the vSwitches on both hosts have the same name.
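    A quick way to compare the switch names across both hosts might be something like this (a sketch; 'HV1' and 'HV2' are placeholder host names):

    'HV1','HV2' | ForEach-Object {
        Get-VMSwitch -ComputerName $_ |
            Select-Object @{n='Host';e={$_.ComputerName}}, Name, SwitchType
    } | Sort-Object Name, Host | Format-Table -AutoSize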
    Also, you can try taking the cluster name resource offline and running the repair procedure, which appears to fix this kind of mysterious live migration failure (open Failover Cluster Manager -> select the cluster name -> Take Offline -> More Actions -> Repair).
    Otherwise, if you're short on time and want to migrate the VM as soon as possible, you can perform a one-time backup/restore operation using one of the free backup utilities on the market (VeeamZIP or similar). In many ways such a tool acts as a zip utility for VMs. It helped us a lot when a migration failed for whatever reason and we didn't have enough time to find the root cause.
    Kind regards, Leonardo.

  • Hyper-V live migration failed

    There is Hyper-V cluster with 2 nodes. Windows Server 2012 R2 is used as operating system.
    Trying to live migrate a test VM from node 1 to node 2, I get error 21502:
    Live migration of 'Virtual Machine test' failed.
    'Virtual Machine test' failed to fixup network settings. Verify VM settings and update them as necessary.
    The VM has a network adapter connected to a virtual switch whose connection type is Private network.
    If I set the virtual switch to "Not connected" in the VM's network adapter settings, the migration succeeds.
    All VMs that are not connected to any private network (a virtual switch with the Private connection type) can be live migrated without any issues.
    Is there any official reference related to Hyper-V live migration of VMs that use the Private network connection type?

    I can Live Migrate virtual machines with adapters on private switches without error. Aside from having the wrong name, the only way I can get it to fail is if I make the switch on one host use a different QoS minimum mode than the other and
    enable QoS on the virtual adapter. Even then I get a different message than what you're getting. I only get that one with differently named switches.
    There is a PowerShell cmdlet available to see why a guest won't run on another host.
    Here's an example of its usage.
    There's a way to use it to get it to Live Migrate.
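    The cmdlet meant here is presumably Compare-VM (an assumption on my part, since it isn't named above). A minimal sketch that lists the reasons a VM can't run on a given node ('test' and 'Node2' are placeholders):

    # Generate a compatibility report against the intended destination host.
    $report = Compare-VM -Name 'test' -DestinationHost 'Node2'
    # Each incompatibility explains one reason the migration would fail.
    $report.Incompatibilities | Select-Object MessageId, Message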
    But there is no way to truly Live Migrate three virtual machines in perfect lockstep. Even if you figure out whatever is preventing you from migrating these machines, there will still be periods during Live Migration where they can't communicate across that
    private network. You also can't guarantee that all these guests will always be running on the same host without preventing Live Migration in the first place. This is why there really isn't anyone doing what you're trying to do. I suggest you consider another
    isolation solution, like VLANs.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • TS2646 Settings to record in on the Canon HD Vixia HG20 for easy migration into iMovie '11 v9.0.9?  It would be most helpful to list what the best possible settings are.

    So, what are the best settings to record in, using the Canon HD Vixia HG20, for easy migration into iMovie '11 v9.0.9?
    It would be most helpful to list what the best possible settings are.
    frame rate?
    format?  FXP?

    I suggest filming in 30P with this camera. That should be the MXP setting.
    You could also use 30i which would be the FXP setting.
    You will get the best results in iMovie by shooting progressive.

  • VM Windows 2008 cluster on Hyper-V 2012 Server: FC disk errors after live migration of the active VM cluster node

    I deployed a two-node Windows 2008 R2 SP1 failover cluster on a Hyper-V 2012 Server cluster running on IBM HS23 blades.
    The disk subsystem is an IBM Storwize V7000; the MPIO driver is installed plus IBM DDSM.
    The LUNs presented to the VMs are connected through virtual FC adapters, and everything works fine in the cluster until we start a live migration of the VM that holds the cluster disks online.
    The migration itself completes successfully, but at that moment MPIO in the migrated VM goes crazy, with a lot of errors (source: mpio, EventID 16) and warnings (source: mpio, EventID 17) in the system event log. After that the disks become unavailable.
    Consequently everything hangs until we power off the migrated VM, at which point the clustered services fail over to the second node.
    I tried setting the registry value HKLM\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue to 190, as suggested in various articles, but nothing seems to change (see the sketch below).
    Any idea?
    vannig
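    For reference, the timeout change mentioned above can be applied inside the guest roughly like this (a sketch; run from an elevated prompt in the VM and reboot the guest afterwards):

    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' `
                     -Name 'TimeOutValue' -Value 190 -Type DWord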

    Hello,
    I've just been through the IBM interoperability matrix and came across this statement:
    Hyper-V on x64 based systems is supported with the following guest OS: Windows 2003 32bit, Windows 2003 64bit, Windows 2008 32bit, Windows 2008 64bit.
    Clustering of guest OS is not supported at this time. When using Emulex HBAs with Hyper-V please select the settings mentioned in the Host Attachment section of SVC Info Center
    http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
    Thanks

  • Best Configuration for T4-4 and live migration

    I have several T4-4 servers with Solaris 11.1
    What kind of settings do I need for production environments with OVM live migration?
    Should I use OVM Manager for rugged environments?
    Should I set up two I/O domains on each server?
    Considering that I do live migration, for the guest LDom system disks should I configure (publish) one big LUN to all servers, or a separate LUN for each LDom guest?

    For the first questions: it's best to have separate LUNs or other backend disks for each guest - otherwise how would you plan to share them across different guests and systems? Separate LUNs also is good for performance by encouraging parallel I/O. Choice of using an additional I/O domain is up to you, but it's very typical for production users because of the resiliency it adds.
    For advice on live migration, see https://blogs.oracle.com/jsavit/entry/best_practices_live_migration
    Other blog posts after that discuss availability and performance.
    Hope that helps, Jeff

  • Cluster node reboot and Quick Migration of VMs instead of Live Migration...

    Hi to all,
    how can one configure a Windows Server 2012 multi-node failover cluster so that VMs are migrated via Live Migration and NOT Quick Migration when a node of the failover cluster is rebooted?
    Thanks in advance
    Joerg

    Hi Aidan,
    only for the record:
    We get the requested functionality (live migrate all VMs on reboot without first pausing the cluster) when we do the following:
    Change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PreshutdownOrder
    from the default
    vmms
    wuauserv
    gpsvc
    trustedinstall
    to
    clussvc
    vmms
    wuauserv
    gpsvc
    trustedinstall
    Now the cluster service stops first when we trigger a reboot, and all VMs migrate as configured by the MoveTypeThreshold cluster setting.
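    For reference, the same change can be scripted roughly like this (a sketch; back up the value before modifying it):

    $path  = 'HKLM:\SYSTEM\CurrentControlSet\Control'
    $order = (Get-ItemProperty -Path $path -Name PreshutdownOrder).PreshutdownOrder
    if ($order -notcontains 'clussvc') {
        # Put the cluster service first so it can move the VMs before vmms stops.
        Set-ItemProperty -Path $path -Name PreshutdownOrder `
                         -Value (@('clussvc') + $order) -Type MultiString
    }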
    Greetings
    Joerg

  • Best possible approach to migrate OAF personalizations from 11i to R12

    Hi
    We are upgrading the EBS application from 11i to R12 and I am dealing with OAF personalization.
    Since some of the screen (Agent and Sales Dashboard) layouts in R12 are changed significantly, I believe we have no alternative but to re-implement those changes manually.
    I was wondering what the best approach would be to do this, and how I can make sure along the way that I have not missed any changes.
    Appreciate any help on this.
    Swaroop

    The best possible approach is to deprecate them. Are they still needed with R12?
    Kristofer Cruz

  • Live Migration fails with error "Synthetic FibreChannel Port: Failed to finish reserving resources" on a VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fiber Channel adapters with Virtual SAN configured on the hyper-v hosts.
    I have read several articles about this issue, like these ones:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But I haven't been able to fix the issue.
    The Virtual SAN is configured on every hyper-v host node in the cluster. And every VM has 2 fiber channel adapters configured.
    All the World Wide Names are configured both on the FC Switch as well as the FC SAN.
    All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
    The strange thing is that the issue does not affect all of the VMs: some VMs with FC adapters configured live migrate just fine, while others get this error.
    Quick migration works without problems.
    We even tried removing and recreating the FC adapters on a problem VM (we had to configure the switch and SAN with the new WWNs and all), but ended up with the same problem.
    At first we thought it was related to the hosts, but since some VMs with FC adapters do live migrate, we tried migrating those on every host and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
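    For comparison purposes, the virtual FC configuration of the working and failing VMs can be dumped side by side with something like this (a sketch; 'HostA' and 'HostB' are placeholder host names):

    Get-VM -ComputerName 'HostA','HostB' |
        Get-VMFibreChannelAdapter |
        Select-Object ComputerName, VMName, SanName,
                      WorldWidePortNameSetA, WorldWidePortNameSetB |
        Format-Table -AutoSize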
    Any ideas on how to solve this is deeply appreciated.
    Thank you!
    Eduardo Rojas

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Hyper V Lab and Live Migration

    Hi Guys,
    I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    The problem I have is that my shared storage is a bit of a cheat: I have one disk assigned in each host, and each host has StarWind Virtual SAN installed. HostA has an iSCSI connection to HostB's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing, etc. I can recover from it, but it takes time. I tinkered with the HA settings and the VM settings so they restarted/didn't restart, etc., but with no success.
    My question is: can I use something like an SMB3 share on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do shared-nothing live migrations, but those take time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or, if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense
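    For reference, live migration between two standalone (non-clustered) hosts with the VM files on an SMB3 share is a supported scenario on 2012 R2, and it can be enabled roughly like this (a sketch; 'HostA'/'HostB' and the VM name are placeholders, and Kerberos constrained delegation or CredSSP still has to be set up separately):

    # Allow incoming/outgoing live migrations on both hosts.
    Enable-VMMigration -ComputerName HostA, HostB
    Set-VMHost -ComputerName HostA, HostB `
               -VirtualMachineMigrationAuthenticationType Kerberos `
               -VirtualMachineMigrationPerformanceOption Compression

    # With the VHDX on the SMB share, only memory and device state move.
    Move-VM -Name 'LabVM01' -ComputerName HostA -DestinationHost HostB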

    Hi Sir,
    >>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    As you mentioned, you have two Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more additional hosts to simulate a more production-like scenario.
    But if you have more physical computers, you may want to try other designs.
    Also please refer to this thread:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
    Best Regards
    Elton Ji

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in the Fail Over Cluster Manager on both clusters and started replicating some VMs using Hyper-V
    Replica.
    Now every time I attempt to use SCVMM to do a Live Migration of a VM that is protected using Hyper-V Replica to
    another host within the same cluster,
    the Migration VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine virtual machine name which
    requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine.
    When I ignore the error and do the live migration anyway, it completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a Host Shuts-down or is put into maintenance, the VM Migrates successfully, again, with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error was because I attempted to manually configure
    the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. This error does not seem to have any impact on the high-availability of
    my VM or on Replication of this VM. Live migrations still occur successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day occur and break replication of my VMs between the two clusters.
    I have searched, and searched and searched, and I cannot find any mention in official or un-official Microsoft channels, on the compatibility of these two features. 
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html.
    Please confirm to me: Are Hyper-V Live Migration and Hyper-V Replica compatible
    with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner will be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live Migration and Hyper-V Replica are supported together on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive and keep track of the VMs and their synchronizations. The replication configuration of a VM follows the VM itself.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: just spoke to one of the PMs of HRM and they can confirm that live migration is supported - and should work in this context.
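    If you want to double-check after a live migration, replication health can be verified with something like this (a sketch; 'VM01' is a placeholder):

    Get-VMReplication -VMName 'VM01' |
        Select-Object VMName, State, Health, PrimaryServer, ReplicaServer
    Measure-VMReplication -VMName 'VM01'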
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Error 10698 Virtual machine could not be live migrated to virtual machine host

    Hi all,
    I am running a fail over cluster of
    Host:
    2 x WS2008 R2 Data Centre
    managed by VMM:
    VMM 2008 R2
    Virtual Host:
    1x windows 2003 64bit guest host/virtual machine
    I have attempted a live migration through VMM 2008 R2 and I'm presented with the following error:
    Error (10698)
    Virtual machine XXXXX could not be live migrated to virtual machine host xxx-Host01 using this cluster configuration.
     (Unspecified error (0x80004005))
    What I have found when running the cluster validation:
    One of the two hosts has an RPC error related to network configuration:
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    However, there are no errors or events on Host02 showing any problems at all.
    In fact, the validation report goes on to show the rest of the configuration information for both cluster hosts as OK.
    See below:
    List BIOS Information
    List BIOS information from each node.
    xxx-Host01
    Gathering BIOS Information for xxx-Host01
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    xxx-Host02
    Gathering BIOS Information for xxx-Host02
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    Back to Summary
    Back to Top
    List Cluster Core Groups
    List information about the available storage group and the core group in the cluster.
    Summary 
    Cluster Name: xxx-Cluster01 
    Total Groups: 2 
    Group  Status  Type 
    Cluster Group  Online  Core Cluster 
    Available Storage  Offline  Available Storage 
     Cluster Group
    Description:
    Status: Online
    Current Owner: xxx-Host01
    Preferred Owners: None
    Failback Policy: No failback policy defined.
    Resource  Type  Status  Possible Owners 
    Cluster Disk 1  Physical Disk  Online  All Nodes 
    IP Address: 10.10.0.60  IP Address  Online  All Nodes 
    Name: xxx-Cluster01  Network Name  Online  All Nodes 
     Available Storage
    Description:
    Status: Offline
    Current Owner: Per-Host02
    Preferred Owners: None
    Failback Policy: No failback policy defined.
     Cluster Shared Volumes
    Resource  Type  Status  Possible Owners 
    Data  Cluster Shared Volume  Online  All Nodes 
    Snapshots  Cluster Shared Volume  Online  All Nodes 
    System  Cluster Shared Volume  Online  All Nodes 
    Back to Summary
    Back to Top
    List Cluster Network Information
    List cluster-specific network settings that are stored in the cluster configuration.
    Network: Cluster Network 1 
    DHCP Enabled: False 
    Network Role: Internal and client use 
    Metric: 10000 
    Prefix  Prefix Length 
    10.10.0.0  20 
    Network: Cluster Network 2 
    DHCP Enabled: False 
    Network Role: Internal use 
    Metric: 1000 
    Prefix  Prefix Length 
    10.13.0.0  24 
    Subnet Delay  
    CrossSubnetDelay  1000 
    CrossSubnetThreshold  5 
    SameSubnetDelay  1000 
    SameSubnetThreshold  5 
    Validating that Network Load Balancing is not configured on node xxx-Host01.
    Validating that Network Load Balancing is not configured on node xxx-Host02.
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    Back to Summary
    Back to Top
    If it were an RPC connection issue, then I shouldn't be able to RDP (mstsc) to Host02 or browse its shares in Explorer. Well, I can access them, which makes the report above a bit misleading.
    I have also checked the RPC service and it is started.
    If anyone can shed some light or advise me on any other options for troubleshooting this, that would be greatly appreciated.
    Kind regards,
    Chucky
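    For reference, the RPC / Service Control Manager path that the validation test exercises can be probed directly from Host01 with something like this (a sketch, using the host name from the report above):

    # Query the cluster service on the remote node via the Service Control Manager (RPC).
    sc.exe \\xxx-Host02 query clussvc
    Get-Service -Name ClusSvc -ComputerName xxx-Host02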


  • Live Migration and private network

    Is it a best practice to put up a private network between the nodes in a pool (reserving a few network cards and switch ports for it), to have a dedicated network for the traffic generated e.g. by live migration and/or the OCFS2 heartbeat? I was wondering why such a setup is generally recommended in other virtualization solutions, but apparently it's not considered strictly necessary in OVM... Why? Are there any docs regarding this? I couldn't find any.
    Thanks!

    Hi Roynor,
    regarding the physical separation between management+hypervisor and the guest VMs, it's now implemented and working...
    My next doubt on the list of doubts :-) at this point is:
    I could easily set up ONE MORE dedicated bond, create a Bridge with a private IP on it on each server (e.g. 10.xxx.xxx.xxx), and then create a Private VLAN completely insulated from the rest of the world.
    I'd put the physical switch ports that the private bonds/bridges connect to on the same VLAN ID.
    But:
    - How can I be sure that this network WILL actually be used by the relevant traffic? If I'm not wrong, when you set up e.g. a physical RAC cluster, at a certain point you are prompted to choose which network to use for the heartbeat (and it will be marked as PRIVATE) and which network will be used by client traffic (PUBLIC).
    In Oracle VM no such setting exists... neither during installation, nor in VM Manager, nowhere.
    - Apart from security, I suspect that problems could arise during heavy VM migration, because if the network gets saturated there is a chance the OCFS2 heartbeat would somehow be "lost", messing up HA etc. This is at least the reason why a private network is highly recommended in a RAC setup.
    - I finally found the doc you mention from IBM (thanks for pointing it out!), but my impression is that THEIR INTENTION was to separate the traffic the same way I'd like to, yet there is simply NO PROOF that such a setup would work... They do not mention where you can specify which traffic you want on which network...
    This is a very important point... I'm wondering why this information is missing.
    Thanks for your feedback, btw

  • How to migrate DB from single node 10gR2 to RAC 11gR2 on diff platform?

    How can we migrate a database from single-node 10gR2 to RAC 11gR2 on a different platform with the minimum possible downtime? We need to upgrade/migrate an Oracle 10gR2 single-instance DB to a two-node RAC 11gR2. The source OS is Solaris 10 on SPARC and the target OS is Linux (the target servers could be changed to Solaris 11 x86 if needed). What is the best solution for that?
    Thanks,

    Technically, can we do the following for the upgrade and migration?
    1. Create an 11gR2 Oracle home on the same server and upgrade the database from 10gR2 to 11gR2 by running the conversion (2 hours of downtime?)
    2. Set up heterogeneous primary and physical standby databases with RMAN. The standby is the RAC with ASM. No downtime needed. (From Solaris SPARC to Linux - this may be a problem.)
    3. At the cutover time, activate the standby DB on the RAC ASM.
    If feasible, do we have a detailed guideline for each step?
