Hyper-V Cluster in VMM
I am trying to build a Hyper-V cluster with VMM 2012 R2 but need some advice, as it is not working how I want it to.
I have 2 Hyper-V servers, both with their own local storage and 1 iSCSI disk shared between them. I am trying to cluster the servers so that the shared iSCSI disk becomes a Cluster Shared Volume (CSV) while maintaining the ability to use the local storage as well: some VMs will run from local storage while others will run from the CSV.
The issue I'm having is that when I cluster the two servers, the iSCSI disk does not show up in VMM as a shared volume. In Windows Explorer the disk has the cluster icon, but in VMM there is nothing. In the cluster properties I can add a shared volume, but it asks for a logical unit, which I cannot create because I have no storage pools (Server Manager says no groups of disks are available to pool).
I also noticed that when I clustered the servers, my two file shares to their local storage disappeared from VMM, which isn't what I want.
Can someone please advise, or link to, a way to achieve my desired configuration?
Cheers,
MrGoodBytes
Hi MrGoodBytes,
Unfortunately, the available information is not enough to get a clear view of the behavior you are seeing. Could you provide more information about your environment? For example, the server version involved, the system log entries recorded when the problem occurs, and screenshots would be the most useful information.
Before you create the cluster we strongly recommend that you run cluster validation. If you suspect the cluster has an issue, rerun the validation and then post the warning and error sections of the validation report; the report will quickly locate potential cluster issues.
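For reference, a minimal PowerShell sketch for running validation (the node names are placeholders for your own hosts, and the FailoverClusters module is assumed to be installed):
# Load the failover clustering cmdlets
Import-Module FailoverClusters
# Validate the two prospective nodes; the HTML report lands in the temp folder by default
Test-Cluster -Node "HV01","HV02"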
A disk witness is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a disk witness only if this
is specified as part of the quorum configuration.
Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
http://technet.microsoft.com/zh-cn/library/jj612870.aspx
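As a side note, if you ever need to point the quorum at a disk witness from PowerShell, something like this should work on 2012/2012 R2 (the disk resource name is an example; check yours with Get-ClusterResource):
Import-Module FailoverClusters
# Designate a clustered disk as the disk witness (example resource name)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"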
I am not familiar with SCVMM, so please refer to the following related articles to confirm that your shared storage was added with the correct steps.
How to Configure Storage on a Hyper-V Host Cluster in VMM
http://technet.microsoft.com/en-us/library/gg610692.aspx
Configuring Storage in VMM
http://technet.microsoft.com/en-us/library/gg610600.aspx
More information:
How to add storage to Clustered Shared Volumes in Windows Server 2012
http://blogs.msdn.com/b/clustering/archive/2012/04/06/10291490.aspx
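If the iSCSI disk already shows up as a clustered disk and just is not a CSV yet, promoting it from PowerShell is a short sketch (the resource name below is an example; list yours first):
Import-Module FailoverClusters
# Find the physical disk resource that corresponds to the iSCSI disk
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
# Promote that clustered disk to a Cluster Shared Volume (example name)
Add-ClusterSharedVolume -Name "Cluster Disk 1"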
Event Logs
http://technet.microsoft.com/en-us/library/cc722404.aspx
I’m glad to be of help to you!
Similar Messages
-
Hyper-V cluster: Unable to fail VM over to secondary host
I am working on a Server 2012 Hyper-V Cluster. I am unable to fail my VMs from one node to the other using either LIVE or Quick migration.
A force shutdown of VMHost01 will force a migration to VMHost02. And once we are on VMHost02 we can migrate back to VMHost01, but once that is done we can't move the VMs back to VMHost02 without a force shutdown.
The following error pops up:
Event ID: 21502 The Virtual Machine Management Service failed to establish a connection for a Virtual machine migration with host.... The connection attempt failed because the connected party did not properly respond after a period of time, or the established
connection failed because connected host has failed to respond (0X8007274C)
Here's what I noticed:
VMMS.exe is running on VMHost02 however it is not listening on Port 6600. Confirmed this after a reboot by running netstat -a. We have tried setting this service to a delayed start.
I have checked Firewall rules and Anti-Virus exclusions, and they are correct. I have not run the cluster validation test yet, because I'll need to schedule a period of downtime to do so.
We can start/stop the VMMS.exe service just fine and without errors, but I am puzzled as to why it will not listen on Port 6600 anywhere. Anyone have any suggestions on how to troubleshoot this particular issue?
Thanks,
Tho H. Le
Just ran into the same issue in a 16-node cluster being managed by VMM. When trying to live migrate VMs using the VMM console, the migration would fail with Error 10698. Failover Cluster Manager would report the following error code: Error (0x8007274C).
+ Validated Live Migration and Cluster networks. Everything checked out.
+ Looking in Hyper-V manager and migrations are enabled and correct networks displayed.
+ Found this particular Blog that mentions that the Virtual Machine Management service is not listening to port 6600
http://blogs.technet.com/b/roplatforms/archive/2012/10/16/shared-nothing-migration-fails-0x8007274c.aspx
Ran the following from an elevated command line:
Netstat -ano | findstr 6600
Node 2 did not return anything
Node 1 returned correct output:
TCP    10.xxx.251.xxx:6600    0.0.0.0:0    LISTENING    4540
TCP    10.xxx.252.xxx:6600    0.0.0.0:0    LISTENING    4560
Set the Hyper-V Virtual Machine Management service to delayed start.
Restarted the service; no change.
Checked the Event Logs for Hyper-V VMMS and noted the following events - VMMS Listener started
for Live Migration networks, and then shortly after listener stopped.
Removed the system from the cluster and restarted - No change
Checked this host by running gpedit.msc - could not open console: Permission Error
Tried to run a GPO refresh (gpupdate /force), but error returned that LocalGPO could not apply registry settings. Group Policy
processing would not continue until this was resolved.
Checked the local group policy folder on node 2 and it was corrupt:
C:\Windows\System32\GroupPolicy\Machine\reg.pol showed 0K for the size.
Copied local policy folders from Node 1 to 2, and then was able to refresh the GPOs.
Restarting the VMMS service did not change the status of the ports.
Restarted the server, added the Live Migration networks back in Hyper-V Manager, and now netstat output reports that the VMMS service is listening on 6600.
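For anyone else chasing this symptom, here is a rough sketch for checking the live migration listener on every node at once (assumes Windows Server 2012 or later for Get-NetTCPConnection, and PowerShell remoting enabled between the hosts):
Import-Module FailoverClusters
# Ask each cluster node whether anything is listening on the live migration port 6600
Get-ClusterNode | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -ScriptBlock {
        Get-NetTCPConnection -LocalPort 6600 -State Listen -ErrorAction SilentlyContinue
    }
}
-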
How to use a Fibre Channel matrix for a Hyper-V Cluster
Hi
I created a Hyper-V cluster (2012 R2) and have a Fibre Channel matrix (4 TB). Is it better to create one big LUN for Hyper-V storage or two smaller LUNs (2 x 2 TB)? Which will give better I/O? All disks used in the matrix are the same.
Thank you for help
Kind Regards Tomasz
Hi Yukio,
I agree with Tim; the best way is to contact the hardware vendor about the disk construction of the FC storage.
Based on my understanding, if these "basic disks" are the same and controlled by the same controller, I think dividing the space will not change the I/O; the total amount of I/O is equal.
Best Regards
Elton Ji
-
Hello!
Please excuse me if you think my question is silly, but before deploying something in a production environment I'd like to dot the i's and cross the t's.
1) Suppose there's a two node cluster with a Hyper-V role that hosts a number of highly available VMs.
If both cluster nodes are up and running, an administrator can initiate a planned failover which will transfer all VMs, including their system state, to another node without downtime.
In case any cluster node goes down unexpectedly, an unplanned failover fires that transfers all VMs to another node WITHOUT their system state. As far as I understand, this can lead to some data loss.
http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
2) Suppose there's a two node cluster with the Exchange clustered installation: in case one node crashes the other takes over without any data loss.
Conclusion: it's more disaster-resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in the Hyper-V cluster.
Is it correct?
Thank you in advance,
Michael
"And if this "anything in memory and any active threads" is so large that it can take up to 13m15s to transfer during Live Migration, it will be lost."
First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs. By default, Hyper-V is set to move a maximum of 2 VMs at a time. You can change that, but it would be foolish to increase that value if
all you have is a single 1GE network. The other VMs will be queued.
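You can check and change that limit per host from PowerShell; a minimal sketch (run on each Hyper-V host, and only raise the value if your network can actually carry it):
# Show the current concurrent migration limits
Get-VMHost | Select-Object MaximumVirtualMachineMigrations, MaximumStorageMigrations
# Example: allow 4 simultaneous live migrations
Set-VMHost -MaximumVirtualMachineMigrations 4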
Secondly, you are getting that amount of time confused with what is actually happening. Think of a single VM. Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate. (Highly unlikely, even
on a 1 GE NIC). During that 13 minutes the VM takes to live migrate, the VM continues to perform normally. In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
on the original host.
Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster. The VM is doing its work reading and writing to its data files. At the instant the host fails, the VM may have some unwritten data buffers in memory. Since the host fails, the VM crashes, losing whatever it had in memory at that instant. It is not going to lose any 13 minutes of data. In fact, if you have an application
that is processing data at this volume, you most likely have something like SQL running. When the VM goes down, the cluster will automatically restart the VM on another node of the cluster. SQL will automatically replay transaction logs to recover
to the best of its ability.
Is there a possibility of data loss? Yes, a very tiny possibility for a very small amount. Is there a possibility of data corruption? Yes, a very, very tiny possibility, just like with a physical machine.
The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM. The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
"clustered applicationsDO NOT STOP working during unplanned failover (so there is no recovery time), "
Not exactly true. Let's use SQL as an example again. When SQL is installed in a cluster, you install at a minimum one instance, but you can have multiple instances. When the node on which the active instance is running fails, there is
a brief pause in service while the instance starts on the other node. Depending on transactions outstanding, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application. Recovery time is about the biggest issue. As you have noted, restarting a VM, i.e. rebooting it, takes time.
And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
clients may be unable to access for up to a minute or so.
However, the amount of data potentially lost is quite dependent upon the application. SQL is designed to recover nicely in either environment, and it is likely not to lose any data. Sequential writing applications will be dependent upon things
like disk cache held in memory - large caches means higher probability of losing data. No disk cache means there is not likely to be any loss of data.
.:|:.:|:. tim -
Hi, I'm having a problem in a VM Guest cluster using Windows Server 2012 R2 and virtual disk sharing enabled.
It's a SQL 2012 cluster, which has around 10 vhdx disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
Yesterday a very strange problem happened: both the Quorum disk and the DTC disk had their information completely erased. The vhdx files themselves were there, but the data inside was gone.
The SQL admin had to recreate both disks, but now we don't know if this issue was related to the virtualization platform or to another event inside the cluster itself.
Right now I'm seeing these errors on one of the VM guests:
Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 3/4/2014 11:54:55 AM
Event ID: 1069
Task Category: Resource Control Manager
Level: Error
Keywords:
User: SYSTEM
Computer: ServerDB02.domain.com
Description:
Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster
Manager or the Get-ClusterResource Windows PowerShell cmdlet.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
<EventID>1069</EventID>
<Version>1</Version>
<Level>2</Level>
<Task>3</Task>
<Opcode>0</Opcode>
<Keywords>0x8000000000000000</Keywords>
<TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
<EventRecordID>14140</EventRecordID>
<Correlation />
<Execution ProcessID="1684" ThreadID="2180" />
<Channel>System</Channel>
<Computer>ServerDB02.domain.com</Computer>
<Security UserID="S-1-5-18" />
</System>
<EventData>
<Data Name="ResourceName">Quorum-HDD</Data>
<Data Name="ResourceGroup">Cluster Group</Data>
<Data Name="ResTypeDll">Physical Disk</Data>
</EventData>
</Event>
Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 3/4/2014 11:54:55 AM
Event ID: 1558
Task Category: Quorum Manager
Level: Warning
Keywords:
User: SYSTEM
Computer: ServerDB02.domain.com
Description:
The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
<EventID>1558</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>42</Task>
<Opcode>0</Opcode>
<Keywords>0x8000000000000000</Keywords>
<TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
<EventRecordID>14139</EventRecordID>
<Correlation />
<Execution ProcessID="1684" ThreadID="2180" />
<Channel>System</Channel>
<Computer>ServerDB02.domain.com</Computer>
<Security UserID="S-1-5-18" />
</System>
<EventData>
<Data Name="NodeName">ServerDB02</Data>
</EventData>
</Event>
We don't know if this can happen again; what if it happens on a disk with data?! We don't know if this is related to the virtual disk sharing technology or anything else related to virtualization, but I'm asking here to find out if it is a possibility.
Any ideas are appreciated.
Thanks.
Eduardo Rojas
Hi,
Please refer to the following link:
http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
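Beyond that link, it may be worth pulling the cluster debug log from the guest cluster for the window around the corruption; a minimal sketch (run on one of the guest cluster nodes, destination folder is an example):
Import-Module FailoverClusters
# Dump the last 72 hours (4320 minutes) of the cluster log from every node, in local time
Get-ClusterLog -Destination C:\Temp -TimeSpan 4320 -UseLocalTime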
Best Regards,
Vincent Wu
-
2012 R2 Hyper-V Cluster two node design with an abundance of 1Gbps NICs and FC storage
Hello All,
First post so please be gentle!
We are currently in the process of building/testing the upgrade of two node 2012 R2 Hyper-V cluster.
Two hosts built with Windows Server 2012 R2 Datacenter, which will host approx. 30 VMs.
Shared storage will be a fault-tolerant FC connection.
10, (yes 10!) 1Gbps NICs are available, Intel i350.
Trying to decide between teaming interfaces using native LBFO in the 2008 'style' of un-converged networking, or teaming up most interfaces and using QoS. I can find numerous examples using 10Gbps NICs and converged networking; however, 10Gbps networking isn't an option right now.
Recommendations appreciated.
Thanks
Hi Sir,
>>trying to decide on teaming interfaces using native LBFO and the 2008 'style' of using un-converged networking, or teaming up most interfaces and using QoS.
The following link details the teaming configurations and their applicable scenarios (Server 2012):
http://www.aidanfinn.com/?p=14039
Also please refer to the document for 2012R2 LBFO :
http://www.microsoft.com/en-us/download/details.aspx?id=40319
In Server 2012 R2 there is a new setting, "Dynamic", for the load balancing mode, and it is recommended to use Dynamic as the load balancing mode.
If you can accept 1Gbps max bandwidth for each VM, I would suggest the LBFO mode: Switch Independent / Dynamic / None (all adapters active).
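For what it's worth, a hedged sketch of creating such a team from PowerShell (the team and adapter names are examples; check yours with Get-NetAdapter):
# Switch-independent team with Dynamic load balancing across four example 1Gbps NICs
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic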
Best Regards,
Elton Ji
-
Server 2012 R2 Hyper-V Cluster, VM blue screens after migration between nodes.
I currently have a two node Server 2012 R2 Hyper-V cluster (fully patched) with a Windows Server 2012 R2 iSCSI target.
The VMs run fine all day long, but when I try to do a live/quick migration, the VM blue screens after about 20 minutes. The blue screen reports "Critical_Structure_Corruption".
I'm beginning to think it might be down to the CPUs, as one system has an E5-2640 v2 and the other has an E5-2670 v3. Should I be able to migrate between these two systems with these types of CPU?
Tim
Sorry Tim, is it all 50 that blue screen if live migrated?
Are they all on the latest integration services? Does a cluster validation complete successfully? Are the hosts patched to the same level?
The fact that if you power them off and then migrate them they boot fine does point to a processor incompatibility, with the memory BIN file not being accepted on the new host.
Bit of a long shot, but the only other thing I can think of off the top of my head, if the compatibility option is checked, is checking the location of the BIN file while the VM is running to make sure it's in the same place as the VHD\VHDX in the CSV storage where the VM is located, and not somewhere on the local host like C:\program data\..., as that would stop it being migrated to the new host when the VM is live migrated.
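If it does turn out to be the CPU generation mismatch, enabling processor compatibility mode is the usual workaround; a minimal sketch (the VM must be powered off first, and the VM name is an example):
# Allow migration between hosts with different CPU generations (VM must be off)
Stop-VM -Name "TestVM"
Set-VMProcessor -VMName "TestVM" -CompatibilityForMigrationEnabled $true
Start-VM -Name "TestVM"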
Kind Regards
Michael Coutanche
-
2012 R2 Hyper-V Cluster Replication to Single Non-Cluster Host?
I can't seem to find a straight answer and in my testing, this doesn't seem to work. Is it supported to replicate from a 2012 R2 Hyper-V cluster to a remote, single 2012 R2 Hyper-V non-cluster host?
Yes, it's supported, and yes it works. The cluster uses the Hyper-V Replica Broker and the destination system does not.
Eric Siron Altaro Hyper-V Blog
I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
"Every relationship you have is in worse shape than you think."
To replicate back to the primary site I'll need a replica broker at the DR site however, correct? -
Using Datasources.xml in backup of Hyper-V cluster and standalone Hyper-V hosts
Hi!
I have a DPM 2012 server which I have used to backup standalone Hyper-V hosts and Exchange 2010 server active DAG node. Since yesterday I have added Hyper-V cluster backup, and now I have generated Datasources.xml using .\DSConfig.ps1 on one of the
Hyper-V nodes.
My question is: do I have to manually add the other protected data sources (Exchange, VMs from standalone hosts) to Datasources.xml? Or is this xml used only for CSV clusters?
Thanks in advance for your answers!
Kruno
Hi,
Yes, but the key is wrong - this is required as part of the CSV serialization configuration.
On the DPM server, copy/paste the text below into Notepad, save it as MaxAllowedParallelBackups.reg on the DPM server, then right-click the file and select Merge.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\2.0\Configuration\MaxAllowedParallelBackups]
"Microsoft Hyper-V"=dword:00000001
Please review these two sources to assist you.
http://social.technet.microsoft.com/wiki/contents/articles/17493.protecting-hyper-v-virtual-machines-with-system-center-dpm-2012.aspx
http://blogs.technet.com/b/dpm/archive/2010/12/09/system-center-data-protection-manager-2010-hyper-v-protection-configuring-cluster-networks-for-csv-redirected-access.aspx
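If you would rather not merge a .reg file, the same value from the registry snippet above can be set from an elevated PowerShell prompt on the DPM server; a hedged equivalent:
# Create the key (if missing) and the DWORD value that serializes Hyper-V backups
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\2.0\Configuration\MaxAllowedParallelBackups"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "Microsoft Hyper-V" -Value 1 -PropertyType DWord -Force | Out-Null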
Regards, Mike J. [MSFT]
-
Hyper-v cluster with core switch downtime... what to do?
Is there a way to essentially "pause" the hyper-v cluster and keep things running but do NOT attempt to failover anything for any reason?
We have one Procurve 5412zl switch with two c7000 enclosures. In each c7000 enclosure there are two switches that connect all the blade servers within the enclosure. Those two switches are interconnected internally so they can communicate within the enclosure.
So if the core switch goes down the hyper-v servers in the same c7000 enclosure can still communicate but they will be seperated from the others in the other enclosure.
So we have 4 Hyper-V servers in one enclosure and 3 in another. I'm wondering what will happen if I disconnect the core switch (I need to reboot the switch).
How can I avoid having to shut down everything for this and just tell the Hyper-V cluster not to do anything when the network is lost?
Hi Quadrantids,
" to essentially "pause" the hyper-v cluster and keep things running but
do NOT attempt to failover anything for any reason"
Based on my understanding, you need to keep the cluster running within the same c7000 enclosure; in other words, before you cut the connection between the c7000 enclosures, you may migrate VMs onto the same enclosure to keep them running (I assume that the storage will not be affected by the restart).
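To pre-stage that before the switch reboot, you could live migrate each VM role onto nodes in the surviving enclosure; a minimal sketch with example VM and node names:
Import-Module FailoverClusters
# Live migrate a clustered VM role to a node in the enclosure that keeps connectivity
Move-ClusterVirtualMachineRole -Name "VM01" -Node "EnclosureA-Host1" -MigrationType Live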
Best Regards
Elton Ji
-
We have a 2012 Hyper-V cluster that isn't online and we can't migrate VMs to the other Hyper-V host. We see event errors in the Failover Cluster Manager:
The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Cluster Name
Cluster Group
Network Name
The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Cluster Group
The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
ACMAIL
3604536
Any help or info is appreciated.
Thank you!
Here is the network validation. Any thoughts?
Failover Cluster Validation Report
Node: ACHV01.AshtaChemicals.local - Validated
Node: ACHV02.AshtaChemicals.local - Validated
Started: 8/6/2014 5:04:47 PM
Completed: 8/6/2014 5:05:22 PM
The Validate a Configuration Wizard must be run after any change is made to the configuration of the cluster or hardware.
Results by Category
Name | Result Summary
Network | Warning
Network
Name | Result
List Network Binding Order | Success
Validate Cluster Network Configuration | Success
Validate IP Configuration | Warning
Validate Multiple Subnet Properties | Success
Validate Network Communication | Success
Validate Windows Firewall Configuration | Success
Overall Result
Testing has completed for the tests you selected. You should review the warnings in the Report. A cluster solution is supported by Microsoft only if it passes all cluster validation tests.
List Network Binding Order
Description: List the order in which networks are bound to the adapters on each node.
ACHV01.AshtaChemicals.local
Binding Order | Adapter | Speed
iSCSI3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3 | 1000 Mbit/s
Ethernet 3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter | Unavailable
Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
Mgt | Microsoft Network Adapter Multiplexor Driver | 2000 Mbit/s
iSCSI2 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37 | 1000 Mbit/s
3 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) | Unavailable
ACHV02.AshtaChemicals.local
Binding Order | Adapter | Speed
Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
Mgt | Microsoft Network Adapter Multiplexor Driver #2 | 2000 Mbit/s
iSCSI1 | Broadcom NetXtreme Gigabit Ethernet #7 | 1000 Mbit/s
NIC2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
SLOT 5 2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
iSCSI2 | Broadcom NetXtreme Gigabit Ethernet | 1000 Mbit/s
Validate Cluster Network Configuration
Description: Validate the cluster networks that would be created for these servers.
Network: Cluster Network 1
DHCP Enabled: False
Network Role: Disabled
One or more interfaces on this network are connected to an iSCSI Target. This network will not be used for cluster communication.
Prefix: 192.168.131.0, Prefix Length: 24
Network Interface: ACHV01.AshtaChemicals.local - iSCSI3
DHCP Enabled: False
Connected to iSCSI target: True
IP Address: 192.168.131.113 (Prefix Length: 24)
Network Interface: ACHV02.AshtaChemicals.local - iSCSI2
DHCP Enabled: False
Connected to iSCSI target: True
IP Address: 192.168.131.121 (Prefix Length: 24)
Network: Cluster Network 2
DHCP Enabled: False
Network Role: Internal
Prefix: 192.168.141.0, Prefix Length: 24
Network Interface: ACHV01.AshtaChemicals.local - Mgt - Heartbeat
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 192.168.141.10 (Prefix Length: 24)
Network Interface: ACHV02.AshtaChemicals.local - Mgt - Heartbeat
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 192.168.141.12 (Prefix Length: 24)
Network: Cluster Network 3
DHCP Enabled: False
Network Role: Internal
Prefix: 192.168.140.0, Prefix Length: 24
Network Interface: ACHV01.AshtaChemicals.local - Mgt - LiveMigration
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 192.168.140.10 (Prefix Length: 24)
Network Interface: ACHV02.AshtaChemicals.local - Mgt - LiveMigration
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 192.168.140.12 (Prefix Length: 24)
Network: Cluster Network 4
DHCP Enabled: False
Network Role: Enabled
Prefix: 10.1.1.0, Prefix Length: 24
Network Interface: ACHV01.AshtaChemicals.local - Mgt
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 10.1.1.4 (Prefix Length: 24)
Network Interface: ACHV02.AshtaChemicals.local - Mgt
DHCP Enabled: False
Connected to iSCSI target: False
IP Address: 10.1.1.5 (Prefix Length: 24)
Network: Cluster Network 5
DHCP Enabled: False
Network Role: Disabled
One or more interfaces on this network are connected to an iSCSI Target. This network will not be used for cluster communication.
Prefix: 192.168.130.0, Prefix Length: 24
Network Interface: ACHV01.AshtaChemicals.local - iSCSI2
DHCP Enabled: False
Connected to iSCSI target: True
IP Address: 192.168.130.112 (Prefix Length: 24)
Network Interface: ACHV02.AshtaChemicals.local - iSCSI1
DHCP Enabled: False
Connected to iSCSI target: True
IP Address: 192.168.130.121 (Prefix Length: 24)
Verifying that each cluster network interface within a cluster network is
configured with the same IP subnets.
Examining network Cluster Network 1.
Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
the subnet prefixes of network Cluster Network 1.
Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
the subnet prefixes of network Cluster Network 1.
Examining network Cluster Network 2.
Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
on all the subnet prefixes of network Cluster Network 2.
Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
on all the subnet prefixes of network Cluster Network 2.
Examining network Cluster Network 3.
Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
addresses on all the subnet prefixes of network Cluster Network 3.
Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
addresses on all the subnet prefixes of network Cluster Network 3.
Examining network Cluster Network 4.
Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
subnet prefixes of network Cluster Network 4.
Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
subnet prefixes of network Cluster Network 4.
Examining network Cluster Network 5.
Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
the subnet prefixes of network Cluster Network 5.
Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
the subnet prefixes of network Cluster Network 5.
Verifying that, for each cluster network, all adapters are consistently
configured with either DHCP or static IP addresses.
Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
is disabled.
DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
iSCSI3 matches network Cluster Network 1.
DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
iSCSI2 matches network Cluster Network 1.
Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
is disabled.
DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
- Heartbeat matches network Cluster Network 2.
DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
- Heartbeat matches network Cluster Network 2.
Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
is disabled.
DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
- LiveMigration matches network Cluster Network 3.
DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
- LiveMigration matches network Cluster Network 3.
Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
is disabled.
DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
matches network Cluster Network 4.
DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
matches network Cluster Network 4.
Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
is disabled.
DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
iSCSI2 matches network Cluster Network 5.
DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
iSCSI1 matches network Cluster Network 5.
Validate IP Configuration
Description: Validate that IP addresses are unique and subnets configured correctly.
ACHV01.AshtaChemicals.local
Adapter Name: iSCSI3
Adapter Description: Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3
Physical Address: 00-26-55-DB-CF-73
Status: Operational
DNS Servers:
IP Address: 192.168.131.113 (Prefix Length: 24)
Adapter Name: Mgt - Heartbeat
Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
Physical Address: 78-2B-CB-3C-DC-F5
Status: Operational
DNS Servers: 10.1.1.2, 10.1.1.8
IP Address: 192.168.141.10 (Prefix Length: 24)
Adapter Name: Mgt - LiveMigration
Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
Physical Address: 78-2B-CB-3C-DC-F5
Status: Operational
DNS Servers: 10.1.1.2, 10.1.1.8
IP Address: 192.168.140.10 (Prefix Length: 24)
Adapter Name: Mgt
Adapter Description: Microsoft Network Adapter Multiplexor Driver
Physical Address: 78-2B-CB-3C-DC-F5
Status: Operational
DNS Servers: 10.1.1.2, 10.1.1.8
IP Address: 10.1.1.4 (Prefix Length: 24)
Adapter Name: iSCSI2
Adapter Description: Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37
Physical Address: 78-2B-CB-3C-DC-F7
Status: Operational
DNS Servers:
IP Address: 192.168.130.112 (Prefix Length: 24)
Adapter Name: Local Area Connection* 12
Adapter Description: Microsoft Failover Cluster Virtual Adapter
Physical Address: 02-61-1E-49-32-8F
Status: Operational
DNS Servers:
IP Address: fe80::cc2f:d769:fe24:3d04%23 (Prefix Length: 64)
IP Address: 169.254.2.195 (Prefix Length: 16)
Adapter Name: Loopback Pseudo-Interface 1
Adapter Description: Software Loopback Interface 1
Physical Address:
Status: Operational
DNS Servers:
IP Address: ::1 (Prefix Length: 128)
IP Address: 127.0.0.1 (Prefix Length: 8)
Adapter Name: isatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8}
Adapter Description: Microsoft ISATAP Adapter
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.131.113%16 (Prefix Length: 128)
Adapter Name: isatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E}
Adapter Description: Microsoft ISATAP Adapter #3
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.130.112%18 (Prefix Length: 128)
Adapter Name: isatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE}
Adapter Description: Microsoft ISATAP Adapter #4
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.141.10%22 (Prefix Length: 128)
Adapter Name: isatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1}
Adapter Description: Microsoft ISATAP Adapter #5
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:10.1.1.4%27 (Prefix Length: 128)
Adapter Name: isatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42}
Adapter Description: Microsoft ISATAP Adapter #6
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.140.10%29 (Prefix Length: 128)
Adapter Name: isatap.{7705D42A-1988-463E-9DA3-98D8BD74337E}
Adapter Description: Microsoft ISATAP Adapter #7
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:169.254.2.195%30 (Prefix Length: 128)
ACHV02.AshtaChemicals.local
Adapter Name: Mgt - Heartbeat
Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
Physical Address: 74-86-7A-D4-C9-8B
Status: Operational
DNS Servers: 10.1.1.8, 10.1.1.2
IP Address: 192.168.141.12 (Prefix Length: 24)
Adapter Name: Mgt - LiveMigration
Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
Physical Address: 74-86-7A-D4-C9-8B
Status: Operational
DNS Servers: 10.1.1.8, 10.1.1.2
IP Address: 192.168.140.12 (Prefix Length: 24)
Adapter Name: Mgt
Adapter Description: Microsoft Network Adapter Multiplexor Driver #2
Physical Address: 74-86-7A-D4-C9-8B
Status: Operational
DNS Servers: 10.1.1.8, 10.1.1.2
IP Address: 10.1.1.5 (Prefix Length: 24)
IP Address: 10.1.1.248 (Prefix Length: 24)
Adapter Name: iSCSI1
Adapter Description: Broadcom NetXtreme Gigabit Ethernet #7
Physical Address: 74-86-7A-D4-C9-8A
Status: Operational
DNS Servers:
IP Address: 192.168.130.121 (Prefix Length: 24)
Adapter Name: iSCSI2
Adapter Description: Broadcom NetXtreme Gigabit Ethernet
Physical Address: 00-10-18-F5-08-9C
Status: Operational
DNS Servers:
IP Address: 192.168.131.121 (Prefix Length: 24)
Adapter Name: Local Area Connection* 11
Adapter Description: Microsoft Failover Cluster Virtual Adapter
Physical Address: 02-8F-46-67-27-51
Status: Operational
DNS Servers:
IP Address: fe80::3471:c9bf:29ad:99db%25 (Prefix Length: 64)
IP Address: 169.254.1.193 (Prefix Length: 16)
Adapter Name: Loopback Pseudo-Interface 1
Adapter Description: Software Loopback Interface 1
Physical Address:
Status: Operational
DNS Servers:
IP Address: ::1 (Prefix Length: 128)
IP Address: 127.0.0.1 (Prefix Length: 8)
Adapter Name: isatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2}
Adapter Description: Microsoft ISATAP Adapter #2
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.131.121%21 (Prefix Length: 128)
Adapter Name: isatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0}
Adapter Description: Microsoft ISATAP Adapter #3
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.130.121%22 (Prefix Length: 128)
Adapter Name: isatap.{5A315B7D-D94E-492B-8065-D760234BA42E}
Adapter Description: Microsoft ISATAP Adapter #4
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.141.12%23 (Prefix Length: 128)
Adapter Name: isatap.{2182B37C-B674-4E65-9F78-19D93E78FECB}
Adapter Description: Microsoft ISATAP Adapter #5
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:192.168.140.12%24 (Prefix Length: 128)
Adapter Name: isatap.{104DC629-D13A-4A36-8845-0726AC9AE25E}
Adapter Description: Microsoft ISATAP Adapter #6
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:10.1.1.5%33 (Prefix Length: 128)
Adapter Name: isatap.{483266DF-7620-4427-BE5D-3585C8D92A12}
Adapter Description: Microsoft ISATAP Adapter #7
Physical Address: 00-00-00-00-00-00-00-E0
Status: Not Operational
DNS Servers:
IP Address: fe80::5efe:169.254.1.193%34 (Prefix Length: 128)
Verifying that a node does not have multiple adapters connected to the same
subnet.
Verifying that each node has at least one adapter with a defined default
gateway.
Verifying that there are no node adapters with the same MAC physical address.
Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
ACHV01.AshtaChemicals.local adapter Mgt.
Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
ACHV01.AshtaChemicals.local adapter Mgt.
Found duplicate physical address 74-86-7A-D4-C9-8B on node
ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
Found duplicate physical address 74-86-7A-D4-C9-8B on node
ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
ACHV02.AshtaChemicals.local adapter Mgt.
Found duplicate physical address 74-86-7A-D4-C9-8B on node
ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
ACHV02.AshtaChemicals.local adapter Mgt.
Verifying that there are no duplicate IP addresses between any pair of nodes.
Checking that nodes are consistently configured with IPv4 and/or IPv6
addresses.
Verifying that all nodes IPv4 networks are not configured using Automatic
Private IP Addresses (APIPA).
Validate Multiple Subnet Properties
Description: For clusters using multiple subnets, validate the network
properties.
Testing that the HostRecordTTL property for network name Name: Cluster1 is set
to the optimal value for the current cluster configuration.
HostRecordTTL property for network name Name: Cluster1 has a value of 1200.
Testing that the RegisterAllProvidersIP property for network name Name:
Cluster1 is set to the optimal value for the current cluster configuration.
RegisterAllProvidersIP property for network name Name: Cluster1 has a value of
0.
Testing that the PublishPTRRecords property for network name Name: Cluster1 is
set to the optimal value for the current cluster configuration.
The PublishPTRRecords property forces the network name to register a PTR in
DNS reverse lookup record IP address to name mapping.
Validate Network Communication
Description: Validate that servers can communicate, with acceptable latency,
on all networks.
Analyzing connectivity results ...
Multiple communication paths were detected between each pair of nodes.
Validate Windows Firewall Configuration
Description: Validate that the Windows Firewall is properly configured to
allow failover cluster network communication.
The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV01.AshtaChemicals.local - iSCSI3'.
The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV01.AshtaChemicals.local - Mgt'.
The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV01.AshtaChemicals.local - iSCSI2'.
The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
allow network communication between cluster nodes.
The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV02.AshtaChemicals.local - Mgt'.
The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV02.AshtaChemicals.local - iSCSI2'.
The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
allow network communication between cluster nodes over adapter
'ACHV02.AshtaChemicals.local - iSCSI1'.
The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
allow network communication between cluster nodes.
-
Add Node to Hyper-V Cluster running Server 2012 R2
Hi All,
I am in the process to upgrade our Hyper-V Cluster to Server 2012 R2 but I am not sure about the required Validation test.
The situation at the moment: a 1-node cluster running Server 2012 R2 with 2 CSVs and a quorum disk. An additional server is prepared to add to the cluster. One CSV is empty and could be used for the validation test. On the other CSV, 10 VMs are running in production.
So when I start the Validation wizard I can select specific CSVs to test, which makes sense ;-) But the warning message is not clear to me: "TO AVOID ROLE FAILURES, IT IS RECOMMENDED THAT ALL ROLES USING CLUSTER SHARED VOLUMES BE STOPPED BEFORE THE STORAGE IS VALIDATED". Does it mean that ALL CSVs will be tested and switched offline during the test, or just the CSV that I have selected in the options? I definitely have to avoid the CSV where all the VMs are running being switched offline, and also the configuration being corrupted after losing the CSV where the VMs are running.
Can someone confirm that ONLY the selected CSV will be used for the Validation test ???
Many thanks
Markus
Hi,
The validation will test only the selected CSV storage; if you have guest VMs running on a CSV being tested, they must be shut down or saved before you validate that CSV.
Several tests will actually trigger failovers and move the disks and groups to different cluster nodes which will cause downtime, and these include Validating Disk Arbitration,
Disk Failover, Multiple Arbitration, SCSI-3 Persistent Reservation, and Simultaneous Failover.
So if you want to test a majority of the functionality of your cluster without impacting availability, exclude these tests.
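As a concrete (hedged) sketch, you can point the storage tests at just the empty CSV, or skip the storage category entirely; the disk resource name is an example:
Import-Module FailoverClusters
# Limit storage validation to the empty CSV disk only
Test-Cluster -Disk "Cluster Disk 2"
# Or run everything except the storage tests
Test-Cluster -Ignore "Storage"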
The related information:
Validating a Cluster with Zero Downtime
http://blogs.msdn.com/b/clustering/archive/2011/06/28/10180803.aspx
Hope this helps.
-
Adding nodes to Windows Server 2008 R2 Hyper-V Cluster..
Currently we have a 3 node Windows Server 2008 R2 Hyper-V cluster in production. There are about 3 terabytes worth of VMs running across these nodes.
It is over-committed, so I've set up two new nodes to add to the cluster.
I've done this before in a SQL cluster but never a Hyper-V cluster.
If I don't run validation when adding the nodes, will there be downtime?
The quorum is set up for disk majority, and everything that needs to be identical on all nodes is. Shared storage is recognized and ready on the new nodes. I've gone through every checklist that Microsoft has. I'm just curious if the virtual machines will go offline on the current nodes when I add the two new nodes.
Everything is identical down to the wsus updates installed. From networking to storage everything is perfect.
I don't want to run validation as I know that'll take everything offline.
Hi,
It is recommended to run a validation test. You can select a custom test (skip storage).
When you add the new node to the existing cluster, it will not bring down the existing VMs.
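If you script it, something like the following should add the nodes without touching the shared disks (node names are examples; the -NoStorage switch is documented for the newer FailoverClusters module, so verify it with Get-Help Add-ClusterNode on 2008 R2):
Import-Module FailoverClusters
# Join both new nodes without bringing cluster disks along, leaving running VMs untouched
Add-ClusterNode -Name "NewNode1","NewNode2" -NoStorage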
Lai (My blog:- http://www.ms4u.info) -
I can't find failover cluster management after creating hyper-v cluster on SCVMM 2012 R2
I've created a Hyper-V cluster on SCVMM 2012 R2 but I can't find Failover Cluster Manager to move storage resources. All hosts show the Hyper-V role and Failover Clustering feature installed. The disk witness in the quorum is good, same as for the other CSV LUN. Please help me Microsoft. Thank you.
The management consoles do not get installed while building the cluster through SCVMM. However, it's not mandatory to have the management tools on the server. You can use a different machine with the management tools installed and connect to this cluster remotely.
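If you would still like the console on the hosts themselves, installing the RSAT tools is enough; a minimal sketch (run on each node; feature names as in Server 2012 R2):
# Install Failover Cluster Manager and the clustering PowerShell cmdlets
Install-WindowsFeature RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell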
Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.
InsideVirtualization.com -
Best design for HA Fileshare on existing Hyper-V Cluster?
Have a three node 2012 R2 Hyper-V cluster. The storage is an HP MSA 2000 G3 SAS block storage array with CSVs.
We have a fileserver for all users running as VM on the cluster. Fileserver availability is important and it's difficult to take this fileserver down for the monthly patching. So we want to make these file services HA. Nearly all clients are Windows 8.1,
so SMB 3 can be used.
What is the best way to make these file services HA?
1. The easiest way would probably be to migrate these fileserver resources to a dedicated LUN on the MSA 2000, and to add a "general fileserver role" to the existing Hyper-V cluster. But is it supported and a good solution to provide Hyper-V VMs and HA file services on the same cluster (even when the performance requirements for file services are not high)? Or does this configuration affect the Hyper-V VM performance too much?
2. Is it better to create a two node guest cluster with "Shared VHDX" for the file services? I'm not sure if this would even work, because we had "Persistent Reservation" warnings when creating the Hyper-V cluster with the MSA 2000. According to "http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx", these warnings are normal with block storage and can be ignored as long as we never want to create Windows storage pools or storage spaces. But the Hyper-V MMC shows that "Shared VHDX" works with "persistent reservations".
3. Are there other possibilities to provide HA file services with this configuration without buying new HW? (Remark: DFSR with two independent fileservers is probably not a good solution; we have a lot of data that changes frequently.)
Thank you in advance for any advice and recommedations!
Franz
Hi Franz,
If you are not going to be using Storage Spaces in the Cluster, this is a warning that you can safely ignore.
It passes the normal SCSI3 Persistent Reservation tests, so you are good with those. Additionally, if you install Cluster-Aware Updating (CAU) on the cluster, it will automatically install the cluster updates.
The related KB:
Requirements and Best Practices for Cluster-Aware Updating
https://technet.microsoft.com/en-us/library/jj134234.aspx
Cluster-Aware Updating: Frequently Asked Questions
https://technet.microsoft.com/en-us/library/hh831367.aspx
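If you go with option 1, adding the role from PowerShell would look roughly like this (the role name, disk resource, and IP address are examples for your environment):
Import-Module FailoverClusters
# Create a general-use file server role on a dedicated clustered disk
Add-ClusterFileServerRole -Name "FS01" -Storage "Cluster Disk 3" -StaticAddress 10.0.0.50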
I’m glad to be of help to you!