More Failover Cluster CSV problems

Windows 2008 R2 Failover Cluster for Hyper-V using CSV.
So I come into work today and one of the two nodes in our Hyper-V cluster is reporting a memory DIMM error, so I'm anticipating that we'll have to bring that server down. I know from past experience that if the server that goes down is the owner of any of the LUNs, and the other server then goes down for any reason, the entire cluster craps out when the surviving server comes back up while the owner is still down.
In the past I've been able to use the Failover Cluster MMC to right-click a LUN and transfer ownership to the other node. I did that today on the first of my four LUNs, and instead of transferring over it now says "Failed"; when I right-click it, the only action available is "Help". I'm doing my googling to figure out what to do, but I'm sure this is one of the problems I have...
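For what it's worth, the same ownership move can be attempted from PowerShell on a 2008 R2 node, which sometimes returns a more useful error than the MMC does. A minimal sketch, assuming the LUNs are Cluster Shared Volumes as the title says; disk and node names are placeholders:

    Import-Module FailoverClusters

    # Show the CSVs, their current owners and state
    Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode

    # Try moving ownership of the failed volume to the healthy node
    Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node NODE2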
This topic first appeared in the Spiceworks Community

Similar Messages

  • HyperV 2012 R2 Failover cluster, HV problem, all VMs restart

    Hello, I have a two-node failover cluster: Hyper-V 2012 with multipath SAS storage on an MSA2000. One node (NODE2) had a hardware problem and shut down unexpectedly. When that happened, NODE1 restarted all the VMs; is that normal? The cluster was configured and checked with the cluster validation tool. There is no witness. I don't clearly understand what happens when one node crashes. KR.

    As Eric said, the cluster will start the VMs in a crash-consistent state on the non-crashed host.
    But from your description I take it you're also seeing the guests on the non-crashed host restart. If that's the case, then yes, I have seen this happen before. It can happen when you're not using a quorum witness, because only one node has a vote. I would recommend you create a witness: carve out 1 GB on your MSA2000 and use it as a disk witness, or if you have a server outside the cluster you could use a file share witness (file share is my preference). Once you have a witness in play you will see all of your hosts having a vote. Look in the cluster manager at the Nodes section; you should see a vote column. Currently it will say 1/0, and once the witness is created it will show 1/1.
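    A minimal PowerShell sketch of the two witness options described above; the share path and disk resource name are placeholders:

      # File share witness on a server outside the cluster
      Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\ClusterWitness"

      # ...or, after carving out ~1 GB on the MSA2000 and adding it as a cluster disk, a disk witness
      Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

      # Confirm every node now has a vote
      Get-ClusterNode | Format-Table Name, NodeWeight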

  • CSV V/s Pass through disks with HV 2012 R2 failover cluster

    Hi
    We are using HV 2012 R2 failover cluster with CSV. We found some articles saying pass through disks outperforms CSV. Is this correct?
    Regards
    LMS

    "The juice isn't worth the squeeze" (c) ...
    Tim is 200% correct here: any performance gains you might get (if any) will fade into darkness compared to the management burden you'll take on with pass-through disks (problems with failover and VM migration, and no real support from the major VM backup vendors). To make a long story short: don't do it.
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • How to assign SMB storage to CSV in HV failover cluster?

    I have a Hyper-V Cluster that looks like this:
    Clustered-Hyper-V-Diagram
    2012 R2 Failover Cluster
    2 Hyper-V nodes
    iSCSI Disk Witness on isolated "Cluster Only" Network
    "Cluster and Client" Network with nic-team connectivity to 2012 R2 File Server
    Share configured using: server manager > file and storage services > shares > tasks > new share > SMB Share - Applications > my RAID 1 volume.
    My question is this: how do I configure a Clustered Shared Volume?  How do I present the Shared Folder to the cluster?
    I can create/add VMs from Cluster Manager > Roles > Virtual Machines using \\SMB\Share for the location of the vhd...  but how do I use a CSV with this config?  Am I missing something?

    Right-click one of the disks that you assigned to the cluster as available storage.
    I don't yet have any disks assigned to the cluster as available storage.
    Just for grins, I added an 8 GB iSCSI LUN and added it as a CSV:
    PS C:\> get-clusterresource
    Name                State   OwnerGroup     ResourceType
    ----                -----   ----------     ------------
    Cluster IP Address  Online  Cluster Group  IP Address
    Cluster Name        Online  Cluster Group  Network Name
    witness             Online  Cluster Group  Physical Disk
    PS C:\> Get-ClusterSharedVolume
    Name     State   Node
    ----     -----   ----
    test8Gb  Online  CLUSTERNODE01
    All well and good, but from what I've read elsewhere...
    SMB 3.0 via a 2012 File server can only be added to a Hyper-V CSV cluster using the VMM component of System Center 2012.  That is the only way to import an SMB 3 share for CSV storage usage.
    http://community.spiceworks.com/topic/439383-hyper-v-2012-and-smb-in-a-csv
    http://technet.microsoft.com/en-us/library/jj614620.aspx
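    For reference, a minimal PowerShell sketch of the "available storage to CSV" path mentioned in the reply above; resource and share names are placeholders, and the SMB path can also be given directly when creating a VM with no CSV involved:

      # Add any disks visible to all nodes as available cluster storage, then promote one to a CSV
      Get-ClusterAvailableDisk | Add-ClusterDisk
      Add-ClusterSharedVolume -Name "Cluster Disk 2"

      # Alternatively, point the VM's storage straight at the SMB 3.0 share
      New-VM -Name "TestVM" -MemoryStartupBytes 2GB -NewVHDPath "\\SMB\Share\TestVM.vhdx" -NewVHDSizeBytes 60GB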

  • Adding more RAM to all 3 nodes in a hyperV failover cluster, re-validate config?

    hey spiceheads,
    I have a 3-node Hyper-V failover cluster running Hyper-V Server 2012 R2.
    Two of the servers have 96 GB of RAM and the other has 120 GB. I'm going to even all three servers out at 128 GB.
    Once this is done, do I need to re-validate? If so, would re-validation take my cluster completely offline?
    Thanks,
    ceez

    No issues with 2012 R2 also. Just add the RAM and you will be fine.
    This topic first appeared in the Spiceworks Community
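    As for the re-validation question, a hedged sketch: the storage tests are the part that takes disks offline, so validation is commonly re-run with those excluded on a live cluster:

      # Re-validate everything except the storage tests
      Test-Cluster -Ignore "Storage"

      # Or name only the categories relevant after the RAM change
      Test-Cluster -Include "Inventory", "Network", "System Configuration"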

  • Failover Cluster Manager 2012 Showing Wrong Disk Resource - Fix by Powershell

    On Server 2012 Failover Cluster Manager, we have one Hyper-V virtual machine that is showing the wrong storage resource.  That is, it is showing a CSV that is in no way associated with the VM.  The VM has only one .vhd, which exists on Volume 16. 
    The snapshot file location and smart paging file are also on Volume 16.  This much is confirmed by using the Failover Cluster Manager to look at the VM settings.  If you start into the "Move Virtual Machine Storage" dialog, you can see
    the .vhd, snapshots, second level paging, and current configuration all exist on Volume 16.  Sounds good.
    However, if you look at the resources tab for the virtual machine, Volume 16 is not listed under storage.  Instead, it says Volume 17, which is a disk associated with a different virtual machine.  That virtual machine also (correctly) shows Volume
    17 as a resource.
    So, if everything is on Volume 16, why does the Failover Cluster Manager show Volume 17, and not 16, as the Storage Resource?  Perhaps this was caused by an earlier move with the wrong tool (Hyper-V manager), but I don't remember doing this.
    In Server 2008 R2, there was a "refresh virtual machine configuration" option to fix this, but it doesn't appear in Failover Cluster Manager in Server 2012.
    Instead, the only way I've found to fix the problem is in PowerShell.
      Update-ClusterVirtualMachineConfiguration "put configuration name here in quotes"
    You would think that this would be an important enough operation to include GUI support for it, possibly in the "More Actions" right-click action on the configuration file.
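    A minimal usage sketch of the cmdlet above; the configuration resource name is a placeholder, so substitute whatever Get-ClusterResource reports for the affected VM:

      # Find the clustered VM configuration resources
      Get-ClusterResource | Where-Object { $_.ResourceType -eq "Virtual Machine Configuration" }

      # Refresh the one that shows the wrong disk dependency
      Update-ClusterVirtualMachineConfiguration -Name "Virtual Machine Configuration VM1"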

    Hi,
    Thanks for sharing your experience!
    Your experience and solution can help other community members facing similar problems.
    Please copy your post and create a new reply, then we can mark the new reply as answer.
    Thanks for your contribution to Windows Server Forum!
    Have a nice day!
    Lawrence
    TechNet Community Support

  • Hyper-V Failover Cluster virtual guests suddenly reboot

    The environment is Server 2012 R2 using dual clusters--a Hyper-V Failover Cluster running guest application virtual machines and a Scale-Out File Server Cluster using Tiered Storage Spaces which are used to supply SMB3 shares
    for Quorum and CSV. Has anyone had this problem?

    Anything relevant in the host or guest event logs? I would also check the cluster event logs to see if there are any indications there as well.
    Does the guest go down hard or gracefully reboot?
    Need more info.
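    A hedged starting point for that log digging, assuming the 2012 R2 FailoverClusters module and standard event cmdlets; the path and time window are placeholders:

      # Generate the cluster debug log from every node
      Get-ClusterLog -Destination "C:\Temp" -UseLocalTime

      # Pull recent cluster events from the System log around the time of the reboots
      Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-FailoverClustering'; StartTime = (Get-Date).AddDays(-1) } |
          Select-Object TimeCreated, Id, Message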
    Andy Syrewicze
    Come talk more about Hyper-V and the Microsoft Server Stack at
    Syrewiczeit.com and the Altaro Hyper-V Hub!
    Posts are my own and in no way reflect the views of my employer or any other entity for which I produce technical content.

  • Microsoft update KB 3002657 and 2008 R2 failover cluster for virtualization

    After installing Microsoft update KB 3002657 on my Windows 2008 R2 failover cluster for virtualization, the nodes in the cluster lost their connection to the CSV and all my VMs were moved to the node owning the volume.
    I lost a whole day solving that problem.
    Should I keep that update off the cluster nodes, or does anyone have a solution for it?
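    A minimal sketch for checking whether the update is present on each node before deciding; node names are placeholders, and any node should have its VMs moved off before an update is removed:

      # Is KB3002657 installed on the cluster nodes?
      Invoke-Command -ComputerName NODE1, NODE2 { Get-HotFix -Id KB3002657 -ErrorAction SilentlyContinue }

      # If it has to come off a node (run locally on that node after draining its VMs)
      wusa.exe /uninstall /kb:3002657 /norestart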

    Hi a3pl,
    Unfortunately, the available information is not enough to get a clear view of the behavior from the cluster's perspective. Please give us more information, such as the failover cluster validation errors and the failover error event IDs; with the current information it is difficult to tell which part may be causing the issue. We strongly suggest you install the following updates when you use failover clustering:
    Recommended hotfixes and updates for Windows Server 2008 R2-based server clusters
    http://support.microsoft.com/en-us/kb/980054
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Cannot add multiple members of a failover cluster to a DFSR replication group

    Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10 TB+) local RAID storage arrays, but given the physical separation there is no physical shared storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL app data storage is using a SOFS (scale out file server) network share on separate
    servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
    I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10 TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise I can't run an HA single DFSR target for the same reason (no shared storage) - and in any case, this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating file storage across servers with non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, so long as you are not trying to replicate folders in a shared storage target accessible to both hosts (which would obviously be a problem)? So long as you are not replicating
    folders in c:\ClusterStorage, there should be no conflict. 
    Is there a workaround or alternative solution?

    Yes, I read that series. But it doesn't address the issue. The article is about making a DFSR target highly available. That won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL availability groups, etc.). DFSR should not interfere with this, but it
    is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case and I can't see an alternative solution in the case where you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything - VMs, SQL, and file folders. But it seems that this artificial
    barrier is causing us to need to choose either clustered services or DFSR between nodes. But I can't see any rationale to block DFSR between cluster nodes - especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.

  • Cannot migrate VM in VMM but can in Failover Cluster Manager network adapters network optimization warning

    I have a 4-node Server 2012 R2 Hyper-V cluster and manage it with VMM 2012 R2. I just upgraded the cluster from 2012 RTM to 2012 R2 last week, which meant pulling 2 nodes out of the existing cluster, creating the new R2 cluster, running the Copy Cluster Roles wizard since the VHDs are stored on CSVs, and then adding the other 2 nodes, after installing R2 on them, back into the cluster. After upgrading the cluster I am unable to migrate some VMs from one node to another. When trying to do a live migration, I get the following notifications under the Rating Explanation tab:
    Warning: There currently are not network adapters with network optimization available on host Node7.
    Error: Configuration issues related to the virtual machine VM1 prevent deployment and must be resolved before deployment can continue.
    I get this error for 3 out of the 4 nodes in the cluster. I do not get this error for Node10, and I can live migrate to that node in VMM; it has a green check for network optimization, the others do not. These errors only affect VMM: in Failover Cluster Manager, I can live migrate any VM to any node in the cluster without any issues. In the old 2012 RTM cluster I used to get the warning, but I could still migrate the VMs anywhere I wanted to. I've checked the network adapter settings in VMM on VM1 and they are the same as on VM2, which can migrate to any host in VMM. I then checked the network adapter settings of the VMs from Failover Cluster Manager, and under Hardware Acceleration VM1 has "Enable virtual machine queue" and "Enable IPsec task offloading" checked. I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still could not live migrate VM1. Why is this an issue now when it wasn't on the old cluster? How do I resolve the issue? VMM is useless if I can't migrate all my VMs with it.

    I checked the settings on the physical nics on each node and here is what I found:
    Node7: Virtual machine queue is not listed (Cannot live migrate problem VM's to this node in VMM)
    Node8: Virtual machine queue is not listed (Cannot live migrate problem VM's to this node in VMM)
    Node9: Virtual machine queue is listed and enabled (Cannot live migrate problem VM's to this node in VMM)
    Node10: Virtual machine queue is listed and enabled (Live Migration works on all VMs in VMM)
    From Hyper-V or Failover Cluster Manager I can see in the network adapter settings of the VMs, under Hardware Acceleration, that these two settings are checked: "Enable virtual machine queue" and "Enable IPsec task offloading". I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still cannot live migrate the problem VMs.
    It seems to me that if I could adjust those VM settings from VMM, it might fix the problem. Why isn't that an option in VMM?
    Do I have to rebuild the VMM server with a new DB and, before adding the Hyper-V cluster, uncheck those two settings on the VMs from Hyper-V Manager? That would be a lot of unnecessary work, but I don't know what else to do at this point.
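    A hedged sketch of checking and changing those offload settings from PowerShell instead of the GUI; VM and adapter names are placeholders, and the last line assumes the VMM 2012 R2 shell:

      # On each host: do the physical NICs expose VMQ at all?
      Get-NetAdapterVmq

      # On the owning host: turn off VMQ and IPsec task offload for the problem VM
      Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 0 -IPsecOffloadMaximumSecurityAssociation 0

      # Then refresh the VM in VMM so it re-reads the hardware properties
      Get-SCVirtualMachine -Name "VM1" | Read-SCVirtualMachine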

  • Failover Cluster Network Name Failed and Can't be Repaired

    I have an issue that seem to be a different problem than any others have encountered.
    I've scoured everything I can find and nothing has fixed my problem.
    The problem starts with the common problem of the cluster network name failing on my 2 node server 2012 file server cluster.  The computer object was still in AD and appeared to be fine so it was not the common problem of the object
    getting deleted somehow.  At the time, there was no other object with that name in the recycling bin, so I don't think it was mistakenly deleted and quickly recreated to cover any tracks, so to speak.
    Following one guide, I tried to find the registry key that corresponded with the GUID of the object, but neither node in the cluster had it in its registry (which may be part of the problem).
    Since it was in the failed state, I tried to do the repair on the object to no avail.
    We run a "locked down" DC environment so all computer objects have to be pre-provisioned.  They were all pre-provisioned successfully and successfully assigned during cluster creation.  The cluster was running with no issues for a month
    or so before this problem came up.
    When I do a repair on the object while taking diagnostic logs the following 4609 error appears:
    The action 'Repair' did not complete. - System.ApplicationException: An error occurred resetting the password for 'Cluster Name'. ---> System.ComponentModel.Win32Exception: Unknown error (0x80005000)
    There appears to be a corresponding 4771 error with a failure code 0x18 that comes from the security log of the DC that states there was a Kerberos pre-authentication failure for the cluster network name object (Domain\Clustername$)
    I believe this is what is causing the repair failure.  All the information I found related to security error 4771 was either a bad credentials given for a user account or the fix was to reconnect the computer to the domain.  I can't seem to find
    a way to do this with the cluster network name.  If there's a way please let me know.
    I've tried a number of things, like resetting the object, disabling it, deleting and creating a new object with the same name, deleting that new object and recovering the original, etc...
    Can anyone shed some light on what is going on and hopefully how to fix it, other than rebuilding the cluster? I'm quite close to just tearing it down and building it back up, but am hesitant because this cluster is currently in production...
    Any help would be appreciated

    Hi,
    I haven't seen an issue quite like yours. Based on my experience, the 4609 error is often caused by a CSV disk issue, and the 0x80005000 error is sometimes caused by a repetitive computer object in the OU. Please check those related areas, or run the validation test and then post the error information.
    Although I do have a CSV, there don't seem to be any problems with it, and it was running just fine for a month or so before the problem started. I double-checked and there are no duplicate computer objects; maybe I don't understand what you mean by repetitive, could you explain further?
    The cluster validates successfully with a few warnings:
    Validating cluster resource Name: DT-FileCluster. This resource is marked with a state of 'Failed' instead of 'Online'. This failed state indicates that the resource had a problem either coming online or had a failure while it was online. The event logs and cluster logs may have information that is helpful in identifying the cause of the failure.
    - This is because the cluster name is in the failed state.
    Validating the service principal names for Name: DT-FileCluster. The network name Name: DT-FileCluster does not have a valid value for the read-only property 'ObjectGUID'. To validate the service principal name the read-only private property 'ObjectGuid' must have a valid value. To correct this issue make sure that the network name has been brought online at least once. If this does not correct this issue you will need to delete the network name and re-create it.
    - This is definitely related to the problem; the GUID probably got removed when we attempted a fix by resetting the object and trying the repair from Failover Cluster Manager.
    The user running validate does not have permissions to create computer objects in the 'ad.unlv.edu' domain.
    - This is correct; we run a restricted domain. I have a delegated OU that I can pre-provision accounts in. The account was pre-provisioned successfully and was at one point set up and working just fine.
    There are no other errors nor warnings.
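    A small sketch for poking at the network name resource from PowerShell while troubleshooting; 'Cluster Name' is the resource named in the error above, and the read-only ObjectGUID that validation complains about shows up among the private properties:

      # Dump the private properties of the cluster network name resource
      Get-ClusterResource "Cluster Name" | Get-ClusterParameter

      # After any AD-side fix, try bringing the name online again and pull the cluster log
      Start-ClusterResource "Cluster Name"
      Get-ClusterLog -Destination "C:\Temp"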

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current Configuration:
    3-node cluster, using clustered shared storage and about 22 VMs. The host servers are running 2012 Datacenter while all guests are running 2012 Standard. The SAN is EqualLogic and we are using HIT Kit 4.5.
    I have a CSV that is running out of space, so I created another CSV so that I could move some of the VMs to a new home. I tested this by creating a test VM and moving it successfully 3 times. I then moved an actual LIVE VM and, while it seemed to move OK, it will now not start. The message is "a disk read error occurred Press ctrl+alt+del to restart". I moved the test VM and it failed as well.
    I have read several things about this, but nothing seems to relate to my specific issue. I have verified that VSS is working and free of errors as well. From the Settings menu for the VM, if I select "Inspect" on the drive, the properties all look fine. It is a VHDX and both the current file size and maximum disk size seem correct.
    The VM's were moved using the "move - virtual machine storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.

    Let's see if I can answer all of those, and I appreciate the brainstorming. This really needs to work correctly.
    1. The storage is moving.
    2. The VMs and SAN are on the same device.
    3. No, my Cluster Shared Volume (CSV) is out of room (more on that later).
    4. No, I actually have two SANs grouped together. However, I'm moving the VMs from one CSV to another CSV on the same SAN. The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, an EqualLogic PS6010, is not involved in any way except for being in the same SAN group.
    5. No errors during the move; it took about 5-10 minutes with no error messages. Note that I did a test and it worked great 3 times. Now both a live VM and the test VM are doing the same thing.
    6. No, the machine is not too large. The test machine was a 50 GB drive, just 2012 Standard installed with updates. The live VM was a 75 GB VM that was my Trend Micro server, our anti-virus host.
    7. Expand the existing CSV? Yes, I should be able to, but there is an issue there. The volume was expanded correctly: EqualLogic sees the added space, Failover Cluster Manager sees the added space, but Disk Management only sort of does. When looking at Disk Management, there are two areas that tell you a little about the drive, the top part and the bottom part. The top part only shows 500 GB, the original size, while the bottom part says it is 1 TB in size. I called Dell's technical support and, after they looked at it, the technician told me they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled CSV. I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance. The Trend VM was running on CSV-1 and working fine.
    I must admit that the test VM was on CSV-2. I moved the test VM from CSV-2 to CSV-3 and back several times with no errors. The Trend server was on CSV-1 and was moved to CSV-3, and it failed. Again, I then moved the test VM from CSV-2 to CSV-3 and it failed the same way. I could not test the test VM on CSV-1 because CSV-1 does not have enough space.
    8. I did disable the network on the VM to see if that mattered; it did not.
    9. I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully. Once I am able to test that suggestion I will post the results as well.
    Again, thanks for all the suggestions and comments; I'd rather have lots to look at and try. I hope I answered them well enough.
    Kenny
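    Since point 9 involves attaching the moved VHDX elsewhere, a hedged sketch for inspecting it from a host first; the path is a placeholder:

      # Basic integrity check on the moved file
      Test-VHD -Path "C:\ClusterStorage\Volume3\TestVM\TestVM.vhdx"

      # Mount it read-only and see whether the guest partitions are visible at all
      Mount-VHD -Path "C:\ClusterStorage\Volume3\TestVM\TestVM.vhdx" -ReadOnly -Passthru | Get-Disk | Get-Partition
      Dismount-VHD -Path "C:\ClusterStorage\Volume3\TestVM\TestVM.vhdx"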

  • Failover Cluster - GHOST VMS / ROLES

    I mean Ghost as in a mysterious non-existent machine, not the old Norton program. I've periodically had random cluster crashes, mainly due to my own negligence. 99% of the time everything comes back up normally. However, periodically a machine will have very strange symptoms that I'm unable to resolve. The only resolution I've found is to create a new VM and link it to the old VHD. A description of the machines with this issue:
    - Shown in the Failover Cluster roles list as Running, but I cannot connect, turn off, shut down, etc.
    - If I log in to the host machine for the VM and open Hyper-V Manager, the machine does not exist. The only place this machine seems to exist is in Failover Cluster Manager.
    - No details available on the Summary tab; the machine doesn't actually appear to be running despite what the console says.
    - Under the Resources tab for that machine it shows the VM as Running, but the VM Configuration as Failed.
    - Unable to bring the configuration back online. The error is "The group or resource is not in the correct state to perform the requested operation".
    I've seen other vague mentions of null context pointers or something along those lines. I've tried those users' methods to no avail. How can I fix these, or at least remove them once I've recreated the machine?
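    A hedged sketch of how the leftover role could be inspected and removed once the VM has been recreated; the role name is hypothetical, and removing the group only deletes the cluster resources, not the VHDs:

      # Find the orphaned role and its resources
      Get-ClusterGroup
      Get-ClusterGroup -Name "GhostVM" | Get-ClusterResource

      # Remove the stale role from the cluster after the VM has been rebuilt elsewhere
      Remove-ClusterGroup -Name "GhostVM" -RemoveResources -Force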

    Hi,
    Unfortunately, the available information is not enough to get a clear view of the behavior. Could you provide more information about your environment: the server version where the problem occurs, the system log entries recorded when the problem happens, and screenshots, which are the most useful information.
    If you are using a Server 2012 R2 failover cluster, please install the following update:
    Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
    http://support.microsoft.com/kb/2920151
    More information:
    Event Logs
    http://technet.microsoft.com/en-us/library/cc722404.aspx
    Thanks.

  • Failover cluster not cleanly shutting down service

    I've got a two node 2008 R2 failover cluster.  I have a single service being managed by it that I configured just as a generic service.  The failover works perfectly when the service is stopped, or when one of the machines goes down, and the immediate
    failback I have configured works perfectly in both scenarios as well.
    However, there's an issue when I take the networking down on the preferred owner of the service.  As far as I can tell (this is the first time I've tried failover clustering, so I'm learning), when I take the networking down, the cluster service shuts
    down, and in turn shuts down the service I've told it to manage.  At this point, when the services aren't running, the service fails over to the secondary as intended.  The problem shows up when I turn the networking back on.  The service tries
    and fails to start on the primary (as many times as I've configured it to try), and then eventually gives up and goes back to the secondary.
    The reason for this, examining logs for the service, is that the required port is already in use.  I checked some more, and sure enough, when I take the networking offline the service gets shut down, but the executable is still running.  This is
    repeatable every time.  When I just stop the service, though, the executables go away.  So it's something to do specifically with how the managed service gets shut down *when it's shut down due to the cluster service stopping*.  For some reason
    it's not cleaning up that associated executable.
    Any ideas as to why this is happening and how to fix/work around it would be extremely welcome.  Thank you!

    Try to generate a cluster log using cluster log /g /copy:<path to a local folder>. You might need to bump up log verbosity using cluster /prop ClusterLogLevel=5 (you can check the current level using cluster /prop).
    You can also look at the SCM diagnostic channel in the event viewer. Start eventvwr and wait for the clock icon on Application and Services Logs to go away. Once the clock icon is gone, select this entry and in the menu check Show Analytic and Debug Logs.
    Now expand to the SCM provider located at
    Application and Services Logs\Microsoft\Service Control Manager Performance Diagnostic Provider\Diagnostic
    or Microsoft-Windows-Services/Diagnostic.
    Enable the log, run a repro, disable the log. After that you should see events from the SCM showing your service's state transitions.
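    The same steps in command form, as run from an elevated prompt on a 2008 R2 node; the copy path is a placeholder and the analytic channel name is the one given above:

      cluster log /g /copy:C:\Temp\ClusterLogs
      cluster /prop ClusterLogLevel=5

      # Enable the SCM analytic channel, reproduce the failover, then disable and read it
      wevtutil sl Microsoft-Windows-Services/Diagnostic /e:true
      wevtutil sl Microsoft-Windows-Services/Diagnostic /e:false
      Get-WinEvent -LogName Microsoft-Windows-Services/Diagnostic -Oldest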
    The terminate parameters do not seem to be configurable. I can think of two ways of fixing the issue:
    - Write your own cluster resource DLL where you can implement your own policies. This would be a place to start: http://blogs.msdn.com/b/clustering/archive/2010/08/24/10053405.aspx.
    - This option assumes you cannot change the source code of the service to kill orphaned child processes on startup, so you have to clean up by some other means. Create another service and make your service dependent on this new service. The new service must be much faster at responding to SCM commands. On start of this service, use PSAPI to enumerate all processes running on the machine and kill the orphaned child processes. You should also be able to achieve something similar using a GenScript resource plus a VB script that does the cleanup.
    Regards, Vladimir Petter, Microsoft Corporation

  • Virtual Domain Controllers in 2012 Failover Cluster. Time Skew

    Hi All,
    Not sure if this is the correct space for this topic; however, I'll give it a go anyway.
    We have a two-host (HP DL385) Windows Server 2012 failover cluster.
    Storage is provided by a 12-bay NAS with iSCSI connections (this caters for the CSVs and the quorum).
    We are running two virtual domain controllers (2008 R2).
    The issue we experience is that if the cluster goes down, when it comes back online the time on the domain controllers (one or the other, or both) skews by anywhere up to 3 days, which causes havoc for our office until we can resync clocks with the PDCe.
    The Time Synchronisation integration service is disabled on both domain controllers.
    A few days back we needed to reboot the storage on the cluster, and the tasks performed were as follows:
    -Power off all virtual machines (Graceful Shutdown)
    -Put all CSV's into maintenance mode
    -Offline Disk Witness to Quorum
    -Rebooted Storage (Waited until it came back online)
    -Online Quorum Storage (Successful)
    -Bring CSV's out of maintenance mode (Successful & Browsable)
    -Power on all Virtual Machines (Successful)
    This is where the time skewed and caused headaches. The time for some reason went 2 days 11 hours into the past on one domain controller.
    With this, DNS lookups failed, cluster services failed, Cluster-Aware Updating failed, and RDP to VMs (and virtual hosts) by DNS name failed (date/time error).
    There doesn't seem to be anything in the event log except for the date/time stamp on events being 2 days in the past.
    This is why I'm not sure if the issue is caused by failover clustering or is an issue with the domain controllers.
    Any advice regarding this, or any info from anyone who has seen this behaviour before, would be great.
    Thanks
    Rob 

    Hi Rob,
    Are both of these DCs running as VMs on your cluster, with no other DCs elsewhere? Microsoft recommends that files for virtualized domain controllers be placed on non-CSV disks, because non-CSV disks can be brought online without authentication and therefore come online more easily.
    For virtual machines that are configured as domain controllers, it is recommended that you disable time synchronization between the host system and the guest operating system acting as a domain controller. This lets your guest domain controller synchronize time from the domain hierarchy; please confirm that your PDCe's time is always correct.
    The related KBs:
    Running Domain Controllers in Hyper-V
    https://technet.microsoft.com/en-us/library/d2cae85b-41ac-497f-8cd1-5fbaa6740ffe(v=ws.10)#deployment_considerations_for_virtualized_domain_controllers
    Things to consider when you host Active Directory domain controllers in virtual hosting environments
    http://support.microsoft.com/kb/888794?wa=wsignin1.0
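    A hedged sketch for double-checking the integration service state from the hosts and forcing a resync after a skew; VM names are placeholders:

      # Confirm Time Synchronization really is off for the DC guests on each host
      Get-VMIntegrationService -VMName "DC01", "DC02" | Format-Table VMName, Name, Enabled
      Disable-VMIntegrationService -VMName "DC01", "DC02" -Name "Time Synchronization"

      # On the affected DC, resync once the PDCe has a reliable external source
      w32tm /resync /rediscover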
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
