2 node failover cluster power down

I have a 2-node failover cluster. When I power down the node that owns the SQL Server instance and its resources, all of the resources and services fail over to the other node. Once I see that all of the resources and services report "Online", I then power that node down as well. I am being told that this is improper because failover may not have completed. Is that correct?
Also, in our 2-node failover cluster, is there a proper sequence for restarting the powered-down nodes?

Hi,
The cluster group containing SQL Server can be configured for automatic failback to the primary node when it becomes available again. By default, this is set to off.
To configure:
Right-click the group containing SQL Server in Cluster Administrator, select 'Properties', then the 'Failback' tab.
To prevent automatic failback, select 'Prevent Failback'. To allow it, select 'Allow Failback' and then one of the following options:
Immediately: not recommended, as it can disrupt clients.
Failback between n and m hours: allows a controlled failback to the preferred node (if it is online) during a defined window.
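If you prefer PowerShell, the same failback policy lives in the cluster group's common properties. A minimal sketch (the group name is an example; adjust it to your SQL Server instance):
    Import-Module FailoverClusters
    $grp = Get-ClusterGroup "SQL Server (MSSQLSERVER)"
    $grp.AutoFailbackType    = 1     # 0 = Prevent failback, 1 = Allow failback
    $grp.FailbackWindowStart = 22    # earliest hour failback may start (22:00)
    $grp.FailbackWindowEnd   = 23    # latest hour failback may run (23:00)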
The related article:
Windows Failover Clustering Overview
http://blogs.technet.com/b/rob/archive/2008/05/07/failover-clustering.aspx
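On the power-down and restart sequence itself, a hedged sketch using the Failover Clustering PowerShell module (Windows Server 2012 or later; on 2008 R2 use the Pause and Move actions in Failover Cluster Manager instead; node names are examples):
    Suspend-ClusterNode -Name NODE1 -Drain                  # pause the node and move its roles off gracefully
    Get-ClusterGroup | Format-Table Name, OwnerNode, State  # confirm everything is Online on the other node
    Stop-Computer -ComputerName NODE1                       # now power the node down
    # To power down the whole cluster cleanly, Stop-Cluster shuts down the cluster service on all nodes.
    # After the node is powered back on and has rejoined the cluster:
    Resume-ClusterNode -Name NODE1                          # failback then follows the policy configured above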
Hope this helps.

Similar Messages

  • 2 Node Failover Cluster - ISCSI Disks as 1 volume?

    Hi,
    Not sure if I'm in the correct forum; if I'm not, I apologize. I need some advice.
    I have created a 2-node failover cluster with 2 HP Blades.  I also currently have 2 NAS Servers (HP X1600 24tb servers running 2008 Storage server) -- The ultimate goal would be to combine all of the storage space from the NAS's into 1 volume addressable
    by the failover cluster. (As well as disk space from any additional NAS's added in the future.)
    Right now, I can add the ISCSI disk space from the NAS Targets as different volumes under cluster shared volumes.  Because of the 16TB limit in the ISCSI target, I essentially have 2 ISCSI disks on each NAS. One for 16TB, and the other for 4TB (The
    NAS Drives are configured for RAID 5 so there's a 4TB Loss.)  So, I have 4 ISCSI disks in the cluster, each as their own volume.
    Any thoughts on making the 4 drives addressable as one volume? 
    Regards,
    -Eric

    We're running Server 2012 Data Center on the cluster nodes.
    I was thinking the same about the 3rd-party software to do what I'd like it to do. The data is mostly security camera video from our security system. Since it's not really critical data, I'm just looking for a way to maximize
    the available hard drive space and make it addressable as one volume or network share...
    -Eric
    You can build Storage Spaces from the iSCSI LUNs (simple, not clustered, as clustering them would waste 50% of your capacity; Microsoft supports mirror and parity for clustered spaces only with R2). It will be dog slow and unsupported, but you'll have one linear, spanned space. See:
    Rough Guide To Setting Up A Scale-Out File Server
    http://www.aidanfinn.com/?p=13176
    Creating Virtual SoFS with shared VHDX
    http://www.aidanfinn.com/?p=15145
    You don't need SoFS here (obviously), but in this article Aidan creates Storage Spaces from iSCSI LUNs.
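    As a rough sketch of that approach, assuming the iSCSI-backed disks are already connected and poolable (pool, space and volume names below are made-up examples):
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName "CameraPool" `
            -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
            -PhysicalDisks $disks
        New-VirtualDisk -StoragePoolFriendlyName "CameraPool" -FriendlyName "CameraSpace" `
            -ResiliencySettingName Simple -UseMaximumSize   # simple = spanned, no resiliency
        Get-VirtualDisk -FriendlyName "CameraSpace" | Get-Disk | Initialize-Disk -PassThru |
            New-Partition -UseMaximumSize -AssignDriveLetter |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel "CameraData"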
    Good luck!

  • 3 Node Failover Cluster With iSCSI

    Is there any information available on the steps to create a 3 node failover cluster with iSCSI storage?  Is there a step-by-step guide?  I looked around but couldn't find much.  Thanks!

    Hi SCPSTech,
    The steps to create a 3-node cluster are the same as for a 2-node cluster. You can follow the step-by-step white paper below to create the 2-node cluster, then add the third node to it.
     Configuring Failover Clusters with Windows Storage Server 2008
    http://blogs.technet.com/b/storageserver/archive/2009/12/17/configuring-failover-clusters-with-windows-storage-server-2008.aspx
    Add a Server to a Failover Cluster
    http://technet.microsoft.com/en-us/library/cc730998.aspx
    More information:
    Add or Remove Nodes in a SQL Server Failover Cluster (Setup)
    http://msdn.microsoft.com/en-us/library/ms191545.aspx
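    For reference, a hedged PowerShell sketch of the "add a node later" step (cluster and node names are placeholders):
        Test-Cluster -Node NODE1, NODE2, NODE3          # re-validate with the new node included
        Add-ClusterNode -Cluster MYCLUSTER -Name NODE3  # join the third node to the existing cluster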
    Hope this helps.

  • Guest two node Failover Cluster with SAS HBA and Windows 2012 R1

    Hi all, I have two brand-new IBM x3560 servers with an IBM V3700 storage array. The servers are connected to the storage through four SAS HBA adapters (two HBAs on each server). I want to create a two-node guest file server failover cluster. I can present the
    LUNs to the guest machines, but when I run the cluster creation wizard it can't see any disks, even though I can see them in the Disk Management console. Is there any way to achieve this (the cluster creation) using my SAS HBA-presented
    disks, or do I have to use iSCSI to present the disks to my cluster?
    Thank you in advance, George
      

    1) Update to R2 and use a shared VHDX, which is the better way to go. See:
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Clustering Options
    http://blogs.technet.com/b/josebda/archive/2013/07/31/windows-server-2012-r2-storage-step-by-step-with-storage-spaces-smb-scale-out-and-shared-vhdx-virtual.aspx
    2) If you want to stick with non-R2 (which is a bad idea for many reasons), you can spawn an iSCSI target on top of your storage, make it clustered, and have it provide LUNs to your guest VMs. See:
    iSCSI Target in Failover
    http://technet.microsoft.com/en-us/library/gg232632(v=ws.10).aspx
    iSCSI Target Failover Step-by-Step
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    3) Use third-party software that provides clustered (active-active) storage out of the box.
    I would strongly recommend upgrading to R2 and using a shared VHDX.
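    A hedged sketch of option 1 on 2012 R2 hosts (paths and VM names are examples; the VHDX must live on a CSV or an SoFS share):
        New-VHD -Path "C:\ClusterStorage\Volume1\Shared\data.vhdx" -SizeBytes 100GB -Fixed
        Add-VMHardDiskDrive -VMName "FS-NODE1" -ControllerType SCSI `
            -Path "C:\ClusterStorage\Volume1\Shared\data.vhdx" -SupportPersistentReservations
        Add-VMHardDiskDrive -VMName "FS-NODE2" -ControllerType SCSI `
            -Path "C:\ClusterStorage\Volume1\Shared\data.vhdx" -SupportPersistentReservations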

  • Network Name Resource Availability - failover cluster error 1196 on Hyper-V 2012 R2 nodes

    Hello,
    We're getting this error in the event logs of our four-node failover cluster. We tried deleting the Host A record in DNS management, but that did nothing.
    Failover cluster event: 1196
    "Cluster network name resource 'CAUCrgt8' failed registration of one or more associated DNS name(s) for the following reason: This operation returned because the timeout period expired.
    Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server."
    And this resource http://technet.microsoft.com/en-us/library/cc773529%28v=WS.10%29.aspx did not help in solving this.
    Do you guys have any other suggestions we could try to resolve this error?

    Hi Jonas,
    Please tell us which server platform you are using. For example, if you are using Server 2012, please apply the following updates first:
    Recommended hotfixes and updates for Windows Server 2012-based Failover Clusters - http://support.microsoft.com/kb/2784261
    If the updates do not help, grant Full Control permissions for the cluster name resources on their DNS records in the DNS console.
    For more detailed information, please refer to the following articles:
    Windows Server 2008 Troubleshooting: Event ID 1196 — Microsoft-Windows-FailoverClustering - http://social.technet.microsoft.com/wiki/contents/articles/windows-server-2008-troubleshooting-event-id-1196-microsoft-windows-failoverclustering.aspx
    DNS Registration with the Network Name Resource -
    http://blogs.msdn.com/b/clustering/archive/2009/07/17/9836756.aspx
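    A few quick checks you could run from a node before cycling the resource (a hedged sketch; the resource name comes from the event text above, and the DNS server address is a placeholder):
        Get-DnsClientServerAddress -AddressFamily IPv4        # do the cluster-facing adapters point at reachable DNS servers?
        Test-NetConnection -ComputerName 10.0.0.10 -Port 53   # replace with your DNS server's address
        Stop-ClusterResource -Name "CAUCrgt8"                 # cycle the network name resource
        Start-ClusterResource -Name "CAUCrgt8"                # to force a fresh DNS registration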
    I’m glad to be of help to you!

  • SSAS 2012 (SP2) - Connecting to a Named Instance in a Failover Cluster

    I posted this question some months ago and never got a resolution...still having the same problem. (http://social.msdn.microsoft.com/Forums/sqlserver/en-US/4178ba62-87e2-4672-a4ef-acd970ac1011/ssas-2012-sp1-connecting-to-a-named-instance-in-a-failover-cluster?forum=sqlanalysisservices)
    I have a 3 node failover cluster installation (active-passive-passive) configured as follows:
    Node1: DB Engine 1 (named instance DB01)
    Node2: DB Engine 2 (named instance DB02)
    Node3: Analysis Services (named instance DBAS)
    Obviously, the node indicated is merely the default active node for each service, with each service able to fail from node to node as required.
    Strangely, when I attempt to connect to the SSAS node using the cluster NetBIOS "alias" (I don't know what else it would be called, so I apologize if I am mixing terminology), I am only able to do so by specifying the alias _without_ the
    required named instance. If I issue a connection request using an external program or even SSMS using Node3\DBAS or Node3.domain\DBAS, it appears that the SQL Server Browser is offering up a bogus TCP port for the named instance (in my case TCP/58554), when
    in reality, the SSAS service is running on TCP/2383 (confirmed with netstat) -- which if I understand correctly after much, much reading on the subject is the only port that can be used in a failover cluster. In any case, I'm puzzled beyond words. As I think
    through it, I believe I've seen this issue in the past, but never worried about it since it wasn't necessary to specify the named instance when I had SSAS requirements... It's only a showstopper now because I'm finalizing my implementation of SCVMM/SCOM 2012
    R2, and for some strange reason the PRO configuration in VMM gets all butthurt if you don't offer up a named instance...
    Thank you much for reading. I appreciate any help to get this resolved.
    POSSIBLY NOT RELEVANT...?
    I've properly configured the SPNs for the SSAS service (MSOLAPSvc.3) and the SQL Browser (MSOLAPDisco.3), with the former mapped to the SSAS service account and the latter to the cluster "alias" (since it runs as "NT AUTHORITY\LOCALSERVICE"
    as is customary) and have permitted delegation on the service and machine accounts as required. So, I'm not getting any kerberos issues with the service...any more, that is... ;) I'm not sure that's important, but I wanted to be forthcoming with details to
    help solve the issue.

    When connecting to SSAS in a cluster, you do not specify an instance name. In your case, you would connect using the network name associated with the SSAS cluster's IP address.
    See:
    http://msdn.microsoft.com/en-us/library/dn141153.aspx
    For servers deployed in a failover cluster, connect using the network name of the SSAS cluster. This name is specified during SQL Server setup, as
    SQL Server Network Name. Note that if you installed SSAS as a named instance onto a Windows Server Failover Cluster (WSFC), you never add the instance name on the connection. This practice is unique to SSAS; in contrast, a named
    instance of a clustered relational database engine does include the instance name. For example, if you installed both SSAS and the database engine as named instance (Contoso-Accounting) with a SQL Server Network Name of SQL-CLU, you would connect to SSAS using
    "SQL-CLU" and to the database engine as "SQL-CLU\Contoso-Accounting". See
    How to Cluster SQL Server Analysis Services for more information and examples.
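    To make the contrast concrete, a hedged PowerShell illustration using the example names above (SQL-CLU and Contoso-Accounting; assumes the AMO assembly and the SQL Server PowerShell module are installed):
        # Clustered SSAS named instance: network name only, no \DBAS-style suffix
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
        $ssas = New-Object Microsoft.AnalysisServices.Server
        $ssas.Connect("SQL-CLU")
        # Clustered relational engine: the instance name is still required
        Invoke-Sqlcmd -ServerInstance "SQL-CLU\Contoso-Accounting" -Query "SELECT @@SERVERNAME"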

  • SQL Server Failover Cluster Questions

    Dear All,
    I am building a two-node failover cluster on SQL Server 2012 SP1 (inside Hyper-V as a guest cluster) and want clarification on a few things that I am facing.
    1.  I am receiving an MSDTC warning. I can go ahead and create the cluster, but I want to understand whether MSDTC should be configured as a role on the cluster or not. I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases
    and reports through it, so in such a scenario, do I need MSDTC? If yes, how large should the MSDTC drive be? Is the following process correct?
    http://www.sqlnotebook.info/configure-msdtc-on-windows-cluster-2012/
    2.  During the first node's configuration, one needs to provide the "SQL Server cluster resource group name". Does it have any bearing on how the databases and logs will be accessed by other servers, or is it just how the cluster resource group
    will be named? Is it required for every instance that is created inside the cluster? Just to be clear, can one name it according to the instance name?
    3.  During the instance creation, one needs to provide the "SQL Server Network Name". As stated above, I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so would I be required to provide this
    for every instance that I create, or is it only required once for the cluster?
    4.  During the instance creation, one needs to select the features required for installation, i.e. instance features and shared features. As stated above, I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through
    it, so which features should be selected so that there is less workload on the server?
    5.  All the instances use tempdb for the databases hosted on them. What would be the best practice with respect to tempdb: one tempdb LUN shared by all instances on the servers, or each instance having its own tempdb LUN? What
    should be the ideal size of the tempdb LUN?
    6.  Should all the disks required for databases and logs be added to the cluster? Should they be added as normal disks or as CSV volumes?
    Thanks in advance. 

    Hello,
    1. You can run the Microsoft Distributed Transaction Coordinator service (MSDTC) as a clustered resource on a failover cluster server for increased reliability, based on the failover capabilities of the clustered servers. You can
    refer to the MSDTC section of the following blog to determine whether the Microsoft Distributed Transaction Coordinator (MSDTC) cluster resource must be created.
    Reference: http://msdn.microsoft.com/en-us/library/ms189910.aspx#MSDTC
    2. The cluster resource group is where the SQL Server failover cluster resources are placed. Each clustered SQL Server instance belongs to a failover
    cluster resource group. For example, in a two-node SQL Server cluster, a clustered instance belongs to the same cluster resource group regardless of which node currently owns it.
    You can change the cluster resource group name, but note that the following names are reserved and already used as resource group names: Available Storage and Cluster Group.
    3. Each SQL Server failover cluster instance is assigned a virtual network name and IP address, which client applications use to connect to the clustered SQL Server.
    4. I am not familiar with SCVMM, SCOM and Orchestrator, but you should install the Database Engine Services and the SQL Server management tools. If you want to use SQL Server Reporting Services, you can install Reporting Services as well, but the Report Server service cannot participate
    in a failover cluster.
    5. You can use an isolated disk for the user databases and tempdb of each SQL Server failover cluster instance.
    6. Yes, the disks that host the database data and log files should be added to the cluster as clustered disks in the SQL Server resource group. Note that a SQL Server 2012 failover cluster instance uses regular clustered disks rather than Cluster Shared Volumes (CSV support for failover cluster instances was added in SQL Server 2014).
    http://www.pythian.com/blog/how-to-install-a-clustered-sql-server-2012-instance-step-by-step-part-1/
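    For point 6, a hedged sketch of getting the disks into the cluster before running SQL Server setup:
        Get-ClusterAvailableDisk | Add-ClusterDisk     # bring the eligible shared disks into Available Storage
        Get-ClusterGroup "Available Storage" | Get-ClusterResource | Format-Table Name, State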
    Regards,
    Fanny Liu
    TechNet Community Support

  • New SQL Server Failover Cluster Installation - No disk is available to select in the "Cluster Disk Selection" section

    Hello Everyone,
    I am in deep need of your help regarding the problem I am facing.
    I am doing a new SQL Server failover cluster installation on a virtual server that is part of a failover cluster. I am able to complete all the steps successfully, but when I reach the point where I am supposed to select the shared disk that will
    be included in the SQL Server cluster resource group, I don't find any disks in the list.
    I have already created a 2-node failover cluster and added 3 disks (1 as the quorum witness and 2 as available storage).
    No roles were created, 2 nodes are available, and 1 network is in the cluster.
    The message says: "The search for mount points failed. Error: the system cannot find the path specified". What is this and how can I solve this issue?
    Thanks in advance for your support and looking forward for your valuable feedbacks.

    Dear Ashwin,
    I have granted the privileges mentioned in the link you provided as below:
      Act as Part of the Operating System = SeTcbPrivilege
      Bypass Traverse Checking = SeChangeNotify
      Lock Pages In Memory = SeLockMemory
      Log on as a Batch Job = SeBatchLogonRight
      Log on as a Service = SeServiceLogonRight
      Replace a Process Level Token = SeAssignPrimaryTokenPrivilege
    I was not able to solve the problem by giving these privileges to the domain account I am using to install SQL.
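    One thing worth checking before re-running setup: the shared disks must be in the cluster's Available Storage group, online, and owned by the node you are installing from. A hedged sketch (requires Windows Server 2012 or later for Get-Disk):
        Get-ClusterGroup "Available Storage" | Get-ClusterResource | Format-Table Name, State, OwnerNode
        Move-ClusterGroup "Available Storage" -Node $env:COMPUTERNAME   # own the disks on the setup node
        mountvol                                                        # stale or broken mount points listed here can trigger the "search for mount points failed" error
        Get-Disk | Where-Object { $_.OperationalStatus -ne 'Online' }   # any disks that are not online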

  • Hyper-V 2012 R2 failover cluster, hardware problem on one node, all VMs restart

    Hello, I have a 2-node failover cluster running Hyper-V 2012 with multipath SAS storage (MSA2000). There was a hardware problem with one node (node2) and it shut down unexpectedly. When that happened, NODE1 restarted all the VMs. Is that normal? The cluster was configured and checked with the cluster validation
    tool. There is no witness. I don't clearly understand what happens if one node crashes. KR.

    As Eric has said, it will start the VMs in a crash-consistent state on the non-crashed host.
    But from your example I take it you're seeing your guests on the non-crashed host restart. If this is the case, I would say yes! I have seen this happen before. It can happen if you're not using a quorum witness, because only one node has a vote. I would recommend you create
    a witness: on your MSA 2000, carve out 1 GB and do a disk witness. Or, if you have a server not in the VM cluster, you could do a file share witness; file share is my preference. Once you have a witness in play you will see all of your hosts having a vote. Look
    in the cluster manager at the nodes section. You should see a vote column. Currently it will say 1/0; once the witness is created it will show 1/1.
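    A hedged sketch of adding the witness once the 1 GB LUN or a file share is available (the resource name and share path are examples):
        Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"                    # disk witness on the MSA LUN
        Set-ClusterQuorum -NodeAndFileShareMajority "\\FILESERVER\ClusterWitness"  # or a file share witness on a server outside the cluster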

  • Migrating single file server to failover cluster file server

    Hi,
    Currently we have one file server, which is also our domain controller. The file server data is stored on a SAN storage LUN, and users' home folders also reside on the file server. Now we want to create a 2-node failover cluster file server.
    Questions:
    1) Can we assign the same storage LUN to the failover cluster?
    2) Do we need to assign all permissions and storage quotas again?
    What is the best option to achieve this goal?

    Hi,
    If you attach the same LUN to the failover cluster, the NTFS permissions (Security tab) are retained because they are stored on the LUN itself.
    Share permissions are lost because they are stored in the registry of the file server. If you have a few shares with a simple share-permission structure, simply recreate them in the cluster. If you have a lot of shares and a complex permission tree, you have to
    export the registry hive related to LanmanServer\Shares.
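    A hedged sketch of the registry route, plus the cleaner alternative of recreating the shares against the clustered file server role (share names, paths and the scope name are examples):
        reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Temp\shares.reg
        # On 2012+, recreating each share scoped to the client access point keeps it with the clustered role:
        New-SmbShare -Name "Home" -Path "S:\Home" -ScopeName "FILECLUSTER" -FullAccess "DOMAIN\FileAdmins"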

  • Hyper-V - 2 node cluster goes down if one server shuts down

    Hi all,
    I built a 2 node cluster with tiered storage and then I started doing some tests:
    * Drained one node, and all the VMs moved to the other node: perfect.
    * Shut down the drained node.
    The entire cluster crashed!!! The remaining node is trying to re-connect to the iSCSI SAN without success.
    * I booted the drained node, and it would not re-connect to the iSCSI SAN either. I had to force the reconnect in the iSCSI control panel to make it reconnect.
    So why would shutting down one node kill the cluster? Sure, it was the node that had the tiered pool online, but even then, isn't the failover cluster supposed to bring that pool back online on the other node?
    Why did the active node lose its iSCSI connection too? It had VMs running on it prior to the shutdown of the other node. My DC that was running on that other node is also now unavailable; I can't ping it or anything.
    So what did I miss in the configuration of the cluster? I followed the MSDN 2-node Hyper-V cluster doc.
    I am really worried at the moment, since over the past 3 months I have had a ton of issues with Hyper-V, ranging from using tiered storage, to shutting down nodes, to MAC addresses on the VMs and the hosts... I thought that after Hyper-V 2008, Microsoft had really made some progress
    with Hyper-V, but I truly regret not going with VMware again this time around.
    That cluster was supposed to go into the homologation phase tomorrow at the datacenter, but now I am unsure if I'll ever be able to trust it to work.
    The SAN is an MD3200i which is reported as Hyper-V ready.
    Any hint on where I have gone wrong would be appreciated.
    Regards,
    Edit: even from the host, with PowerShell, I was not able to shut down the DC and reboot it cleanly. It said the integration services were not reachable... and these are 2012 R2 servers...
    Edit2: One of my VMs is gone! I can't even find the file on the disks, either locally on the hosts or on the SAN. WTF!!!

    That actually comes across as very reasonable, and I think you are right. I tend to compare Hyper-V to vSphere with vCenter included. I have not seen nor used VMM. It is also true that Storage Pools and iSCSI are not Hyper-V, but to me they come as a package just as much
    as ESXi 'comes' with it.
    As for burning personal hours and money on books, I have, just as much as I go to conferences when I can and can afford it. And the only thing I would envy you is the fact that you have your colleagues to bounce ideas off / lean on if necessary.
    As for the few hundred bucks for the management suite: I believe the stack you speak of would actually cost my current company about $14k, which is not a few hundred bucks. That is pretty much the cost of one of the 2 SANs. By $14k, I mean that
    we have 6 servers with 2 sockets each, running a lot of VMs, which means Datacenter licences whose list price is $3.6k. I am not even including the CALs. Or am I mistaken on the licensing?
    If VMWare, I would be going with essentials which, at the same server perimeter, would be 30% less expensive. We don't really need the full blown one at our level.
    I am also locked by hardware that was ordered by my predecessor which do not provide the service we need them for. (I blame the vendor on that one, my predecessor was not an infrastructure guy). As for storage, if you are referring to SMB3 and using
    a failover cluster to provide the disks to the hyper-v hosts, I agree. I am just not too sure on the technology yet and went more for safety until I can test it thoroughly in the dev environment.
    I also hope Microsoft will add an easier way to set the media type on disks as well as allow for more than just SSD and HDD or even allow us to define our own.
    I actually fooled the system this time around because SSDs are too expensive and have too high a failure rate compared to 15k RPM drives (yes, the performance is lovely) at the moment. So I tagged the 15k RPM drives as SSD.
    For the remote management issue, I meant that, at this time, I actually have to disable the domain firewall to be able to manage Hyper-V 2012 R2. I tried hvremote, adding all the necessary rules, etc... What I did for 2012 worked perfectly fine and I
    had full control. 2012 R2 does not. I have another thread on this in the forum and I'll come back to it as soon as I can.
    I don't really care for the no interface thingy, I enjoy Powershell and scripting fine :) Does a lot of automation for me :) I am used to scripting anyway and it is faster to reproduce steps that way. You do it once, and you got something you can apply with
    little changes to everything.
    I joined the company in November last year and I got handed a full stack to upgrade in 6 months while maintaining the current one. Encountering the problems now is better and is, fundamentally, good, though it is time consuming.
    By full stack I mean:
    * Help the dev re-design the apps so it handles load better. Get out of Windows 2003 and migrate to 2012 and validate all the applications on 2012 as well as improve security.
    * Implementing a monitoring system for the infrastructure and the applications.
    * Upgrade SQL Server 2005 to, in this case 2012 Standard (no choice and enterprise not in the price range of the company). Converge our current test and prod environments. Optimize all the queries... And naturally validate the applications.
    * Upgrade the certificate authorities so they are available on all sites. Haven't scratched that one yet.
    * Design a fully site redundant architecture so that if a site goes dark we have no impact. If there is partial failure on one site, no issue either, and so on XD. I wish I had AG :)
    * Implement a single windows domain on all the sites, that will be a relief :) Running 4 domains and about 10 different workgroups atm
    * Upgrade the firewalls, switches, servers hardware, ... Implement the necessary networks for virtualization and improved security rules... (Don't buy SonicWalls, had a worse headache with them than Hyper-V :) )
    * Migrate the users from old domain to new one. Windows 8 does not have the user migration tool anymore :(
    * Plan the DR tests and processes :)
    * Gotta get Hyper-V replica working as well as backups.
    * And naturally make all the documentation so that if I happen to get under a bus, anyone in the company can just follow the documentation in case of an emergency.
    And I am sure I am still missing some part of the environment no one knows about at some places :) Found a new network today XD.
    As for the mixed VM management, I mean some VMs that are in HA on the cluster and others that don't need to and must not fail over. I am sure there is a way to configure them in the failover cluster so they don't fail over. I just need to spend the 15 minutes looking
    into it :)
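    On that point, a hedged sketch: a VM only fails over if it has been added to the cluster as a role, so for the local-only VMs you either never add them or remove the role while leaving the VM registered in Hyper-V (the group name is an example):
        Get-ClusterGroup | Where-Object { $_.GroupType -eq 'VirtualMachine' }   # which VMs are clustered roles
        Remove-ClusterGroup -Name "NonHA-VM" -RemoveResources                   # drop the role; the VM itself stays on the host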
    Since I have had that SAN, I have spent about 5 days on the phone with Dell support. That iSCSI issue is not the first one I got :( But in the production environment, that issue has disappeared. So at this time, I have switched to working on other issues. And with
    the information you provided, I'll be able to know if one of the nodes of the cluster loses connectivity, which will help avoid this issue in the future.
    And it is a very good and interesting challenge. I am just running into way too many issues doing a 10-year-old system upgrade :) I was expecting to have an easier time with Microsoft virtualization than I've had.
    Oh, and just for fun, I also had a VM disappear: no more files, no nothing, just poof. I have not figured out how that happened yet. I had the backup, so it was not much of an issue, but still, it went poof when the original issue happened.
    Thanks for the time and the tips :)

  • Failover cluster not cleanly shutting down service

    I've got a two node 2008 R2 failover cluster.  I have a single service being managed by it that I configured just as a generic service.  The failover works perfectly when the service is stopped, or when one of the machines goes down, and the immediate
    failback I have configured works perfectly in both scenarios as well.
    However, there's an issue when I take the networking down on the preferred owner of the service.  As far as I can tell (this is the first time I've tried failover clustering, so I'm learning), when I take the networking down, the cluster service shuts
    down, and in turn shuts down the service I've told it to manage.  At this point, when the services aren't running, the service fails over to the secondary as intended.  The problem shows up when I turn the networking back on.  The service tries
    and fails to start on the primary (as many times as I've configured it to try), and then eventually gives up and goes back to the secondary.
    The reason for this, examining logs for the service, is that the required port is already in use.  I checked some more, and sure enough, when I take the networking offline the service gets shut down, but the executable is still running.  This is
    repeatable every time.  When I just stop the service, though, the executables go away.  So it's something to do specifically with how the managed service gets shut down *when it's shut down due to the cluster service stopping*.  For some reason
    it's not cleaning up that associated executable.
    Any ideas as to why this is happening and how to fix/work around it would be extremely welcome.  Thank you!

    Try to generate the cluster log using cluster log /g /copy:<path to a local folder>. You might need to bump up log verbosity using cluster /prop ClusterLogLevel=5 (you can check the current level using cluster /prop).
    You also can look at the SCM diagnostic channel in the event viewer. Start eventvwr. Wait for the clock icon on the Application and Services Logs to go away. Once the clock icon is gone select this entry and in the menu check Show Analytic and Debug Logs.
    Now expand to the SCM provider located at
    Application and Services Logs\Microsoft\Service Control Manager Performance Diagnostic Provider\Diagnostic.
    or Microsoft-Windows-Services/Diagnostic
    Enable the log, run repro, disable the log. After that you should see events from the SCM showing you your service state transitions.
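    The PowerShell equivalents, if you prefer them over cluster.exe (a hedged sketch; available with the FailoverClusters module):
        Import-Module FailoverClusters
        Set-ClusterLog -Level 5                  # raise verbosity (the default is 3)
        Get-ClusterLog -Destination C:\Temp      # writes a cluster.log per node to the folder
        Set-ClusterLog -Level 3                  # put verbosity back afterwards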
    The terminate parameters do not seem to be configurable. I can think of two ways of fixing the issue:
    - Write your own cluster resource DLL in which you can implement your own policies. This would be a place to start: http://blogs.msdn.com/b/clustering/archive/2010/08/24/10053405.aspx.
    - This option assumes you cannot change the source code of the service to kill orphaned child processes on startup, so you have to clean up by some other means. Create another service and make your service dependent on this new service. The new service
    must be much faster in responding to the SCM commands. On start of this service, use PSAPI to enumerate all processes running on the machine and kill the orphaned child processes. You should probably be able to achieve something similar using a Generic Script resource
    plus a VBScript that does the cleanup.
    Regards, Vladimir Petter, Microsoft Corporation

  • In a 2008 failover cluster, the mailbox2 server network status is Unavailable (down)

    Hi,
    I am new here, and I hope I can get some help with the issue below. Let me keep the scenario simple:
    I have 2 mailbox servers (mbx01 and mbx02)
    and 2 CAS/Hub servers (cshb1 and cshb2), all running on Hyper-V. Last week, due to some maintenance in the data center, I had to bring down the whole production environment. Once things were fixed in the DC, I started all the servers and found that the databases
    on Exchange node 2 are failed.
    In the Exchange 2010 failover cluster, node 2 is down and its network status is Unavailable, and because of that the node 2 databases are failed.
    I also cannot access the file share witness directory from any of the above servers.
    Can anybody assist me with this, please?
    Thank you

    Make sure both DAG members have access to the witness directory. Check the location and make sure you have access. Open the DAG properties from node 2 and try to update the witness server and witness directory. Check the firewall/antivirus of the witness
    server, and try disabling them. If the witness server is not online, your databases can go offline. Please check with the following:
    Get-DatabaseAvailabilityGroup -Identity DAG -Status | fl name,servers,witnessserver,witnessdirectory,alternatewitnessserver,alternatewitnessdirectory,operationalservers,primaryactivemanager,witnessshareinuse
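    If the witness turns out to be unreachable or misconfigured, a hedged sketch for re-pointing it (the server and path are examples; pick a CAS or another server outside the DAG):
        Set-DatabaseAvailabilityGroup -Identity DAG -WitnessServer CSHB1 -WitnessDirectory C:\DAGWitness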
    Thanks, MAS

  • Failover cluster server - File Server role is clustered - Shadow copies do not seem to travel to other node when failing over

    Hi,
    New to 2012 and implementing a clustered environment for our File Services role.  Have got to a point where I have successfully configured the Shadow copy settings.
    I have a large (15 TB) disk: S:
    I have a VSS (volume shadow copy) drive: V:
    I have successfully configured the shadow copy settings through Windows Explorer.
    I created dependencies in the Failover Cluster Manager console whereby S: depends on V:
    However, when I failover the resource and browse the Client Access Point share there are no entries under the "Previous Versions" tab. 
    When I visit the S: drive in Windows Explorer and open the shadow copy dialogue box, there are entries showing the times and dates of the shadow copies that ran on the original node. So the disk knows about the shadow copies that were run on the
    original node, but the "Previous Versions" tab has no entries to display.
    This is in a 2012 server (NOT R2 version).
    Can anyone explain what might be the reason?  Do I have an "issue" or is this by design?
    All help appreciated!
    Kathy
    Kathleen Hayhurst Senior IT Support Analyst

    Hi,
    Please first check the requirements in the following article:
    Using Shadow Copies of Shared Folders in a server cluster
    http://technet.microsoft.com/en-us/library/cc779378(v=ws.10).aspx
    Cluster-managed shadow copies can only be created in a single quorum device cluster on a disk with a Physical Disk resource. In a single node cluster or majority node set cluster without a shared cluster disk, shadow copies can only be created and managed
    locally.
    You cannot enable Shadow Copies of Shared Folders for the quorum resource, although you can enable Shadow Copies of Shared Folders for a File Share resource.
    The recurring scheduled task that generates volume shadow copies must run on the same node that currently owns the storage volume.
    The cluster resource that manages the scheduled task must be able to fail over with the Physical Disk resource that manages the storage volume.
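    A hedged way to check those last two points on the node that currently owns S: (the task and resource names follow the usual patterns but may differ in your cluster):
        vssadmin list shadows /for=S:                                               # are the snapshots visible on this node?
        Get-ScheduledTask | Where-Object { $_.TaskName -like 'ShadowCopyVolume*' }  # the task that creates them
        Get-ClusterResource | Where-Object { $_.ResourceType -eq 'Volume Shadow Copy Service Task' }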

  • Adding more RAM to all 3 nodes in a Hyper-V failover cluster, re-validate config?

    hey spiceheads,
    I have a 3-node Hyper-V failover cluster running Hyper-V Server 2012 R2.
    Two of the servers have 96 GB and the other has 120 GB. I am going to even all three servers out to 128 GB.
    Once this is done, do I need to re-validate? If so, would re-validation take my cluster completely offline?
    Thanks,
    ceez

    No issues with 2012 R2 either. Just add the RAM and you will be fine.
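    If you do want the reassurance of a fresh validation report without downtime, a hedged sketch (skipping the storage tests avoids taking disks offline; the cluster name is an example):
        Test-Cluster -Cluster HVCLUSTER -Ignore "Storage"   # inventory, network and system tests only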
