Failover cluster

Hey all,
Essbase 11.1.2 has failover support with OPMN. Does Planning support this feature? Has anybody tried this?
Thanks

Hi MHD,
This forum is an English-language support forum; could you post your question in English? It will attract more folks to help you. Thanks.
Best Regards,

Similar Messages

  • Reporting Services as a generic service in a failover cluster group?

    There is some confusion on whether or not Microsoft will support a Reporting Services deployment on a failover cluster using scale-out, and adding the Reporting Services service as a generic service in a cluster group to achieve active-passive high
    availability.
    A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
    http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
    This article (http://technet.microsoft.com/en-us/library/bb630402.aspx) on the other hand states: "Failover clustering is supported only for the report server database; you
    cannot run the Report Server service as part of a failover cluster."
    This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
    Best Regards,
    Peter Wretmo

    Hi Peter,
    Thanks for your posting.
    As Lukasz said in the blog, failover clustering with SSRS is possible. However, during a failover there is a period during which users will receive errors when accessing SSRS, since the network name will resolve to a computer where the SSRS service is still in the process of starting.
    Besides, there are several considerations and manual steps involved on your part before configuring failover clustering with the SSRS service:
    1. Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server. If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause a significant failover impact to the entire environment.
    2. SSRS fails over independently of SQL Server.
    3. If SSRS is running, it will do work on behalf of the overall deployment, so it will be active. The only way to make SSRS passive is to stop the SSRS service on all passive cluster nodes.
    So, SSRS is designed to achieve high availability through a scale-out deployment. Though a failover clustered SSRS deployment is achievable, it is not the best option for achieving high availability with Reporting Services.
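    If you still want to experiment with the generic-service approach Lukasz describes, a minimal PowerShell sketch might look like the following. This is only an illustration, not a supported recipe: the role name is a placeholder, and the SSRS service name should be verified with Get-Service on your build.

    # Sketch: register the SSRS Windows service as a Generic Service role in the cluster.
    # "ReportServer" is the usual service name for a default SSRS instance; verify it first.
    Import-Module FailoverClusters
    Add-ClusterGenericServiceRole -ServiceName "ReportServer" -Name "SSRS-GenericService"

    Keep in mind the failover behaviour described above: clients will still see errors while the service starts on the surviving node.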
    Regards,
    Mike Yin
    TechNet Community Support

  • Difference between scalable and failover cluster

    What is the difference between a scalable and a failover cluster?

    A scalable cluster is usually associated with HPC clusters, though some might argue that Oracle RAC is also this type of cluster: the workload can be divided up and sent to many compute nodes. It is usually used for a vectored (parallelizable) workload.
    A failover cluster is one where a standby system or systems are available to take over the workload when needed. It is usually used for scalar workloads.

  • Install Guide - SQL Server 2014, Failover Cluster, Windows 2012 R2 Server Core

    I am looking for anyone who has a guide with notes about an installation of a two node, multi subnet failover cluster for SQL Server 2014 on Server Core edition

    Hi KamarasJaranger,
    According to your description, you want to configure a SQL Server 2014 multi-subnet failover cluster on Windows Server 2012 R2. Below are the overall steps for the configuration. For the detailed steps, please download and refer to the PDF file.
    1. Add the required Windows features (.NET Framework 3.5 Features, Failover Clustering and Multipath I/O).
    2. Discover target portals.
    3. Connect targets and configure multipathing.
    4. Initialize and format the disks.
    5. Verify the storage replication process.
    6. Run the Failover Cluster Validation Wizard.
    7. Create the Windows Server 2012 R2 multi-subnet cluster.
    8. Tune the cluster heartbeat settings.
    9. Install SQL Server 2014 on the multi-subnet failover cluster.
    10. Add a node to the SQL Server 2014 multi-subnet cluster.
    11. Tune the SQL Server 2014 failover clustered instance DNS settings.
    12. Test application connectivity.
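    For reference, steps 1, 6 and 7 can also be done from PowerShell. This is only a rough sketch; the node names, cluster name and IP addresses are placeholders, and the full details are in the PDF mentioned above.

    # Step 1: add the required Windows features (run on every node).
    Install-WindowsFeature NET-Framework-Core, Failover-Clustering, Multipath-IO -IncludeManagementTools

    # Step 6: validate the intended cluster configuration.
    Test-Cluster -Node SQLNODE1, SQLNODE2

    # Step 7: create the multi-subnet cluster with one static address per subnet.
    New-Cluster -Name SQLCLUSTER01 -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50, 192.168.2.50 -NoStorage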
    Regards,
    Michelle Li

  • Mounted volume not showing up with Windows 2012 R2 failover cluster

    Hi
    We configured some drives as mounted volumes and added them to the failover cluster, but Failover Cluster Manager is not showing the mounted volume details; it shows them as seen below.
    I would appreciate help from someone to correct this issue.
    Thanks in advance
    LMS

    Hi LMS,
    Are you concerned about the disk being shown as a GUID? A cluster mount-point disk shows up as a volume GUID in a Server 2012 R2 failover cluster. I created a mount point inside a cluster and saw the same behavior: instead of the mount point name, the volume GUID appears after the volume label. That must be by design.
    How to configure volume mount points on a Microsoft Cluster Server
    http://support.microsoft.com/kb/280297
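    For what it's worth, when a mount point is clustered, the usual guidance is also to make the mounted disk depend on the disk that hosts its root folder. A rough PowerShell sketch, where the resource names are placeholders taken from Failover Cluster Manager:

    # Assumption: "Cluster Disk 2" is the mounted volume and "Cluster Disk 1" hosts the mount folder.
    Import-Module FailoverClusters
    Add-ClusterResourceDependency -Resource "Cluster Disk 2" -Provider "Cluster Disk 1"
    # Verify the dependency chain afterwards.
    Get-ClusterResourceDependency -Resource "Cluster Disk 2"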
    I’m glad to be of help to you!

  • The Cluster Service function call 'ClusterResourceControl' failed with error code '1008(An attempt was made to reference a token that does not exist.)' while verifying the file path. Verify that your failover cluster is configured properly.

    I am experiencing this error in one of our cluster environments. Can anyone help me with this issue?
    The Cluster Service function call 'ClusterResourceControl' failed with error code '1008(An attempt was made to reference a token that does not exist.)' while verifying the file path. Verify that your failover cluster is configured properly.
    Thanks,
    Venu S.

    Hi Venu S,
    Based on my research, you might be encountering a known issue; please try the hotfix in this KB:
    http://support.microsoft.com/kb/928385
    Meanwhile, since there is limited information about this issue, please provide the following before further investigation:
    The version of Windows Server you are using
    The result of SELECT @@VERSION
    The scenario when you get this error
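    For the second item, one quick way to capture the version string, assuming the SQL Server PowerShell module is available (the instance name below is a placeholder):

    # Run SELECT @@VERSION against the clustered instance and post the output.
    Import-Module SQLPS -DisableNameChecking
    Invoke-Sqlcmd -ServerInstance "CLUSTERNAME\INSTANCE" -Query "SELECT @@VERSION"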
    If anything is unclear, please let me know.
    Regards,
    Tom Li

  • Very Strange Network Issue With Two Guests on 2012 R2 Hyper-V Failover Cluster

    Hi all. We're having an odd issue with two guests on our 2012 R2 failover cluster.
    In a nutshell, if we shut down a particular server (I'll call it Server A), another totally different server (Server B) on the same node loses its network connectivity to the domain. If we start Server A back up, network connectivity returns on Server B.
    At first I thought Server A might be running a service that was somehow linked to Server B, so I decided to disable Server A's NIC. Interestingly, that had no effect on Server B's connectivity.
    The next step I tried was pausing Server A and, again, there was no adverse effect on Server B's connectivity.
    The next step was to live migrate Server A to another node. This action did cause Server B to lose its network connection.
    One other clue is that if I ping Server B from either of the Hyper-V hosts in the cluster, I never lose the network connection to Server B.
    So I suspect this is some network issue on the cluster, but I'm kind of at a loss where to go from here.
    Has anyone seen this behavior before or does anyone have any troubleshooting suggestions I can try?
    Thanks! 
    George Moore

    Hi Sir,
    I've never seen this before.
    >> The next step was to live migrate Server A to another node. This action did cause Server B to lose its network connection.
    Are they connected to the same virtual switch?
    First, please run cluster validation to check whether there are any errors.
    If that is OK, please try the following for troubleshooting:
    1. Shut down Server A and Server B.
    2. Add another virtual NIC to Server B.
    3. Start Server B and check whether the issue happens on both the "old" and the "new" virtual NIC.
    In addition, you can live migrate both A and B to another node, then try to live migrate A back to the original node.
    If the issue persists, I would suggest removing that virtual switch on both nodes and re-creating it.
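    If it helps, steps 2 and 3 can be scripted from the host. A rough sketch, where the VM name and switch name are placeholders:

    # Step 2: add a second virtual NIC to Server B while it is shut down.
    Add-VMNetworkAdapter -VMName "ServerB" -SwitchName "ExternalSwitch" -Name "TestNIC"

    # Step 3: after starting the VM, compare connectivity on the old and new adapters.
    Get-VMNetworkAdapter -VMName "ServerB" | Format-Table Name, SwitchName, IPAddresses, Status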
    Best Regards,
    Elton Ji

  • Cannot add multiple members of a failover cluster to a DFSR replication group

    Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10TB+) local RAID storage arrays, but given the physical separation there is no shared physical storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL app data storage is using a SOFS (scale out file server) network share on separate
    servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
    I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise
    I can't run a HA single DFSR target for the same reason (no shared storage) - and in any case, this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating files storage across servers with
    non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, so long as you are not trying to replicate folders in a shared storage target accessible to both hosts (which would obviously be a problem)? So long as you are not replicating
    folders in c:\ClusterStorage, there should be no conflict. 
    Is there a workaround or alternative solution?

    Yes, I read that series. But it doesn't address the issue. The article is about making a DFSR target highly available. That won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL availability groups, etc.). DFSR should not interfere with this, but it
    is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case and I can't see an alternative solution in the case where you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything - VMs, SQL, and file folders. But it seems that this artificial
    barrier is causing us to need to choose either clustered services or DFSR between nodes. But I can't see any rationale to block DFSR between cluster nodes - especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.

  • Selecting VHDx as storage for File Server Role (Failover Cluster 2012 R2)

    Is it possible to select an already existing (offline) VHD or VHDX as storage when creating the "File Server" role? The reason I want to do this is that I already have a file server set up as a virtual machine and it is causing issues, so my company decided to move to a File Server role.
    Thank you
    David

    Hi David,
    Do you mean you configured it as a file server failover cluster via the "High Availability Wizard"?
    I think you need to choose a volume shared between the two nodes to achieve high availability.
    Please refer to the following link:
    http://technet.microsoft.com/en-us/library/cc731844(v=WS.10).aspx
    If you do not select a shared volume, I think there is no difference from sharing a mounted VHDX file on a standalone file server.
    I would suggest copying these files to a CSV and sharing them from there.
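    As a rough illustration of creating the clustered role from PowerShell once the data is on shared cluster storage (the role name, disk name, address and share path below are all placeholders):

    # Create a general-use clustered file server role on an existing shared cluster disk.
    Import-Module FailoverClusters
    Add-ClusterFileServerRole -Name "FS-CLUSTER01" -Storage "Cluster Disk 2" -StaticAddress 192.168.1.60
    # Then recreate the SMB shares scoped to the clustered name.
    New-SmbShare -Name "UserData" -Path "F:\Shares\UserData" -ScopeName "FS-CLUSTER01"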
    Hope it helps
    Best Regards
    Elton Ji

  • DSC, SQL Server 2012 Enterprise sp2 x64, SQL Server Failover Cluster Install not succeeding

    Summary: DSC fails to fully install the SQL Server 2012 failover cluster, but the identical code snippet below, run in PowerShell ISE with administrator credentials, works perfectly, as does running the SQL Server install interface.
    In order to develop DSC configurations, I have set up a Windows Server 2012 R2 failover cluster in VMware Workstation v10 consisting of 3 nodes.  All have the same Windows Server 2012 version and have been fully patched via Microsoft Updates. 
    The cluster properly fails over on command and the cluster validates.  Powershell 4.0 is being used as installed in windows.
    PDC
    Node1
    Node2
    The DSC script builds up the parameters to setup.exe for SQL Server.  Here is the cmd that gets built...
    $cmd2 = "C:\SOFTWARE\SQL\Setup.exe /Q /ACTION=InstallFailoverCluster /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS /UpdateEnabled=false /IndicateProgress=false /FEATURES=SQLEngine,FullText,SSMS,ADV_SSMS,BIDS,IS,BC,CONN,BOL /SECURITYMODE=SQL /SAPWD=password#1 /SQLSVCACCOUNT=SAASLAB1\sql_services /SQLSVCPASSWORD=password#1 /SQLSYSADMINACCOUNTS=`"SAASLAB1\sql_admin`" `"SAASLAB1\sql_services`" `"SAASLAB1\cubara01`" /AGTSVCACCOUNT=SAASLAB1\sql_services /AGTSVCPASSWORD=password#1 /ISSVCACCOUNT=SAASLAB1\sql_services /ISSVCPASSWORD=password#1 /ISSVCSTARTUPTYPE=Automatic /FAILOVERCLUSTERDISKS=MountRoot /FAILOVERCLUSTERGROUP='SQL Server (MSSQLSERVER)' /FAILOVERCLUSTERNETWORKNAME=SQLClusterLab1 /FAILOVERCLUSTERIPADDRESSES=`"IPv4;192.168.100.15;LAN;255.255.255.0`" /INSTALLSQLDATADIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBDIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBLOGDIR=M:\SAN\SQLLogs\MSSQLSERVER /SQLTEMPDBDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLTEMPDBLOGDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLBACKUPDIR=M:\SAN\Backups\MSSQLSERVER > C:\Logs\sqlInstall-log.txt "
    Invoke-Expression $cmd2
    When I run this specific command in Powershell ISE running as administrator, logged in as domain account that is in the Node1's administrators group and has domain administrative authority, it works perfectly fine and sets up the initial node properly.
    When I use the EXACT SAME code above pasted into my custom DSC resource, as a test with a known successful install, run with the same user as above, it does NOT completely install the cluster properly.  It still installs 17 applications
    related to SQL Server and seems to properly configure everything except the cluster.  The Failover Cluster Manager shows that the SQL Server Role will not come on line and the SQL Server Agent Role is not created. 
    The code is run on Node1 so the setup folder is local to Node1.
    The ConfigurationFile.ini files for the two types of installs are identical.
    Summary.txt does have issues:
    Feature:                       Database Engine Services
      Status:                        Failed: see logs for details
      Reason for failure:            An error occurred during the setup process of the feature.
      Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
      Component name:                SQL Server Database Engine Services Instance Features
      Component error code:          0x86D8003A
      Error description:             The cluster resource 'SQL Server' could not be brought online.  Error: There was a failure to call cluster code from a provider. Exception message: Generic
    failure . Status code: 5023. Description: The group or resource is not in the correct state to perform the requested operation.  .
    It feels like this is a security issue with DSC or an issue with the setup in SQL Server, but please note I have granted administrators group and domain administrators authority.  The nodes were built with the same login.  Windows firewall
    is completely disabled.
    Please let me know if any more detail is required.

    Hi Lydia,
    Thanks for your interest and help.
    I tried "Option 3 (recommended)" and that did not help.
    The issue I encounter with the fail-over cluster only occurs when trying to install with DSC!
    Using the SQL Server Install wizard, Command Prompt and even in Powershell by invoking the setup.exe all work perfectly.
    So, to reiterate, this issue only occurs while running in the context of DSC.
    I am using the same domain login with Domain Admin Security and locally the account has Administrators group credentials.  The SQL Server Service account also has Administrators Group Credentials.
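    One thing that may be worth testing (this is an assumption on my part, not a confirmed fix): when the LCM applies a configuration it runs as Local System rather than as your domain admin account, so running the setup call in a Script resource with an explicit Credential could behave differently from your current resource. A minimal sketch, with the node name and the test/get logic as placeholders (passing a credential also requires configuration data with a certificate or PSDscAllowPlainTextPassword):

    Configuration SqlFciInstall
    {
        param([Parameter(Mandatory)][PSCredential]$InstallCredential)

        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node "Node1"
        {
            # Assumption: run setup.exe under an explicit domain credential instead of Local System.
            Script InstallSqlFci
            {
                Credential = $InstallCredential
                GetScript  = { @{ Result = "SQL Server FCI install" } }
                TestScript = { Test-Path "C:\Logs\sqlInstall-log.txt" }   # placeholder check only
                SetScript  = {
                    # Same argument list as $cmd2 above, abbreviated here.
                    & C:\SOFTWARE\SQL\Setup.exe /Q /ACTION=InstallFailoverCluster # ...remaining arguments...
                }
            }
        }
    }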

  • Cannot migrate VM in VMM but can in Failover Cluster Manager network adapters network optimization warning

    I have a 4 node Server 2012 R2 Hyper-V Cluster and manage it with VMM 2012 R2.  I just upgraded the cluster from 2012 RTM to 2012 R2 last week which meant pulling 2 nodes out of the existing cluster, creating the new R2 cluster, running the copy
    cluster roles wizard since the VHDs are stored on CSVs, and then added the other 2 nodes after installing R2 on them, back into the cluster.  After upgrading the cluster I am unable to migrate some VMs from one node to another.  When trying to do
    a live migration, I get the following notifications under the Rating Explanation tab:
    Warning: There currently are not network adapters with network optimization available on host Node7. 
    Error: Configuration issues related to the virtual machine VM1 prevent deployment and must be resolved before deployment can continue. 
    I get this error for 3 out of the 4 nodes in the cluster.  I do not get this error for Node10 and I can live migrate to that node in VMM.  It has a green check for Network optimization.  The others do not.  These errors only affect
    VMM. In the Failover Cluster Manager, I can live migrate any VM to any node in the cluster without any issues.  In the old 2012 RTM cluster I used to get the warning but I could still migrate the VMs anywhere I wanted to.  I've checked the network
    adapter settings in VMM on VM1 and they are the same as on VM2, which can migrate to any host in VMM. I then checked the network adapter settings of the VMs from the Failover Cluster Manager, and VM1 under Hardware Acceleration has "Enable virtual machine queue" and "Enable IPsec task offloading" checked. I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still could not live migrate VM1. Why is this an issue now, on the new cluster, when it wasn't before? How do I resolve the issue? VMM is useless if I can't migrate all my VMs with it.

    I checked the settings on the physical NICs on each node and here is what I found:
    Node7: Virtual machine queue is not listed (cannot live migrate problem VMs to this node in VMM)
    Node8: Virtual machine queue is not listed (cannot live migrate problem VMs to this node in VMM)
    Node9: Virtual machine queue is listed and enabled (cannot live migrate problem VMs to this node in VMM)
    Node10: Virtual machine queue is listed and enabled (live migration works for all VMs in VMM)
    From Hyper-V or the Failover Cluster Manager I can see in the network adapter settings of the VMs, under Hardware Acceleration, that these two settings are checked: "Enable virtual machine queue" and "Enable IPsec task offloading". I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM and refreshed again, but I still cannot live migrate the problem VMs.
    It seems to me that if I could adjust those VM settings from VMM that it might fix the problem.  Why isn't that an option to do so in VMM? 
    Do I have to rebuild the VMM server with a new DB and then before adding the Hyper-V cluster uncheck those two settings on the VM's from Hyper-V manager?  That would be a lot of unnecessary work but I don't know what else to do at this point.
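    As a possible workaround (an assumption, since I can't test your environment), the same two checkboxes can be changed per VM from PowerShell on the host instead of rebuilding VMM, and the hosts' physical VMQ state can be compared directly. The VM name below is a placeholder:

    # Compare VMQ capability and state on the physical NICs of each host.
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

    # Disable VMQ and IPsec task offload on the problem VM's adapter (the same two checkboxes).
    Get-VMNetworkAdapter -VMName "VM1" |
        Set-VMNetworkAdapter -VmqWeight 0 -IPsecOffloadMaximumSecurityAssociation 0

    A host and VM refresh in VMM would presumably still be needed afterwards before the rating explanation changes.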

  • Transactional replication from a failover cluster instance to a SQL Server Express DB

    Hello,
    I have been poking around on Google trying to understand whether there are any gotchas in configuring transactional replication from an instance database of a failover cluster to a SQL Server Express database. Also, this client would like to replicate a set of tables between two instance databases which both reside on nodes of the cluster.
    Everything I've read suggests there is no problem using transactional replication on a clustered instance as long as you use a shared snapshot folder. I still have some concerns:
    1) Does the distributor need to live on a separate instance?
    2) What happens in the event of an automatic or manual failover of a publisher, especially if the distributor does not live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped, and this seems like a recipe for inconsistency between the publisher and subscriber.
    A paramount concern is that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
    Thanks in advance.

    1) Does the distributor need to live on a separate instance?
    Answer: It is recommended to configure the distributor on a different server, but it can also be configured on the publisher or subscriber server. (The subscriber is not an option in our case, since it is an Express edition.)
    2) What happens in the event of an automatic or manual failover of a publisher, especially if the distributor does not live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped, and this seems like a recipe for inconsistency between the publisher and subscriber. A paramount concern is that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
    Answer: If you configure both the publisher and the distributor on the same server and the SQL instance fails over, data synchronization/replication is suspended until the instance comes back online.
    Once the instance is up, all the replication jobs will start again and replication will continue to synchronize data to the subscriber. No manual intervention is required.

  • Server 2008 Hyper-V Failover Cluster Error on Domain Controller Reboot

    I am pretty new to Hyper-V virtualization, but I have 2 Hyper-V clusters, each with 2 nodes and a SAN, 1 physical domain controller for failover cluster management and 1 virtual domain controller as backup. All is running well, no issues. I installed Windows updates on the physical DC and upon reboot got error 5120 on cluster 2, which says "Cluster Shared Volume 'Volume1' ('Cluster Disk 1') is no longer available on this node because of 'STATUS_CONNECTION_DISCONNECTED(c000020c)'. All I/O will temporarily be queued until a path to the volume is reestablished." It pointed to the 2nd node in that cluster as being the issue, but when I look at it, the node is online and healthy, so I don't understand why the error was triggered. Also, if the DC went down because of a failure, would that node permanently lose access to the CSV?
    Appreciate any help anyone can provide.

    Hi mtnbikediver,
    In theory, if the cluster is configured correctly, a DC restart will not bring the CSV down. Is your shared storage hosted on your DC? Did you run cluster validation before you installed the cluster? We strongly recommend running cluster validation before you build the cluster, and installing the recommended updates for 2008 clusters first.
    Recommended hotfixes for Windows Server 2008-based server clusters
    http://support.microsoft.com/kb/957311
    I found a similar issue where a DC restart takes the cluster network name resource offline, but it is for 2008 R2.
    Cluster network name resource cannot be brought online when one of the domain controllers is partly down in Windows Server 2008 R2
    http://support2.microsoft.com/?id=2860142
    I’m glad to be of help to you!

  • SQL Server Agent fails to connect to DB after enabling mirror on failover cluster

    Hello:
    We have multiple databases running in a failover cluster instance: SQL 2012 SP1 on a Server 2008 R2 failover cluster (NOT AlwaysOn). We are trying to add a high-performance mirror on a standalone instance for DR. My understanding is that this should be a perfectly normal, supported configuration.
    The mirroring is working properly; however, the clustered SQL Server agent is unable to run jobs that run in the mirrored databases.
    We get the following in the job log: Unable to connect to SQL Server 'VIRTUALSERVERNAME\INSTANCE'.  The step failed.
    There is a partner message in the agent log: [165] ODBC Error: 0, Connecting to a mirrored SQL Server instance using the MultiSubnetFailover connection option is not supported. [SQLSTATE IMH01]
    The cluster is not a multi-subnet cluster. All hosts are connected to the same subnets and there is no storage replication. I cannot find any place where I can adjust the connection string options for SQL Agent.
    Any guidance or suggestions on how to resolve this would be appreciated.
    ~joe

    SQL Team - MSFT:
    Thank you for taking the time to research and provide a clear answer.
    This seems very much a workaround and very unsatisfactory.
    You are correct, there is an IP dependency with OR condition. Moving to an AND condition is not viable for us. The whole point is to provide network redundancy. With an AND condition, if EITHER network interface fails, the service will go offline or fail
    to come online without manual intervention. This is arguably worse for uptime than having a single interface available.
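    For anyone following along, the dependency expression being discussed can be inspected (and, if AND were acceptable, changed) from PowerShell. The resource name and addresses below are placeholders:

    # Show the current OR/AND dependency expression for the SQL network name resource.
    Get-ClusterResourceDependency -Resource "SQL Network Name (VIRTUALSERVERNAME)"

    # Example of setting an explicit expression; OR is shown here, which is what we need to keep.
    Set-ClusterResourceDependency -Resource "SQL Network Name (VIRTUALSERVERNAME)" `
        -Dependency "[IP Address 10.0.1.5] or [IP Address 10.0.2.5]"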
    We are in process of rewriting all our SQL jobs to start in tempdb before transitioning to the appropriate target database. If this works for all of our jobs, I will mark the above response as answer.
    Again, thank you for the answer.
    Regards,
    Joe M.

  • Failover Cluster Network Name Failed and Can't be Repaired

    I have an issue that seem to be a different problem than any others have encountered.
    I've scoured everything I can find and nothing has fixed my problem.
    The problem starts with the common problem of the cluster network name failing on my 2 node server 2012 file server cluster.  The computer object was still in AD and appeared to be fine so it was not the common problem of the object
    getting deleted somehow.  At the time, there was no other object with that name in the recycling bin, so I don't think it was mistakenly deleted and quickly recreated to cover any tracks, so to speak.
    Following one guide, I tried to find the registry key that corresponded with the GUID of the object, but neither node in the cluster had it in its registry (which may be part of the problem).
    Since it was in the failed state, I tried to do the repair on the object to no avail.
    We run a "locked down" DC environment so all computer objects have to be pre-provisioned.  They were all pre-provisioned successfully and successfully assigned during cluster creation.  The cluster was running with no issues for a month
    or so before this problem came up.
    When I do a repair on the object while taking diagnostic logs the following 4609 error appears:
    The action 'Repair' did not complete. - System.ApplicationException: An error occurred resetting the password for 'Cluster Name'. ---> System.ComponentModel.Win32Exception: Unknown error (0x80005000)
    There appears to be a corresponding 4771 error with a failure code 0x18 that comes from the security log of the DC that states there was a Kerberos pre-authentication failure for the cluster network name object (Domain\Clustername$)
    I believe this is what is causing the repair failure.  All the information I found related to security error 4771 was either a bad credentials given for a user account or the fix was to reconnect the computer to the domain.  I can't seem to find
    a way to do this with the cluster network name.  If there's a way please let me know.
    I've tried a number of things, like resetting the object, disabling it, deleting and creating a new object with the same name, deleting that new object and recovering the original, etc...
    Can anyone shed some light on what is going on and hopefully how to fix it other than rebuilding the cluster?  I'm quite close to just tearing it down and building it back up but am hesitant because this cluster in currently in production...
    Any help would be appreciated

    Hi,
    I haven't come across an issue quite like yours. Based on my experience, the 4096 error is often caused by a CSV disk issue, and the 0x80005000 error is sometimes caused by a repetitive computer object in the OU. Please check the related areas above, or run the validation test and post the error information.
    Although I do have a CSV, there doesn't seem to be any problem with it, and it was running just fine for a month or so before the problem started. I double-checked and there are no duplicate computer objects; maybe I don't understand what you mean by "repetitive", could you explain further?
    The cluster validates successfully with a few warnings:
    Validating cluster resource Name: DT-FileCluster.
    This resource is marked with a state of 'Failed' instead of
    'Online'. This failed state indicates that the resource had a problem either
    coming online or had a failure while it was online. The event logs and cluster
    logs may have information that is helpful in identifying the cause of the
    failure.
    - This is because the cluster name is in the failed state
    Validating the service principal names for Name:
    DT-FileCluster.
    The network name Name: DT-FileCluster does not have a valid
    value for the read-only property 'ObjectGUID'. To validate the service principal
    name the read-only private property 'ObjectGuid' must have a valid value. To
    correct this issue make sure that the network name has been brought online at
    least once. If this does not correct this issue you will need to delete the
    network name and re-create it.
    - This is definitely related to the problem and the GUID probably got removed when we attempted a fix by resetting the object and trying the repair from the failover cluster manager.
    The user running validate, does not have permissions to create
    computer objects in the 'ad.unlv.edu' domain.
    - This is correct, we run a restricted domain. I have a delegated OU that I can pre-provision accounts in. The account was pre-provisioned successfully and was at one point set up and working just fine.
    There are no other errors nor warnings.

  • Exchange 2013 MBX in DAG along with Hyper-V and Failover Cluster

    Hi Guys! I've tried to find an answer to my question or some kind of solution, but with no luck, which is why I am writing here. The case is as follows. I have two powerful server boxes and iSCSI storage, and I have to design a high-availability solution which includes SCOM 2012, SC DPM 2012 and Exchange 2013 (two CAS+HUB servers and two MBX servers).
    Let me tell you how I plan to do that, and please correct me if the proposed solution is wrong.
    1. On both hosts - add Hyper-V role.
    2. On both hosts - add failover clustering role.
    3. Create 2 VMs through failover cluster manager, VMs will be stored on a iSCSI LUN, the first one VM for SCOM 2012 and the second one for SCDPM 2012. Both VMs will be added as failover resource.
    4. Create 4 VMs - 2 for CAS+HUB role and 2 for MBX role, VMs will be stored on a iSCSI LUN as well.
    5. Create a DAG within the two MBX servers.
    In general, that's all. What I wonder is whether I can use failover clustering to achieve high availability for the 2 VMs and at the same time create a DAG between the MBX servers and NLB between the CAS servers.
    Excuse me for this question, but I am not proficient in this matter.
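    (For reference, I understand step 3 can also be done from PowerShell once a VM exists on the iSCSI LUN; a minimal sketch, with the VM name as a placeholder:)

    # Make an existing VM highly available as a clustered role.
    Import-Module FailoverClusters
    Add-ClusterVirtualMachineRole -VMName "SCOM2012"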

    Hi,
    As far as I know, it is supported to create a DAG for mailbox servers installed on a Hyper-V server.
    And since load balancing has changed for CAS in Exchange 2013, DNS round robin is generally a better fit than NLB. However, you can still use NLB with Exchange 2013.
    For more information, you can refer to the following article:
    http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
    If you have any question, please feel free to let me know.
    Thanks,
    Angela Shi
    TechNet Community Support
