One disk in failover cluster but needs multiple volumes

I made a dumb move and initialized my disk as MBR instead of GPT. Now I have 2.5TB but can only allocate 2TB because of the MBR limitation.
So I have about 300GB of unallocated space on a cluster disk but can't figure out how to create a volume out of it and add it to my file server cluster.
Is this possible? Here is a picture of what I'm talking about:

You will not be able to add a partition as its own cluster disk. I would back up your data, convert the disk to GPT, and then restore the data (a rough PowerShell outline follows below).
David A. Bermingham, MVP, Senior Technical Evangelist, SIOS Technology Corp
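
For reference, here is roughly what that back-up-and-convert step looks like in PowerShell. This is only a sketch: the disk number, resource name, and volume label are hypothetical placeholders, the Storage cmdlets assume Server 2012 or later, and Clear-Disk destroys everything on the disk, so verify your backup first.

# Remove the disk from the cluster before touching it
# ("Cluster Disk 2" is a hypothetical resource name)
Remove-ClusterResource -Name "Cluster Disk 2" -Force

# Wipe the MBR disk and re-initialize it as GPT -- this erases all data
Clear-Disk -Number 2 -RemoveData -Confirm:$false
Initialize-Disk -Number 2 -PartitionStyle GPT

# Recreate a single NTFS volume spanning the full 2.5TB, then restore the data
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterData"

# Make the disk available to the cluster again
Get-ClusterAvailableDisk | Add-ClusterDisk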

Similar Messages

  • Windows Server 2012R2 Failover Cluster error with mounted volumes

    Hi all,
I have a problem with mounted volumes on a WSFC built on top of Windows Server 2012 R2. The situation is:
M: is the volume hosting the mount points
disk-1, disk-2, disk-3 are volumes mounted under M:\SomeFolder
These volumes are used by a SQL Server Failover Cluster Instance, but my problem is related to WSFC itself. I've set dependencies so that disk-1, disk-2, disk-3 depend upon M:
If I try a failover of the "SQL Server" role, I observe that when the disks come online on the other node they fail with this error:
    Cluster resource 'disk-1' of type 'Physical Disk' in clustered role 'SQL Server (ISQL2014A)' failed. The error code was '0xaa' ('The requested resource is in use.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
If I manually take M: offline and then bring it online, and then manually bring all the disks (1 to 3) online, they come online with no error.
I'm going crazy!

I've found the root of the problem: the servers are virtual machines on a VMware ESX 5.5 infrastructure. VMware claims that multipath is supported on 5.5 for raw device mapping disks, but after disabling multipath (I've set the policy to fixed path) the Windows Server Failover
Cluster stops having problems.
    Now we have opened a support call with VMware.
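
    For anyone else hitting this, the dependency wiring can be inspected and set from PowerShell. A minimal sketch, assuming the FailoverClusters module and hypothetical resource names ("disk-1" and "Cluster Disk M" for the volume hosting the mount points):

    Import-Module FailoverClusters

    # Each mounted-volume disk must depend on the disk hosting the mount-point folder (M: here)
    Set-ClusterResourceDependency -Resource "disk-1" -Dependency "[Cluster Disk M]"

    # Verify the resulting dependency expression
    Get-ClusterResourceDependency -Resource "disk-1"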

  • Failover Cluster Quorum Disk has fallen off the shared volume

    Hi, we have a cluster that was holding 40+ VMs and was originally set up with the built-in 1GbE adapters. Yesterday we installed QLogic 10GbE NICs with teaming on both of the nodes and reconfigured the network on both nodes. However, now the quorum disk
    is not part of the Cluster Shared Volumes. How can I add that disk back to the shared volume, please?

    Hi Riaz,
    Adding a quorum disk back is easy, so please let us know if any specific error occurs during the steps provided in the following thread (see also the sketch below):
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/0566ede4-55bb-4694-a134-104fac2a7052/replace-quorum-disk-on-failover-cluster-on-different-lun?forum=winserverClustering
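
    For what it's worth, re-pointing the quorum at a disk witness is a one-liner in PowerShell. A sketch only, with a hypothetical resource name; the -DiskWitness parameter is the Server 2012 R2 form, while 2008 R2/2012 use -NodeAndDiskMajority instead:

    # Re-point the cluster quorum at the disk witness
    Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

    # Confirm the new quorum configuration
    Get-ClusterQuorum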

  • Windows 2008R2 Failover Cluster node

    I am new to this company and have been asked to take over a task that the previous employee left unfinished. I am "familiar" with failover clustering but have not done any recently. It looks like the previous employee evicted one
    of the two cluster nodes to replace the hardware. I've set up the new hardware with failover clustering but now need to set up the iSCSI volumes on this new node. I have not done this before; all I've done before was create NEW failover
    disks, and those were Fibre Channel disks. Can anyone give me advice or a link to a tutorial for setting up these existing iSCSI volumes on the second node? One more note: the iSCSI volumes are on a Dell EqualLogic system, and I've never worked
    on EqualLogic before either.

    Hi,
    If I haven't misunderstood, it seems you are asking how to attach the storage to the node. You can refer to your hardware manual to set up the hardware connection,
    or call your hardware vendor for more help. You can refer to the following support scenario to configure your Fibre Channel storage:
    The Cluster service allows you to reconnect or replace a cable between the HBA and the switch, and then allows for the node to take ownership of the physical disk resource.
    The HBA driver must consider several issues for a Plug and Play rescan to occur. The HBA miniport driver must issue a "BusChangeDetected" notification when the cable is reconnected (an HBA port driver issues an "IoInvalidateDeviceRelations" notification) so
    that Windows is notified that a change has been made to the shared bus.
    The related KB:
    Removing the HBA cable on a server cluster
    http://support.microsoft.com/kb/294173/en-us
    Hope this helps.
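
    Since the question was actually about iSCSI on a 2008 R2 node (which has no iSCSI PowerShell module), the quick reconnect is usually done with the iSCSI Initiator control panel or by driving iscsicli.exe from a PowerShell prompt. A sketch with a hypothetical portal IP and target IQN:

    iscsicli QAddTargetPortal 10.0.0.50   # register the EqualLogic group IP
    iscsicli ListTargets                  # discover the volume IQNs
    iscsicli QLoginTarget iqn.2001-05.com.equallogic:example-volume

    # QLoginTarget is non-persistent; use PersistentLoginTarget or tick
    # "Add this connection to the list of Favorite Targets" in the GUI
    # so the volumes reconnect after a reboot.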

  • Can we set up FILESTREAM on a Failover Cluster

    I saw the following point in a TechNet article about RBS.
    The local FILESTREAM provider is supported only when it is used on local hard disk drives or an attached Internet Small Computer System Interface (iSCSI) device. You cannot use the local RBS FILESTREAM provider on remote storage devices such as network attached storage (NAS).
    It looks like we cannot use FILESTREAM on a Failover Cluster, because to set up a Failover Cluster we need to have a NAS. But then the NAS is made available locally for the Failover Cluster, so FILESTREAM should work, right?
    I found another article which talks about setting up FILESTREAM on a Failover Cluster, so I am a bit confused.
    https://msdn.microsoft.com/en-us/library/cc645886.aspx

    Hi Frank,
    As the other post says, we can set up FILESTREAM on a failover cluster.
    However, FILESTREAM can't live on a network attached storage (NAS) device unless the NAS device is presented as a local NTFS volume via iSCSI. With iSCSI, it is supported by the Microsoft
    FILESTREAM provider.
    Reference:
    Description of support for network database files in SQL Server
    Programming with FileStreams in SQL Server 2008
    Thanks,
    Lydia Zhang
    TechNet Community Support
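
    As a concrete pointer, the instance-level half of enabling FILESTREAM looks like this (a sketch only; the cluster network name is a placeholder, Invoke-Sqlcmd assumes the SQL Server PowerShell module is loaded, and the Windows-service-level FILESTREAM setting must also be enabled in SQL Server Configuration Manager on each node):

    # Level 2 = T-SQL plus Win32 streaming access; "SQLClusterName" is hypothetical
    Invoke-Sqlcmd -ServerInstance "SQLClusterName" `
        -Query "EXEC sp_configure 'filestream access level', 2; RECONFIGURE;"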

  • Failover Cluster in a box

    Hi,
    I'm in the process of building a test environment which requires a failover cluster for a SQL Server AlwaysOn configuration of 2 DB nodes. I have a single Windows 2012 R2 host server with Hyper-V installed, and my guest OSes are configured using local storage
    on this server. 
    In order to create the guest failover cluster, I need to create a quorum drive which needs to be shared between the two DB nodes. Using only this physical host server and its local storage, is this possible to achieve?
    I believe the options for creating a shared VHDX disk are to host it on a CSV or a Scale-Out File Server, but both of these seem to require the creation of a cluster themselves, which in turn requires a quorum disk on shared storage!
    Thanks, 
    David

    You can run a guest VM cluster on your single node, no problem! You can spawn a third VM and expose an SMB3 share (or an iSCSI target LUN with a CSV on top of it if you want to run old-school) and put your shared VHDX there. See:
    Virtualization: Hyper-V and High Availability
    http://technet.microsoft.com/en-us/magazine/hh127064.aspx
    "You can also cluster the virtual machines (VMs) themselves (“guests”) on a single host. Using virtual networks for heartbeat and virtual iSCSI for the quorum and other
    attached storage lets you create guest clusters, even if the host hardware wouldn’t normally support it."
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
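
    If you go the iSCSI-target route, the third VM can serve the quorum LUN with the built-in iSCSI Target Server role. A sketch assuming Windows Server 2012 R2 in that VM, with hypothetical target, path, and initiator IQN values:

    # Inside the third "storage" VM: carve out a small quorum LUN for the guest cluster
    New-IscsiServerTarget -TargetName "GuestClusterQuorum" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:db-node1", `
                      "IQN:iqn.1991-05.com.microsoft:db-node2"
    New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\Quorum.vhdx" -SizeBytes 1GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "GuestClusterQuorum" `
        -Path "C:\iSCSIVirtualDisks\Quorum.vhdx"

    The two DB-node VMs then connect with the standard iSCSI initiator, and the disk becomes the quorum witness.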

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped into this problem:
    I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster, but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate or shutdown-->move-->start one
    of the guest nodes on another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not
    been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
    We have a ticket logged with Microsoft, more of a checkbox exercise to reassure the business we're doing the needful. Anyway, they had us....
    Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0 to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
    There was no change in the error: move a guest to another Hyper-V node, and the failover cluster, well, fails, with the following event IDs on the node that fails...
    1564 - File share witness resource 'xxxx' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster.
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed
    1573 - Node xxxx failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available
    The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails fail over to the remaining node...
    So we then
    Removed 3 x Nic's in one of the 4 x NIC teams, so, leaving a single NIC in the team (no change)
    Removed one NIC from the LACP group on each Hyper-V host
    Created new Virtual Switch using this simple trunk port NIC on each Hyper-V host
    Moved the DAG nodes to this vSwitch
    Failover cluster works as expected, guest VM's running on separate Hyper-V hosts, when on this vswitch with single NIC
    So Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue
    resolved once we are able to find the cause of the above mentioned issue", which we have now done, as in, teaming is the cause... argh.
    But after talking, they are now escalating internally.
    The other thing we are doing is building Server 2008 R2 guests and installing Exchange 2010 SP3, to get an Exchange 2010 DAG running on Server 2008 R2 and see if it has the same issue, as people indicate that this combination perhaps does not have the same problem.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    A Cloud Mechanic.
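
    For completeness, the single-NIC workaround described above boils down to something like this on each Hyper-V host (a sketch; the adapter, team, switch, and VM names are hypothetical placeholders):

    # Pull one NIC out of the LACP team and give the DAG guests their own vSwitch on it
    Remove-NetLbfoTeamMember -Name "Ethernet 12" -Team "Virtual Machine Network 1"
    New-VMSwitch -Name "DAG-vSwitch" -NetAdapterName "Ethernet 12" -AllowManagementOS $false

    # Re-home the DAG guest's network adapter onto the new switch
    Connect-VMNetworkAdapter -VMName "EXCH-NODE1" -SwitchName "DAG-vSwitch"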

  • OracleAS cold failover cluster

    I am installing an OracleAS cold failover cluster, but during installation I am getting an error in Database Configuration
    Assistant. Can anybody help me?

    This is as simple as "two heads think better than one": you will have two (or more) points of access to your applications, the processing is split between servers, and if one of the nodes fails, there is another to back it up. That is essentially what a cold failover cluster does. The administration is also centralized: with a proper configuration you only need to make a change once and all nodes will pick it up. Finally, with a cold failover cluster only 50% of your potential capacity is in use; the standby server sits there like a portrait that is only good to look at and does nothing else until a failover occurs. You don't need benchmarks or notes; it is really as simple as it sounds.
    Greetings.

  • Witness Server in a failover cluster

    Hi.
    I was reading about the witness server in a failover cluster, but I always thought that the switch between active and passive would be automatic; I never thought of a "vote" and a "quorum" to reach.
    Under what circumstances do I need this quorum (and therefore a witness server), and what happens if the vote is even?
    Does the witness server necessarily have to be a third server, or can it be set up on either the active or the passive one?
    Cheers 

    Hi,
    In addition to Jared's suggestion, I would like to clarify the following things:
    1. The witness server and its directory are used only for quorum purposes when there is an even number of members in the DAG.
    2. The witness server should be in the same Active Directory forest as the DAG.
    3. It is recommended to specify a Client Access server as the witness server for an Exchange 2013 DAG.
    Hope my clarification is helpful.
    If there are any problems, please feel free to let me know.
    Best regards,
    Amy Wang
    TechNet Community Support
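
    For reference, the witness is configured as a property of the DAG itself. A sketch from the Exchange Management Shell, with hypothetical DAG, server, and directory names:

    # Point the DAG at a witness server; it is only consulted when the member count is even
    Set-DatabaseAvailabilityGroup -Identity "DAG1" `
        -WitnessServer "CAS01.contoso.com" -WitnessDirectory "C:\DAG1Witness"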

  • The account information could not be verified - SQL 2005 SP3 installation on Windows 2008 R2 Failover Cluster

    Hi,
    We have a problem when we are trying to install SQL SP3 on SQL 2005 cluster.
    The problem is described below:
    We installed a new Windows 2008 R2 x64 failover cluster with two nodes.
    As a next step we installed SQL 2005 Enterprise Edition x64 on this failover cluster. We finished the installation without problems. After the SQL installation we tried to install SQL SP3. We logged on using a domain administrator account and
    started the installation of SP3 with that user. On the authentication mode screen the test gives a success status using either the domain administrator (Windows Authentication) or sa (SQL Authentication). On the remote user account screen, when we give the domain
    administrator account we get the message "the account information could not be verified. Press ok to return to authentication mode screen to determine the reason for failure".
    We have tried to give the SQL services startup user on the remote user account screen, but the error is the same.
    The domain administrator account exists in the security logins of SQL with sysadmin rights.
    Can someone help?
    Thanks. 

    Hi StelioMavro,
    According to your error message, when you install SQL Server 2005 SP3 on your failover cluster, you need to verify whether another instance had been removed, leaving the installation unable to pass the login check. I recommend you use Windows Authentication
    instead of SQL Server Authentication when applying SP3. Also, the Windows account used to run the SP3 setup program must have administrator privileges on the computer where SP3 will be installed.
    For more information about how to install SQL Server 2005 SP3 in failover cluster, you can review the following article.
    http://www.sqlcoffee.com/Tips0007.htm
    Regards,
    Sofiya Li
    TechNet Community Support

  • In Windows 2012, IIS cannot be configured in a Microsoft Failover Cluster

    Hello All
    I configured IIS under Microsoft Failover Cluster with the help of the link below. The setup worked successfully. I configured this on Windows 2008.
    http://support.microsoft.com/kb/970759/en-us
    I verified the test cases below.
    1.) I killed the PID of W3SVC; the failover executed.
    2.) I stopped the Default Web Site; the failover executed.
    3.) I stopped the Default Application Pool; the failover executed.
    4.) I stopped the other application pools; the failover executed (after changing the script under the above-mentioned link).
    PLEASE NOTE: I used the following links to configure the setup on Windows2008 and Windows2012
    http://zahidhaseeb.wordpress.com/2014/02/12/how-to-configure-iis-web-site-and-application-pool-in-microsoft-failover-cluster/
    But when I create the same environment under Windows 2012, see what happens below.
    I verified the test cases below.
    1.) I killed the PID of W3SVC; the failover executed.
    2.) I stopped the Default Web Site; the failover executed.
    3.) I stopped the Default Application Pool; the failover executed.
    4.) I stopped the other application pools; the failover was NOT executed (after changing the script under the above-mentioned link).
    Any comment will be appreciated. Thanks. Zahid Haseeb.

    Hi,
    As the KB mentions, Microsoft recommends administrators carefully evaluate the use of Network Load Balancing (NLB) as the primary and preferred method for improving the scalability and availability of web applications with multiple servers running IIS 7.5
    or IIS 7.0, as opposed to using failover clustering.
    It is important to consider that clustering IIS by means of clustering the IIS services does not always guarantee a high-availability solution for web applications.
    While the IIS services (specifically the WWW service) might be up and running, a specific application pool's hosting process could have terminated, or the application might be throwing internal server HTTP errors. Clustering the web applications and
    monitoring their health by using a custom script is the correct and recommended way to achieve a high-availability IIS cluster using failover clustering. Below is a sample script that monitors the state of an application pool to determine whether it is started
    or not.
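    (The KB's sample script itself did not survive the quote above; it is a VBScript for a cluster Generic Script resource. As an illustration of the health check at its heart, a PowerShell sketch with a hypothetical pool name might look like this:)

    # Exit 0 only while the application pool is started; anything else signals failure
    $pool  = "DefaultAppPool"   # hypothetical pool name
    $state = & "$env:windir\system32\inetsrv\appcmd.exe" list apppool $pool /text:state
    if ($state -eq "Started") { exit 0 } else { exit 1 }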
    That said, this is more of an IIS issue; maybe you can ask a more detailed question in the IIS forum.
    IIS support forum
    http://forums.iis.net/
    Thanks for your understanding and support.

  • DSC, SQL Server 2012 Enterprise sp2 x64, SQL Server Failover Cluster Install not succeeding

    Summary: DSC fails to fully install the SQL Server 2012 failover cluster, but the identical code snippet below run in PowerShell ISE with administrator credentials works perfectly, as does running the SQL Server install interface.
    In order to develop DSC configurations, I have set up a Windows Server 2012 R2 failover cluster in VMware Workstation v10 consisting of 3 nodes. All have the same Windows Server 2012 version and have been fully patched via Microsoft Update.
    The cluster properly fails over on command and the cluster validates. PowerShell 4.0 is being used as installed in Windows.
    PDC
    Node1
    Node2
    The DSC script builds up the parameters to setup.exe for SQL Server.  Here is the cmd that gets built...
    $cmd2 = "C:\SOFTWARE\SQL\Setup.exe /Q /ACTION=InstallFailoverCluster /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS /UpdateEnabled=false /IndicateProgress=false /FEATURES=SQLEngine,FullText,SSMS,ADV_SSMS,BIDS,IS,BC,CONN,BOL /SECURITYMODE=SQL /SAPWD=password#1 /SQLSVCACCOUNT=SAASLAB1\sql_services /SQLSVCPASSWORD=password#1 /SQLSYSADMINACCOUNTS=`"SAASLAB1\sql_admin`" `"SAASLAB1\sql_services`" `"SAASLAB1\cubara01`" /AGTSVCACCOUNT=SAASLAB1\sql_services /AGTSVCPASSWORD=password#1 /ISSVCACCOUNT=SAASLAB1\sql_services /ISSVCPASSWORD=password#1 /ISSVCSTARTUPTYPE=Automatic /FAILOVERCLUSTERDISKS=MountRoot /FAILOVERCLUSTERGROUP='SQL Server (MSSQLSERVER)' /FAILOVERCLUSTERNETWORKNAME=SQLClusterLab1 /FAILOVERCLUSTERIPADDRESSES=`"IPv4;192.168.100.15;LAN;255.255.255.0`" /INSTALLSQLDATADIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBDIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBLOGDIR=M:\SAN\SQLLogs\MSSQLSERVER /SQLTEMPDBDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLTEMPDBLOGDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLBACKUPDIR=M:\SAN\Backups\MSSQLSERVER > C:\Logs\sqlInstall-log.txt "
    Invoke-Expression $cmd2
    When I run this specific command in PowerShell ISE running as administrator, logged in as a domain account that is in Node1's administrators group and has domain administrative authority, it works perfectly fine and sets up the initial node properly.
    When I use the EXACT SAME code above pasted into my custom DSC resource, as a test with a known successful install, run with the same user as above, it does NOT completely install the cluster properly. It still installs 17 applications
    related to SQL Server and seems to properly configure everything except the cluster. Failover Cluster Manager shows that the SQL Server role will not come online and the SQL Server Agent role is not created.
    The code is run on Node1, so the setup folder is local to Node1.
    The ConfigurationFile.ini files for the two types of installs are identical.
    Summary.txt does have issues..
    Feature:                       Database Engine Services
      Status:                        Failed: see logs for details
      Reason for failure:            An error occurred during the setup process of the feature.
      Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
      Component name:                SQL Server Database Engine Services Instance Features
      Component error code:          0x86D8003A
      Error description:             The cluster resource 'SQL Server' could not be brought online.  Error: There was a failure to call cluster code from a provider. Exception message: Generic
    failure . Status code: 5023. Description: The group or resource is not in the correct state to perform the requested operation.  .
    It feels like this is a security issue with DSC or an issue with the setup in SQL Server, but please note I have granted the account membership in the Administrators group and domain administrative authority. The nodes were built with the same login. Windows Firewall
    is completely disabled.
    Please let me know if any more detail is required.

    Hi Lydia,
    Thanks for your interest and help.
    I tried "Option 3 (recommended)" and that did not help.
    The issue I encounter with the fail-over cluster only occurs when trying to install with DSC!
    Using the SQL Server Install wizard, Command Prompt and even in Powershell by invoking the setup.exe all work perfectly.
    So, to reiterate, this issue only occurs while running in the context of DSC.
    I am using the same domain login with Domain Admin Security and locally the account has Administrators group credentials.  The SQL Server Service account also has Administrators Group Credentials.
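
    One avenue that fits the "security issue with DSC" theory: DSC resources execute as Local System by default, while the interactive installs above all ran under the domain account. On WMF 5 this difference can be tested with PsDscRunAsCredential. A sketch only; the module, resource, and property names below are hypothetical placeholders for the custom resource:

    Configuration SqlFciInstall
    {
        param ([PSCredential] $InstallCredential)

        Import-DscResource -ModuleName MyCustomSqlModule   # hypothetical custom module

        Node "Node1"
        {
            MySqlSetup InstallFci                          # hypothetical custom resource
            {
                SetupPath            = "C:\SOFTWARE\SQL\Setup.exe"
                PsDscRunAsCredential = $InstallCredential  # run as domain account, not Local System
            }
        }
    }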

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current Configuration:
    3-node cluster, using clustered shared storage and about 22 VMs. The host servers are running 2012 Datacenter while all guests are running 2012 Standard. The SAN is EqualLogic and we are using HIT Kit 4.5.
    I have a CSV that is running out of space, so I created another CSV so that I could move some of the VMs to a new home. I tested this by creating a test VM, and moved it successfully 3 times. I then moved an actual
    LIVE VM and while it seemed to move OK, it will now not start. The message is "a disk read error occurred Press ctrl+alt+del to restart". I moved the test VM and it failed as well.
    I have read several things about this, but nothing seems to relate to my specific issue. I have verified that VSS is working and free of errors as well. From the Settings menu for the VM, if I select "Inspect" on the drive,
    the properties all look fine. It is a VHDX, and both the current file size and maximum disk size seem correct.
    The VMs were moved using the "Move - Virtual Machine Storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.

    Let's see if I can answer all of those, and I appreciate the brainstorming. This really needs to work, correctly.
    1. The storage is moving.
    2. VMs and SAN are on the same device.
    3. No, my Cluster Shared Volume (CSV) is out of room (more on that later).
    4. No, I actually have 2 SANs grouped together. However, I'm moving the VMs from one CSV to another CSV on the same SAN. The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, not involved in any way except
    that it is in the same SAN group, is an EqualLogic PS6010.
    5. No error during the move; it took about 5-10 minutes, no error messages. Note, I did a test and it worked GREAT 3 times. Now both a live VM and the test VM are doing the same thing.
    6. No, the machine is not too large. The test machine was a 50 gig drive, just 2012 Standard installed with updates. The live VM was a 75 gig VM that was my Trend Micro server, our anti-virus host.
    7. Expand the existing CSV? Yes, I should be able to, but there is an issue there. The volume was expanded correctly; EqualLogic sees the added space, Failover Cluster Manager sees the added space, however Disk Management only
    sort of does. When looking at Disk Management, there are 2 areas that tell you a little bit about the drive, the top part and the bottom part. The top part only shows 500G, the original size, while the bottom part
    says that it is 1 TB in size. I called Dell's technical support, and after they looked at it I was told by the technician that they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled
    CSV. I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance. The Trend VM was running on CSV-1 and working fine.
    I must admit that the test VM was on CSV-2. I moved the test VM from CSV-2 to CSV-3 back and forth several times with no errors. The Trend server was on CSV-1 and was moved to CSV-3, however it failed. Again, I then moved
    the test VM from CSV-2 to CSV-3 and it failed the same way. I could not test the test VM on CSV-1 due to CSV-1 not having enough space.
    8. I did disable the network from the VM to see if that mattered; it did not.
    9. I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully. Once I am able to test that suggestion I will post the results as well.
    Again, thanks for all the suggestions and comments, as I'd rather have lots to look at and try. I hope I answered them well enough.
    Kenny
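
    On the point 7 size mismatch: before evacuating and deleting the CSV, it may be worth forcing Windows to re-read the disk layout (a sketch; assumes Server 2012's Storage module on the node that owns the disk):

    # Force a storage rescan so Disk Management re-reads the expanded LUN
    Update-HostStorageCache

    # The classic equivalent is running "rescan" from within diskpart.exe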

  • 2-Node Failover Cluster - iSCSI disks as 1 volume?

    Hi,
    Not sure if I'm in the correct forum. If I am, I apologize. I need some advice.
    I have created a 2-node failover cluster with 2 HP blades. I also currently have 2 NAS servers (HP X1600 24TB servers running 2008 Storage Server). The ultimate goal would be to combine all of the storage space from the NASes into 1 volume addressable
    by the failover cluster (as well as disk space from any additional NASes added in the future).
    Right now, I can add the iSCSI disk space from the NAS targets as different volumes under Cluster Shared Volumes. Because of the 16TB limit in the iSCSI target, I essentially have 2 iSCSI disks on each NAS, one of 16TB and the other of 4TB (the
    NAS drives are configured for RAID 5, so there's a 4TB loss). So, I have 4 iSCSI disks in the cluster, each as their own volume.
    Any thoughts on making the 4 drives addressable as one volume?
    Regards,
    -Eric

    We're running Server 2012 Datacenter on the cluster nodes.
    I was thinking the same about the 3rd-party software to do what I'd like it to do. The data is mostly security camera video from our security system. Since it's not really critical data, I'm just looking for a way to maximize
    the available hard drive space and make it addressable as one volume or network share...
    -Eric
    You can build Storage Spaces (simple, not mirrored, as that would waste 50% of your capacity; MSFT can do mirror and parity for clustered Storage Spaces only with R2) from iSCSI LUs. Dog slow and unsupported, but you'll have linear spanned space. See:
    Rough Guide To Setting Up A Scale-Out File Server
    http://www.aidanfinn.com/?p=13176
    Creating Virtual SoFS with shared VHDX
    http://www.aidanfinn.com/?p=15145
    You don't need SoFS (obviously), but in this article Aidan creates Storage Spaces from iSCSI LUNs.
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
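
    A sketch of the Storage Spaces route described above, with hypothetical pool and volume names; "Simple" resiliency concatenates the four iSCSI LUNs into one unprotected space, which matches the non-critical camera data:

    # Pool every poolable disk (the four iSCSI LUNs) and carve one big simple space
    New-StoragePool -FriendlyName "CameraPool" -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
    New-VirtualDisk -StoragePoolFriendlyName "CameraPool" -FriendlyName "CameraSpace" `
        -ResiliencySettingName Simple -UseMaximumSize
    Get-VirtualDisk -FriendlyName "CameraSpace" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "CameraData"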

  • Cannot add multiple members of a failover cluster to a DFSR replication group

    Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10TB+) local RAID storage arrays, but given the physical separation there is no physical shared storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL app data storage is using a SOFS (scale out file server) network share on separate
    servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
    I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise,
    I can't run an HA single DFSR target for the same reason (no shared storage), and in any case this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating file storage across servers with
    non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, so long as you are not trying to replicate folders in a shared storage target accessible to both hosts (which would obviously be a problem)? So long as you are not replicating
    folders in c:\ClusterStorage, there should be no conflict. 
    Is there a workaround or alternative solution?

    Yes, I read that series. But it doesn't address the issue. The article is about making a DFSR target highly available. That won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL availability groups, etc.). DFSR should not interfere with this, but it
    is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case and I can't see an alternative solution in the case where you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything - VMs, SQL, and file folders. But it seems that this artificial
    barrier is causing us to need to choose either clustered services or DFSR between nodes. But I can't see any rationale to block DFSR between cluster nodes - especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.
