File Server Failover Cluster without shared disks

I have two servers that I wish to cluster as my Hyper-V hosts, and also two file servers, each with ten 4TB SATA disks. Everything I have read about implementing high availability at the storage level involves clustering the file servers (e.g. SOFS), which requires external shared storage that the servers in the cluster can access directly. I do not have external storage and do not have the budget for it.
Is it possible to implement some form of HA with Windows Server 2012 R2 file servers without shared storage? For example, is it possible to cluster the servers and have data on one server mirrored in real time to the other, so that if one server goes down, the other server takes over serving storage requests using the mirrored data?
I intend to use the storage to host VMs for a Hyper-V failover cluster and a SQL Server cluster; they will access the shares on the file servers through SMB.
Each file server also has a 144GB SSD. How can I use it to improve performance?

There are two ways for you to go:
1) Build a cluster without shared storage using Microsoft's upcoming version of Windows (yes, they finally have that feature, plus tons of other cool stuff). We've recently built both a Scale-Out File Server serving a Hyper-V cluster and a standard general-purpose file server cluster with this version. I'll blog the edited content next week (you can drop me a message to get drafts right now), or you can use Dave's blog; he was the first one I know of who built it and posted about it, see:
Windows Server Technical Preview (Storage Replica)
http://clusteringformeremortals.com
The feature you should be interested in is Storage Replica. The official guide is here:
Storage Replica Guide
http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
Storage Replica will mirror a volume block-by-block between your two file servers, which is essentially what you describe.
Just be aware: the feature is new and the build is a preview (not even a beta), so failover does not happen transparently (even with the CA feature of SMB 3.0 enabled). However, I think tuning the timeouts and improving I/O performance will fix that. SoFS failover, by contrast, is transparent right away.
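For orientation, here is a rough PowerShell sketch of what setting up the replication partnership looks like with the Technical Preview cmdlets. Server names, volumes and log drives are placeholders, and the cmdlets may still change before release; the official guide linked above is authoritative.

    # Validate the proposed source/destination topology first (placeholder names)
    Test-SRTopology -SourceComputerName FS01 -SourceVolumeName D: -SourceLogVolumeName L: `
        -DestinationComputerName FS02 -DestinationVolumeName D: -DestinationLogVolumeName L: `
        -DurationInMinutes 30 -ResultPath C:\Temp

    # Create the partnership: FS01 D: is replicated to FS02 D:, each side
    # with a dedicated (ideally SSD) log volume
    New-SRPartnership -SourceComputerName FS01 -SourceRGName RG01 `
        -SourceVolumeName D: -SourceLogVolumeName L: `
        -DestinationComputerName FS02 -DestinationRGName RG02 `
        -DestinationVolumeName D: -DestinationLogVolumeName L: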
2) If you cannot wait the 9-12 months from now (hoping Microsoft doesn't delay the release), or you're not happy with the fairly basic functionality Microsoft has put there (active-passive design, no RAM cache, a requirement for separate storage, system/boot and dedicated log disks where an SSD is assumed), you can get more advanced behaviour with third-party software.
It will basically "mirror" part of your storage (it can even be a directly-accessed file on your only system/boot disk) between hypervisor or plain Windows hosts, creating a fault-tolerant, distributed SAN volume exposed through SMB3/NFS shares.
For more details see:
StarWind Virtual SAN
http://www.starwindsoftware.com/starwind-virtual-san/
There are other vendors who do similar things, but since you want a file server (no virtualization?), most of them are out because they are Linux/FreeBSD/Solaris-based and run inside VMs, so you need to look for native Windows implementations. Check SteelEye DataKeeper (that's Dave, who blogged about the Storage Replica file server) and DataCore.
Good luck :) 
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • New SQL Server Failover Cluster Installation - No disk is available to select in the "Cluster Disk Selection" section

    Hello Everyone,
    I am in deep need of your help regarding the problem I am facing.
    I am doing a New SQL Server Failover Cluster Installation on a virtual server that is part of a failover cluster. I am able to complete all the steps successfully, but when I reach the point where I am supposed to select the shared disk that will be included in the SQL Server resource cluster group, I don't find any disk in the list.
    I have already created a two-node failover cluster and added 3 disks (1 as the quorum witness and 2 as available storage).
    No roles were created, 2 nodes are available and 1 network is present in the cluster.
    The message says: "The search for mount points failed. Error: the system cannot find the path specified." What is this and how can I solve it?
    Thanks in advance for your support and looking forward for your valuable feedbacks.
    Mark as answer if this answered your question. Please don't hesitate to ask for any further help.

    Dear Ashwin,
    I have granted the privileges mentioned in the link you provided as below:
      Act as Part of the Operating System = SeTcbPrivilege
      Bypass Traverse Checking = SeChangeNotify
      Lock Pages In Memory = SeLockMemory
      Log on as a Batch Job = SeBatchLogonRight
      Log on as a Service = SeServiceLogonRight
      Replace a Process Level Token = SeAssignPrimaryTokenPrivilege
    I was not able to solve the problem by giving these privileges to the domain account I am using to install SQL.
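    For the record, here is a quick check (with hypothetical node and disk names) that the disks are actually owned by the cluster and sitting Online in Available Storage, since that is what SQL setup enumerates on the Cluster Disk Selection page:

        Import-Module FailoverClusters
        # Disks the cluster could still add (should be empty if all three were added)
        Get-ClusterAvailableDisk
        # Disks already clustered, their state and the group they belong to
        Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" } |
            Format-Table Name, State, OwnerGroup, OwnerNode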
    Mark as answer if this answered your question. Please don't hesitate to ask for any further help.

  • WFC without Shared Disks for SCVMM Failover

    An SCVMM failover cluster requires us to first have a WFC in place.  If shared disk is the limitation, can a WFC be created without shared disks, just to fulfil the purpose of providing a WFC to the SCVMM failover cluster?

    Thanks for the reply.  Actually, design is not my decision unfortunately.
    I have gone ahead with a disk witness and I have used a shared VHDX as the quorum disk.  I faced some issues when I live migrated, but when I enabled Guest Services on both nodes (it is disabled by default), it started migrating fine.
    Please do share your thoughts on it, especially with respect to shared VHDX and live migration, and on the overall strategy, as MS always prefers a disk witness to a file share witness.
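    For anyone following along, a minimal sketch of the shared-VHDX witness setup described above (2012 R2 hosts; paths and VM names are placeholders):

        # Create a small fixed VHDX on a CSV and attach it to both guest nodes as a shared disk
        New-VHD -Path C:\ClusterStorage\Volume1\Shared\Quorum.vhdx -Fixed -SizeBytes 1GB
        "GuestNode1","GuestNode2" | ForEach-Object {
            Add-VMHardDiskDrive -VMName $_ `
                -Path C:\ClusterStorage\Volume1\Shared\Quorum.vhdx `
                -SupportPersistentReservations      # flags the VHDX as shared
        }
        # Inside the guest cluster, use that disk as the witness
        Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"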

  • SQL Server Failover Cluster Questions

    Dear All,
                I am building a two-node failover cluster on SQL Server 2012 SP1 (inside Hyper-V as a guest cluster) and want clarification on a few things I am facing.
    1.  I am receiving an MSDTC warning.  I can go ahead and create the cluster, but I want to understand whether MSDTC should be configured as a role on the cluster or not.  I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so in such a scenario, do I need MSDTC? If yes, how large should the MSDTC drive be? Is the following process correct?
    http://www.sqlnotebook.info/configure-msdtc-on-windows-cluster-2012/
    2.  During first node configuration, one needs to provide the "SQL CLUSTER RESOURCE GROUP NAME".  Does it have any bearing on how the databases and logs will be accessed by other servers, or is it just how the cluster resource group will be named? Is it required for every instance created inside the cluster, so that one can name it according to the instance name?
    3.  During instance creation, one needs to provide the "SQL Server Network Name".  As stated above, I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so would I be required to provide this for every instance I create, or is it only required once per cluster?
    4.  During instance creation, one needs to select the features required for installation, i.e. instance features and shared features.  Given the workloads above, which features should be selected so that there is less load on the server?
    5.  All instances use TempDB for the databases they host.  What is the best practice for TempDB: one TempDB for all instances on a separate LUN, or each instance having its own TempDB LUN?  What should the ideal size of the TempDB LUN be?
    6.  Should all the disks required for DBs and logs be added to the cluster?  Should they be added as normal disks or as CSV volumes?
    Thanks in advance. 

    Hello,
     1. You can run the Microsoft Distributed Transaction Coordinator service (MSDTC) as a clustered resource on a failover cluster server for increased reliability, based on the failover capabilities of the clustered servers. You can refer to the MSDTC section of the following article to determine whether the MSDTC cluster resource must be created.
    Reference: http://msdn.microsoft.com/en-us/library/ms189910.aspx#MSDTC
    2. The cluster resource group is where the SQL Server failover cluster resources are placed. Each clustered SQL Server instance belongs to one failover cluster resource group; for example, in a two-node SQL Server cluster, a clustered instance spanning the two nodes belongs to a single resource group.
    You can change the cluster resource group name, but note that the following names are reserved and already used as resource group names: Available Storage, Cluster Group.
    3. Each SQL Server cluster is assigned a virtual network name and IP address, which client applications use to connect to the clustered SQL Server.
    4. I am not familiar with SCVMM, SCOM and Orchestrator, but you should install the Database Engine Services and the SQL Server management tools. If you want to use SQL Server Reporting Services you can install Reporting Services, but the Report Server service cannot participate in a failover cluster.
    5. You can use isolated disks for the user databases and tempdb of each SQL Server cluster.
    6. Yes. You should use cluster disks added to Cluster Shared Volumes to host the database data and log files (see the sketch below).
    http://www.pythian.com/blog/how-to-install-a-clustered-sql-server-2012-instance-step-by-step-part-1/
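    For item 6, a minimal sketch of adding a clustered disk to Cluster Shared Volumes (the disk name is a placeholder):

        # List clustered physical disks, then promote one to a CSV so every node
        # can reach it under C:\ClusterStorage
        Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
        Add-ClusterSharedVolume -Name "Cluster Disk 3"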
    Regards,
    Fanny Liu
    Fanny Liu
    TechNet Community Support

  • How to Perform Forced Manual Failover of Availability Group (SQL Server) and WSFC (Windows Server Failover Cluster) with scripting

    I have a scenario with three nodes running Windows Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans two data centers.
    If the nodes in the primary data center become unavailable due to a data center outage, how can I automatically bring up the node in the WSFC (Windows Server Failover Cluster) in the secondary disaster recovery data center with a script?
    I want to write a script that checks the primary data center by pinging an IP every 5 or 10 minutes.
    If that IP does not respond, the script should perform a forced manual failover of the availability group (SQL Server) and the WSFC (Windows Server Failover Cluster).
    Can you please guide me on writing a script for automatic failover in case of a primary data center outage?

    You are trying to implement manually what should be happening automatically in the cluster. If the primary SQL Server becomes unavailable in the data center, it should fail over to the secondary SQL Server automatically.  Is that not working?
    You also might want to run this configuration by some SQL experts.  I am not a SQL expert, but if you have both hosts in the data center in a cluster, there is no need for replication between those two nodes as they would be accessing
    the database from some form of shared storage.  Then it looks like you are trying to implement Always On to the DR site.  I'm not sure you can mix both types of failover in a single configuration.
    FYI, it would make more sense to establish a file share witness in your DR site instead of placing a third node in the data center for Node Majority quorum.
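    For reference, configuring a file share witness is a one-liner (the share path below is a placeholder):

        # Point the cluster quorum at a file share witness hosted in the DR site
        Set-ClusterQuorum -NodeAndFileShareMajority "\\dr-fs01\ClusterWitness"
        # Confirm the new quorum configuration
        Get-ClusterQuorum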
    . : | : . : | : . tim

  • The lease timeout between availability group and the Windows Server Failover Cluster has expired

    Hi,
    I am having some issues where I get a lease timeout from time to time.  I have a Windows 2012 failover cluster with 2 nodes and 2 SQL 2012 AlwaysOn availability groups.  Both nodes are physical machines and each node is the primary for one AG.
    From what I understand, if the HealthCheckTimeout is exceeded without the signal exchange, the lease is declared 'expired' and the SQL Server resource DLL reports that the SQL Server availability group no longer 'looks alive' to the Windows cluster manager.  Here are the properties I have set up, which are the default settings:
    LeaseTimeout - 20000
    HealthCheckTimeout - 30000
    VerboseLogging - 0
    FailureConditionLevel - 3
    Here are the events that occur in the Application Event Viewer:
    Event ID 19407:
    The lease between availability group 'AG_NAME' and the Windows Server Failover Cluster has expired. A connectivity issue occurred between the instance of SQL Server and the Windows Server Failover
    Cluster. To determine whether the availability group is failing over correctly, check the corresponding availability group resource in the Windows Server Failover Cluster.
    Event ID 35285:
    The recovery LSN (120881:37533:1) was identified for the database with ID 32. This is an informational message only. No user action is required.
    SQL Server logs are too long to post in this box, but I can send them if you request.
    The AG is set up to fail over automatically, but it did not fail over.  I am trying to figure out why the lease timed out.  Thanks.

    From what I've been able to find out, this is due to an issue with the procedure sp_server_diagnostics.  It sounds like the cluster is expecting this procedure to regularly log good status "Clean" in the log files, but the procedure is designed not
    to flood the logs with "Clean" messages, so only reports changes, and does not make an entry when the last status was "Clean" and the current status is "Clean".  The result is that the cluster looks to be unresponsive.  However, once it initiates
    the failover, the primary machine responds, since it was never really down, and the failover operation stops.   
    The end result is that there really never is a failover, but the database becomes unavailable for  a few minutes while this is resolved.
    I'm going to try setting the cluster's failure condition level to 2 (instead of 3) and see if that prevents the down time.
    blogs.msdn.com/b/sql_pfe_blog/archive/2013/04/08/sql-2012-alwayson-availability-groups-automatic-failover-doesn-t-occur-or-does-it-a-look-at-the-logs.aspx
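    For anyone wanting to inspect or change these settings from PowerShell, a rough sketch against the AG's cluster resource (the resource name is a placeholder; the same change can also be made with ALTER AVAILABILITY GROUP in T-SQL):

        # Show the current lease/health-check settings on the AG cluster resource
        Get-ClusterResource "AG_NAME" | Get-ClusterParameter
        # Drop the failure condition level from 3 to 2, as discussed above
        Get-ClusterResource "AG_NAME" | Set-ClusterParameter -Name FailureConditionLevel -Value 2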

  • Service Accounts for Reporting Service in SQL Server Failover Cluster setup

    I am setting up 2 Reporting Services (SSRS) instances in a SQL failover cluster (version 2012 SP1) on Windows 2012, as part of a scale-out architecture.
    There are 2 options for configuring the service account for SSRS:
    Option 1) Use domain accounts, as I have done for the DB Engine and SQL Agent.
    Option 2) Accept the default, which is a virtual account for SSRS. Per the documentation URL:
    http://msdn.microsoft.com/en-us/library/ms143504.aspx
    Which is the recommended one? Is it option 2?
    There is a security note on the above URL as well, but it does not clearly state that option 1 is not recommended.
    Security Note:  Always run SQL Server services by using the lowest possible user rights. Use a MSA or  virtual account when possible. When MSA and virtual accounts are not possible, use a specific low-privilege user account or domain account instead
    of a shared account for SQL Server services. Use separate accounts for different SQL Server services. Do not grant additional permissions to the SQL Server service account or the service groups. Permissions will be granted through group membership or granted
    directly to a service SID, where a service SID is supported.
    Thanks very much for your help!

    Hi Luo Donghua,
    In a SQL Server failover cluster instance, personally I think both options can work well. You can use the virtual account for the SQL Server Reporting Services service: virtual accounts in Windows Server 2008 R2 and Windows 7 are managed local accounts that simplify service administration; the virtual account is auto-managed and can access the network in a domain environment.
    Of course, you can also use domain accounts in your clustering. 
    Just make sure your service account is set up properly, or that it is using an appropriate built-in account. For more information, see: http://ermahblerg.com/2012/11/08/cluster-ssrs-in-2008/
    Thanks,
    Sofiya Li
    Sofiya Li
    TechNet Community Support

  • Failover cluster without replication

    Hello,
    This might be a basic question to many, but I couldn't find a straight answer so ..
    is it possible to create a failover cluster with shared storage and without any replication/copies of the databases?
    i.e.:
    Create two Exchange nodes with two shared LUNs, make each node the owner of one LUN, and have its database stored on it.
    If node1 fails, its LUN and database get mounted on node2, making node2 the host of both databases until node1 is back online.
    If the answer is no, was it possible in 2010?
    Thanks.

    Nothing in Exchange does that. Anything that did would be a 3rd party solution and not supported by Microsoft.
    Twitter!:
    Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • Can a SQL Server Failover Cluster Instance (FCI) be Implemented Between Two Hyper-V Hosted Virtual Machines?

    I haven't had the opportunity to implement a SQL Server Failover Cluster Instance (FCI) for over 10 years and that was done with two physical, identical database servers way back in the day of Windows Server 2003 and SQL Server 2000 (old school).
    Can a SQL Server 2008 R2 Failover Cluster Instance (FCI) be implemented between two Hyper-V hosted virtual machines? The environment in question already has Windows Server 2012 R2 Hyper-V hosts in place, so I'm just looking to see if this is even
    possible and/or supported when utilizing virtual machines.
    The client in question is currently using SQL Server 2008 R2 instances running on Win2008R2, Win2012, and Win2012R2, but I'd also be interested how this can be done or not with SQL Server 2012 or 2014 as well. Thanks in advance.
    Bill Thacker

    Yes, it can be done with Hyper-V guests. In fact, with Windows Server 2012 R2 Hyper-V, guests can use the Shared VHDX feature for shared storage used by Windows clusters. The guests can run Windows Server 2008 and higher provided that the Hyper-V Integration
    Services are installed to support Shared VHDX. The only challenge here is making the Hyper-V hosts highly available as well, running it on WSFC.
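    A minimal sketch of what attaching a shared VHDX to both SQL guest nodes looks like on a 2012 R2 host (paths, sizes and VM names are placeholders):

        # Fixed VHDX on a CSV, attached to both guests with persistent reservations enabled
        New-VHD -Path C:\ClusterStorage\Volume1\SQL\Data01.vhdx -Fixed -SizeBytes 200GB
        foreach ($vm in "SQLNODE1","SQLNODE2") {
            Add-VMHardDiskDrive -VMName $vm `
                -Path C:\ClusterStorage\Volume1\SQL\Data01.vhdx `
                -SupportPersistentReservations
        }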
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • How to Perform Forced Manual Failover of Availability Group (SQL Server) and WSFC (Windows Server Failover Cluster)

    I have a scenario with three nodes running Windows Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans two data centers.
    If the nodes in the primary data center become unavailable due to a data center outage, how can I automatically bring up the node in the WSFC (Windows Server Failover Cluster) in the secondary disaster recovery data center with a script?
    I want to write a script that checks the primary data center by pinging an IP every 5 or 10 minutes.
    If that IP does not respond, the script should perform a forced manual failover of the availability group (SQL Server) and the WSFC (Windows Server Failover Cluster).
    Can you please guide me on writing a script for automatic failover in case of a primary data center outage?

    Please post your question about failover clusters in the cluster forum.  They will explain how this works and point you at scripts.
    You should also look in the Gallery for cluster management scripts.
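    For what it's worth, the forced-failover steps you describe boil down to something like the sketch below (node, instance and AG names are placeholders; forcing quorum and failing over with data loss should only ever be a last resort during a real outage):

        # On the surviving DR node, start the cluster service without quorum
        Start-ClusterNode -Name "DR-NODE1" -ForceQuorum
        # Then force the availability group online on the DR replica, accepting possible data loss
        Import-Module SQLPS
        Switch-SqlAvailabilityGroup `
            -Path "SQLSERVER:\SQL\DR-NODE1\DEFAULT\AvailabilityGroups\AG1" `
            -AllowDataLoss -Force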
    ¯\_(ツ)_/¯

  • The hard drive holding my iTunes library is damaged and I would like to purchase iCloud.  Can I restore files to iCloud without the disk/libraries?

    The hard drive holding my iTunes library is damaged and I would like to purchase iCloud.  Can I restore files to iCloud without the disk/libraries?  Thanks!

    iCloud does not provide a service like Dropbox where you can save random files to it as a way of having external storage.  As for the HD damage, don't you have a backup of the itunes library elsewhere?  If so, copy it to the replacement hard drive, along with any other important files that were on the damaged HD.

  • LUN can't be accessed after moving it to another Hyper-V failover cluster without "remove from cluster shared volumes" on the original cluster

    Hi all,
    I have an old cluster, let's call it cluster01, and a new cluster, cluster02. There is a LUN attached to cluster01 as a CSV volume. I forgot to "remove from cluster shared volumes" in the Failover Cluster console, then powered off cluster01 and attached the LUN to cluster02. Now the LUN can't be accessed in cluster02; it shows as a RAW disk. I tried to attach the LUN back to its original cluster, cluster01, but it can't be read there either.
    Is there any way to get it back?

    Hi Zephyrhu,
    Can you run the Clear-ClusterDiskReservation PowerShell cmdlet and see if that helps?
    http://technet.microsoft.com/en-us/library/ee461016(WS.10).aspx
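    Example usage (the node name and disk number are placeholders; check the disk number with Get-Disk or Disk Management first):

        # Clear the stale SCSI persistent reservation left by the old cluster
        Clear-ClusterDiskReservation -Node "cluster02-node1" -Disk 2 -Force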
    Thanks,
    Umesh.S.K

  • Scale Out File Server for Applications using Shared VHDX

    Just trying to get a definitive answer to the question: can we use a shared VHDX in a SOFS cluster that will itself be used to store VHDX files?
    We have a 2012 R2 RDS Solution and store the User Profile Disks (UPD) on a SOFS Cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS Cluster and wondered if we can use a shared VHDX instead of CSV as the storage
    that will then be used to store the UPDs (one VHDX file per user).
    Cheers for now
    Russell

    Sure you can do it. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
    This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
    The following table describes the physical host prerequisites.
    Cluster type: Scale-Out File Server
    Requirements:
    - At least two servers that are running Windows Server 2012 R2.
    - The servers must be members of the same Active Directory domain.
    - The servers must meet the requirements for failover clustering. For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.
    - The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.
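    Once the SOFS cluster is in place, publishing the UPD share is roughly this (the role name, path and group below are placeholders):

        # Add the Scale-Out File Server role and create a continuously available SMB share
        Add-ClusterScaleOutFileServerRole -Name SOFS01
        New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\UPD
        New-SmbShare -Name UPD$ -Path C:\ClusterStorage\Volume1\UPD `
            -FullAccess "DOMAIN\RDS-Servers" -ContinuouslyAvailable $true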
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current Configuration:
    3-node cluster, using clustered shared storage and about 22 VMs.  The host servers are running 2012 Datacenter while all guests are running 2012 Standard.  The SAN is EqualLogic and we are using HIT Kit 4.5.
    I have a CSV that is running out of space, so I created another CSV to give some of the VMs a new home.  I tested this by creating a test VM and moving it successfully 3 times.  I then moved an actual live VM, and while it seemed to move OK, it will now not start.  The message is "a disk read error occurred Press ctrl+alt+del to restart".  I moved the test VM again and it failed as well.
    I have read several things about this, but nothing seems to relate to my specific issue.  I have verified that VSS is working and free of errors.  From the Settings menu for the VM, if I select "Inspect" on the drive, the properties all look fine.  It is a VHDX, and both the current file size and maximum disk size seem correct.
    The VM's were moved using the "move - virtual machine storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.
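    (For reference, the rough PowerShell equivalent of that FCM storage-move action, with placeholder names:

        # Move only the VM's storage (VHDX, config, snapshots) to another CSV
        Move-VMStorage -VMName "TrendMicroVM" `
            -DestinationStoragePath "C:\ClusterStorage\Volume3\TrendMicroVM"
    )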

    Let's see if I can answer all of those; I appreciate the brainstorming.  This really needs to work correctly.
    1.  The storage is moving.
    2.  VMs and SAN are on the same device.
    3.  No, my Cluster Shared Volume (CSV) is out of room (more on that later).
    4.  No, I actually have 2 SANs grouped together.  However, I'm moving the VMs from one CSV to another CSV on the same SAN.  The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, an EqualLogic PS6010, is not involved in any way except that it is in the same SAN group.
    5.  No errors during the move; it took about 5-10 minutes with no error messages.  Note, I did a test and it worked great 3 times.  Now both a live VM and the test VM are doing the same thing.
    6.  No, the machine is not too large.  The test machine was a 50 GB drive, just 2012 Standard installed with updates.  The live VM was a 75 GB VM that was my Trend Micro server, or anti-virus host.
    7.  Expand the existing CSV?  Yes, I should be able to, but there is an issue there.  The volume was expanded correctly: EqualLogic sees the added space and Failover Cluster Manager sees the added space, however Disk Management only sort of does.  In Disk Management there are 2 areas that tell you about the drive, the top part and the bottom part.  The top part only shows 500 GB, the original size, while the bottom part says it is 1 TB in size.  I called Dell's technical support, and after they looked at it the technician told me they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled CSV.  I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance.  The Trend VM was running on CSV-1 and working fine.
    I must admit that the test VM was on CSV-2.  I moved the test VM from CSV-2 to CSV-3 back and forth several times with no errors.  The Trend server was on CSV-1 and was moved to CSV-3, and it failed.  Again, I then moved the test VM from CSV-2 to CSV-3 and it failed the same way.  I could not test the test VM on CSV-1 because CSV-1 does not have enough space.
    8.  I did disable the network on the VM to see if that mattered; it did not.
    9.   I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully.    Once I am able to test that suggestion I will post the results as well.
    Again, thanks for all the suggestions and comments, as I had rather have lots to look at and try.   I hope I answered them well enough.
    Kenny

  • IIS Web Farm - File Server for Content and Shared Configuration - SOFS?

    We currently have a number of web farms with their content and shared configuration files located on a standalone file server.
    I am looking to utilise the clustered file server role in Windows 2012 to provide improved uptime and load balancing for the configuration and content shares.
    Please could someone provide clarification as to whether SOFS is supported for this scenario?
    There appears to be some conflicting advice in the documentation and across the internet.

    SoFS is a good choice for a) Hyper-V and b) SQL Server (anything working with big files like VHDX and MDF). For the large number of small files IIS keeps in shared directories, SoFS is not going to work, and this is clearly stated in the documentation. See:
    Scale-Out File Server for Application Data Overview
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    Workload                  General use file server    Scale-Out File Server
    Information worker        Yes                        Not recommended
    Hyper-V                   Yes                        Yes
    Microsoft SQL Server      Yes                        Yes
    So in your case an ordinary clustered file server should be enough. See:
    Create a Clustered File Server
    http://technet.microsoft.com/en-us/library/cc753969.aspx
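    A minimal sketch of the general-purpose clustered file server setup (role name, disk, IP, path and group below are placeholders):

        # Create a clustered file server role on an existing WSFC, then share the content folder
        Add-ClusterFileServerRole -Name WEBFS01 -Storage "Cluster Disk 1" -StaticAddress 10.0.0.50
        New-SmbShare -Name WebContent -Path E:\Shares\WebContent -FullAccess "DOMAIN\WebFarmServers"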
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
