Failover cluster without replication

Hello,
This might be a basic question to many, but I couldn't find a straight answer, so here goes:
Is it possible to create a failover cluster with shared storage and without any replication/copies of the databases?
i.e.:
Create two Exchange nodes with two shared LUNs, make each node the owner of one LUN, and store its database on it.
If node1 fails, its LUN and database get mounted on node2, making node2 the host of both databases until node1 is back online.
If the answer is no, was it possible in Exchange 2010?
Thanks.

Nothing in Exchange does that. Anything that did would be a 3rd party solution and not supported by Microsoft.

Similar Messages

  • LUN can't be accessed after moving it to another Hyper-V failover cluster without "Remove from Cluster Shared Volumes" on the original cluster

    Hi all,
    I have an old cluster, let's call it cluster01, and a new cluster, cluster02. There is a LUN attached to cluster01 as a CSV volume. I forgot to "Remove from Cluster Shared Volumes" in the Failover Cluster console, then powered off cluster01 and
    attached the LUN to cluster02. Now the LUN can't be accessed in cluster02; it shows up as a RAW disk. I tried to attach the LUN back to its original cluster, cluster01, but it can't be read there either.
    Is there any way to get it back?

    Hi Zephyrhu,
    Can you run the Clear-ClusterDiskReservation PowerShell cmdlet and see if that helps?
    http://technet.microsoft.com/en-us/library/ee461016(WS.10).aspx
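    If it helps, this is roughly how it is run; the disk number below is a placeholder, so check the actual number with Get-Disk or Disk Management first:
    # Run on a cluster02 node that can see the LUN (FailoverClusters module)
    Import-Module FailoverClusters
    # Clear the persistent SCSI reservation left behind by cluster01
    # (replace 5 with the disk number reported by Get-Disk / Disk Management)
    Clear-ClusterDiskReservation -Disk 5
    If the disk still shows as RAW after the reservation is cleared, the volume itself may be damaged and you would be looking at recovery rather than a cluster fix.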
    Thanks,
    Umesh.S.K

  • File Server Failover Cluster without shared disks

    I have two servers that I wish to cluster as my Hyper-V hosts and also two file servers, each with 10 x 4TB SATA disks. Everything I have read about implementing high availability at the storage level involves clustering the file servers (e.g. SOFS), which requires external
    shared storage that the servers in the cluster can access directly. I do not have external storage and do not have the budget for it.
    Is it possible to implement some form of HA with Windows Server 2012 R2 file servers without shared storage? For example, is it possible to cluster the servers and have data on one server mirrored in real time to the other server, such that if one server goes
    down, the other server takes over serving storage requests using the mirrored data?
    I intend to use the storage to host VMs for a Hyper-V failover cluster and a SQL Server cluster. They will access the shares on the file servers through SMB.
    Each file server also has a 144GB SSD; how can I use it to improve performance?

    There are two ways for you to go:
    1) Build a cluster without shared storage using Microsoft's upcoming version of Windows (yes, they finally have that feature and tons of other cool stuff). We've recently built both a Scale-Out File Server serving a Hyper-V cluster and a standard general-purpose File Server
    cluster with this version. I'll blog the edited content next week (you can drop me a message to get drafts right now), or you can use Dave's blog; he was the first person I know of who built it and posted about it, see:
    Windows Server Technical Preview (Storage Replica)
    http://clusteringformeremortals.com
    The feature you should be interested in is Storage Replica; there is a command-line sketch at the end of this reply. The official guide is here:
    Storage Replica Guide
    http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
    It replicates a volume between the two servers so that the surviving node can bring the data online.
    Just be aware: the feature is new and the build is a preview (not even a beta), so failover does not happen transparently (even with the CA feature of SMB 3.0 enabled). However, I think tuning timeouts and improving I/O performance will fix that. SOFS failover is transparent
    even right away.
    2) If you cannot wait 9-12 months from now (hoping Microsoft does not delay the release) and you're not happy with the very basic functionality Microsoft has put there (active-passive design, no RAM cache, a requirement for separate storage, system/boot and dedicated
    log disks where SSD is assumed), you can get more advanced functionality with third-party software.
    It will basically "mirror" some part of your storage (it can even be a directly-accessed file on your only system/boot disk) between hypervisor or plain Windows hosts, creating a fault-tolerant and distributed SAN volume with optimal SMB3/NFS shares.
    For more details see:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san/
    There are other vendors who do similar things, but since you want a file server (no virtualization?), most of them are Linux/FreeBSD/Solaris-based and run inside a VM, so they are out and you need to look at native Windows implementations. Look at SteelEye DataKeeper (that's
    Dave, who blogged about the Storage Replica file server) and DataCore.
    Good luck :) 
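    As mentioned above, a rough Storage Replica sketch; the server names, volumes and replication-group names below are made up, and the log volumes are assumed to be the dedicated SSD-backed disks the guide calls for (replication is synchronous by default):
    # Storage Replica sketch (StorageReplica module, Windows Server Technical Preview and later)
    # FS01/FS02, D: (data), L: (log) and the rg* names are placeholders
    New-SRPartnership -SourceComputerName "FS01" -SourceRGName "rg01" `
        -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
        -DestinationComputerName "FS02" -DestinationRGName "rg02" `
        -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"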
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Cannot add multiple members of a failover cluster to a DFSR replication group

    Server 2012 RTM. I have two physical servers, in two separate data centers 35 miles apart, with a GbE link over metro fibre between them. Both have large (10TB+) local RAID storage arrays, but given the physical separation there is no shared physical storage.
    The hosts need to be in a Windows failover cluster (WSFC), so that I can run high-availability VMs and SQL Availability Groups across these two hosts for HA and DR. VM and SQL app data storage is using a SOFS (scale out file server) network share on separate
    servers.
    I need to be able to use DFSR to replicate multi-TB user data file folders between the two local storage arrays on these two hosts for HA and DR. But when I try to add the second server to a DFSR replication group, I get the error:
    The specified member is part of a failover cluster that is already a member of the replication group. You cannot add multiple members for the same cluster to a replication group.
    I'm not clear why this has to be a restriction. I need to be able to replicate files somehow for HA & DR of the 10TB+ of file storage. I can't use a clustered file server for file storage, as I don't have any shared storage on these two servers. Likewise
    I can't run an HA single DFSR target for the same reason (no shared storage) - and in any case, this doesn't solve the problem of replicating files between the two hosts for HA & DR. DFSR is the solution for replicating file storage across servers with
    non-shared storage.
    Why would there be a restriction against using DFSR between multiple hosts in a cluster, so long as you are not trying to replicate folders in a shared storage target accessible to both hosts (which would obviously be a problem)? So long as you are not replicating
    folders in c:\ClusterStorage, there should be no conflict. 
    Is there a workaround or alternative solution?

    Yes, I read that series. But it doesn't address the issue. The article is about making a DFSR target highly available. That won't help me here.
    I need to be able to use DFSR to replicate files between two different servers, with those servers being in a WSFC for the purpose of providing other clustered services (Hyper-V, SQL availability groups, etc.). DFSR should not interfere with this, but it
    is being blocked between nodes in the same WSFC for a reason that is not clear to me.
    This is a valid use case and I can't see an alternative solution in the case where you only have two physical servers. Windows needs to be able to provide HA, DR, and replication of everything - VMs, SQL, and file folders. But it seems that this artificial
    barrier is causing us to need to choose either clustered services or DFSR between nodes. But I can't see any rationale to block DFSR between cluster nodes - especially those without shared storage.
    Perhaps this blanket block should be changed to a more selective block at the DFSR folder level, not the node level.

  • Monitor Replication Health in Failover Cluster Manager?

    We have two Hyper-V 2012 R2 clusters.
    One cluster contains all production VMs. The second cluster is at another location and is intended only for HA purposes. All production VMs from the first cluster should be replicated to the other cluster.
    What is the best way to monitor the replication health of all VMs? In Failover Cluster Manager, it's not possible to add a "replication health" column to the "Roles" window, and it's not practical to click on every VM to view the replication
    health. In Hyper-V Manager, it's easy to add "replication health" to the virtual machine window, but clicking on every host to verify the replication health is not very practical either.
    Thank you in advance for any hint (I know that we could use SCOM or something similar, but these complex tools are out of scope).
    Franz

    Hi FranzSchenk
    I think the only real option open to you is the use of PowerShell in your situation. Have a look at this link and see if this is the sort of thing that will help you.
    http://www.serverwatch.com/server-tutorials/checking-hyper-v-replication-health-using-powershell-cmdlets.html
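    As a hedged sketch of the sort of thing that article describes (the cluster name below is a placeholder, and the FailoverClusters and Hyper-V modules are assumed to be available):
    # Pull replication status for every replicated VM on every node of the cluster
    Get-ClusterNode -Cluster "CLUSTER01" |
        ForEach-Object { Measure-VMReplication -ComputerName $_.Name }
    Filtering that output on the health column then gives a quick cluster-wide view without clicking through each VM.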
    Kind Regards
    Michael Coutanche

  • Transactional replication from a failover cluster instance to a SQL Server Express DB

    Hello,
    I have been poking around on Google trying to understand if there are any gotchas in configuring transactional replication on an instance database of a failover cluster, to a SQL Server Express database. Also, this client would like to replicate a set of tables between
    two instance databases which both reside on nodes of the cluster.
    Everything I've read suggests there is no problem using transactional replication on a clustered instance as long as you use a shared snapshot folder. I still have some concerns:
    1) Does the distributor need to live on a separate instance?
    2) What happens in the event of an automatic or manual failover of a publisher, especially if the distributor does not live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped, and this seems like a recipe for
    inconsistency between the publisher and subscriber.
    There is a paramount concern, that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
    Thanks in advance.

    1) Does the distributor need to live on a separate instance?
    Answer: It is recommended to configure the distributor on a different server, but it can also be configured on the publisher/subscriber server. (Using the subscriber is not possible in our case as it is an Express edition.)
    2) What happens in the event of an automatic or manual failover of a publisher, especially if the distributor does not live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped, and this seems like a recipe for
    inconsistency between the publisher and subscriber. There is a paramount concern that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
    Answer: If you configure both the publisher and distributor on the same server and the SQL instance fails over, data synchronization/replication is suspended until the instance comes online.
    Once the instance is up, all the replication jobs will start again and replication will continue to synchronize the data to the subscriber. No manual intervention is required.
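    If it helps, a quick way to confirm the replication agent jobs came back after a failover is to list the REPL-* job categories from msdb; a rough sketch (the instance name is a placeholder, and Invoke-Sqlcmd needs the SQL Server PowerShell module):
    # List replication agent jobs and whether they are enabled on the clustered instance
    $replJobs = "SELECT j.name, c.name AS category, j.enabled " +
        "FROM msdb.dbo.sysjobs j JOIN msdb.dbo.syscategories c ON c.category_id = j.category_id " +
        "WHERE c.name LIKE 'REPL-%' ORDER BY c.name, j.name;"
    Invoke-Sqlcmd -ServerInstance "SQLFCI\INSTANCE" -Query $replJobs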

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    It might sound strange, but I need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG. So, I guess I am not understanding
    the question.
    Is there an application agent which demands to have a network resource in the RG? Sometimes
    the VALIDATE method of such agents refuses to work if there is no network resource in
    the RG.
    If so, tell us a bit more about the application. Is this GDS based and generated by
    the Sun Cluster Agent Builder? The Agent Builder has an option of "non Network Aware"; if you
    select that while building your app, it ought to work without a network resource in the RG.
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING
    you to create a LogicalHostname?
    HTH,
    -ashu

  • Dedicated network for AlwaysON replication traffic when a replica is a Failover Cluster Instance

    Hi,
        We are planning to setup dedicated network for our Availability Group replication traffic. We have a Failover Cluster Instance as the primary replica and a standalone SQL server instance as the secondary. 
        I understand that we will need to manually configure the database mirroring endpoints on both the replicas to listen on the specific IP. 
       But how do I configure the database mirroring endpoint on the Failover Cluster Instance ?
    Please help.
    Thanks and Regards,
    Jisha

    If you have a dedicated network for your Availability Group replication traffic between the FCI and the standalone instance, you need to identify if there will be other network services included in the mix. For example, your public network is already using
    its own DNS server by virtue of Active Directory integration. Your dedicated network for replication traffic may or may not have its own DNS server, so configuring the endpoints would involve using either IP addresses like the one highlighted in the
    blog post, or a hosts file with fully qualified domain names that you can use when creating the endpoints.
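    On the FCI itself, the endpoint piece ends up as plain T-SQL run against the virtual server name; a hedged sketch (the instance name, port and 10.10.10.5 address are placeholders for whatever your dedicated replication network uses, and an existing Hadr_endpoint would need to be dropped or altered first):
    # Recreate the mirroring/AG endpoint so it listens only on the replication IP
    $createEndpoint = "CREATE ENDPOINT [Hadr_endpoint] STATE = STARTED " +
        "AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (10.10.10.5)) " +
        "FOR DATABASE_MIRRORING (ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL);"
    Invoke-Sqlcmd -ServerInstance "FCI-NETNAME\INSTANCE" -Query $createEndpoint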
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master

  • Is there any way to enable event log replication between two nodes in a Windows 2008 failover cluster?

    Is there any way to enable event log replication between two nodes in a Windows 2008 failover cluster?
    Thanks, Azam

    Hi,
    As far as I know there is no event log replication function between failover cluster nodes. If you want unified log management, you can refer to the following related
    article:
    Configure Computers to Forward and Collect Events
    http://technet.microsoft.com/en-us/library/cc748890.aspx
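    At a high level the setup is just a couple of commands plus a subscription; a rough sketch (run from an elevated prompt on the relevant nodes):
    # On the collector node: enable and configure the Windows Event Collector service
    wecutil qc
    # On each source node: make sure WinRM is listening so the collector can pull events
    winrm quickconfig
    # The subscription itself is then created in Event Viewer (Subscriptions) or
    # imported from an XML definition with: wecutil cs <subscription.xml>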
    Hope this helps.

  • Replication solution for MSCS Failover Cluster?

    I am setting up a multi-site Windows failover cluster (Windows Server 2008 R2) to be used for a print services cluster. My sites are connected via an MPLS network, but I've run into a bit of a snag: I need to have shared storage (a cluster disk), and since it spans multiple sites, it is going to have to be two volumes with replication between them that complies with MSCS (removing options like VMware replication and MS DFS). Each location has SAN storage, but the arrays are from different vendors and cannot replicate to each other at that level, so I am looking for a (free or low-cost) software solution that I can put inside a VM at each location to serve as my cluster disk.
    Any suggestions are greatly appreciated, thanks.

    That is definitely the plan, just a matter of finding the best software solution to achieve that. I imagine the StarWind free version is an option...

  • SQL Server Agent fails to connect to DB after enabling mirror on failover cluster

    Hello:
    We have multiple databases running in a failover cluster instance: SQL Server 2012 SP1 on a Server 2008 R2 failover cluster (NOT AlwaysOn). We are trying to add a high-performance mirror on a standalone instance for DR. My understanding is that this should be a perfectly
    normal, supported configuration.
    The mirroring is working properly; however, the clustered SQL Server agent is unable to run jobs that run in the mirrored databases.
    We get the following in the job log: Unable to connect to SQL Server 'VIRTUALSERVERNAME\INSTANCE'.  The step failed.
    There is a partner message in the agent log: [165] ODBC Error: 0, Connecting to a mirrored SQL Server instance using the MultiSubnetFailover connection option is not supported. [SQLSTATE IMH01]
    The cluster is not a multisubnet cluster. All hosts are connected to the same subnets and there is no storage replication. I cannot find any place where I can adjust the connection string options for SQL Agent.
    Any guidance or suggestions on how to resolve this would be appreciated.
    ~joe

    SQL Team - MSFT:
    Thank you for taking the time to research and provide a clear answer.
    This seems very much a workaround and very unsatisfactory.
    You are correct, there is an IP dependency with OR condition. Moving to an AND condition is not viable for us. The whole point is to provide network redundancy. With an AND condition, if EITHER network interface fails, the service will go offline or fail
    to come online without manual intervention. This is arguably worse for uptime than having a single interface available.
    We are in the process of rewriting all our SQL jobs to start in tempdb before transitioning to the appropriate target database. If this works for all of our jobs, I will mark the above response as answer.
    Again, thank you for the answer.
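    For anyone who hits this later, the name registration settings the reply refers to are easy to inspect from PowerShell; a small sketch (the resource name is a placeholder for whatever your SQL network name resource is called):
    # List the private properties of the clustered SQL network name,
    # including RegisterAllProvidersIP and HostRecordTTL
    Get-ClusterResource "SQL Network Name (VIRTUALSERVERNAME)" | Get-ClusterParameter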
    Regards,
    Joe M.

  • Failover cluster installation

    Hello,
    I would like to install SAP ECC 6.0 EhP4 from a system copy (backup/restore) in a Windows failover cluster environment with two nodes.
    So, I have installed MS SQL Server 2008 R2 SP2 in cluster mode (using the virtual host FRSERVERSQL).
    Now, I have to install SAP ECC. (A virtual host has been created : FRSERVERSAP)
    I have manually created the SAP SID service in the "Failover Cluster Manager" Windows console.
    Inside this service, I have created :
    - IP resource
    - Shared folder (sapmnt : E:\usr\sap)
    - Disk resource (E:)
    I have created saploc shared folder (c:\usr\sap)
    Following the install guide and some SCN threads, I have doubts about the way to install SAP in a Windows failover cluster environment.
    I found this way on SCN :
    1. Central services instance for ABAP installation.
      Use the command line sapinst.exe SAPINST_USE_HOSTNAME=FRSERVERSAP
      Note - during the installation time all the cluster resource should be in Node 1
    2 First MSCS node installation => on Node I? Do I need to use SAPINST_USE_HOSTNAME=FRSERVERSAP ???
    3 Database Instance Installation => Do I need to use SAPINST_USE_HOSTNAME? If yes, do I use FRSERVERSQL???
    4 Additional or Second MSCS node installation => on Node II? with SAPINST_USE_HOSTNAME?
    5 Enqueue replication server installation for ABAP in Node I
    6 Enqueue replication server installation for ABAP in Node II
    7 Central Instance Installation in Node I
    8 Dialog Instance Installation in Node II
    9 Post processing steps after system copy
    One more question.
    After the installation is OK, if I have to refresh this environment (with a homogeneous system copy - backup/restore),
    which option do I have to use after the restore? Only the database instance installation, or all the options described above?
    Thanks for your help
    Regards

    Hi Durecu
    I will follow the steps mentioned, but my question is about which options to use with sapinst.exe.
    When do I have to use SAPINST_USE_HOSTNAME?
    1. Central services instance for ABAP installation.
      Use the command line sapinst.exe SAPINST_USE_HOSTNAME=FRSERVERSAP
      Note - during the installation time all the cluster resource should be in Node 1
    For the central services instance for ABAP & Java you have to use the virtual host (that is, the MSCS SAP group DNS name and the cluster disk, which you have to create in MSCS beforehand).
    Syntax: " SAPINST.EXE SAPINST_USE_HOSTNAME=<*Virtual Host Name*> "
    2 First MSCS node installation => on Node I? Do I need to use SAPINST_USE_HOSTNAME=FRSERVERSAP ???
    Normal SAPINST.exe Without virtual name
    3 Database Instance Installation => Do I need to use SAPINST_USE_HOSTNAME? If yes, do I use FRSERVERSQL???
    Normal SAPINST.exe Without virtual name
    4 Additional or Second MSCS node installation => on Node II? with SAPINST_USE_HOSTNAME?
    Normal SAPINST.exe Without virtual name
    Regards
    Sriram

  • Failover Cluster Scenario - Suggestions?

    I am attempting to create a 2 node SQL Server 2012 Failover Cluster and am trying to gather information about what might be a better configuration in my particular case.
    This is a home lab environment on a single ESXi host, and as such there really isn't a need for a cluster to begin with, but I am trying to configure things as close to what I would find in a corporate environment as possible, hence me being so picky about
    the configuration.
    The SQL servers themselves will be running a few System Center databases, EPO, Exchange, Lync, and a few more smaller products (different instances will be used as per best practice). However, there will be under 10 users in total in the environment, so
    the load should be insignificant.
    My physical server is rather powerful (2x 6-core Xeon 2.13GHz - 24 vCPUs - 72GB RAM, 8x 900GB SAS disks), so VM resource allocation is not too much of a problem.
    All my storage is local to the ESXi server so the way I see it there are two ways I can do this:
    1. I can create a few shared disks in VMware and assign them directly to the SQL servers and form the cluster from there. These will be seen as local disks by the SQL servers but shared on the VM configuration side.
    The advantage of this configuration is that it is easier to setup. It requires less VMs in comparison to method 2 and less administration overhead as far as the cluster is concerned.
    However, because I am planning on utilizing multiple instances and each instance has to have its own disks (one for the data files, another for the logs, as per best practices), having to manually create disks in ESXi and add them to the two VMs every time
    I want a new instance is a slightly more long-winded way of configuring things than the second method.
    2. Another method would be to create a server as an iSCSI Target Server and present the disks to the SQL servers as iSCSI storage. The SQL servers seeing the storage as iSCSI is a lot more similar to what you'd encounter in a corporate environment, so
    I am leaning toward this currently, as it would allow me to play around with the iSCSI side of it too.
    This would allow me to create just a single volume on the iSCSI Target Server and then multiple iSCSI Target Volumes that get presented to the SQL servers.
    However, there is the obvious single point of failure with having a single iSCSI Target Server, so I'd have to create a failover cluster for that too.
    Using this method would mean that there are 4 servers to manage instead of 2 (and 2 clusters instead of 1) but I think at least on the SQL side it would be a configuration more similar to what a corporate environment would have, and creating more instances
    in SQL would require less steps (no configuration in ESXi required at all, everything done on the iSCSI Target Servers)
    I imagine performance-wise this will not be as good, since all VHDX files (from the iSCSI target server) will be located on the same VMDK and all VMs are on the same host.
    Still, for a lab environment with 10 users, I can't really imagine that being too much an issue to the point where I'd actually notice any issues - happy for someone to tell me otherwise.
    Would appreciate any recommendations possible, depending on recommendations I do have further questions about the actual setup of it, hence raising this as a question (for example, I started creating it as the second method but am having some issues with
    networking on the iSCSI target server cluster).
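    For reference, what I have in mind on the iSCSI Target Server side is roughly the following; the paths, target name and initiator IQNs are just placeholders:
    # Method 2 sketch, run on the iSCSI Target Server VM
    # (needs the role: Install-WindowsFeature FS-iSCSITarget-Server)
    New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\SQL-Instance1-Data.vhdx" -SizeBytes 50GB
    New-IscsiServerTarget -TargetName "SQL-Cluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sqlnode1.lab.local", "IQN:iqn.1991-05.com.microsoft:sqlnode2.lab.local"
    Add-IscsiVirtualDiskTargetMapping -TargetName "SQL-Cluster" -Path "E:\iSCSIVirtualDisks\SQL-Instance1-Data.vhdx"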

    No problem with running your environment on ESX because you are familiar with it.  But one of the reasons Hyper-V is taking market share from VMware is because the old FUD that VMware used to throw around has all been debunked.  Organizations realize
    they are not giving up anything, and are often gaining things, by moving to Hyper-V.
    +100500
    A few reasons (besides the already-mentioned maturity of Hyper-V, which was not the case 4-5 years ago):
    1) Built-in Windows licensing (esp. Datacenter) that makes sense with many VMs
    2) Free features inside free Hyper-V that free (or even paid) ESXi lacks (live migration of VMs, VM replication, etc.)
    3) "One throat to choke" in terms of support for both hypervisor OS and guest VMs OS
    4) MUCH wider HCL for Hyper-V compared to what VMware has
    5) Ability to control "kind of" Windows from a Windows GUI (the same thing happened before when Windows NT 4.0 took over from NetWare 4.x)
    Cheers,
    Anton Kolomyeytsev [MVP]
    StarWind Software Chief Architect

  • Quorum location for Failover cluster file share witness

    So, I've done quite a bit of searching for what I'm about to propose and I've been able to find nothing.
    I currently have a multi-site failover cluster hosting separate 2-node SQL clusters at each site connected through an Availability Group.  For the failover cluster I am using a file share witness hosted on a server at the primary site.  Both sites
    are built entirely on vSphere 5.5 and have full replication for the production servers.  If the primary site goes down (disaster), I'll need to force quorum in the secondary site to a new file share witness.
    Well, I got to thinking...
    Why not just replicate the server hosting the file share?  I completely understand the reasoning behind not putting the file share witness on DFS, but a replicated virtual server, why not?  If the primary site fails, the replicated server hosting
    the file share witness is brought online with the rest of the production servers in the DR site.  In that case, the only thing that changes is the server IP address, but ultimately, the server name and share where quorum is hosted all stays the same.
    Ultimately this prevents needing to find a 3rd geographical/cloud location to host a quorum/witness at.  I can't imagine it's this "simple", but maybe it is.  If this is possible, and there's not something I'm missing, this essentially
    makes the quorum file share witness site agnostic, meaning it could live or be moved anywhere that replication is allowed.
    Ideas, thoughts?  Am I missing something?
    Thanks!
    Chris Miller

    Most likely it would be better to put this question to the High Availability forum -
    https://social.technet.microsoft.com/Forums/en-US/home?forum=winserverClustering
    But, might need a more complete definition of your environment.  Are you saying that you have multiple 2-node clusters, each with one node in the primary and the second node in the DR site?  And you have a single file share server located in primary that
    is used to host the file share witnesses for all these clusters?  You want to use VMware to replicate the file share server to the DR site so that it can be made available should the primary site fail?
    It should work, but it will not be automatic.  After all, the replicated VM will need to be brought online at the DR site so the SQL cluster will recognize it.  That is not an automatic process.  So the cluster will be down until
    you bring the file share server online so it can be recognized.  Not a whole lot different than simply forcing the DR SQL host to run without quorum.
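    If it helps to see it, the "force quorum and re-point the witness" path is only a couple of cmdlets; a rough sketch in which the node and share names are placeholders:
    # 1. Bring the surviving DR node up without quorum
    Start-ClusterNode -Name "DR-NODE1" -FixQuorum
    # 2. Re-point the cluster at a witness share reachable from the DR site
    Set-ClusterQuorum -NodeAndFileShareMajority "\\DR-FS01\ClusterWitness"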
    . : | : . : | : . tim

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application associated with it which can fail-over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a Read-Only copy on its shared cluster
    storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines
    how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient
    cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters
    are supported in replication groups, and a TechNet guide is provided to set up and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification - is the text meant to read "between" Clustered
    Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of
    performance when attempting to replicate / write data between two clusters running an HAFS configuration, in a DFS replication group.
    If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher
    for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage, to another
    shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering, however it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul
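    In case it is useful for anyone reproducing this, we quantify the backlog the same way for each layout under test; a rough sketch in which the replication group, folder and server names are placeholders (Windows Server 2012 R2 DFSR module):
    # Count the pending file updates returned for one member pair of the replication group
    Get-DfsrBacklog -GroupName "RG-LUN01" -FolderName "Share01" `
        -SourceComputerName "QUAD1-NODE1" -DestinationComputerName "QUAD2-NODE1" |
        Measure-Object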

    Hello Shaon Shan,
    I am also having the same scenario at one of my customer's sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
    It's really confusing whether DFS Replication on CSV is supported or not, and what the consequences would be if we use it.
    To my knowledge we have some customers who are using Hyper-V 2008 R2 with DFS configured and running fine on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul
