VSAN or SAN

What are your budget and storage requirements?

Hi all, I need some advice/input before I pull the trigger on potentially buying a SAN. In a really quick nutshell, I am currently running 3 Dell R720xd's as ESXi hosts with vSphere Essentials (no vMotion etc.). The main VMs I am running are 2 DCs, Docs, Exchange, SharePoint, Print, IT Support, Backup, Phone and an RDS farm (10 servers inc. broker and gateway/web), all Server 2012 R2. Currently all storage is local per host – RAID 10 nearline 2TB drives. I back up using Veeam to a Synology DS1513+ in a RAID 5 with WD Reds, then backup copy to an offsite NAS. It's all far from ideal, and capacity is about to increase with users – 110 will be the total. I originally posted a thread here and totally expected to go vSAN, use local storage and keep things simple. My concern was the difference in cost between a 3-host and a 4-host setup - costs increase...
This topic first appeared in the Spiceworks Community

Similar Messages

  • Live Migration : virtual Fibre Channel vSAN

    I can do live migration from one node to another, with no errors. The problem/question I have is whether live migration is really live migration.
    When I do a live migration from the cluster or from SCVMM, it saves and then starts the virtual machine, which for me is not live migration.
    I have described it in more detail here: http://social.technet.microsoft.com/Forums/en-US/a52ac102-4ea3-491c-a8c5-4cf4dd14768d/synthetic-fibre-channel-hba-live-migration-savestopstart?forum=winserverhyperv
    BlatniS

    Virtual Fibre Channel made sense in pre-R2 times, when there was no shared VHDX and you had to somehow provide fault-tolerant shared storage to a guest VM cluster (spawning an iSCSI target on top of FC was slow and ugly). Now there's no point in putting one into
    production, so if you have issues just use shared VHDX. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Shared VHDX is much more flexible and has better performance. 
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
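    For reference, a minimal PowerShell sketch of attaching a shared VHDX to two guest cluster nodes on Server 2012 R2 follows; the VM names and paths are assumptions, not taken from the thread:

        # Create a fixed VHDX on cluster storage and attach it to both guest cluster nodes.
        # -SupportPersistentReservations marks the disk as shared so the guests can arbitrate it;
        # shared VHDX must sit on a CSV (or an SOFS share) and hang off a SCSI controller.
        New-VHD -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' -SizeBytes 100GB -Fixed

        Add-VMHardDiskDrive -VMName 'GuestNode1' -ControllerType SCSI `
            -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' -SupportPersistentReservations
        Add-VMHardDiskDrive -VMName 'GuestNode2' -ControllerType SCSI `
            -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' -SupportPersistentReservations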

  • SAN design : core edge and dual-homing access switch

    Hello all.
    It may sound like a dumb question (from a LAN guy), but when designing a core/edge or edge/core/edge SAN, why do we connect access switches to both core switches? Doesn't that break the isolation of a dual-fabric backbone?
    If an access switch fails, won't the fault (a bug or anything else) propagate to both core switches? Am I wrong?
    Example:
    http://www.cisco.com/en/US/prod/collateral/modules/ps5991/prod_white_paper0900aecd8044c807_ps5990_Products_White_Paper.html
    or from Networkers sessions in 2006

    An answer, also from a LAN guy:
    Most likely this design diagram assumes that there is no use of VSANs or SAN multipathing drivers on the hosts.
    The following is an excerpt from the same link you posted:
    "SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between the host and storage so that I/O uses both paths, and there is non-disruptive failover between fabrics in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide increased physical protection (e.g. protection against a sprinkler head failing above a switch) and protection against equipment failure."

  • Trim with clustered shared volumes and SAN

    Hi all,
    I have a cluster of 4 Hyper-V 2012 R2 hosts. They are connected to a SAN via Fibre Channel.
    My guest machines range from Server 2000 to Server 2012 R2.
    The LUNs provisioned on my SAN (Oracle ZFS3-2 / 7000) are thin provisioned.
    My virtual hard disks are both VHD and VHDX.
    I need to know how to recover the deleted space from my SAN. Today it just keeps building up until it fills up. I have read about TRIM and UNMAP but have yet to find a comprehensive doc where I can explore it in detail. I appreciate any help that comes my way.

    Try running sdelete inside the VM, then shrink or compact the VHD(X). The first step zeroes out unused blocks, and the second de-allocates them at the host level. Both should end up triggering UNMAP on the ZFS-based Oracle storage, and ultimately TRIM. See some links below:
    SDelete Utility
    http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
    Resize-VHD
    http://technet.microsoft.com/en-us/library/hh848535.aspx
    Shrinking a VHDX on a running VM
    http://blogs.msdn.com/b/virtual_pc_guy/archive/2014/01/30/shrinking-a-vhdx-on-a-running-virtual-machine.aspx
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
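    As a rough illustration of that advice, a minimal sketch (drive letters and paths are assumptions): sdelete zeroes free space inside the guest, then the host compacts the virtual disk so the freed blocks can be unmapped on the array.

        # Inside the guest: zero out free space (Sysinternals sdelete)
        sdelete.exe -z C:

        # On the Hyper-V host, with the VM shut down: compact the dynamic VHDX
        Optimize-VHD -Path 'D:\VMs\guest.vhdx' -Mode Full

        # Or shrink the virtual disk itself to its minimum supported size
        Resize-VHD -Path 'D:\VMs\guest.vhdx' -ToMinimumSize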

  • Disk caching on host or guest?

    OK, this is probably a noob question, but if we have 64GB RAM on our HyperV (2008R2) host, and we are running disk intensive software, do we:
    a) Allocate the 'minimum' RAM to the guest, and leave the rest for the host to use for disk caching, or
    b) Allocate the maximum RAM to the guest (leaving 1GB for the host), and let the guest use it for disk caching?
    Allocating half & half would seem to be a waste as they will probably both end up caching the same data (will they?), but it's not clear whether we're best letting the host or the guest do the caching. Or does it actually matter at all?
    I've had a good look around and haven't been able to find any relevant recommendations.
    More Info - the 'disk intensive' software is mainly a PostgreSQL server. We'll give that about 8GB for its shared buffers, but it seems to be recommended to use OS disk caching beyond that. There is a 1GB BBWC P420i RAID controller so write caching is performed
    on that. Currently, our biggest performance bottleneck seems to be due to uncached reads, so we are increasing the host RAM from 16GB to 64GB (and adding an SSD for index storage), but just want to know whether it's best to increase the guest RAM allocation,
    or leave it 'spare' on the host.

    With Windows Server 2008 R2 / Hyper-V 2.0 you don't have that many options, as VHD access is not cached by the host at all... So you'd better allocate more memory to the VM, as I/O will then be cached inside the VM. Windows Server 2012 R2 / Hyper-V 3.0 gives you more
    caching options, including the read-only CSV cache, the flash-based write-back cache that comes with storage tiering, and SMB access that is extensively cached on both the client and server sides. See:
    CSV Cache
    http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx
    Write Back Cache
    http://technet.microsoft.com/en-us/library/dn387076.aspx
    Hyper-V over SMB
    http://technet.microsoft.com/en-us/library/jj134187.aspx
    So it could be a good idea to upgrade to Windows Server 2012 R2 now :)
    You may deploy third-party software to do RAM and flash caching, but think twice, as it can simply be dangerous: lose power or reboot uncleanly and you may lose gigabytes of your transactions...
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
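    If you do move to 2012 R2, the CSV read cache mentioned above is set per cluster; a minimal sketch, where the 4 GB size is just an assumed example carved out of host RAM rather than guest RAM:

        # On any 2012 R2 cluster node: reserve 4 GB of host RAM for the CSV block cache
        (Get-Cluster).BlockCacheSize = 4096

        # Verify the setting
        Get-Cluster | Format-List Name, BlockCacheSize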

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's not cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know about...), and if you're fine with your storage being a single
    point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB does cache I/O on both the client and server sides, and you can also use Storage Spaces as its back end (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was
    useful when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is pointless: just export your existing shared storage without
    any Microsoft iSCSI target in between and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
    would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it's mature :)
    There are other companies doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try too.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
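    If you do go the SMB route suggested above rather than the in-box iSCSI target, a minimal sketch of exposing the local storage to the Hyper-V hosts could look like this (share name, path and account names are assumptions):

        # On the storage box: create an SMB share the Hyper-V hosts can store VMs on.
        # Hyper-V hosts access SMB shares with their computer accounts, hence HOST1$/HOST2$.
        New-SmbShare -Name 'VMStore' -Path 'D:\VMStore' `
            -FullAccess 'DOMAIN\HOST1$','DOMAIN\HOST2$','DOMAIN\Domain Admins'

        # Grant matching NTFS permissions as well (both SMB and NTFS must allow access)
        icacls 'D:\VMStore' /grant 'DOMAIN\HOST1$:(OI)(CI)F'
        icacls 'D:\VMStore' /grant 'DOMAIN\HOST2$:(OI)(CI)F'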

  • File systems available on Windows Server 2012 R2?

    What are the supported file systems in Windows Server 2012 R2? I mean the complete list. I know you can create, read and write FAT32, NTFS and ReFS. What about non-Microsoft file systems, like EXT4 or HFS+? If I create a VM with a Linux OS, will
    I be able to access the virtual hard disk natively from WS 2012 R2, or will I need a third-party tool, like the one from Paragon? If I have a drive formatted in EXT4 or HFS+, will I be able to access it from Windows without any third-party tool? By access it,
    I mean both read and write. I know that on the client OS, Windows 8.1, this is not possible natively, which is why I am asking here; I guess it is quite possible for the server OS to have built-in support for accessing those file systems. Since Hyper-V
    has been optimised to run not just Windows VMs but also Linux VMs, it would make sense to me for file systems like those from Linux or OS X to be available through a built-in feature. I have tried to mount the VHD from a Linux VM I created in Hyper-V;
    Windows Explorer could not read the hard drive.

    I installed Paragon ExtFS free. With it loaded, I tried to mount an ext4-formatted VHD (created on a Linux Hyper-V VM) in Windows Explorer; it failed, and Paragon ExtFS crashed. I uninstalled Paragon ExtFS. The free version was not supported on WS 2012 R2
    by Paragon; if Windows has no built-in support for ext4, I guess this free software at least has not messed anything up in the OS.
    Don't mess with third-party kernel-mode file systems, as that is basically begging for trouble: a crash inside them will BSOD the whole system, and third-party file systems are typically buggy, because a) file-system development for Windows is VERY complex and b) there are very
    few external adopters, so not that many people actually test them. What you can do, however:
    1) Spawn an OS with a supported FS inside a VM and configure loopback connectivity (even over SMB) with your host. You read and write the volume inside the VM and copy content to/from the host.
    (I personally use this approach in the reverse direction: my primary OS is Mac OS X, but I read/write NTFS-formatted disks from inside a Windows 7 VM I run on VMware Fusion.)
    2) Use a user-mode file-system explorer (see the sample links below; I'm NOT affiliated with those projects). You copy content off the volume as if through a shell extension.
    Crashes in 1) and 2) will not touch the stability of your whole OS.
    HFS Explorer for Windows
    http://www.heise.de/download/hfsexplorer.html
    Ext2Read
    http://sourceforge.net/projects/ext2read/
    (both are user-land applications, for HFS(+) and EXT2/3/4 respectively)
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
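    For option 1, a minimal sketch of pulling the data out of the Linux guest over SMB from the Windows host (the share name, host name and user are assumptions, and assume the guest already exports the volume via Samba):

        # Authenticate to the guest's SMB export, then copy the data across
        net use \\linux-vm\export /user:linuxuser
        Copy-Item -Path '\\linux-vm\export\*' -Destination 'D:\Imported\' -Recurse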

  • Why does my 10Gb iSCSI setup see such high latency, and how can I fix it?

    I have an iSCSI server set up with the following configuration:
    Dell R510
    PERC H700 RAID controller
    Windows Server 2012 R2
    Intel Ethernet X520 10Gb
    12 nearline SAS drives
    I have tried both StarWind and the built-in Server 2012 iSCSI software but see similar results. I am currently running the latest version of StarWind's free iSCSI server.
    I have connected it to an HP 8212 10Gb port, which is also connected via 10Gb to our VMware servers. I have a dedicated VLAN just for iSCSI and have enabled jumbo frames on the VLAN.
    I frequently see very high latency on my iSCSI storage, so much so that it can time out or hang VMware, and I am not sure why. I can run Iometer and get some pretty decent results.
    I am trying to determine why I see such high latency (100+ ms). It doesn't always happen, but several times throughout the day VMware complains about the latency of the datastore. I have a 10Gb iSCSI connection between the servers and wouldn't expect the disks to be able to max that out; the highest I could see when running Iometer was around 5Gb. I also don't see much load
    at all on the iSCSI server when I see the high latency. It seems network related, but I am not sure what settings I could check. The 10Gb connection should be plenty, as I said, and it is nowhere near maxed out.
    Any thoughts on configuration changes I could make to my VMware environment or network card settings, or any ideas on where I can troubleshoot this? I am not able to find what is causing it. I referenced this document for changes to my iSCSI settings:
    http://en.community.dell.com/techcenter/extras/m/white_papers/20403565.aspx
    Thank you for your time.

    If both the StarWind and MSFT targets show the same numbers, my guess is a network configuration issue. Anything higher than 30 ms is a nightmare :( Did you properly tune your network stacks? What numbers (throughput and latency) do you get for raw TCP (NTttcp
    and iperf are handy for showing this)?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
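    To get those raw TCP numbers, a minimal sketch between the storage server and another Windows box on the same iSCSI VLAN (the 10.0.0.x address is an assumption; ESXi itself would need iperf instead of NTttcp). Run the receiver side first.

        # Receiver (on the iSCSI target box), 8 threads for 30 seconds
        ntttcp.exe -r -m 8,*,10.0.0.10 -t 30

        # Sender (on the test box on the initiator side of the VLAN)
        ntttcp.exe -s -m 8,*,10.0.0.10 -t 30

        # Also confirm jumbo frames survive end-to-end (8000-byte payload, don't fragment)
        ping 10.0.0.10 -f -l 8000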

  • Best practices for setting up virtual servers on Windows Server 2012 R2

    I am creating a web server from scratch with Windows Server 2012 R2. I expect to have a host server and then 3 virtual servers: one that runs all of the web apps as a web server, another as a database server, and then one for session state. I
    expect to use Windows Server 2012 R2 for the web server and database server, but Windows 7 for the session state.
    I have a SATA2 Intel SROMBSASMR RAID card with battery backup, to which I am attaching a small SSD drive that I expect to use for session state, and an IBM ServeRAID M1015 SATA3 card on which I am running Intel 520 Series SSDs that I expect to
    use for the web server and database server.
    I have some questions. I am considering using the internal USB port with a flash drive to boot the host off of, then using two small SSDs in a RAID 0 for the web server (the theory being that if something goes wrong, session state is on a different drive), and
    then 2 more for the database server in a RAID 1 configuration.
    Please feel free to poke holes in this and tell me of a better way to do it.
    I am assuming that having the host running on a slow internal USB drive has no effect on the virtual servers once the host and the virtual servers are booted up?
    DCSSR

    There are two issues with RAID0:
    1) It's not as fast as people think. With a general-purpose file system like NTFS or ReFS (the choice on Windows is limited) you're not going to see great benefit, as there is very little chance a whole RAID stripe gets updated at the same time (an I/O
    needs to touch all SSDs in the set, so 256KB+ in real life). A web server workload is quite far from sequential reads or writes, so RAID0 is not going to shine here. A log-structured file system (or at least a file system with logging capabilities, think ZFS
    with a ZIL enabled) *will* benefit from SSDs in a properly assigned RAID0.
    2) RAID0 is dangerous. One lost SSD renders the whole RAID set useless. So unless you build a network RAID1-over-RAID0 (mirror the RAID sets between multiple hosts with a virtual SAN or a synchronous replication solution), you'll be sitting on a time bomb.
    Not good :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Server 2012 Failover Cluster No Disks available / iSCSI

    Hi All,
    I am testing out the Failover Clustering on Windows Server 2012 with hopes of winding up with a clustered File Server once I am done. 
    I am starting with a single node in the cluster for testing purposes; I have connected to this cluster a single iSCSI LUN that is 100GB in size.
    When I right-click on Storage -> Disks and then click 'Add Disk', I get "No disks suitable for cluster disks were found."
    I get this, even if I add a second server to the cluster, and connect it to the iSCSI drive as well.
    Any ideas?

    For testing purposes you'd be better off spawning a set of VMs on a single physical Hyper-V host and using a shared VHDX as the backing clustered storage. That would be both much easier and much faster than what you're doing now. Plus, it would be trivial to move one of the VMs to another physical
    host and the shared VHDX to a CSV on shared storage, and go from test & development to production :) See:
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Virtual File Server with Shared VHDX
    http://www.aidanfinn.com/?p=15145
    Guest VM Cluster with Shared VHDX
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    For a pure iSCSI scenario you may try this step-by-step guide (just skip the StarWind config, as you already have shared storage on your SAN). See:
    Configuring HA File Server on Windows Server 2012 for SMB NAS
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
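    If you stay with the iSCSI approach, it also helps to check what the cluster can actually see before clicking 'Add Disk'; a minimal sketch, assuming the Failover Clustering and Storage PowerShell modules on Server 2012:

        # A candidate disk must be visible to every node, typically online and initialized
        # (MBR or GPT) on one node, and not already clustered. List what the cluster
        # considers eligible:
        Get-ClusterAvailableDisk

        # If nothing shows up, inspect the iSCSI disk itself on each node
        Get-Disk | Format-Table Number, FriendlyName, BusType, OperationalStatus, PartitionStyle

        # Once a disk is eligible, add it to the cluster
        Get-ClusterAvailableDisk | Add-ClusterDisk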

  • Best Practice for General User File Server HA/Failover

    Hi All,
    Looking for some general advice or documentation on recommended approaches to file storage. If you were in our position, how would you approach adding more robustness to our setup?
    We currently run a single 2012 R2 VM with around 6TB of user files and data. We deduplicate the volume and use quotas.
    We need a solution that provides better redundancy than a single VM. If that VM goes offline, how do we maintain user access to the files?
    We use DFS to publish file shares to users and machines.
    Solutions I have researched, with potential drawbacks:
    Create a guest VM cluster and use a Continuously Available file share (not SOFS)
     - This would leave us without support for deduplication. (We get around 50% savings atm and space is tight.)
    Create a second VM and add it as a secondary DFS folder target, configuring replication between the two servers
     - Is this the preferred enterprise approach to share availability? How will hosting user shares (documents etc...) cope in a replicated environment?
    Note: we have run a physical clustered file server in the past with great results, except for the ~5 minutes of downtime when a failover occurs.
    Any thoughts on where I should be focusing my efforts?
    Thanks

    If you care about performance and real failover transparency, then a guest VM cluster is the way to go (compared to DFS, of course). I don't get your point about "no deduplication": you can still use dedupe inside your VM, you just have to make sure you "shrink" the VHDX
    from time to time to give space back to the host file system. See:
    Using Guest Clustering for High Availability
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
    Can't shrink VHDX file after applying deduplication
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
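    To illustrate the "dedupe inside the guest" point, a minimal sketch run inside the guest file server VM (the E: data volume is an assumption):

        # Inside the guest file server VM: install and enable Data Deduplication on the data volume
        Install-WindowsFeature -Name FS-Data-Deduplication
        Enable-DedupVolume -Volume 'E:'
        Start-DedupJob -Volume 'E:' -Type Optimization

        # Check the savings once the job has run
        Get-DedupStatus -Volume 'E:'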

  • NFS Sharing tab is not found, NFS share provisioning fails with an error, and NFS sharing via the command line fails on Windows 2008 R2

    Hi All,
         Operating System: Windows 2008 R2 Enterprise SP1
    H/w:  VMware virtual machine vmx 09
    Installed: File server role with "Services for network file system"
    The Server for NFS and Client for NFS services are up and running.
    But when I try to create an NFS share using the folder properties, the NFS Sharing tab is missing.
    I tried to provision an NFS share using Server Manager: "New NFS share folder cannot be created".
    Noticed Event ID 1015 from NfsServer: "Server for NFS was unable to validate licensing information at this time, the server will be nonfunctional until this information can be validated."
    Tried to create the NFS share using the command line, but that too failed.
    Request all to kindly assist me in isolating and fixing this issue.
    Thank you so much
    Shaji P.K.
    Server for NFS was unable to validate licensing information at this time, the server will be nonfunctional until this information can be validated

    The Windows NFS server is weak (especially on 2008 R2), and keeping in mind that you run all of this hosted on VMware ESXi, the best thing you can do is stop using Windows as an NFS server completely and spawn a FreeBSD or Linux VM with a decent, recent NFS server
    implementation (or Samba, if SMB shares would also suit your clients).
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Hyper-V Disk Layout and RAID for Essentials Server 2012 R2 with Exchange

    Hi, would a RAID mirror be a good enough configuration for a single server running both Essentials Server 2012 and Exchange 2013? I'm new to Exchange and looking for input and suggestions.
    The user load is very light, just 5 users.
    Boot disk: 120GB SSD on its own controller, running 2012 with Hyper-V installed.
    2 x 3TB in RAID 1 on an LSI 1064e RAID controller for Essentials Server 2012; the disks are fixed-size VHDX, two each.
    2 x 2TB in RAID 1 on an LSI 1064e RAID controller for the 2012 R2 server; the disks are fixed-size VHDX, two each.
    System specs: 2 x AMD Opteron 4122 with 32GB of RAM, 4 cores to each OS.
    Andy A

    1) Boot Hyper-V from cheap SATA or even USB stick (see link below). No point in wasting SSD for that. Completely agree with Eric. See:
    Run Hyper-V from USB Flash
    http://technet.microsoft.com/en-us/library/jj733589.aspx
    2) Don't use the RAID controllers in RAID mode; rather (as you already have them), stick with HBA mode and pass the disks through as-is, add some more SSDs, configure Storage Spaces in a RAID10-equivalent (mirrored) mode and use the SSDs as a flash cache. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    Having a single pool touching all spindles gives you better IOPS than creating "islands of storage", which waste performance and are a management headache.
    3) Come up with IOPS requirements for your workload (there's no way to tell from the description above), keeping in mind that RAID10 delivers all of its IOPS for reads but only half for writes (because of mirroring). A single SATA disk can do maybe 120-150 IOPS and a single SAS disk up to 200 (you
    don't provide model names, so we have to guess), so you can calculate how many IOPS your config would deliver in the best and worst cases; for example, four SATA disks in a mirrored layout give roughly 4 x 125 = 500 read IOPS but only around 250 write IOPS. The write-back cache from above will help, but you should always plan from the WORST case. See the calculator link below.
    IOPS Calculator
    http://www.wmarow.com/strcalc/
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
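    As a rough sketch of point 2, pooling the pass-through disks with Storage Spaces might look like this (pool and disk names are assumptions; tiering with the SSDs would be an additional step on 2012 R2):

        # Pool every disk that is eligible (not the boot disk, no existing partitions)
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName 'Pool1' `
            -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
            -PhysicalDisks $disks

        # Carve a mirrored (RAID10-equivalent) space out of the pool
        New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'VMStore' `
            -ResiliencySettingName Mirror -UseMaximumSize

        # Bring it online as an NTFS volume
        Get-VirtualDisk -FriendlyName 'VMStore' | Get-Disk |
            Initialize-Disk -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem NTFS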

  • What is correct method to deploy cluster aware technology using HA VMs?

    Dear all, 
    I recently had experience creating Hyper-V Server 2012 cluster. This allows for deploying a highly available virtual machine. That's fine. The business machine (virtual machine) becomes highly available. This includes an existing VM enabled to be highly
    available or a new VM installed into cluster from scratch.
    On the other hand; we have cluster aware applications (SQL Server, SCVMM etc.) which are installed in clustered OS (Windows 2008 R2 Enterprise edition which has failover clustering service) . 
    Just for clearing concept; what is correct way of deploying a cluster aware technology (SQL Server, SCVMM) in the scenario where the underlying OS; running in VM(s) can be made highly available. 
    Method 1:
    Create a simple non-clustered VM and install the cluster-aware application (e.g. SQL Server). Make this VM highly available using the Hyper-V cluster. (This seems to cluster the VM running the cluster-aware application, not the cluster-aware application itself, which
    is what requires clustering.)
    Method 2:
    Create an HA-enabled VM on the Hyper-V Server cluster and install the cluster-aware application within this HA-enabled VM. (Again, the underlying OS/VM is clustered first; how would the cluster-aware application (SQL Server or others) leverage
    the cluster?)
    Please shed light on which method is correct. In both cases it seems the VM running the cluster-aware application is made highly available, i.e. leveraging clustering. What about clustering the application
    itself? The objective is not only to make the VM highly available, but also to deploy clustered SQL Server or another cluster-aware technology using such HA VMs.
    Regards, 
    Shahzad.

    With SQL Server, neither M1 nor M2 is the best solution. Even a guest VM cluster is non-optimal, as SQL Server works better with its own clustering features (AlwaysOn, see link below). And Hyper-V HA would make the VM re-boot on another physical host, so there would be
    both downtime and potential data loss. Run SQL Server in a pair of VMs on different physical hosts, configure AlwaysOn (use a failover SMB share as the witness), and you'll be fine. See:
    Overview of AlwaysOn Availability Groups (SQL Server)
    http://technet.microsoft.com/en-us/library/ff877884.aspx
    How to Build SQL Server 2012 AlwaysOn Hyper-V Virtual Machines
    http://social.technet.microsoft.com/wiki/contents/articles/6198.how-to-build-sql-server-2012-alwayson-hyper-v-virtual-machines-for-demos-emu-build.aspx
    SQL Server 2012 AlwaysOn High Availability and Disaster Recovery Design Patterns
    http://blogs.msdn.com/b/sqlcat/archive/2013/11/20/sql-server-2012-alwayson-high-availability-and-disaster-recovery-design-patterns.aspx
    Also, the best place to ask about SQL Server high availability is the dedicated MSFT group here:
    SQL Disaster Recovery Forum
    http://social.technet.microsoft.com/Forums/sqlserver/en-US/home?forum=sqldisasterrecovery
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
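    As a small illustration of the AlwaysOn route (the instance names are assumptions, and this only covers the enablement step; the availability group itself is then created in SSMS or with the New-SqlAvailabilityGroup cmdlets):

        # Requires the SQLPS module from SQL Server 2012+ and a WSFC cluster spanning the two VMs
        Import-Module SQLPS -DisableNameChecking

        # Turn on AlwaysOn Availability Groups on each instance (this restarts the SQL service)
        Enable-SqlAlwaysOn -ServerInstance 'SQLVM1' -Force
        Enable-SqlAlwaysOn -ServerInstance 'SQLVM2' -Force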

  • iSCSI MPIO: how many paths do I need to create?

    Hi,
    I have a server with 4 NICs connecting to a Dell MD32xx, which has 8 NICs.
    My question is how many paths I need to create under the iSCSI connection.
    Do I need to create a path from each server NIC to each MD32xx NIC, which would make 32 connections (and doesn't make sense)?
    If not, how should I proceed? I've looked at many examples and none seem to cover this kind of situation; they just connect the server NICs directly to the MD32xx NICs instead of going through a switch for redundancy.
    Thanks
    ML

    Please follow the guides and discussions here:
    Windows MPIO Setup for Dual Controller SAN
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3fa0942e-7d07-4396-8f2e-31276e3d6564/windows-mpio-setup-for-dual-controller-san?forum=winserverfiles
    That thread covers an MD3260, but the same MPIO approach applies to MD32xx storage units with iSCSI uplinks.
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
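    As a minimal sketch of the Windows-side MPIO setup that guide walks through (defaults assumed; the array-specific path and session layout should still follow Dell's MD32xx documentation):

        # Install the MPIO feature, let the in-box DSM claim iSCSI LUNs, and default to round-robin
        Install-WindowsFeature -Name Multipath-IO
        Enable-MSDSMAutomaticClaim -BusType iSCSI
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

        # Verify which devices MPIO has claimed
        Get-MSDSMSupportedHW
        mpclaim -s -d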
