Shared-nothing live migration over SMB: poor performance

Hi,
I'm experiencing really poor performance when migrating VMs between newly installed Windows Server 2012 R2 Hyper-V hosts.
Hardware:
Dell M620 blades
256 GB RAM
2× 8-core Intel E5-2680 CPUs
Samsung 840 Pro 512 GB SSDs running in RAID 1
6× Intel X520 10 GbE NICs connected to Force10 MXL enclosure switches
The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9.
The OS installation is pretty clean: Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
I have removed the NIC teams and the vmSwitch/vNICs to simplify troubleshooting. Now there is a single NIC configured with one IP. RSS is enabled, no VMQ.
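For reference, this is roughly how I check the offload state on the host (the adapter name "NIC1" is just a placeholder for your own):

# Check RSS and VMQ state on the physical NIC (adapter name is a placeholder)
Get-NetAdapterRss -Name "NIC1" | Format-List Name, Enabled
Get-NetAdapterVmq -Name "NIC1" | Format-List Name, Enabled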
The graphs are from 4 tests.
Tests 1 and 2 are NTttcp tests to establish that the network is working as expected.
Test 3 is a shared-nothing live migration of a running VM over SMB.
Test 4 is a storage migration of the same VM when shut down; the VM is transferred using BITS over HTTP.
It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130 MB/s (9 Gb/s) using 4 threads. The disks can handle a lot more than 74 MB/s, as proven by test 4.
While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
Any ideas?
Test                          | Config                                                    | vmSwitch | RSS | VMQ | Live Migration Config                 | Throughput (MB/s)
------------------------------|-----------------------------------------------------------|----------|-----|-----|---------------------------------------|------------------
NTttcp                        | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30                   | No       | Yes | No  | N/A                                   | 500
NTttcp                        | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30                   | No       | Yes | No  | N/A                                   | 1130
Shared nothing live migration | Online VM, 8 GB disk, 2 GB RAM, migrated host 1 -> host 2 | No       | Yes | No  | Kerberos, use SMB, any available net  | 74
Storage migration             | Offline VM, 8 GB disk, migrated host 1 -> host 2          | No       | Yes | No  | Unencrypted BITS transfer             | 350

Hi Per Kjellkvist,
Please try changing the "Advanced Features" settings of "Live Migrations" in Hyper-V Settings and select "Compression" in the "Performance options" area.
Then rerun tests 3 and 4.
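The same change can be scripted on both hosts. A minimal sketch, assuming the Hyper-V PowerShell module (run elevated):

# Switch live migration traffic to compressed TCP instead of SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Verify the current setting
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption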
Best Regards
Elton Ji

Similar Messages

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent,
    dynamic) with two 1 GBit/s NICs which is used only for management and transfers. Both of the NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 MBit/s, even though the team is able to saturate both interfaces with data rates going close to the 2 GBit/s boundary when transferring simple files over SMB. The
    storage migration seems to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit of 700 MBit/s, even over a (pretty much) dedicated
    interface which can handle more traffic when transferring simple files. Is this a known limitation with regard to teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured for the live migrations (it's set to SMB instead), but as far as I understand, this is not relevant for the storage migration part of the shared-nothing live migration anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection). For all NICs bound to the team, jumbo frames
    (9k) have been enabled, and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding team is not bound to a Hyper-V switch, but rather is just a "normal" team with an IP configuration), Hyper-V Port load balancing does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC, just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
    As I'm able to transfer close to 2 GBit/s of SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the
    team; looking at the TCP connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints, or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.
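    For anyone checking the same things, a quick sketch of the verification steps mentioned above (team and adapter names are placeholders):

    # Confirm SMB multichannel sees RSS on both ends of the connection
    Get-SmbMultichannelConnection | Format-List

    # Check the jumbo frame setting on the team members (names are placeholders)
    Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -DisplayName "Jumbo*"

    # Confirm the effective MTU on the team interface
    Get-NetIPInterface -InterfaceAlias "MgmtTeam" | Format-Table InterfaceAlias, NlMtu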

  • Win 2012 shared-nothing live migration slow performance

    Hello,
    I have two standalone Hyper-V servers: A) 2012 and B) 2012 R2.
    I tried a live migration with non-shared storage between them. The functionality is alright, but performance is low: the copy takes a long time, maxing out at 200 Mbps.
    I am not able to find the bottleneck. Network and disk performance seem to be good. It is a 1 Gbps network, and when I tried a simple copy/paste of the VHD file from A to B over the CIFS protocol, the speed was almost a full 1 Gbps - nice and fast.
    Is this by design? I am not able to reach full network performance with shared-nothing live migration.
    Thank you for your reply
    sincerely
    Peter Weiss

    Hi,
    I haven't found a similar issue with Hyper-V. Do both of your hosts have chipsets from the same family? Could you try switching between the three Live Migration performance
    options and monitor again? There may also be a disk or file system performance issue, so please try updating your RAID card firmware to the latest version.
    More information:
    Shared Nothing Live Migration: Goodbye Shared Storage?
    http://blogs.technet.com/b/canitpro/archive/2013/01/03/shared-nothing-live-migration-goodbye-shared-storage.aspx
    How to: Copy very large files across a slow or unreliable network
    http://blogs.msdn.com/b/granth/archive/2010/05/10/how-to-copy-very-large-files-across-a-slow-or-unreliable-network.aspx
    The similar thread:
    How to determine when to use the xcopy with the /j parameter?
    http://social.technet.microsoft.com/Forums/en-US/5ebfc25a-41c8-4d82-a2a6-d0f15d298e90/how-to-determine-when-to-use-the-xcopy-with-the-j-parameter?forum=winserverfiles
    Hope this helps

  • Hyper-V replica vs Shared Nothing Live Migration

      Shared Nothing Live Migration allows you to transport your VM over the WAN without shutting it down (how much time it takes on an I/O-intensive VM is another story).
      Hyper-V Replica does not allow you to perform the DR switch without a shutdown operation on the primary-site VM!
      Why can't it take the VM live to the DR site?
      That's because if we use Shared Nothing across the WAN, we don't use the data that Hyper-V Replica already has, and it also breaks everything Hyper-V Replica does.
      The point is: how do we take the VM to DR in a running state, and what is the best way to do that?
    Shahid Roofi

    Hi Shahid,
    Hyper-V Replica is designed as a DR technology, not as a technique to move VMs. It assumes that, should you require it, the source VM would probably be offline and therefore you would be powering up the passive copy from a previous point in time,
    as it's not a true synchronous replica copy. It does give you the added benefit of being able to run a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM
    running all the time at the remote site, otherwise you would have a split-brain situation for network traffic.
    Like live migration, shared-nothing live migration is a technology aimed at moving a VM, but as you know it offers the ability to do this without shared storage, requiring only a network connection. When initiated it moves the whole
    VM: it copies the virtual drive and memory, sends machine writes to both copies, and cuts over to the new VM once they match. With regards to the speed, I assume you have the shared-nothing live migration set up to compress data before sending it across the wire?
    If you want a true live migration between remote sites, one way would be to have a SAN array between both sites synchronously replicating data, then stretch the Hyper-V cluster across both sites. Obviously this is a very expensive solution, but perhaps
    the perfect scenario.
    Kind Regards
    Michael Coutanche

  • Shared nothing live migration wrong network

    Hey,
    I have a problem when doing shared nothing live migration between clusters using SCVMM 2012 R2.
    The transfer goes OK, but it chooses to use my team of 2× 1 Gb NICs (routable) instead of the team with 2× 10 Gb (non-routable, but open between all hosts).
    I have set vmmigrationsubnet to the correct net, checked the live migration settings, and verified that port 6600 is listening on the correct IP.
    But still it chooses to use the 1 Gb net.
    Anyone got any idea what I can do next?
    //Johan Runesson

    Do you have only your live migration network defined as such on the clusters, or do you have both defined? What are the IP networks of the live migration networks on each cluster?
    .:|:.:|:. tim
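    A sketch of how to inspect and pin the migration networks on each Hyper-V host (the subnet 10.10.10.0/24 is a placeholder for your non-routable 10 Gb net):

    # List the networks currently allowed for live migration
    Get-VMMigrationNetwork

    # Restrict live migration to the 10 Gb subnet only (placeholder subnet)
    Set-VMHost -UseAnyNetworkForMigration $false
    Add-VMMigrationNetwork "10.10.10.0/24"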

  • Hyper-V Failover Cluster Live migration over Virtual Adapter

    Hello,
    Currently we have a test environment of 3 Hyper-V hosts which have 2 NIC teams: a LAN team and a Live/Mngt team.
    In Hyper-V we created a virtual switch (which is on the Live/Mngt NIC team).
    We want to separate Mngt and Live traffic with VLANs. To do this we created 2 virtual adapters, Mngt and Live, assigned IP addresses, and tagged 2 VLANs (Mngt 10, Live 20).
    Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch which both are on. Meaning live migration simply uses the vSwitch instead of the virtual adapter.
    Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to bind live migration to a virtual adapter?
    Greetings Selmer

    It can be done in PowerShell but it's not intuitive.
    In Failover Cluster Manager, right-click Networks and click Live Migration Settings.
    Checked networks are allowed to carry traffic. Networks higher in the list are preferred.
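    Since the cluster stores this as a parameter on the "Virtual Machine" resource type, a hedged PowerShell sketch (the network name "Live" is a placeholder for yours):

    # Exclude every cluster network except the one named "Live" from live migration
    $exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "Live" }).Id -join ";"
    Get-ClusterResourceType -Name "Virtual Machine" |
        Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude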
    Eric Siron
    Altaro Hyper-V Blog

  • Memory sky rockets over time, poor performance.

    I've noticed when I come home from work, my Firefox is performing incredibly badly. Tabs take a while to open, mouse-wheel scrolling has a delay, etc. YouTube videos pause and stutter, though the audio is fine. I started to monitor it, and it's down to the fact that my wife will open 10 windows, sometimes with tabs, but never fully close all of them. When I come home, it's at 1.5-2.2 GB of memory used in Task Manager. I tried about:memory and it's littered with Facebook mentions, even though no Facebook tabs are open. She might have 1 or 2 tabs up for cooking recipes.
    The only fix is to close Firefox altogether and reopen it.
    If I lose her opened tabs, she gets a little upset, so I have to kill the exe in Task Manager so that when I reopen it, it will offer to resume the previous session.
    This is just a touch tedious after a few weeks.
    Is there a different command or something I can do to leave the current windows open and just flush out all that used memory?

    This looks like it may have done it.
    I only got to do it on Tuesday, but I cleared out a bunch of the unused extensions. So now it's just Adblock and an old Mouse Gestures Redox.
    Came home last night and it was sitting at 300 MB instead of 1.7 GB. Much nicer. Thanks.

  • Hyper-V 2012 R2 VMQ live migrate (shared nothing) Blue Screen

    Hello,
    I have a Windows Server 2012 R2 Hyper-V server, fully patched, new install. There are two Intel network cards with a NIC team configured on them (Windows NIC teaming). There are also some "virtual" NICs with assigned VLAN IDs.
    If I enable VMQ on these NICs and do a shared-nothing live migration of a VM, the host gets a BSOD. What can be wrong?

    Hi,
    I would like to check if you need further assistance.
    Thanks.
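    For what it's worth, a common workaround for VMQ-related blue screens is to disable VMQ on the team members until a fixed NIC driver is available. A hedged sketch (adapter names are placeholders):

    # Check the current VMQ state, then disable VMQ on the team members
    Get-NetAdapterVmq
    Disable-NetAdapterVmq -Name "NIC1","NIC2"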

  • Hyper V Lab and Live Migration

    Hi Guys,
    I have 2 Hyper-V hosts I am setting up in a lab environment. Initially, I successfully set up a 2-node cluster with CSVs which allowed me to do live migrations.
    The problem I have is that my shared storage is a bit of a cheat, as I have one disk assigned in each host and each host has StarWind Virtual SAN installed. Host A has an iSCSI connection to host B's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing etc. I can recover from it, but it takes time. I tinkered with the HA settings and the VM settings
    so they restarted/didn't restart etc., but with no success.
    My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do shared-nothing live migrations, but this takes time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense

    Hi Sir,
    >>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more extra hosts to simulate a more production-like scenario.
    If you have more physical computers, you may try other projects.
    Also please refer to this thread:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
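    To answer the SMB3 question directly: VMs can run from an SMB 3.0 share without a cluster, provided both hosts' computer accounts have full control on both the share and the NTFS ACL. A minimal sketch, with placeholder domain/host names and path:

    # On the host exposing the storage (DOMAIN, HostA/HostB and D:\VMs are placeholders)
    New-SmbShare -Name "VMs" -Path "D:\VMs" -FullAccess "DOMAIN\HostA$", "DOMAIN\HostB$", "DOMAIN\Domain Admins"
    # Mirror the permissions onto the NTFS ACL
    icacls D:\VMs /grant "DOMAIN\HostA$:(OI)(CI)F" "DOMAIN\HostB$:(OI)(CI)F"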
    Best Regards
    Elton Ji

  • RDS 2012 re-connection after live migration.

    Is there a way to speed up the reconnection after a live migration?
    If I am in a VM that live migrates, it feels like it hangs for about 10 seconds, then reconnects and is fine. While this is OK, it's not ideal. Is there a way to improve this?

    Actually, 10 seconds sounds like a very long time to me. In my experience using shared-nothing live migration, I've seen the switch being almost instantaneous, with a continual ping possibly dropping one or two packets, and certainly quick enough that it's
    unlikely any users would notice the change. So in terms of whether it can be improved, I'd say yes.
    As you can see from the technical overview here
    http://technet.microsoft.com/en-us/library/hh831435.aspx, the final step is for a signal to be sent to the switch informing it of the new MAC address of the server's new destination, so I wonder if the slow switch-over might be connected to that, or perhaps
    some other network issue.
    Is the network connection poor between the servers, which might cause a delay during the final sync of changes between the server copies? Are you moving between subnets?

  • Live Migration Failure

    I am attempting to migrate virtual machines from one site to another but they are failing. I have sufficient bandwidth but am concerned about the latency. Can anyone recommend the maximum latency between the two sites?

    Hi Sir,
    I assume this is a shared-nothing live migration.
    I would suggest performing a live migration inside each site first, to check that the configuration is OK.
    If live migration runs successfully within the local site, you may need to check the link layer between the two sites.
    During live migration you can also use netmon.exe to analyse the traffic for useful information.
    In addition, please check the event log of the Hyper-V hosts for any clue.
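    A quick way to pull recent warnings and errors from the Hyper-V migration log on each host (a sketch, using the log name shipped with 2012 R2):

    # Show the most recent non-informational events from the Hyper-V VMMS log
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 50 |
        Where-Object { $_.LevelDisplayName -ne "Information" } |
        Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize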
    Best Regards,
    Elton Ji 

  • Cannot perform a live migration while replication is enabled

    I have two Server 2012 machines that I am running Hyper-V on (just testing for now). I can right-click on a virtual machine and move it (live migration) to the other server with no issues. I can also enable replication on the VM, and that
    appears to work properly. However, if I have replication enabled on a VM, I am unable to perform a live migration. I receive an error that says the VM already exists on the destination server (because of the replication).
    Is there any way to perform a live migration while replication is enabled?

    I can't think of a reason why you would want to anywhere else but a test environment. Replication's intended use is DR across a WAN between two physical locations (there are others, but this is primarily why it was created). If Office1 burns down, you can
    boot your replica at Office2. Live migration is for local HA: if Server1 has a hardware failure, live migrate to Server2.
    If everyone could simply live migrate over their WAN link between offices, replication would be redundant. But getting a fast enough WAN link for this is extremely expensive, so Microsoft created replication for high-latency, low-bandwidth WAN connections.
    TL;DR: replication and live migration are mutually exclusive in almost all environments. It's not even that they *can't* work together; there's just no point in making them work together, because they're for different use cases.

  • Live migration suddenly won't work on 3.1.1

    I've done many live migrations over the last few months with no problems. Suddenly, they don't work any more. Here's what I see:
    I start the migration from the Manager.
    The VM immediately disappears from the list of VMs on the source server, and appears on the destination server.
    The job shows "in progress", and it NEVER completes.
    The "% complete" for the job never says anything but ZERO.
    If I look at the 'details' on the 'in progress' migration job, it says:
    Job Construction Phase
    begin()
    Appended operation 'Bridge Configure Operation' to object '0004fb00002000005c945b4212271249 (network.BondPort (2) in oravm3.acbl.net)'.
    Appended operation 'Virtual Machine Migrate' to object '0004fb000006000066c8e49bc5ab54b0 (jiplcm01)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [Server] e2:a3:70:c6:67:89:e1:11:bb:8e:e4:1f:13:eb:92:b2 (oravm3.acbl.net)
    Object (IN_USE): [BondPort] 0004fb00002000005c945b4212271249 (network.BondPort (2) in oravm3.acbl.net)
    Operation: Bridge Configure Operation
    Object (IN_USE): [Server] 92:0f:60:b4:84:91:e1:11:aa:cb:e4:1f:13:eb:d2:3a (oravm2.acbl.net)
    Object (IN_USE): [VirtualMachine] 0004fb000006000066c8e49bc5ab54b0 (jiplcm01)
    Operation: Virtual Machine Migrate
    Job Running Phase at 13:10 on Wed, Jan 2, 2013
    Job Participants: [92:0f:60:b4:84:91:e1:11:aa:cb:e4:1f:13:eb:d2:3a (oravm2.acbl.net)]
    Actioner
    Starting operation 'Bridge Configure Operation' on object '0004fb00002000005c945b4212271249 (network.BondPort (2) in oravm3.acbl.net)'
    Bridge [0004fb001018c4c] already exists (and should exist) on interface [bond1] on server [oravm3.acbl.net]; skipping bridge creation
    Completed operation 'Bridge Configure Operation' completed with direction ==> DONE
    Starting operation 'Virtual Machine Migrate' on object '0004fb000006000066c8e49bc5ab54b0 (jiplcm01)'
    Job failed commit (internal) due to Caught during invoke method: java.net.SocketException: Socket closed
    Wed Jan 02 13:11:36 EST 2013
    com.oracle.odof.exception.InternalException: Caught during invoke method: java.net.SocketException: Socket closed
    Wed Jan 02 13:11:36 EST 2013
    at com.oracle.odof.OdofExchange.invokeMethod(OdofExchange.java:956)
    at com.oracle.ovm.mgr.api.job.InternalJobProxy.objectCommitter(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobImpl.internalJobCommit(JobImpl.java:281)
    at com.oracle.ovm.mgr.api.job.JobImpl.commit(JobImpl.java:651)
    at com.oracle.ovm.mgr.faces.model.JobEO$CommitWork.run(JobEO.java:233)
    at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:183)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2248)
    at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2541)
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2551)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
    at com.oracle.odof.io.AbstractSocket.receive(AbstractSocket.java:220)
    at com.oracle.odof.io.AbstractSocket.receive(AbstractSocket.java:173)
    at com.oracle.odof.OdofExchange.send(OdofExchange.java:473)
    at com.oracle.odof.OdofExchange.send(OdofExchange.java:427)
    at com.oracle.odof.OdofExchange.invokeMethod(OdofExchange.java:938)
    ... 7 more
    Anyone have any idea what the problem is? What can I do to gather useful information?

    Job failed commit (internal) due to Caught during invoke method: java.net.SocketException: Socket closed
    at com.oracle.ovm.mgr.api.job.InternalJobProxy.objectCommitter(Unknown Source)
    It looks to me like either the target server does not have access to everything needed to complete the migration (access to the shared pool, access to the shared storage, etc.) or the target server is having an issue communicating with the VM Manager.
    I wish such errors were more descriptive, but I believe the "Unknown Source" and "Socket closed" entries indicate such a problem.

  • Configure Live Migration Multi-channel with SW QoS

    I'm a bit confused about what is needed to properly configure live migration using SMB multichannel with QoS, using an SCVMM logical switch. Where I'm confused, coming from a VMware background, is how many and what type of vNICs I need. I'm thinking
    I need two of some virtual NIC type, each with a unique IP address, which SMB multichannel for LM would use. I would then somehow tie a software QoS policy to the vNICs to ensure LM traffic doesn't stomp all over other network traffic types.
    Any explanations would be most welcomed.
    Blog: www.derekseaman.com, VMware vExpert 2012/2013

    Hi there.
    Hopefully this guide should give you a detailed explanation of the configuration of this setup, explaining logical switches, port profiles (virtual and uplink) and QoS:
    http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a
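    Outside SCVMM, the underlying host-level pieces look roughly like this; a sketch assuming a converged vSwitch named "ConvergedSwitch" and two LM vNICs (all names are placeholders):

    # Two management-OS vNICs so SMB multichannel has two paths for live migration
    Add-VMNetworkAdapter -ManagementOS -Name "LM1" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LM2" -SwitchName "ConvergedSwitch"

    # Weight-based QoS so live migration can't starve other traffic classes
    New-NetQosPolicy "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30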
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Hyper-V over SMB 3.0 poor performance on 1GB NIC's without RDMA

    This is a bit of a repost, as the last time I tried to troubleshoot this my question got hijacked by people spamming alternative solutions (StarWind).
    For my own reasons I am currently evaluating Hyper-V over SMB with a view to designing our new production cluster based on this technology. Given our budget and resources, a SoFS makes perfect sense.
    The problem I have is that in all my testing, as soon as I host a VM's files on an SMB 3.0 server (SoFS or standalone), I am not getting the performance I should over the network.
    My testing so far:
    4 different decent-spec machines with 4-8 GB RAM and dual/quad-core CPUs.
    Test machines are mostly Server 2012 R2, with one Windows 8.1 Hyper-V host thrown in for extra measure.
    Storage is a variety of HDDs and SSDs, easily capable of handling >100 MB/s of traffic and 5k+ IOPS.
    Have tested storage configurations as standalone and as Storage Spaces (mirrored, spanned, and with tiering).
    All storage is performing as expected in each configuration.
    Multiple 1 Gb NICs from Broadcom, Intel, and Atheros. The Broadcoms are server-grade dual-port adapters.
    Switching has been a combination of HP E5400zl, HP 2810, and even direct connect with crossover cables.
    Have tried standalone NICs, teamed NICs, and even storage through the Hyper-V extensible switch.
    File copies between machines will easily max out the 1 Gb links in any direction.
    VMs hosted locally show internal benchmark performance in line with roughly 90% of the underlying storage performance.
    Tested with dynamic and fixed VHDXs.
    NICs have been used in combinations of RSS and TCP offload enabled/disabled.
    Whenever I host VM files on a different server from where the VM is running, I observe the following:
    Write speeds within the VM to any attached VHDs are severely affected and run at around 30-50% of 1 Gb.
    Read speeds are not as badly affected, but just about manage to hit 70% of 1 Gb.
    Random IOPS are not noticeably affected.
    Running multiple tests at the same time over the same 1 Gb links results in the same total throughput.
    The same results are observed no matter which machine hosts the VM or the VHDX files.
    Any host involved in a test will show a healthy amount of CPU time allocated to hardware interrupts. On a 6-core 3.8 GHz CPU this is around 5% of total; on the slowest machine (dual-core 2.4 GHz) this is roughly 30% of CPU load.
    Things I have yet to test:
    Gen 1 VMs
    VMs running anything other than Server 2012 R2
    Running the tests on actual server hardware (hard, as most of ours are in production use)
    Is there a default QoS or IOPS limit when SMB detects Hyper-V traffic? I just can't wrap my head around how all the tests are seeing an identical bottleneck as soon as the storage traffic goes over SMB.
    What else should I be looking for? There must be something obvious that I am overlooking!
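    On the "default QoS" question: as far as I know there is no built-in SMB throttle for Hyper-V traffic unless the optional SMB Bandwidth Limit feature (FS-SMBBW) is installed and configured. One way to rule it out, as a sketch:

    # If the feature is absent, no SMB bandwidth caps apply
    Get-WindowsFeature FS-SMBBW
    # Lists any per-category caps (Default, VirtualMachine, LiveMigration), if configured
    Get-SmbBandwidthLimit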

    By nature of a SoFS, reads are really good, but there is no write cache; SoFS only seems to perform well with disk mirroring, which improves write performance and redundancy but halves your disk capacity.
    A mirror (RAID 1 or RAID 10) actually REDUCES the number of write IOPS. With reads, every spindle takes part in I/O request processing (assuming the I/O is big enough to cover the stripe), so you multiply IOPS and MB/s by the number of spindles in the RAID group, while every write
    needs to go to the duplicated locations. That's why READS are fast and WRITES are slow (1/2 of the read performance). This is an absolutely basic thing, and a SoFS layered on top can do nothing to change it.
    StarWind iSCSI SAN & NAS
    Not wanting to put the cat amongst the pigeons, but this isn't strictly true: RAID 1 and 10 give you the best IOPS performance of any RAID group, which is why the best-performing SQL clusters use RAID 10 for most of their storage requirements. For comparison:
    Features                     | RAID 0        | RAID 1               | RAID 1E              | RAID 5               | RAID 5EE
    -----------------------------|---------------|----------------------|----------------------|----------------------|---------------------
    Minimum # Drives             | 2             | 2                    | 3                    | 3                    | 4
    Data Protection              | No protection | Single-drive failure | Single-drive failure | Single-drive failure | Single-drive failure
    Read Performance             | High          | High                 | High                 | High                 | High
    Write Performance            | High          | Medium               | Medium               | Low                  | Low
    Read Performance (degraded)  | N/A           | Medium               | High                 | Low                  | Low
    Write Performance (degraded) | N/A           | High                 | High                 | Low                  | Low
    Capacity Utilization         | 100%          | 50%                  | 50%                  | 67% - 94%            | 50% - 88%
    Typical Applications         | High-end workstations, data logging, real-time rendering, very transitory data | Operating system, transaction databases | Operating system, transaction databases | Data warehousing, web serving, archiving | Data warehousing, web serving, archiving

Maybe you are looking for

  • Can I use home sharing on separate accounts on same computer?

    We have iPods (one 3G and one 4G) and separate accounts on our Mac and on iTunes but would like to share some but not all of the same things through home sharing. Is this possible? I haven't found a way to do it. The information I found available onl

  • Why can the users in one child domain logon to computers in a different child domain in Server 2012 R2?

    I have setup a test system. It has a domain with 2 child domains.  DomainA.xyz.com has users and workstations. DomainB.xyz.com is a resource domain and has servers.  wyx.com is for IT administration. Users in domainA can logon to the domainB computer

  • Fglrx + acpi = hard problems

    Hello to all, I installed one year ago chakra linux distrbution and all works fine but some things doesn't like to me, so I installed Slackware, but I don't have much time to spent compiling programs.... and compiling... so, I come back with arch + k

  • Outputting Lossy Dolby Digital & Lossy DTS 5.1 audio via HDMI - Dv7-4074 Win 7 64bit ATI HD 5650

    Hi Everybody I'm trying to output lossy DD & DTS 5.1 soundtracks via HDMI to my HDTV then to my AV receiver via optical. Two channel LPCM works fine. I checked for updated drivers for both the ATI HD 5650 AV Card and Win 7; says both are current vers

  • Can not reset all settings

    I recently updated to ios 7.0.2. on my 4S and noticed a problem in not being able to enter events on my calendar. This turned out ot be a set up problem as only the "Birthdays Calendar" was selected. However, in the process of troubleshooting with Ap