Weird SMB Multichannel performance VM to Storage

Hi,
I have some weird speed issues with SMB Multichannel. I have tried almost every combination, but I can't get a stable speed from the VM to my storage system. When I transfer a file from my workstation to the VM I get a constant speed, but when I transfer a file
to the storage the speed is a bit lower.
The current configuration is:
System 1 - Hyper-V
OS: Hyper-V 2012 R2
NIC: HP Broadcom 331T (latest HP drivers)
NIC-1 and 2 in an LACP Address Hash team
NIC-3 and 4 in an LACP Address Hash team
VM 1
VM: Server 2012 (also tried Server 2012 R2)
Virtual NIC 1 in range 10.0.0.212/24
Virtual NIC 2 in range 10.1.0.212/24
System 2 (storage)
OS: Server 2012 R2
NIC: HP Broadcom 331T (latest HP drivers)
NIC-1: 10.0.0.218/24
NIC-2: 10.0.0.219/24
NIC-3: 10.1.0.218/24
NIC-4: 10.1.0.219/24
* RSS is enabled on all NICs
When I transfer a file from VM1 to my storage system the speed is somewhere between 120MB/s and 190MB/s - almost the same speed as from my storage to VM1. Sometimes the speed drops to 100MB/s. I have seen file copy speeds of 260-350MB/s once, but I haven't
been able to achieve that again.
Because I am not sure what is causing the problem above, I also tried sending some files from my workstation. When I transfer a file to VM1 I get a constant 226MB/s, and when I transfer a file to my storage I get somewhere between 190-220MB/s. My workstation
has 2x 1Gb Realtek NICs and is running Windows 8.1.
I would expect a better transfer speed from VM1 to my storage and the other way around. I also tried not using a team on VM1, with the 4 NICs each in their own subnet.
It does load balance all traffic across all NICs.
I have tested the file transfers between two Samsung EVO 250GB SSDs.
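A quick way to confirm whether SMB Multichannel is really spreading the copy across all interfaces is to query the active connections from the client side while a transfer is running. This is only a minimal sketch using the standard SMB and networking cmdlets in Server 2012 R2; the comments describe what each call shows:
# Run on the SMB client (e.g. VM1) while a file copy to the storage server is in progress
Get-SmbMultichannelConnection                              # one row per client/server interface pair in use
Get-SmbConnection | Select ServerName, Dialect, NumOpens   # dialect should be 3.x for multichannel
Get-NetAdapterRss | Select Name, Enabled                   # RSS must stay enabled on non-RDMA NICs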

Hi Sir,
"SMB Multichannel is the feature responsible for detecting the RDMA capabilities of network adapters to enable SMB Direct."
http://technet.microsoft.com/en-us/library/jj134210.aspx
I also checked the NIC "Broadcom 331T", but there is no RDMA feature:
http://zh-cn.broadcom.com/products/Ethernet-Controllers-and-Adapters/Enterprise-Adapters/HP
I would suggest you disable VMQ for the physical NICs on the Hyper-V host and then test again.
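If you want to script that suggestion, VMQ can be switched off per physical adapter from an elevated PowerShell prompt on the host. The adapter names below are placeholders; check yours with Get-NetAdapter first:
Get-NetAdapterVmq                                  # show the current VMQ state of each physical NIC
Disable-NetAdapterVmq -Name "NIC1","NIC2"          # example names; repeat for the other team members
# Re-enable later with: Enable-NetAdapterVmq -Name "NIC1","NIC2"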
Best Regards
Elton Ji

Similar Messages

  • Can the 128 GB SSD part of a Fusion drive be replaced with a 960 GB SSD drive for greater performance and added storage?

    Can the 128 GB SSD part of a Fusion drive be replaced with a 960 GB SSD drive for greater performance and added storage in a 2012 Mac mini?

    First, I suspect Apple fine tuned its choice of SSD size for the Fusion drive based on the best performance/price ratio. I'd be surprised if a larger SSD provided any improvement that would justify the added price of a larger SSD.
    Second, if I could justify the cost of a 960 SSD drive I wouldn't bother with a Fusion drive!

  • Very poor SMB share performance on 3TB time capsule.

    Hi,
    Is the SMB share performance on the Time Capsule supposed to be very poor compared to the AFP shares?
    My TC will do around 9MB/s transfers using AFP, but on SMB it will do around 1.25MB/s (varies from a few KB/s to peaks of 4MB/s).
    I tested both protocols using an Ethernet cable connected from my MacBook Pro directly to the TC.
    This is a 3TB Time Capsule running firmware 7.6.1, and I did a factory reset before my tests.
    Time machine backups were disabled during the tests.
    I need SMB for my WDTV media players.

    I would not say it is supposed to be, but you are probably correct that it is very poor. This issue crops up time and again, though generally not from people running Macs; it is Windows that is the problem.
    A few things to check:
    1. Are the names actually SMB-correct? i.e. short, no spaces, pure alphanumeric - both the TC name and the wireless names if you use wireless.
    2. Is the workgroup set correctly? i.e. WORKGROUP. This probably doesn't come up with the Mac, but it affects most PC connections and possibly the WD TV.
    3. How old is the TC? If it is an early-release Gen 4, you can take it back to 7.5.2 firmware. Frankly, much of the trouble was introduced with Lion, as they changed the SAMBA setup. I would test from the WDTV or a PC; if it doesn't glitch, you are doing better than the Mac can do, i.e. the reading on the Mac is pure Lion lunacy. https://discussions.apple.com/thread/3206668?start=0&tstart=0
    One of those Apple changes that shows they hardly test anything anymore. Do a Google search for Lion SMB fix.

  • X4500 RAID Configuration for best performance for video storage

    Hello all:
    Our company is working with a local university to deploy IP video security cameras. The university has an X4500 Thumper that they would like to use for storage of the video archives. The video management software (VMS) will run on an Intel-based server with Windows 2003 Server as the OS and connect to the Thumper via iSCSI. The VMS manages the permissions, schedules and other features of the cameras and records all video on the local drives until the scheduled archive time. When the archive time occurs, the VMS transfers the video to the Thumper for long-term storage. It is our understanding that when using iSCSI and a Windows OS there is a 2TB limit for the storage space, so we will divide the pool into several 2TB segments.
    The question is: given this configuration, what RAID level (0, 1, Z or Z2) will provide the highest level of data protection without compromising performance to a level that would be noticeable? We are not writing the video directly to the Thumper; we are transferring it from the drives of the Windows server to the Thumper, and we need that transfer to be very fast, since the VMS stops recording during the archiving and restarts when complete, creating down time for the cameras.
    Any advice would be appreciated.

    I'd put as many disks as possible into a RAID 5 (striping with parity) set. This will provide the highest level of performance while still being able to sustain a single disk failure.
    With striping, some data is written to all the disks in the stripe set. So, if you have 10 disks in the set, then instead of writing data to a single disk, which is slow, 1/10th of the data is written to each disk simultaneously, which is very fast. In effect, the more disks you write to, the faster the operation completes.

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

    Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried using two separate options (CASE and IF statements) on the YTD, Year and Qtr members (i.e. MarYTD). Both worked, and we are now concerned about performance. Any suggestions?

  • Networking Best Practices - Connecting Two Switches

    Connecting two switches together is an easy task, which makes it so frustrating when it doesn’t work. Here we will outline a basic scenario of connecting two switches and achieving connectivity. In these scenarios we will be using commands and settings that will work for most modern PowerConnect switches. However this does not cover all possible scenarios and the commands may differ slightly from switch to switch.
    For instance, in most cases you can use General or Trunk mode when connecting two switches. However, on the PowerConnect 62xx series switches, you must use General mode if you want to allow management traffic onto the switch over the PVID.  If you use Trunk mode, you will not have the default VLAN on those ports.  The ports will only allow tagged traffic.
    For more details on the difference between Access, General, and Trunk modes, follow this link.
    http://en.community.dell.com/support-forums/network-switches/f/866/p/19445142/20089157.aspx#20089157
    It is always a good idea to have the user and CLI guide for your switch, to reference any possible changes in command syntax.
    http://support.dell.com/support/edocs/network/
    Layer 2
    Layer 2 switches operate at the data link layer of the OSI model. Layer 2 is responsible for error checking and for transmitting data across the physical media. Source and destination MAC addressing are layer 2 functions. Layer 2 switches use the MAC addresses of frames to determine where those frames should go. A switch learns the MAC addresses of all attached devices and builds a segment/forwarding table.
    When a switch receives a frame with a destination address that isn't in its forwarding table, the switch forwards the frame to all other ports. If the destination machine responds to the sender, the switch listens to the reply and learns which port the destination machine is attached to. It then adds that MAC address to the forwarding table.
    The Dell PowerConnect Layer 2 switches have ports that all operate in VLAN 1 by default. If it is acceptable to have all traffic on the same broadcast domain, then you can simply leave the default alone, connect the two switches and traffic will flow.
     If you do not want all traffic on the same broadcast domain, then we need to look at adding additional broadcast domains through the use of VLANs.
     We will use 3 VLANs for the following scenario.
    VLAN 1=Management
    VLAN 2=Client
    VLAN 3=Server
    To create these VLANs we do the following commands (VLAN 1 is already created by default)
    console(config)# vlan database
    console(config-vlan)# VLAN 2
    console(config-vlan)# VLAN 3
    console(config-vlan)# exit
    We can then name the VLANs to help keep things organized.
    console(config)# interface vlan 2
    console(config-vlan)# name Client
    console(config-vlan)# exit
    console(config)# interface vlan 3
    console(config-vlan)# name Server
    console(config-vlan)# exit
    Once we have the VLANs created, we can place a device in a VLAN by putting the port it plugs into in access mode for that specific VLAN.
    So if we have a workstation on port e2 that we want placed in VLAN 2, we would issue the following commands.
    console(config)# interface ethernet 1/e2
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 2
    console(config-if)# exit
    The next port, e3, plugs into a server we want on VLAN 3, so we would issue these commands.
    console(config)# interface ethernet 1/e3
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 3
    console(config-if)# exit
    For the ports connecting the two switches together, we place the ports in general mode and specify the PVID and allowed VLANs.
    Port e1, which connects the two switches to each other, would be configured like this.
    console(config)# interface ethernet 1/e1
    console(config-if)# switchport mode general
    console(config-if)# switchport general allowed vlan add 2,3 tagged
    console(config-if)# switchport general pvid 1
    console(config-if)# exit
    Once these VLANs and port settings are made on both switches, a server connected to switch A on VLAN 3 should be able to communicate with another server connected to switch B that is also in VLAN 3. Without the use of a router, the devices in VLAN 3 will not be able to communicate with devices outside of their broadcast domain (i.e. VLAN 2 devices cannot reach VLAN 3 devices).
    Layer 3 + Layer 2
     Until recently, routers were the only devices capable of layer 3 routing. Switches capable of routing are now available and in widespread use. In most cases we will connect our layer 2 switches to a layer 3-capable switch to perform the routing for us.
     On the layer 3 switches we will use the same VLANs and setup that we did with the layer 2 switches, and then add to the configuration.
     We can assign an IP address to each switch with the following command.
    Switch A
    console(config)#ip address 172.16.1.1 255.255.255.0
    Switch B
    console(config)#ip address 172.16.2.1 255.255.255.0
    Then we will enable routing only on Switch A
    console(config)# ip routing
    On Switch A we assign an IP address to VLAN 2 and enable routing on the VLAN.
    console(config)# interface vlan 2
    console(config-if-vlan2)# routing
    console(config-if-vlan2)# ip address 172.16.20.1 255.255.255.0
    console(config-if-vlan2)# exit
    On Switch A we assign an IP address to VLAN 3 and enable routing on the VLAN.
    console(config)# interface vlan 3
    console(config-if-vlan3)# routing
    console(config-if-vlan3)# ip address 172.16.30.1 255.255.255.0
    console(config-if-vlan3)# exit
    On both switch A and switch B we will keep things simple and use interface 1/e1 for the connection between the switches, setting 1/e1 on both switches to general mode, allowing the additional VLANs 2 and 3 (tagged), and keeping the PVID of 1.
    console(config)# interface ethernet 1/e1
    console(config-if)# switchport mode general
    console(config-if)# switchport general allowed vlan add 2,3 tagged
    console(config-if)# switchport general pvid 1
    console(config-if)# exit
    We will have one client computer connect to switch A on port 1/e2 and one client connect to switch B on port 1/e2. These ports will be in access mode for VLAN 2, and the config should look like this on both switches.
    console(config)# interface ethernet 1/e2
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 2
    console(config-if)# exit
    We will have another client computer connect to switch A on port 1/e3 and one client connect to switch B on port 1/e3. These ports will be in access mode for VLAN 3, and the config should look like this on both switches.
    console(config)# interface ethernet 1/e3
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 3
    console(config-if)# exit
    On Clients connected to Switch A we will assign an IP address and gateway based on the VLAN they are in access mode for.
    Client connected to access port for VLAN 2.
    IP Address:172.16.20.11
    Default Gateway:172.16.20.1
    Client connected to access port for VLAN 3.
    IP Address:172.16.30.11
    Default Gateway:172.16.30.1
    On Clients connected to Switch B we will assign an IP address and gateway based on the VLAN they are in access mode for.
    Client connected to access port for VLAN 2.
    IP Address:172.16.20.12
    Default Gateway:172.16.20.1
    Client connected to access port for VLAN 3.
    IP Address:172.16.30.12
    Default Gateway:172.16.30.1
    External Connection
    At some point we may want traffic to have an external connection. To do this we can create a new VLAN for our point to point connection from Switch A to our router. We will use VLAN 7 for this and assign an IP address.
    console(config)# vlan database
    console(config-vlan)# VLAN 7
    console(config-vlan)# exit
    console(config)# interface vlan 7
    console(config-vlan)# name WAN
    console(config-if-vlan7)# routing
    console(config-if-vlan7)# ip address 10.10.10.2 255.255.255.0
    console(config-if-vlan7)# exit
    On our router we will assign an IP address of 10.10.10.1
    Then place the port connecting the switch and router into access mode for VLAN 7.  In this case we use port e4.
     console(config)# interface ethernet 1/e4
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 7
    console(config-if)# exit
    We will then need to put in a default route with the next hop as the router IP address.  This allows the switch to know where to route traffic not destined for VLANs 2, 3, or 7.
    console(config)#ip route 0.0.0.0 0.0.0.0 10.10.10.1
    Next, on the router we'll need to add a route back so the router knows about the networks attached to switch A. Generally, adding a static route on most routers is done with the following command:
    ip route {Network} {Subnet Mask} {Next-Hop IP}
    In our case, here are the two static routes we could use.
    ip route 172.16.20.0 255.255.255.0 10.10.10.2
    ip route 172.16.30.0 255.255.255.0 10.10.10.2
    The routing that we enabled on Switch A will enable traffic from the other VLANs to traverse over port 1/e4 to the router, connecting us to external traffic. The routes we added to the router allow the traffic to flow back to the switch over port 1/e4.
    Layer 3 + Layer 3
    In some situations we have two switches, each setup to route for its own broadcast domain, which we want to connect together. In this situation we no longer have a need to use Trunk or General mode between the switches. Instead we can create a common VLAN that will be used for the connection between the two switches.
    To create this VLAN we will run the following commands on both switch A and B
    console(config)# vlan database
    console(config-vlan)# vlan 6
    console(config-vlan)# exit
    console(config)# interface vlan 6
    console(config-vlan)# name Connection
    console(config-vlan)# exit
    On switch A we assign an IP address to VLAN 6 and enable routing on the VLAN.
    console(config)# interface vlan 6
    console(config-if-vlan6)# routing
    console(config-if-vlan6)# ip address 172.16.60.1 255.255.255.0
    console(config-if-vlan6)# exit
    On switch B we assign an IP address to VLAN 6 and enable routing on the VLAN.
    console(config)# interface vlan 6
    console(config-if-vlan6)# routing
    console(config-if-vlan6)# ip address 172.16.60.2 255.255.255.0
    console(config-if-vlan6)# exit
    On both switch A and B we place the connecting ports into Access mode for VLAN 6.
    console(config)# interface ethernet 1/e1
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 6
    console(config-if)# exit
    We then need to make some changes to switch B now that it is layer 3 and not layer 2 and has its own broadcast domain.
    We will enable routing on Switch B
    console(config)# ip routing
    What used to be VLAN 2 and 3 will now be VLAN 4 and 5 for our separate broadcast domains.
    On Switch B we assign an IP address to VLAN 4 and enable routing on the VLAN.
    console(config)# interface vlan 4
    console(config-if-vlan4)# routing
    console(config-if-vlan4)# ip address 172.16.40.1 255.255.255.0
    console(config-if-vlan4)# exit
    On Switch B we assign an IP address to VLAN 5 and enable routing on the VLAN.
    console(config)# interface vlan 5
    console(config-if-vlan5)# routing
    console(config-if-vlan5)# ip address 172.16.50.1 255.255.255.0
    console(config-if-vlan5)# exit
    On Clients connected to Switch B we will assign an IP address and gateway based on the VLAN they are in access mode for.
    Client connected to access port for VLAN 4.
    IP Address:172.16.40.11
    Default Gateway:172.16.40.1
    Client connected to access port for VLAN 5.
    IP Address:172.16.50.11
    Default Gateway:172.16.50.1
    The end result should look like this.
     Troubleshooting
    If we are having issues with connectivity, we may need to put some static routes in place to help traffic reach the next hop in the network.
    On switch A we configure a static route pointing to the next hop in the network, which is the router.
    console(config)# ip route 0.0.0.0 0.0.0.0 10.10.10.1
    The external router will also need a path defined back to all networks/VLANs.
    To check the status of a port we can use the "show interfaces detail" command. For example, to check the status of port 48, we would run this command.
    console# show interfaces detail ethernet 1/g48
     To check routing paths:
    console# show ip route
    The IP address of the network for each VLAN should be listed as C - Connected, along with a path or default route to your upstream router.
    We can use basic ping commands from a client to help test where connectivity drops off; this narrows down where in the network to start troubleshooting (see the example after this list).
    -Ping from client to default gateway, being the VLAN the client is in access mode for. If this fails then we may need to double check our client settings making sure the proper IP and gateway are being used.
    -Ping from client to the ip address of the switch the client plugs into. If this fails we may not have VLAN routing enabled on the VLAN the client is in.
    -Ping from client to another client on same VLAN, same switch. If this fails we need to check on client settings, IP address and gateway.
    -ping from client to another client on different VLAN, same switch. If this fails we need to double check the VLAN routing commands are in place.
    -ping from client to the ip address of the next switch in the network. If this fails then check Trunk port configuration from switch to switch, ensuring the VLAN is added to the Trunk port.
    -ping from client to another client on same VLAN, different switch. If this fails, check Trunk port settings.
    -ping from client to another client on different VLAN, different switch. If this fails then check trunk settings and VLAN routing configuration.
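    As a concrete example of working through that list with the addressing used above, these are the pings a client at 172.16.20.11 could run (Test-Connection is the PowerShell equivalent of ping; the target addresses are the ones assigned earlier in this article):
    Test-Connection 172.16.20.1     # default gateway (VLAN 2 interface on switch A)
    Test-Connection 172.16.1.1      # switch A's own management address
    Test-Connection 172.16.2.1      # switch B's management address (tests the inter-switch link)
    Test-Connection 172.16.20.12    # client in the same VLAN on switch B
    Test-Connection 172.16.30.11    # client in a different VLAN on switch A (tests VLAN routing)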


  • [Forum FAQ] Troubleshooting Network File Copy Slowness

    1. Introduction
    The Server Message Block (SMB) Protocol is a network file sharing protocol, and as implemented in Microsoft Windows is known as Microsoft SMB Protocol. The set of message packets that defines a particular version of the protocol is called a dialect. The Common
    Internet File System (CIFS) Protocol is a dialect of SMB. Both SMB and CIFS are also available on VMS, several versions of Unix, and other operating systems.
    Microsoft SMB Protocol and CIFS Protocol Overview
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa365233(v=vs.85).aspx
    Server Message Block overview
    http://technet.microsoft.com/en-us/library/hh831795.aspx
    1.1 SMB Versions and Negotiated Versions
    (Thanks to Jose Barreto's blog.)
    There are several different versions of SMB used by Windows operating systems:
    CIFS – The ancient version of SMB that was part of Microsoft Windows NT 4.0 in 1996. SMB1 supersedes this version.
    SMB 1.0 (or SMB1) – The version used in Windows 2000, Windows XP, Windows Server 2003 and Windows Server 2003 R2
    SMB 2.0 (technically SMB2 version 2.002) – The version used in Windows Vista (SP1 or later) and Windows Server 2008 (or any SP)
    SMB 2.1 (technically SMB2 version 2.1) – The version used in Windows 7 (or any SP) and Windows Server 2008 R2 (or any SP)
    SMB 3.0 (or SMB3) – The version used in Windows 8 and Windows Server 2012
    SMB 3.02 (or SMB3) – The version used in Windows 8.1 and Windows Server 2012 R2
    Windows NT is no longer supported, so CIFS is definitely out. Windows Server 2003 R2 with a current service pack is under Extended Support, so SMB1 is still around for a little while. SMB 2.x in Windows Server 2008 and Windows Server 2008
    R2 are under Mainstream Support until 2015. You can find the most current information on the
    support lifecycle page for Windows Server. The information is subject to the
    Microsoft Policy Disclaimer and Change Notice.  You can use the support pages to also find support policy information for Windows
    XP, Windows Vista, Windows 7 and Windows 8.
    In Windows 8.1 and Windows Server 2012 R2, we introduced the option to completely disable CIFS/SMB1 support, including the actual removal of the related binaries. While this is not the default configuration, we recommend disabling this older
    version of the protocol in scenarios where it’s not useful, like Hyper-V over SMB. You can find details about this new option in item 7 of this blog post:
    What’s new in SMB PowerShell in Windows Server 2012 R2.
    Negotiated Versions
    Here’s a table to help you understand what version you will end up using, depending on what Windows version is running as the SMB client and what version of Windows is running as the SMB server:
    SMB client \ SMB server    | Win 8.1 / WS 2012 R2 | Win 8 / WS 2012 | Win 7 / WS 2008 R2 | Vista / WS 2008 | Previous versions
    Windows 8.1 / WS 2012 R2   | SMB 3.02             | SMB 3.0         | SMB 2.1            | SMB 2.0         | SMB 1.0
    Windows 8 / WS 2012        | SMB 3.0              | SMB 3.0         | SMB 2.1            | SMB 2.0         | SMB 1.0
    Windows 7 / WS 2008 R2     | SMB 2.1              | SMB 2.1         | SMB 2.1            | SMB 2.0         | SMB 1.0
    Windows Vista / WS 2008    | SMB 2.0              | SMB 2.0         | SMB 2.0            | SMB 2.0         | SMB 1.0
    Previous versions          | SMB 1.0              | SMB 1.0         | SMB 1.0            | SMB 1.0         | SMB 1.0
    * WS = Windows Server
    1.2 Check, Enable and Disable SMB Versions in Windows operating systems
    In Windows 8 or Windows Server 2012 and later, there is a new PowerShell cmdlet that can easily tell you what version of SMB the client has negotiated with the File Server. You simply access a remote file server (or create a new mapping to it) and use Get-SmbConnection.
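    For example, from a client that already has a connection to the file server (the server and share names below are placeholders):
    # Touch the share first, then check the negotiated dialect
    dir \\FileServer01\Share1 | Out-Null
    Get-SmbConnection | Select ServerName, ShareName, Dialect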
    To enable and disable SMBv1, SMBv2, and SMBv3 in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012, please follow the steps in the article below.
    Warning: We do not recommend that you disable SMBv2 or SMBv3. Disable SMBv2 or SMBv3 only as a temporary troubleshooting measure. Do not leave SMBv2 or SMBv3 disabled.
    http://support.microsoft.com/kb/2696547
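    On Windows 8 / Windows Server 2012 and later, the server-side switches described in that article map to the SMB cmdlets; a small sketch (heed the warning above and leave SMBv2/v3 enabled):
    # Check which protocol versions the SMB server component has enabled
    Get-SmbServerConfiguration | Select EnableSMB1Protocol, EnableSMB2Protocol
    # Disable SMB1 where it is not needed (SMB2 and SMB3 share the EnableSMB2Protocol switch)
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force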
    1.3 Features and Capabilities
    (Thanks to Jose Barreto's blog.)
    Here’s a very short summary of what changed with each version of SMB:
    From SMB 1.0 to SMB 2.0 - The first major redesign of SMB
    Increased file sharing scalability
    Improved performance
    Request compounding
    Asynchronous operations
    Larger reads/writes
    More secure and robust
    Small command set
    Signing now uses HMAC SHA-256 instead of MD5
    SMB2 durability
    From SMB 2.0 to SMB 2.1
    File leasing improvements
    Large MTU support
    BranchCache
    From SMB 2.1 to SMB 3.0
    Availability
    SMB Transparent Failover
    SMB Witness
    SMB Multichannel
    Performance
    SMB Scale-Out
    SMB Direct (SMB 3.0 over RDMA)
    SMB Multichannel
    Directory Leasing
    BranchCache V2
    Backup
    VSS for Remote File Shares
    Security
    SMB Encryption using AES-CCM (Optional)
    Signing now uses AES-CMAC
    Management
    SMB PowerShell
    Improved Performance Counters
    Improved Eventing
    From SMB 3.0 to SMB 3.02
    Automatic rebalancing of Scale-Out File Server clients
    Improved performance of SMB Direct (SMB over RDMA)
    Support for multiple SMB instances on a Scale-Out File Server
    You can get additional details on the SMB 2.0 improvements listed above at
    http://blogs.technet.com/b/josebda/archive/2008/12/09/smb2-a-complete-redesign-of-the-main-remote-file-protocol-for-windows.aspx
    You can get additional details on the SMB 3.0 improvements listed above at
    http://blogs.technet.com/b/josebda/archive/2012/05/03/updated-links-on-windows-server-2012-file-server-and-smb-3-0.aspx
    You can get additional details on the SMB 3.02 improvements in Windows Server 2012 R2 at
    http://technet.microsoft.com/en-us/library/hh831474.aspx
    1.4 Related Registry Keys
    HKLM\SYSTEM\CurrentControlSet\Services\MrxSmb\Parameters\
    DeferredOpensEnabled – Indicates whether the Redirector can defer opens for certain cases where the file does not really need to be opened, such as for certain delete requests and adjusting file attributes.
    This defaults to true and is stored in the Redirector variable MRxSmbDeferredOpensEnabled.
    OplocksDisabled – Whether the Redirector should not request oplocks, this defaults to false (the Redirector will request oplocks) and is stored in the variable MrxSmbOplocksDisabled.
    CscEnabled – Whether Client Side Caching is enabled. This value defaults to true and stored in MRxSmbIsCscEnabled. It is used to determine whether to execute CSC operations when called. If CSC is enabled,
    several other parameters controlling CSC behavior are checked, such as CscEnabledDCON, CscEnableTransitionByDefault, and CscEnableAutoDial. CSC will be discussed in depth in its own module, so will be only mentioned in this module when it is necessary to understanding
    the operation of the Redirector.
    DisableShadowLoopback – Whether to disable the behavior of the Redirector getting a handle to loopback opens (opens on the same machine) so that it can shortcut the network path to the resource and
    just access local files locally. Shadow opens are enabled by default, and this registry value can be used to turn them off. It is stored in the global Redirector variable RxSmbDisableShadowLoopback.
    IgnoreBindingOrder – Controls whether the Redirector should use the binding order specified in the registry and controlled by the Network Connections UI, or ignore this order when choosing a transport
    provider to provide a connection to the server. By default the Redirector will ignore the binding order and can use any transport. The results of this setting are stored in the variable MRxSmbObeyBindingOrder.
    HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\
    Security Signature settings – The RequireSecuritySignature setting is stored in MRxSmbSecuritySignaturesRequired, EnableSecuritySignature in MRxSmbSecuritySignaturesEnabled, RequireExtendedSignature
    in MRxSmbExtendedSignaturesRequired, and EnableExtendedSignature in MRxSmbExtendedSignaturesEnabled. Note that the Extended Security Signatures assume the regular security signatures are enabled, so those settings are adjusted if necessary based on the extended
    settings. If extended signatures are required, regular signatures have to be required.
    EnablePlainTextPassword – Support for using plain text passwords can be turned on using this key. They are disabled by default.
    OffLineFileTimeoutIntervalInSeconds – Used to set the expiration time for timing out an Exchange (discussed later) when the exchange is accessing an offline file. This value defaults to 1000 seconds,
    but can be changed in the registry and is stored in the global Redirector variable OffLineFileTimeoutInterval
    SessTimeout – This is the amount of time the client waits for the server to respond to an outstanding request. The default value is 60 seconds (Windows Vista). When the client does not receive the
    response to a request before the Request Expiration Timer expires, it will reset the connection because the operation is considered blocked. In Windows 8, the request expiration timer for the SMB 2 Negotiate is set to a smaller value, typically under 20 seconds,
    so that if a node of a continuously available (CA) cluster server is not responding, the SMB 3.0 client can expedite failover to the other node.
    ExtendedSessTimeout – Stored in the ExtendedSessTimeoutInterval variable, this value is used to extend the timeout on exchanges for servers that require an extended session timeout as listed in the
    ServersWithExtendedSessTimeout key. These are third party servers that handle SMB sessions with different processes and vary dramatically on the time required to process SMB requests. The default value is 1000 seconds. If the client is running at least Windows
    7 and ExtendedSessTimeout is not configured (By Default), the timeout is extended to four times the value of SessTimeout (4 * SessTimeout).
    MaxNumOfExchangesForPipelineReadWrite – This value is used to determine the maximum number of write exchanges that can be pipelined to a server. The default is 8 and the value is stored in the variable
    MaxNumOfExchangesForPipelineReadWrite.
    Win9xSessionRestriction – This value defaults to false, but is used to impose a restriction on Windows 9x clients that they can only have one active non-NULL session with the server at a time. Also,
    existing session based connections (VNETROOTS) are scavenged immediately, without a timeout to allow them to be reused.
    EnableCachingOnWriteOnlyOpens – This value can cause the Redirector to attempt to open a file that is being opened for write only access in a manner that will enable the Redirector to cache the file
    data. If the open fails, the request will revert back to the original requested access. The value of this parameter defaults to false and is stored in the MRxSmbEnableCachingOnWriteOnlyOpens variable.
    DisableByteRangeLockingOnReadOnlyFiles – This parameter defaults to false, but if set to true will cause level II oplocks to automatically be upgraded to batch oplocks on read-only files opened for
    read only access. It is stored in the variable DisableByteRangeLockingOnReadOnlyFiles.
    EnableDownLevelLogOff – False by default, this value controls whether a Logoff SMB will be sent to down-level servers when a session is being closed. If this is false, and the server has not negotiated
    to the NT SMB dialect or does not support NT Status codes, the logoff will not be sent because we aren’t sure that server will understand the request. The value is stored in MrxSmbEnableDownLevelLogOff.
    HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\
    ResilientTimeout – This timer is started when the transport connection associated with a resilient handle is lost. It controls the amount of time the server keeps a resilient handle active after the
    transport connection to the client is lost. The default value is 300 seconds (Windows 7, Server 2008 R2, 8, Server 2012).
    DurableHandleV2TimeoutInSecond – This timer is started when the transport connection associated with a durable handle is lost. It controls the amount of time the server keeps a durable handle active
    after the transport connection to the client is lost. The default value is 60 seconds (Windows 8, Windows Server 2012). The maximum value is 300 seconds.
    HKLM\SYSTEM\CurrentControlSet\Services\SMBWitness\Parameters\
    KeepAliveInterval – This functionality was introduced for SMB 3.0 in Windows 8 and Windows Server 2012. The witness protocol is used to explicitly notify a client of resource changes that have occurred
    on a highly available cluster server. This enables faster recovery from unplanned failures, so that the client does not need to wait for TCP timeouts. The default value is 20 minutes (Windows 8, Windows Server 2012).
    HKLM\System\CurrentControlSet\Services\SmbDirect\Parameters\
    ConnectTimeoutInMs – Establish a connection and complete negotiation. ConnectTimeoutInMs is the deadline for the remote peer to accept the connection request and complete SMB Direct negotiation. Default
    is 120 seconds (Windows 8).
    AcceptTimeoutInMs – Accept negotiation: The SMB Direct Negotiate request should be received before AcceptTimeoutInMs expires. The server starts this timer as soon as it accepts the connection. Default
    is 5 seconds (Windows 8).
    IdleConnectionTimeoutInMs – This timer is per-connection. It is the amount of time the connection can be idle without receiving a message from the remote peer. Before the local peer terminates the
    connection, it sends a keep-alive request to the remote peer and applies a keep-alive timer. Default is 120 seconds (Windows 8).
    KeepaliveResponseTimeoutInMs – This attribute is per-connection. It defines the timeout to wait for the peer response for a keep-alive message on an idle RDMA connection. Default is 5 seconds (Windows
    8).
    CreditGrantTimeoutInMs – This timer is per-connection.  It regulates the amount of time that the local peer waits for the remote peer to grant Send credits before disconnecting the connection.
    This timer is started when the local peer runs out of Send credits. Default is 5 seconds (Windows 8).
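    If you need to check any of these values on a particular machine, they can be read and, where appropriate, set from PowerShell. A hedged sketch; a value that is absent simply means the built-in default listed above applies:
    # List the workstation (Redirector) parameters that are explicitly configured on this machine
    Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"
    # Example: set SessTimeout to 120 seconds (REG_DWORD, value in seconds)
    Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name SessTimeout -Value 120 -Type DWord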
    References:
    [MS-SMB]: Server Message Block (SMB) Protocol
    http://msdn.microsoft.com/en-us/library/cc246231.aspx
    [MS-SMB2]: Server Message Block (SMB) Protocol Versions 2 and 3
    http://msdn.microsoft.com/en-us/library/cc246482.aspx
    SMB 2.x and SMB 3.0 Timeouts in Windows
    http://blogs.msdn.com/b/openspecification/archive/2013/03/27/smb-2-x-and-smb-3-0-timeouts-in-windows.aspx

    3. How to Troubleshoot
    3.1 Troubleshooting Decision Tree
    1 - Is the slowness occurring when browsing a network shared folder, when copying a file, or both? Browsing: go to 1.1. Copying: go to 1.2. Both: go to 1.3.
    1.1 - Is the target a DFS path or not? Yes: go to 1.1.1. No: go to 1.1.2.
    1.1.1 - Is the client visiting the nearest DFS root server and file server? Yes: go to 1.1.1.1. No: go to 1.1.1.2.
    1.1.1.1 - Browse the corresponding (non-DFS) UNC path directly. Do you still experience the slowness? Yes: go to 1.1.1.1.1. No: go to 1.1.1.1.2.
    1.1.1.1.1 - The issue is that the particular file server responds to share folder enumeration requests slowly. Most probably it is unrelated to DFS. Follow 1.1.2.
    1.1.1.1.2 - The issue is that the client experiences delay when browsing the DFS path, but no delay when visiting the target file server directly. Capture a Network Monitor trace from the client and study how the DFS path is being resolved.
    1.1.1.2 - Use dfsutil.exe to clear the local domain and referral cache. Then visit the DFS path again and capture a Network Monitor trace from the client to study why the client goes to the wrong file server or DFS root server.
    1.1.2 - Not a DFS issue. The issue is that the particular file server responds to share folder enumeration requests slowly. Run "dir" against the same share folder from a Command Prompt. Is it slow? Yes: go to 1.1.2.1. No: go to 1.1.2.2.
    1.1.2.1 - Check the number of subfolders and files in that share folder. Is the number large? Yes: go to 1.1.2.1.1. No: go to 1.1.2.1.2.
    1.1.2.1.1 - Try to "dir" a different share folder on the same file server, but with fewer items. Is it still slow? Yes: go to 1.1.2.1.1.1. No: go to 1.1.2.1.1.2.
    1.1.2.1.1.1 - Probably a performance issue on the file server. Capture Network Monitor traces from both sides, plus Performance Monitor on the file server.
    1.1.2.1.1.2 - Probably a performance issue on the file server, particularly the disks. Capture Network Monitor traces from both sides, plus Performance Monitor on the file server.
    1.1.2.1.2 - Same as 1.1.2.1.1.1. Probably a performance issue on the file server. Capture Network Monitor traces from both sides, plus Performance Monitor on the file server.
    1.1.2.2 - Explorer.exe browses the share folder slowly while "dir" is fast. The issue lies in the particular SMB traffic generated by explorer.exe; it is a Shell issue.
    1.2 - Is the target a DFS path or not? Yes: go to 1.2.1. No: go to 1.2.2.
    1.2.1 - Is the client downloading/uploading against the nearest file server? Yes: go to 1.2.1.1. No: go to 1.2.1.2.
    1.2.1.1 - Try to download/upload against that file server using the non-DFS share path. Still slow? Yes: go to 1.2.1.1.1. No: go to 1.2.1.1.2.
    1.2.1.1.1 - Not a DFS issue. Capture Network Monitor traces from both sides to identify the pattern of the slowness.
    1.2.1.1.2 - This is unlikely to occur because the conclusion contradicts itself. Start from the beginning and double-check.
    1.2.1.2 - Same situation as 1.1.1.2. Use dfsutil.exe to clear the local domain and referral cache. Then visit the DFS path again and capture a Network Monitor trace from the client to study why the client goes to the wrong file server or DFS root server.
    1.2.2 - Same as 1.2.1.1.1. Not a DFS issue. Capture Network Monitor traces from both sides to identify the pattern of the slowness.
    1.3 - Follow 1.1 and then 1.2.
    3.2 Troubleshooting Tools
    Network Monitor or Message Analyzer
    Download
    http://www.microsoft.com/en-us/download/details.aspx?id=40308
    Blog
    http://blogs.technet.com/b/messageanalyzer/
    Microsoft Message Analyzer Operating Guide
    http://technet.microsoft.com/en-us/library/jj649776.aspx
    Performance Monitor
    http://technet.microsoft.com/en-us/library/cc749249.aspx
    DiskMon
    http://technet.microsoft.com/en-us/sysinternals/bb896646.aspx
    Process Monitor
    http://technet.microsoft.com/en-us/sysinternals/bb896645

  • 10Gb Networking best practices

    I'm looking for good guidance on Hyper-V 2012 R2 network configuration best practices for a converged server. Meaning, dual 10Gb NICs and using SMB 3.0 file shares for storage. The servers also have two 1Gb NICs. I'm very familiar with VMware, but ramping
    up on HV networking best practices.
    Blog: www.derekseaman.com, VMware vExpert 2012/2013

    Derek,
    I tried to draw my preferred setup for this network configuration.
    I would create a team with the two 1 GBit NICs and use it for Domain, DNS, Backup and any System Center agents.
    I would also team the two 10 GBit NICs and then assign the team to a Hyper-V switch for the VMs. In Windows Server 2012 it is possible to create vNICs for the management OS that use this Hyper-V switch (converged network design). I would create two vNICs, SMB1
    and SMB2, and use them for Cluster and Live Migration traffic with SMB Multichannel. If your storage system supports SMB Multichannel you can also use both as storage NICs (but this depends on which vendor you have).
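    A rough PowerShell sketch of that converged design; the team, switch and vNIC names are only examples, and the physical adapter names depend on your hardware:
    # Team the two 10 GBit NICs and bind a Hyper-V switch to the team
    New-NetLbfoTeam -Name "Team10G" -TeamMembers "10G-1","10G-2" -TeamingMode SwitchIndependent
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10G" -AllowManagementOS $false
    # Add two management-OS vNICs (SMB1/SMB2) for cluster and Live Migration traffic
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "ConvergedSwitch"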
    Hope this helps.
    Grüße/Regards Carsten Rachfahl | MVP Virtual Machine | MCT | MCITP | MCSA | CCA | Husband and Papa |
    www.hyper-v-server.de | First German Gold Virtualisation Kompetenz Partner ---- If my answer is helpful please mark it as answer or press the green arrow.

  • Shared nothing live migration over SMB. Poor performance

    Hi,
    I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
    256Gb RAM
    2*8C Intel E5-2680 CPUs
    Samsung 840 Pro 512GB SSDs running in RAID 1
    6* Intel X520 10G NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9
    The OS installation is pretty clean. Its Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6
    I have removed the NIC teams and vmSwitch/vNICs to simplify the problem-solving process. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
    The graphs are from 4 tests.
    Test 1 and 2 are nttcp-tests to establish that the network is working as expected.
    Test 3 is a shared nothing live migration of a live VM over SMB
    Test 4 is a storage migration of the same VM when shut down. The VMs is transferred using BITS over http.
    It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130MB/s (9Gb/s) using 4 threads. The disks can handle a lot more than 74MB/s, as proven by test 4.
    While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
    Any ideas?
    Test                          | Config                                             | Vmswitch | RSS | VMQ | Live Migration Config                 | Throughput (MB/s)
    NTtcp                         | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30            | No       | Yes | No  | N/A                                   | 500
    NTtcp                         | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30            | No       | Yes | No  | N/A                                   | 1130
    Shared nothing live migration | Online VM, 8GB disk, 2GB RAM. Migrated host 1 -> 2 | No       | Yes | No  | Kerberos, Use SMB, any available net  | 74
    Storage migration             | Offline VM, 8GB disk. Migrated host 1 -> 2         | No       | Yes | No  | Unencrypted BITS transfer             | 350

    Hi Per Kjellkvist,
    Please try changing the "advanced features" settings of Live Migrations in Hyper-V Settings and selecting "Compression" in the "performance options" area.
    Then run tests 3 and 4 again.
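    The same change can be made from PowerShell on both hosts; a minimal sketch using the built-in Hyper-V cmdlets in Server 2012 R2:
    # Switch the live migration performance option to Compression on this host
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
    # Verify the setting
    Get-VMHost | Select VirtualMachineMigrationPerformanceOption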
    Best Regards
    Elton Ji

  • Slow performance Storage pool.

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with an LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. Heavy problems with "bursts".
    Using the ARC 1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).
    Inserting an ARC 1882X RAID card increases speed by a factor of 5-10.
    Hence hardware RAID on the same hardware is 5-10 times faster!
    We noticed that the "Resource Monitor" becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV.
    JanV

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks
    that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally
    (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, on the prior configuration the performance of virtual disks was limited by some of the underlying disks having this poor performance. This may
    also have caused the unresponsiveness the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is - check the entire physical disk stack, under the configuration you are using for SS first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, and so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups to be presented to the OS (for
    16 disks across two controllers), and (b) the loss of a little capacity to RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx,
    which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14
    disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different
    sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
    (b) Use disks as similar sized as possible in the SS pool.
    This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there
    are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal
    sized) or JBOD (for different sized) before presenting them to SS. 
    It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6
    disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each.
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is
    still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize the virtual disk access across multiple 8-column groups that are on different physical disks, and that you need work around this by striping virtual disks together. Effectively you are creating a RAID50 - a Windows
    RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem like any advantage not to do this, as with the 8 column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column
    width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.
    (d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to
    54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.
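    A hedged sketch of suggestion (c) above with the standard Storage Spaces cmdlets; the pool names, disk selection and friendly names are placeholders, and the final stripe of the two resulting volumes is created in Disk Management as described:
    # Split the poolable disks into two pools of 8 (selection here is purely illustrative)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks[0..7]
    New-StoragePool -FriendlyName "Pool2" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks[8..15]
    # One 8-column parity virtual disk per pool; stripe the two volumes together afterwards
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Parity1" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize
    New-VirtualDisk -StoragePoolFriendlyName "Pool2" -FriendlyName "Parity2" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize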
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore
    card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note - for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec writes (better
    than the 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB of the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!

  • How to assign SMB storage to CSV in HV failover cluster?

    I have a Hyper-V Cluster that looks like this:
    Clustered-Hyper-V-Diagram
    2012 R2 Failover Cluster
    2 Hyper-V nodes
    iSCSI Disk Witness on isolated "Cluster Only" Network
    "Cluster and Client" Network with nic-team connectivity to 2012 R2 File Server
    Share configured using: server manager > file and storage services > shares > tasks > new share > SMB Share - Applications > my RAID 1 volume.
    My question is this: how do I configure a Cluster Shared Volume? How do I present the shared folder to the cluster?
    I can create/add VMs from Cluster Manager > Roles > Virtual Machines using \\SMB\Share for the location of the VHD... but how do I use a CSV with this config? Am I missing something?

    Right-click one of the disks that you assigned to the cluster as available storage.
    I don't yet have any disks assigned to the cluster as available storage.
    Just for grins, I added an 8GB iSCSI LUN and added it to a CSV:
    PS C:\> Get-ClusterResource
    Name                 State    OwnerGroup      ResourceType
    Cluster IP Address   Online   Cluster Group   IP Address
    Cluster Name         Online   Cluster Group   Network Name
    witness              Online   Cluster Group   Physical Disk
    PS C:\> Get-ClusterSharedVolume
    Name      State    Node
    test8Gb   Online   CLUSTERNODE01
    All well and good, but from what I've read elsewhere...
    SMB 3.0 storage on a 2012 file server can only be added to a Hyper-V CSV cluster by using the VMM component of System Center 2012. That is the only way to import an SMB 3 share for CSV-style storage usage.
    http://community.spiceworks.com/topic/439383-hyper-v-2012-and-smb-in-a-csv
    http://technet.microsoft.com/en-us/library/jj614620.aspx
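    For block storage (like the iSCSI LUN above), the available-storage-to-CSV step can also be scripted; a small sketch with a placeholder disk resource name (an SMB 3.0 share, as noted, is used directly via its UNC path rather than converted to a CSV):
    # List the disks currently sitting in the cluster's Available Storage group
    Get-ClusterGroup "Available Storage" | Get-ClusterResource
    # Promote one of them to a Cluster Shared Volume
    Add-ClusterSharedVolume -Name "Cluster Disk 2"    # example resource name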

  • Make the relationship in between multiple table storage's tables will affect the performance

    Hi,
    I'm going to develop a business application. The product ID needs to be a generic one, and it should be generated automatically as a unique ID (like an identity column in SQL), but it has to be generated in a formatted way,
    for example the ID would be "cityCode + areaCode + uniqueNumber". Here, cityCode and areaCode are maintained in separate tables. While generating the product ID, we look up the cityCode table and the areaCode table and then generate the unique
    number by merging the respective information.
    1) Will doing all this affect Azure Table Storage performance and the web application?
    2) Will making multiple relationships among multiple Table Storage tables decrease performance?

    Hello,
    When you say tables, are you referring to Azure Storage Tables or relational databases?
    Please note that Windows Azure tables do not function in the same manner as tables in a relational database, since they do not make use of relationships or have schemas.
    And if you are referring to relational databases, the latency in performance would depend on the logic used to generate the unique ID.
    You should be able to use the logic in an On-Prem SQL database and check for the latency.
    Regards,
    Malar.

  • Migration/Live Migration fails with shared storage (SMB), but not all the time

    I have 2 Hyper-V hosts using shared storage via SMB. I'm having intermittent issues with migrations failing. It'll either go through the motions of doing the whole move and then fail, and/or when I try to start it again I get the following:
    I've done a lot of reading on this and have done various things that have been recommended. For each Hyper-V host I've set up delegation via Kerberos, i.e. HV_Host1 has CIFS - HV_Host2; CIFS - SMB Server; MS Virtual System Migration Service - HV_Host2, and
    vice versa.
    On the actual SMB share on the shared storage server I added permissions to the folder for HV_Host1, HV_Host2, the storage server, Domain Admins and my user account, all with full access.
    In Group Policy, I've blocked inheritance for the OU containing the Hyper-V hosts, though I have manually added some of the Group Policy settings that I needed.
    Last thing, I also added Domain Admins to the local GP on each host for "Log on as a Service" and "Impersonate a client after authentication" as shown in this thread: http://social.technet.microsoft.com/forums/windowsserver/en-US/7e6b9b1b-e5b1-4e9e-a1f3-2ce72ea1e543/unable-to-migrate-create-vm-hyperv-cluster-2012-logon-failure-have-to-restart-the-hyperv
    I get these failed migrations regardless if I start them on the actual Host or via the admin console from Win8.
    At this point I'm not sure what else to check or the next step to take for this.

    Hi Granite,
    Please refer to the following article on building Hyper-V over SMB:
    http://technet.microsoft.com/en-us/library/jj134187.aspx#BKMK_Step3
    Best Regards
    Elton Ji
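    Beyond the article, it is worth confirming that the delegation and migration settings line up end to end. A minimal, hedged sketch of the equivalent PowerShell (HV_Host1/HV_Host2 come from the question; SMBServer and contoso.com are placeholders for the file server and domain):
    # On each Hyper-V host: enable migration and force Kerberos authentication
    PS C:\> Enable-VMMigration
    PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
    # Constrained delegation for HV_Host1 (run where the AD module is available; repeat with the host names swapped for HV_Host2)
    PS C:\> Get-ADComputer HV_Host1 | Set-ADObject -Add @{"msDS-AllowedToDelegateTo" = @(
                "cifs/HV_Host2", "cifs/HV_Host2.contoso.com",
                "cifs/SMBServer", "cifs/SMBServer.contoso.com",
                "Microsoft Virtual System Migration Service/HV_Host2",
                "Microsoft Virtual System Migration Service/HV_Host2.contoso.com")}
    # Reboot the hosts (or purge their Kerberos tickets) so the new delegation entries are picked up
    Intermittent failures like this often come down to one host holding stale Kerberos tickets, or to the delegation list missing one of the name forms (NetBIOS vs. FQDN), which is why both forms are listed above.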

  • Scale out SSAS server for better performance

    Hi,
    I have a SharePoint farm running the PerformancePoint service on a server where Analysis Services and Reporting Services are installed,
    and that server also holds the Analysis Services databases and cubes.
    There is also a WFE server where the Secure Store service runs.
    The farm consists of:
    1) one application server + domain controller
    2) two WFEs
    3) one SQL Server for SharePoint
    4) one SSAS server (Analysis Services databases + Reporting Services)
    How should I scale out my SSAS server for better performance?
    adil

    Just trying to get a definitive answer to this question: can we use a shared VHDX in a SOFS cluster that will be used to store VHDX files?
    We have a 2012 R2 RDS Solution and store the User Profile Disks (UPD) on a SOFS Cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS Cluster and wondered if we can use a shared VHDX instead of CSV as the storage that
    will then be used to store the UPDs (one VHDX file per user).
    Cheers for now
    Russell
    Sure you can do it. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
    This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
    The following table describes the physical host prerequisites.
    Cluster type: Scale-Out File Server
    Requirements:
    1) At least two servers that are running Windows Server 2012 R2.
    2) The servers must be members of the same Active Directory domain.
    3) The servers must meet the requirements for failover clustering. For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.
    4) The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
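    Going back to the shared VHDX question, here is a rough sketch of attaching one shared .vhdx to the two VMs that will form the guest SOFS cluster (the VM names, share path, and size are placeholders; the .vhdx itself has to live on a CSV or on a Scale-Out File Server share on the physical side):
    PS C:\> New-VHD -Path \\SOFS\VMStore\Shared\upd-cluster.vhdx -Fixed -SizeBytes 500GB
    PS C:\> Add-VMHardDiskDrive -VMName GuestSOFS01 -Path \\SOFS\VMStore\Shared\upd-cluster.vhdx -SupportPersistentReservations   # the switch is what marks the VHDX as shared
    PS C:\> Add-VMHardDiskDrive -VMName GuestSOFS02 -Path \\SOFS\VMStore\Shared\upd-cluster.vhdx -SupportPersistentReservations
    Inside the two guest VMs the shared .vhdx appears as ordinary shared block storage, so the guest cluster can bring it online as a CSV and expose the UPD share from there.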

  • Super Slow Performance with Xserve RAID / Promise Fibre

    Hey guys,
    I am experiencing a strange thing on both 10.5.8 and 10.7.5 servers (yes these are in the process of being retired). They are sharing out either an Xserve RAID or Promise Vtrak with OS X Server.
    We have the Fibre Channel card, the storage attaches to it, and the volumes appear mounted on the desktop.
    The problem we are having is that whenever OS X's file sharing is enabled these volumes run incredibly slowly! I did a clean install of OS X 10.7.5 on an entirely different machine and it was still bad. All of my tests show that when you turn off file sharing, both AFP and SMB, performance goes back to normal -- so what can we do here??? These are file servers, so that's pretty much the only thing that needs to work.
    Thanks,
    Andrew

    Slow/poor RAID performance can be down to a number of things, some of which are easier to check than others.
    Some easy things to check are:
    How much data is stored on the RAIDs?
    Performance drops off alarmingly once the RAID gets past about 85% full. For a 1TB RAID that means keeping at least 150GB (and preferably more like 250GB) of free disk space at all times, otherwise you will get the poor performance you're seeing. Even the difference between 85% and 86/87/88% full can be noticeable.
    How is the data stored on the RAIDs?
    Having thousands and thousands of files stored loosely on the RAID, or within one single folder, is not a good idea. The Finder is fairly inefficient at drawing the icons for all of those files when they are presented to network users accessing the share point(s), which comes across as very slow performance when the files are viewed over a network connection. One way to avoid this is to organise the data into logical folders/sub-folders and/or separate shares.
    Are there problems with the file/directory structure of the RAIDs themselves?
    Use Disk Utility (or, better still, DiskWarrior) to repair any directory issues that have built up over time on the storage disks. Note that permissions can only be repaired on a drive that carries an OS, so on the RAID volumes you are limited to repairing the directory structure. Don't forget to repair the system drive as well (which can also have its permissions repaired); the server will be offline while the OS drive is being repaired. Before doing any of this, make sure you have a current, verified backup of the server OS and of all data stored on the RAIDs; hopefully you are already backing up your data anyway.
    To understand Disk Utility's repair permissions feature please read Apple's support document here:
    About Disk Utility's Repair Disk Permissions feature - Apple Support
