SMB Performance

Hello all,
I recently installed security update 2010-004 on my Leopard server. Since then, SMB performance has been slow for new connections on my Windows clients. The log file is full of this kind of thing:
[2010/07/27 14:24:04, 0, pid=33279] /SourceCache/samba/samba-187.12/samba/source/lib/opendirectory.c:opendirectory_user_auth_and_sessionkey(679)
dsDoDirNodeAuthOnRecordType gave -14091 [eDSAuthMethodNotSupported]
[2010/07/27 14:24:04, 0, pid=33279] /SourceCache/samba/samba-187.12/samba/source/auth/auth_odsam.c:opendirectory_smb_pwd_check_ntlmv1(383)
opendirectory_user_auth_and_sessionkey gave -14091 [eDSAuthMethodNotSupported]
[2010/07/27 14:24:04, 1, pid=33279] /SourceCache/samba/samba-187.12/samba/source/smbd/service.c:make_connection_snum(1096)
*client name* (*IP address*) connect to service proofs initially as user username (uid=1038, gid=20) (pid 33279)
It looks like the lag is being caused by Samba trying to authenticate against Open Directory before falling back to local accounts. I don't use Open Directory on this server and I don't understand why this behaviour is suddenly happening.
Anyone have any ideas as to how I might rectify this?
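If anyone wants to check the same thing on their server, something along these lines shows which authentication methods smbd is configured to try and how often the Open Directory error is being hit. Treat it as a sketch; the log path is the usual Mac OS X Server location, so adjust it if your logs live elsewhere:
# Dump the effective Samba configuration and pick out the auth-related settings
testparm -s 2>/dev/null | grep -i -E "auth|passdb"
# Count how often the Open Directory failure appears in the smbd log
grep -c "eDSAuthMethodNotSupported" /var/log/samba/log.smbd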


Similar Messages

  • 10.6.4 update: poor SMB performance

    I have an iMac (5,1) and a MacBook Pro (3,1), both of which received the 10.6.4 update. Immediately following this update, SMB performance declined to around one-sixth of its previous speed on both machines.
    Connectivity from the two Macs is over a switched gigabit LAN to a server running FreeNAS 0.7.1. Copying two different 1GB files (created with dd; one from /dev/zero, the other from /dev/random) between the Mac and fileserver, or vice versa, results in a sustained data rate of 60-80Mbit/s; previously, 400-480Mbit/s was about normal.
    If I connect either of the two Macs directly to the server, I get exactly the same result so network infrastructure does not appear to be part of the equation. The PCs on this network that are connected to the same fileserver do not appear to be having any issues either, so it really is looking like 10.6.4 hit something related to SMB.
    Has anyone else run across this issue? I'm dealing with multigigabyte files all day, so having this kind of performance hit is not acceptable.
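    In case anyone wants to reproduce the numbers, the test boils down to this (the mount point is a placeholder for wherever the FreeNAS share gets mounted):
    # Create the two 1GB test files locally (BSD dd on the Mac wants a lowercase block size)
    dd if=/dev/zero of=zero.bin bs=1m count=1024
    dd if=/dev/random of=random.bin bs=1m count=1024
    # Time the copy onto the SMB mount and work the rate out from the elapsed time
    time cp zero.bin random.bin /Volumes/nas/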

    Just FYI: I pulled the smbd and nmbd binaries out of the 10.6.3 installer, and both report the same versions (Version 3.0.28a-apple) as their 10.6.4 counterparts. However, their md5sums are different; 'otool -L' is showing some differences in what each of the binaries was linked to:
    10.6.3 (smbd):
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 40.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.19.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1)
    /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration (compatibility version 1.0.0, current version 293.4.0)
    10.6.4 (smbd):
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 41.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.29.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
    /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration (compatibility version 1.0.0, current version 293.5.0)
    10.6.3 (nmbd):
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 40.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.19.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1)
    10.6.4 (nmbd):
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 41.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.29.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
    It's past 2am here right now, so I'm not going to go much further with this tonight. Tomorrow I'm going to try replacing just the /usr/sbin/nmbd binary with the one from 10.6.3 and see whether that fixes the problem. If not, I'll do the same thing with smbd, then start moving into the kexts if that doesn't go anywhere.
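    For reference, the comparison itself is nothing fancy; the /tmp/10.6.3 paths below are just placeholders for wherever the extracted 10.6.3 copies end up:
    md5 /usr/sbin/smbd /usr/sbin/nmbd
    md5 /tmp/10.6.3/usr/sbin/smbd /tmp/10.6.3/usr/sbin/nmbd
    otool -L /usr/sbin/smbd
    otool -L /tmp/10.6.3/usr/sbin/smbd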

  • Very Slow SMB Performance with DroboShare

    Hi there,
    We have a big issue with a DroboShare NAS device connected to a Drobo Drive (FAT32):
    If we connect to our DroboShare NAS directly via LAN, the performance with SMB is terrible. Browsing folders takes minutes until the contents (15 items) are shown, and opening and saving files takes minutes.
    If we connect the Drobo via USB to one of our Macs and then share it via SMB, it is fast as ****.
    The problem is that we cannot leave it this way, as we only have portable Macs, so if they are removed, nobody can access the Drobo.
    The problem nevertheless seems to be restricted to Macs: if we connect to the DroboShare with Linux or Windows boxes, everything is as fast as expected.
    Does anyone have similar issues with weak performance and/or know how this can be improved?
    Any help is greatly appreciated.
    Thanks,
    Simon

    I just ported netatalk (AFP) over to the DroboShare.
    http://code.google.com/p/drobocapsule/

  • Poor SMB performance

    I appear to have very bad performance on my Windows domain when logging in / logging out and when transferring files on the network, but I don't know if this problem is related to my OS X server or just a poor network.
    The network is fairly small, only about 10 workstations, and I seem to have no problems transferring files between workstations. However, connections to the OS X server are often slow and can be temperamental; users occasionally fail to log in unless I restart the SMB service on the server.
    Is it normal for me to expect less than 100% reliability for the Windows domain services on OS X vs. a Windows server? Or is there a problem somewhere which I should look at in more detail?

    Interesting. What were you copying from? I'm not quite sure where I stand relative to your before and after. Also, my gigabit network transfers are from Mac to PC. I made a note that implies I transferred about 7.154 gigabytes (as measured on my Dell) from my MBP to my Dell desktop in about 4 minutes, so my transfer rate was about 30 MB/s. This value is sort of in between your before and after. When you calculated your rate, did you allow for Apple's new GB measuring system, which is the way that drive manufacturers measure GBs? If not, then your after would be about 17 / 1.07 = 15.9. Either way, your after looks slow. I wish I had a before.
    Message was edited by: donv (The Ghost)
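    For anyone checking the arithmetic, roughly (assuming the Dell reports sizes in binary gigabytes):
    echo '7.154 * 1024 / 240' | bc -l    # about 30.5 MB/s for a 4-minute copy
    echo '17 / 1.074' | bc -l            # about 15.8, converting decimal GB back to binary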

  • Best way of making redundant AFP/SMB server

    We have bought two Xserve G5 systems in the hope of setting up a load-balanced pair that would provide redundant access to AFP/SMB file shares (plus web and email). We have an Xserve RAID box which we were going to attach to both servers.
    What have people done to get the redundancy we want? These are the options I have looked at:
    1) Put Xsan on the RAID, connect both servers to it and then reshare the connection. Reading up on Xsan, I see that it is not good for re-sharing via AFP/SMB. Also, it seems that I would have to set up everything on each machine separately, as the settings are stored locally.
    2) Split the jobs between the two servers, say email/web on one and AFP/SMB on the other. Then set up IP failover for each server to the other, so that if one server fails the other takes over its jobs. I presume I cannot simply use a single volume for this, as mounting the same volume on each of the servers will cause problems. Is that correct?
    3) Put all the services on one machine and set up IP failover to fall back to the second machine as required. This removes the issue of two machines accessing the RAID array at the same time.
    Any advice would be greatly appreciated.
    Thanks
    Ian
    iMac CoreDuo   Mac OS X (10.4.6)  

    This may help:
    About IP Failover
    First off, the IP Failover feature won't magically configure the backup server to do what the primary one was doing. In other words, with one server configured for file services and the other for web and email services, if the file server failed, the other one wouldn't just start up file services by itself!
    IP Failover is best used where one server is configured exactly as another. One server is used, and the backup server is not used until the first one fails. The backup server has a separate IP address and network connection that it uses to periodically poll the main server. If the main server leaves the network, then the backup server enables a second Ethernet interface which has been configured with the main server's IP address. (You may be able to multi-home one Ethernet connection, but I haven't tried that. Plus, each of your Xserves has two Ethernet ports, so that shouldn't be a problem.)
    Yes, but you are not getting good value for money out of that solution.
    Looking at IP Failover, it seems to work by having the secondary server monitor the primary; once the primary fails, the secondary activates that address and then enables the required services. This doesn't seem to preclude each server being the master for the other's secondary, as long as the services on each master are different.
    Let's say mail runs on server 1 and AFP runs on server 2. Server 2 acts as the secondary for server 1 and has a script that activates mail on server 2 when it detects failure. Similarly, if server 2 fails, server 1 will take over its IP address and then start up the AFP service. Obviously the services will have to be configured on both servers so that either can run them.
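    If it helps, here is a rough sketch of what such a failover script could look like, using serveradmin to bring up the services the failed peer normally runs. The /Library/IPFailover/<address> location and the PostAcq script name are from memory of Apple's High Availability guide, so double-check them before relying on this:
    #!/bin/sh
    # Hypothetical post-acquisition script on the mail server, saved as something like
    # /Library/IPFailover/10.0.1.10/PostAcq, where 10.0.1.10 is the file server's address.
    # It starts the file services the failed peer normally provides.
    /usr/sbin/serveradmin start afp
    /usr/sbin/serveradmin start smb
    # If the shared RAID volume isn't already attached, mount and check it here first.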
    Using the Xserve RAID
    If you plan on storing all of your data on the Xserve RAID, you need to know how "full" the RAID unit is going to be and how many RAID volumes you wish to create so that you can decide on how to connect it to each server.
    As we're migrating from an existing server to these, we have a very good idea of what the sizes and volumes should be. We have an Xserve RAID with four disks all on one side, set up as a RAID 5 array that has sufficient space for all our services. We can partition that down into multiple volumes for each server/service if needed.
    What I haven't got my mind around yet is how you control which server sees which volumes. Each server has to be able to access each volume (so it can run that service in the event of failure), but I think it's going to be an issue to have two servers access the same volume at once. Is that correct? Will two servers accessing the same volume at the same time cause trouble?
    Think of the Xserve RAID as two devices: each one has a fibre-channel port and holds seven hard drives. Each "group of seven" can be connected to any other device on the fibre-channel network. Without a fibre-channel switch, you can connect one or two other devices (one or two other Xserves) to the Xserve RAID.
    So you have these scenarios:
    Using 1-7 drives and creating one large RAID volume: you'll use one of the two RAID controllers on your Xserve RAID. You can connect the RAID to only one Xserve, or to a fibre-channel switch in order to connect it to more devices.
    Using 2-7 drives and creating two RAID volumes: you'll use both RAID controllers (say, 3 drives on one side and 3 on the other). Each RAID controller connects to another fibre-channel device; thus, without a switch, you can connect one to each server or both to the same one.
    Using 8 to 14 drives and creating one large RAID volume: you'll be filling up one RAID controller and all or part of the second, connecting both fibre-channel connectors to the same Xserve, unless you use a switch.
    Using 8 to 14 drives and creating two RAID volumes: you have the same options as with 2-7 drives.
    See http://www.apple.com/xserve/raid/deployment.html for some pictures, which may help.
    We have the fibre switch so each server can access all drives. I just don't know how to limit what sees what, or whether I need to do that.
    Xsan
    If you're just connecting the Xserve RAID to one or two servers and planning to reshare the volume(s) created on the Xserve RAID, then Xsan is not necessary, nor should it be used.
    But is it possible for two servers to use the same disk? I know it is with Xsan, but what about without it?
    I think what you want is the ability to use the Xserve RAID as a locally-connected volume on each server, where share points can be defined. The server is connected to the Xserve RAID using a fibre-channel cable, just as a FireWire, USB, or eSATA hard drive would be connected to the server. As far as the server is concerned, the Xserve RAID controller or controllers represent external hard disks. The difference is that the Xserve RAID also has Ethernet connectivity so that you can manage it without having to log into your server. Once connected, addressing using fibre-channel is automatic; the Xserve RAID gets a WWPN address and the RAID volume appears on the desktop of the server - no Xsan required at all.
    From there, it's perfectly safe to create share points and share them via AFP or SMB; clients would connect using AFP or SMB protocols with IP addressing over an Ethernet network to which both the Xserve and the client computers are connected. Even though the Xserve RAID may also have an Ethernet connection to that network, no read and write commands are sent via Ethernet to the RAID; the server sends those via its fibre-channel connection to the Xserve RAID as it would with any other drive when housing share points.
    As I say, it's fine for one machine at a time; what about two?
    For load balancing concerns, see Apple's File Services Guide for recommendations. On an Xserve G5 with 2GB or more of memory, you should easily accommodate 50 simultaneously connected users (a mixture of AFP and SMB) without a performance hit. Depending on what your users are doing (and the speed of the Ethernet network to which your clients and Xserve are connected), you may be able to handle more or fewer users. More RAM in the Xserve and a Gigabit Ethernet network for the server and clients would allow you to balance more simultaneous clients with less of a reduction in performance.
    Xsan works differently, and its requirements are somewhat different. With Xsan, your Xserve RAID is connected to a fibre-channel network. (In the previous direct-to-server example, the fibre-channel network consisted of just the Xserve and the Xserve RAID.) In this network, all clients, at least one Xserve, and the Xserve RAID are connected via a fibre-channel network. The Xserve has Xsan software installed on it and becomes a dedicated metadata controller. Clients must have special Xsan client software installed for the SAN volumes to appear. In such cases, the protocol used to mount the SAN volumes is the Xsan client protocol, not AFP or SMB. Although you technically can reshare an Xsan volume via AFP or SMB, performance would take a hit. Since the Xsan volume is writeable by all other clients on the fibre-channel network, adding an Xserve to reshare the SAN volume via AFP or SMB would allow clients to connect via Ethernet (Wi-Fi, etc.), but the server wouldn't have exclusive access to the SAN volume.
    To be honest, two Xserves and an Xserve RAID simply aren't enough to warrant a SAN in my book. Typically SANs are used where there are a large number of compute servers doing computation tasks, and they all need to have access to the same "local volume." They're also used in cases where clients need access to large amounts of storage set up to work like a "local disk" on a few video production computers.
    Just for comparison, Gigabit Ethernet offers throughput bursts of up to 125MB/s (megabytes per second), and Fibre Channel offers bursts of up to 200MB/s. (Apple's 400MB/s claim only somewhat makes sense if you're using both Xserve RAID controllers connected to the same server.) Even so, both media sustain around 50MB/s to 75MB/s, which is quite good. In fact, that matches local disk performance. The local serial ATA hard disks used in Power Mac G5 models are serial ATA 1.5Gb/s (gigaBITS per second), or 187.5MB/s at maximum burst; I typically see performance in the 40MB/s to 50MB/s range.
    Just for fun, here are throughput speeds of several common connectors, in megabytes per second:
    Serial ATA "3.0": bursts up to 375MB/s, sustains about 75MB/s
    LVD SCSI 320: bursts up to 320MB/s, sustains about 100MB/s
    Fibre Channel: bursts up to 200MB/s, sustains about 60MB/s
    Serial ATA "1.5": bursts up to 187.5MB/s, sustains about 50MB/s
    SCSI-3 LVD SCSI 160: bursts up to 160MB/s, sustains about 60MB/s
    Ultra ATA/133 (PATA): bursts up to 133MB/s, sustains about 50MB/s
    Gigabit Ethernet: bursts up to 125MB/s, sustains about 50MB/s
    FireWire 800: bursts up to 100MB/s, sustains about 50MB/s
    ATA/66 (PATA): bursts up to 66MB/s, sustains about 40MB/s
    USB 2.0: bursts up to 60MB/s, sustains about 30MB/s
    FireWire 400: bursts up to 50MB/s, sustains about 20MB/s
    10/100 "Fast" Ethernet: bursts up to 12.5MB/s
    Judging by your request, I think the "no Xsan" scenario is the one you want.
    --Gerrit
    I would love to take Xsan out of the picture, and if that means I can only use one of the servers at a time, fine, I can live with that. But I would rather have both active and working for me, even if I have to split the services between the systems.

  • 10.4.6 - delayed_ack = 3 ??

    For people familiar with changing the net.inet.tcp.delayed_ack setting from 1 (default) to 0 for better SMB performance, does anyone know what the new default value 3 corresponds to? I use SMB mounts constantly at work and so after the 10.4.6 install reboot I went to change it to 0 again and noticed that it reported changing from 3 to 0 instead of 1 to 0 as it has for the past many versions.

    I can confirm this issue.
    It appeared after updating to 10.4.6.
    The consequences of this issue are not clear to me, but they could be non-trivial for any networking utility that looks for or otherwise uses this setting. For example, IPNetTunerX will now report that net.inet.tcp.delayed_ack has a non-standard value.
    In my book this is a bug, and I have reported it to Apple.
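    For anyone else hunting for the knob, checking and setting it looks like this; the /etc/sysctl.conf line is the usual way to make it stick across reboots:
    sysctl net.inet.tcp.delayed_ack           # show the current value
    sudo sysctl -w net.inet.tcp.delayed_ack=0 # set it for the running system
    # To persist it, add this line to /etc/sysctl.conf:
    # net.inet.tcp.delayed_ack=0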

  • DirectAccess and Clustered Shares

    Hi all! I am seeing an issue with our Windows 8.1 DirectAccess clients that is confounding me. We deployed DirectAccess with Server 2012 R2 with the primary purpose of allowing our users access to several file shares on our internal network. One of these is the user's home directory share, which was previously made available using Offline Files. Other shares are departmental.
    Most users have both an H: and S: mapped drive, and the UNC paths to the shares are \\userdata and \\shared . These shares live on a Server 2008 R2 SP1 file server cluster. We do not have any issues using these internally.
    The symptoms of the issue are as follows:
    A user goes home and connects to DirectAccess and sometimes is not able to open the mapped H: and S: drives. Browsing directly to the UNC also fails. Pinging these shares by name fails as well. Event logs show some warnings regarding the NTP client, registering A records with DNS, etc. Running the DirectAccess Client Troubleshooter does not report any issues. The user can then manually disconnect from the intranet tunnel from the network connections interface, reconnect, and the issue is resolved. At this point the pings to the resource name also work again.
    I have also seen the problem resolve itself by simply waiting an inordinate amount of time - 20-30 minutes.
    When this issue appears, other connectivity over DA does not seem to be affected; in fact it only seems to affect the shares on this file server cluster. For instance, I can use RDP, update group policy and generally do anything I would normally do over DA except browse to the file shares.
    Has anyone encountered issues like this before? Any ideas about where I should begin looking to resolve this?
    Thanks!

    Hi Matt,
    Have you tried with the FQDN? It may be a DNS suffix issue.
    If that doesn't work, please try nslookup for further troubleshooting.
    Here is an article about how to troubleshoot name resolution issues with DirectAccess; it may be helpful:
    http://technet.microsoft.com/en-us/library/ee844142(v=ws.10).aspx
    If you can't resolve the name with the intranet DNS servers, please check whether there is a hotfix suitable for your situation:
    Recommended hotfixes and updates for Windows Server 2012 DirectAccess and Windows Server 2012 R2 DirectAccess
    http://support.microsoft.com/kb/2883952
    Best Regards.
    Steven Lee
    TechNet Community Support
    I am currently testing the FQDN angle. It doesn't really appear to be a name resolution issue on the surface, since the drives usually start working after a certain amount of time. I am beginning to think that this is an SMB performance issue at heart, since SMB over a WAN is known to have latency issues. Using FQDNs may improve this, though, since it will get rid of some of the chattiness native to SMB. I'll let you know if this ends up being a good solution.
    By the way, this is going to involve 3 changes for our users: I'll have to change the home directory in AD, a mapped drive in a login script, and the location of folder redirection. Hopefully the impact is minimal, but I have seen many issues with Folder Redirection policies not working because of logon optimization. We'll see...
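    A few client-side checks that may help separate name resolution from SMB trouble while the drives are unavailable (the FQDN below is a placeholder for your cluster name):
    nslookup userdata.yourdomain.local
    ping userdata.yourdomain.local
    rem Show the Name Resolution Policy Table actually in effect on the DirectAccess client
    netsh namespace show effectivepolicy
    rem List the state of the mapped drives
    net use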

  • SMB1/2 transfering extremely slow for small files. Sends file 6 times.

    Having an issue with transferring small files using SMB1 or SMB2. A single 100MB file will transfer faster than 100 files totaling 10KB.
    Doing a packet capture on the source computer shows each file being sent to the destination 6 times before completing. Each upload has a different FID, but the file path and name are the same. A large file does not do this.
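    In case it helps anyone reproduce this, the capture can be taken with the built-in tracer, no extra tools needed (the file path and size cap are arbitrary):
    rem Run on the source computer, reproduce the slow copy, then stop the trace
    netsh trace start capture=yes tracefile=C:\temp\smb-small-files.etl maxsize=200
    rem ... copy the folder of small files to the share here ...
    netsh trace stop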
     

    Hi,
    Please capture packets between the source computer and the destination and upload the two capture files.
    You could use Windows Live SkyDrive (http://www.skydrive.live.com/) to upload the files and then give us the download address.
    In the meantime, please refer to the article below to see if this is related to SMB performance:
    SMB/CIFS Performance Over WAN Links
    http://blogs.technet.com/b/neilcar/archive/2004/10/26/247903.aspx
    Regards,
    Mandy

  • Very Slow Windows Networking

    My department is running OS X 10.4, and we're networked with a Windows server for storage and job backup. One of our main backup servers is very slow. Logging into the server can take up to 5-7 minutes, and for a period of 5-7 minutes after that the machine logging into the server will run very slowly, even when not copying a file.
    Now, the backup server is quite large (almost a terabyte), which I believe is why it runs so slowly. We have two other servers on the same network that behave fine.
    Is there something on my end (OS X) that can speed things up, or is this a server issue? My boss is demanding answers and our Windows IT guy is placing the blame solely on our Macs (with no evidence, of course). Help!

    Google for "SMB performance". There are a bunch of discussions on macosxhints on how to tweak SMB performance on Max OS X. I have tried a few different ones, which all seem to help in some small way. YMMV.
    Apple SMB is not known for its performance.

  • Performance problems with DFSN, ABE and SMB

    Hello,
    We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
    Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
    The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
    We have only one AD-site and don't use DFS-Replication.
    Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
    Network is a LAN.
    Our DFSN looks like this:
    \\contoso.com\home
        Contains 10,000 links
        Drive mapping on clients to subfolder \\contoso.com\home\username
    \\contoso.com\group
        Contains 2,500 links
        Drive mapping on clients directly to \\contoso.com\group
    On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
    We have to use ABE so that users see only the links (folders) they can access.
    Sometimes, multiple times a day, we encounter enterprise-wide performance problems for 30 seconds when accessing our namespaces.
    After six weeks of researching and analyzing we were able to identify the exact problem.
    Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
    dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
    dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
    This is done a few times a day.
    There is no possibility to create the folder and set the permissions in one step.
    The DFSN process on our DFSN servers creates the new link and the corresponding folder in C:\DFSRoots.
    At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
    Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
    The file server process (LanmanServer) sends a change notification (SMB protocol) to each client with an active session to \\contoso.com\group.
    All the clients which receive the notification then start to refresh the folder listing of \\contoso.com\group.
    This was identified by a network trace on our DFSN servers and different clients.
    Due to ABE the servers have to compute the folder listing for each request.
    The DFS service on the servers doesn't respond for probably 30 seconds to any additional requests. CPU usage increases significantly over this period and goes back to normal afterwards; on our hardware, from about 5% to 50%.
    Users can't access all DFS-Namespaces during this time and applications using data from DFS-Namespace stop responding.
    Side effect: Windows reports on clients a slow-link detection for \\contoso.com\home, which can be offline available for users (described here for WAN-connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
    The problem doesn't occur when creating a link in \\contoso.com\home, because users only have mappings to subfolders.
    Currently, the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
    Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
    The problem also occurs with Windows Server 2012 R2 as the DFSN server.
    There is a registry key available for clients to avoid responding to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
    This might fix the problem with DFSN, but it results in other problems for the users. For example, they have to press F5 to refresh every remote directory after a change.
    Is there a possibility to disable the SMB change notification on server side ?
    TIA and regards,
    Ralf Gaudes

    Hi,
    Thanks for posting in Microsoft Technet Forums.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Regards.

  • Hyper-V over SMB 3.0 poor performance on 1GB NIC's without RDMA

    This is a bit of a repost as the last time I tried to troubleshoot this my question got hijacked by people spamming alternative solutions (starwind) 
    For my own reasons I am currently evaluating Hyper-V over SMB with a view to designing our new production cluster based on this technology.  Given our budget and resources a SoFS makes perfect sense.
    The problem I have is that in all my testing, as soon as I host a VM's files on a SMB 3.0 server (SoFS or standalone) I am not getting the performance I should over the network.  
    My testing so far:
    4 different decent-spec machines with 4-8GB RAM and dual/quad-core CPUs.
    Test machines are mostly Server 2012 R2, with one Windows 8.1 Hyper-V host thrown in for extra measure.
    Storage is a variety of HDs and SSDs, easily capable of handling >100MB/s of traffic and 5k+ IOPS.
    Have tested storage configurations as standalone, storage spaces (mirrored, spanned and with tiering).
    All storage is performing as expected in each configuration.
    Multiple 1Gb NICs from Broadcom, Intel and Atheros. The Broadcoms are server-grade dual-port adapters.
    Switching has been a combination of HP E5400zl, HP 2810 and even direct connection with crossover cables.
    Have tried standalone NICs, teamed NICs and even storage through the Hyper-V extensible switch.
    File copies between machines will easily max out the 1Gb link in any direction.
    VMs hosted locally show internal benchmark performance in line with roughly 90% of underlying storage performance.
    Tested with dynamic and fixed VHDXs.
    NICs have been used with combinations of RSS and TCP offload enabled/disabled.
    Whenever I host VM files on a different server from the one where the VM is running, I observe the following:
    Write speeds within the VM to any attached VHDs are severely affected and run at around 30-50% of the 1Gb link.
    Read speeds are not as badly affected, but just about manage to hit 70% of the 1Gb link.
    Random IOPS are not noticeably affected.
    Running multiple tests at the same time over the same 1Gb links results in the same total throughput.
    The same results are observed no matter which machine hosts the VM or the VHDX files.
    Any host involved in a test will show a healthy amount of CPU time allocated to hardware interrupts. On a 6-core 3.8GHz CPU this is around 5% of total; on the slowest machine (dual-core 2.4GHz) it is roughly 30% of CPU load.
    Things I have yet to test:
    Gen 1 VMs
    VMs running anything other than Server 2012 R2
    Running the tests on actual server hardware (hard, as most of ours are in production use)
    Is there a default QoS or IOPS limit when SMB detects Hyper-V traffic? I just can't wrap my head around how all the tests hit an identical bottleneck as soon as the storage traffic goes over SMB.
    What else should I be looking for? There must be something obvious that I am overlooking!
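    One thing that may be worth ruling out from PowerShell on the hosts is an SMB bandwidth limit or a multichannel problem; a quick sketch (Get-SmbBandwidthLimit only returns anything if the FS-SMBBW feature is installed):
    # Run on a Hyper-V host while a VM is running from the remote share
    Get-WindowsFeature FS-SMBBW                   # is the SMB Bandwidth Limit feature present?
    Get-SmbBandwidthLimit                         # any per-category caps (Default, VirtualMachine, LiveMigration)?
    Get-SmbMultichannelConnection                 # is multichannel actually in use on the 1Gb NICs?
    Get-SmbConnection | Format-List ServerName,ShareName,Dialect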

    By nature of a SoFS, reads are really good, but there is no write cache. A SoFS only seems to perform well with disk mirroring; this improves the write performance and redundancy but halves your disk capacity.
    Mirroring (RAID 1 or RAID 10) actually REDUCES the number of IOPS. With reads, every spindle takes part in I/O request processing (assuming the I/O is big enough to cover the stripe), so you multiply IOPS and MB/s by the number of spindles in the RAID group; but all writes need to go to the duplicated locations. That's why reads are fast and writes are slow (roughly half of the read performance). This is an absolutely basic thing, and a SoFS layered on top can do nothing to change it.
    StarWind iSCSI SAN & NAS
    Not wanting to put the cat amongst the pigeons, but this isn't strictly true: RAID 1 and 10 give you the best IOPS performance of any RAID group, which is why the best performing SQL clusters use RAID 10 for most of their storage requirements. For comparison:
    Feature | RAID 0 | RAID 1 | RAID 1E | RAID 5 | RAID 5EE
    Minimum # drives | 2 | 2 | 3 | 3 | 4
    Data protection | No protection | Single-drive failure | Single-drive failure | Single-drive failure | Single-drive failure
    Read performance | High | High | High | High | High
    Write performance | High | Medium | Medium | Low | Low
    Read performance (degraded) | N/A | Medium | High | Low | Low
    Write performance (degraded) | N/A | High | High | Low | Low
    Capacity utilization | 100% | 50% | 50% | 67%-94% | 50%-88%
    Typical applications | High-end workstations, data logging, real-time rendering, very transitory data | Operating system, transaction databases | Operating system, transaction databases | Data warehousing, web serving, archiving | Data warehousing, web serving, archiving

  • Is there a fix for terrible performance of file transfers using finder via CIFS and SMB?

    I am connecting to an EMC VNXe running Celera file server and am getting abysmal file transfer performance using both CIFS and SMB-  what gives? This seems to be a long-standing issue- any good work-arounds or solutions?
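    One experiment that sometimes narrows this down is mounting the share from Terminal instead of letting Finder do it, which takes some of Finder's metadata traffic out of the picture. Server, share and file names here are placeholders:
    mkdir -p ~/vnxe-test
    mount_smbfs //username@vnxe-server/share ~/vnxe-test
    time cp ~/vnxe-test/bigfile.bin ~/Desktop/    # compare against the same copy done through Finder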

    I rarely see cover art issues unless the cover art embedded in the file is poor to begin with.
    1000x1000 is probably overkill anyway (and if you had several 1000 large cover art images, could eat at AppleTV's memory) though I must admit I always try to get good hi-res images and don't like anything much below 500x500 or 600x600.  There's no hard and fast rule but even a few 300x300 images I have look reasonable at normal viewing distance.
    I wonder as you say if it might be an issue with iTunes for Windows as I'm on OS X.  Actually that reminds me of another issue I have.
    AC

  • Shared nothing live migration over SMB. Poor performance

    Hi,
    I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
    256GB RAM
    2 x 8-core Intel E5-2680 CPUs
    Samsung 840 Pro 512GB SSDs running in RAID 1
    6 x Intel X520 10G NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9
    The OS installation is pretty clean. It's Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6
    I have removed the NIC teams and vmSwitch/vNICs to simplify the problem-solving process. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
    The graphs are from 4 tests.
    Tests 1 and 2 are NTttcp tests to establish that the network is working as expected.
    Test 3 is a shared-nothing live migration of a live VM over SMB.
    Test 4 is a storage migration of the same VM when shut down. The VM is transferred using BITS over HTTP.
    It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130MB/s (9Gb/s) using 4 threads. The disks can handle a lot more than 74MB/s, as proven by test 4.
    While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
    Any ideas?
    Test | Config | Vmswitch | RSS | VMQ | Live Migration Config | Throughput (MB/s)
    NTtcp | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 500
    NTtcp | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 1130
    Shared nothing live migration | Online VM, 8GB disk, 2GB RAM. Migrated from host 1 -> host 2 | No | Yes | No | Kerberos, Use SMB, any available net | 74
    Storage migration | Offline VM, 8GB disk. Migrated from host 1 -> host 2 | No | Yes | No | Unencrypted BITS transfer | 350

    Hi Per Kjellkvist,
    Please try changing the "Advanced Features" settings of "Live Migrations" in Hyper-V Settings and select "Compression" in the "Performance options" area.
    Then run tests 3 and 4 again.
    Best Regards
    Elton Ji
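    For reference, the same option can be set from PowerShell on both hosts (this is the standard 2012 R2 Hyper-V module, but treat it as a sketch), then rerun tests 3 and 4:
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
    Get-VMHost | Format-List VirtualMachineMigrationPerformanceOption   # confirm the setting took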

  • Very poor SMB share performance on 3TB time capsule.

    Hi,
    Is the SMB share performance on the Time Capsule supposed to be very poor compared to the AFP shares?
    My TC will do around 9MB/s transfers using AFP, but on SMB it will do around 1.25MB/s (varying from a few KB/s to peaks of 4MB/s).
    I tested both protocols using an Ethernet cable connected from my MacBook Pro directly to the TC.
    This is a 3TB Time Capsule running firmware 7.6.1, and I did a factory reset before my tests.
    Time Machine backups were disabled during the tests.
    I need SMB for my WDTV media players.

    I would not say it is supposed to be, but you are probably correct that it is very poor. This issue crops up time and again, though generally not from people running Macs; it is Windows that is the problem.
    A few things to check.
    1. Are the names actually SMB-correct? i.e. short, no spaces, pure alphanumeric. Both the TC name and the wireless names, if you use wireless.
    2. Is the workgroup set correctly, i.e. WORKGROUP? This probably doesn't come up with the Mac but affects most PC connections and possibly the WD TV.
    3. How old is the TC? If it is an early-release Gen 4, you can take it back to 7.5.2 firmware. Frankly, much of the trouble was introduced with Lion, as they changed the SAMBA setup. I would test from the WDTV or a PC; if it doesn't glitch, you are doing better than the Mac can do, i.e. the reading on the Mac is pure Lion lunacy: https://discussions.apple.com/thread/3206668?start=0&tstart=0
    One of those Apple changes that shows they hardly test anything anymore. Do a Google search for "Lion SMB fix".

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP as well as used an rsize and wsize that doubled what I had above. It didn't help.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be down to limitations of the server settings, which could not be changed because it is an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.
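    For what it's worth, the same tuning can be tried per-mount from the command line instead of globally in /etc/nfs.conf (the server export path and mount point below are placeholders):
    sudo mkdir -p /Volumes/test
    sudo mount -t nfs -o rsize=32768,wsize=32768,vers=3 thecus:/raid/data/test /Volumes/test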

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).
