X-Serve RAID unmounts on large file copy

Hi
We are running an Xserve RAID in a video production environment. The RAID is set up as RAID 5 with one hot spare on either side, and the two sides are combined with Apple Disk Utility as a RAID 0 set on our G5 so they appear as one logical volume.
I have noticed lately that large file copies from the RAID cause it to unmount and the copy to fail.
In the console, we see:
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 5 (External Bus Reset) for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: External Bus Reset for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is down for SCSI Domain = 1.
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 8 (Loop State Change) for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: FusionFC: Loop Initialization Packet for SCSI Domain = 1.
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is active for SCSI Domain = 1.
May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 6 (Rescan) for SCSI Domain = 1
May 4 00:23:37 Orobourus kernel[0]: AppleRAID::completeRAIDRequest - error 0xe00002ca detected for set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7), member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB, set byte offset = 1674060574720.
May 4 00:23:37 Orobourus kernel[0]: disk5: I/O error.
May 4 00:23:37 Orobourus kernel[0]: disk5: device is offline.
May 4 00:23:37 Orobourus kernel[0]: AppleRAID::recover() member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB from set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7) has been marked offline.
May 4 00:23:37 Orobourus kernel[0]: AppleRAID::restartSet - restarting set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7).
May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
May 4 00:23:37 Orobourus kernel[0]: jnl: dojnlio: strategy err 0x6
May 4 00:23:37 Orobourus kernel[0]: jnl: end_transaction: only wrote 0 of 12288 bytes to the journal!
May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
May 4 00:23:37 Orobourus kernel[0]: jnl: close: journal 0x7b319ec, is invalid. aborting outstanding transactions
Following such an ungainly dismount, the only way to get the RAID back online is to reboot the G5 - attempting to mount the volume via Disk Utility doesn't work.
This looks similar to another 'RAID volume unmounts' thread I found elsewhere in the forum - it appeared to have no solution apart from contacting Apple.
Is this the only recourse?
G5 2.7 GHz, 6.5 GB RAM + Xserve RAID 3.5 TB   Mac OS X (10.4.9)   Decklink Extreme / HD

There should be no need to restart the controllers to get additional reporting -- updating the firmware restarts them.
If blocks are found to be bad during reconditioning, then the data in the blocks should be relocated, and the "bad" blocks mapped out as bad. The Xserve RAID has no idea whatsoever what constitutes a "file," given it doesn't know anything about the OS (you can format the volumes as HFS+, Xsan, ZFS, ext3, ReiserFS, whatever... this is done from the host and the Xserve RAID is completely unaware of this). So if the blocks can be successfully relocated (shouldn't be too hard, given the data can be generated from parity), then you're good to go. If somehow you have bad blocks in multiple locations on multiple spindles that overlap, then that would of course be bad. I'd expect that to be fairly rare though.
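To make the parity point concrete, here is a tiny Python sketch of the general RAID 5 idea (this is not Apple's firmware, just an illustration): parity is the XOR of the data blocks in a stripe, so any single missing block can be regenerated by XORing the parity with the surviving blocks.

    # Illustrative RAID 5 parity sketch -- not the Xserve RAID's actual code.
    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(blocks):
        parity = bytes(len(blocks[0]))            # start with all zeros
        for block in blocks:
            parity = xor_blocks(parity, block)
        return parity

    def rebuild_missing(surviving_blocks, parity):
        rebuilt = parity                          # XOR of parity and survivors
        for block in surviving_blocks:            # equals the missing block
            rebuilt = xor_blocks(rebuilt, block)
        return rebuilt

    if __name__ == "__main__":
        data = [b"AAAA", b"BBBB", b"CCCC"]        # three data blocks in a stripe
        parity = make_parity(data)
        recovered = rebuild_missing([data[0], data[2]], parity)  # "lose" block 2
        assert recovered == data[1]
        print("rebuilt block:", recovered)

The same principle is why a single offline member can normally be rebuilt onto a hot spare without data loss.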
One thing I should note is that drives don't last forever. As the RAID ages, the chances of drive failure increase. Typically drive lifespan follows a "bathtub curve," as if you were looking at the cross-section of a claw-foot tub. Failures will be very high early on (due to manufacturing defects -- something that "burn-in" testing should catch, and I believe Apple does this for you), then very low for a while, and then will rise markedly as drives get "old." Exactly where that point is would be a subject for debate, but typical data center folks start proactively replacing entire storage systems at some point to avoid the chance of multiple drive failures. The Xserve RAIDs have only been out for 4 years, so I wouldn't anticipate this yet -- but if it were me I'd probably start thinking about doing this after 4 or 5 years. Few experienced IT data center folks would feel comfortable letting spindles run 24x7x365 for 6 years... even if 95% of the time it would be okay.

Similar Messages

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed, and the second is that there are periodic outages where file open/close/save operations may appear to hang, and also where programs that rely on
    network connectivity (e.g. email) appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait the program will respond, but the wait period can be up to 1 minute.
    The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage
    Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PCs and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time would hang when the move operation was started and it would take at least a minute for it to start responding.
    Ordinarily we work with many files.
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these copies of a large file?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
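    If it helps, here is a rough way to log those counters while a copy runs, instead of watching Resource Monitor live. This is only a sketch and assumes Python with the third-party psutil package is available on the server:

        # Sample memory, disk and network counters every 5 seconds during a copy.
        import time
        import psutil   # third-party package: pip install psutil

        prev_disk = psutil.disk_io_counters()
        prev_net = psutil.net_io_counters()

        for _ in range(60):                      # roughly five minutes of samples
            time.sleep(5)
            mem = psutil.virtual_memory()
            disk = psutil.disk_io_counters()
            net = psutil.net_io_counters()
            print("mem %d%%  disk write %.1f MB/s  net sent %.1f MB/s" % (
                mem.percent,
                (disk.write_bytes - prev_disk.write_bytes) / 5 / 2**20,
                (net.bytes_sent - prev_net.bytes_sent) / 5 / 2**20,
            ))
            prev_disk, prev_net = disk, net

    Run it while the large copy is in progress; the pattern (RAM climbing, disk saturation, or network saturation) usually points at the bottleneck.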
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out, with a lot of hard faults (varying between several hundred and several thousand) recorded during normal usage. Disk and CPU seem normal.
    I'm going to replace the RAM and double it to 12 GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

  • Qosmio X500-148 - Large file copy hangs

    Large file copies (~5-10 GB) to or from USB or FireWire disks hang the PC. The copy goes up to 50%, then slows down, and there is no possibility of doing anything, including cancelling the task or opening the browser, Windows Explorer, or Task Manager.
    Event Viewer does not show anything strange.
    This happens only with Windows 7 64-bit - no problem with Windows 7 32-bit (same external hardware). There is no problem when copying between internal PC disks.
    The PC is very powerful - Qosmio X500-148

    The external Hardware is:
    1.5 TB WD Hard disk USB
    1.0 TB WD + 250GB Maxtor + 250GB Maxtor on Firewire
    I have used the standard copy feature (copy and paste) as well as
    ViceVersa Pro for folder sync, with the same results.
    Please note that the same external configuration ran properly on a single-core PC (a Satellite) running Win7 x86 without problems. Since I moved to Win7 x64 on my brand new X500-148, I have had a great deal of copying problems.
    I have installed all Windows updates, and Event Viewer doesn't show anything strange when copying.

  • Large file copy to iSCSI drive fills all memory until server stalls.

    I am having the file copy issues that people have been having with various versions of Server now for years, as can be read in the forums. I am having this issue on Server 2012 Std., using Hyper-V.
    When a large file is copied to an iSCSI drive, the file is copied into memory faster than it can be sent over the network. It fills all available memory until the server, which is a VM host, pretty much stalls, and all the VMs stall with it. This
    continues until the file copy is finished or stopped; then the memory is gradually released as the data is sent over the network.
    This issue was happening on both send and receive. I changed the registry setting for Large Cache to disable it, and now I can receive large files from the iSCSI. They now take an additional 1 GB of memory, which sits there until the file copy is finished.
    I have tried all the NIC and disk settings as can be found in the forums around the internet that people have posted in regard to this issue.
    To describe in a little more detail: when receiving a file from iSCSI, the file copy window shows a speed of around 60-80 MB/sec, which is wire speed. When sending a file to iSCSI, the file copy window shows a speed of 150 MB/sec, which is actually the
    speed at which it is being written to memory. The NIC counter in Task Manager instead shows the actual network speed, which is about half of that. The difference is the rate at which memory fills until it is full.
    This also happens when using Window Server Backup. It freezes up the VM Host and Guests while the host backup is running because of this issue. It does cause some software issues.
    The problem does not happen inside the Guests. I can transfer files to a different LUN on the same iSCSI, which uses the same NIC as the Host with no issue.
    Does anyone know if the fix has been found for this? All forum posts I have found for this have closed with no definite resolution found.
    Thanks for your help.
    KTSaved

    Hi,
    Sorry if it caused confusion, but by "by design" I mean "by design it will use memory when copying files via the network".
    In Windows 2000/2003, the following keys could help control the memory usage:
    LargeSystemCache (0 or 1) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    Size (1, 2 or 3) in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    I have seen threads mentioning that this does not work on later systems such as Windows 2008 R2.
    For Windows 2008 R2 and Windows 2008, there is a service named Microsoft Windows Dynamic Cache Service which addressed this issue:
    https://www.microsoft.com/en-us/download/details.aspx?id=9258
    However I searched and there is no update version for Windows 2012 and 2012 R2.
    I also noticed that the following command could help control the memory usage. With value = 1, NTFS uses the default amount of paged-pool memory:
    fsutil behavior set memoryusage 1
    You need a reboot after changing the value. 
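    If you want to see what those values are currently set to before changing anything, here is a small read-only sketch (assuming Python is available on the server; the key paths are the ones listed above):

        # Read the cache-related registry values mentioned above (read-only).
        import winreg

        KEYS = [
            (r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
             "LargeSystemCache"),
            (r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
             "Size"),
        ]

        for path, name in KEYS:
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                    value, _ = winreg.QueryValueEx(key, name)
                    print(path + "\\" + name, "=", value)
            except FileNotFoundError:
                print(path + "\\" + name, "is not set (Windows default in effect)")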

  • Large file copy fails through 4240 sensor

    Customer attempts to copy a large file from a server in an IPS-protected VLAN to a host in an unprotected VLAN, and the copy fails if the file is larger than about 2 GB. If the server is moved to the unprotected VLAN the copy succeeds. There are no events on the IPS suggesting any blocking or other action.

    The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 has 300 Mbit/s of throughput, but as I understood it, traffic above that would still be serviced and would simply bypass the inspection process; maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset. I may try a sniffer.
    I do have TAC involved, but I like to draw on the knowledge of other expert users like yourself to try to resolve issues. Thanks for your help. If you have any other comments please let me know; I will certainly post my findings if you are interested.

  • Windows 7 64-bit Corrupting (Altering) Large Files Copied to External NTFS Drives

    http://social.technet.microsoft.com/Forums/en-US/w7itproperf/thread/13a7426e-1a5d-41b0-9e16-19437697f62b/
    Continuing from this thread: I have the same problems. The corrupted files are only archives (zip or 7z ...) and exe files; there are no problems copying large files like movies, for example, and no problem when copying via Linux on the same laptop. It is a Windows issue. I
    have all updates installed, nothing missing.

    OK, let's be brief.
    This problem has been annoying me for years. It is totally reproducible, although random. It happens when copying to external drives (mainly USB) when they are configured for "safe removal". I have had issues copying to NTFS and FAT32 partitions. I have had issues
    using 4 different computers, from 7 years old to brand new ones, using AMD or Intel chipsets and totally different USB controllers, and using many different USB sticks, hard disks, etc. The only common thing in those computers is Windows 7 x64 and the external
    drives being optimized for "safe removal". Installing Teracopy reduces the chances of data corruption, but does not eliminate them completely. The only real workaround (tested for 2 years) is activating the write cache in the Device Manager properties of the
    drive. That way, Windows uses the same transfer mechanisms as for the internal drives, and everything is OK.
    MICROSOFT guys, there is a BIG BUG in the Windows 7 x64 external-drive data transfer mechanism. There is a bug in the cache handling of the safe-removal function. Nobody listens; I've been talking about this in forums for years. It is a very dangerous bug
    because it is silent, and many non-professional people are experiencing random errors in their backup data. PLEASE, INVESTIGATE THIS. YOU NEED TO FIX SUCH AN IMPORTANT BUG. IT IS UNBELIEVABLE THAT IT IS STILL THERE SINCE 2009!!!
    Hope this helps.

  • Large file copy fails

    I am trying to move a 60 GB folder from a NAS to a local USB drive, and regardless of how many times I try, it fails within the first few minutes.
    I'm on a managed Cisco Gigabit Ethernet switch in a commercial building, and I have hundreds of users having no problems with OS X 10.6, Windows XP and Windows 7, but my Yosemite system is not able to do this unless I boot into my OS X 10.6 partition.
    Reconfiguring the switch is not a viable option; I can't change things on a switch that would jeopardize hundreds of users to fix one thing on a Mac that is testing the legitimacy of OS X 10.10 in a corporate setting.

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer.  Files are usually 1-2 GB and I have to transfer about 150 GB of them.  The files are stored on one external 500 GB HD and are being transferred to another identical 500 GB HD over FireWire.  I also have another external FireWire drive from which I transfer about 20-30 GB of the same type of 1-2 GB files to the internal HD of my laptop over FireWire.  If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer and then fails.  If I use cp in the terminal, the transfer is fine.  When I was still distro hopping and using Fedora 10 with KDE 4.2.0 I had this same problem.  When I use GNOME this problem is non-existent.  I do this often for work, so it is a very important function for me.  All drives are FAT32 and there is no option to change them, as they are used on several different machines/OSes before all is said and done, and the only file system that all of the machines will read is FAT32 (thanks to one machine, of course).  In many cases time is very important for the transfer, and that is why I prefer to do the transfer in a desktop environment, so I can see progress and an ETA.  This is a huge deal breaker for KDE and I would like to fix it.  Any help is greatly appreciated, and please don't reply with just "use Gnome".

    You can use any other file manager under KDE that works, you know? Just disable Nautilus taking command of your desktop and you should be fine with it.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't some file synchronisation tool work for this task too? (Someone with more knowledge please tell me if this would be a bad idea.)
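    If the real requirement is just progress and an ETA without going through Dolphin, one stopgap is a small chunked-copy script; a rough sketch (the source and destination are whatever you pass on the command line):

        # Copy one file in chunks and print progress and a rough ETA.
        import os
        import sys
        import time

        def copy_with_progress(src, dst, chunk=8 * 1024 * 1024):
            total = os.path.getsize(src) or 1
            done = 0
            start = time.time()
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                while True:
                    buf = fin.read(chunk)
                    if not buf:
                        break
                    fout.write(buf)
                    done += len(buf)
                    rate = done / max(time.time() - start, 0.001)
                    eta = (total - done) / max(rate, 1.0)
                    sys.stdout.write("\r%3d%%  %.1f MB/s  ETA %4ds"
                                     % (100 * done // total, rate / 2**20, eta))
                    sys.stdout.flush()
            print()

        if __name__ == "__main__":
            copy_with_progress(sys.argv[1], sys.argv[2])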

  • Help with large file copy to WKS

    I need to copy a folder and all of its contents from
    \apps\reallybigfolder to each workstation at c:\putithere.
    Because of the size of the file copy, I chose to create an app object
    with logic that says if xxx.exe exists then do not push. Basically, I
    don't want this thing to just run constantly. I need it to run one time
    to each workstation on the network when the users log in.
    I have tried pointing the app object to \public\ncopy.exe and then
    setting the parameters the way I want them, but I keep getting an
    error: "Cannot load VDM IPX/SPX Support". The two files in the folder
    are copied, but the subfolders are not. I have tried using the /s/e
    switches, but it does not help.
    I have also tried writing a .bat file, to test it - but I get the same
    results as above. So next I tried using copy instead of ncopy. I do not
    get the error message, but it still does not copy any of the subfolders.
    Is there another way? An easier way? I really appreciate the help.
    Tony

    What you are doing should work.
    It sounds as if there are some other workstation issues going on.
    I don't think I have seen, or could make, the error "Cannot load VDM IPX/SPX
    Support" happen if I tried. Perhaps this would happen without a Novell
    Client installed. In such a case you could use XCOPY or ROBOCOPY.
    (Robocopy is way cooler than XCOPY and is free from MS.)
    You can also use the "Distribution Tab" and the "Files" section to copy
    entire directories. Just use *.* as the source.
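    If scripting is an option, the "copy once per workstation" logic from the original post can also be expressed directly; a rough sketch (the UNC path and marker file below are assumptions standing in for the real ones):

        # Copy the folder, including subfolders, only if the marker file
        # is not already on the workstation. Paths below are placeholders.
        import os
        import shutil

        SOURCE = r"\\server\apps\reallybigfolder"   # assumed UNC form of \apps\reallybigfolder
        DEST = r"c:\putithere"
        MARKER = os.path.join(DEST, "xxx.exe")      # "if xxx.exe exists then do not push"

        if not os.path.exists(MARKER):
            shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)   # needs Python 3.8+
            print("copied", SOURCE, "->", DEST)
        else:
            print("marker found, skipping copy")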
    Craig Wilson
    Novell Product Support Forum Sysop
    Master CNE, MCSE 2003, CCN

  • Slow Files Copy File Server DFS Namespace

    I have two file servers running as VMs; the two servers are on different physical hosts.
    Both are connected with a DFS namespace.
    The problem is that the two servers never have the same copy speed.
    Sometimes file copies are very slow (about 1 MBps) on FS01 and fast (12 MBps) on FS02.
    Sometimes it is fast on FS01 and slow on FS02.
    Sometimes both of them are slow.
    So, as usual, I rebooted the servers. That didn't work.
    Then I rebooted DC01; that also didn't work. There is a second domain controller, DC02.
    After I rebooted DC02, one of the file servers returned to normal and the other was still slow.
    It hits FS01 and FS02 randomly; they never run at full speed together.
    Users never complain about the slow file server because 1 MBps is acceptable for opening Word, Excel, etc.
    The HUGE problem is that I don't get a backup on the days a file server is slow.
    The problem has been going on for two weeks; I'm giving up on fixing it myself and need help from you experts.
    Thanks!
    DC01, DC02, FS01, FS02 (Win 2012 and All VMs)

    Hi,
    Since the slow copy also occurred when you tried a direct copy from both shared folders, you could enable the disk write cache on the destination server and check the results.
    HOW TO: Manually Enable/Disable Disk Write Caching
    http://support.microsoft.com/kb/259716
    Windows 2008 R2 - large file copy uses all available memory and then tranfer rate decreases dramatically (20x)
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3f8a80fd-914b-4fe7-8c93-b06787b03662/windows-2008-r2-large-file-copy-uses-all-available-memory-and-then-tranfer-rate-decreases?forum=winservergen
    You could also refer to the FAQ article to troubleshoot the slow copy issue:
    [Forum FAQ] Troubleshooting Network File Copy Slowness
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7bd9978c-69b4-42bf-90cd-fc7541ccb663/forum-faq-troubleshooting-network-file-copy-slowness?forum=winserverPN
    Best Regards,
    Mandy 

  • Server for NFS deletes the partially copied file after failover

    Hello, 
    I found some strange behaviour in Server for NFS in a Microsoft failover cluster.
    I have a failover cluster of two nodes (Windows Server 2012 R2) and the Server for NFS role is installed on both. I mounted the export from a client (Windows Server 2012 R2) where the Client for NFS role is installed,
    and started copying a large file from the client's local drive to the mounted export.
    During this copy operation I switched the export to the second node (moved it using the Failover Cluster Manager wizard). What I found is that the copy is interrupted with the
    error message "there is a problem accessing the file path", and on retry the copy starts all over again; I mean, if 50% was copied previously, then
    after the retry it starts from 0%.
    I watched the file system events on the second node with Procmon, and I found that after the disk gets attached to the second node, a delete operation is called for this file.
    What I need to know is why this delete is being called and who is calling it. In the stack of the delete operation I could see nfssvr.sys, but I am not sure what attribute of that file causes the
    delete to be issued.

    Hi,
    It takes us some time to create a failover cluster for testing.
    We tried to copy a large folder which contains many files, and a large VHD file we created.
    The result is the same - it did not restart the copy after failover. Thus, as you said, this is not caused by design, and there should be an issue on your side that causes the delete and the redo of the copy.
    Is there any application or specific network setting that could terminate a disconnected copy process? Currently I do not have much information about this.
    Edit: I noticed that yours is an NFS share, while in our test we used an SMB share. We will do the test again to see if there is any difference.

  • E1000 wifi dead whenever I copy a large file between 2 PCs

    Whenever I try to copy a large file (e.g. 200 MB) from one PC to another (both connected wirelessly to the Linksys E1000 router), the wifi on the router dies.  I can see the wifi indicator light on the router go off, and my PCs lose their connection (of course).
    I understand that the E1000 is a low-end router, so if it copies really slowly I can accept that.  But how can the wifi just die out like that?  Is there something I didn't set up properly?
    btw, this issue is present even after I upgraded to the latest firmware (2.1.02 build 5).
    Would greatly appreciate any advice.

    Hi,
    Thank you for your reply.
    1.  The problem is not due to the firmware upgrade.  The problem existed before the firmware upgrade and persists after it. But yes, I did power my router off and on again.
    2. I followed some instructions on this forum to change the following settings:
    Channel: 11
    MTU: 1340
    Beacon: 75
    Fragmentation Threshold: 2304
    RTS: 2304
    And strangely enough, it works now!
    3.  This morning, upon seeing your reply, I decided to do some investigation to see which setting did the trick. I modified each setting back to the default, one by one, and tested the large file copy each time I reverted something back to default.
    Surprisingly, the file copy operation was successful throughout the tests, even after reverting all settings back to default.
    So, what is the "problem" with this router? I had problems for a month with the default settings, and then suddenly all the problems disappear?
    Wai Kee

  • File Copy Dialog box hangs in mid-copy; Cancel does nothing.

    Hi all,
    I'm having the aforementioned issue fairly often with Windows 7.  It seems to happen when copying to network shares.  It is more than annoying--it is literally stopping production at our company.  I would really appreciate a realistic fix
    for this--that is, one that isn't the catch-all "Reinstall Windows".
    Let me further state that it has nothing to do with hardware--I can reliably copy the same files in XP.  It is a Windows 7/Explorer thing.
    It also seems to be size-related--but I haven't had the time to do scientific testing to verify this.  I know that if I'm copying an Access .mdb file from my development machine (Windows 7) to a client machine (mostly Windows XP) with a mapped drive,
    it seems to hang.  But if I compact/repair the .mdb file and bring it from, say, 25 MB to 12 MB, it seems to copy fine.
    Worse, when it hangs, I'll finally hit "Cancel" after a few minutes, and then all of Windows 7 locks up!!  I can't do anything until I Ctrl-Alt-Del and kill the Explorer.exe process, which wipes out my desktop, etc., until Explorer.exe auto-starts.
    This is such a huge problem that I  have to think there's a fix upcoming.
    Thanks,
    --Jim

    Hi,
    May I know how it works if you copy files between local drives? Please also boot the system to Safe Mode with Networking and check this issue.
    In addition, may I have more information about the machine that hosts the shared folder you mapped? Is it a Windows-based computer? Please also
    let us know its operating system.
    Based on my research, I would like to suggest the following:
    1. Try disabling Receive Side Scaling, Chimney Offload, and NetDMA support and see if it works:
       Information about the TCP Chimney Offload, Receive Side Scaling, and Network Direct Memory Access features in Windows Server 2008
    2. Also run the following commands in Windows 7:
       netsh interface tcp set global autotuninglevel=disabled
       netsh int ip set global taskoffload=disabled
    3. Try Robocopy:
       Get to Know Robocopy for More Powerful File Management
       For detailed information about the usage of the Robocopy command, please also refer to:
       Robocopy
    Meanwhile, I would like to share the following with you for your reference:
    Slow Large File Copy Issues
    Windows Explorer and SMB Traffic
    Hope this helps.
    Thanks.
    Nicholas Li - MSFT

  • Airport Extreme, Airdisk, and Vista - large file problem with a twist

    Hi all,
    I'm having a problem moving large-ish files to an external drive attached to my Airport Extreme SOMETIMES. Let me explain.
    My system: MacBook Pro, 4 GB RAM, 10 GB free HD space on the MacBook, running the latest updates for Mac and Vista; the external hard drive on the AE is an internal WD in an enclosure with 25 GB free (it's formatted for PC, but I've used it directly connected to multiple computers, Mac and PC, without fail); the AE is using firmware 7.3.2, and I'm only allowing 802.11n. The problem occurs on the Vista side - I haven't checked the Mac side yet.
    The Good - I have bit torrents set up, using uTorrent, to automatically copy files over to my AirDisk once they've completed. This works flawlessly. If I connect the hard drive directly to my laptop (MacBook running Boot Camp with Vista, all updates applied), large files copy over without a hitch as well.
    The Bad - For the past couple of weeks (could be longer, but I've only recently noticed it being a problem - is that a firmware clue?), if I download the files to my Vista desktop and copy them over manually, the copy just sits for a while and eventually fails with a "try again" error. If I try to copy any file over 300 MB, the same error occurs.
    What I've tried - well, not a lot. The first thing I did was to make sure my hard drive was error free and worked when physically connected - it is and it did. I've read a few posts about formatting the drive for Mac, but this really isn't a good option for me since all of my music is on this drive, and iTunes pulls from it (which also works without a problem). I do, however, get the hang in iTunes if I try to import large files (movies). I've also read about trying to go back to an earlier AE firmware, but the posts were outdated. I can try the Mac side and see if large files move over, but again, I prefer to do this in Windows Vista.
    This is my first post, so I'm sure I'm leaving out vital info. If anyone wants to take a stab at helping, I'd love to discuss. Thanks in advance.

    Hello,
    Just noticed the other day that I am having the same problem. I have two Vista machines attached to a Time Capsule with a Western Digital 500 GB drive attached via USB. I can write to the TC (any file size) with no problem; however, I cannot write larger files to my attached WD drive. I can write smaller folders of music and such, but I cannot back up larger video files. Yet, if I directly attach the drive to my laptop, I can copy over the files, no problem. I could not find any setting in the AirPort Utility with regard to file size limits or anything of the like. Any help on this would be much appreciated.

  • Mac OSX desktop dropping connection with multiple copy processes & large files

    The servers are 6.5 SP3 running NFAP, the MAC OSX is 10.4.2 updated. The
    volume the macs are using is part of a cluster. The users mount the volumes
    on their Macs and everything is for the most part fine. If they grab a bunch
    of files and copy them from desktop to server it's fine as long as it's only
    a single copy process. The users are part of the hi-res department and the
    files can be 1GB or larger. If they drag one or more large files, and then
    while that's copying they drag some more files, so both copy processes are
    running at once....quite often the volume will dismount from the desktop and
    you will get unable to copy because some resource is unavailable. Sometimes
    the Finder crashes, sometimes not. Often the files that were partially
    copied get locked and the users need to reboot their Mac in order to delete
    them. I'm getting pretty desperate here; does anyone have an idea what's going
    on. I don't know if this is a Tiger thing or a large file thing or a
    multiple copy stream thing, a netware thing or a mac thing.....we have
    hundreds of other users running OSX 10.3 and earlier who are not reporting
    this problem, but they also don't copy files that size. Someone please tell
    me they have seen this before....thanks very much. Oh, before going to 6.5
    and NFAP the servers were 5.1 with Prosoft server and they never had the
    problem.
    Jake

    Thanks for your help, I have incidents open now with Apple and Novell, I
    hope one of them can provide something for us. We tried applying 6.5 SP4 to
    a test server....the problem still happened but was "better", the copy
    operations still quit but with SP4 applied the volume did not dismount....or
    if it did it remounted automatically because it was still connected after
    OKing through the copy errors.
    "Jeffrey D Sessler" <[email protected]> wrote in message
    news:[email protected]...
    >I tried two 2GB files. No problems at all but I'm in a 100% end-to-end
    >Gigabit environment. My server storage is also a very-fast SAN.
    >
    > Best,
    > Jeff
    >
    >
    > "Jacob Shorr" <[email protected]> wrote in message
    > news:[email protected]...
    >> Jeffrey,
    >>
    >> Have you tried the exact same test, dragging say two 500MB files in
    >> seperate
    >> copy operations? I hear what you're saying about the 10/100 link, but we
    >> don't run gigabit to the desktops, and we're not going to anytime soon.
    >> Even if that could resolve the issue we need something kind of other fix
    >> for
    >> our infrastructure. I will look into any errors on the switch.
    >>
    >> "Jeffrey D Sessler" <[email protected]> wrote in message
    >> news:[email protected]...
    >>> Well, considering that I'm not seeing the issue on my 10.4.2 machines
    >>> against my 6.5Sp3 servers, I'm not sure what you should do at this
    >>> point.
    >>> Since you say that the 10.3 machines don't have an issue, it makes it
    >> sound
    >>> to me like this is an Apple issue.
    >>>
    >>> The logs point at a communication issue... Is there anyway to get that
    >>> Mac
    >>> on to a Gigabit connection to see if you can duplicate it?
    >>>
    >>> The other option is to wait for 10.4.3 to be released and see if the
    >> problem
    >>> goes away.
    >>>
    >>> Again, on only a 10/100 link, one copy of a large file _will_ saturate
    >>> the
    >>> link.Perhaps 10.4.2 has an issue with this?
    >>>
    >>> Also, when you're doing the copy, what to the error counters in the
    >> switches
    >>> say?
    >>>
    >>> Jeff
    >>>
    >>> "Jacob Shorr" <[email protected]> wrote in message
    >>> news:[email protected]...
    >>> > There are definitely no mis-matches. This has been checked and
    >> re-checked
    >>> > a
    >>> > dozen times. It's only on 10.4......we can replicate it on every 10.4
    >>> > machine, and we cannot replicate it on any machine that is 10.3. What
    >>> > should I do to go about getting this fixed, should I be contacting
    >>> > Apple
    >>> > or
    >>> > Novell? The speed is always good until it actually decides to drop
    >>> > and
    >>> > cut
    >>> > off.
    >>> >
    >>> >
    >>> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> > news:7jj%[email protected]...
    >>> >> Looks like communication between the Mac and the Netware server is
    >>> > dropping.
    >>> >> AFP in 10.3 and 10.4 support auto-reconnection but I'm sure that it
    >> will
    >>> >> fail the copy process.
    >>> >>
    >>> >> I'd first check to make sure that there are not any mis-matches on
    >>> >> the
    >>> >> switch e.g. the Mac is set to Auto (as it should be) but someone has
    >> set
    >>> > the
    >>> >> switch to a forced mode. Both should be auto. A duplex miss-match
    >>> >> could
    >>> >> cause the Mac not to see the heart beat back from the Novell server.
    >>> >>
    >>> >> Like I said, if the workstation is only on 10/100, a single copy
    >> process
    >>> > on
    >>> >> a G5 Mac will saturate that link. Adding more concurrent copies will
    >> only
    >>> >> result in everything slowing down and taking longer, or you'll get
    >>> >> the
    >>> >> dropped connections.
    >>> >>
    >>> >> Best,
    >>> >> Jeff
    >>> >>
    >>> >>
    >>> >> "Jacob Shorr" <[email protected]> wrote in message
    >>> >> news:Ybc%[email protected]...
    >>> >> > Take a look at the last entries in the system log right after it
    >>> > happened,
    >>> >> > let me know if it means anything to you. Thanks.
    >>> >> >
    >>> >> > Sep 29 13:26:10 yapostolides kernel[0]: AFP_VFS afpfs_mount:
    >>> >> > /Volumes/FP04SYS11, pid 210
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >> doing
    >>> >> > reconnect on /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > connect
    >>> >> > to
    >>> >> > the server /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Opening
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Logging
    >>> >> > in
    >>> >> > with uam 2 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > Restoring
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS
    >>> >> > afpfs_MountAFPVolume:
    >>> >> > GetVolParms failed 0x16
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > afpfs_MountAFPVolume failed 22 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > received
    >>> >> > VQ_DEAD event (32)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > posting
    >>> >> > to
    >>> >> > KEA to unmount /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > type
    >>> >> > 'afpfs', mounted on '/Volumes/FP04SYS11', from
    >>> >> > 'afp_0TQCV10QsPgy0TShVK000000-4340.2c000006', dead
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > found
    >> 1
    >>> >> > filesystem(s) with problem(s)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_unmount:
    >>> >> > /Volumes/FP04SYS11, flags 524288, pid 43
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> >> > news:GH%[email protected]...
    >>> >> >> We move large files all the time under SP3 with no issues however,
    >>> > there
    >>> >> > are
    >>> >> >> several finder/copy/afp issues in Tiger that are do to be fixed in
    >>> >> >> 10.4.3.
    >>> >> >>
    >>> >> >> Also, if you have any type of network issue such as duplex
    >> mis-matches
    >>> > or
    >>> >> >> are running say, only a 10/100 network, a single Mac can not only
    >>> >> >> transfer
    >>> >> >> more than 10MB/sec (filling the network pipe) or generate so many
    >>> >> > collisions
    >>> >> >> (duplex mis-match) that you could drop communication to the
    >>> >> >> server.
    >>> >> >>
    >>> >> >> What type of server (speed, disks, raid level, NIC speed) and what
    >>> >> >> type
    >>> >> >> of
    >>> >> >> network (switched gigabit, switched 10/100, shared 10/100, etc.)
    >>> >> >>
    >>> >> >> How long does it take to copy that single 1GB file to the server?
    >>> >> >>
    >>> >> >> Does a single copy process always work?
    >>> >> >>
    >>> >> >> Jeff
