Large file copy fails through 4240 sensor

A customer attempts to copy a large file from a server in an IPS-protected VLAN to a host in an unprotected VLAN, and the copy fails if the file is greater than about 2 GB. If the server is moved to the unprotected VLAN the copy succeeds. There are no events on the IPS suggesting blocking or any other action.

The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 is rated for 300 Mbit/s of throughput, but as I understood it, traffic exceeding that would still be serviced and would simply bypass the inspection process. Maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset; I may try a sniffer.
I do have TAC involved, but I like to draw on the knowledge of other expert users like yourself to help rectify issues. Thanks for your help. If you have any other comments please let me know; I will certainly post my findings if you are interested.
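
The sniffer idea is a good next step: if the sensor (or anything else in the path) is killing the session, a capture on the receiving host should show an RST arriving mid-transfer. A minimal sketch, assuming the receiving host runs Windows 7/Server 2008 R2 or later (the trace file path is a placeholder; on older or non-Windows hosts a tcpdump/Wireshark capture does the same job):

    rem Start a capture on the receiving host, reproduce the failing copy, then stop
    netsh trace start capture=yes tracefile=C:\temp\copyfail.etl maxsize=512
    rem ... run the large file copy until it fails ...
    netsh trace stop
    rem Open the .etl and look for a TCP RST arriving in the middle of the transfer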

Similar Messages

  • Large file copy fails

    I'm trying to move a 60 GB folder from a NAS to a local USB drive, and regardless of how many times I try, the copy fails within the first few minutes.
    I'm on a managed Cisco Gigabit Ethernet switch in a commercial building, and hundreds of users have no problems with OS X 10.6, Windows XP, and Windows 7, but my Yosemite system cannot do this unless I boot into my OS X 10.6 partition.
    Reconfiguring the switch is not a viable option; I can't change switch settings that would jeopardize hundreds of users just to fix one thing on a Mac that is testing the viability of OS X 10.10 in a corporate setting.

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer. The files are usually 1-2 GB each, and I have to transfer about 150 GB of them. They are stored on one external 500 GB HD and are transferred to another identical 500 GB HD over FireWire. I also transfer about 20-30 GB of the same type of 1-2 GB files from another external FireWire drive to the internal HD of my laptop. If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer and then fails; if I use cp in a terminal, the transfer is fine. When I was still distro hopping and using Fedora 10 with KDE 4.2.0 I had this same problem, and when I use GNOME the problem is nonexistent. I do this often for work, so it is a very important function to me. All the drives are FAT32 and there is no option to change them, as they are used on several different machines/OSes before all is said and done, and FAT32 is the only file system all of the machines will read (thanks to one machine, of course). In many cases time is very important on the transfer, and that is why I prefer to do the transfer in a desktop environment, so I can see progress and an ETA. This is a huge deal breaker for KDE and I would like to fix it. Any help is greatly appreciated, and please don't reply with "just use GNOME".

    You can use any other file manager under KDE that works, you know? Just stop Nautilus from taking command of your desktop and you should be fine with it.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't a file synchronisation tool work for this task too? (Someone with more knowledge, please tell me if this would be a bad idea.)

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed. The second is that there are periodic outages during which file open/close/save operations may appear to hang, and programs that rely on network connectivity, e.g. email, also appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait can be up to 1 min. The downside is that this affects Access databases on our server: when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: a file server running Windows 2008 Storage Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PCs and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time, would hang when the move operation started, and it would take at least a minute for it to start responding.
    Ordinarily we work with many files.
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    Have you checked resource usage during one of these large file copies?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
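
    You can also log those counters from the command line while you reproduce the copy. A minimal sketch using typeperf (the counter names are standard; the interval, sample count, and output file are just examples):

        rem Sample memory and network counters every 5 seconds, 120 samples, to a CSV
        typeperf "\Memory\Available MBytes" "\Memory\Pages/sec" "\Network Interface(*)\Bytes Total/sec" -si 5 -sc 120 -o copy_test.csv
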
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out, with a lot of hard faults (varying between several hundred and several thousand) recorded during normal usage. The disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

  • Qosmio X500-148 - Large file copy gets stuck

    A large file copy (~5-10 GB) between USB or FireWire disks locks up the PC. It gets to about 50%, then slows down, and there is no possibility of doing anything, including cancelling the task or opening the browser, Windows Explorer, or Task Manager.
    Event Viewer does not show anything strange.
    This happens only with Windows 7 64-bit; there is no problem with Windows 7 32-bit (same external hardware), and no problem when copying between internal PC disks.
    The PC is very powerful - Qosmio X500-148

    The external Hardware is:
    1.5 TB WD Hard disk USB
    1.0 TB WD + 250GB Maxtor + 250GB Maxtor on Firewire
    I have used the standard copy feature (copy and paste) as well as ViceVersa Pro for folder sync, with the same results.
    Please note that the same external configuration ran properly on a single-core Satellite PC running Win7 x86 without problems. Since I moved to Win7 x64 on my brand-new X500-148 I have had a great deal of copying problems.
    I have installed all Windows updates, and Event Viewer doesn't show anything strange when copying.

  • 2008R2 Network File Copy Fails

    Mystified on this one.
    2008R2 SP1 server in remote location. Trying to copy files over the network either through Windows Copy/Paste, Robocopy, RSync...or you name it.
    Copy fails at some point regardless of method. When using Windows Copy/Paste I get an error that says "There is a problem accessing \\SERVER\SHARE Make sure you are connected to the network and try again."
    I have:
    Upgraded the NIC drivers to the newest
    Tried disabling TCP Chimney
    Tried disabling all TCP offloads in the NIC settings
    Applied hotfix 2477730
    Removed antivirus software and tried without it (Symantec SEP 12)
    Windows Firewall is disabled
    Verified and even reset permissions on the source and destination data
    The network guy sees no errors or problems on the line. When running a ping to the server in question, or from the server to the copy destination, there are no packet drops while the copy is running or when it fails.
    I did notice in the Security log a larger number of event IDs 5156 and 5158, far more than on similar servers in remote sites.
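
    For reference, the offload changes described above are typically made with netsh from an elevated prompt. A minimal sketch (which settings exist and help varies by OS build and NIC, so test them one at a time):

        rem Disable TCP Chimney offload and related globals on 2008 R2
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled
        rem Verify the resulting state
        netsh int tcp show global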

    Hi,
    Based on your description, we can try restarting the server in Safe Mode with Networking to see whether the files can be copied.
    Besides, although the following article may not correspond exactly to this situation, we can give it a try.
    Fix Problems With Copying Large Files in Windows Vista
    http://www.howtogeek.com/howto/windows-vista/fix-problems-with-copying-large-files-in-windows-vista/
    Hope it helps.
    Best regards,
    Frank Shen

  • X-Serve RAID unmounts on large file copy

    Hi
    We are running an X-Serve RAID in a video production environment. The RAID is striped RAID 5 with 1 hot spare on either side and concatenated with Apple Disk Utility as RAID 0 on our G5, to appear as one logical volume.
    I have noticed, lately, that large file copies from the RAID will cause it to unmount and the copy to fail.
    In the console, we see:
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 5 (External Bus Reset) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: External Bus Reset for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is down for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 8 (Loop State Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Loop Initialization Packet for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is active for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 6 (Rescan) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::completeRAIDRequest - error 0xe00002ca detected for set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7), member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB, set byte offset = 1674060574720.
    May 4 00:23:37 Orobourus kernel[0]: disk5: I/O error.
    May 4 00:23:37 Orobourus kernel[0]: disk5: device is offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::recover() member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB from set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7) has been marked offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::restartSet - restarting set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7).
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: dojnlio: strategy err 0x6
    May 4 00:23:37 Orobourus kernel[0]: jnl: end_transaction: only wrote 0 of 12288 bytes to the journal!
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: close: journal 0x7b319ec, is invalid. aborting outstanding transactions
    Following such an ungainly dismount, the only way to get the RAID back online is to reboot the G5 - attempting to mount the volume via Disk Utility doesn't work.
    This looks similar to another 'RAID volume unmounts' thread I found elsewhere in the forum - it appeared to have no solution apart from contacting Apple.
    Is this the only recourse?
    G5 2.7 6.5Gb + X-RAID 3.5Tb   Mac OS X (10.4.9)   Decklink Extreme / HD

    There should be no need to restart the controllers to get additional reporting -- updating the firmware restarts them.
    If blocks are found to be bad during reconditioning, then the data in the blocks should be relocated, and the "bad" blocks mapped out as bad. The Xserve RAID has no idea whatsoever what constitutes a "file," given it doesn't know anything about the OS (you can format the volumes as HFS+, Xsan, ZFS, ext3, ReiserFS, whatever... this is done from the host and the Xserve RAID is completely unaware of this). So if the blocks can be successfully relocated (shouldn't be too hard, given the data can be generated from parity), then you're good to go. If somehow you have bad blocks in multiple locations on multiple spindles that overlap, then that would of course be bad. I'd expect that to be fairly rare though.
    One thing I should note is that drives don't last forever. As the RAID ages, the chances of drive failure increase. Typically drive lifespan follows a "bathtub curve," as if you imagined looking at a cross-section of a claw-foot tub. Failures will be very high early (due to manufacturing defects -- something "burn-in" testing should catch, and I believe Apple does this for you), then very low for a while, and then will rise markedly as drives get "old." Where that point is would be a subject for debate, but typical data center folks start proactively replacing entire storage systems at some point to avoid the chance of multiple drive failure. The Xserve RAIDs have only been out for 4 years, so I wouldn't anticipate this yet -- but if it were me I'd probably start thinking about doing this after 4 or 5 years. Few experienced IT data center folks would feel comfortable letting spindles run 24x7x365 for 6 years... even if 95% of the time it would be okay.

  • Windows 7 64-bit Corrupting (Altering) Large Files Copied to External NTFS Drives

    http://social.technet.microsoft.com/Forums/en-US/w7itproperf/thread/13a7426e-1a5d-41b0-9e16-19437697f62b/
    Continuing from that thread: I have the same problem. The corrupted files are only archives (zip or 7z) and .exe files; there are no problems copying large files like movies, for example, and no problem when copying via Linux on the same laptop. It is a Windows issue. I have all updates installed, nothing missing.

    OK, let's be brief.
    This problem has been annoying me for years. It is totally reproducible, although random. It happens when copying to external drives (mainly USB) when they are configured for "safe removal". I have had issues copying to NTFS and FAT32 partitions, on 4 different computers ranging from 7 years old to brand new, using AMD and Intel chipsets, totally different USB controllers, and many different USB sticks, hard disks, etc. The only things those computers have in common are Windows 7 x64 and external drives optimized for "safe removal". Installing TeraCopy reduces the chances of data corruption but does not eliminate them completely. The only real workaround (tested for 2 years) is activating the write cache in the Device Manager properties of the drive. That way, Windows uses the same transfer mechanisms as for internal drives, and everything is OK.
    MICROSOFT guys, there is a BIG BUG in the Windows 7 x64 external-drive data transfer mechanism: a bug in the cache handling of the safe removal function. Nobody hears; I've been talking about this in forums for years. It is a very dangerous bug because it is silent, and many non-professional people are experiencing random errors in their backup data. PLEASE INVESTIGATE THIS. YOU NEED TO FIX SUCH AN IMPORTANT BUG. IT IS UNBELIEVABLE THAT IT HAS BEEN THERE SINCE 2009!!!
    Hope this helps.
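
    Until the underlying cause is fixed, one way to catch the silent corruption is to compare hashes of the source file and the copy after every transfer. A minimal sketch using certutil, which ships with Windows (the paths are placeholders):

        rem Hash the original and the copy; differing digests mean the copy is corrupt
        certutil -hashfile "C:\data\backup.7z" SHA1
        certutil -hashfile "E:\backup.7z" SHA1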

  • Large file copy to iSCSI drive fills all memory until server stalls.

    I am having the file copy issues that people have been having with various versions of Server now for years, as can be read in the forums. I am having this issue on Server 2012 Std., using Hyper-V.
    When a large file is copied to an iSCSI drive, it is read into memory faster than it can be sent over the network. It fills all available GB of memory until the server, which is a VM host, pretty much stalls, and all the VMs stall with it. This continues until the file copy is finished or stopped; the memory is then gradually released as the data is sent over the network.
    This issue was happening on both send and receive. I changed the registry setting for the large system cache to disable it, and now I can receive large files from the iSCSI target; they take an additional 1 GB of memory, which sits there until the file copy is finished.
    I have tried all the NIC and disk settings as can be found in the forums around the internet that people have posted in regard to this issue.
    To describe it in a little more detail: when receiving a file from iSCSI, the file copy window shows a speed of around 60-80 MB/sec, which is wire speed. When sending a file to iSCSI, the file copy window shows a speed of 150 MB/sec, which is actually the speed at which it is being written to memory. The NIC counter in Task Manager instead shows the actual network speed, which is about half of that. The difference is the rate at which memory fills until it is full.
    This also happens when using Window Server Backup. It freezes up the VM Host and Guests while the host backup is running because of this issue. It does cause some software issues.
    The problem does not happen inside the Guests. I can transfer files to a different LUN on the same iSCSI, which uses the same NIC as the Host with no issue.
    Does anyone know if the fix has been found for this? All forum posts I have found for this have closed with no definite resolution found.
    Thanks for your help.
    KTSaved

    Hi,
    Sorry if it causes confusion but "by design" I mean "by design it will use memory for copying files via network".
    In Windows 2000/2003, the following keys could help control the memory usage:
    LargeSystemCache (0 or 1) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    Size (1, 2 or 3) in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    I saw threads mentioned that it will not work in later systems such as Windows 2008 R2.
    For Windows 2008 R2 and Windows 2008, there is a service named Microsoft Windows Dynamic Cache Service which addressed this issue:
    https://www.microsoft.com/en-us/download/details.aspx?id=9258
    However I searched and there is no update version for Windows 2012 and 2012 R2.
    I also noticed that the following command could help control the memory usage. With value = 1, NTFS uses the default amount of paged-pool memory:
    fsutil behavior set memoryusage 1
    You need a reboot after changing the value. 
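
    For reference, a minimal sketch of those changes from an elevated command prompt (the registry values are the Windows 2000/2003-era tuning described above; back up the registry and test outside production first):

        rem Favor process working sets over the system cache
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 0 /f
        rem Size=1 minimizes the memory used by the Server service
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 1 /f
        rem Reset NTFS paged-pool usage to the default
        fsutil behavior set memoryusage 1
        rem Reboot afterwards for the changes to take effect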
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Can't Upload Large Files (Upload Fails using Internet Explorer but works with Google Chrome)

    I've been experiencing an issue uploading large (75 MB and greater) PDF files to a SharePoint 2010 document library. Using the normal upload procedure in Internet Explorer 7 (our company standard for the time being), the upload fails. No error message is thrown; the upload screen goes away, the page refreshes, and the document isn't there. When I tried uploading multiple files, it throws a failure error after a while.
    Using Google Chrome, I made an attempt just to see what it did, and the file uploaded in seconds using "Add a document". I can't figure out why one browser works and the other doesn't. We are getting sporadic inquiries with the same issue.
    We have previously set up large file support in the appropriate areas, and large files are uploaded to the sites successfully. Any thoughts?

    The maximum upload size has to be configured at the farm level, per web application. Your administrator most likely set a limit on the size of files that can be uploaded; this limit can be increased, and you would then be able to upload your documents.

  • Help with large file copy to WKS

    I need to copy a folder and all of its contents from \apps\reallybigfolder to each workstation at c:\putithere.
    Because of the size of the file copy, I chose to create an app object
    with logic that says if xxx.exe exists then do not push. Basically, I
    don't want this thing to just run constantly. I need it to run one time
    to each workstation on the network when the users log in.
    I have tried pointing the app object to \public\ncopy.exe and then
    setting the parameters the way I want them, but I keep getting an
    error: "Cannot load VDM IPX/SPX Support". The two files in the folder
    are copied, but the subfolders are not. I have tried using the /s/e
    switches, but it does not help.
    I have also tried writing a .bat file, to test it - but I get the same
    results as above. So next I tried using copy instead of ncopy. I do not
    get the error message, but it still does not copy any of the subfolders.
    Is there another way? An easier way? I really appreciate the help.
    Tony

    What you are doing should work.
    It sounds as if there are some other workstation issues going on.
    I don't think I've seen, or could make, the error "Cannot load VDM IPX/SPX Support" happen if I tried. Perhaps this would happen without a Novell Client installed. In such a case you could use XCOPY or ROBOCOPY.
    (Robocopy is way cooler than XCOPY and is free from MS.)
    You can also use the "Distribution Tab" and the "Files" section to copy
    entire directories. Just use *.* as the source.
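
    If the Novell client does turn out to be the issue, a plain batch wrapper around robocopy reproduces the "run once per workstation" logic. A minimal sketch with hypothetical paths and marker file:

        @echo off
        rem Skip the push if the marker file already exists (mirrors the app object logic)
        if exist "C:\putithere\xxx.exe" goto :eof
        rem /E copies subfolders including empty ones; retry twice, wait 5 seconds between tries
        robocopy "\\SERVER\apps\reallybigfolder" "C:\putithere" /E /R:2 /W:5
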
    [email protected] wrote:
    > I need to copy a folder and all of it's contents from
    > \apps\reallybigfolder to each workstation at c:\putithere.
    >
    > Because of the size of the file copy, I chose to create an app object
    > with logic that says if xxx.exe exists then do not push. Basically, I
    > don't want this thing to just run constantly. I need it to run one time
    > to each workstation on the network when the users log in.
    >
    > I have tried pointing the app object to \public\ncopy.exe and then
    > setting the parameters the way I want them, but I keep getting an
    > error: "Cannot load VDM IPX/SPX Support". The two files in the folder
    > are copied, but the subfolders are not. I have tried using the /s/e
    > switches, but it does not help.
    >
    > I have also tried writing a .bat file, to test it - but I get the same
    > results as above. So next I tried using copy instead of ncopy. I do not
    > get the error message, but it still does not copy any of the subfolders.
    >
    > Is there another way? An easier way? I really appreciate the help.
    >
    > Tony
    Craig Wilson
    Novell Product Support Forum Sysop
    Master CNE, MCSE 2003, CCN

  • Does SkyDrive support resumable upload, or how to upload large files to SkyDrive through an application

    Hi,
    Does SkyDrive support resumable upload? A 33 MB file took almost 5 minutes to upload, so to minimise upload time I want to use resumable upload. Please explain how it should be done.
    I do not want to use the BITS protocol as given in one of the forums. I want to upload files larger than 100 MB.
    Thanks,
    Ashlesha

    Wrong forum.
    Here is the forum for you.

  • File copy of large file fails.

    I am trying to copy a large file, say "file.txt", from an external drive to my hard drive. The size of the file is 140 GB. I have tried twice, and both times the copy operation has stopped (after a couple of hours and 135 GB) with the message: "Operation cannot be completed. File already exists." The only file named file.txt that exists at that time is the partial one that was copied.
    What's going on and how can I perform the file copy?
    Thanks!

    Can you tell me if the external drive (for any of the respondents) was NOT formatted with a Mac partition? Instead something like NTFS or FAT?
    The file transfer failure issue is a recurrent frustration to a lot of users for many iterations of the Finder. Either through singular large file transfers or through multiple files (thousands) that in total are large in size.
    I am speculating that the issue arises from files being transferred between storage locations of differing formats, or, in the case of internet-based storage, from varying file translations. In part this speculation exists because I do not seem to experience this issue if the drives are all Mac OS Extended (Journaled).
    Just looking to confirm your configuration.

  • E1000 wifi dead whenever I copy a large file between 2 PCs

    Whenever I try to copy a large file (e.g. 200 MB) from one PC to another (both connected wirelessly to the Linksys E1000 router), the wifi on the router dies. I can see the wifi indicator light on the router go off, and my PCs lose their connection (of course).
    I understand that the E1000 is a low-end router so if it copies really slowly I can accept it.  But how can the wifi just die out like that?  Is there something I didn't setup properly?
    btw, this issue is present even after I upgraded to the latest firmware (2.1.02 build 5).
    Would greatly appreciate any advice.

    Hi,
    Thank you for your reply.
    1.  The problem is not due to the firmware upgrade. It existed before the upgrade and persists after it. But yes, I did power my router off and on again.
    2. I followed some instructions on this forum to change the following settings:
    Channel: 11
    MTU: 1340
    Beacon: 75
    Fragmentation Threshold: 2304
    RTS: 2304
    And strangely enough, it works now!
    3.  This morning, upon seeing your reply, I decided to do some investigating to see which setting did the trick. I modified each setting back to the default, one by one, and tested the large file copy each time I reverted something.
    Surprisingly, the file copy operation was successful throughout the tests, even after reverting all settings back to default.
    So, what is the "problem" with this router? I had problems for a month with the default settings, and then suddenly all the problems disappear?
    Wai Kee

  • Airport Extreme, Airdisk, and Vista - large file problem with a twist

    Hi all,
    I'm having a problem moving large-ish files to an external drive attached to my Airport Extreme SOMETIMES. Let me explain.
    My system - MacBook Pro, 4 GB RAM, 10 GB free HD space on the MacBook, running the latest updates for Mac and Vista; the external hard drive on the AE is an internal WD in an enclosure with 25 GB free (it's formatted for PC, but I've used it directly connected to multiple computers, Mac and PC, without fail); the AE is using firmware 7.3.2, and I'm only allowing 802.11n. The problem occurs on the Vista side - I haven't checked the Mac side yet.
    The Good - I have BitTorrent set up, using uTorrent, to automatically copy files over to my AirDisk once they've completed. This works flawlessly. If I connect the hard drive directly to my laptop (MacBook running Boot Camp w/ Vista, all updates applied), large files copy over without a hitch as well.
    The Bad - For the past couple of weeks (could be longer, but I've just noticed it recently being a problem - is that a firmware problem clue?), if I download the files to my Vista desktop and copy them over manually, the copy just sits for a while and eventually fails with a "try again" error. If I try to copy any file over 300 MB, the same error occurs.
    What I've tried - well, not a lot. The first thing I did was to make sure my hard drive was error-free and worked when physically connected - it is and it did. I've read a few posts about formatting the drive for Mac, but this really isn't a good option for me since all of my music is on this drive and iTunes pulls from it (which also works without a problem). I do, however, get the hang in iTunes if I try to import large files (movies). I've also read about trying to go back to an earlier AE firmware, but the posts were outdated. I can try the Mac side and see if large files move over, but again, I prefer to do this in Windows Vista.
    This is my first post, so I'm sure I'm leaving out vital info. If anyone wants to take a stab at helping, I'd love to discuss. Thanks in advance.

    Hello,
    Just noticed the other day that I am having the same problem. I have two Vista machines attached to a Time Capsule with a Western Digital 500 GB USB drive. I can write to the TC (any file size) with no problem; however, I cannot write larger files to my attached WD drive. I can write smaller folders of music and such, but I cannot back up larger video files. Yet if I directly attach the drive to my laptop I can copy over the files, no problem. I could not find any setting in the AirPort Utility with regard to file size limits or anything of the like. Any help on this would be much appreciated.
