Network speed affected by large file copy operations. Also, why intermittent network outages?

Hi
I have a couple of issues on our company network.
The first is that a single large file copy impacts the entire network and dramatically reduces network speed. The second is that there are periodic outages during which file open/close/save operations appear to hang, as do programs that rely on
network connectivity (e.g. email). It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait can be up to a minute.
The downside is that this affects Access databases on our server: when an 'outage' occurs, the Access client cannot recover and hangs permanently.
We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: a file server running Windows 2008 Storage
Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PCs and 1 Vista PC.
When I copy or move a large file from the 2008 Storage Server to my Win7 client, other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files came in pairs
(e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1 MB while the other varies between 1.5 and 1.9 GB. I was moving two files at a time, so the total file size for each operation was just under 2 GB.
While the file move was taking place, a colleague was trying to open a 36 KB Excel file. After waiting 3 minutes he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
I started copying more data from the Storage Server to my local drive, it took several minutes before his PC could open the Excel file.
I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time, would hang when the move operation started, and it would take at least a minute to start responding again.
Ordinarily we work with many files.
Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
Thanks

Have you checked resource usage during one of these large file copies?
At a minimum I would check Task Manager > Resource Monitor. In particular, check the disk and network usage. Also look at RAM and CPU while the copy is taking place.
What RAID level is there on the file server?
There are many possible areas that could be causing your problem(s), and it could be more than one thing. Start by checking these and go from there.
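
If it helps, here is a rough way to capture those numbers over time rather than eyeballing Resource Monitor. This is just a sketch using the JDK's com.sun.management extension (HotSpot JVMs only; the one-second interval and one-minute duration are arbitrary choices of mine, not anything prescribed here). Run it on the server during a large copy and compare against an idle baseline:

    import java.lang.management.ManagementFactory;

    public class ResourceSampler {
        public static void main(String[] args) throws InterruptedException {
            // com.sun.management extends the standard MXBean with OS-level counters.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            for (int i = 0; i < 60; i++) {  // sample once a second for a minute
                long freeMb = os.getFreePhysicalMemorySize() / (1024 * 1024);
                double cpu = os.getSystemCpuLoad() * 100;  // negative until the first real sample
                System.out.printf("free RAM: %d MB, system CPU: %.1f%%%n", freeMb, cpu);
                Thread.sleep(1000);
            }
        }
    }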
Hi, JohnB352
Thanks for the suggestions. I have monitored the server and can see that memory is nearly maxed out, with a lot of hard faults (varying between several hundred and several thousand) recorded during normal usage. Disk and CPU seem normal.
I'm going to replace the RAM and double it to 12 GB.
Thanks! This may help with some other issues we are having. I'll post back after it has been done.
[Edit]
Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

Similar Messages

  • Finder consumes 90% CPU during file copy operation

    I am trying to copy a large number of files (over 100,000), in a bunch of folders, from an internal HDD to an external USB one. The files are mostly images. Finder starts at a canter, with about 20 MB/sec copy speed and only around 5% CPU usage (NOTE - I have disabled the "show icon preview" option in Finder, which is a common CPU hog when copying multiple image files).
    After a while the CPU usage goes up to 80-90%, and the file copy operation slows down to 1-2 MB/sec.
    I am at a loss as to what might be causing it! Any suggestions?

    The files are relatively small - from 100 KB to 3-5 MB each - and the issue is that the slowdown occurs quite some time after the beginning of the file copy operation. For several minutes it goes OK, with Finder consuming under 10% CPU and a copy speed of 20 MB/sec. It is only AFTER SOME TIME that the issue starts to happen - and the files being copied are THE SAME TYPE OF FILES - when suddenly the speed drops to 1-2 MB/sec and CPU usage jumps up. The result: it takes ages to copy and overloads the CPU.
    So it must be a bug somewhere.
    I cannot use FireWire, since the external drive is USB 2.0, which, when it works as it's supposed to (transfer speed 20-30 MB/sec), is quite acceptable to me.
    Sure, I can try copying smaller batches of files - but that is a workaround, not a solution!

  • Qosmio X500-148 - Large file copy hangs the PC

    A large file copy (~5-10 GB) between USB or FireWire disks hangs the PC. The copy goes up to 50%, then slows down, and it becomes impossible to do anything, including cancelling the task or opening the browser, Windows Explorer or Task Manager.
    Event viewer does not show anything strange.
    This happens only with Windows 7 64-bit - no problem with Windows 7 32-bit (same external hardware). No problem when copying between internal PC disks.
    The PC is very powerful - Qosmio X500-148

    The external Hardware is:
    1.5 TB WD Hard disk USB
    1.0 TB WD + 250GB Maxtor + 250GB Maxtor on Firewire
    I have used the standard copy feature - copy and paste - as well as
    ViceVersa Pro for folder sync, with the same results.
    Please note that the same external configuration ran properly on a single-core PC - a Satellite - running Win7 x86, without problems. Since I moved to Win7 x64 on my brand new X500-148 I have had a great deal of copying problems.
    I have installed all Windows updates, and Event Viewer doesn't show anything strange when copying.

  • Copying large files freezes other Macs on the network

    Hello everyone,
    since we added a Mac Pro to our network, other clients sometimes hang and have to disconnect from the server (an Xserve) when we copy large files (100 MB and up) to that server from the Mac Pro.
    Like the other G5s, the faster G4s and some PC servers, it is connected at gigabit speed.
    Can someone help me to pinpoint this problem, or give me more information on finding the cause?
    Maybe this can be useful:
    We replaced the cable with a Cat6 UTP cable.
    Slowing down to 100 Mbit did not solve the problem.
    We could not reproduce the problem when connected directly to the Xserve (no switch or patch panel, only the Cat6 cable).
    Many thanks in advance for your help.

    Hello,
    yesterday my boss ordered some people to check the cabling and connections in our network.
    They revealed several problems and advised replacing all cabling with Cat6 or Cat7.
    Is the Mac Pro susceptible to network imperfections? Our other G5s never suffered from network flaws (though now the Ethernet cards are Intel instead of 'Apple' design...).
    I did some other tests, and set the MTU to 750 instead of the default 1500. Without success.
    I am trying to find some info in the log files, but do not seem to find any details about the network problems. Where should I look? The system log and NetInfo log are empty...
    Greetings,
    peter.

  • Large file copy to iSCSI drive fills all memory until server stalls.

    I am having the file copy issues that people have been having with various versions of Server now for years, as can be read in the forums. I am having this issue on Server 2012 Std., using Hyper-V.
    When a large file is copied to an iSCSI drive, the file is copied into memory faster than it can be sent over the network. It fills all available GB of memory until the server, which is a VM host, pretty much stalls, and all the VMs stall with it. This
    continues until the file copy is finished or stopped; then the memory is gradually released as the data is sent over the network.
    This issue was happening on both send and receive. I changed the registry setting for Large Cache to disable it, and now I can receive large files from the iSCSI. They now take an additional 1 GB of memory, which sits there until the file copy is finished.
    I have tried all the NIC and disk settings as can be found in the forums around the internet that people have posted in regard to this issue.
    To describe it in a little more detail: when receiving a file from iSCSI, the file copy window shows a speed of around 60-80 MB/sec, which is wire speed. When sending a file to iSCSI, the file copy window shows a speed of 150 MB/sec, which is actually the
    speed at which it is being written to memory. The NIC counter in Task Manager instead shows the actual network speed, which is about half of that. The difference is the rate at which memory fills until it is full.
    This also happens when using Windows Server Backup. It freezes up the VM host and guests while the host backup is running, because of this issue. It does cause some software issues.
    The problem does not happen inside the guests. I can transfer files to a different LUN on the same iSCSI, which uses the same NIC as the host, with no issue.
    Does anyone know if a fix has been found for this? All forum posts I have found on this have been closed with no definite resolution.
    Thanks for your help.
    KTSaved

    Hi,
    Sorry if it causes confusion, but by "by design" I mean "by design it will use memory for copying files via the network".
    In Windows 2000/2003, the following keys could help control the memory usage:
    LargeSystemCache (0 or 1) in HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    Size (1, 2 or 3) in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    I saw threads mentioning that this will not work in later systems such as Windows 2008 R2.
    For Windows 2008 R2 and Windows 2008, there is a service named Microsoft Windows Dynamic Cache Service which addresses this issue:
    https://www.microsoft.com/en-us/download/details.aspx?id=9258
    However, I searched and there is no updated version for Windows 2012 and 2012 R2.
    I also noticed that the following command could help control memory usage. With value = 1, NTFS uses the default amount of paged-pool memory:
    fsutil behavior set memoryusage 1
    You need a reboot after changing the value. 
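    Those settings are the supported levers. Purely to illustrate the mechanism under discussion - dirty cache filling faster than the network drains it - an application-level copy can also bound the cache itself by flushing periodically. A minimal Java sketch of that idea; the paths come from the command line, and the 64 MB flush interval is an arbitrary choice of mine, not a recommended value:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class ThrottledCopy {
            // Flush every 64 MB so dirty pages cannot pile up in the system cache.
            private static final long FLUSH_INTERVAL = 64L * 1024 * 1024;

            public static void main(String[] args) throws IOException {
                try (FileChannel in = FileChannel.open(Paths.get(args[0]), StandardOpenOption.READ);
                     FileChannel out = FileChannel.open(Paths.get(args[1]),
                             StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                    ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);  // 1 MB buffer
                    long sinceFlush = 0;
                    while (in.read(buf) != -1) {
                        buf.flip();
                        while (buf.hasRemaining()) {
                            sinceFlush += out.write(buf);
                        }
                        buf.clear();
                        if (sinceFlush >= FLUSH_INTERVAL) {
                            out.force(false);  // push cached data (not metadata) to the device
                            sinceFlush = 0;
                        }
                    }
                    out.force(true);  // final flush, including metadata
                }
            }
        }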
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • File Copy Operation

    Hi,
    I am developing a desktop application that needs a file copying feature. I basically need to copy mp3 files from a PC drive to an external drive connected to my PC through a USB port... this external device is USB 2.0 compatible. Currently I have coded a simple file-copy thread using a buffered reader/writer. The operation for a 6 MB file takes about 6.3 secs, but the same operation in C took me about 1.4 seconds. I am wondering if this is the best speed I can get using Java... and if so, should I switch to C and write a JNI wrapper around the C function?
    Please suggest a suitable course of action.

    I have been investigating the use of buffered streams in network transfers (bigger files than mp3s, though) and I have noticed that the size of the buffer, as in byte[] buf = new byte[8096];, makes very little difference. Yeah, benchmarking is fun... What I observe is that as we increase the array size the time taken reduces, but then there comes a point (of inflexion?) at which you get the best speed, and beyond that any increase in the array size makes no difference. Has anyone observed such a thing?
    Anyway, I equalled/beat the C code... and I am just happy with that!!!!!
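
    One caveat on the original question, by the way: Buffered Reader/Writer are character streams and can silently corrupt binary data such as mp3s, so byte streams or NIO channels are the right tool regardless of speed. As a rough sketch of the channel route (the file names are placeholders, and actual throughput will vary by platform):

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.nio.channels.FileChannel;

        public class FastCopy {
            public static void main(String[] args) throws IOException {
                try (FileChannel src = new FileInputStream(args[0]).getChannel();
                     FileChannel dst = new FileOutputStream(args[1]).getChannel()) {
                    long size = src.size();
                    long pos = 0;
                    // transferTo may move fewer bytes than requested, so loop until done.
                    while (pos < size) {
                        pos += src.transferTo(pos, size - pos, dst);
                    }
                }
            }
        }

    transferTo lets the OS move the bytes without pushing them through a Java-side buffer, which is usually where the remaining gap to C closes.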

  • Windows 7 64-bit Corrupting (Altering) Large Files Copied to External NTFS Drives

    http://social.technet.microsoft.com/Forums/en-US/w7itproperf/thread/13a7426e-1a5d-41b0-9e16-19437697f62b/
    Continuing from that thread: I have the same problems. The corrupted files are only archives (zip or 7z) and .exe files; there are no problems copying large files like movies, for example, and no problem when copying via Linux on the same laptop. It is a Windows issue. I have all updates installed, nothing missing.

    OK, let's be brief.
    This problem has been annoying me for years. It is totally reproducible, although random. It happens when copying to external drives (mainly USB) when they are configured for "safe removal". I have had issues copying to NTFS and FAT32 partitions. I have had issues
    using 4 different computers, from 7 years old to brand new, using AMD or Intel chipsets and totally different USB controllers, and using many different USB sticks, hard disks, etc. The only things those computers have in common are Windows 7 x64 and the external
    drives being optimized for "safe removal". Installing TeraCopy reduces the chances of data corruption, but does not eliminate them completely. The only real workaround (tested for 2 years) is activating the write cache in the Device Manager properties of the
    drive. That way, Windows uses the same transfer mechanisms as for the internal drives, and everything is OK.
    MICROSOFT guys, there is a BIG BUG in the Windows 7 x64 external drive data transfer mechanism. There is a bug in the cache handling of the safe removal function. Nobody hears; I've spent years talking about this in forums. It is a very dangerous bug
    because it is silent, and many non-professional people are experiencing random errors in their backup data. PLEASE, INVESTIGATE THIS. YOU NEED TO FIX SUCH AN IMPORTANT BUG. IT IS UNBELIEVABLE THAT IT IS STILL THERE SINCE 2009!!!
    Hope this helps.

  • X-Serve RAID unmounts on large file copy

    Hi
    We are running an X-Serve RAID in a video production environment. The RAID is striped RAID 5 with 1 hot spare on either side and concatenated with Apple Disk Utility as RAID 0 on our G5, to appear as one logical volume.
    I have noticed, lately, that large file copies from the RAID will cause it to unmount and the copy to fail.
    In the console, we see:
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 5 (External Bus Reset) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: External Bus Reset for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is down for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 8 (Loop State Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Loop Initialization Packet for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is active for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 6 (Rescan) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::completeRAIDRequest - error 0xe00002ca detected for set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7), member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB, set byte offset = 1674060574720.
    May 4 00:23:37 Orobourus kernel[0]: disk5: I/O error.
    May 4 00:23:37 Orobourus kernel[0]: disk5: device is offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::recover() member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB from set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7) has been marked offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::restartSet - restarting set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7).
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: dojnlio: strategy err 0x6
    May 4 00:23:37 Orobourus kernel[0]: jnl: end_transaction: only wrote 0 of 12288 bytes to the journal!
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: close: journal 0x7b319ec, is invalid. aborting outstanding transactions
    Following such an ungainly dismount, the only way to get the RAID back online is to reboot the G5 - attempting to mount the volume via Disk Utility doesn't work.
    This looks similar to another 'RAID volume unmounts' thread I found elsewhere in the forum - it appeared to have no solution apart from contacting Apple.
    Is this the only recourse?
    G5 2.7 6.5Gb + X-RAID 3.5Tb   Mac OS X (10.4.9)   Decklink Extreme / HD

    There should be no need to restart the controllers to get additional reporting -- updating the firmware restarts them.
    If blocks are found to be bad during reconditioning, then the data in the blocks should be relocated, and the "bad" blocks mapped out as bad. The Xserve RAID has no idea whatsoever what constitutes a "file," given it doesn't know anything about the OS (you can format the volumes as HFS+, Xsan, ZFS, ext3, ReiserFS, whatever... this is done from the host and the Xserve RAID is completely unaware of this). So if the blocks can be successfully relocated (shouldn't be too hard, given the data can be generated from parity), then you're good to go. If somehow you have bad blocks in multiple locations on multiple spindles that overlap, then that would of course be bad. I'd expect that to be fairly rare though.
    One thing I should note is that drives don't last forever. As the RAID ages, the chances of drive failure increase. Typically drive lifespan follows a "bathtub curve," as if you imagined looking at a cross-section of a claw-foot tub. Failures will be very high early (due to manufacturing defects -- something testing known as "burn-in" should catch -- and I believe Apple does this for you), then very low for a while, and then will rise markedly as drives get "old." Where that point lies would be a subject for debate, but typical data center folks start proactively replacing entire storage systems at some point to avoid the chance of multiple drive failure. The Xserve RAIDs have only been out for 4 years, so I wouldn't anticipate this yet -- but if it were me I'd probably start thinking about doing this after 4 or 5 years. Few experienced IT data center folks would feel comfortable letting spindles run 24x7x365 for 6 years... even if 95% of the time it would be okay.

  • Large file copy fails through 4240 sensor

    Customer attempts to copy a large file from a server in an IPS-protected VLAN to a host in an IPS-unprotected VLAN, and the copy fails if the file is greater than about 2 GB in size. If the server is moved to the unprotected VLAN the copy succeeds. There are no events on the IPS suggesting any blocking or other actions.

    The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 has 300 Mbit/s of throughput, but as I understood it, traffic beyond that would still be serviced - it would just bypass the inspection process. Maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset; I may try a sniffer.
    I do have TAC involved, but I like to utilise the knowledge of other expert users like yourself to try to rectify issues. Thanks for your help. If you have any other comments please let me know; I will certainly post my findings if you are interested.

  • Large file copy fails

    I am trying to move a 60 GB folder from a NAS to a local USB drive, and regardless of how many times I try to do this, it fails within the first few minutes.
    I'm on a managed Cisco Gigabit Ethernet switch in a commercial building, and I have hundreds of users having no problems with OS 10.6, Windows XP and Windows 7, but my Yosemite system is not able to do this unless I boot into my OS 10.6 partition.
    Reconfiguring the switch is not a viable option; I can't change things on a switch that would jeopardize hundreds of users just to fix one thing on a Mac that is testing the legitimacy of OS 10.10 in a corporate setting.


  • Help with large file copy to WKS

    I need to copy a folder and all of its contents from
    \apps\reallybigfolder to each workstation at c:\putithere.
    Because of the size of the file copy, I chose to create an app object
    with logic that says if xxx.exe exists then do not push. Basically, I
    don't want this thing to just run constantly. I need it to run one time
    on each workstation on the network when the users log in.
    I have tried pointing the app object to \public\ncopy.exe and then
    setting the parameters the way I want them, but I keep getting an
    error: "Cannot load VDM IPX/SPX Support". The two files in the folder
    are copied, but the subfolders are not. I have tried using the /s/e
    switches, but it does not help.
    I have also tried writing a .bat file, to test it - but I get the same
    results as above. So next I tried using copy instead of ncopy. I do not
    get the error message, but it still does not copy any of the subfolders.
    Is there another way? An easier way? I really appreciate the help.
    Tony

    What you are doing should work.
    It sounds as if there are some other workstation issues going on.
    I don't think I've seen, or could make, the error "Cannot load VDM IPX/SPX
    Support" happen if I tried. Perhaps this would happen without a Novell
    Client installed. In such a case you could use XCOPY or ROBOCOPY.
    (Robocopy is way cooler than XCOPY and is free from MS.)
    You can also use the "Distribution Tab" and the "Files" section to copy
    entire directories. Just use *.* as the source.
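
    Outside of the app object entirely, the underlying task is just a recursive directory copy. In case a small standalone tool ever helps for testing, here is a hedged Java sketch of that technique - the source and destination paths are placeholders taken from the command line:

        import java.io.IOException;
        import java.nio.file.*;
        import java.nio.file.attribute.BasicFileAttributes;

        public class CopyTree {
            public static void main(String[] args) throws IOException {
                Path src = Paths.get(args[0]);  // e.g. the reallybigfolder source
                Path dst = Paths.get(args[1]);  // e.g. c:\putithere
                Files.walkFileTree(src, new SimpleFileVisitor<Path>() {
                    @Override  // recreate each subfolder under the destination
                    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                            throws IOException {
                        Files.createDirectories(dst.resolve(src.relativize(dir)));
                        return FileVisitResult.CONTINUE;
                    }
                    @Override  // copy each file, keeping its relative position
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                            throws IOException {
                        Files.copy(file, dst.resolve(src.relativize(file)),
                                StandardCopyOption.REPLACE_EXISTING);
                        return FileVisitResult.CONTINUE;
                    }
                });
            }
        }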
    Craig Wilson
    Novell Product Support Forum Sysop
    Master CNE, MCSE 2003, CCN

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer. The files are usually 1-2 GB each, and I have to transfer about 150 GB of them. They are stored on one external 500 GB HD and are being transferred to another identical 500 GB HD over FireWire. I also have another external FireWire drive from which I transfer about 20-30 GB of the same type of 1-2 GB files to the internal HD of my laptop.
    If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer and then fails. If I use cp in the terminal, the transfer is fine. When I was still distro-hopping and using Fedora 10 with KDE 4.2.0 I had this same problem. Under GNOME the problem is nonexistent.
    I do this often for work, so it is a very important function to me. All the drives are FAT32, and there is no option to change them, as they are used on several different machines/OSs before all is said and done, and the only file system all of the machines will read is FAT32 (thanks to one machine, of course). In many cases time is very important for the transfer, and that is why I prefer to do it in a desktop environment, so I can see progress and ETA. This is a huge deal-breaker for KDE and I would like to fix it. Any help is greatly appreciated, and please don't reply "just use Gnome".

    You can use any other file manager under KDE that works, you know? Just disable Nautilus taking command of your desktop and you should be fine.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't some file synchronisation tool work for this task too? (Someone with more knowledge please tell me if this would be a bad idea.)

  • How can I find out the progress of a file copy operation

    I was wondering: when I use a file stream to copy a file, how can I tell how much is done and how much there is to go?

    You can't. A FileInputStream gives you no method of obtaining the size of the file.
    If you have access to the File object used to create the stream, then you can get the size of
    the file from that.
    Copying from a FileInputStream to a FileOutputStream will require you to use one of the
    read methods. These tell you (and some allow you to set) the number of bytes copied
    in that call. If you know how many bytes are copied in each call and also know how many
    bytes there are to copy, then you can tell how far you have progressed.
    Using a RandomAccessFile as the source also allows you to determine the size of the file.
    Using FileChannels can be significantly more efficient and also allows you to determine the
    size of the source file.
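    Putting those pieces together - total size from the File object, a running count from read() - a minimal progress-reporting copy might look like this (the file names are placeholders):

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class ProgressCopy {
            public static void main(String[] args) throws IOException {
                File source = new File(args[0]);
                long total = source.length();  // size known up front from the File object
                long copied = 0;
                byte[] buf = new byte[8192];
                try (FileInputStream in = new FileInputStream(source);
                     FileOutputStream out = new FileOutputStream(args[1])) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        copied += n;  // read() reports the bytes moved in this call
                        System.out.printf("%.1f%% done%n", 100.0 * copied / total);
                    }
                }
            }
        }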
    matfud

  • Windows 7 freezes on file copy operations on MacBook Pro 15" 2013 Late

    Hi,
    I use Windows installed via Boot Camp on my MacBook Pro 15" 2013 Late with a 512 GB SSD. I installed all the Apple drivers, but my laptop freezes (all windows stop responding) when copying a lot of small files from one folder to another. After 2-5 minutes Windows starts responding again. I use Windows 7 with all upgrades.
    The SSD works fine in Mac OS.
    It seems the Apple SSD does not work correctly under Windows.
    Please help!

    I have exactly the same MacBook Pro as yours, but mine has the matte screen. I bought mine in September 2006, but it is classed by Apple as the 'early 2006' MacBook Pro because they released the 'late 2006' MacBook Pro in October 2006, a few weeks after I bought mine :@. As I said in my post, http://discussions.info.apple.com/thread.jspa?threadID=2307301&tstart=0 , Windows 7 is working fine on my MacBook Pro, so it should work on yours. Good luck!

  • E1000 wifi dead whenever I copy a large file between 2 PCs

    Whenever I try to copy a large file (e.g. 200 MB) from one PC to another (both connected wirelessly to the Linksys E1000 router), the wifi on the router dies. I can see the wifi indicator light on the router go off, and my PCs lose their connection (of course).
    I understand that the E1000 is a low-end router, so if it copies really slowly I can accept it. But how can the wifi just die out like that? Is there something I didn't set up properly?
    By the way, this issue is present even after I upgraded to the latest firmware (2.1.02 build 5).
    Would greatly appreciate any advice.

    Hi,
    Thank you for your reply.
    1. The problem was not due to the firmware upgrade. The problem existed before the firmware upgrade and persisted after it. But yes, I did power my router off and on again.
    2. I followed some instructions on this forum to change the following settings:
    Channel: 11
    MTU: 1340
    Beacon: 75
    Fragmentation Threshold: 2304
    RTS: 2304
    And strangely enough, it works now!
    3. This morning, upon seeing your reply, I decided to do some investigation to see which setting did the trick. I modified each setting back to the default, one by one, and tested the large file copy each time I reverted something to its default.
    Surprisingly, the file copy operation was successful throughout the tests, even after reverting all settings back to their defaults.
    So, what is the "problem" with this router? I had problems for a month with the default settings, and then suddenly all the problems disappear?
    Wai Kee
