Large file copy to iSCSI drive fills all memory until server stalls.

I am having the file copy issues that people have been having with various versions of Server now for years, as can be read in the forums. I am having this issue on Server 2012 Std., using Hyper-V.
When a large file is copied to an iSCSI drive, the file is copied into memory first, faster than it can be sent over the network. It fills all available GB of memory until the server, which is a VM host, pretty much stalls, and all the VMs stall as well. This
continues until the file copy is finished or stopped; then the memory is gradually released as the data is sent over the network.
This issue was happening on both send and receive. I changed the registry setting for LargeSystemCache to disable it, and now I can receive large files from the iSCSI. They still take an additional 1 GB of memory, which sits there until the file copy is finished.
I have tried all the NIC and disk settings that people have posted in forums around the internet in regard to this issue.
To describe in a little more detail: when receiving a file from iSCSI, the file copy window shows a speed of around 60-80 MB/sec, which is wire speed. When sending a file to iSCSI, the file copy window shows a speed of 150 MB/sec, which is actually the
speed at which it is being written to memory. The NIC counter in Task Manager instead shows the actual network speed, which is about half of that. The difference is the rate at which memory fills until it is full.
This also happens when using Windows Server Backup. It freezes up the VM host and guests while the host backup is running because of this issue, and it does cause some software issues.
The problem does not happen inside the guests. I can transfer files to a different LUN on the same iSCSI target, which uses the same NIC as the host, with no issue.
Does anyone know if the fix has been found for this? All forum posts I have found for this have closed with no definite resolution found.
Thanks for your help.
KTSaved

Hi,
Sorry if it causes confusion, but by "by design" I mean "by design, Windows will use memory for copying files via the network".
In Windows 2000/2003, the following keys could help control the memory usage:
LargeSystemCache (0 or 1) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
Size (1, 2 or 3) in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
I saw threads mentioning that this will not work on later systems such as Windows 2008 R2.
For Windows 2008 and Windows 2008 R2, there is a service named Microsoft Windows Dynamic Cache Service which addresses this issue:
https://www.microsoft.com/en-us/download/details.aspx?id=9258
However, I searched and there is no updated version for Windows 2012 or 2012 R2.
I also noticed that the following command can help control memory usage. With value = 1, NTFS uses the default amount of paged-pool memory:
fsutil behavior set memoryusage 1
You need to reboot after changing the value.
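For reference, the two registry values and the fsutil setting mentioned above can be applied from an elevated command prompt roughly as follows. This is a sketch, not an official recommendation: back up the keys first and test on a non-production machine.

```shell
:: Disable the large system cache (0 = favor application working sets over the file cache)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 0 /f

:: LanmanServer Size: 1 = minimize memory used, 2 = balance, 3 = maximize for file sharing
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 1 /f

:: Have NTFS use the default amount of paged-pool memory
fsutil behavior set memoryusage 1

:: A reboot is required for these changes to take effect
```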

Similar Messages

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed, and the second is that there are periodic outages where file open/close/save operations may appear to hang, and also where programs that rely on
    network connectivity, e.g. email, appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait period can be up to 1 minute.
    The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage
    Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PCs and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time would hang when the move operation was started and it would take at least a minute for it to start responding.
    Ordinarily we work with many files.
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these large file copies?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
    Hi JohnB352,
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out with a lot of hard faults (varies between several hundred to several thousand), recorded during normal usage. The Disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

  • Qosmio X500-148 - Large file copy hangs

    Large file copy (~5-10 GB) between USB or FireWire disks hangs the PC. It goes up to 50%, then slows down, and there is no possibility to do anything, including cancelling the task or opening the browser, Windows Explorer, or Task Manager.
    Event viewer does not show anything strange.
    This happens only with Windows 7 64bit - no problem with Windows 7 32bit (same external hardware). No problem when copying between internal PC disk.
    The PC is very powerful - Qosmio X500-148

    The external Hardware is:
    1.5 TB WD Hard disk USB
    1.0 TB WD + 250GB Maxtor + 250GB Maxtor on Firewire
    I have used standard copy feature - copy and paste - as well as
    Viceversa Pro for folder sync with same results
    Please note that the same external configuration was running properly on a single-core PC - a Satellite - running Win7 x86 without problems. Since I moved to Win7 x64 on my brand new X500-148, I have had a great deal of copying problems.
    I have installed all Windows updates, and the Event Viewer doesn't show anything strange when copying.

  • Can't preview files from a network drive to a local CF9 server.

    Hi,
    I have the following set up:
    CF9 Local Dev version
    CF Builder 2.
    Project files are in a network drive N:\project
    I have RDS set up and everything seems to work ok, view DSN, files, etc on my local server.
    In the URL prefix section I even created a map:
    Local Path: N:\project,
    URL prefix: http://localhost:8500/project/
    There is no FTP set up on the CF9 server.
    The problem is that when I try to preview a file that I am working on from N:\project, it doesn't show on my local server. Previewing works if I manually move the file to the CF9 server. But it is the automatic uploading (or moving) of the file to the local CF9 server that doesn't seem to work.
    BTW, I have a similar set up in DreamWeaver, where I am editing the same project (or site in this case) off of  N:\project, uploading to the same local CF9 server through RDS, and it works fine.
    I know that if in CF Builder you move the project to under the web root it will work but that would not work for us, since we need to keep the project source files on a network drive for sharing/backup purposes.
    Has anyone been able to successfully preview source files from a network drive on a local CF9 server?
    Thanks in advance,

    Hi again,
    After doing some more googling, I realize that for files to be previewed they MUST be under wwwroot. This wasn't too clear in the CF documentation.
    So my new question is:
    Is there a way, once I save a file, to automatically copy it from one location (N:\project) to another (C:\ColdFusion9\wwwroot\project)?
    I think there is a way to do it with Ant or some sort of Eclipse feature, but I am not too familiar with either. Could someone point me in the right direction?
    Thanks,
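    Ant or an Eclipse builder would both work; as a rough illustration of the idea, here is a minimal polling sync in Python. The paths are the ones from the question, and the script itself is a sketch of the technique, not a supported tool.

```python
import os
import shutil
import time  # used by the polling loop sketched at the bottom

def sync_tree(src, dst):
    """Copy files from src to dst when they are new or have a newer mtime."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dst_file = os.path.join(target_dir, name)
            if (not os.path.exists(dst_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                shutil.copy2(src_file, dst_file)  # copy2 preserves mtimes
                copied.append(dst_file)
    return copied

# Usage sketch (paths from the question; adjust to your setup):
# while True:
#     sync_tree(r"N:\project", r"C:\ColdFusion9\wwwroot\project")
#     time.sleep(2)  # poll every couple of seconds
```

    A dedicated file-watcher (or an Ant `copy` task wired into the IDE) would react faster than polling, but the polling version has no dependencies and is easy to reason about.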

  • Copying large file sets to external drives hangs copy process

    Hi all,
    Goal: to move large media file libraries for iTunes, iPhoto, and iMovie to external drives. This drive will then serve as a media drive for a new 2013 iMac. I am attempting to consolidate many old drives accumulated over the years onto newer and larger drives.
    Hardware: moving from a Mac Pro 2010 to a variety of USB and other drives for use with a 2013 iMac. The example below is from the boot drive of the Mac Pro. Today, the target drive was a 3 TB Seagate GoFlex USB 3 drive formatted as HFS+ Journaled. All drives are this format. I was using the Seagate drive on both the Mac Pro (USB 2) and the iMac (USB 3). I also use a NitroAV FireWire and USB hub to connect 3-4 USB and FW drives to the Mac Pro.
    OS: Mac OS X 10.9.1 on Mac Pro 2010
    Problem: Today, trying to copy large file sets such as the iTunes, iPhoto, and iMovie libraries from internal Mac drives to external drive(s) hangs the copy process (forever). This seems to mostly happen with very large batches of files: for example, an entire folder of iMovie events, the iTunes library, or the iPhoto library. The symptom is that the process starts and then hangs at a variety of different points, never completing the copy. It requires a force quit of Finder and then a hard power reboot of the Mac.
    Recent examples today were (a) a hang at 3 GB for a 72 GB iTunes file; (b) a hang at 13 GB for the same 72 GB iTunes file; (c) a hang at 61 GB for a 290 GB iPhoto file.
    In the past, I have had similar drive-copying issues with a variety of USB 2, USB 3 and FW drives (old and new), mostly on the Mac Pro 2010. The libraries and programs seem to run fine with no errors. Small folder copying is rarely an issue. The drives are not making weird noises. The drives were checked for permissions and repaired. An early trip to the Genius Bar did not find any hardware issues on the internal drives.
    I seem to get hard drives "dropping off" (unmounting themselves) and other drive-copy hangs more often than I should. These drives seem to be OK much of the time, but they do drop off here and there.
    Attempted solutions today: (1) Turned off all networking on the Mac (Ethernet and Wi-Fi). This appeared to work and allowed the 72 GB iTunes file to fully copy without an issue. However, on the next several attempts to copy the iPhoto library, the hangs returned (at 16 and then 61 GB) with no additional workarounds. (2) A restart changes the amount copied per attempt, but it still hangs. (3) The last line of a crash report said "Thunderbolt", but the Mac Pro has no Thunderbolt or Mini DisplayPort. I did format the Seagate drive on the new iMac, which has Thunderbolt. ???
    Related threads were slightly different. Any thoughts or solutions would be appreciated. Is there better copy software than Apple's Finder? I want the new Mac to be clean and thus did not do data migration. Should I do that only for the iPhoto library? I'm stumped.
    It seems like more and more people will need to move large media file sets to external drives as they record more and more iPhone movies (my thing) and buy new Macs with smaller flash storage. Why can't the copy process just skip the parts it can't copy and continue, putting an X on the photos/movies that didn't make it?
    Thanks -- John
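    Finder offers no skip-and-continue mode, but the behaviour the poster asks for is easy to sketch in a script. A minimal Python version that copies a tree, skips unreadable files, and reports the failures at the end (an illustration only, not a replacement for proper backup software):

```python
import os
import shutil

def copy_tree_skip_errors(src, dst):
    """Copy a directory tree, skipping failing files instead of aborting.

    Returns (copied, failed) so the files that "didn't make it" can be
    reported at the end rather than killing the whole copy.
    """
    copied, failed = [], []
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            try:
                shutil.copy2(src_file, os.path.join(target_dir, name))
                copied.append(src_file)
            except OSError as exc:
                failed.append((src_file, str(exc)))  # note it and move on
    return copied, failed
```

    Note that this works around a failing file, not a hanging bus: if the drive drops off mid-transfer, a script will stall just like Finder does.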

    I'm having a similar problem.  I'm using a MacBook Pro 2012 with a 500GB SSD as the main drive, 1TB internal drive (removed the optical drive), and also tried running from a Sandisk Ultra 64GB Micro SDXC card with the beta version of Mavericks.
    I have a HUGE 1TB Final Cut Pro library that I need to get off my LaCie Thunderbolt drive and moved to a 3TB WD USB 3.0 drive.  Every time I've tried to copy it, the process would hang at some point, roughly 20% of the way through, then my MacBook would eventually restart on its own.  No luck getting the file copied.  Now I'm trying to create a disk image using Disk Utility to get the file from the Thunderbolt drive and saved to the 3TB WD drive. It's been running for half an hour so far and it appears that it could take as long as 5 hours to complete.
    Doing the copy via disk image was a shot in the dark and I'm not sure how well it will work if I need to actually use the files again. I'll post my results after I see what's happened.

  • Windows 7 64-bit Corrupting (Altering) Large Files Copied to External NTFS Drives

    http://social.technet.microsoft.com/Forums/en-US/w7itproperf/thread/13a7426e-1a5d-41b0-9e16-19437697f62b/
    Continuing from this thread: I have the same problems. The corrupted files are only archives (zip or 7z) and .exe files; there are no problems copying large files like movies, for example, and no problem when copying via Linux on the same laptop. It is a Windows issue. I
    have all updates installed, nothing missing.

    OK, let's be brief.
    This problem has been annoying me for years. It is totally reproducible, although random. It happens when copying to external drives (mainly USB) when they are configured for "safe removal". I have had issues copying to NTFS and FAT32 partitions. I have had issues
    using 4 different computers, from 7 years old to brand new, using AMD or Intel chipsets and totally different USB controllers, and using many different USB sticks, hard disks, etc. The only things those computers have in common are Windows 7 x64 and the external
    drives being optimized for "safe removal". Installing TeraCopy reduces the chances of data corruption, but does not eliminate them completely. The only real workaround (tested for 2 years) is activating the write cache in the Device Manager properties of the
    drive. That way, Windows uses the same transfer mechanism as for internal drives, and everything is OK.
    MICROSOFT guys, there is a BIG BUG in the Windows 7 x64 external drive data transfer mechanism. There is a bug in the cache handling of the safe removal function. Nobody listens; I've spent years talking about this in forums. It is a very dangerous bug
    because it is silent, and many non-professional people are experiencing random errors in their backup data. PLEASE, INVESTIGATE THIS. YOU NEED TO FIX SUCH AN IMPORTANT BUG. IT IS UNBELIEVABLE THAT IT IS STILL THERE SINCE 2009!!!
    Hope this helps.

  • Copy large files to external hard drive

    How do I copy files from one external hard drive to another external hard drive via MacBook Air?

    Gregory Taddeo wrote:
    How do I copy files from one external hard drive to another external hard drive via MacBook Air?
    If it's from one drive TO A BLANK / NEW HD, and it's a LOT of files (400+ GB),
    then just use SuperDuper cloning software to copy it all in one click, and sit back and let it work.
    It is free to use:
    http://www.shirt-pocket.com/SuperDuper/superduperdescription.html

  • I cannot copy large files from my hard drive

    My MacBook Pro 15 is on OS X 10.8.5 (12F37) and I reformatted my external hard drive to Mac OS Extended (Journaled) but cannot copy my virtual machine 53GB file from my hard drive to the external drive.

    iheartapple1970 wrote:
    he wasn't real clear on exactly what virtual machine he was trying to move, so I said that IF it's just a Windows partition (not with Parallels or VMware Fusion), then he would need to reformat the drive to the format I suggested for it to move and still be usable with the Mac as well. But thanks for the additional information for those who may not have known it.
    If it was a partition it would not be a VM file.
    This is what you said:
    if you are trying to copy a virtual windows machine
    I see no mention of a Windows partition here.
    As I said before, you should test your suggestions to ensure that they are correct, and you should do that prior to suggesting them. It would also help if you understood the difference between a file and a partition.

  • iTunes with 10.3x not transferring large files to external hard drive

    I have been unable to transfer any large files (basically anything over two artists or so) from the old external hard drive for my old Windows XP computer to my new external hard drive (both Iomega and Western Digital have been tried) without the transfer cutting out and popping up a message to the effect of "iTunes cannot copy or add file to folder", something like that. Am I missing something that I need to be doing? I've got about 33,000 items, and the thought of doing them 10-15 at a time really isn't something I want to do. But as of this point, I can't find a way to simply transfer the whole iTunes library from the old external drive to the new one. Thanks for any help.

    You can transfer purchased content ONLY! Unfortunately, you have no choice as to what gets transferred. Here are some workarounds:
    1. Delete off the iPod Touch/iPhone/iPod, then transfer.
    2. Transfer all content, then delete from the computer.

  • X-Serve RAID unmounts on large file copy

    Hi
    We are running an X-Serve RAID in a video production environment. The RAID is striped RAID 5 with 1 hot spare on either side and concatenated with Apple Disk Utility as RAID 0 on our G5, to appear as one logical volume.
    I have noticed, lately, that large file copies from the RAID will cause it to unmount and the copy to fail.
    In the console, we see:
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 5 (External Bus Reset) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: External Bus Reset for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is down for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 8 (Loop State Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Loop Initialization Packet for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 7 (Link Status Change) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: FusionFC: Link is active for SCSI Domain = 1.
    May 4 00:23:37 Orobourus kernel[0]: FusionMPT: Notification = 6 (Rescan) for SCSI Domain = 1
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::completeRAIDRequest - error 0xe00002ca detected for set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7), member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB, set byte offset = 1674060574720.
    May 4 00:23:37 Orobourus kernel[0]: disk5: I/O error.
    May 4 00:23:37 Orobourus kernel[0]: disk5: device is offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::recover() member CBDFD75C-5DC8-4089-A7AE-1EFBE0B1A6EB from set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7) has been marked offline.
    May 4 00:23:37 Orobourus kernel[0]: AppleRAID::restartSet - restarting set "tao.av.complexity.org" (868584D1-E485-4B37-AFCE-E7AFC3AD5BB7).
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: dojnlio: strategy err 0x6
    May 4 00:23:37 Orobourus kernel[0]: jnl: end_transaction: only wrote 0 of 12288 bytes to the journal!
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: disk5: media is not present.
    May 4 00:23:37 Orobourus kernel[0]: jnl: close: journal 0x7b319ec, is invalid. aborting outstanding transactions
    Following such an ungainly dismount, the only way to get the RAID back online is to reboot the G5 - attempting to mount the volume via Disk Utility doesn't work.
    This looks similar to another 'RAID volume unmounts' thread I found elsewhere in the forum - it appeared to have no solution apart from contacting Apple.
    Is this the only recourse?
    G5 2.7 6.5Gb + X-RAID 3.5Tb   Mac OS X (10.4.9)   Decklink Extreme / HD

    There should be no need to restart the controllers to get additional reporting -- updating the firmware restarts them.
    If blocks are found to be bad during reconditioning, then the data in the blocks should be relocated, and the "bad" blocks mapped out as bad. The Xserve RAID has no idea whatsoever what constitutes a "file," given it doesn't know anything about the OS (you can format the volumes as HFS+, Xsan, ZFS, ext3, ReiserFS, whatever... this is done from the host and the Xserve RAID is completely unaware of this). So if the blocks can be successfully relocated (shouldn't be too hard, given the data can be generated from parity), then you're good to go. If somehow you have bad blocks in multiple locations on multiple spindles that overlap, then that would of course be bad. I'd expect that to be fairly rare though.
    One thing I should note is that drives don't last forever. As the RAID ages, the chances of drive failure increase. Typically drive lifespan follows a "bathtub curve," as if you imagined looking at a cross-section of a claw-foot tub. Failures will be very high early (due to manufacturing defects -- something testing known as "burn-in" should catch -- and I believe Apple does this for you), then very low for a while, and then will rise markedly as drives get "old." Where that point is would be a subject for debate, but typical data center folks start proactively replacing entire storage systems at some point to avoid the chance of multiple drive failure. The Xserve RAIDs have only been out for 4 years, so I wouldn't anticipate this yet -- but if it were me I'd probably start thinking about doing this after 4 or 5 years. Few experienced IT data center folks would feel comfortable letting spindles run 24x7x365 for 6 years... even if 95% of the time it would be okay.

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer.  Files are usually 1-2 GB and I have to transfer about 150 GB of them.  The files are stored on one external 500 GB HD and are being transferred to another identical 500 GB HD over FireWire.  I also have another external FireWire drive from which I transfer about 20-30 GB of the same type of 1-2 GB files to the internal HD of my laptop.  If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer, then fails.  If I use cp in the terminal, the transfer is fine.  When I was still distro-hopping and using Fedora 10 with KDE 4.2.0, I had this same problem.  When I use GNOME, this problem is nonexistent.  I do this often for work, so it is a very important function to me.  All drives are FAT32, and there is no option to change them, as they are used on several different machines/OSes before all is said and done, and the only file system that all of the machines will read is FAT32 (thanks to one machine, of course).  In many cases time is very important for the transfer, and that is why I prefer to do the transfer in a desktop environment, so I can see progress and ETA.  This is a huge deal breaker for KDE and I would like to fix it.  Any help is greatly appreciated, and please don't reply "just use GNOME".

    You can use any other file manager under KDE that works, you know? Just disable Nautilus taking command of your desktop and you should be fine.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't some file synchronisation tool work for this task too? (Someone with more knowledge, please tell me if this would be a bad idea.)

  • Large file copy fails

    I'm trying to move a 60 GB folder from a NAS to a local USB drive, and regardless of how many times I try to do this, it fails within the first few minutes.
    I'm on a managed Cisco Gigabit Ethernet switch in a commercial building, and I have hundreds of users having no problems with OS 10.6, Windows XP and Windows 7, but my Yosemite system is not able to do this unless I boot into my OS 10.6 partition.
    Reconfiguring the switch is not a viable option; I can't change things on a switch that would jeopardize hundreds of users to fix one thing on a Mac testing the legitimacy of OS 10.10 in a corporate setting.

    The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 has 300 Mbit/s throughput, but as I understood it, traffic would still be serviced and would simply bypass the inspection process if that were exceeded. Maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset; I may try a sniffer.
    I do have TAC involved, but I like to try to utilise the knowledge of other expert users like yourself to rectify issues. Thanks for your help. If you have any other comments, please let me know; I will certainly post my findings if you are interested.

  • Files Copied to USB Drive Disappear

    I attached a WD USB drive formatted as FAT32 to my new AirPort Extreme but cannot seem to get it to work correctly. I try to copy a file to it; it appears to copy fine (the copy finishes, the file shows up on the drive), then a few seconds later the file disappears. I then hooked the drive directly to my Mac and, sure enough, the file is not there. I hooked it back up to the AEBS and tried copying a file to it from a PC. Same as before: all appears to go well, but the file doesn't actually get copied. Any idea what might cause this and how to fix it?

    This morning I was playing around some more and found that some of the files I tried to copy to the drive now show up when the drive is connected to my Mac but are still not visible when connected to the Airport Extreme. I have a different drive that works fine but it is smaller (500GB vs 750GB).

  • Any easy way to remove files copied to second drive on import that were later deleted from catalog?

    If I do an import from my memory card, choose my external hard drive as a second location for the import, and then go into my catalog and delete the bad photos, is there any quick way to also remove those files from the backup on the external hard drive? I've got a ton of wasted space on my external HD due to things that I've deleted and don't know of any simple way to get rid of them short of formatting the drive and doing another export, which takes a long time and isn't something I want to do all that often. Thanks for your help.

    I'm afraid there is no easy way to do that. The problem is that in using the "make second copy" option as your prime image backup, you're not using the function in quite the way the developers intended. Their purpose was to provide a method to take an immediate extra copy of the contents of the memory card, allowing you to reuse that memory card before your "regular" backup routine has been run. So yes, they have assumed that you would have a separate backup procedure for your image files, which allows for two things:
    1. That separate backup process could use an incremental backup utility, which means after the very first run all subsequent runs would backup only new or changed files....and you could also set it to delete from the backup any files you cull from the Lightroom library. Thus your backup wouldn't take up any more space than your originals.
    2. Once you've run the backup utility (after first cull so the duds never even get backed up?), the "second copy" can be deleted at your leisure.

  • ITunes hangs when syncing large files to network attached drives, ideas?

    I've done a lot of research on my problem and have come up blank so far. This is my setup and situation. I use a macbook pro and iTunes 10, but since I use a small SSD, I do not have much room to keep iTunes media on my disk. So, I use an external USB drive, formatted HFS+ J, accessible over my AirPort Extreme as a networked drive and am trying to keep my media there by setting my iTunes Media folder location to my "/Volumes/<drive name>/iTunes/iTunes Media" under iTunes>Preferences>Advanced. For the most part this works, but there are a few gotchas that are making this setup unusable.
    The biggest problem is if I try to sync a large file (for example a 250MB app) with iTunes to my networked storage, the culprit almost always being a large iOS app in Mobile Application folder, iTunes will hang with the spinning color wheel, and in the toolbar shows "Not Responding". This also hangs the Finder. After this situation, I pretty much have to force quit everything and reboot my machine to get the Finder and iTunes back to health.
    So, I thought that I would try keeping the iOS apps on my local drive and just move my music out to a network drive. But iTunes doesn't allow any granular configuration of the sort (big FAIL on Apple's part), and I've read up that iTunes does not play nice with symbolic links or else I would keep the iTunes Media location set to my local drive and symlink the Music folder out to my network drive.
    So, given that 1) I don't have enough space to keep everything local, 2) trying to keep everything on NAS causes my machine to fail when syncing my iPhone or trying to consolidate my iTunes library because of large file transfers over the network making thing go insane, and 3) iTunes doesn't easily let you manage your library with symlinks, any ideas on what I should do?

    I assume you are connecting wirelessly to the AEBS, correct?
    As a test, run an Ethernet cable from your Mac to one of the LAN ports of the AEBS (you might want to turn off AirPort on your machine), then try again.
    Does the problem persist?
