Very Slow performance with large files

Using iTunes with my Apple TV, I've been in the slow and painful process of digitizing my DVD library, and I ran into a problem after converting the LOTR (extended edition) trilogy. The files play fine in QuickTime 7.3.1 and I can add them to the iTunes library, but when I attempt to edit any information within iTunes and then save, iTunes freezes for several minutes before either recovering or crashing (the odds are around 50/50). Stranger still, if I just add a file to the library and try to play it, the movie doesn't show up on the Apple TV either.
Output format of the movie: MP4/H.264, native 720x480 resolution, 23.97fps, 2Mbps video stream, 128kbps audio stream (the limit of WinAVI).
Output Size: 4.4GB
Length: 4 hours 24 minutes
Software versions: iTunes 7.3.1, QuickTime 7.3.1
OS: Windows XP Pro SP2 (current patch level as of 7/15).

Is it possible that iTunes has a 4 GB file limit? I'm trying to shed a little light on the problem, because iTunes Help doesn't say.
Cheers
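
One thing worth checking on a 4.4GB MP4: files over 4GB need 64-bit chunk offsets (a co64 box instead of the 32-bit stco box), and some converters write stco regardless, which confuses some players. Below is a rough diagnostic sketch in Java (box names are from the MP4 spec; this is a guess at a possible cause, not a confirmed fix):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class Mp4OffsetCheck {
    // Container boxes whose payload is just a list of child boxes.
    static final Set<String> CONTAINERS = new HashSet<>(
            Arrays.asList("moov", "trak", "mdia", "minf", "stbl"));

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
            walk(f, 0, f.length());
        }
    }

    // Walk the boxes in [start, end), recursing into known containers.
    static void walk(RandomAccessFile f, long start, long end) throws IOException {
        long pos = start;
        while (pos + 8 <= end) {
            f.seek(pos);
            long size = f.readInt() & 0xFFFFFFFFL;               // 32-bit box size
            byte[] t = new byte[4];
            f.readFully(t);
            String type = new String(t, StandardCharsets.US_ASCII);
            long header = 8;
            if (size == 1) { size = f.readLong(); header = 16; } // 64-bit largesize
            else if (size == 0) { size = end - pos; }            // box runs to end of file
            if (type.equals("stco"))
                System.out.println("stco: 32-bit chunk offsets (suspect in a >4GB file)");
            else if (type.equals("co64"))
                System.out.println("co64: 64-bit chunk offsets (fine for >4GB files)");
            else if (CONTAINERS.contains(type))
                walk(f, pos + header, pos + size);
            if (size < header) break;                            // malformed box; stop
            pos += size;
        }
    }
}

If it prints stco on a file this size, re-muxing with a tool that writes co64 would be worth trying.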

Similar Messages

  • Slow Performance with large library (PC)

    I've been reading many posts about slow performance, but didn't see anything addressing this issue:
    I have some 40,000 photos in my catalog, and despite generating previews for a group of directories, LR is still very slow when scrolling through the pics in these directories.
    When I take 2,000 of these pics and import them into a new catalog - again generating previews - scrolling through the pics is much, much faster.
    So is there some upper limit of recommended catalog size for acceptable performance?
    Do I need to split my pics up by year? It seems counterproductive, but it's the only way to see the pics at an acceptable speed.

    I also have serious performance issues, and I don't even have a large catalog - only around 2,000 pictures, and the db file itself is only 75 MB. I've run optimization; it didn't help. What I found is that the CPU usage of LR 1.1 goes up and STAYS up around 85% for 4-5 minutes after program start, and during that time zooming in to an image can take 2-3 minutes! After 4-5 minutes, CPU usage drops to 0%, the background task (whatever LR does during that time!) has finished, and I can work very smoothly. Preview generation cannot be the problem, since it also happens when I work in a folder that already has all previews built, close LR, and restart it immediately: LR loads, and AGAIN I'll have to wait 4-5 minutes until CPU usage has dropped so I can continue working with my images smoothly.
    This is very annoying! I will stop using LR and go back to Bridge/ACR/PS; that is MUCH, much faster. BUMMER!

  • VERY slow performance with CS4 + Save for web

    Note: I have done a search for similar issues but can find nothing regarding CS4 and this particular issue. I'm using a MacBook Pro 2.4GHz with 4GB of RAM, and resources are plentiful when the issue occurs.
    Issue
    My problem is that under certain circumstances the 'save as web' function can take up to 3 minutes before the dialog box appears. I've had this for some time, but only in the last couple of days did I notice something that might be relevant:
    Works
    1. Open 3 large TIFF files of approx. 120MB each (mine are 3 processed RAW files from Capture One, 4200x5600 @ 300dpi)
    2. Create a new adjustment layer for each image
    3. Resize each image to 72dpi @ 800x600
    4. Run a SmartSharpen & flatten layers
    5. In each tab (for each image) do 'save as web'
    6. 'Save as web' dialog appears in expected time
    VERY slow
    1. Open 3 large TIFF files of approx. 120MB each (mine are 3 processed RAW files from Capture One, 4200x5600 @ 300dpi)
    2. Create a new adjustment layer for each image
    3. Resize each image to 72dpi @ 800x600
    4. Run a SmartSharpen & flatten layers for each image
    5. In first tab do 'save as web'
    6. 'Save as web' dialog appears in expected time
    7. Close this document (tab)
    8. Go to the next image (tab) and do 'save as web'
    9. Dialog box can take anything from 20secs to 3mins to appear.
    10. Close this document (tab)
    11. Go to the last image (tab) and do 'save as web'
    12. Dialog box can take anything from 20secs to 3mins to appear.
    As I can easily go through this process for 50-60 images, you can imagine that it can waste over an hour of my time waiting for Photoshop to get its act together. I assume it's something to do with memory allocation, or that it's processing previews for all three tabs on the first 'save as web', but in any case it's very annoying. There seems to be nothing obviously wrong resource-wise at the time of doing these.
    Would be interested if anyone else has this issue!
    Thanks!

    I had the same problem when I set up a new computer this week. Here's what I had installed:
    Photoshop CS5
    Suitcase Fusion 3
    The problem was that when I was in Photoshop and used the "Save for Web" export screen, the dropdown menu for the different export types (JPG, PNG, GIF, etc) would take an extremely long time to update the preview of the graphic.
    After a few days of research, I found that Suitcase was conflicting with Photoshop. Here's how you fix the problem (or at least here's what I did in my situation):
    Open Suitcase
    Go to "Tools", click "Manage Plugins"
    Deactivate the Photoshop plugin
    This fixed the problem immediately, and now my Photoshop "Save For Web" feature is working really fast. Hope this helps anyone else in the same boat.

  • Very slow working with eps files over new network

    Please help!!!
    Our IT department has recently upgraded us from an OS X server to a 2-terabyte SNAP Server 520.
    I am using the only Intel Mac in the department (we have another 10+ G5 PPCs running 10.4), and I am the only one using OS X 10.5. Since changing to the new server, I am also the only one with several file saving/opening issues.
    I believe this is a network issue.
    It mostly affects Adobe products, and we all know Adobe will not support working over a network. Due to the volume of files, I simply cannot drag files to my HD and work locally; it is not practical. I also know that we have not had any issues in the past, and I don't see why I can't get to the root of the problem.
    **The most obvious issue is Illustrator .eps files.**
    I can open and save .ai files in good time, both locally and over the network; there is no difference either way. HOWEVER, if it is an .eps file, working over the network is not practical. I get the spinning wheel for about 2 minutes each time I open, save, or make a change. It is VERY slow. Working on .eps files locally is fine (about the same speed as .ai files), but as soon as it hits the server, it is painfully slow to do anything.
    Additionally, when saving Photoshop, Illustrator, InDesign AND Quark 7 files (direct to the network), I regularly get "Could not save because write access was not granted", "Could not save because unexpected end-of-file was encountered", or "Could not save because the file is already in use or was left open" type errors.
    I simply then click 'OK', and then hit save again, and it either gives me the same message, or it goes into 'save as' mode which lets me save the file (same name, and same location, but the original file has disappeared).
    I am connected to the server IP address through afp:// but have also tried cifs://. IT have removed all access to the server through smb://, so I cannot try this.
    ANY help is appreciated.

    With regard to the EPS issue, I think I may have found the source of the problem for SMB users: the Suitcase Fusion plug-in.
    Did you make any progress on these issues?

  • Linux AMD64, JDK 1.5_03: slow performance with large heap

    Tomcat app server running on JDK 1.4.2 on 32-bit Linux, configured with -Xmx1750m and -Xms1750m, runs fast. It returns 2MB of data through an HttpServlet in under 30 secs.
    Moving the same app server to 64-bit on JDK 1.5.0_03, configured with -Xmx13000m and -Xms10000m, the same request for data takes 5-20 minutes, and I'm not sure why the timing is inconsistent. If the app server is configured with -Xmx1750m and -Xms1750m, performance is about 60 secs or less.
    I checked the Java settings through jstat; -d64 is the default. Why would increasing the heap cause such slow performance? Physical memory on the box = 32MB.
    It looks like it's definitely Java-related, since a Perl app making an HTTP request to the server takes under a minute to run. We moved to 64-bit to get around the 1.7GB limitation of 32-bit Linux, but now performance is unacceptable.
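
    One quick sanity check for whether the flags are actually taking effect (a minimal sketch; it prints what the running VM really got, not what was requested):

    public class HeapInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024L * 1024L;
            // Heap sizes the VM actually received from -Xmx / -Xms.
            System.out.println("max heap (MB):   " + rt.maxMemory() / mb);
            System.out.println("total heap (MB): " + rt.totalMemory() / mb);
            System.out.println("free heap (MB):  " + rt.freeMemory() / mb);
            // 32- vs 64-bit check; "sun.arch.data.model" is Sun-VM specific.
            System.out.println("data model: " + System.getProperty("sun.arch.data.model"));
            System.out.println("vm: " + System.getProperty("java.vm.name")
                    + " " + System.getProperty("java.version"));
        }
    }

    Run it with the same flags as the app server (java -d64 -Xms10000m -Xmx13000m HeapInfo). If the max heap comes back as expected, the next suspect is GC time at that heap size; watching jstat -gcutil <pid> 5000 during a slow request would show whether full collections are eating the 5-20 minutes.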

    I agree, an AMD64 with only 32 MB of memory would be a very strange beast indeed; heck, my graphics card has 4 times that, and it's not the most up-to-date.
    Keep in mind that switching to 64-bit does not only mean a bigger memory space but also bigger pointers (below the Java level) and probably more padding in your memory, which leads to bigger memory consumption, which in turn leads to more bus traffic, which slows stuff down. This might be a cause for your slowdown, but it should not usually result in a slowdown as severe as the one you noticed.
    Maybe it's also a simple question of a not-yet-completely-optimized JDK for amd64.

  • Slow performance with large images working at 300 DPI

    I'm working on creating a poster for a film. I have my workspace set up for 24" x 36" movie poster size at 300 DPI. I have an Intel i5 2500K processor @ 3.3 GHz. I have 4 gigs of RAM, and I have PS set to use 2/3 of that. I have a scratch disk set on a large separate drive. The program runs very slowly and takes forever to save or render a resize of any image. I'm wondering if there's a way to decrease the "size" of the images (in other words the data, so the layers aren't ginormous in terms of data) but still be able to work at 300 DPI?
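
    For a sense of scale, the raw pixel math (a minimal worked example, assuming a single flattened 8-bit RGB layer):
    24 in x 300 dpi = 7200 px wide; 36 in x 300 dpi = 10800 px tall
    7200 x 10800 = 77.76 million pixels
    77.76 Mpx x 3 bytes/px = roughly 233 MB per flattened layer, before any layer or history overhead
    With 4GB of RAM and PS capped at 2/3 of it, a few layers plus history states exceed physical memory and everything spills to the scratch disk; that, rather than any single setting, is the likely source of the slowness.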

    InDesign costs something like two thirds the price of Photoshop, so it's expensive for a one-off job. It's sacrilege to mention it in these forums, but if you have a high-end Office version installed, you might have Publisher on your system. It would do the job easily, and is probably more intuitive to learn than InDesign. But if you're serious about it, InDesign is ten times the program, and the printer won't smile knowingly when you deliver the image file and they ask how it was created.

  • Finder Very Slow - Folders with Many Files

    I've recently noticed that whenever I use Finder to access folders that contain a large number of files (100 or more files in a folder)... it seems to take an inordinately long time for the files to be displayed. Being relatively new to the Mac world and maybe not fully cognizant of some of the finer "tweaking" options....
    Is there any way to speed up Finder's response on these types of (large file count) folders?
    (When using Parallels/Windows on my Mac, I can open these same folders almost instantly; there is no delay when using the My Computer or Windows Explorer apps.)

    Rick,
    The folders (that seem to take sooo long to open with Finder) are on my Mac and some are also on a networked drive. The folders were created using Finder. The files contained in these folders are various types, JPG images, MP3, DOC, XLS Spreadsheets, and/or miscellaneous data files.
    The only thing that all of these problematic folders have in common is that they contain a large number of files; some folders with as many as 400 files....

  • Very slow performance with Motion

    Using Motion 3.0.2 on a new-gen Mac Pro, 2 x 2.4 quad-core Intel Xeon with 8GB RAM, OS X 10.6.4.
    I'm having very poor performance in Motion. The system was set to use 80% of the memory, so I tried bumping that up to 90%, and there was no change. Could this be the performance of the graphics card? I have recently seen a post regarding the 5770 card; could this be related to these new cards?
    The performance has been bad when using lots of particles in a project, but also when just using two lines of text in the canvas, so I don't believe it is because I'm pushing Motion to the extreme.

    Text is a real performance killer in Motion because of Apple's insistence on redrawing the vectors in every frame. Unless I need remarkable interactions or 3D stuff, I usually export text layers as movies and bring them back in for that very reason.
    No one really knows how the 5770 and 5870 cards compare directly against our older 4870s. The online test geeks do not supply enough information to be helpful in evaluating their applicability to Motion.
    bogiesan

  • Slow Performance with large OR query

    Hi All;
    I am new to this forum... so please tread lightly on me if I am asking some rather basic questions. This question has been addressed before in this forum more than a year ago (http://swforum.sun.com/jive/thread.jsp?forum=13&thread=9041). I am going to ask it again. We have a situation where we have large filters using the OR operator. The searches look like:
    (&(objectclass=something)(|(attribute=this)(attribute=that)(attribute=something)....))
    We are finding that the performance difference between 100 attribute clauses and 1 attribute clause in a filter is significant. In order to increase performance, we have to issue the following filters in separate searches:
    (&(objectclass=something)(attribute=this))
    (&(objectclass=something)(attribute=that))
    (&(objectclass=something)(attribute=something))
    The first (combined) search takes an average of 60 seconds, while issuing the separate searches takes an average of 4 seconds in total. This is a large performance improvement.
    We feel that this solution is not desirable because:
    1. When the server is under heavy load, this solution will not scale very well.
    2. We feel we should not have to modify our code to deal with a server deficiency
    3. This solution creates too much network traffic
    My questions:
    1. Is there a query optimizer in the server? If so, shouldn't the query optimizer take care of this?
    2. Why is there such a large performance difference between the two filters above?
    3. Is there a setting somewhere in the server (documented or undocumented) that would handle this issue (i.e. average query size)?
    4. Is this a known issue?
    5. Besides breaking up the filter into pieces, is there a better way to approach this type of problem?
    Thanks in advance,
    Paul Rowe
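
    For anyone who wants to reproduce the split-filter workaround from code, here is a minimal JNDI sketch (the host, base DN, and attribute names are placeholders taken from the example filters above):

    import java.util.ArrayList;
    import java.util.Hashtable;
    import java.util.List;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class SplitOrSearch {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // placeholder host
            InitialDirContext ctx = new InitialDirContext(env);

            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // The values that would have been OR'ed together in one big filter.
            String[] values = { "this", "that", "something" };
            List<SearchResult> results = new ArrayList<>();
            for (String v : values) {
                // One small AND filter per value instead of one large OR filter.
                String filter = "(&(objectclass=something)(attribute=" + v + "))";
                NamingEnumeration<SearchResult> e =
                        ctx.search("dc=example,dc=com", filter, sc); // placeholder base DN
                while (e.hasMore())
                    results.add(e.next());
            }
            ctx.close();
            // Note: an entry matching more than one value shows up once per search,
            // so de-duplicate by DN if that matters.
            System.out.println("total entries: " + results.size());
        }
    }

    Note that this trades one round trip for N, which is exactly the network-traffic concern in point 3; it's a client-side workaround, not a server-side fix.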

  • Radeonhd: very slow performance with 2D accel enabled

    I have a Radeon HD 4850 in my desktop, and I'm having serious issues trying to get 2D support enabled with the radeonhd driver.
    The driver works fine without EXA and DRI enabled. X seems smooth and snappy. I get a little tearing with X11 and Flash video, but I bought an ATi card; I kind of expect it. I see no discernible difference in performance between the radeon and radeonhd drivers.
    However, if I turn the 2D accel options on, it feels like I'm using a PII or something. Windows tear all over the place, and everything in X slows way down. I did some cursory searching, and it seems like this problem was solved a while ago and shouldn't show up in the up-to-date packages. I just ran pacman -Syu today; everything is synced.
    I feel like I shouldn't be having these kinds of performance issues, which probably means I missed something somewhere when I was setting things up. I'm not a brand-new Arch user, but this card is new to me and I haven't fiddled with ATi drivers very much.
    Anyone have any suggestions? I'm sort of at a loss here.
    Last edited by f0nd004u (2009-10-08 10:33:13)

    drm-radeon-module-git-r6xx-r7xx refuses to compile, breaks at the end:
    ==> Starting make...
    make DRM_MODULES=radeon.o modules
    make[1]: Entering directory `/tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core'
    sh ../scripts/create_linux_pci_lists.sh < ../shared-core/drm_pciids.txt
    make -C /lib/modules/2.6.30-ARCH/build SUBDIRS=`/bin/pwd` DRMSRCDIR=`/bin/pwd` modules
    make[2]: Entering directory `/usr/src/linux-2.6.30-ARCH'
    CC [M] /tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core/drm_auth.o
    In file included from /tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core/drmP.h:84,
    from /tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core/drm_auth.c:36:
    /tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core/drm_os_linux.h:36: error: conflicting types for 'irqreturn_t'
    include/linux/irqreturn.h:16: note: previous declaration of 'irqreturn_t' was here
    make[3]: *** [/tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core/drm_auth.o] Error 1
    make[2]: *** [_module_/tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core] Error 2
    make[2]: Leaving directory `/usr/src/linux-2.6.30-ARCH'
    make[1]: *** [modules] Error 2
    make[1]: Leaving directory `/tmp/yaourt-tmp-josh/aur-drm-radeon-module-git-r6xx-r7xx/drm-radeon-module-git-r6xx-r7xx/src/drm-build/linux-core'
    make: *** [radeon.o] Error 2
    ==> ERROR: Build Failed.
    Aborting...
    Error: Makepkg was unable to build drm-radeon-module-git-r6xx-r7xx package.
    I'm not really sure what to make of that, and the driver definitely won't work without that module. Anyone got an idea?
    *EDIT: Nevermind, I'm dumb and tired and didn't pay attention to make when I was building this.
    Last edited by f0nd004u (2009-10-08 14:28:58)

  • Very slow performance with 56K modem

    I will be in a location with no high-speed access, so I bought a US Robotics 56K v.92 USB modem for my MacBook. It's taking 5 to 15 minutes to load even simple pages. Anything I can do to optimize performance?

    In addition to what Dan said, see if changing the array size in sqlplus makes a difference. For example,
    set autotrace on
    (do fast query)
    (do slow query)
    set arraysize 100
    (do fast query)
    (do slow query)
    We might see some significant differences in SQL*Net round trips. On the other hand, things may be all screwy because your rows are so big that it has to do something different; you might get a buffer overflow. How big are your rows (all the columns together, and the average size of table2)?
    There may also be some pga strangeness going on. Are you using pga_aggregate_target?
    Are you fetching over a network? Larger amounts of data may be mired in packet splits.
    It may be necessary to trace if the explain isn't obvious.
    Are you doing this in sqlplus or using some other tool?
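
    If the slow client turns out to be a Java program rather than sqlplus, the equivalent knob is the JDBC statement fetch size. A hedged sketch (the connection details are placeholders, and table2 is the table mentioned above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class FetchSizeDemo {
        public static void main(String[] args) throws SQLException {
            // Placeholder credentials; requires the Oracle JDBC driver on the classpath.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(100); // rows fetched per round trip, like SET ARRAYSIZE 100
                long t0 = System.nanoTime();
                int rows = 0;
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM table2")) {
                    while (rs.next())
                        rows++;
                }
                System.out.printf("%d rows in %.1f ms%n", rows, (System.nanoTime() - t0) / 1e6);
            }
        }
    }

    Comparing the timing with fetch size 10 (the usual default) versus 100 shows the same round-trip effect as the sqlplus test above.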

  • Apple TV slow performance with large iTunes library

    We have a rather large (movies + music) iTunes library, and the Apple TV is performing very poorly... Are there any suggestions for how to improve wake-up time, menu response time, or overall performance? Would it help to put our music on a separate HDD and turn that off when we want to use the Apple TV? Anything else?

    Sorry to butt in; we have a 2.6TB iTunes library. Music and photos are kept on the Apple TV; all else is streamed. We use a LaCie 4big Quadra attached to an iMac G5, streamed via AirPort Extreme.
    The only delays we get are for the LaCie coming out of sleep mode (it takes about 45 seconds), which sometimes results in an on-screen message of "File Format not compatible" (which just means "not found"). Retrying the selection starts the streaming. I expect the iTunes library streams the movie/TV programme to the Apple TV, which buffers it for delivery. This could be the delay you're experiencing, maybe 15-20 seconds before it starts to play - is this what you mean?
    As our library has grown we've noticed no significant performance drop-off.
    Possibly it's a network issue; have you run the network diagnostics on the Apple TV?
    Roger

  • Very slow performance with queries sent to MS Access

    Hi, folks,
    I'm developing a site with JSP and MS Access and I'm having a lot of DB headaches. Since the site is still under development (not released yet), I'm using very small tables (like 20 rows, 20 columns).
    The first problem was that every call to ResultSet.getXXX() took 50-100 ms. This made a page with little data (200 values read) take more than 30 seconds to display (!!!). I solved this issue by changing the old jdbc:odbc bridge for a brand-new (trial) NetDirect JDataConnect driver (I include this info mainly because I think it may be helpful for some people out there).
    Now I have significantly reduced the time consumption, but the Statement.executeQuery() call still has terrible performance. I've measured 2 to 3 seconds for each query to return, which doesn't seem logical for such small tables (what will happen with the 50,000-row tables that we are planning to use in the future!).
    As always, thanks in advance for any help.

    It's been some days since my last post, but I have gathered valuable info in that time.
    First, answering your question: Sun's jdbc:odbc bridge only supports one isolation level, TRANSACTION_READ_COMMITTED (at least for MS Access), so it's not possible to change it.
    However, I've discovered that a good solution to improve performance is to use another driver. Right now I'm using one from Atinav that dramatically improves performance (from 30 seconds to 300 ms).
    The problem with this driver is that it only supports JDBC 1.0, so I can't even use a scrollable ResultSet.
    Right now I think I would be happy if I found a driver with performance similar to Atinav's that also supports at least JDBC 2.0.
    As always, thanks in advance for any help.
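
    To check for yourself which isolation levels a given driver claims to support, and to time the raw query path, standard JDBC metadata calls are enough. A small sketch (the DSN and table name are placeholders; the bridge class fits the JDK era of this thread and was removed in Java 8):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IsolationCheck {
        public static void main(String[] args) throws Exception {
            // Pre-JDBC4 era: load the bridge driver explicitly.
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Connection conn = DriverManager.getConnection("jdbc:odbc:mydsn"); // placeholder DSN

            DatabaseMetaData md = conn.getMetaData();
            int[] levels = {
                Connection.TRANSACTION_READ_UNCOMMITTED,
                Connection.TRANSACTION_READ_COMMITTED,
                Connection.TRANSACTION_REPEATABLE_READ,
                Connection.TRANSACTION_SERIALIZABLE,
            };
            for (int level : levels)
                System.out.println("level " + level + " supported: "
                        + md.supportsTransactionIsolationLevel(level));

            // Time one query end-to-end, fetch included.
            long t0 = System.currentTimeMillis();
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT * FROM some_table"); // placeholder table
            int rows = 0;
            while (rs.next())
                rows++;
            System.out.println(rows + " rows, query+fetch: "
                    + (System.currentTimeMillis() - t0) + " ms");
            rs.close();
            st.close();
            conn.close();
        }
    }

    Running this against each candidate driver separates driver overhead from Access itself.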

  • New VM Server - Very Slow Performance with QuickBooks

    Hello all,
    Oh the dreaded performance issues!
    Okay, we currently have a virtual server hosted with a different company; it has two vCPUs and 8GB of RAM. The contract is about to expire, so we're switching to a Microsoft Azure VM. This server runs Windows Server 2008 R2; its main purpose is to host QuickBooks Enterprise v15, with 10 users connecting via RemoteApp and/or RD.
    So for the new Azure server we selected the A3 option (4 cores/7GB RAM), running Windows Server 2008 R2 - I know, an old OS, but we still need it for now. We also attached a new 250GB disk for additional storage of data/programs. The test users noticed performance issues right away, so we temporarily upgraded to the A4 option (8 cores/14GB RAM), and although it is better, there are still performance issues. For example, running a statement report in QuickBooks - viewing both connections side by side - is 2x faster on the current "live" server hosted with the current provider. On the Azure VM we also tested installing QuickBooks on the C:\ drive and then on the attached 250GB disk (the F:\ drive), with the same performance results on both.
    This is a new VM... fresh install... it's just so strange that it would be slower than the 8GB server.
    Does anyone have any ideas on where we can start troubleshooting this?
    Thanks so much!

    I understand that the performance seen by end users is low. However, do you know what is actually wrong? Do you see the Azure VM using almost 80% of CPU & RAM? If not, adding more RAM & CPU may not really help you.
    - How does the CPU usage of your on-premise node compare with the Azure VM?
    - How does the memory usage of your on-premise node compare with the Azure VM?
    Do you have any specific Disk IO requirements? Can you check IOPS on-premise vs Azure?

  • Very slow in sending large file email

    I am having a problem sending email with attachments around 8MB in size; the email cannot be sent out. Is there any setting or solution that can help with this?

    I have been experiencing similar problems with slow downloads from my WS2012 server running Mercury Mail to our Outlook 2010 clients.
    I finally sussed this one out.
    Even though there is much talk out there that antivirus products no longer scan email - in the words of another post:
    There's no need to scan email, only attachments and that is done by all modern antimalware programs as the files are written to the PC file system.
    All of the major AV vendors have been telling their customers for several years now that 'email scanning' is no longer necessary; it's a holdover from the early days of AV programs, when Microsoft hadn't yet created the API (Application Programming Interface) sets that allow AV programs to scan a file before any user access (including by email programs) is allowed.
    That being said, the AV I have installed on my WS2012 (SC2012EP) still looks at the app. So what I did was exclude the Mercury Mail application in SC2012EP and test it.
    An 8.9MB email which took 4 minutes to download now takes only 9 seconds.
    Problems solved!!
    :-) I hope it helps you
