Large file export times? Over 20 hours?

Hi, I am trying to export a 3.4 GB TIFF from Aperture to JPEG at original size. My MBP Retina has been counting away for about 22 hours now. Activity Monitor says things are cranking away fine, but the file has yet to materialize. The "export file" JPEG has appeared, but it is empty. Is there any way to figure out if things are fine, or should I just wait? How long do these usually take?

SD, I am using a brand new MBP with Retina. 8 GB standard RAM, 250 GB SSD. The hard drive has approx 10GB used for my pics and nothing else, so 81 gigs used, 160 gigs free.
I used PTGui to stitch together around 80 pictures of a nature scene. I am planning to print a large canvas from it, if I ever get it that far. I can't get a 3.4 GB file to print, so squeezing it down with JPEG conversion should get it under 250 MB.
I am not sure if PTGui lets you output PSD; I will look into it.
Thanks for the help so far. I will let Aperture think until morning, so another 12 hours, before I force quit it and see what happens.
P

Similar Messages

  • GB crashing during large file export

    GB 09 is crashing during an export of a project that is 2 hours and 20 min long. I've tried different export settings, but the program still quits when it reaches about 3/4 of the project length.
    No crashes in the past. I have been exporting the project as rough cuts at various lengths from 1 hour to 1 1/2 hours, but upon finalizing the project, which happens to total over 2 hours, it can't make it through an export.
    It's an audio book - talking only, no effects or other instruments - very straightforward.
    I've tried exporting from the master .book file and from a version with all the clips joined, tracks locked, and extra stuff deleted from the project.
    Here is some of the crash report (just pasting the first page of the crash report).
    Process: GarageBand [2716]
    Path: /Applications/GarageBand.app/Contents/MacOS/GarageBand
    Identifier: com.apple.garageband
    Version: 5.1 (398)
    Build Info: GarageBand_App-3980000~2
    Code Type: X86 (Native)
    Parent Process: launchd [138]
    Date/Time: 2010-03-29 11:14:16.969 -0400
    OS Version: Mac OS X 10.6.2 (10C540)
    Report Version: 6
    Interval Since Last Report: 1758706 sec
    Crashes Since Last Report: 39
    Per-App Interval Since Last Report: 219501 sec
    Per-App Crashes Since Last Report: 6
    Exception Type: EXC_BREAKPOINT (SIGTRAP)
    Exception Codes: 0x0000000000000002, 0x0000000000000000
    Crashed Thread: 0 Dispatch queue: com.apple.main-thread
    Thread 0 Crashed: Dispatch queue: com.apple.main-thread
    0 com.apple.CoreFoundation 0x94dd0554 CFRelease + 196
    1 com.apple.AppKit 0x9468967a -[NSBitmapImageRep initWithFocusedViewRect:] + 2257
    2 com.apple.garageband 0x001f5fea 0x1000 + 2052074
    3 com.apple.garageband 0x00205650 0x1000 + 2115152
    4 com.apple.garageband 0x0014f7c2 0x1000 + 1370050
    5 com.apple.garageband 0x0015a58a 0x1000 + 1414538
    6 com.apple.AppKit 0x945edb6c -[NSView _drawRect:clip:] + 3721
    7 com.apple.AppKit 0x945eb238 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 2217
    8 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    9 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    10 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    11 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    12 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    13 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    14 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    15 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    16 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    17 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    18 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    19 com.apple.AppKit 0x945ebbcb -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 4668
    20 com.apple.AppKit 0x94689b5f -[NSNextStepFrame _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 311
    21 com.apple.AppKit 0x945e7111 -[NSView _displayRectIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:] + 3309
    22 com.apple.AppKit 0x94547d6e -[NSView displayIfNeeded] + 818
    23 com.apple.AppKit 0x944fb9c5 -[NSNextStepFrame displayIfNeeded] + 98
    24 com.apple.AppKit 0x94511094 -[NSWindow displayIfNeeded] + 204
    25 com.apple.AppKit 0x945425aa _handleWindowNeedsDisplay + 696
    26 com.apple.CoreFoundation 0x94e43892 __CFRunLoopDoObservers + 1186
    27 com.apple.CoreFoundation 0x94e0018d __CFRunLoopRun + 557
    28 com.apple.CoreFoundation 0x94dff864 CFRunLoopRunSpecific + 452
    29 com.apple.CoreFoundation 0x94dff691 CFRunLoopRunInMode + 97

    Sorry about listing too much data from the GB crash report.
    Anyone have troubleshooting tips for crashing during large mixdowns/exports?
    The file that crashes during export is over 2 hours long, but it is only 1 voice track with no effects!

  • Large file transfer problems over client/server socket

    Hi,
    I wrote a simple chat program in which I can send files from client to client. The problem is that when I send large files over 101 MB, the transfer freezes. I do not even get any error messages in the console. The files I am sending are of any type (e.g. mp3, movies, etc.). I am serializing the data into a "byteArray[packetSize]" and sending the file packet by packet until the whole file has been sent and reconstructed on the other side. The process works perfectly for files smaller than 101 MB, but for some reason it freezes if the file is larger. I have read many forums and there aren't too many solutions out there. I made sure to use the .reset() method to reset my ObjectOutputStream each time I write to it.
    Here's my file sending code:
    byte[] byteArray = new byte[defaultPacketSize];
    numPacketsRequired = Math.ceil((double) fileSize / defaultPacketSize); // cast so the division is not truncated
    try {
        reader = new FileInputStream(filePath);
        while (reader.available() > 0) {
            if (reader.available() < defaultPacketSize) {
                // Last (short) packet
                byte[] lastPacket = new byte[reader.available()];
                reader.read(lastPacket); // note: read() may return fewer bytes than requested
                try {
                    if (socket == null || output == null) {
                        throw new SocketException("socket does not exist");
                    }
                    output.writeObject(lastPacket);
                    output.reset();
                    output.writeObject("DONE");
                    output.reset();
                    output.close();
                    socket.close();
                } catch (Exception e) {
                    System.out.println("Exception ***: " + e);
                    output.close();
                    socket.close();
                }
            } else {
                // Full-size packet
                reader.read(byteArray);
                try {
                    if (socket == null || output == null) {
                        throw new SocketException("socket does not exist");
                    }
                    output.writeObject(byteArray);
                    output.reset();
                } catch (Exception e) {
                    System.out.println("Exception ***: " + e);
                    output.close();
                    socket.close();
                }
            }
        }
        reader.close();
    } catch (Exception e) {
        System.out.println("COULD NOT READ PACKET");
    }
    Here's my file receiving code:
    try {
        // The message from the client
        Object streamInput;
        FileOutputStream writer;
        byte[] packet;
        while (true) {
            streamInput = input.readObject();
            if (streamInput instanceof byte[]) {
                packet = (byte[]) streamInput;
                try {
                    writer = new FileOutputStream(outputPath, true);
                    writer.write(packet); // Storing the bytes on file
                    writer.close();
                } catch (Exception e) {
                    System.out.println("Exception: " + e);
                }
            } else if (streamInput.equals("DONE")) {
                socket.close();
                input.close();
                break;
            }
        }
    } catch (Exception e) {
        System.out.println("Exception: " + e);
    }
    I'm looking for any way I can possibly send large files from client to client without having it freeze. Are there any better ways to transfer files other than sockets? I don't really want FTP; I think I want to keep it over HTTP.
    Any suggestions would be helpful. Thanks!
    Evan
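    One commonly suggested alternative to pushing byte[] objects through an ObjectOutputStream is to skip serialization entirely and copy raw bytes straight through the socket's streams. Here is a minimal, untested sketch of that idea; the class name, buffer size, and method signature are invented for illustration and are not from the original post:
    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.FileInputStream;
    import java.net.Socket;

    class RawStreamSender {
        // Copy the file's raw bytes into the socket's output stream in fixed-size chunks.
        static void sendFile(Socket socket, String filePath) throws Exception {
            try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(filePath));
                 BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream())) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read); // forward exactly the bytes that were read
                }
                out.flush();
            } // closing the buffered stream also closes the socket once the file has been sent
        }
    }
    On the receiving side you would simply read from socket.getInputStream() into a file until read() returns -1, so no "DONE" marker or per-packet objects are needed.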

    I've taken a better look at the code you posted, and there is one problem with the receiving code. You keep repeatedly opening and closing the FileOutputStream. This is not going to be efficient, as the file will keep needing to be positioned to its end.
    Yes, sorry, I did change that code so that it leaves the file open until it is completely done writing. Basically I have a progress bar that records how far along in the writing process the client is, and when the progress bar reaches 100%, meaning the file is complete, the file.close() method is invoked. Sorry about that.
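    For reference, here is a minimal sketch of a receiving loop that opens the output file only once and keeps it open until the "DONE" marker arrives. The identifiers socket, input, and outputPath simply mirror the code above; this is an illustration, not the actual fix used in the project:
    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInputStream;
    import java.net.Socket;

    class PacketReceiver {
        // Receive byte[] packets until the "DONE" marker, writing them to a single open stream.
        static void receiveFile(Socket socket, ObjectInputStream input, String outputPath) throws Exception {
            BufferedOutputStream writer = new BufferedOutputStream(new FileOutputStream(outputPath));
            try {
                while (true) {
                    Object streamInput = input.readObject();
                    if (streamInput instanceof byte[]) {
                        writer.write((byte[]) streamInput); // append the packet without reopening the file
                    } else if ("DONE".equals(streamInput)) {
                        break; // sender signalled end of file
                    }
                }
            } finally {
                writer.close(); // flush and close exactly once, after the whole file has arrived
                input.close();
                socket.close();
            }
        }
    }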
    I also ran some memory tests using the Runtime.getRuntime().totalMemory() and Runtime.getRuntime().freeMemory() methods. I put these calls inside the loop where I read in the file and send it to the client. Here's the output:
    Sender's free memory: 704672
    File reader read 51200 bytes of data.
    767548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 702968
    File reader read 51200 bytes of data.
    716348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 701264
    File reader read 51200 bytes of data.
    665148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 699560
    File reader read 51200 bytes of data.
    613948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 697856
    File reader read 51200 bytes of data.
    562748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 696152
    File reader read 51200 bytes of data.
    511548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 694448
    File reader read 51200 bytes of data.
    460348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 692744
    File reader read 51200 bytes of data.
    409148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 691040
    File reader read 51200 bytes of data.
    357948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 689336
    File reader read 51200 bytes of data.
    306748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 687632
    File reader read 51200 bytes of data.
    255548 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 685928
    File reader read 51200 bytes of data.
    204348 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 684224
    File reader read 51200 bytes of data.
    153148 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 682520
    File reader read 51200 bytes of data.
    101948 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 680816
    File reader read 51200 bytes of data.
    50748 left to read.
    Sender's runtime memory: 2818048
    Sender's free memory: 679112
    File reader read 50748 bytes of data.
    0 left to read.
    Creating last packet of size: 50748
    Last packet size after setting it equal to byteArray: 50748
    Here's the memory stats from the receiver:
    Receiver's free memory: 639856
    Receiver's total memory: 2842624
    Receiver's free memory: 638920
    Receiver's total memory: 2842624
    Receiver's free memory: 637984
    Receiver's total memory: 2842624
    Receiver's free memory: 637048
    Receiver's total memory: 2842624
    Receiver's free memory: 636112
    Receiver's total memory: 2842624
    Receiver's free memory: 635176
    Receiver's total memory: 2842624
    Receiver's free memory: 634240
    Receiver's total memory: 2842624
    Receiver's free memory: 633304
    Receiver's total memory: 2842624
    Receiver's free memory: 632368
    Receiver's total memory: 2842624
    Receiver's free memory: 631432
    Receiver's total memory: 2842624
    Receiver's free memory: 630496
    Receiver's total memory: 2842624
    Receiver's free memory: 629560
    Receiver's total memory: 2842624
    Receiver's free memory: 628624
    Receiver's total memory: 2842624
    Receiver's free memory: 627688
    Receiver's total memory: 2842624
    Receiver's free memory: 626752
    Receiver's total memory: 2842624
    Receiver's free memory: 625816
    Receiver's total memory: 2842624
    Receiver's free memory: 624880
    Receiver's total memory: 2842624
    Receiver's free memory: 623944
    Receiver's total memory: 2842624
    Receiver's free memory: 623008
    Receiver's total memory: 2842624
    Receiver's free memory: 622072
    Receiver's total memory: 2842624
    Receiver's free memory: 621136
    Receiver's total memory: 2842624
    Receiver's free memory: 620200
    Receiver's total memory: 2842624
    Receiver's free memory: 619264
    Receiver's total memory: 2842624
    Receiver's free memory: 618328
    Receiver's total memory: 2842624
    Receiver's free memory: 617392
    Receiver's total memory: 2842624
    Receiver's free memory: 616456
    Receiver's total memory: 2842624
    Receiver's free memory: 615520
    Receiver's total memory: 2842624
    Receiver's free memory: 614584
    This is just a sample of both the receiver's and the sender's stats. Everything appears to be fine! Hope this post isn't too long.
    Thanks!

  • Hold time over an hour!

    Been on hold for over an hour to talk to somebody. Any idea what's going on with the hold times?

    Part of the problem may be from users with issues due to this week's storm.  You can try this:
    If you are having an issue with your Verizon service, you can use the Verizon Troubleshooter to fix and report issues with your Verizon Phone, FiOS TV, or Internet Service, as well as to schedule a repair. Here is the link:
    http://www.verizon.com/repair
    You can find tools on the Verizon Residential Support page that may help you diagnose your issue:
    http://www22.verizon.com/residentialhelp
    Or you can try to contact Verizon via chat or email
    http://www.verizon.com/contactus
    Choose “Live Chat.”
    If a chat agent is available to assist you, the chat link will become live after the page is fully loaded.

  • Delete large file from Time Machine backup drive?

    I recently selected to exclude my Aperture.library file from Time Machine, but only after it had already done about 8 backups... the file doesn't exist in the latest folder on my TM hard drive, but it is there in previous ones.
    How can I delete this file from my TM backups and regain 40 GB? (It won't let you delete it in the Finder.)

    Ewen,
    Let me make myself more clear.
    Because of the Apple KB article regarding Aperture and Time Machine, that's exactly why I chose to EXCLUDE my aperture.library file from Time Machine.
    Please don't read any negative tone into this; I'm just trying to explain myself.
    But hindsight is 20/20, so I was looking for a way to delete the old previous versions of the aperture.library that were sitting on my Time Machine backup drive, uselessly taking up space.
    I apologize for not searching out that KB article, but I didn't think I'd have to do that. I assumed Apple would know how to have Aperture play nice with Time Machine...
    Anyway, all is good. We're all educated now.

  • Trouble Writing Large Files to Time Capsule - Windows 7

    Hi
    I'm having some trouble with my Time Capsule. I can access it and read files from it fine. However, I have trouble writing files to it. Small files (say, multiple Word docs) write fine. However, with medium-size files (say 120 MB) it crashes. I have noticed that no matter how large the file is, it writes up to 10% of the file very quickly but then crashes. It fails with the error message that 'the network location specified is no longer available'. Sometimes, when it crashes, it essentially crashes my Time Capsule network, i.e. I am still connected to the internet but I cannot access the Time Capsule.
    I used to be able to write files with no problem (when I was using Vista). I still have 250 GB left.
    What can I do to fix this?
    For details:
    500gb Time Capsule
    Windows 7 64bit Home Premium

    Anyone know how to help?

  • Converting Large Files - It Times-Out

    Hi, I have two files that I need to convert from PDF to Word. One is 40 MB and the other is 18 MB. They both time out while converting. Note, however, that I did successfully convert a 27 MB file, so the limitation can't be strictly based on size. It seems to be based on the intensity of the conversion and the complexity of the file itself.
    Is there a way to download the PDF-to-Word converter so that I can do the conversion locally? Can I send (FTP?) the files to you and ask you to do these conversions?
    Any help you can provide would be great.
    Michael Dowding

    Thanks anyway. I purchased a different utility that lets me convert locally on my Windows laptop.
    I'd been using the online service you offer (in Acrobat Reader it's Save As Word or Excel Online) and it has done a good job. But these files I was working with were so big and so graphics-intensive that they were timing out your remote service. So I bought Wondershare PDF Editor and it did the job.
    Thanks for your reply.

  • Time dimension with Hourly base time periods

    Hi all
    I need to analyze data at Hour, Day, Month, and Year levels. The data in the fact and dimension tables are at the 'Hour' level with DATE datatype, such as:
    02-SEP-10 10:00:00 AM
    02-SEP-10 11:00:00 AM
    To use Time-Series type calculations, I understand that I have to create an OLAP dimension of type 'TIME' (and not 'USER') and map it to the populated relational time dimension table.
    1) Can I have the primary key for 'Hour' level as the actual base level value of datatype DATE (eg. 02-SEP-10 10:00:00 AM) ?
    2) For the END_DATE and TIME_SPAN attributes at the 'Hour' level, what should I use?
    The documentation is only available for minimum 'Day' level hierarchies, which allows setting END_DATE and TIME_SPAN to the actual 'Day' value and 1, respectively.
    3) For the END_DATE and TIME_SPAN attributes at the 'Month' level, do I need to supply the last-date-of-each-month and number-of-days-in-that-month, respectively?
    Please bear in mind that I am relatively new to Oracle OLAP. Any assistance will be appreciated.
    Cheers.

    Thank you Szilard and Adnan for the very prompt and informative responses.
    I managed to follow the advice on the oracleolap.blogspot link and created a time dimension with members at Hour level loaded into the dimension in character format: TO_CHAR(hour_id, 'DD-MON-YYYY HH24')
    The problem now is that the maintenance (loading) of the dimension is taking an abnormally large amount of time (over 1 hour), as opposed to when the members were being loaded in DATE format (5 minutes). The mapping table only has 10,000 entries.
    Why is there such a big difference? Is it normal? Is there a way to speed up the maintenance time?
    FYI, I have not created any indexes on any of the attributes.
    My platform is:
    11.1.0.7.0 DB
    11.1.0.7.0B Client

  • Large backup takes more than 1 hour - time machine kicks in again!

    I have just watched my Time Machine backup while it's been going.
    It was a large backup and took over 1 hour. However, as Time Machine backs up "every hour", the original backup was still going when it then stopped and started the new backup (as the original backup had been going for 1 hour already).
    The original backup had over 1GB left - yet the new backup only backed up 14MB.
    And, there is no way of telling what happened or what was or was not backed up.
    Anyone got any suggestions?
    Snapper

    SnapperNZ wrote:
    I have just watched my Time Machine backup while it's been going.
    It was a large backup and took over 1 hour. However, as Time Machine backs up "every hour", the original backup was still going when it then stopped,
    It probably finished ok. The amount to be backed-up is an estimate, usually pretty close, but sometimes not.
    and started the new backup (as the original backup had been going for 1 hour already).
    The original backup had over 1GB left - yet the new backup only backed up 14MB.
    And, there is no way of telling what happened or what was or was not backed up.
    Yes, there are.
    See #A1 in [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] (or use the link in *User Tips* at the top of this forum) for a handy widget that will display the backup messages from your logs.
    See #A2 there for a couple of apps that will show exactly what was backed-up each time.

  • iTunes with 10.3x not transferring large files to external hard drive

    I have been unable to transfer any large files (basically anything over two artists or so) from the old external hard drive from my old Windows XP computer to my new external hard drive (both Iomega and Western Digital have been tried) without the transfer cutting out and popping up a message to the effect of "iTunes cannot copy or add file to folder"... something like that. Am I missing something that I need to be doing? I've got about 33,000 items, and the thought of doing them 10-15 at a time really isn't something I want to do. But as of this point, I can't find a way to simply transfer the whole iTunes library from the old external drive to the new one. Thanks for any help.

    You can transfer purchased content ONLY! Unfortunately, you have no choice as to what gets transferred. Here are some workarounds:
    1. Delete off iPod Touch/iPhone/iPod then transfer
    2. Transfer all content then delete from computer

  • Beach Ball of Death on OS X while trying to upload large files (updated)

    Hi,
    I did a search on this and while I got lots of hits none on this issue.
    We have a web portal that allows customers to upload files to a DAM server; we have control of the server.
    The issue is not the back end; the back end is PHP and all set up correctly. We have 2 GB of RAM assigned to PHP, upload_max_filesize is set to 1.8 GB, and the max post size is set accordingly.
    The Flex app loads, the user selects a large file, say 1.6 GB (we do a file size check to make sure it's below the limit), clicks the upload button, the beach ball of death shows, and a little while later the file uploads.
    Great so far. Now the user tries another file of the same size: beach ball of death, and the script times out (we capture this and end gracefully). If the user restarts Safari, then the file will upload just fine.
    It seems you can upload a large file the first time, and small files below 1.2 GB subsequently, but you cannot upload multiple large files.
    It seems like some sort of memory leak. I was looking at converting this upload app to send files via FTP, but if the Flash player needs to process/load the files first, then I don't think this will work; I'm looking to increase the file size ceiling to 4 GB via FTP.
    The code is a bit involved, but in the end it just calls the file reference upload, then the beach ball appears.
    Any ideas? Player version 10.0.32.18 debug.
    UPDATED 09_17_09
    It appears to be a memory leak in the player when in Safari; the file is read but the memory is not freed up after completion. Firefox frees the used memory after upload and you can continue. Not sure if it is an Apple or Adobe issue.
    However, why bother reading the file first? Can we have an HTTP stream file upload in the web Flash player like AIR has? That way the player would not care about the file size, and the limitation would reside on the server, which we can deal with.
    Message was edited by: flashharry!

  • Upload very large file to SharePoint Online

    Hi,
    I tried uploading very large files to SharePoint Online via the REST API using Java. Uploading files with a size of ~160 MB works well, but if the file size is between 400 MB and 1.6 GB I either receive a SocketException (connection reset by peer) or an error from SharePoint telling me "the security validation for the page is invalid".
    So first of all I want to ask you how to work around the security validation error. As far as I understand, the token which I have added to the X-RequestDigest header of my HTTP POST request seems to have expired (by default it expires after 1800 seconds).
    Uploading such large files is time consuming, so I can't change the token in my header while uploading the file. How can I extend the expiration time of the token (if that is possible)? Could it help to send POST requests with the same token continuously while uploading the file and thereby prevent the token from expiring? Is there any other strategy for uploading such large files that need much more than 1800 seconds to upload?
    Additionally, any thoughts on the socket exception? It happens quite sporadically: sometimes it happens after uploading 100 MB, and other times I have already uploaded 1 GB.
    Thanks in advance!
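    For what it's worth, the usual way around an expiring digest is to request a fresh one from the site's /_api/contextinfo endpoint shortly before it expires and swap it into the X-RequestDigest header of subsequent requests. A rough, untested Java sketch follows; the siteUrl and authCookie parameters are placeholders for whatever authentication the tenant requires, and real code would use a JSON parser instead of the crude string search:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    class DigestRefresher {
        // POST an empty body to /_api/contextinfo and pull FormDigestValue out of the JSON response.
        static String fetchFormDigest(String siteUrl, String authCookie) throws Exception {
            URL url = new URL(siteUrl + "/_api/contextinfo");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Accept", "application/json;odata=verbose");
            conn.setRequestProperty("Cookie", authCookie); // placeholder: real auth handling goes here
            conn.setDoOutput(true);
            conn.getOutputStream().close(); // empty POST body

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            // The digest sits under d.GetContextWebInformation.FormDigestValue; crude extraction for brevity.
            String marker = "\"FormDigestValue\":\"";
            int start = body.indexOf(marker) + marker.length();
            return body.substring(start, body.indexOf("\"", start));
        }
    }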

    Hi,
    Thanks for the reply. The reason I'm looking into this is so users can migrate their files to SharePoint Online/Office 365. The max file limit is 2 GB, so I thought I would try to cover this.
    I've looked into that link before, and when I try to use those endpoints, i.e. StartUpload, I get the following response:
    {"error":{"code":"-1, System.NotImplementedException","message":{"lang":"en-US","value":"The method or operation is not implemented."}}}
    I can only presume that these endpoints have not been implemented yet, even though the documentation says they are only available for Office 365, which is what I am using.
    Also, the other strange thing is that I can actually upload a file of 512 MB to the server, as I can see it in the web UI. The problem I am experiencing is getting a response; it hangs on this line until the timeout is reached:
    WebResponse wresp = wreq.GetResponse();
    I've tried flushing and closing the stream, and SendChunked, but still no success.

  • What determines a "large" file?

    Adobe Send's selling point is that you can send "large" files. What does Adobe consider a large file?

    Anything over 5MB is too large to email reliably or kindly, so that's where "large" starts.

  • I am having trouble transferring files from an old MacBook (2007) to a MacBook Air over a wireless network.  The connection was interrupted and the time was over 24 hours.  Is there a better way to do this?  I'm using Migration assistant.

    I am having trouble transferring files from an old MacBook (2007) to a MacBook Air over a wireless network. The connection was interrupted and the time was over 24 hours. Is there a better way to do this? I'm using Migration Assistant. The lack of an Ethernet port on the MacBook Air does not help.

    William ..
    Alternative data transfer methods suggested here > OS X: How to migrate data from another Mac using Mavericks
