GNU Unzip large files

A GNU unzip procedure is giving me trouble with large files. It takes too much time.

   GZIPInputStream zipin = new GZIPInputStream( new FileInputStream( zipFile ) );
   byte buffer[] = new byte[8192];
   FileOutputStream out = new FileOutputStream( source );
   for ( int length; (length = zipin.read(buffer, 0, 8192)) != -1; )
       out.write( buffer, 0, length );
   out.close();
   zipin.close();

Is there a way to GNU unzip a large file without reading into a buffer and then writing from that buffer? I mean operating directly on the zipped input stream and outputting an unzipped byte stream (instead of writing to a file) for further processing. Somewhat more concurrently?

If you mean to turn a GZIPInputStream into an OutputStream, you can use PipedInputStream (http://java.sun.com/j2se/1.5.0/docs/api/java/io/PipedInputStream.html) and PipedOutputStream (http://java.sun.com/j2se/1.5.0/docs/api/java/io/PipedOutputStream.html). It involves creating a new thread and hooking the pipes up correctly.
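
A minimal sketch of that approach, in case it helps (the class and method names here are just illustrative; it assumes the decompressed bytes are consumed from another thread):

   import java.io.FileInputStream;
   import java.io.IOException;
   import java.io.PipedInputStream;
   import java.io.PipedOutputStream;
   import java.util.zip.GZIPInputStream;

   public class GunzipPipe {

       // Returns a stream of decompressed bytes; a background thread pumps
       // data from the .gz file into the pipe while the caller reads from it.
       public static PipedInputStream decompress(final String gzFile) throws IOException {
           final PipedInputStream in = new PipedInputStream();
           final PipedOutputStream out = new PipedOutputStream(in);

           new Thread(new Runnable() {
               public void run() {
                   try {
                       GZIPInputStream zipin = new GZIPInputStream(new FileInputStream(gzFile));
                       try {
                           byte[] buffer = new byte[8192];
                           for (int n; (n = zipin.read(buffer, 0, buffer.length)) != -1; ) {
                               out.write(buffer, 0, n);  // blocks if the consumer falls behind
                           }
                       } finally {
                           zipin.close();
                           out.close();                  // signals end-of-stream to the reader
                       }
                   } catch (IOException e) {
                       e.printStackTrace();
                   }
               }
           }, "gunzip-pump").start();

           return in;
       }
   }

The consumer simply reads from the returned stream. If the downstream code can accept an InputStream directly, you can skip the pipe entirely and hand it the GZIPInputStream itself; the pipe mainly buys you a separate decompression thread. Note that if the reading side stops reading, the pump thread's write() will eventually fail with an IOException.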

Similar Messages

  • Problem unzipping larger files with Java

    When I extract small zip files with java it works fine. If I extract large zip files I get errors. Can anyone help me out please?
    import java.io.*;
    import java.util.*;
    import java.net.*;
    import java.util.zip.*;
    public class updategrabtest {
        public static String filename = "";
        //public static String filesave = "";
        public static boolean DLtest = false, DBtest = false;

        // update
        public static void main(String[] args) {
            System.out.println("Downloading small zip");
            download("small.zip"); // a few k
            System.out.println("Extracting small zip");
            extract("small.zip");
            System.out.println("Downloading large zip");
            download("large.zip"); // 1 meg
            System.out.println("Extracting large zip");
            extract("large.zip");
            System.out.println("Finished.");
            // update database
            boolean maindb = false; //database wasnt updated
        }

        // download
        public static void download(String filesave) {
            try {
                java.io.BufferedInputStream in = new java.io.BufferedInputStream(new
                    java.net.URL("http://saveourmacs.com/update/" + filesave).openStream());
                java.io.FileOutputStream fos = new java.io.FileOutputStream(filesave);
                java.io.BufferedOutputStream bout = new BufferedOutputStream(fos, 1024);
                byte data[] = new byte[1024];
                while (in.read(data, 0, 1024) >= 0)
                    bout.write(data); // writes the whole 1024-byte buffer even on a short read,
                                      // so the saved file can end up padded with stale bytes
                bout.close();
                in.close();
            }
            catch (Exception e) {
                System.out.println("Error writing to file");
                //System.exit(-1);
            }
        }

        // extract
        public static void extract(String filez) {
            filename = filez;
            try {
                updategrab list = new updategrab();
                list.getZipFiles();
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        }

        // extract (part 2)
        public static void getZipFiles() {
            try {
                //String destinationname = ".\\temp\\";
                String destinationname = ".\\";
                byte[] buf = new byte[1024]; //1k
                ZipInputStream zipinputstream = null;
                ZipEntry zipentry;
                zipinputstream = new ZipInputStream(
                    new FileInputStream(filename));
                zipentry = zipinputstream.getNextEntry();
                while (zipentry != null) {
                    //for each entry to be extracted
                    String entryName = zipentry.getName();
                    System.out.println("entryname " + entryName);
                    int n;
                    FileOutputStream fileoutputstream;
                    File newFile = new File(entryName);
                    String directory = newFile.getParent();
                    if (directory == null)
                        if (newFile.isDirectory())
                            break;
                    fileoutputstream = new FileOutputStream(
                        destinationname + entryName);
                    while ((n = zipinputstream.read(buf, 0, 1024)) > -1)
                        fileoutputstream.write(buf, 0, n);
                    fileoutputstream.close();
                    zipinputstream.closeEntry();
                    zipentry = zipinputstream.getNextEntry();
                } //while
                zipinputstream.close();
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    In addition to the other advice, also change every instance of..
    kingryanj wrote:
        catch (Exception e) {
            System.out.println("Error writing to file");
            //System.exit(-1);
        }
    ..to..
        catch (Exception e) {
            e.printStackTrace();
        }
    I am a big fan of the stacktrace.
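
    For what it's worth, here is a rough sketch of the poster's download() with that change applied, plus writing only the bytes actually read (padding the output with a full buffer on every read is a likely reason the larger download ends up corrupt). It assumes Java 7+ for try-with-resources and keeps the poster's URL and buffer size:

        public static void download(String filesave) {
            try (java.io.BufferedInputStream in = new java.io.BufferedInputStream(
                     new java.net.URL("http://saveourmacs.com/update/" + filesave).openStream());
                 java.io.BufferedOutputStream bout = new java.io.BufferedOutputStream(
                     new java.io.FileOutputStream(filesave), 1024)) {

                byte[] data = new byte[1024];
                int n;
                while ((n = in.read(data, 0, 1024)) != -1) {
                    bout.write(data, 0, n);  // write only the bytes actually read
                }
            }
            catch (Exception e) {
                e.printStackTrace();         // keep the stack trace instead of swallowing the error
            }
        }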

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
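
    As a rough illustration (not part of the original report), a correct scan of the Extra Field can be as simple as the following Java sketch; the names are mine, and it assumes the raw Extra Field bytes for one member are already in hand, laid out little-endian as described above:

        // Walk the Extra Field of one archive member and return the data of the
        // Zip64 block (type 0x0001), or null if no such block is present.
        static byte[] findZip64Block(byte[] extra) {
            int pos = 0;
            while (pos + 4 <= extra.length) {
                int type = (extra[pos] & 0xff) | ((extra[pos + 1] & 0xff) << 8);
                int size = (extra[pos + 2] & 0xff) | ((extra[pos + 3] & 0xff) << 8);
                pos += 4;
                if (pos + size > extra.length)
                    break;                        // malformed field; stop scanning
                if (type == 0x0001)               // Zip64 extended information
                    return java.util.Arrays.copyOfRange(extra, pos, pos + size);
                pos += size;                      // some other block type: skip it
            }
            return null;                          // no Zip64 block in this Extra Field
        }

    The 64-bit uncompressed and compressed sizes (and, when needed, the local-header offset) are then read from the returned block; only the values whose fixed 32-bit fields are set to 0xFFFFFFFF are present, as described in APPNOTE section 4.5.3.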
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • I have been working on movies in iMovie but after downloading dropbox and exporting iMovies as large files I can no longer open up imovie on the computer.  If I download imovie again, will I have lost all my work ?

    Yesterday I was working in imovie.  I exported the files as both Quicktime movies and as large files which I then placed in a Dropbox account.  Subsequently, perhaps because the files were so large, I have been unable to open the imovie programme.  The menu appears but nothing else happens. If I download the software again will I loose all my previous work ?  Has anyone any suggestions ?
    Thanks.
    Min

    In case it could be helpful to anyone else this is the solution that Smart sent to me:
    Please follow the steps below:
    1. Write down any product keys you are currently using for all SMART software installed on the machine.
    2. Download, unzip, and run the SMART cleanup utility below: http://www.smartmacsupport.com/downloads/SMARTUninstaller.zip
    3. If you wish to use SMART Response™ interactive response systems, download this: http://downloads.smarttech.com/techsupport/k1fi2rh8stf3eja6kjs1sdl0wa/SMARTResponse_with_NBSP2andDriversSP5-Mac.zip
    If you wish to use SMART Notebook software and SMART Board Drivers only, download this: http://downloads.smarttech.com/techsupport/k1fi2rh8stf3eja6kjs1sdl0wa/SMARTNotebookSP2withSMARTBoardDriversSP5-MAC.zip
    4. Reinstall the software and reactivate it using the product keys from step 1 above.

  • Upload and Process large files

    We have a SharePoint 2013 on-premises installation and a business application that provides an option to copy local files into a UNC path, with some processing logic applied before copying them into a SharePoint library. The current implementation is:
    1. The user opens the application and clicks the “Web Upload” link in the left navigation. This opens a custom \Layouts page to select the upload file and its properties.
    2. The user specifies the file details and chooses a Web Zip file from his local machine.
    3. The Web Upload page submit action will
         a. call a WCF service to copy the Zip file from the local machine to a preconfigured UNC path
         b. create a list item to store its properties along with the UNC path details
    4. A timer job executes at a periodic interval to
         a. query the list for items that are NOT yet processed and find the path of the ZIP file folder
         b. unzip the selected file
         c. loop over the unzipped file content and push it into the SharePoint library
         d. update the list item in the “Manual Upload List”
    Can someone suggest a different design approach that can manage the large file outside of the SharePoint context? Something like:
       1. Some option to initiate the file copy from the user's local machine to the UNC path when he submits the layouts page
       2. Instead of timer jobs, have external services that grab data from the UNC path at periodic intervals and push it into SharePoint.

    Hi,
    According to your post, my understanding is that you want to upload and process files for SharePoint 2013 server.
    The following suggestions are for your reference:
    1. We can create a service to process the uploaded file and copy the files to the UNC folder.
    2. Create an upload file visual web part and call the file-processing service.
    Thanks,
    Dennis Guo
    TechNet Community Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]

  • Unzip/Untar files

    I am remotely logged into a UNIX cluster at the university and attempting to manipulate some files that I have received from a collaborator. He has "tarred" and "zipped" the files. I unzip the file with the command
    <remote host:directory>% gunzip file.tar.gz
    (remote host is the machine I am logged into (via ssh -X username); directory is my directory on the remote host)
    This creates the unzipped file, file.tar, on <remote host:directory>. The problem arises when I attempt to untar the file. I am using the command
    tar -xvfp file.tar
    After entering this command, I get the message
    tar: can't mkdir /net: Permission denied
    tar extract: failed mkdir of <:senders directory>
    The same error is produced using [tar -xvf] as the bundled options. I am assuming that it is attempting to make the directory on my collaborator's remote host, because the listed location of the failed mkdir is the sender's directory <:senders directory> on a different UNIX cluster.
    I have tried variations on the tar command (-x or -xv) but those produce the error
    tar: tape read error: no tape in drive
    I am a bit unfamiliar with the syntax for the [[-]bundled-options Args] in the tar command, but I suspect that I need to specify where the untarred archive is going to be written/copied. Could anyone provide some guidance on this?
    Sincerely,
    Aric

    Hi Aric,
       Did you copy the error message verbatim? Does it really want to create the directory, /net? If so then the problem is likely to be in the tarball itself.
       There's nothing wrong with your initial syntax. The -x tells tar to unpack the tarball, which is what you're trying to do. The -p says to preserve permissions, which is fine. The 'p' is not supposed to go after the 'f' in the options because the tarball, file.tar, is technically an argument of the -f option. However, most implementations of tar are rather forgiving in that regard. The -v option simply tells tar to be verbose and list the files. Of course there are many implementations of tar and you don't bother to tell us into what kind of machine you're logged.
       Tar has the ability to archive many files and preserve the directory structure. However, it has to recreate those directories when unpacking and that's when it had a problem. It seems that tar wants to create a directory named "net" at the root of the boot drive. I assume that the creator of the tarball archived such a directory on his machine.
       However, most implementations of tar strip the leading slash, turning an absolute path into a relative path. That way when unpacked, the directory structure starts at the current working directory. Unless you tried to unpack this tarball from the root directory of the boot drive, the tar that created this tarball appears to be different.
       Unless you can get the privilege of writing to the root of this boot drive, I would guess that your only recourse is to get the person who created the tarball to redo it using relative paths. See if you can get him to create it with GNU's tar, as that version strips the leading slashes by default.
    Gary
    ~~~~
       OK, so you're a Ph.D. Just don't touch anything.

  • Large file processing issue

    Hi,
    A 2 MB source file is found to be generating a file of over 180 MB, causing it to fail in pre-prod and production. It processes successfully in the Development box, where there is no web dispatcher or restriction on size.
    The recommendation from SAP is that we try to reduce the output file size.
    Kindly look into the issue ASAP.
    Appreciate your help.
    Thanks,
    Satya Kumar

    Hi Satya,
    There are many ways available; check the links below:
    /people/stefan.grube/blog/2007/02/20/working-with-the-payloadzipbean-module-of-the-xi-adapter-framework
    /people/aayush.dubey2/blog/2007/10/10/zip-transfer-unzip-increase-the-performance-of-your-java-abap-applications
    /people/pooja.pandey/blog/2005/10/17/number-formatting-to-handle-large-numbers
    /people/alessandro.guarneri/blog/2007/02/21/sap-xi-acting-as-a-huge-file-mover
    /people/alessandro.guarneri/blog/2006/03/05/managing-bulky-flat-messages-with-sap-xi-tunneling-once-again--updated
    One more way is to develop a ZIP adapter and send the zip file; after processing, we have to unzip the file again.
    Regards
    Ramesh

  • Problem using payloadZipBean to unzip a file

    I would like to run an OS command to zip a large number of XML files into one zip file before XI picks it up, then use the unzip option in payloadZipBean to unzip all the XML files so XI can process them one by one from memory; this will cut down a lot of file I/O time.
    But I am getting this warning message:
    Warning Zip: message is empty or has no payload
    But if I display the message in RWB, the only payload is this zip file.
    Why is payloadZipBean not unzipping this file as it is supposed to?

    Did you check this blog?
    /people/stefan.grube/blog/2007/02/20/working-with-the-payloadzipbean-module-of-the-xi-adapter-framework
    Sameer

  • Auto unzip "zipped" files inside OWB - possible?

    Scenario:
    The client will be placing large fixed-delimited files, which are in ZIPPED format, in a directory. This file directory will be my source location.
    Question?
    Is there a way to automate the process of unzipping the files in OWB and placing them in the same directory? I am thinking of using a process flow, but I cannot find any utility activity for this. Any suggestions?
    Thanks
    -carlo

    I haven't done it for a while, but I managed to archive files from the workflow via a script.
    In the workflow you have to use the ExternalProcess command.

  • Unzip .bz2 file

    Guys,
    Does anyone know how to unzip a .bz2 file? I am having a problem unzipping the SFWgcc33.bz2 file.
    Thanks....

    You're probably going to need to install some dependencies (the Sun download center has them all), then run pkgadd as below.
    Stole this code from:
    http://gcc.gnu.org/ml/gcc-help/2004-08/msg00022.html
         unzip $HOME/gcmn-1.0-pkg.zip
         unzip $HOME/gcc-2.95.3-pkg.zip
         unzip $HOME/gmake-3.79.1-pkg.zip
         sudo /usr/sbin/pkgadd -n -d SFWgcmn all
         sudo /usr/sbin/pkgadd -n -d SFWgcc all
         sudo /usr/sbin/pkgadd -n -d SFWgmake all

  • Large File Support OEL 5.8 x86 64 bit

    All,
    I have downloaded some software from OTN and would like to combine the individual zip files into one large file. I assumed that I would unzip the individual files and then create a single zip file. However, my OEL zip utility fails with a "file is too large" error.
    Perhaps related, I can't ls the directories that contain huge files. For example, a huge VM image file.
    Do I need to rebuild the utilities to support huge files, or do I need to install/upgrade RPMs? If I need to relink the programs, what are the commands to do so?
    Thanks.

    Correction:  This is occurring on a 5.7 release and not a 5.8.  The command uname -a yields 2.6.32-200.13.1.2...
    The file size I'm trying to assemble is ~ 4 GB (the zip files for 12c DB).  As for using tar, as I understand it, EM 12c wants a single zip file for software deployments.
    I'm assuming that the large file support was not linked into the utilities. For example when I try to list contents of a directory which has huge/large files, I get the following message:
    ls:  V29653-01.iso:  Value too large for defined data type
    I'm assuming that I need to upgrade to a later release.
    Thanks.

  • BT Cloud - large file ( ~95MB) uploads failing

    I am consistently getting upload failures for any files over approximately 95MB in size.  This happens with both the Web interface, and the PC client.  
    With the Web interface the file upload gets to a percentage that would be around the 95MB amount, then fails showing a red icon with an exclamation mark.
    With the PC client the file gets to the same percentage equating to approximately 95MB, then resets to 0%, and repeats this continuously.  I left my PC running 24/7 for 5 days, and this resulted in around 60GB of upload bandwidth being used just trying to upload a single 100MB file.
    I've verified this on two PCs (Win XP, SP3), one laptop (Win 7, 64 bit), and also my work PC (Win 7, 64 bit).  I've also verified it with multiple different types and sizes of files.  Everything from 1KB to ~95MB upload perfectly, but anything above this size ( I've tried 100MB, 120MB, 180MB, 250MB, 400MB) fails every time.
    I've completely uninstalled the PC Client, done a Windows "roll-back", reinstalled, but this has had no effect.  I also tried completely wiping the cloud account (deleting all files and disconnecting all devices), and starting from scratch a couple of times, but no improvement.
    I phoned technical support yesterday and had a BT support rep remote control my PC, but he was completely unfamiliar with the application and after fumbling around for over two hours, he had no suggestion other than trying to wait for longer to see if the failure would clear itself !!!!!
    Basically I suspect my Cloud account is just corrupted in some way and needs to be deleted and recreated from scratch by BT.  However I'm not sure how to get them to do this as calling technical support was futile.
    Any suggestions?
    Thanks,
    Elinor.
    Solved!
    Go to Solution.

    Hi,
    I too have been having problems uploading a large file (362Mb) for many weeks now and as this topic is marked as SOLVED I wanted to let BT know that it isn't solved for me.
    All I want to do is share a video with a friend and thought that BT cloud would be perfect!  Oh, if only that were the case :-(
    I first tried web upload (as I didn't want to use the PC client's Backup facility) - it failed.
    I then tried the PC client Backup.... after about 4 hrs of "progress" it reached 100% and an icon appeared.  I selected it and tried to Share it by email, only to have the share fail and no link.   Cloud backup thinks it's there but there are no files in my Cloud storage!
    I too spent a long time on the phone to Cloud support during which the tech took over my PC.  When he began trying to do completely inappropriate and irrelevant  things such as cleaning up my temporary internet files and cookies I stopped him.
    We did together successfully upload a small file and sharing that was successful - trouble is, it's not that file I want to share!
    Finally he said he would escalate the problem to next level of support.
    After a couple of weeks of hearing nothing, I called again and went through the same farce again with a different tech.  After which he assured me it was already escalated.  I demanded that someone give me some kind of update on the problem and he assured me I would hear from BT within a week.  I did - they rang to ask if the problem was fixed!  Needless to say it isn't.
    A couple of weeks later now and I've still heard nothing and it still doesn't work.
    Why can't Cloud support at least send me an email to let me know they exist and are working on this problem.
    I despair of ever being able to share this file with BT Cloud.
    C'mon BT Cloud surely you can do it - many other organisations can!

  • TC and WD500 External drive - Locks up when copying large files

    I have a TC with a Western Digital 500GB My Book connected to the USB port on the TC that I use for network storage. Everything works fine until I try to copy large files or directories from the Finder-mounted share. After the copy freezes, the external drive can no longer be seen by AirPort Utility. If I look at disks it sees the disk but no partition. I have to power cycle the external drive to have the TC see the drive again. Then I disconnect the drive, plug it directly into the Mac, and run Disk Utility, and this repairs the journal. I have to do the repair every time it locks up. Should the external drive be formatted differently, or does it have to be connected to a USB hub? Or is this just a bug with the TC and AFP?

    I have this very same problem.
    I tried to copy a 4GB+ folder from the Macbook to the 500GB MyBook attached to my TC and it froze.
    Now the files on the MyBook are invisible when attached to the TC ("0 Items") but are available when the drive is connected directly to the Macbook. I have run Disk Utility but it hasn't helped. The disk is FAT32 formatted and the files on the MyBook are still invisible over the network.
    When the drive attached to the Macbook directly I can see a 'phantom' file with the same title as the 4GB+ folder I'd tried to copy. This file is "ZERO KB" and it cannot be deleted (error code -43).
    Anyone have a clue what is happening here?

  • Unable to transfer large files from MB to External HDs

    Hi,
    This may be an unusual one. My 1st edition Macbook won't let me transfer large files to my external hard-drives, either by USB, Ethernet or wirelessly through my home wi-fi network.
    I've started video editing so I'm importing DVcam tapes through firewire from the camera. A full DV tape translates to about 8-12gb I think.
    Using iMovie and saving the import to my Freecom 3.5-inch network hard drive via USB or Ethernet, it fails saying 'Unexpected error, error code 1309'.
    When I try quicktime pro instead to import the tape to the external HD, I get 'Operation could not be completed, An attempt to add a resource to the file failed'. This happens too with my WD 2.5inch external drive.
    However, there is no such problem when I import to the MB's internal hard drive. When I then try to shift the large file off the laptop to an external drive, via USB, Ethernet or wirelessly, for editing and safe keeping, it fails halfway through.
    I have the same problem when I try back-up my 17gb virtual windows machine I use with VMware fusion off the laptop to an external drive.
    I am currently burning an 18gb iMovie project from the MB's internal HD, over 5 DVDs using Toast's disc-spanning and will reimport them to the external, which is really time consuming.
    Also to note, Final Cut Express doesn't have these problems, as it seems to break up what would be an 18GB movie file into several smaller files, which my MB is quite happy to let go onto an external drive. However, the problem re-emerges in Final Cut Express when I try to import a HDV tape. Possibly FCE isn't breaking them up into small enough pieces for my MB to handle?
    Any ideas why this is happening and what I can do to fix it? Or does this mean a new laptop?

    Irish Apple Fan wrote:
    Hi,
    This may be an unusual one. My 1st edition Macbook won't let me transfer large files to my external hard-drives, either by USB, Ethernet or wirelessly through my home wi-fi network.
    I've started video editing so I'm importing DVcam tapes through firewire from the camera. A full DV tape translates to about 8-12gb I think.
    Using iMovie and saving the import to my Freecom 3.5-inch network hard drive via USB or Ethernet, it fails saying 'Unexpected error, error code 1309'.
    When I try quicktime pro instead to import the tape to the external HD, I get 'Operation could not be completed, An attempt to add a resource to the file failed'. This happens too with my WD 2.5inch external drive.
    I'm a bit late to the party, but specifically there's a 4GB file size limit in the FAT32 format that is standard on many external hard drives. I've run across this problem before, and usually it copies over most of the file and then quits when it reaches the limit.

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint or IIS optimizations?  The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is an HTTP stateless system like any other website in that regard. Given that your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it as if large files are going to cause your SharePoint server and SQL to crash. 
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large file in the SharePoint database causes problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed or that the server may run out of memory because downloaded
    files are held in RAM.
