Failure of CS2/3 to print large files on HP5500uv printer

I have a file with the following parameters: pixel dimensions 472.8M (15300 x 8100 pixels), print size 51 in. x 27 in., resolution 300 ppi. I have saved this file in several formats, including TIFF, PSD, PSB, JPEG, and PDF. My machine is a dual 2 GHz PowerPC G5 running OS X 10.4.11 with 4 GB of RAM and a 500 GB internal scratch disk. When I attempt to print this file on an HP 5500ps UV printer, the job looks normal for several minutes, then a "Printer sent unexpected EOF" message appears, followed shortly by "No job printing" -- that is, there is no print output. I have tried the different formats, and I've tried both CS2 and CS3, all with the same results. I have also tried a direct link from computer to printer with no change, and no other programs are active.
If I drop the resolution to 250 ppi, so the file is about 325 MB, it prints normally. Is there any way to print the full-size file?
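(For reference, those numbers are consistent with an uncompressed 8-bit, four-channel image, e.g. CMYK; the quick check below is purely illustrative and assumes that channel layout, which the post does not state.)

    public class PrintSizeCheck {
        public static void main(String[] args) {
            double mb = 1024.0 * 1024.0;
            // 51 x 27 in. at 300 ppi = 15300 x 8100 px, 4 bytes per pixel (8-bit, 4 channels)
            long fullRes = 15300L * 8100L * 4L;
            // the same 51 x 27 in. print at 250 ppi = 12750 x 6750 px
            long reduced = 12750L * 6750L * 4L;
            System.out.printf("300 ppi: %.1f MB%n", fullRes / mb);   // ~472.8 MB
            System.out.printf("250 ppi: %.1f MB%n", reduced / mb);   // ~328.3 MB
        }
    }

The second figure matches the ~325 MB file that does print, which suggests the failure tracks the sheer amount of raster data sent to the printer rather than the file format.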

There are a couple of things you can try: 1) see if you have enough system memory to process the page (it could be running out of system memory); 2) set an invisible holding line on the page/document and lock it (no stroke, no fill @ 13 x 19); 3) use the Page tool (if you haven't already) and set it to the document, edge to edge. Also, check your ink supplies; an empty cartridge may be preventing you from printing the entire file. Hope this helps.

Similar Messages

  • Printing problem with ADS on AS Java when printing large files

    Hi all,
    We have a WD for Java application running on AS Java 7.0 SP18 with Adobe Document Services. If we print small files, everything works fine; with large files it fails with the following error (after around 2 minutes). Any ideas?
    #1.5#869A6C5E590200710000092C000B20D000046E16922042E2#1246943126766#com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl#
    sap.com/tcwddispwda#com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl#KRATHHO#8929##sabad19023_CHP_5307351#KRATHHO#
    63a60b106ab311de9cb4869a6c5e5902#SAPEngine_Application_Thread[impl:3]_15##0#0#Error#1#/System/Server/WebRequests#Plain###
    application [webdynpro/dispatcher] Processing HTTP request to servlet [dispatcher] finished with error.
    The error is: com.sap.tc.webdynpro.clientserver.adobe.pdfdocument.base.core.PDFDocumentRuntimeException: Failed to UPDATEDATAINPDF
    Exception id: [869A6C5E590200710000092A000B20D000046E1692201472]#

    Hello,
    On which support package level is the Java stack?
    KR,
    Andreas

  • Very large file (1.6 GB) -- a school sports program with lots of photos dropped into a template's media spots. Trying to get it to a printer who says it's too large, but when he reduces the size it loses all quality... HELP??

    Please help... large file, my son's athletic program. Had no problem with quality last year; this new printer says the file is too large -- too many layers. Can I flatten it? He reduced the size and the photos look awful. It is 81 pages, and it has to be complete before next Friday!! Help, anyone??

    At that size, it sounds like you have inserted lots of photos at too large a size and resolution.
    Especially for a yearbook-style project, first determine your image sizes, crop them, and fix the resolution at 300 dpi, then drag them into the publication (see the sketch after this reply for the arithmetic). Pages does not use external references to files, so the document rapidly grows quite large unless you trim and size your images before placement.
    Peter
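
    Purely to illustrate the sizing arithmetic (in practice you would do this in iPhoto or Photoshop rather than in code), a minimal sketch of pre-sizing a photo for a 4 x 6 in. spot at 300 dpi might look like this; the file names and print size are placeholders:

    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ResizeForPrint {
        // Scales an image down to the pixel dimensions needed for a given print size at 300 dpi.
        public static void main(String[] args) throws IOException {
            double widthInches = 4.0, heightInches = 6.0;          // placeholder print size
            int targetW = (int) Math.round(widthInches * 300);     // 1200 px
            int targetH = (int) Math.round(heightInches * 300);    // 1800 px

            BufferedImage src = ImageIO.read(new File("photo.jpg"));   // placeholder input
            BufferedImage dst = new BufferedImage(targetW, targetH, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                               RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(src, 0, 0, targetW, targetH, null);
            g.dispose();
            ImageIO.write(dst, "jpg", new File("photo_4x6_300dpi.jpg"));   // placeholder output
        }
    }

    At 300 dpi a 4 x 6 in. photo is only about 2 megapixels, so pre-sizing keeps each placed image a small fraction of what a camera-original file would add to the document.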

  • Problems converting PDF to MS Word document.  I successfully converted 4 files and now subsequent files generate a "conversion failure" error when attempting to convert the file.  I have a large manuscript and I separated each chapter to assist with the co

    Problems converting PDF to MS Word document. I successfully converted 4 files and now subsequent files generate a "conversion failure" error when attempting to convert the file. I have a large manuscript and I separated each chapter to assist with the conversion; like I said, the first 4 parts were no problem, then conversion failure. I attempted to convert the entire document with the same result. I specifically purchased the export-to-Word feature. Please assist. I initially had to export the WordPerfect document to PDF and am now attempting to go from PDF to MS Word.

    Hi sdr2014,
    I'm sorry to hear your conversion process has stalled. It sounds as though the problem isn't specific to one file, as you've been unable to convert anything since the first four chapters converted successfully.
    So, let's try this:
    If you're converting via the ExportPDF website, please log out, clear the browser cache, and then log back in. If you're using Reader, please choose Help > Check for Updates to make sure that you have the most current version installed.
    Please let us know how it goes.
    Best,
    Sara

  • Printing large files in LR

    When printing very large files (800+ MB) from Lightroom to my HP Z3200ps 44" printer (on a Mac), nearly half of the print comes out as a gray noise pattern. It seems to happen only with large prints; smaller prints of the same file have no issues, and large prints of the same file through the Mac Preview app work fine. But I'd like to print via LR. Ideas or solutions?

    The term video usually means non-stills, so I was confused.
    Do you mean the Photos app (where the camera roll is) is crashing?
    Most files I've shot with the 3GS camera are 1.1-1.2 megabytes, so the file size is not unusual.
    4441x984 is larger than the typical file found in Camera Roll, so maybe the dimensions are an issue.
    Or maybe there is an issue with files imported to Camera Roll from a non-iPhone source. Where are these odd-sized files from?
    Phil

  • Photos added to a portfolio website I'm building in iWeb look VERY grainy when clicked; the originals are clean, large files that print at high res well over 16" x 20". What has happened to them in iWeb?

    I have added photos to a portfolio website I'm building, and a lot of them look VERY grainy when you click on them. They are nice, clean, large files and can print at high res way over 16" x 20", so what has happened to them in iWeb?

    When you are dealing with websites, image file size is a trade-off between quality and download speed. There's not much point in having high-quality images if they take too long to download in the browser.
    Nowadays we also have to consider the device the end user is viewing the images on. An image that is optimized for viewing on a large screen is total overkill, and unsuitable, for those using mobile devices.
    Really we should be supplying different versions of media files for different devices using @media rules in the stylesheet, but this is rather outside the scope of iWeb. If you use the built-in image optimizer and the iWeb Photo template with slideshow, the application will optimize the images according to how you set this function in preferences, and the slideshow size will be automatically reduced for those viewing it on smaller screens.
    If you want to give your viewers the opportunity to view large, high quality images, you can supply them as a download.

  • Elements 9 failure to read large files.

    I recently purchased a Nikon D5300. I was previously a D50 user and always saved and converted RAW NEF files with Elements 9. The D5300 saves 24 MB NEF files, and Elements 9 will not read these files. I changed to saving RAW and JPEG files and it would not read those either. It will read lower-resolution JPEG files. Is there an update to Elements 9 to allow reading larger files?
    Bob Jacobson

    Here's an interesting take on the "dependent operation failure" error message
    Adobe CS5: Payload cannot be installed due to dependent operation failure

  • Failing to print very large files

    I've got an iMac running the newest Leopard, printing to a network printer. Printing usually works without incident. However, when printing large documents -- say a 20-40MB PowerPoint presentation with lots of color -- I receive the following message after 10-20 minutes of waiting:
    /usr/libexec/cups/backend/socket failed
    The printer is an HP Color LaserJet 8550DN with 96MB RAM and a 3.1GB internal HD.
    When printing large documents to this printer, printing seems to proceed correctly (if slowly) before reaching the inevitable error message.
    Possible causes would seem to be
    --too large a print job for the printer (though lpq suggests that many of the jobs that failed were <50MB). There was no other network printing taking place on this printer when the jobs failed.
    --some kind of timeout issue. How is the timeout value set? It doesn't appear to be in /etc/cups/cupsd.conf, at least at the moment.
    Any other suggestions appreciated.
    Thanks.

    This info relates directly only to an Epson 9800; however, it may apply elsewhere.
    Generally, the print spooler in the 9800 with Photoshop will not print large files (>100 MB or so) unless you first convert them to the printer profile, shut down and restart the Mac, and then print the converted image. This is true for Tiger and Leopard. In addition, Leopard takes about twice as long to spool, and since I print lots of 500 MB files, I have given up and gone back to Tiger (for lots of other reasons as well).

  • Canon ip5000 freezes with large files while printing via Airport Express

    I've discovered a fairly consistent problem when I print from my Powerbook G4 17" to my Canon ip5000 printer. My printer is connected to an Airport Express, which is set as a print server.
    When I print large files - say, a full-quality photo, or a web page with a lot of images - the printer freezes about 20% of the way through the document. It will hang, and the print queue just says it's still printing, with the status bar holding in the 20% - 30% range.
    If I leave things alone, the printer will eventually spit out the unfinished print-out. The queue will continue to read "in progress" until I manually delete the print job.
    I've been able to repeat this problem with about 80% consistency, although 20% of the time the print-out works fine (even when printing the same image that failed previously).
    Any ideas / assistance is appreciated.
    - David

    Same problem here with my Canon ip4200. Still searching for a solution.

  • Back up - large files - external USB drive - copy failure

    Just the facts:
    I cannot back up my Music folder to an external USB FAT32 drive. I have no single files larger than the FAT32 limit of 4 GB. The copy process stops at various amounts of progress, anywhere from 500 MB to 6 GB. My Music folder is ~16 GB total in size. I recently did a clean install (coming from Tiger). I had no issues moving my 14 GB Music folder in Tiger to this same external drive.
    1) Has anyone solved this?
    2) Has anyone talked to Apple support in a store or over the phone on this?
    3) Any way to highlight a (likely) important issue with Leopard to the Apple folks?
    There are numerous posts dealing with moving/copying large files. It seems to be a common issue with Time Machine, iDisk, external drives, copying large folders to the Desktop, and others.
    Cheers,
    MostlyHarmless
    PS New Mac user and really like it so far. I realize these are just Leopard "birth pains."

    This is the Windows forum, not the Mac forum.
    But it's the same. Simply drag the iTunes folder to the external drive.
    Or use a backup program such as SuperDuper!
    You do have a bootable backup of your internal HD, correct?

  • Nokia C7 - Copying large files failure...

    I am not sure about other users, but I recently discovered that my Nokia C7 fails to copy large files (750 MB - 1 GB). It gives an error saying "UNABLE TO READ THE FILE FROM THE SOURCE." However, when I try copying the same file to my flash drive, it copies successfully...
    What exactly is going wrong???

    How are you copying? USB, or a USB memory stick in a USB On-The-Go adapter cable?
    Are you copying to internal memory or memory card?

  • ID CS2 -- SNAFU colors print way off after PDF prints, color setting changes

    I have messed things up on my InDesign pretty good. Working on ID 4.0.5 in CS2, with a corrupt Help file, on OS 10.4.11.
    It started on a particular document after printing to PDF: something got tweaked and the colors went almost inverted (gray prints as pink or purple, green as red). Reading the posts about RGB, CMYK, and the color picker, I feel that might have something to do with it, but I don't know what happened. I did use the color picker (perhaps incorrectly) to match my friend's PMS picks, but it didn't seem to be just that, because it did print fine at one time -- I THINK it was after the PDF. But now at least one other important document is messed up, although not completely -- one small picture prints perfectly, but the main large image is off, pink where it should be gray.
    Fixes I tried that didn't work:
    Went to Adobe Bridge and synched color
    "Emulate Adobe InDesign CS2 CMS Off"
    Monitor Color
    any ideas?

    I agree with Mylenium on this one. Check your application color settings and synch them. I briefly visited some tech info on that photo copier; neither printer is PostScript compatible. Paper profiles are usually chosen in the Print dialog, not in the application. Read the User's Guides to those machines and submit questions about output to the manufacturers. Also check adobe.com for color management white papers.

  • Uploading Very Large Files via HTTP

    I am developing some classes that must upload files to a web server via HTTP using multipart/form-data. On the server side I am using Apache's Tomcat FileUpload library, contained in the commons-fileupload-1.0.jar file. My code fails on large files, or on large quantities of small files, because of the memory restriction of the VM. For example, when uploading a 429 MB file I get this exception:
    java.lang.OutOfMemoryError
    Exception in thread "main"
    I have never been successful in uploading more than ~30 MB, regardless of the server-side component.
    In a production environment I cannot alter the clients' VM memory settings, so I must code my client classes to handle such cases.
    How can this be done in Java? This is the method that reads in a selected file and immediately writes it to the output stream for the web resource, referenced by bufferedOutputStream:
    private void write(File file) throws IOException {
      byte[] buffer = new byte[bufferSize];
      BufferedInputStream fileInputStream = new BufferedInputStream(new FileInputStream(file));
      // read in the file
      if (file.isFile()) {
        System.out.print("----- " + file.getName() + " -----");
        while (fileInputStream.available() > 0) {
          if (fileInputStream.available() >= 0 &&
              fileInputStream.available() < bufferSize) {
            buffer = new byte[fileInputStream.available()];
          }
          fileInputStream.read(buffer, 0, buffer.length);
          bufferedOutputStream.write(buffer);
          bufferedOutputStream.flush();
        }
        // close the file's input stream
        try {
          fileInputStream.close();
        } catch (IOException ignored) {
          fileInputStream = null;
        }
      } else {
        // do nothing for now
      }
    }
    The problem is that the entire file, and any subsequent files being read in, are all packed onto the output stream and don't actually begin moving until close() is called. Eventually the VM gives way.
    I require my client code to behave no differently than a typical web browser when uploading or downloading a file via HTTP. I know of several commercial applets that can do this; why can't I? Can someone please educate me, or at least point me to a useful resource?
    Thank you,
    Henryiv

    Are you guys suggesting that the failures I'm experiencing in my client code are a direct result of the web resource's (servlet) caching of my request (files)? Because the exception that I am catching is on the client machine and is not generated by the web server.
    trumpetinc, your last statement intrigues me. It sounds as if you are suggesting having the client code and the servlet code open sockets and talk directly with one another. I don't think our customers would like that too much.
    Answering your first question:
    Your original post made it sound like the server is running out of memory. Is the out-of-memory error happening in your client code?
    If so, then the code you provided is a bit confusing - you don't tell us where you are getting the bufferedOutputStream - I'll just assume that it is a properly configured member variable.
    OK - so now, on to what is actually causing your problem:
    You are sending the stream in a very odd way. I highly suspect that your call to
      buffer = new byte[fileInputStream.available()];
    is resulting in a massive buffer (fileInputStream.available() probably just returns the size of the file).
    This is what is causing your out of memory problem.
    The proper way to send a stream is as follows:
         static public void sendStream(InputStream is, OutputStream os, int bufsize)
                     throws IOException {
              byte[] buf = new byte[bufsize];
              int n;
              while ((n = is.read(buf)) > 0) {
                   os.write(buf, 0, n);
              }
         }
         static public void sendStream(InputStream is, OutputStream os)
                     throws IOException {
              sendStream(is, os, 2048);
         }
    The simple implementation with the hard-coded 2048 buffer size is fine for almost any situation.
    Note that in your code, you are allocating a new buffer every time through your loop. The purpose of a buffer is to have a block of memory allocated that you then move data into and out of.
    Answering your second question:
    No - actually, I'm suggesting that you use an HTTPUrlConnection to connect to your servlet directly - no need for special sockets or ports, or even custom protocols.
    Just emulate what your browser does, but do it in the applet instead.
    There's nothing that says that you can't send a large payload to an http servlet without multi-part mime encoding it. It's just that is what browsers do when uploading a file using a standard HTML form tag.
    I can't see that a customer would have anything to say on the matter at all - you are using standard ports and standard communication protocols... Unless you are not in control of the server side implementation, and they've already dictated that you will mime-encode the upload. If that is the case, and they are really supporting uploads of huge files like this, then their architect should be encouraged to think of a more efficient upload mechanism (like the one I describe) that does NOT mime encode the file contents.
    - K
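
    To make that suggestion concrete, here is a minimal sketch (Java 7+, with a placeholder URL and content type) of streaming a file to a servlet with HttpURLConnection. Fixed-length streaming mode is the key: by default HttpURLConnection buffers the whole request body so it can set Content-Length, which can also exhaust client memory on very large uploads.

    import java.io.BufferedInputStream;
    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StreamingUpload {
        // Streams a file to an HTTP endpoint without holding it all in client memory.
        public static void upload(File file, String targetUrl) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(targetUrl).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            // The body is streamed as it is written instead of being buffered for Content-Length.
            conn.setFixedLengthStreamingMode(file.length());
            try (InputStream in = new BufferedInputStream(new FileInputStream(file));
                 OutputStream out = new BufferedOutputStream(conn.getOutputStream())) {
                byte[] buf = new byte[2048];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
            int status = conn.getResponseCode();   // forces the request to complete
            System.out.println("Server responded: HTTP " + status);
            conn.disconnect();
        }

        public static void main(String[] args) throws IOException {
            upload(new File(args[0]), "http://localhost:8080/upload");   // placeholder URL
        }
    }

    On the servlet side, reading request.getInputStream() with the same kind of buffered loop keeps memory flat at both ends; multipart encoding is only needed if the server insists on it.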

  • BT Cloud - large file ( ~95MB) uploads failing

    I am consistently getting upload failures for any files over approximately 95 MB in size. This happens with both the Web interface and the PC client.
    With the Web interface, the file upload gets to a percentage that would be around the 95 MB mark, then fails, showing a red icon with an exclamation mark.
    With the PC client, the file gets to the same percentage (equating to approximately 95 MB), then resets to 0% and repeats this continuously. I left my PC running 24/7 for 5 days, and this resulted in around 60 GB of upload bandwidth being used just trying to upload a single 100 MB file.
    I've verified this on two PCs (Win XP SP3), one laptop (Win 7, 64-bit), and also my work PC (Win 7, 64-bit). I've also verified it with multiple different types and sizes of files. Everything from 1 KB to ~95 MB uploads perfectly, but anything above this size (I've tried 100 MB, 120 MB, 180 MB, 250 MB, 400 MB) fails every time.
    I've completely uninstalled the PC Client, done a Windows "roll-back", reinstalled, but this has had no effect.  I also tried completely wiping the cloud account (deleting all files and disconnecting all devices), and starting from scratch a couple of times, but no improvement.
    I phoned technical support yesterday and had a BT support rep remote control my PC, but he was completely unfamiliar with the application and after fumbling around for over two hours, he had no suggestion other than trying to wait for longer to see if the failure would clear itself !!!!!
    Basically I suspect my Cloud account is just corrupted in some way and needs to be deleted and recreated from scratch by BT.  However I'm not sure how to get them to do this as calling technical support was futile.
    Any suggestions?
    Thanks,
    Elinor.

    Hi,
    I too have been having problems uploading a large file (362 MB) for many weeks now, and as this topic is marked as SOLVED I wanted to let BT know that it isn't solved for me.
    All I want to do is share a video with a friend and thought that BT cloud would be perfect!  Oh, if only that were the case :-(
    I first tried web upload (as I didn't want to use the PC client's Backup facility) - it failed.
    I then tried the PC client Backup.... after about 4 hrs of "progress" it reached 100% and an icon appeared.  I selected it and tried to Share it by email, only to have the share fail and no link.   Cloud backup thinks it's there but there are no files in my Cloud storage!
    I too spent a long time on the phone to Cloud support during which the tech took over my PC.  When he began trying to do completely inappropriate and irrelevant  things such as cleaning up my temporary internet files and cookies I stopped him.
    Together we did successfully upload a small file, and sharing that worked - trouble is, it's not that file I want to share!
    Finally he said he would escalate the problem to next level of support.
    After a couple of weeks of hearing nothing, I called again and went through the same farce again with a different tech.  After which he assured me it was already escalated.  I demanded that someone give me some kind of update on the problem and he assured me I would hear from BT within a week.  I did - they rang to ask if the problem was fixed!  Needless to say it isn't.
    A couple of weeks later now and I've still heard nothing and it still doesn't work.
    Why can't Cloud support at least send me an email to let me know they exist and are working on this problem.
    I despair of ever being able to share this file with BT Cloud.
    C'mon BT Cloud surely you can do it - many other organisations can!

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
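
    As an illustration of that scanning logic only (this is not Info-ZIP's or Microsoft's code, and the method name is made up for the sketch), a reader might walk the Extra Field like this, taking the 64-bit values from the type 0x0001 block wherever it happens to appear. It assumes the central-directory convention that a 64-bit value is present only when the corresponding 32-bit field is 0xFFFFFFFF.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class Zip64ExtraField {
        // Returns {uncompressed size, compressed size}, preferring 64-bit values
        // from the Zip64 Extra Block (type 0x0001) over the 32-bit header fields.
        public static long[] readSizes(byte[] extraField, long size32, long csize32) {
            long size = size32, csize = csize32;
            ByteBuffer buf = ByteBuffer.wrap(extraField).order(ByteOrder.LITTLE_ENDIAN);
            while (buf.remaining() >= 4) {
                int type = buf.getShort() & 0xffff;   // Extra Block type code
                int len  = buf.getShort() & 0xffff;   // number of data bytes to follow
                if (len > buf.remaining()) {
                    break;                            // malformed field; stop scanning
                }
                if (type == 0x0001) {                 // Zip64 extended information
                    ByteBuffer data = buf.duplicate().order(ByteOrder.LITTLE_ENDIAN);
                    data.limit(data.position() + len);
                    if (size32 == 0xFFFFFFFFL && data.remaining() >= 8) {
                        size = data.getLong();
                    }
                    if (csize32 == 0xFFFFFFFFL && data.remaining() >= 8) {
                        csize = data.getLong();
                    }
                }
                buf.position(buf.position() + len);   // skip to the next Extra Block
            }
            return new long[] { size, csize };
        }
    }

    The essential point is the loop: every Extra Block's type and length are read, and anything other than 0x0001 is simply skipped over by its declared length.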
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.
