Large file compression

Hi there all,
I have CS3 Extended and have been compiling a series of high-resolution photographs into one layer in Photoshop.  I have copied and pasted each photo and then stitched them together, resulting in a file that is 1.35 GB.  I want to compress the file, keeping the resolution as high as possible, so it is easier to use in Illustrator and import into ArcGIS.  I have tried saving the file as a JPG, PDF and TIFF with various compression qualities.  As soon as I open the image in a different programme and zoom in, I lose the picture, either to a blank screen or a message saying 'invalid format/picture'.  Opening and saving the file is taking ages as well...
Any tips?
Thanks a lot

PSB, JPEG 2000, ECW and OpenEXR are the way to go. They all support wavelet or other sophisticated compression methods and, most importantly, tiled loading, which is what you want in order to avoid those pesky out-of-memory errors in your GIS application. You may, however, need specific (commercial) plug-ins to get the functionality for ECW and EXR in Photoshop. A JPEG 2000 plug-in should be in the extra content folder of your install disk/package. Dunno if it supports tiling, though. Never tried.
Mylenium

Similar Messages

  • Difficulty compressing a large file - compact command fails

    I am trying to compress a 419 GB SQL Server database, following the instructions in the compression example described here:
    http://msdn.microsoft.com/en-us/library/ms190257(v=sql.90).aspx
    I have set the filegroup to read-only, and taken the database offline.  I logged into the server that hosts the file, and ran this from a command prompt:
    ===================================================== 
    F:\Database>compact /c CE_SecurityLogsArchive_DataSecondary.ndf
    Compressing files in F:\Database\
    CE_SecurityLogsArchive_DataSecondary.ndf [ERR]
    CE_SecurityLogsArchive_DataSecondary.ndf: Insufficient system resources exist to
    complete the requested service.
    0 files within 1 directories were compressed.
    0 total bytes of data are stored in 0 bytes.
    The compression ratio is 1.0 to 1.
    F:\Database>
    ===============================================
    As you can see, it gave me an error: "Insufficient system resources exist to complete the requested service."  The drive has 564 GB free, so I doubt that is the issue.  Here are some specs on the server:
    MS Windows Server 2003 R2, Enterprise x64 Edition, SP2
    Intel Xeon E7420 CPU @ 2.13 GHz; 8 logical processors (8 physical)
    7.99 GB RAM
    Any suggestions how to handle this?  I really need to get this large file compressed, and this method seems appealing if I can make it work, because you can supposedly continue to query from the database even though it has been compressed.  If
    I use a compression utility like 7Zip, I doubt that I'll be able to query from the compressed file.

    Hi,
    Based on my knowledge, if you compress a file that is larger than 30 gigabytes, the compression may not succeed.
    For detailed information:
    http://msdn.microsoft.com/en-us/library/aa364219(VS.85).aspx
    Regards,
    Yan Li

  • Compressing a large file into a small file.

    So I have a pretty large file that I am trying to make very small with good quality. The file before exporting is about 1 GB; I need to make it 100 MB. Right now I've tried compressing it with the H.264 codec, and I am having to go as low as 300 kbit/s. I use AAC 48 for the audio. It is just way too pixelated to submit something like this. But I guess I could make the actual video a smaller size, something like 720x480, and just letterbox it to keep it widescreen? Any hints on a good way to make this 21-minute video around 100 MB?

    There are three ways to decrease the file size of a video.
    1. Reduce the image size. For example, changing a 720x480 DV image to 320x240 will decrease the size by roughly a factor of 4.
    2. Reduce the frame rate. For example, changing from 30 fps to 15 fps will decrease the size by a factor of 2.
    3. Increase the compression / change the codec. This is the black magic part of online material. Only you can decide what's good enough (see the quick arithmetic below).
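    A quick back-of-the-envelope check for the 21-minute / 100 MB target above (my own arithmetic, not a Compressor setting): 100 MB is roughly 800 megabits and 21 minutes is 1,260 seconds, so the total budget is about 800,000 kbit / 1,260 s ≈ 635 kbit/s. Reserve something like 64 kbit/s for the AAC audio and you are left with roughly 570 kbit/s for video, which is why shrinking the frame to 720x480 or smaller before compressing is usually what keeps it from looking blocky.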
    x

  • Compressing large file into several small files

    What can I use to compress a 5 GB file into several smaller files that can easily be rejoined at a later date?
    thanks

    Hi, Simon.
    Actually, what it sounds like you want to do is take a large file and break it up into several compressed files that can later be rejoined.
    Two ideas for you:
    1. Put a copy of the file in a folder of its own, then create a disk image of that folder. You can then create a segmented disk image using the segment verb of the hdiutil command in Terminal (see the example command after this list). Disk Utility provides a graphical user interface (GUI) to some of the functions in hdiutil, but unfortunately not the segment verb, so you have to use hdiutil in Terminal to segment a disk image.
    2. If you have StuffIt Deluxe, you can create a segmented archive. This takes one large StuffIt archive and breaks it into smaller segments of a size you define.
    2.1. You first make a StuffIt archive of the large file, then use StuffIt's Segment function to break this into segments.
    2.2. Copying all the segments back to your hard drive and Unstuffing the first segment (which is readily identifiable) will unpack all the segments and recreate the original, large file.
    I'm not sure if StuffIt Standard Edition supports creating segmented archives, but I know StuffIt Deluxe does, as I have that product.
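    For illustration, the hdiutil step in option 1 looks roughly like this (check "man hdiutil" for the exact options on your version of Mac OS X; the file names here are placeholders):
        hdiutil segment -segmentSize 1g -o MyBigFileSeg /path/to/MyBigFile.dmg
    That should leave you with a first segment plus numbered .dmgpart files of about 1 GB each; keep them together in one folder, and mounting or converting the first segment reassembles the original image.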
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X

  • Compressing large files

    How do I compress a large file (25 MB) to send via email? I have StuffIt, but no clue how to use it!
    Rose

    I used yousendit.com. Problem solved.

  • Compressor won't compress large files from QT X Screen cap

    All of my large screen capture files give a "QuickTime error: -36" when I try to transcode them in Compressor. Small ones work fine. Trimming one large file into two smaller ones makes no difference; the second of the two still fails. Ideas, please?

    Need more info:
    What compression settings are you using?
    File size of original file?
    Hard drive space remaining?
    How much ram?
    How many cores?
    etc
    Pretty much every little detail.

  • Hey All, what is the best file compression app to compress large video files on my MacBook Pro?

    Hey All, do you recommend Compressor 4, or what is the best file compression app to compress large video files on my MacBook Pro?

    X,
    Thanks for checking in.  The answers to your questions are below.
    Where is the material coming from?
    These are MP4 files from my Canon VIXIA HF R40.
    What software are you using to edit the material?
    I am using iMovie 9.0.9.
    What do you want to do with the files after compression? I want to email them; I need the file to be smaller than 100 MB to send.
    I am an actor and I videotape my auditions and send them to my agent, so I need highest-quality HD files that have small file sizes, no more than 100 MB.  The scenes are 1-5 minutes long in most cases. Thanks for your help.

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
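       In code, that scan is only a few lines.  Here is a minimal sketch
    in Java (an illustration of the rule above, not Info-ZIP's or
    Microsoft's actual code; the class name, method name, and parameters
    are invented for this example), assuming "extra" holds the raw Extra
    Field bytes for one archive member and the two booleans say which of
    the fixed 32-bit size fields were stored as 0xFFFFFFFF:

           import java.nio.ByteBuffer;
           import java.nio.ByteOrder;

           public class Zip64ExtraFieldScan {
               // Returns {uncompressed size, compressed size} from the Zip64
               // Extra Block (type 0x0001), or null if no such block exists.
               // Only sizes whose fixed 32-bit fields were 0xFFFFFFFF are
               // actually stored in the block, hence the two flags.
               public static long[] readZip64Sizes(byte[] extra,
                                                   boolean needUncomp,
                                                   boolean needComp) {
                   ByteBuffer buf = ByteBuffer.wrap(extra)
                                              .order(ByteOrder.LITTLE_ENDIAN);
                   while (buf.remaining() >= 4) {
                       int type = buf.getShort() & 0xFFFF; // 2-byte type code
                       int len  = buf.getShort() & 0xFFFF; // 2-byte data length
                       if (len > buf.remaining()) break;   // malformed; give up
                       if (type == 0x0001) {               // Zip64 sizes block
                           long uncomp = needUncomp ? buf.getLong() : -1L;
                           long comp   = needComp   ? buf.getLong() : -1L;
                           return new long[] { uncomp, comp };
                       }
                       buf.position(buf.position() + len); // skip other blocks
                   }
                   return null;
               }
           }

       The point is simply that the reader keys off the type code of each
    Extra Block rather than assuming the Zip64 block comes first.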
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • Does Time Machine use a differential/delta file compression when copying files ?

    Hello,
    I would like to use Time Machine to back up a MacBook Air, but that computer has a virtual machine stored in a single 50 GB file.
    Once the initial backup is done, will Time Machine only copy the changes in this large file, or will it copy the full 50 GB every day?
    In other words, does Time Machine use a differential/delta file compression algorithm (like rsync)?
    If that is not yet the case, can you please file a feature request with the development team internally on my behalf?
    If others are also interested in such a feature, you’re welcome to vote for it.
    Kind regards,
    Olivier

    OK, it looks like the current version of Time Machine cannot efficiently handle large files like virtual machine images in network terms, and this is a real issue today.
    Is anybody here able to file an official feature request with Apple so that Time Machine can also be used efficiently with large files (let's say > 5 GB)?
    I probably mean using a differential compression algorithm over the network, like the examples below:
    http://rsync.samba.org/tech_report/tech_report.html
    http://en.wikipedia.org/wiki/Remote_Differential_Compression
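    To make the idea concrete, here is a minimal sketch of the weak rolling checksum that rsync-style differential algorithms use (an illustration of the technique in the links above, not Apple's or rsync's actual code; the class and method names are invented):

        public final class RollingChecksum {
            // rsync-style weak checksum: a = sum of bytes, b = position-weighted
            // sum, both modulo 2^16; the pair can be updated in O(1) when the
            // window slides one byte, which is what makes block matching cheap.
            private static final int MOD = 1 << 16;
            private final int blockLen;
            private int a, b;

            public RollingChecksum(byte[] data, int offset, int blockLen) {
                this.blockLen = blockLen;
                for (int i = 0; i < blockLen; i++) {
                    int x = data[offset + i] & 0xFF;
                    a = (a + x) % MOD;
                    b = (b + (blockLen - i) * x) % MOD;
                }
            }

            // Slide the window one byte: drop 'outgoing', append 'incoming'.
            public void roll(byte outgoing, byte incoming) {
                int out = outgoing & 0xFF, in = incoming & 0xFF;
                a = ((a - out + in) % MOD + MOD) % MOD;
                b = ((b - blockLen * out + a) % MOD + MOD) % MOD;
            }

            public int value() {
                return a + (b << 16); // 32-bit weak signature
            }
        }

    One side computes this signature (plus a strong hash) for every fixed-size block of the copy it already has; the other side slides the window over its version one byte at a time and only transmits the regions whose signatures match nothing on the far end. That is how only the changed parts of a 50 GB file would cross the network.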

  • CS6 very slow saving large files

    I have recently moved from PS CS5 to CS6 and have noticed what seems to be an increase in the amount of time it takes to save large files. At the moment I am working with a roughly 8GB .psb and to do a save takes about 20 minutes. For this reason I have had to disable the new autosave features, otherwise it just takes far too long.
    CS5 managed to save larger files more quickly and with less memory available to it. Looking at system resources while Photoshop is trying to save, it is not using its full allocation of RAM specified in performance preferences and there is still space free on the primary scratch disc. The processor is barely being used and disc activity is minimal (Photoshop might be writing at 4 MB/s max, though often not at all, according to Windows).
    I am finding the new layer filtering system invaluable so would rather not go back to CS5. Is this is a known issue or is there something I can do to speed up the saving process?
    Thanks.

    Thanks for the quick replies.
    Noel: I did actually experiment with turning off 'maximize compatibility' and compression, and it had both good and bad effects. On the plus side it did reduce the save time somewhat, to somewhere a little over 10 minutes. However, it also had the effect of gobbling up ever more RAM while saving, only leaving me with a few hundred MB during the save process. This is odd in itself as it actually used up more RAM than I had allocated in the preferences. The resulting file was also huge, almost 35 GB. Although total HD space isn't a problem, for backing up externally and sharing with others this would make things a real headache.
    Curt: I have the latest video driver and keep it as up to date as possible.
    Trevor: I am not saving to the same drive as the scratch discs, although my primary scratch disc does hold the OS as well (it's my only SSD). The secondary scratch disc is a normal mechanical drive, entirely separate from where the actual file is held. If during the save process my primary scratch disc were entirely filled I would be concerned that that was an issue, but it's not.
    Noel: I have 48 GB with Photoshop allowed access to about 44 of that. FYI my CPU's are dual Xeon X5660's and during the save process Photoshop isn't even using 1 full core i.e. less than 4% CPU time.

  • Unable to copy very large file to eSATA external HDD

    I am trying to copy a VMWare Fusion virtual machine, 57 GB, from my Macbook Pro's laptop hard drive to an external, eSATA hard drive, which is attached through an ExpressPort adapter. VMWare Fusion is not running and the external drive has lots of room. The disk utility finds no problems with either drive. I have excluded both the external disk and the folder on my laptop hard drive that contains my virtual machine from my Time Machine backups. At about the 42 GB mark, an error message appears:
    The Finder cannot complete the operation because some data in "Windows1-Snapshot6.vmem" could not be read or written. (Error code -36)
    After I press OK to remove the dialog, the copy does not continue, and I cannot cancel the copy. I have to force-quit the Finder to make the copy dialog go away before I can attempt the copy again. I've tried rebooting between attempts, still no luck. I have tried a total of 4 times now, exact same result at the exact same place, 42 GB / 57 GB.
    Any ideas?

    Still no breakthrough from Apple. They're telling me to terminate the VMWare processes before attempting the copy, but had they actually read my description of the problem first, they would have known that I already tried this. Hopefully they'll continue to investigate.
    From a correspondence with Tim, a support representative at Apple:
    Hi Tim,
    Thank you for getting back to me, I got your message. Although it is true that at the time I ran the Capture Data program there were some VMWare-related processes running (PID's 105, 106, 107 and 108), this was not the case when the issue occurred earlier. After initially experiencing the problem, this possibility had occurred to me so I took the time to terminate all VMWare processes using the activity monitor before again attempting to copy the files, including the processes mentioned by your engineering department. I documented this in my posting to apple's forum as follows: (quote is from my post of Feb 19, 2008, 1:28pm, to the thread "Unable to copy very large file to eSATA external HDD", relevant section in >bold print<)
    Thanks for the suggestions. I have since tried this operation with 3 different drives through two different interface types. Two of the drives are identical - 3.5" 7200 RPM 1TB Western Digital WD10EACS (WD Caviar SE16) in external hard drive enclosures, and the other is a smaller USB2 100GB Western Digital WD1200U0170-001 external drive. I tried the two 1TB drives through eSATA - ExpressPort and also over USB2. I have tried the 100GB drive only over USB2 since that is the only interface on the drive. In all cases the result is the same. All 3 drives are formatted Mac OS Extended (Journaled).
    I know the files work on my laptop's hard drive. They are a VMWare virtual machine that works just fine when I use it every day. >Before attempting the copy, I shut down VMWare and terminated all VMWare processes using the Activity Monitor for good measure.< I have tried the copy operation both through the finder and through the Unix command prompt using the drive's mount point of /Volumes/jfinney-ext-3.
    Any more ideas?
    Furthermore, to prove that there were no file locks present on the affected files, I moved them to a different location on my laptop's HDD and renamed them, which would not have been possible if there had been interference from vmware-related processes. So, that's not it.
    Your suggested workaround, to compress the files before copying them to the external drive, may serve as a temporary workaround but it is not a solution. This VM will grow over time to the point where even the compressed version is larger than the 42GB maximum, and compressing and uncompressing the files will take me a lot of time for files of this size. Could you please continue to pursue this issue and identify the underlying cause?
    Thank you,
    - Jeremy

  • How do I share a very large file?

    How do I share a very large file?

    Do you want to send a GarageBand project or the bounced audio file?  Sending an audio file is not critical, but if you want to send the project, use "File > Compress" to create a .zip file of the project before you send it.
    If you have a Dropbox account,  I'd simply copy the file into the Dropbox "public" folder and mail the link. Right-click the file in the Dropbox, then choose Dropbox > Copy Public Link. This copies an Internet link to your file that you can paste anywhere: emails, instant messages, blogs, etc.
    2 GB on Dropbox are free.     https://www.dropbox.com/help/category/Sharing

  • Large File problem

    Hi!
    I have a Swing application that must be able to open a large file (4.8 GB). But when I process the file it is incomplete.
    I think the system memory is not enough to open the file.
    Is there some way to know if a file cannot be opened correctly (some way to know the free system memory)?
    Or is there a way to open very large files?
    Thank you

    Might it be that you are somewhere using an int instead of a long for file positions?
    Holding the entire file in memory makes no sense.
    You might try java.nio for memory-mapping the file.
    And you might utilize java.util.zip to compress file parts on the fly while loading them.
    The java switches -Xms and -Xmx might further help out.
    Your application could use soft and weak references to allow for memory-low cleanups.
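    A rough sketch of the java.nio suggestion (illustration only; the chunk size is an arbitrary choice and the class name is made up). A single MappedByteBuffer cannot exceed about 2 GB because buffer positions are ints, which is exactly the int-versus-long trap mentioned above, so a 4.8 GB file has to be mapped in pieces:

        import java.io.IOException;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class LargeFileScan {
            private static final long CHUNK = 256L * 1024 * 1024; // 256 MB per mapping

            public static void main(String[] args) throws IOException {
                try (FileChannel ch = FileChannel.open(Paths.get(args[0]),
                                                       StandardOpenOption.READ)) {
                    long size = ch.size();                   // long, not int
                    for (long pos = 0; pos < size; pos += CHUNK) {
                        long len = Math.min(CHUNK, size - pos);
                        MappedByteBuffer buf =
                            ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
                        // process buf here instead of holding all 4.8 GB in memory
                        System.out.printf("mapped %,d bytes at offset %,d%n", len, pos);
                    }
                }
            }
        }

    This keeps the heap small regardless of file size, so -Xmx only needs to cover whatever you actually build from the data, not the file itself.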

  • Aperture is exporting a large file size, e.g. the original image is 18.2 MB and the exported version (TIFF, 16-bit) is 47.9 MB, any ideas please

    Aperture is exporting a large file size, e.g. the original image is 18.2 MB and the exported version (TIFF, 16-bit) is 47.9 MB. Any ideas, please?

    Raws, even if not compressed, should be smaller than a 24-bit TIFF, since they have only one channel per pixel. My T3i shoots 14-bit 18 MP raws and has a raw file size of 24.5 MB*. An uncompressed TIFF should have a size of 18 MP x 3 bytes per pixel, or 54 MB.
    *There must be some lossless compression going on, since 18 MP times 1.75 bytes per pixel is 31.5 MB for uncompressed raw.

  • Saving large files quickly (less slowly) in PS CS3

    I have some very large files (Document Size box in PS says 7GB, file size as a .psb about 3GB). I guess I should be happy that I can open and save these documents at all. But, on a fast Macintosh, save times are about 20 minutes, and open times about 10 minutes. I know that if I could save these as uncompressed TIFFs, the save and open times should be much faster, but I can't because the files are over the 4GB limit for a tiff. Is there anything I can do to speed up open and save times?
    A

    Peter and Tom
    Thanks for your replies. Have you done any timing on this? I have done some, but didn't keep the results. But my impression is that RAM size and disk speed are relatively less important for save times, and that multiple processors (cores) are not helpful at all.
    What I find is that my save times for a 3GB file are about 20 minutes, which is a write speed of about 2.5 MB/sec, which is well within the capacity of 100Base-T ethernet, and certainly nowhere near the limit of even a slow hard drive. So I don't think write time should have anything to do with it.
    Similarly, while I have 14 GB of RAM, I don't think that makes much of a difference. The time it would take to read data out of a disk cache, put it into memory, and write it back out should be a short fraction of the save time. But, I must say, I have not bothered to pull RAM chips, and see if my save times vary.
    I do find that maximize compatibility does affect things. However, I always leave this on for big files, as I like to be able to use the Finder to remind me which file is which, without firing up Photoshop (and waiting 10 minutes for the file to open). I find it always saves me time in the long run to "Maximize compatibility."
    Now, not being a Photoshop engineer, what I think is going on is that Photoshop is trying to compress the file when it saves it, and the compression is very slow. It also has no parallel processing (as noted from processor utilization rates in Activity Monitor). I suspect that the save code is very old, and that Adobe is loath to change it because introducing bugs into saving makes for very unhappy customers. (I know, I was bitten by a save bug once in ?CS2?, can't remember).
    Honestly, for my money, the best thing Adobe could do for me would be to put a switch or a slider for time/space tradeoffs in saving. Or, maybe somebody could come up with a TIFF replacement for files bigger than 4GB, and I could then saved uncompressed in that new "large-tiff" format.
    Anyway, it looks like my original question is answered: there isn't anything I can do to significantly speed up save times, unless someone has figures on what a 10,000 RPM disk does to change open/save times, which I would love to see.
    Cheers
    A
