Compression And Decompression

Hi,
I have an issue with compression and decompression.
I used the following program to compress and then decompress a file.
Compression works fine, but when the same file is decompressed
I get an exception.
<code>
import java.io.File;
import java.io.FileOutputStream;
import java.io.RandomAccessFile;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class Zipper {
     public static void main(String args[]) {
          try {
               Zipper zipper = new Zipper();
               if (args.length <= 0)
                    throw new IllegalArgumentException("Please provide file name for compression");
               String compressedFileName = zipper.compress(args[0]);
               String deCompressedFileName = zipper.deCompress(compressedFileName);
          } catch (Exception e) {
               e.printStackTrace();
          }
     }

     /* DEFLATER (compression) */
     private String compress(String fileName) throws Exception {
          String outputName = fileName + ".zipped";
          File inputFile = new File(fileName);
          File zippedFile = new File(outputName);
          RandomAccessFile raf = new RandomAccessFile(inputFile, "r");
          RandomAccessFile rafOut = new RandomAccessFile(zippedFile, "rw");
          Deflater compresser = new Deflater(Deflater.BEST_COMPRESSION);
          byte[] buf = new byte[1000];
          byte[] out = new byte[50];
          int intialCount = 0;
          int zippedSize = 0;
          int count = 0;
          while ((count = raf.read(buf)) != -1) {
               intialCount += count;
               compresser.setInput(buf);        // always passes the full 1000-byte buffer, not just 'count' bytes
               compresser.finish();
               count = compresser.deflate(out);
               zippedSize += count;
               rafOut.write(out);               // always writes the full 50-byte buffer, not just 'count' bytes
               compresser.reset();
          }
          raf.close();
          rafOut.close();
          System.out.println("Intial File Size " + intialCount);
          System.out.println("Zipped File Size " + zippedSize);
          return outputName;
     }

     /* INFLATER (decompression) */
     private String deCompress(String fileName) throws Exception {
          String outputName = fileName + ".unzipped";
          File inputFile = new File(fileName);
          File unzippedFile = new File(outputName);
          RandomAccessFile raf = new RandomAccessFile(inputFile, "r");
          FileOutputStream fops = new FileOutputStream(unzippedFile);
          Inflater deCompresser = new Inflater();
          byte[] buf = new byte[100];
          byte[] out = new byte[1000];
          int intialCount = 0;
          int unZippedSize = 0;
          int count = 0;
          while ((count = raf.read(buf)) != -1) {
               System.out.println("Count = " + count);
               intialCount += count;
               deCompresser.setInput(buf);
               count = deCompresser.inflate(out);   // throws the DataFormatException below
               unZippedSize += count;
               fops.write(out);
               deCompresser.reset();
          }
          fops.close();
          raf.close();
          System.out.println("Intial File Size " + intialCount);
          System.out.println("UnZipped File Size " + unZippedSize);
          return outputName;
     }
}
</code>
The output and exception I get:
Intial File Size 125952
Zipped File Size 4938
Count = 100
java.util.zip.DataFormatException: invalid bit length repeat
at java.util.zip.Inflater.inflateBytes(Native Method)
at java.util.zip.Inflater.inflate(Unknown Source)
at java.util.zip.Inflater.inflate(Unknown Source)
at Zipper.deCompress(Zipper.java:213)
at Zipper.main(Zipper.java:29)

How about some Google?
And try this one: http://www.tinyline.com/utils/index.html
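For what it's worth, the exception is not mysterious. compress() resets the Deflater after every 1000-byte chunk, so the output file is a series of independent deflate streams whose compressed lengths vary, while deCompress() slices the file into fixed 100-byte chunks. The Inflater therefore receives input that does not start on a stream boundary, which is exactly when "invalid bit length repeat" appears. (Writing the full out buffer instead of only the deflate count also pads the file with garbage.) The usual fix is to let DeflaterOutputStream/InflaterInputStream manage the stream; a minimal sketch (the class and method names here are my own):

```java
import java.io.*;
import java.util.zip.*;

public class StreamZipper {
    // Compress a whole file as ONE deflate stream.
    static void compress(String inName, String outName) throws IOException {
        try (InputStream in = new FileInputStream(inName);
             OutputStream out = new DeflaterOutputStream(
                     new FileOutputStream(outName),
                     new Deflater(Deflater.BEST_COMPRESSION))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);   // write only the bytes actually read
            }
        }   // closing the stream finishes the deflate stream
    }

    // Decompress; InflaterInputStream tracks the stream state itself,
    // so the read chunk size no longer matters.
    static void deCompress(String inName, String outName) throws IOException {
        try (InputStream in = new InflaterInputStream(new FileInputStream(inName));
             OutputStream out = new FileOutputStream(outName)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```

Any buffer size works for these copy loops, since the zip streams keep their own internal state between calls.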

Similar Messages

  • How to compress and decompress a pdf file in java

    I have a PDF file.
    I need a lossless compression technique to compress and decompress that file.
    Please help me to do so; I have been worried about this topic for a while.

    Here is a simple program that does the compression bit.

    static void compressFile(String compressedFilePath, String filePathTobeCompressed) {
         try {
              ZipOutputStream zipOutputStream = new ZipOutputStream(new FileOutputStream(compressedFilePath));
              File file = new File(filePathTobeCompressed);
              int iSize = (int) file.length();
              byte[] readBuffer = new byte[iSize];   // sized to hold the whole file in memory
              int bytesIn = 0;
              FileInputStream fileInputStream = new FileInputStream(file);
              ZipEntry zipEntry = new ZipEntry(file.getName());   // getPath() would store the full path in the entry
              zipOutputStream.putNextEntry(zipEntry);
              while ((bytesIn = fileInputStream.read(readBuffer)) != -1) {
                   zipOutputStream.write(readBuffer, 0, bytesIn);
              }
              fileInputStream.close();
              zipOutputStream.close();
         } catch (IOException e) {   // FileNotFoundException is a subclass of IOException
              e.printStackTrace();
         }
    }
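The question also asks for decompression; a counterpart sketch using ZipInputStream (the class name, and extracting entries flat into targetDir, are my own choices) could look like this:

```java
import java.io.*;
import java.util.zip.*;

class Unzipper {
    // Extract every file entry of a zip archive into targetDir (flat).
    static void decompressFile(String compressedFilePath, String targetDir) throws IOException {
        try (ZipInputStream zipIn = new ZipInputStream(new FileInputStream(compressedFilePath))) {
            ZipEntry entry;
            byte[] buf = new byte[4096];
            while ((entry = zipIn.getNextEntry()) != null) {
                if (entry.isDirectory()) continue;            // skip directory entries
                File outFile = new File(targetDir, new File(entry.getName()).getName());
                try (FileOutputStream out = new FileOutputStream(outFile)) {
                    int n;
                    while ((n = zipIn.read(buf)) != -1) {     // read() stops at the entry boundary
                        out.write(buf, 0, n);
                    }
                }
                zipIn.closeEntry();
            }
        }
    }
}
```

Note that ZIP (deflate) is lossless by design, so the PDF comes back byte-for-byte identical.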

  • Compressing and decompressing a PDF file

    Dear Gurus,
    Anyone who can assist with the code to compress and decompress PDF file so that I can send it in an email.
    Thanks

    1) You should go to the ABAP Printing forum, where this problem has been discussed many times (for example, removing the coloring to reduce the file size). Just in case you would insist on working your way:
    2) You can create some printing "device" like a SmartForm to print your data for you and then convert it to PDF; the result should be smaller than option 1).
    3) The best one: if you want to send a PDF somewhere (or archive it), you should start with Adobe forms; for your purpose Adobe print forms will be sufficient. You only need to install and configure the ADS component to generate forms from the template for you. If you want a brief overview of how to create this, start reading somewhere like here (this is not the simplest example!):
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c2567f2b-0b01-0010-b7b5-977cbf80665d
    Regards Otto

  • Download and decompression

    I am new to J2ME. I am trying to develop an application that downloads a zip file from our server and decompresses it in a specific location on Pocket PC, PDA and mobile devices. Could you please let me know if I can do this in J2ME? If so,
    could you please give me some ideas or refer me to some websites where I can find relevant information?
    Thanking you in advance
    With best regards
    Aruna
    Edited by: arunag on Dec 20, 2007 1:38 AM

    How about some Google?
    And try this one: http://www.tinyline.com/utils/index.html

  • Compressing and decompressing a  string list

    Hi All,
    I need some help.
    I have a string list, say:
    String sample="test";
    String sample1="test1";
    String sample2="test3";
    I want to compress sample, sample1 and sample2 and store the compressed results in other string variables, e.g.:
    compressedSample=sample;
    compressedSample1=sample1;
    compressedSample2=sample2;
    Similarly, how can I uncompress the compressedSample, compressedSample1 and compressedSample2 variables?
    Thanks in advance

    Try something like this:
    Compressing:
    ByteArrayOutputStream aos = new ByteArrayOutputStream();
    GZIPOutputStream gos = new GZIPOutputStream(aos);
    byte[] ba = "the_string_to_compress".getBytes("UTF-8");
    gos.write(ba, 0, ba.length);
    gos.close(); // close (not just flush), or the GZIP trailer is never written
    String compressed_string = aos.toString("ISO-8859-1"); // ISO-8859-1 round-trips all byte values; the platform default charset may not
    Decompressing:
    GZIPInputStream zin = new GZIPInputStream(new ByteArrayInputStream(compressed_string.getBytes("ISO-8859-1")));
    ByteArrayOutputStream dat = new ByteArrayOutputStream(); // collect bytes, then decode once (avoids splitting multibyte chars)
    byte[] buf = new byte[128]; // was "new Byte[128]", which does not compile
    for (int r; (r = zin.read(buf)) != -1;) dat.write(buf, 0, r);
    zin.close();
    String org_string = dat.toString("UTF-8");
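If the compressed result really has to travel as a String, Base64 is a safer container than raw compressed bytes coerced through a charset. A sketch (class and method names are my own; java.util.Base64 needs Java 8 or later):

```java
import java.io.*;
import java.util.Base64;
import java.util.zip.*;

class StringGzip {
    // Compress a string and encode the binary result as Base64 text.
    static String compress(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }   // closing writes the GZIP trailer
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    // Reverse: Base64-decode, then gunzip back to the original string.
    static String decompress(String b64) throws IOException {
        byte[] raw = Base64.getDecoder().decode(b64);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(raw))) {
            byte[] buf = new byte[128];
            int n;
            while ((n = gz.read(buf)) != -1) bos.write(buf, 0, n);
        }
        return bos.toString("UTF-8");
    }
}
```

Base64 output is plain ASCII, so it survives any String/database round trip; the cost is roughly a 33% size increase over the raw compressed bytes.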

  • Compressed AND  Decompressed  STRING

    I want a Java program in two parts.
    Part 1:
    Compress a large string to a smaller size and store the compressed string in a MySQL database.
    Part 2:
    The program reads the compressed string from the database and then decompresses it.
    I am a beginner-level Java programmer.
    Please give me code, and please reply quickly.
    With regards,
    kumars

    Hi everyone,
    I have some problems with compressing/decompressing strings.
    I want to store a serialized PHP array, which looks like this:
    a:30:{i:0;s:1:"0";i:1;s:1:"1";i:2;s:1:"2";i:3;s:1:"3";i:4;s:1......
    On Linux and Mac OS X it works fine, but not on Win32 (WinXP).
    I always get this error:
    java.io.IOException: Corrupt GZIP trailer
            at java.util.zip.GZIPInputStream.readTrailer(Unknown Source)
            at java.util.zip.GZIPInputStream.read(Unknown Source)
            at StringZip.decompress(StringZip.java:57)
            at StringZip.main(StringZip.java:84)
    Is my test string too long?
    Maybe anyone can try this sample.
    Much thx :D
    import java.io.*;
    import java.util.zip.*;
    public class StringZip {
         private static final int BLOCKSIZE = 1024;
         public static String compress(String data) {
              ByteArrayOutputStream zipbos = new ByteArrayOutputStream();
              ByteArrayInputStream zipbis = new ByteArrayInputStream(data.getBytes());
              GZIPOutputStream gzos = null;
              // now try to zip
              try {
                   gzos = new GZIPOutputStream(zipbos);
                   byte[] buffer = new byte[BLOCKSIZE];
                   // write into the zip stream
                   for (int length; (length = zipbis.read(buffer, 0, BLOCKSIZE)) != -1;)
                        gzos.write(buffer, 0, length);
              } catch (IOException e) {
                   System.err.println("Error: Couldn't compress");
                   e.printStackTrace();
              } finally {
                   if (zipbis != null)
                        try { zipbis.close(); } catch (IOException e) { e.printStackTrace(); }
                   if (gzos != null)
                        try { gzos.close(); } catch (IOException e) { e.printStackTrace(); }
              }
              // return the zipped string
              return zipbos.toString();   // platform default charset: the likely source of the Win32 corruption
         }
         public static String decompress(String data) {
              ByteArrayInputStream zipbis = new ByteArrayInputStream(data.getBytes());
              ByteArrayOutputStream zipbos = new ByteArrayOutputStream();
              GZIPInputStream gzis = null;
              try {
                   gzis = new GZIPInputStream(zipbis);
                   byte[] buffer = new byte[BLOCKSIZE];
                   // write the decompressed data into the stream
                   for (int length; (length = gzis.read(buffer, 0, BLOCKSIZE)) != -1;)
                        zipbos.write(buffer, 0, length);
              } catch (IOException e) {
                   System.err.println("Error: Couldn't decompress");
                   e.printStackTrace();
              } finally {
                   if (zipbos != null)
                        try { zipbos.close(); } catch (IOException e) { e.printStackTrace(); }
                   if (gzis != null)
                        try { gzis.close(); } catch (IOException e) { e.printStackTrace(); }
              }
              return zipbos.toString();
         }
         public static void main(String[] args) {
              String test = "a:30:{i:0;s:1:\"0\";i:1;s:1:\"1\";i:2;s:1:\"2\";i:3;s:1:\"3\";i:4;s:1:\"4\";i:5;s:1:\"5\";i:6;s:1:\"6\";i:7;s:1:\"7\";i:8;s:1:\"8\";i:9;s:1:\"9\";i:10;s:2:\"10\";i:11;s:2:\"11\";i:12;s:2:\"12\";i:13;s:2:\"13\";i:14;s:2:\"14\";i:15;i:15;i:16;i:16;i:17;i:17;i:18;i:18;i:19;i:19;i:20;i:20;i:21;i:21;i:22;i:22;i:23;i:23;i:24;i:24;i:25;i:25;i:26;i:26;i:27;i:27;i:28;i:28;i:29;i:29;}";
              String test_compressed = compress(test);
              String test_decompressed = decompress(test_compressed);
              System.out.println(test);
              System.out.println(test_decompressed);
         }
    }
    Message was edited by: antiben
    // edit
    OK, found a workaround. Seems to be a Windows-only problem :/
    (The platform default charset used by toString()/getBytes() is the likely culprit: Cp1252 on Windows does not round-trip every byte value, which corrupts the compressed data.)

  • How do I install firefox on my Asus EeePC having downloaded and decompressed it?

    My notebook came with Firefox installed on the Xandros operating system, but I have been prompted to download the latest version. I have done so and now need to load it into the system. I have no instructions for that. I realize I need to get the command console up, but what do I type into it? The Xandros system has an "add new software" button, but only for the existing Linux software; it does not recognise other versions.

    How old is that EeePC?
    If that is one of the original versions, from 2007 - 2009, the Xandros operating system doesn't support anything newer than Firefox 2.0 !
    I have a EeePC 900, and in 2009 I installed EasyPeasy OS [http://en.wikipedia.org/wiki/EasyPeasy] so that I could run Firefox 3.6. Unfortunately, EasyPeasy was discontinued in 2012 altogether, but work on it had quit in like 2010 when Ubuntu started adding files needed for Netbooks like the EeePC.
    Overall, ASUS used a weird OS to begin with on the first EeePC 700 model - Xandros, which was already a couple of years behind the Linux-curve as far as security updates and file version levels. And then ASUS didn't provide funding to the team that created Xandros to make it more up-to-date.
    Then there is the issue of Microsoft seeing the Netbook market slip through their fingers, and MS putting pressure on ASUS to use a "Lite" version of WinXP, which did become an option after the 900 versions. But that's another story ...
    Bottom line is that you need a different, newer Operating System for that device to be able to use a newer version of Firefox. Best thing to do is to head over to the Ubuntu support fora and ask about an appropriate Linux distro for your EeePC.
    http://ubuntuforums.org/
    I don't use my EeePC for the purpose I originally got it for ''(the company I worked in for 2008 went bust in 2010 - I was an outside service rep and did on-site job reporting with the EeePC which fit in my toolbag)'', and I never updated it beyond EasyPeasy and Firefox 3.6. I use it now on my local network in my home and don't go on the internet with it. Basically a terminal on the network for viewing and editing locally saved files when I am watching TV, during commercials.

  • Download and decompress blob

    Hi! I have a Servlet that retrieves a blob from a DB (a GZIP file) and downloads it; here's the code:
    isZip = new GZIPInputStream(blob.getBinaryStream());
    out = response.getOutputStream();
    // 4.- Prepare the servlet's content type, forced to download.
    response.reset();
    response.setContentType("application/force-download");
    response.setHeader("Content-Disposition", "attachment; filename=\"" + "melon.gz" + "\"");
    int num = is.read();
    while (num != -1) {
        out.print(num); // out is a ServletOutputStream
        num = is.read();
    }
    response.flushBuffer();
    // Close and flush all the streams.
    But it doesn't work well; the zip tool doesn't recognize the file as a gzip file. Can you help me?

    Oops, I was reading from the input stream is instead of from isZip.
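Beyond the is/isZip mix-up, there are two more problems worth flagging: out.print(num) writes each byte as decimal text rather than as a raw byte, and since the download is served under the name melon.gz, the blob should be streamed as-is, without wrapping it in a GZIPInputStream at all. A hedged sketch of the copy loop (the helper class and name are mine):

```java
import java.io.*;

class StreamCopy {
    // Copy raw bytes from in to out; out.print(num) would instead write
    // the decimal text of each byte ("31139..."), corrupting the download.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);   // write(byte[], off, len), never print(int)
            total += n;
        }
        return total;
    }
}
```

In the servlet this would be copy(blob.getBinaryStream(), response.getOutputStream()), keeping the .gz bytes untouched so the client's unzip tool sees a valid gzip file.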

  • Compress and decompress in string

    Hi,
    how do I compress a large string in Java and store the compressed string in a database? And how does the Java program then read the compressed string from the database and decompress it?
    Please give me advice
    or a useful web link.
    With regards,
    kumar

    What is wrong with the answer(s) given in the past thread?
    You could use StringReader/ByteArrayInputStream + ByteArrayOutputStream + (java.util.zip or GZIPOutputStream).

  • Z68 and memtest 86 mystery

    So where do we start this time ?
    After having set up my new Z68A-GD65 (G3) board, which went without a hitch I must add, I went through a couple of beta BIOS versions to test and see what the restricted multi was all about.
    The .N34 beta BIOS eventually had no more multi restriction, but was by no means perfect. Even with the minor issues that still exist, my board is solid and performs up to expectations, maybe even better than what I expected after now having run some comprehensive tests. But more about that later.
    First the hardware:
    MSI Z68-GD65 (G3)
    i5-2500K
    Arctic Cooling Freezer 13
    Crucial 2X2GB CT2KIT25664BA1339 first mem kit tested
    Corsair CML8GX3M2A1600C9W  1,35V Vengeance  kit currently on the MB
    Nvidia 8600GT (currently for testing only)
    Seagate 500GB SATA II
    Sony Optiarc DVD RW
    Corsair CX500 PSU
    all mounted on a CM Lab test bench
    OS Win 7-64bit
    BIOS used for testing V23.3B4 (.N34)
    Other hardware used: Fluke model 77 digital multimeter.
    One thing that does need mentioning is that the .N34 BIOS applies the correct memory voltage for the Low voltage Vengeance when XMP is enabled. The previous BIOS did not do that and one has to manually adjust it.
    Within the BIOS itself, the reported "Current DRAM voltage" in Auto and Manual does not reflect what is applied in "DRAM Voltage" and even that differs from the real voltage. I used a Fluke Digital Multimeter to eventually find the correct voltages and for anyone that wants to apply the correct voltages, here are two tables as tested.
    Having used the multimeter to verify the VCore, I found that CPUZ reflects the most accurate reading out of all the applets used. A 1,36 Core voltage in CPUZ was reflected as 1,363V on the multi.
    Now let's get to the real reasons for the tests.
    In the recent past we had a user on a MSI sister forum that ran memtest 86 and found what he described as reduced base clocks whenever the CPU multiplier was increased by the "Adjust CPU Ratio" function in the BIOS. This led him to believe that although the board is running at an increased core frequency, that the performance would decrease as the BCLK and mem speed in memtest showed this clearly. No real benchmark tests had been done in the OS to either confirm or deny this suspicion, except a remark that it feels considerably slower than the same CPU on another manufacturer's P67.
    At this point I must point out that it is not my intention nor the forum's intention to compare manufacturers motherboards for benchmarking. I shall include sample tests performed on that board which should clearly show any performance degradation, if the memtest data is anything to go by.
    Having read the post I decided it would be a nice challenge to see if this memtest phenomenon actually translates into a real performance reduction once the OS is loaded and benchmarks are performed. I am fortunate that I also happen to have the exact board that the user was referring to, and mine has an i7-2600K on it.
    First things first. Let's have a look at what the reported memtest issue is all about
    This first screenshot shows the values that were monitored to be inconsistent with what is set in the BIOS. My first screenshot was taken with the BCLK at 100 (99.8) and the DDR3-1600 modules.
    The second screenshot was taken with my 1333 memory installed and my multiplier set at X45, still with a BCLK of 100 (99.8). You can clearly see that the BCLK is at 73 and the memory is at DDR3-975. I tested various multis from 33 through to 45 and, as the multi increased, these reflected values decreased. Below are the results of my tests with the 1333 memory:
    @ X33 everything is still normal
    @X34 BCLK=96  mem DDR3-1291
    @X38 BCLK=86  mem DDR3-1155
    @X40 BCLK=82  mem DDR3-1097
    @X43 BCLK=76  mem DDR3-1021
    @X44 BCLK=74  mem DDR3- 997
    @X45 BCLK=73  mem DDR3- 975
    I checked this with the DDR3-1600 modules and the behaviour is the same. I only ran the DDR3-1600 up to X42 as it was enough to prove the similar results.
    @X33 BCLK=99  mem DDR3-1596
    @X36 BCLK=91  mem DDR3-1463
    @X39 BCLK=84  mem DDR3-1351
    @X42 BCLK=78  mem DDR3-1254
    It must be remembered that at all times the memory was set at 1333 and 1600 respectively in the BIOS.
    Hence the user's understandable assumption that something is amiss with the Z68 board.
    Since these values in memtest do tend to lead one to believe an associated performance penalty, it was time to put it to the test in the real world and run some applications that could measure overall memory performance and also the total system performance.
    My choice was to use Maxxmem which produces comprehensive theoretical benchmark numbers and the 7ZIP which has a benchmark tool included, also providing a comprehensive system bench in terms of file compression and decompression.
    The software used:
    Win7-64bit OS
    CPU-Z v1.58
    Maxxmem v1.95
    7Zip v9.2
    Both the systems used are running with 2X4GB DDR3-1600 Vengeance 9.9.9.24 timings.
    My Z68 has the i5-2500K and the P8P67-Pro has the i7-2600K.
    Now before anyone cries foul that the i7-2600K has 8 threads and the 2500K only 4: I chose 7Zip because its benchmark has the ability to restrict the number of threads for testing. So within the 7ZIP benchmark I used only 1 thread throughout all the tests to ensure a fair comparison.
    I quote the following from the Help file.
    The benchmark shows a rating in MIPS (million instructions per second). The rating value is calculated from the measured speed, and it is normalized with results of Intel Core 2 CPU with multi-threading option switched off. So if you have modern CPU from Intel or AMD, rating values in single-thread mode must be close to real CPU frequency.
    The focus is on the Z68 tests and only X45 was used to do a performance comparison between the Z68 and P67 platform.
    First the Maxxmem results:
    This already indicates that the perceived performance penalty deduced from memtest 86 has no influence on real world performance. As the multi is increased the memory performance increases proportionally.
    Now let's have a look at the 7ZIP benchmark tests. These tests were all done with one core, unless otherwise specified and all used a default dictionary size of 32mb.
    This one came as a bit of a surprise. You can see that the Z68 actually outperforms the P67 with its 2600K when both are running 4 threads.
    Once all 8 threads are used, then everything is as per expectation.
    A sample overview of the values as reflected within the benchmark applications.
    And below a snapshot of my Z68 on the bench test. I know it looks like a mess, but when you are doing testing, nothing beats an open bench.
    In closing, I believe that memtest 86 needs some work to be compatible with the Z68 chipset as it is clear from the tests that there is no real world performance loss.
    Furthermore I invite other users of Z68 boards to do their own tests on MSI boards and post them for comparative purposes, especially those users that still have a BIOS causing them to have the mysterious throttling.

    SonDa5 this thread is about memtest86 and the fact that it incorrectly reports BCLK and mem speeds on Sandy Bridge boards as you have observed yourself. Nothing to do with the BIOS.
    The test that we conducted were done to show that whatever was reported in memtest86 was not a true reflection of the real BCLK and mem speeds once the system was running in the OS. It further concluded that it did not have any performance impact.
    My own voltage observations on my board were added within this thread for the sake of completeness of the testing.
    If you have particular bugs with your board, then kindly start a thread in the appropriate section of the forum about your problems and your test results.
    You started by assuming a problem of some sort with your board, based on memtest86 observations. This was sufficiently answered within this thread.
    You are now using someone else's test results to come to some kind of conclusion on whatever perceived problems you have. Do your own tests if you want to prove a point and then take those as a starting point to substantiate your generalised statements.

  • What is the best and easiest way to upload a big file from an AIR app to a server?

    hello everyone
    i am a self-teach-as-i-go kind on person, and this is my first encounter with uploading to a server, websites and all
    i have written an AIR app in which the user chooses pictures from his/her computer and fills out numerous forms. at the end i want to upload all this data to my server
    currently, all the data folder gets compressed to a single zip file (using the noChump zip library). i did this for simplicity reasons (uploading only a single file) - the size is the same. these files can get up to 200mb in size
    as a server, i have one domain I have bought and currently only a small space (1G - basic). I control it using Parallels® Plesk panel (default from the company i bought the domain and space from)
    I have no knowledge other then as3 (thanks, OReilly!), so i thought of something that doesn't require server side scripting.
    after messing around a bit i found the code at this question: http://stackoverflow.com/questions/2285645/flex-crossdomain-xml-file-and-ftp
    (thank you Joshua). please look at that code, basically, it uploads through a socket
    I fixed it up a bit and was able to upload a 64mb zip file to my httpdocs folder in my domain. this included hard coding my username and password
    looking at my site management panel i saw the file created and expanding in size, and at the end i even downloaded the zip and decompressed it - all well.
    my questions are:
    the upload continued even after i exited my air app! how does this work?
    i can't get progress events to fire (this relates to question 1).
    this domain also holds my web page. is httpdocs the correct folder to put user data in? how do i give each user their own username and password?
    is this the right way to go anyway? remember file sizes could reach 200mb, and also, secure transfer is not a must
    hope you guys can make sense in the mess
    cheers
    Saar

    Google search.

  • DSLR workflow for ideal render and export

    I recently completed my first 5 minute project in Premiere Pro CS6 with great results.  The source material is a Canon 7D, and I'm working with unconverted native files.  Unfortunately, as the project had color-correction with Magic Bullet looks, filters like sharpening, and layers to "blur" certain elements on-screen, the initial compressed h264 export took nearly 7 hours.
    A bit of research led me to realize I was better off exporting a "master" and converting outside of Premiere Pro.  So, what would be the best way to proceed?
    Should I render my timeline and then choose export with "match sequence settings" and "use previews"?  Should I not pre-render and instead export only with "match sequence settings"?  If I do pre-render and use previews, will the resulting quality be the same as if I render while exporting?
    Or is there another way?  Could I create preview files with ProRes and export to ProRes?
    Thanks for helping out a thrilled PP convert.

    A little primer on Preview files...(btw I think they are great when used as they are intended)
    Preview files are caches of raw frames [with effects applied to them] that are encoded into the Native Editing format (eg DV, HDC, MPEG-Intra).
    It's a little known fact that ANY Export format can be made into an editing format. When you make a new Sequence you just have to pop over to the Custom Tab to change to that video format. In general this isn't a good idea though. Why? Because most Export formats are not good Digital Intermediates (DI). By that I mean that repeatedly compressing and decompressing damages/degrades the footage or they are slow to decode and encode.
    So, what format to choose? Formats like Avid DNxHD and Apple ProRes are great editing modes. They don't degrade over multiple generations very much and are relatively light on CPU power to decode and encode. There are lossless DIs out there too, like UtLossless (which is free). At the other extreme are codecs that don't fare well when used as a DI - like H.264/AVC. They are simply too CPU hungry for the task.
    Don't misunderstand me - editing source AVC footage is fine because decode is waaaay easier than encode. It's just not practical because of the encode time.
    So why use Preview files at all?
    IF your effects (like Magic Bullet or Motion Stabilizer) take a LONG time, you only have to do the computation once. Work in the highest resolution you'll be using (eg 1920x1080p). After you have the preview files "rendered", Premiere Pro will use them instead in that section and you'll have nice smoooooooth editing.
    You then are free to Export your final video to whatever format you want in whatever size you want. And because you chose a DI that isn't hard on the CPU to decode you pick up a TON of time vs having Magic Bullet recalculate the same thing over and over again for each output format/size/datarate. - Just make sure you check the "Use Previews" button otherwise they will be ignored.
    Level 200 tips:
    IF, and this is a big IF, you happen to be exporting your final video at the same
        FRAME SIZE
        FRAME RATE
        COMPRESSION SETTINGS THAT MATCH THE EDITING MODE
    Then, you can get an extra Encode time boost for Exporters that support "Smart Rendering". Smart Rendering is when all the above conditions are true and the compression scheme supports being able to COPY the preview frame instead of de-compressing it and then re-compressing it.
    Basically there is little to no chance that's gonna happen unless your final output is a Digital Intermediate format to then go onto some other post-production step.
    So... choose a good Editing Mode that doesn't lose quality when encoded (during the preview render) then decoded (during the early phase of Exporting each frame) to make the raw uncompressed frame that is Encoded into your final output format (eg F4V, AVC for Bluray, YouTube, Vimeo etc).
    'hope that helps.
    Rallymax.

  • LiveCycle Forms and ZipFiles issues

    I've created about 15 fillable forms in LiveCycle ES2 that have extended rights (as they need to have savable text for users). We want to upload these forms to our website for users to download, and since the forms can take a while to open, I thought it best to zip them. The problem, however, is that when I tested putting a form in a zip file and unzipped it, it no longer seems to have the ability to save text. It is really strange. In other words, it can be zipped, but not zipped with the extended rights kept. Does anyone on this site have any suggestions to get around this issue? Thanks.

    That is really strange. Zipping and unzipping of PDF forms doesn't affect their Usage Rights.
    I just added a few rights-enabled PDFs to a zip file and decompressed them. They are all working fine.

  • Decompressing BytesMessage in JMS

    Hi,
    I am currently facing problems during decompression of BytesMessage data.
    What I am trying to do:
    Compress data and publish it onto a Tibco JMS channel as a BytesMessage using WebLogic Server (Tibco JMS is configured as a foreign JMS server in WebLogic).
    Decompress the BytesMessage data on the subscriber side.
    The same set of compression and decompression components works fine as individual components in a plain Java program,
    i.e. compressing some in-memory data and then uncompressing it works fine.
    But when I publish at one end and then subscribe and try to uncompress, I get errors as below.
    Can you please share your thoughts on this? It is very critical for my project, which is right now in the integration phase.
    Error:
    DataFormatException: java.util.zip.DataFormatException: unknown compression method
    I am using
    java.util.zip.Inflater;
    java.util.zip.Deflater;
    for my compression/decompression logic.
    Thanks,
    Kiran Kumar

    Hi Kiran,
    The problem is likely with the app, the JVM, or Tibco, since WL isn't directly in the message flow. You might want to check whether Tibco is somehow changing the contents of your message, or whether your app is incorrectly serializing/deserializing the compressed data into the message. Of course, you can also try posting to the Tibco newsgroups.
    Tom, BEA
    P.S. This probably doesn't help you much, but as an FYI, WebLogic 9.0 JMS provides a built-in automatic message compress/decompress feature.
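One classic cause of "unknown compression method" in this kind of setup is publishing the entire deflate buffer instead of only the bytes the Deflater actually produced, so the subscriber's Inflater starts on stale trailing bytes; BytesMessage.writeBytes(byte[], int, int) lets you send an exact slice. A small, hedged sketch of length-exact compression, independent of any JMS provider (the class name is mine):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

class ExactLengthZip {
    // Compress, keeping ONLY the bytes deflate produced.
    static byte[] compress(byte[] data) {
        Deflater def = new Deflater();
        def.setInput(data);
        def.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!def.finished()) {
            int n = def.deflate(buf);
            bos.write(buf, 0, n);   // copy only the n bytes produced, never the whole buffer
        }
        def.end();
        return bos.toByteArray();   // this exact array is what should go into the message
    }

    // Inflate a complete deflate stream back to the original bytes.
    static byte[] decompress(byte[] compressed) throws DataFormatException {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inf.finished()) {
            int n = inf.inflate(buf);
            if (n == 0 && inf.needsInput()) break;   // guard against truncated input
            bos.write(buf, 0, n);
        }
        inf.end();
        return bos.toByteArray();
    }
}
```

On the receiving side, read the message with readBytes into a buffer of exactly getBodyLength() bytes before inflating; any extra padding reproduces the reported exception.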

  • Reading and Writing to files in JARs

    I'm making a card game (TriPeaks). I need to store the scores for the users in files. Everything worked fine until I packaged it into a JAR. I tried using getClass().getResource(fileName); and then converting the URL to a URI to create the File object. That gave me an error saying the URI wasn't hierarchical.
    So I Googled around and found that I can use:
    InputStream is = getClass().getResourceAsStream(fileName);
    try {
        BufferedReader in = new BufferedReader(new InputStreamReader(is));
        // read the file, etc.
        is.close();
    } catch (IOException eIO) { }
    That works both in and out of the JAR. However, I still can't write the scores to the files. I know that you can write to a file inside a JAR because I created another program that writes its settings to a file. However, that program has its settings file in the same folder (the root of the JAR) as the .class file in the JAR. The scores, however, are in a subfolder in the JAR file.
    How would I write to the files in the subfolder in the JAR file?

    Vetruvet wrote:
    Why wouldn't there be such a useful feature in the API?
    Because it's extremely difficult to implement. Let me give you a quick description of how a jar file is structured. It uses the ZIP format that's been around for about 30 years now. The contents of a ZIP archive are arranged like this:
    - Header information
    - Compressed file #1
    - Compressed file #2
    - ... and so on ...
    - Directory with pointers to offsets of the compressed files.
    So when you want to get a file from the archive, you go to the end and find the directory. You search it for the file you want to get and find its offset, then you go there and decompress the file that's there.
    Now consider trying to update one of these files. The compressed version of the new file may well differ in size from the existing file in the archive. So you'd have to do something intrusive and expensive like moving all the files after it, including the directory, or just adding it after the last compressed file and moving the directory.
    And doing this would mess up things in the case where another thread or process or object (e.g. a URLClassLoader) was already using the archive.
    It just isn't practical. So consider JAR and ZIP archives as read-only. That's what they were designed for in the first place anyway.
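The practical fix for the original question, then, is to treat the copy inside the JAR as read-only defaults and keep the writable scores outside the JAR, e.g. under the user's home directory. A sketch (all names here are my own, not from the thread):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

class ScoreStore {
    // Writable scores live in ~/.tripeaks/scores.txt, outside the JAR.
    static Path scoreFile() throws IOException {
        Path dir = Paths.get(System.getProperty("user.home"), ".tripeaks");
        Files.createDirectories(dir);          // no-op if it already exists
        return dir.resolve("scores.txt");
    }

    static void saveScores(String scores) throws IOException {
        Files.write(scoreFile(), scores.getBytes("UTF-8"));
    }

    static String loadScores() throws IOException {
        Path f = scoreFile();
        if (!Files.exists(f)) return "";       // or fall back to the JAR resource defaults
        return new String(Files.readAllBytes(f), "UTF-8");
    }
}
```

The bundled resource can still be read with getResourceAsStream() on first run to seed the external file; only the external copy is ever written.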
