GZIP performance

Hi,
I'm using GZIP to compress data into a ByteArray, but the process grows to 64 MB and fails with "out of memory", even though I am only compressing 27 KB or 32 KB of data.
Any ideas?

I created a Filter that uses a FilterWrapper wrapping an OutputStream to do the compression. I'm running WebLogic 6.1 SP5 on AIX.
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
GZIPOutputStream zipOut = new GZIPOutputStream(byteStream);
FilterWrapper responseWrapper = new FilterWrapper(res,zipOut);
chain.doFilter(req,responseWrapper);
// Gzip streams must be explicitly closed.
zipOut.flush();
zipOut.close();
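A likely cause of the memory growth is that every response is collected in the ByteArrayOutputStream (and the compressed bytes are never sent to the client), so each request holds its whole payload on the heap. Below is a minimal sketch, not the original code, of an alternative doFilter body. It reuses the FilterWrapper from the snippet above and assumes that class routes the wrapped response's output into the stream it is given; the method name is illustrative.

void doFilterCompressed(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    HttpServletResponse resp = (HttpServletResponse) res;
    // Tell the client the body is gzip-encoded.
    resp.setHeader("Content-Encoding", "gzip");
    // Compress straight onto the response stream: no per-request ByteArrayOutputStream.
    GZIPOutputStream zipOut = new GZIPOutputStream(resp.getOutputStream());
    FilterWrapper responseWrapper = new FilterWrapper(resp, zipOut); // FilterWrapper as in the post above
    chain.doFilter(req, responseWrapper);
    zipOut.finish(); // write the gzip trailer; the container closes the response stream
}

With this shape, memory use depends on the gzip buffer rather than on the size of the page being compressed.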

Similar Messages

  • Gzip is very slow on T5220

I have a Sun server running Oracle 9i.
Server model: T5220, 32 virtual CPUs and 16 GB of memory. I am very disappointed with its I/O speed on internal disk. When I import data into the DB server it takes 16 hours, but it takes only 8 hours on a Sun V480.
I then used the "gzip" command to compress a file and found it is also very slow: it took 1 hour to gzip a 10 GB file. While gzip was running I checked with iostat, and the I/O speed was only 3 MB/s.
What is wrong? How can I improve the I/O speed?

Both commands (the import into the DB server and gzip) are typical single-threaded tasks. CMT-based servers (e.g. the T5220) don't perform very well at such tasks; they were designed to perform very well with many, many tasks or jobs that can be parallelized. Use a different tool in each case:
Import into DB
Try to parallelize it; e.g. since Oracle 10 you can use Data Pump (instead of the old import utility). Data Pump is built for parallel processing.
gzip
Use pbzip2, which is a parallel implementation of bzip2. An example of its use:
http://przemol.blogspot.com/2009/01/parallel-bzip-in-cmt-multicore.html

  • Tuning the performance of ObjectInputStream.readObject()

    I'm using JWorks, which roughly corresponds to jdk1.1.8, on my client (VxWorks) and jdk1.4.2 on the server side (Windows, Tomcat).
    I'm serializing a big vector and compressing it using GZipOutputStream on the server side and doing the reverse on the client side to recreate the objects.
    Server side:
Vector v = new Vector(50000); // filled with 50k different MyObject instances
ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(socket.getOutputStream()));
oos.writeObject(v);
Client side:
ObjectInputStream ois = new ObjectInputStream(new GZIPInputStream(socket.getInputStream()));
Vector v = (Vector) ois.readObject();
ObjectInputStream.readObject() at the client takes a long time (50 seconds) to complete, which is understandable, as the client is a PIII-700MHz and the uncompressed vector is around 10 MB. The problem is that my Java thread runs with real-time priority (the default on VxWorks) and deprives other threads, including non-Java ones, of the CPU for this whole 50-second period. This causes a watchdog thread to reset the system. I guess most of this delay is in GZIPInputStream, as part of the un-gzipping process.
    My question is, is there a way to make ObjectInputStream.readObject() or any of the other connected streams (gzip and socket) yield the CPU to other threads once in a while? Or is there any other way to deal with the situation?
    Is the following a good solution?
class MyGZipInputStream extends GZIPInputStream {
    public int count = 0;

    public MyGZipInputStream(InputStream in) throws IOException {
        super(in);
    }

    public int read(byte[] b, int off, int len) throws IOException {
        if (++count % 10 == 0) // to avoid too many yields
            Thread.yield();
        return super.read(b, off, len);
    }
}
Thanks
--kannan

I'd be inclined to put the yielding input stream as close to the incoming data as possible, thus avoiding any risk that the time taken to read in data and buffer it will cause the watchdog to trip.
I could do that. But as I'm only calling Thread.yield() once every x reads, would it help much?
Also, as I've now found out, Thread.yield() doesn't give other low-priority tasks a chance to run. So I've switched to Thread.sleep(100) now, even though it could mean a performance hit.
Another related question: MyGzipStream.read(byte[], int, int) is called about 3 million times during the readObject() of my 10 MB vector. This means ObjectInputStream is using a very small buffer size. Is there a way to increase it, other than overriding read() to call super.read() with a bigger buffer and then doing a System.arraycopy()?
    Thanks
--kannan
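For the buffer-size question above, one option (a sketch, not something proposed in the thread) is to give GZIPInputStream a larger inflater buffer and interpose a BufferedInputStream, so most of ObjectInputStream's small reads are served from memory instead of reaching the gzip stream one call at a time. The class and method names here are illustrative.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;
import java.util.Vector;
import java.util.zip.GZIPInputStream;

public class CompressedReceiver {
    // Interpose a BufferedInputStream (and a larger inflater buffer) so that
    // ObjectInputStream's many small reads hit an in-memory buffer rather
    // than the gzip stream itself.
    static Vector readVector(Socket socket) throws IOException, ClassNotFoundException {
        GZIPInputStream gzip = new GZIPInputStream(socket.getInputStream(), 64 * 1024);
        BufferedInputStream buffered = new BufferedInputStream(gzip, 64 * 1024);
        ObjectInputStream ois = new ObjectInputStream(buffered);
        return (Vector) ois.readObject();
    }
}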

  • Poor I/O Performance on Solaris - v1.4.1_01

    Does anyone have any comments on the following?
    It's an I/O analysis done to determine which Java
    methods might be used to replace an existing C++
    platform-specific file re-compression subsystem.
    The system has to handle up to 200,000 files per
    day (every day).
    Java IO test results for converting ZERO_ONE compressed
    files to standard compressed files.
    Java 1.4.1, 12-04-2002
    The input dataset contains 623,230,991 bytes in 1391 files.
    The input files are in ZERO_ONE compression format.
    For all tests:
    1) an input data file was opened in buffered mode.
    2) the data was read from the input and expanded
    (byte by byte).
    3) the expanded data was written to a compressing
    output stream as it was created.
    4) repeat 1 thru 3 for each file.
    64K buffers were used for all input and output streams.
    Note: Items marked with "**" hang at random on Solaris
    (2.7 & 2.8) when processing a large number of small files. They always hang on BufferedInputStream.read().
    There may be a deadlock situation with the 'process
    reaper' because we're calling 'exec()' and 'waitFor()'
    so quickly. The elapsed times for those items are
    estimates based on the volume of data processed up to
    the point where the process hung. This 'bug' has been
    reported to Sun.
           -- elapsed time --
NT     Solaris 2.7 Method
n/a    18 min      Current C++ code:
                     fopen(r) -> system(compress)
19 min 19 min **   BufferedInputStream -> exec(compress)
29 min 21 min      1) BufferedInputStream -> file
                   2) exec(compress file)
24 min 42 min **   BufferedInputStream -> exec(gzip)
77 min 136 min     BufferedInputStream -> GZIPOutputStream
77 min --          BufferedInputStream -> ZipOutputStream
    The performance of GZIPOutputStream and ZipOutputStream
    makes them useless for any Production system. The 2x
    performance degradation on Solaris (vs NT) for these two
    streams is surprising. Does this imply that the 'libz'
    on Solaris is flawed? Notice that whenever 'libz' is
    involved in the process stream (exec(gzip),
    GZIPOutputStream, ZipOutputStream) the elapsed time
    climbs dramatically on Solaris.

    Re-submitted Performance Matrix with formatting retained.
    Note: for the "-> system()" and "-> exec()" methods, we write to the
    STDIN of the spawned process.
    -- elapsed time --
    NT     Solaris 2.7 Method
    n/a    18 min      Current Solaris C++ code:
                         fopen(r) -> system("compress -c >aFile")
    19 min 19 min **   BufferedInputStream -> exec("compress -c >aFile")
29 min 21 min      1) BufferedInputStream -> "aFile"
                       2) exec("compress aFile")
    24 min 42 min **   BufferedInputStream -> exec("gzip -c aFile")
    77 min 136 min     BufferedInputStream -> GZIPOutputStream("aFile")
    77 min --          BufferedInputStream -> ZipOutputStream("aFile")
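For reference, the "BufferedInputStream -> GZIPOutputStream" variant measured above probably looked roughly like the sketch below: 64 KB buffers on both sides, copying the already-expanded bytes straight through. The site-specific ZERO_ONE expansion step is omitted and the class and method names are illustrative, not from the original test code.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class Recompress {
    // Read through a 64K-buffered input stream and write through a
    // GZIPOutputStream layered on a 64K-buffered file stream.
    static void gzipFile(String inPath, String outPath) throws IOException {
        BufferedInputStream in = new BufferedInputStream(new FileInputStream(inPath), 64 * 1024);
        GZIPOutputStream out = new GZIPOutputStream(
                new BufferedOutputStream(new FileOutputStream(outPath), 64 * 1024));
        try {
            byte[] buf = new byte[64 * 1024];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            out.finish(); // write the gzip trailer before closing
        } finally {
            out.close();
            in.close();
        }
    }
}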

  • Opera performance degraded after systemd and fixing catalyst

    Two days ago, I ran pacman -Syu. I noticed after rebooting that the catalyst drivers weren't being loaded. Curiously, a previous pacman -R $(pacman -Qtdq) removed linux-headers, despite them being needed by the installed catalyst-utils. I reinstalled linux-headers, added fglrx to /etc/modules-load.d, and rebooted. The messages at boot showed that the drivers built just fine, and when the system started, all seemed well. 3D acceleration is indeed working, my games run just as they did before. Opera, however, does not. When it works, the performance is awful. Css animations play back between one and two frames per second. Also, the application occasionally crashes. When running on the terminal, there is no output at the point of the crash. The only trace that I have been able to find is in /var/log/everything.log, and is as follows:
    Oct 15 22:02:45 localhost kernel: [ 243.616368] opera[16250]: segfault at 28 ip 00007fc94a0764ce sp 00007fffff8a1630 error 4 in fglrx-libGL.so.1.2[fc94a03f000+bf000]
    Is there any way I can get opera back to running as well as it did before? I have tried removing opera, moving the ~/.opera directory, and reinstalling opera, the results were the same (minus my settings being gone). Any advice would be greatly appreciated.
    There hasn't been an opera update in some time. Here's the log of the update that seems to have caused the problem:
    [2012-10-14 17:41] Running 'pacman -Syu'
    [2012-10-14 17:41] synchronizing package lists
    [2012-10-14 17:41] starting full system upgrade
    [2012-10-14 17:43] upgraded libtiff (4.0.2-1 -> 4.0.3-1)
    [2012-10-14 17:43] upgraded openexr (1.7.0-2 -> 1.7.1-1)
    [2012-10-14 17:43] upgraded xorg-server-common (1.12.4-1 -> 1.13.0-2)
    [2012-10-14 17:43] warning: /etc/group installed as /etc/group.pacnew
    [2012-10-14 17:43] warning: /etc/passwd installed as /etc/passwd.pacnew
    [2012-10-14 17:43] warning: /etc/gshadow installed as /etc/gshadow.pacnew
    [2012-10-14 17:43] warning: directory permissions differ on srv/http/
    filesystem: 775 package: 755
    [2012-10-14 17:43] upgraded filesystem (2012.8-1 -> 2012.10-1)
    [2012-10-14 17:43] upgraded dbus-core (1.6.4-1 -> 1.6.8-1)
    [2012-10-14 17:43] upgraded util-linux (2.22-6 -> 2.22-7)
    [2012-10-14 17:43] upgraded systemd (193-1 -> 194-3)
    [2012-10-14 17:43] upgraded mtdev (1.1.2-1 -> 1.1.3-1)
    [2012-10-14 17:43] upgraded xf86-input-evdev (2.7.3-1 -> 2.7.3-2)
    [2012-10-14 17:43] upgraded xorg-server (1.12.4-1 -> 1.13.0-2)
    [2012-10-14 17:43] upgraded lib32-gcc-libs (4.7.1-6 -> 4.7.2-1)
    [2012-10-14 17:43] upgraded gcc-libs-multilib (4.7.1-6 -> 4.7.2-1)
    [2012-10-14 17:43] upgraded catalyst-utils (12.8-1 -> 12.9-0.1)
    [2012-10-14 17:43] installed glu (9.0.0-1)
    [2012-10-14 17:43] upgraded glew (1.8.0-1 -> 1.8.0-2)
    [2012-10-14 17:43] upgraded freeglut (2.8.0-1 -> 2.8.0-2)
    [2012-10-14 17:43] upgraded jasper (1.900.1-7 -> 1.900.1-8)
    [2012-10-14 17:43] upgraded openimageio (1.0.8-1 -> 1.0.9-3)
    [2012-10-14 17:43] upgraded jack (0.121.3-6 -> 0.121.3-7)
    [2012-10-14 17:43] installed opencolorio (1.0.7-1)
    [2012-10-14 17:43] upgraded blender (4:2.64-3 -> 5:2.64a-1)
    [2012-10-14 17:43] upgraded cairo (1.12.2-2 -> 1.12.2-3)
    [2012-10-14 17:43] upgraded xcb-proto (1.7.1-1 -> 1.8-1)
    [2012-10-14 17:43] upgraded libxcb (1.8.1-1 -> 1.9-1)
    [2012-10-14 17:43] upgraded mesa (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded cinelerra-cv (1:2.2-7 -> 1:2.2-9)
    [2012-10-14 17:43] upgraded curl (7.27.0-1 -> 7.28.0-1)
    [2012-10-14 17:43] upgraded dbus (1.6.4-1 -> 1.6.8-1)
    [2012-10-14 17:43] upgraded flashplugin (11.2.202.238-1 -> 11.2.202.243-1)
    [2012-10-14 17:43] upgraded gcc-multilib (4.7.1-6 -> 4.7.2-1)
    [2012-10-14 17:43] upgraded gegl (0.2.0-3 -> 0.2.0-4)
    [2012-10-14 17:43] upgraded git (1.7.12.2-1 -> 1.7.12.3-1)
    [2012-10-14 17:43] upgraded gnutls (3.1.2-1 -> 3.1.3-1)
    [2012-10-14 17:43] upgraded gstreamer0.10-bad (0.10.23-2 -> 0.10.23-3)
    [2012-10-14 17:43] installed opus (1.0.1-2)
    [2012-10-14 17:43] upgraded gstreamer0.10-bad-plugins (0.10.23-2 -> 0.10.23-3)
    [2012-10-14 17:43] upgraded hdparm (9.39-1 -> 9.42-1)
    [2012-10-14 17:43] upgraded libltdl (2.4.2-6 -> 2.4.2-7)
    [2012-10-14 17:43] upgraded imagemagick (6.7.9.8-1 -> 6.7.9.8-2)
    [2012-10-14 17:43] upgraded sysvinit-tools (2.88-8 -> 2.88-9)
    [2012-10-14 17:43] warning: /etc/rc.conf installed as /etc/rc.conf.pacnew
    [2012-10-14 17:43] ----
    [2012-10-14 17:43] > systemd no longer reads MODULES from rc.conf.
    [2012-10-14 17:43] ----
    [2012-10-14 17:43] upgraded initscripts (2012.09.1-1 -> 2012.10.1-1)
    [2012-10-14 17:43] upgraded iputils (20101006-4 -> 20101006-7)
    [2012-10-14 17:43] upgraded khrplatform-devel (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded lib32-catalyst-utils (12.8-2 -> 12.9-0.1)
    [2012-10-14 17:43] upgraded lib32-dbus-core (1.6.4-1 -> 1.6.8-1)
    [2012-10-14 17:43] upgraded lib32-libltdl (2.4.2-6 -> 2.4.2-7)
    [2012-10-14 17:43] upgraded lib32-libtiff (4.0.2-1 -> 4.0.3-1)
    [2012-10-14 17:43] upgraded lib32-libxcb (1.8.1-2 -> 1.9-1)
    [2012-10-14 17:43] installed lib32-libxxf86vm (1.1.2-1)
    [2012-10-14 17:43] upgraded lib32-mesa (8.0.4-4 -> 9.0-1)
    [2012-10-14 17:43] upgraded libbluray (0.2.2-1 -> 0.2.3-1)
    [2012-10-14 17:43] upgraded libdmapsharing (2.9.12-2 -> 2.9.15-1)
    [2012-10-14 17:43] upgraded libglapi (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded libgbm (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded libegl (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded libgles (8.0.4-3 -> 9.0-1)
    [2012-10-14 17:43] upgraded libldap (2.4.32-1 -> 2.4.33-1)
    [2012-10-14 17:43] upgraded libreoffice-en-US (3.6.1-4 -> 3.6.2-2)
    [2012-10-14 17:44] upgraded libreoffice-common (3.6.1-4 -> 3.6.2-2)
    [2012-10-14 17:44] upgraded libreoffice-calc (3.6.1-4 -> 3.6.2-2)
    [2012-10-14 17:44] upgraded libreoffice-impress (3.6.1-4 -> 3.6.2-2)
    [2012-10-14 17:44] upgraded libreoffice-writer (3.6.1-4 -> 3.6.2-2)
    [2012-10-14 17:44] upgraded libshout (1:2.3.0-1 -> 1:2.3.1-1)
    [2012-10-14 17:44] upgraded libtool-multilib (2.4.2-6 -> 2.4.2-7)
    [2012-10-14 17:44] upgraded libusbx (1.0.12-2 -> 1.0.14-1)
    [2012-10-14 17:44] upgraded libva (1.1.0-1 -> 1.1.0-2)
    [2012-10-14 17:44] >>> Updating module dependencies. Please wait ...
    [2012-10-14 17:44] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2012-10-14 17:44] ==> Building image from preset: 'default'
    [2012-10-14 17:44] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
    [2012-10-14 17:44] ==> Starting build: 3.5.6-1-ARCH
    [2012-10-14 17:44] -> Running build hook: [base]
    [2012-10-14 17:44] -> Running build hook: [udev]
    [2012-10-14 17:44] -> Running build hook: [autodetect]
    [2012-10-14 17:44] -> Running build hook: [pata]
    [2012-10-14 17:44] -> Running build hook: [scsi]
    [2012-10-14 17:44] -> Running build hook: [sata]
    [2012-10-14 17:44] -> Running build hook: [filesystems]
    [2012-10-14 17:44] -> Running build hook: [usbinput]
    [2012-10-14 17:44] -> Running build hook: [fsck]
    [2012-10-14 17:44] ==> Generating module dependencies
    [2012-10-14 17:44] ==> Creating gzip initcpio image: /boot/initramfs-linux.img
    [2012-10-14 17:44] ==> Image generation successful
    [2012-10-14 17:44] ==> Building image from preset: 'fallback'
    [2012-10-14 17:44] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
    [2012-10-14 17:44] ==> Starting build: 3.5.6-1-ARCH
    [2012-10-14 17:44] -> Running build hook: [base]
    [2012-10-14 17:44] -> Running build hook: [udev]
    [2012-10-14 17:44] -> Running build hook: [pata]
    [2012-10-14 17:44] -> Running build hook: [scsi]
    [2012-10-14 17:44] -> Running build hook: [sata]
    [2012-10-14 17:44] -> Running build hook: [filesystems]
    [2012-10-14 17:44] -> Running build hook: [usbinput]
    [2012-10-14 17:44] -> Running build hook: [fsck]
    [2012-10-14 17:44] ==> Generating module dependencies
    [2012-10-14 17:44] ==> Creating gzip initcpio image: /boot/initramfs-linux-fallback.img
    [2012-10-14 17:44] ==> Image generation successful
    [2012-10-14 17:44] upgraded linux (3.5.4-1 -> 3.5.6-1)
    [2012-10-14 17:44] upgraded linux-api-headers (3.5.1-1 -> 3.5.5-1)
    [2012-10-14 17:44] upgraded lirc-utils (1:0.9.0-28 -> 1:0.9.0-30)
    [2012-10-14 17:44] upgraded net-snmp (5.7.1-3 -> 5.7.1-4)
    [2012-10-14 17:44] upgraded nodejs (0.8.11-1 -> 0.8.12-1)
    [2012-10-14 17:44] upgraded xine-lib (1.2.2-1 -> 1.2.2-2)
    [2012-10-14 17:44] upgraded opencv (2.4.2-2 -> 2.4.2-4)
    [2012-10-14 17:44] upgraded rsync (3.0.9-4 -> 3.0.9-5)
    [2012-10-14 17:44] upgraded run-parts (4.3.2-1 -> 4.3.4-1)
    [2012-10-14 17:44] upgraded smpeg (0.4.4-6 -> 0.4.4-7)
    [2012-10-14 17:44] upgraded sqlite (3.7.14-1 -> 3.7.14.1-1)
    [2012-10-14 17:44] upgraded sysvinit (2.88-8 -> 2.88-9)
    [2012-10-14 17:44] In order to use the new version, reload all virtualbox modules manually.
    [2012-10-14 17:44] upgraded virtualbox-host-modules (4.2.0-2 -> 4.2.0-5)
    [2012-10-14 17:44] installed lib32-glu (9.0.0-1)
    [2012-10-14 17:44] upgraded wine (1.5.14-1 -> 1.5.15-1)
    [2012-10-14 17:44] upgraded xbmc (11.0-6 -> 11.0-8)
    [2012-10-14 17:44] upgraded xf86-input-wacom (0.17.0-1 -> 0.17.0-2)
    [2012-10-14 17:44] upgraded xorg-server-xephyr (1.12.4-1 -> 1.13.0-2)
    [2012-10-14 17:44] upgraded xscreensaver (5.19-1 -> 5.19-2)
    [2012-10-14 17:44] upgraded xterm (282-1 -> 283-1)
    Finally, here is a list of the packages that were removed in the pacman invocation that removed linux-headers.
    [2012-10-07 11:28] Running 'pacman -R torus-trooper'
    [2012-10-07 11:28] removed torus-trooper (0.22-4)
    [2012-10-07 11:28] Running 'pacman -R freealut lib32-alsa-lib lib32-curl lib32-libglapi lib32-libidn lib32-libxxf86vm lib32-nvidia-cg-toolkit lib32-sdl libbulletml linux-headers mono xclip'
    [2012-10-07 11:28] removed xclip (0.12-3)
    [2012-10-07 11:28] removed mono (2.10.8-1)
    [2012-10-07 11:28] removed linux-headers (3.5.4-1)
    [2012-10-07 11:28] removed libbulletml (0.0.6-4)
    [2012-10-07 11:28] removed lib32-sdl (1.2.15-3)
    [2012-10-07 11:28] removed lib32-nvidia-cg-toolkit (3.1-2)
    [2012-10-07 11:28] removed lib32-libxxf86vm (1.1.2-1)
    [2012-10-07 11:28] removed lib32-libidn (1.25-1)
    [2012-10-07 11:28] removed lib32-libglapi (8.0.4-4)
    [2012-10-07 11:28] removed lib32-curl (7.27.0-1)
    [2012-10-07 11:28] removed lib32-alsa-lib (1.0.26-1)
    [2012-10-07 11:28] removed freealut (1.1.0-4)
    [2012-10-07 11:28] Running 'pacman -R lib32-libssh2 libgdiplus'
    [2012-10-07 11:28] removed libgdiplus (2.10-2)
    [2012-10-07 11:28] removed lib32-libssh2 (1.4.2-1)
    The system in question is a Dell XPS, with an i7 processor and an ATI Radeon HD5700 series graphics card. I'm using slim and awesome wm, and the arch install is now seven months old.
    I know that we're supposed to post entire logs, but I'm not sure which logs are actually relevant, and my logs are quite large. I can definitely provide more information, if it's helpful. I've done some looking around on the wiki, and on google in general. I haven't found anything useful, but it could always be that I'm just phrasing my searches poorly.
    Thanks!

I reinstalled the whole system. As I said, I couldn't even get to a tty. I know I could have recovered it using the installation disk, but I decided to reinstall.

  • Numbers Import and Load Performance Problems

    Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
    _Results using Numbers v1.0_
    Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
    _Results using Numbers v1.0.1_
    Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
    _Comparison to Excel_
    Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
    Summary
Numbers v1.0 and v1.0.1 exhibit severe performance problems when loading their own files and when importing Excel v.X files.

    Hello
It seems that you missed a detail.
When a Numbers document is 1.9 MB on disk, it may be a 7 or 8 MB file to load.
A Numbers document is not a file but a package, which is a disguised folder.
The document itself is described in an extremely verbose XML file stored in a gzip archive.
Opening such a document starts with an unpack sequence, which is fast (except perhaps when free space on the volume is short).
The unpacked file may easily be 10 times larger than the packed one.
Just an example: the xml.gz file containing the report of my bank operations for 2007 is 300 KB, but the expanded file, the one which Numbers must read, is 4 MB, yes, 13.3 times the original.
And loading it is not sufficient: this huge file must be "interpreted" to build the display.
As it is very long, Apple treats it as the TRUE description of the document, and so, each time it must display something, it works like the interpreters that old users like me knew from the BASIC available on Apple // machines.
Adding a supplementary stage would have added time to the opening sequence but would have sped up the use of the document.
Of course, it would also have added a supplementary stage during the save process.
I hope that they will adopt this scheme, but of course I don't know if they will.
Of course, the problem is much the same when we import a document from Excel or from AppleWorks.
The app reads the original, which is stored in a compact form, then deciphers it to create the XML code. Optimisation would perhaps reduce these tasks a bit, but it will remain time consuming.
Yvan KOENIG (from FRANCE, Sunday 27 January 2008 16:46:12)
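To see the expansion described above in concrete terms, here is a small sketch (the path is hypothetical and assumes the package keeps its document XML in a gzip-compressed file, as described in the reply) that measures how much the gzipped XML grows when unpacked:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class ExpandNumbersXml {
    public static void main(String[] args) throws IOException {
        // Hypothetical path into the .numbers package (a folder on disk).
        File packed = new File("MyDocument.numbers/index.xml.gz");
        long unpackedBytes = 0;
        GZIPInputStream in = new GZIPInputStream(new FileInputStream(packed));
        byte[] buf = new byte[64 * 1024];
        for (int n; (n = in.read(buf)) != -1; ) {
            unpackedBytes += n;   // count the expanded XML without keeping it in memory
        }
        in.close();
        System.out.println(packed.length() + " bytes packed -> " + unpackedBytes + " bytes to parse");
    }
}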

• Using gzip with ALUI 6.1

    I have a vanilla portal that I would like to perform some benchmarking on and I have been unable to find any documentation that describes how to enable gzip to compress components in the portal.
    My environment is ALUI 6.1 MP1, WebLogic Server 9.2, and Oracle 10g database.
    Can anyone point me in the right direction?
    Thank you,

I think you want to look into mod_deflate for Apache.
Basically, the compression is done by the app server, not WCI.

  • Safari vs in-app-browser performance

I have heard that in-app browsing is much slower than using Safari on an iOS device. Are there any significant tests that can prove this?
In cases where a webpage uses gzip to compress web content, are there any known performance issues for in-app browsing?

    It happens but to a much lesser - almost unnoticeable - extent.

  • Gzip and compress slow on T6320

So far, testing has shown that gzip and compress run 2-3 times slower on the new blades than on the older V440s.
mkfile, tar, unzip, rm -rf, and copy run at about the same speed.
I have eliminated LDOMs, RAM, swap, and local hard drives as issues (by putting swap and the target directory on the SAN, and running in single-user mode).
    Dusting off my performance tuning hat - been a long time - any hints as to where to look would be greatly appreciated.

Further research found that the problem is with the Niagara chip: for single-threaded processes it is "less efficient" than the older SPARC chips (it blows). For most applications that use a large number of small threads these chips rock (pun intended), BUT for large single-thread jobs (like gzip or compress) the processor is just not as fast. Sun recommends checking Oracle jobs for large sequential queries and breaking these into many smaller queries.
    Experience so far has been that 4 threads on the T6320 can match 2 CPUs on a F-280 for our Oracle instances, the only problem has been with the backups, where a large number of files are gzip'd. With 64 threads available we will increase the number of threads during the backup window - or - be more patient.
The gzip version was not considered, since the performance issue arises with compress as well.

  • Adobe Air needs HTTP gzip compression

Hello
We are developing an Adobe AIR application. We use SOAP for service calls and we depend entirely upon gzip HTTP compression to make the network performance even vaguely acceptable. SOAP is such a fat format that it really needs gzip compression to get the responses down to a reasonable size to pass over the Internet.
Adobe AIR does not currently support HTTP gzip compression, and I would like to request that this feature be added ASAP. We can't release our application until it can get reasonable network performance through HTTP gzip compression.
Thanks
Andrew

Hi blahxxxx,
Sorry for the slow reply -- I wanted to take some time to try this out rather than give an incomplete response.
It's not built into AIR, but if you're using Flex/ActionScript for your application you can use a gzip library to decompress a gzipped SOAP response (or any other gzipped response from a server -- it doesn't have to be SOAP). Danny Patterson gives an example of how to do that here:
http://blog.dannypatterson.com/?p=133
I've been prototyping a way to make a subclass of the Flex WebService class that has this built in, so if I can get that working it would be as easy as using the Flex WebService component.
I did some tests of this technique, just to see for myself if the bandwidth savings are worth the additional processing overhead of decompressing the gzip data. (The good news is that the decompression part is built into AIR -- just not the specific gzip format -- so the most processor-intensive part of the gzip decompression happens in native code.)
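For readers coming from the Java side, the same trick works there too: the gzip format is just a small header and trailer around raw DEFLATE data, so the native inflater can do the heavy lifting once the framing is stripped. The sketch below is an illustration of that idea, not AIR/ActionScript code, and it assumes a minimal 10-byte header with no optional fields; a real implementation must honor the header flags.

import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

public class GzipBody {
    // Skip the 10-byte gzip header, inflate the raw DEFLATE body,
    // and ignore the 8-byte CRC32/ISIZE trailer.
    static byte[] gunzip(byte[] gz) throws DataFormatException {
        if (gz.length < 18 || gz[0] != (byte) 0x1f || gz[1] != (byte) 0x8b) {
            throw new IllegalArgumentException("not a gzip stream");
        }
        Inflater inflater = new Inflater(true); // "true" = raw deflate, no zlib wrapper
        inflater.setInput(gz, 10, gz.length - 18);
        ByteArrayOutputStream out = new ByteArrayOutputStream(gz.length * 4);
        byte[] buf = new byte[8192];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0 && inflater.needsInput()) break; // truncated input
            out.write(buf, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }
}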
Here is what I found:
I tested this using the http://validator.nu/ HTML validator web service to validate the HTML source of http://www.google.com/. This isn't a SOAP web service, but it does deliver an XML response that's fairly large, so it's similar to SOAP.
The size of the payload (the actual HTTP response body) was 5321 bytes compressed, 45487 bytes uncompressed. I ran ten trials of each variant. All of this was done in my home, where I have a max 6 Mbit DSL connection. In the uncompressed case I measured the time starting immediately after sending the HTTP request and ending as soon as the response was received. In the compressed case I started the timer immediately after sending the HTTP request and ended it after receiving the response, decompressing it and assigning the decompressed content to a ByteArray (so the compressed-case times include decompression, not just download). The average times for ten trials were:
Uncompressed (text) response: 1878.6 ms
Compressed (gzip) response: 983.1 ms
Obviously these will vary a lot depending on the payload size, the structure of the compressed data, the speed of the network, the speed of the computer, etc. But in this particular case there's obviously a benefit to using gzipped data.
I'll probably write up the test I ran, including the source, and post it on my blog. I'll post another reply here once I've done that.

• HELP! What's wrong with my GZIP program?

    Here's part of the source code:
static void Compress(String strfilename) {
     int iTemp;
     try {
          BufferedReader brIn = new BufferedReader(new FileReader(strfilename));
          BufferedOutputStream bosOut = new BufferedOutputStream(new GZIPOutputStream(new FileOutputStream(strfilename + ".gz")));
          while (true) {
               iTemp = brIn.read();
               if (iTemp == -1) break;
               bosOut.write(iTemp);
          }
          brIn.close();
          bosOut.close();
     } catch (Exception e) {
          System.out.println("Error: " + e);
     }
}
If I use it to compress a small file (20 KB), the compression is successful, and I can also open the result with WinRAR; but once I decompress this GZIP file, all the English words are correct, while other Unicode text such as Chinese becomes unrecognizable.
If I use it to compress a "large" file (3 MB) such as abc.chm, the size of the compressed file is about 2 MB (incredible! WinRAR can only decrease the size of this file by 30 KB); but the size of the decompressed file is still about 2 MB, and Windows cannot read it (invalid page error).
I wonder what's wrong with my program? Even if I use "javac -encoding gb2312 ..." to compile it, the result is the same.
What should I do??

    Hi, jchc,
    Let me help you a little bit, here. hwalkup is correct that for data compression you should use input and output stream instead of readers and writers. I don't think that mixing a reader with an output stream would work anyway as streams read and write bytes (8-bit data), while readers and writers read and write characters (16-bit data). hwalkup is also correct to suggest that you should use a buffer (a byte[]) to improve the performance of your streaming processes.
    Here is code for methods to compress and decompress one file into another. Note: the decompress() method will throw an IOException if the file to decompress (inFile) is not actually compressed in the first place.
    public static void compress(File inFile, File outFile)
            throws FileNotFoundException, IOException {
        InputStream in = null;
        OutputStream out = null;
        try {
            in = new BufferedInputStream(new FileInputStream(inFile));
            GZIPOutputStream gzipOut = new GZIPOutputStream(
                                       new FileOutputStream(outFile));
            out = new BufferedOutputStream(gzipOut);
            byte[] buffer = new byte[512];
            for (int i = 0; (i = in.read(buffer)) > -1; ) {
                out.write(buffer, 0, i);
            }
            out.flush();
            gzipOut.finish(); // absolutely necessary
        } finally {
            // always close your output streams before your input streams
            if (out != null) {
                try {
                    out.close();
                } catch (IOException e) {
                }
            }
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                }
            }
        }
    }

    public static void decompress(File inFile, File outFile)
            throws FileNotFoundException, IOException {
        InputStream in = null;
        OutputStream out = null;
        try {
            GZIPInputStream gzipIn = new GZIPInputStream(
                                     new FileInputStream(inFile));
            in = new BufferedInputStream(gzipIn);
            out = new BufferedOutputStream(new FileOutputStream(outFile));
            byte[] buffer = new byte[512];
            for (int i = 0; (i = in.read(buffer)) > -1; ) {
                out.write(buffer, 0, i);
            }
            out.flush();
        } finally {
            // always close your output streams before your input streams
            if (out != null) {
                try {
                    out.close();
                } catch (IOException e) {
                }
            }
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                }
            }
        }
    }
Shaun

  • Help to improve expdp performance and compression

    Hi,
We have an Oracle Standard Edition database, which does not support the parallel and compression features of expdp.
The expdp dump file is around 90 GB and takes 45 minutes to create on the production server. The script then compresses the file with the gzip utility, and compression takes 80 minutes.
Copying the compressed file from production to the staging server takes another 47 minutes.
We have automated the process, but it takes a long time for expdp + compression + copy (around 3 hours). On the staging server it then takes more than 4 hours to create the staging DB.
Is there any way I can improve the performance of these 3 operations?
Can I compress while the file is being exported? I tried using Unix pipes, but that doesn't work with expdp.
We don't want to use a network link.
Does expdp write the file sequentially? If so, can I start gzipping in parallel as the files are exported?
I also tried compressing with the gzip -1 option, but it increased the file size by 30% and eventually increased the copy time to the staging server.
    Please help
    Thanks,
    Bharani J
    Edited by: 973089 on Nov 27, 2012 9:40 AM
    Edited by: 973089 on Nov 27, 2012 9:41 AM

Hi,
Why 'do not support parallel'?
I understand you don't want to use a database link; I had this problem here (I used expdp).
This is what I did:
A script that does:
a full logical backup using expdp,
a bzip2 to compress,
and a transfer to the destination machine.
It would have been far easier if I could have used the database link, but I couldn't.
However, I did use the parallel option in the expdp command.
Hope you find a good solution.

  • GZip Compression on JSPs?

    Hi There,
    I was wondering if there is a nice simple way to apply GZip compression to JSPs. I use it at the moment when writing out of a servlet to a browser by doing:
    OutputStream out1 = response.getOutputStream();
    out = new PrintWriter(new GZIPOutputStream(out1), false);
    response.setHeader("Content-Encoding", "gzip");
    Then writing to the client by
out.println("<html> ... etc etc");
    [Note: idea "borrowed" from Marty Hall]
    It works a treat, much better performance (esp for dial-up clients).
    Can it be done with JSP?
    Cheers
    matt

[1/10/03 19:26:42:395 COT] 29d6a00e WebGroup E SRVE0026E: [Error de servlet]-[JSP 1.2 Processor]: java.lang.IllegalStateException: Ya se ha obtenido OutputStream ("OutputStream has already been obtained")
         at com.ibm.ws.webcontainer.srt.SRTServletResponse.getWriter(SRTServletResponse.java:450)
         at com.ibm.ws.webcontainer.servlet.HttpServletResponseProxy.getWriter(HttpServletResponseProxy.java:123)
         at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:202)
         at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:193)
         at org.apache.jasper.runtime.JspWriterImpl.flush(JspWriterImpl.java:246)
         at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:197)
         at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:255)
         at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:249)
         at org.apache.jsp._SalidaJSPGZIP2._jspService(_SalidaJSPGZIP2.java(Compiled Code))
         at com.ibm.ws.webcontainer.jsp.runtime.HttpJspBase.service(HttpJspBase.java:89)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.ibm.ws.webcontainer.jsp.servlet.JspServlet$JspServletWrapper.service(JspServlet.java:344)
         at com.ibm.ws.webcontainer.jsp.servlet.JspServlet.serviceJspFile(JspServlet.java:598)
         at com.ibm.ws.webcontainer.jsp.servlet.JspServlet.service(JspServlet.java:696)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.ibm.ws.webcontainer.servlet.StrictServletInstance.doService(StrictServletInstance.java:110)
         at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet._service(StrictLifecycleServlet.java:174)
         at com.ibm.ws.webcontainer.servlet.IdleServletState.service(StrictLifecycleServlet.java:313)
         at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet.service(StrictLifecycleServlet.java:116)
         at com.ibm.ws.webcontainer.servlet.ServletInstance.service(ServletInstance.java:258)
         at com.ibm.ws.webcontainer.servlet.ValidServletReferenceState.dispatch(ValidServletReferenceState.java:42)
         at com.ibm.ws.webcontainer.servlet.ServletInstanceReference.dispatch(ServletInstanceReference.java:40)
         at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.handleWebAppDispatch(WebAppRequestDispatcher.java:872)
         at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.dispatch(WebAppRequestDispatcher.java:491)
         at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:173)
         at com.ibm.ws.webcontainer.srt.WebAppInvoker.doForward(WebAppInvoker.java:79)
         at com.ibm.ws.webcontainer.srt.WebAppInvoker.handleInvocationHook(WebAppInvoker.java:199)
         at com.ibm.ws.webcontainer.cache.invocation.CachedInvocation.handleInvocation(CachedInvocation.java:71)
         at com.ibm.ws.webcontainer.srp.ServletRequestProcessor.dispatchByURI(ServletRequestProcessor.java:182)
         at com.ibm.ws.webcontainer.oselistener.OSEListenerDispatcher.service(OSEListener.java:331)
         at com.ibm.ws.webcontainer.http.HttpConnection.handleRequest(HttpConnection.java:56)
         at com.ibm.ws.http.HttpConnection.readAndHandleRequest(HttpConnection.java:432)
         at com.ibm.ws.http.HttpConnection.run(HttpConnection.java:343)
         at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:592)
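The IllegalStateException above is the usual symptom of the JSP engine calling getWriter() after the page has already obtained the raw OutputStream. The common workaround is to do the compression in a servlet filter and hand the JSP a response wrapper whose writer feeds the gzip stream, so the two calls never collide on the real response. Below is a minimal sketch against the Servlet 2.3-era API; the class name and structure are illustrative, not from this thread.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.util.zip.GZIPOutputStream;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class JspGzipFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse resp = (HttpServletResponse) res;
        resp.setHeader("Content-Encoding", "gzip");
        GZIPOutputStream gzip = new GZIPOutputStream(resp.getOutputStream());
        final PrintWriter writer =
                new PrintWriter(new OutputStreamWriter(gzip, resp.getCharacterEncoding()), false);

        // The JSP only ever sees this wrapper, whose writer feeds the gzip stream,
        // so its getWriter() call never touches the raw response.
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(resp) {
            public PrintWriter getWriter() {
                return writer;
            }
        };

        chain.doFilter(req, wrapper);
        writer.flush();
        gzip.finish(); // emit the gzip trailer without closing the underlying stream
    }
}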

  • GZIP using nio ByteBuffer

In Java 1.4, the CharsetEncoder/CharsetDecoder classes were added to encode/decode Unicode to/from ByteBuffers to fully utilize the performance advantages of the nio package. Why didn't they do the same for GZIP compression/decompression? Does anyone know of a way to read/write a GZIP file using nio and compress/decompress to/from ByteBuffers instead of using GZIPInputStream and GZIPOutputStream?

llschumacher wrote:
That will work, but it is not what I had in mind. I wish to offload compression/decompression to another thread. Basically what I am doing is reading byte buffers with one thread, queueing them to a second thread to decompress, and then queueing the decompressed buffer to a third thread for Unicode decoding. I also want to use 3 threads in reverse order to perform writes. I can do this with Unicode encode/decode using the CharsetEncoder/CharsetDecoder classes because they deal directly with ByteBuffers. Your solution would require me to use the same thread for I/O and compress/decompress.
Here you go...
1) create a thread pool executor.
2) inherit from and extend Callables to read or write nio channel objects.
3) use Inflater/Deflater on byte[] for your needs.
4) works for JCE code as well.
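A sketch of step 3 in Java (class and method names are illustrative): hand the buffer's bytes to a Deflater inside a Callable so the compression runs on a pool thread, leaving the I/O thread free. Note that Deflater emits zlib-framed data rather than a .gz file; a gzip header and CRC trailer would have to be added for real gzip output.

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.zip.Deflater;

public class BufferCompressor {
    private final ExecutorService pool = Executors.newFixedThreadPool(3);

    // Compress the remaining bytes of a ByteBuffer on a worker thread.
    public Future<ByteBuffer> compressAsync(final ByteBuffer src) {
        return pool.submit(new Callable<ByteBuffer>() {
            public ByteBuffer call() {
                byte[] input = new byte[src.remaining()];
                src.get(input);                      // copy the buffer's payload
                Deflater deflater = new Deflater();
                deflater.setInput(input);
                deflater.finish();
                ByteArrayOutputStream bos = new ByteArrayOutputStream(input.length);
                byte[] scratch = new byte[8192];
                while (!deflater.finished()) {
                    int n = deflater.deflate(scratch);
                    bos.write(scratch, 0, n);
                }
                deflater.end();
                return ByteBuffer.wrap(bos.toByteArray());
            }
        });
    }
}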

  • GZIP Filter works sometimes, doesn't other times...

    I downloaded and installed the GZIP Filter from the Bea website. It works great most of the time. But in doing some performance testing, I found that the output for some pages is compressed some of the time but not others. I also found some pages were never compressed during the testing. Does anyone know what would cause this?
    Thanks.
    Greg

