GeoRaster performance: Compressed vs Uncompressed

I tried reading compressed and uncompressed GeoRasters. The difference in performance confused me. I expected better performance from the compressed raster, because Oracle needs to read several times less data from the hard drive (1:5 in my case). However, reading uncompressed data is approximately twice as fast. I understand that Oracle needs to use more CPU to uncompress the data, but I thought the time saved reading the data would outweigh the time spent uncompressing the raster.
Did anybody compare the performance?
Thanks,
Dmitry.

Dmitry,
You can try it for yourself. QGIS is free, open-source software.
QGIS uses GDAL to access raster and vector data, and there is a plugin called "Oracle Spatial GeoRaster", or just oracle-raster, to deal with GeoRaster. To access geometries you don't need to activate the plugin; just select Oracle as your database "type" in the Add Vector Layer dialog box.
Displaying GeoRaster works pretty fast, as long as you have created pyramids. Yes, there is a little delay when the GeoRaster is compressed, but that is because GDAL requests the data uncompressed and QGIS has no clue about it.
Wouldn't it be nice to have a viewer that used the JPEG as it is?
Regards,
Ivan
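For reference, pyramids are generated inside the database rather than by QGIS. A minimal PL/SQL sketch along those lines (the table name, column and georid below are hypothetical placeholders, and 'resampling=NN' is just one of the documented resampling options):

DECLARE
  gr SDO_GEORASTER;
BEGIN
  -- lock the row, build the reduced-resolution pyramid levels, write the object back
  SELECT georaster INTO gr
    FROM city_images
   WHERE georid = 1 FOR UPDATE;

  SDO_GEOR.generatePyramid(gr, 'resampling=NN');

  UPDATE city_images SET georaster = gr WHERE georid = 1;
  COMMIT;
END;
/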

Similar Messages

  • Compress and uncompress data size

    Hi,
    I have checked total used size from dba_segments, but I need to check compress and uncompress data size separately.
    DB is 10.2.0.4.

    Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
    You haven't posted ANYTHING that suggests that ANY of your data might be compressed. In 10g, compression will only be performed on NEW data, and ONLY when that data is inserted using BULK INSERTS. See this white paper:
    http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
    However, data, which is modified without using bulk insertion or bulk loading techniques will not be compressed
    1. Who compressed the data?
    2. How was it compressed?
    3. Have you actually performed any BULK INSERTS of data?
    SB already gave you the answer - if data is currently 'uncompressed' it will NOT have a 'compressed size'. And if data is currently 'compressed' it will NOT have an 'uncompressed size'.
    Now our management wants to know what the compressed data size is and what the uncompressed data size is?
    1. Did that 'management' approve the use of compression?
    2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression and any expected DML performance changes that compression might cause.
    The time for testing the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.
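    As a hedged illustration of the bulk-load point (table and column names here are hypothetical), only the direct-path insert below produces compressed blocks on a 10g table created with the COMPRESS attribute:
    -- empty table with basic compression enabled
    CREATE TABLE sales_hist COMPRESS AS
      SELECT * FROM sales WHERE 1 = 0;
    -- direct-path (bulk) insert: new blocks are written compressed
    INSERT /*+ APPEND */ INTO sales_hist
      SELECT * FROM sales WHERE sale_date < DATE '2010-01-01';
    COMMIT;
    -- conventional insert: rows are stored uncompressed even though
    -- the table has the COMPRESS attribute
    INSERT INTO sales_hist
      SELECT * FROM sales WHERE sale_date >= DATE '2010-01-01';
    COMMIT;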

  • Performing Compression

    Hello Gurus,
    What is the easiest way to perform compression?
    THNX

    Hi Baris,
    Compressing an InfoCube saves space on disk, but the request IDs are removed during the compression process. So before you compress, it is very important to make sure that the data in the cube is correct, because once you compress the data you can no longer delete incorrect requests.
    Steps to Perform compression on cube:
    Check the content of your uncompressed fact table:
         1. Switch to the Contents tab page.
         2. Choose Display to display the fact table.
         3. In the Data Browser, choose the Correct button.
    After determining that the data is correct, compress the cube as follows:
        1. Select the required cube and choose the Manage button from the context menu.
        2. On the Collapse tab strip, in the Request ID field, enter the request ID of your most recent request.
        3. Choose the Release button and then the Selection button.
        4. In the start time window, choose Immediately and then save the job. The compression process now begins.
        5. Go to the Requests tab page; in the Compression Status column, a green flag indicates that compression was successful.
    Choose Refresh if you are unable to see the green flag immediately.
    Regards,
    Rajkandula.

  • Initramfs - compressed vs. uncompressed

    I just recently came to think about this. It's common practice to compress the initramfs, but an uncompressed initramfs is also an option.
    But what about the pros and cons? How time-consuming is the decompression actually, and what does the extra size of an uncompressed image mean?
    Personally I've got an SSD, so that helps with the extra size, and as for the decompression, I am using a rather limited Atom CPU, so in theory it seems to work out for me at least.
    So, what do you think - theoretically, what is the best way to go? In reality, of course, the difference is negligible, maybe not even measurable, but that's not really the point.
    I'll look forward to your answers.
    Best regards.

    blackout23 wrote:
    zacariaz wrote:
    blackout23 wrote:
    I have done 5 runs of uncompressed and gzip initramfs.
    CPU is Core i7 2600K 4,5 Ghz
    archbox% sudo systemd-analyze
    Startup finished in 1328ms (kernel) + 311ms (userspace) = 1640ms
    archbox% sudo systemd-analyze
    Startup finished in 1338ms (kernel) + 277ms (userspace) = 1616ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 274ms (userspace) = 1580ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 304ms (userspace) = 1610ms
    archbox% sudo systemd-analyze
    Startup finished in 1302ms (kernel) + 287ms (userspace) = 1590ms
    Gzip:
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 347ms (userspace) = 1723ms
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 331ms (userspace) = 1706ms
    archbox% sudo systemd-analyze
    Startup finished in 1368ms (kernel) + 351ms (userspace) = 1720ms
    archbox% sudo systemd-analyze
    Startup finished in 1385ms (kernel) + 340ms (userspace) = 1725ms
    archbox% sudo systemd-analyze
    Startup finished in 1402ms (kernel) + 351ms (userspace) = 1753ms
    Even on a faster CPU you can measure a difference between compressed and uncompressed.
    If you add the timestamp hook to your HOOKS array in /etc/mkinitcpio.conf and rebuild your initramfs, systemd-analyze will also be able to show you how much time was spent in the initramfs alone. Without it, the kernel figure is basically both combined.
    I dare say the difference is smaller than it would be on a pitiful Atom CPU, but it's still interesting that the difference is there.
    I am, however, done timing the initramfs. I know now which is best for me.
    One thing that catches my eye is the userspace time. The kernel time is not too different between our two setups, but the userspace time apparently is. Is that simply because the CPU plays a greater role, or do you use some fancy optimization technique I haven't heard of?
    Try systemd-analyze blame and see what takes the most time. If it's NetworkManager try setting ipv6 to "Ignore" in your settings and assign a static ip.
    Other than that I don't run any extra services and mount only my SSD no external drives.
    Good point, I always forget about ipv6. Isn't there a kernel param to disable it, btw?
    Edit:
    WICD is responsible for 922 ms.
    Last edited by zacariaz (2012-09-03 20:07:04)

  • Compressed vs Uncompressed RAW : longer to read / decrypt ?

    I'm trying to manage as efficiently as possible the files on my CF cards (Nikon D3 / D3X) and on my Macs.
    But I'm mostly chasing speed, being on very tight deadlines sometimes.
    I'm shooting only RAW, and chose some time ago to shoot "Compressed / Lossless RAW". It saves space on the CF cards, but does it take longer to read from Aperture? Does Ap3 need to decompress the files, which would take more time / consume more energy? Or is it "lighter" in terms of calculation to simply read an uncompressed RAW?
    I haven't taken the time to compare yet; does someone out there know anything about that?
    Thanks !
    Bernard-Pierre

    Sorry, I forget the specifics, but I did find uncompressed was preferable (Nikon D2x). I did find that the key was fast CF cards and fast connectivity for uploading to a folder on the MacBook Pro's SSD. Decreased CF costs IMO have made using a compression step to save card capacity irrelevant.
    I have a 2011 17" MBP so I use the EC/34 slot, which is fast and convenient. EC/34 rocks for input when paired with fast UDMA camera cards. My recent coarse tests of the 2011 17" MBP's EC/34 slot using a $40 SanDisk Extreme Pro ExpressCard Adapter for CF cards from Amazon:
        -SanDisk Extreme III CF Card, SanDisk EC/34 Adapter = ~10 MB/sec
        -SanDisk Extreme IV CF Card, SanDisk EC/34 Adapter = ~37 MB/sec
        -SanDisk Extreme IV CF Card, SanDisk EC/34 Adapter = ~36 MB/sec
        -SanDisk Extreme Pro CF Card (UDMA6), SanDisk EC/34 Adapter = ~80 MB/sec
    I prefer SanDisk Extreme cards because, unlike most other cards, they are rated for the temperature extremes that I sometimes shoot in.
    With fast CF cards, upload speeds via the EC/34 slot are sweet; fast enough to literally change workflows. For comparison, a Sony USB card reader's fastest upload was ~12 MB/sec. My emphasis has been on uploading images to date, but cheap prices of slower CF cards are making me look at using the EC/34 slot for backup in the field as well.
    EC/34 SD adapters are also available, and presumably Thunderbolt adapters allowing the same fast I/O will become available for all 2011 Macs, but it has not happened yet.
    Unlike the D2x the newer D3 may actually perform better using UDMA6 cards.
    HTH
    -Allen

  • Sorting compressed from uncompressed PSD/PSB files in Bridge/smart collections?

    Is there any info (metadata or something) one could use to sort uncompressed from compressed PSD/PSB files in Bridge?
    Something that could be used in filters for smart collections would be nice.
    Ty in advance.

    Thanks for the quick reply, CP
    FYI, the file is a photographic panorama based on several .CR2 photos taken in July 2012 and processed in CS5's Camera Raw/Photoshop. Beyond the metadata, there are no fonts or text in the file.
    1.  I changed Maximize ... File Compatibility from "Never" to "Always" and the flattened file saved and loaded OK, though I can't figure out why, given Adobe's description of this option: "If you work with PSD and PSB files in older versions of Photoshop or applications that don't support layers, you can add a flattened version of the image to the saved file...". The file saved in this manner is ~8% larger and is not my preference as a routine, since I have TBs of photo files. By the way, out of thousands of .psd/.psb files, the one being discussed is the first to give me the error.
    2.  I've tried saving/reloading the file on another computer with Win XP SP3/CS5... again with Maximize Compatibility OFF. Same problem. Turn ON Maximize Compatibility and it works. As with the Win 8 machine, Bridge could neither create a thumbnail nor read metadata.
    3.  I've purged the Bridge cache, made sure there were no old PS page files, reset PS preferences, and disabled my 2 plugins, with no change in the problem. Windows and Adobe programs are all updated. Saving to a different HDD has no effect.
    The problem occurs on 2 computers (Win XP/Win 8) and two versions of Photoshop (CS5/CS6), and "Maximize" On/Off toggles the problem. My guess is that it is a bug in the PSD processing routines.
    If I can't find a fix, I will keep Maximize ON and deal with the larger files that result... the cost of insurance, I suppose.
    Thanks again

  • Unable to load Compressed Tiffs (uncompressed works fine)

    Receiving a data cartridge error when trying to load compressed TIFFs. Uncompressed TIFFs and PDFs work fine. Any suggestions? Thanks.
    The DOC.IMPORT(CTX) method on ORDSYS.ORDIMAGE results in:
    ERROR at line 1:
    ORA-29400: data cartridge error
    IMG-00704: unable to read image data
    ORA-06512: at "ORDSYS.ORDIMG_PKG", line 419
    ORA-06512: at "ORDSYS.ORDIMAGE", line 65
    ORA-06512: at "ORDSYS.ORDIMG_PKG", line 506
    ORA-06512: at "ORDSYS.ORDIMAGE", line 209
    ORA-06512: at "DSDBA.LOAD_ORDIMAGES", line 52
    ORA-06512: at line 1
    Oracle 8.1.7
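    For context, a hedged sketch of the kind of loader that typically sits behind a procedure like DSDBA.LOAD_ORDIMAGES (table, directory and file names here are hypothetical); the import(ctx) call is where interMedia parses the TIFF and where the IMG-00704 error above is raised:
    DECLARE
      ctx RAW(64) := NULL;
      img ORDSYS.ORDImage;
    BEGIN
      SELECT image INTO img FROM doc_images WHERE doc_id = 1 FOR UPDATE;
      -- point the object at a file in a server-side directory, then import it
      img.setSource('FILE', 'IMG_DIR', 'scan_0001.tif');
      img.import(ctx);
      UPDATE doc_images SET image = img WHERE doc_id = 1;
      COMMIT;
    END;
    /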

    We support TIFF G3 and G4. It is possible that your image is not a valid TIFF file. We need to take a look at the actual TIFF file you are having problems with. I will get in touch with you later to ask you to send it to us.

  • How to compress my uncompressed AE export !!!

    I am suffering from the huge size of my AE exports, and from searching I learned that this is normal because AE is not usually the last station in post-production.
    But what if I am done with my export file?
    Are there any programs that reduce the size but keep the quality?
    Thanks in advance

    Well, I enjoyed reading both of the suggested links,
    and I also had a look at related topics at adobe.com.
    This is all about AE. OK, things are getting clearer now.
    But my trouble now is in Premiere... could you please help me with this:
    http://forums.adobe.com/thread/906572
    Many thanks, dear Adobe

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    We wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; we just want to increase the report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, we are considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
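    A hedged SQL sketch of those three steps, using a hypothetical partitioned fact table SALES_FACT with a local bitmap index SALES_CUST_BIX and a partition P_2011:
    -- 1. Mark the bitmap indexes unusable
    ALTER INDEX sales_cust_bix UNUSABLE;
    -- 2. Set the compression attribute (here by compressing an existing partition)
    ALTER TABLE sales_fact MOVE PARTITION p_2011 COMPRESS;
    -- 3. Rebuild the indexes (repeat for each partition of each bitmap index)
    ALTER INDEX sales_cust_bix REBUILD PARTITION p_2011;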

  • K3b compressed dvd and uncompressed data

    Using k3b to burn an .iso to DVD R/W media results in a usable DVD of the Knoppix Live CD type.
    Can the same media be burned again with uncompressed data in the unused portion of the media using k3b?
    If so, what setup is required to do so?
    If this be possible, then it would seem possible to symlink to the uncompressed files and extend the compressed file capability to include the files on the uncompressed portion.
    Alternatively, the symlink could address additional compressed format files and thereby extend the basic Live CD to full size DVD capability.  This would be faster in execution (on-the-fly decompressed).
    A symlink seems the useful tool......

    Hi,
    Yes, both compressed and uncompressed data will be displayed in reports. Compression is an activity to save memory space in your database.
    It is not mandatory to compress all the requests to date in the cube.
    You can specify the desired request in the Collapse tab and release it for compression.
    Suppose you have requests from 1st July to date and you specify the 9th July request in the Collapse tab; then the requests from 1st July to 9th July will be compressed. It will not consider everything from 1st July to date.
    Regards,
    Suman

  • FCP Uncompressed Editing on a new Mac Pro

    Hello everybody,
    I too am about to empty my wallet into Apple's bank account.
    I am upgrading my editing capacity and have the following wish list in order to edit compressed and uncompressed 8 or 10 bit high definition video shot with HDV and DVCPro HD camcorders:
    Mac Pro - 2x 2.66GHz Dual Core Intel Xeon Processors
    2 GB RAM
    ATI Radeon X1900 XT graphics card
    1 x 250GB 7200-rpm Serial ATA (as boot drive)
    3 x 500GB 7200-rpm Serial ATA 3Gb/s (as striped footage drives)
    Apple Cinema Display (20" flat panel - as main FCP screen)
    Apple Cinema HD Display (23" flat panel - as HD monitor)
    I already own Final Cut Studio and run FCP 5.1.1
    My questions are:
    1. Is the above hard drive set up fast enough for 8 and/or 10 bit uncompressed HD/HDV editing?
    2. Will I need any extra hardware - besides the graphics card - in order to see every single pixel of 1920 x 1080 footage?
    I know others on the Apple Discussions forums have asked similar questions but I am still not certain which set up is best for me.
    Thanks in advance.
    P.S. Dear Mr Jobs. Thanks in advance for allowing PAL users of FCP to edit 720 footage from Panasonic's HVX-200 camcorder.
    Charlie
    G4 PowerBook, 1.67GHz Mac OS X (10.4.6) Long-time FCP user

    I have just found out that my plans for a Mac Pro with 4 internal drives (one as boot drive, the other three striped as footage RAID) will NOT be fast enough to edit uncompressed HD material.
    I have just read this on the Black Magic Design website:
    Blackmagic Disk Speed Test reported about 170 MB/sec which was easily fast enough for HD uncompressed 10 bit. However this three-disk internal solution is more suited to people needing simple capture and playback of HD, such as designers and effects artists. They just want simple clip capture and playback and the built-in three-disk array is a great solution for them. There are also newer 750 GB disks, which are faster, and so performance could increase further.
    For editors who have hundreds of cuts and/or effects in their projects, we would strongly recommend an external disk array with multiple disks.
    http://www.blackmagic-design.com/support/detail.asp?techID=60#intel_tiger
    An external RAID system will have to be the way forward for me for now. Thanks to you all for trying to answer my questions.
    Charlie.

  • Table compression in 10g

    Hi,
    If a table has compression enabled, the data will be compressed only if there is a bulk/direct-path load. Is there a way we can find out whether the data was inserted using a simple insert statement (insert into table ... values ...)?
    We just want to identify the candidate data for compression - data that was not inserted in the defined way and therefore did not get compressed.
    Database version: 10.2.0.4
    Regards,

    Hi Santi,
    Since you are using Oracle 10g, there is no feature by which we can perform compressed DML. Any DML uncompresses the block (in 10g).
    Read the link below and the replies by Hemant and HJR:
    10g Data Compression old/new
    Regards
    Girish Sharma
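    As a heuristic only (10g records nothing about how a row was inserted; the table name below is a placeholder), counting rows per block can hint at which parts of a table were bulk-loaded and compressed, since compressed blocks usually hold noticeably more rows:
    SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file_no,
           DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block_no,
           COUNT(*)                             AS rows_in_block
    FROM   big_table
    GROUP  BY DBMS_ROWID.ROWID_RELATIVE_FNO(rowid),
              DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
    ORDER  BY rows_in_block;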

  • Optimization steps for Uncompressed cube

    I am new to Oracle OLAP. I have designed a cube with 4 dimensions and three measures. Since I have measures like staff count and hours encoded, I have to group (sum) the number of staff and the hours encoded for each team and roll them up through the hierarchy. At the same time, I have to show the average number of staff members for a given period of time along with the sum of hours encoded. In order to override the cube aggregation, I have to uncompress the cube. This is resulting in serious issues. Can you please suggest what steps to follow to optimize performance with the same uncompressed cube?
    Appreciate your support.

    There are a few tricks to calculate averages in a compressed cube. The subject was discussed recently on the forum. 
    https://forums.oracle.com/message/10920684#10920684
    The best way, if you can handle it, is to use the MAINTAIN COUNT syntax.  But this may be a stretch if you are new to OLAP.
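    For what it's worth, the underlying trick (independent of the OLAP syntax) is that an average cannot be rolled up directly; you roll up a sum and a count and divide at query time. A generic plain-SQL illustration with hypothetical table and column names:
    SELECT team,
           TRUNC(work_date, 'MM') AS month,
           SUM(hours_encoded) AS total_hours,
           COUNT(DISTINCT staff_id) AS staff_count,
           SUM(hours_encoded) / COUNT(DISTINCT staff_id) AS avg_hours_per_staff
    FROM   staff_hours
    GROUP  BY team, TRUNC(work_date, 'MM');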

  • Degraded performance of ZFS in Update 4 ???

    Hi Guys,
    I'm playing with Blade 6300 to check performance of compressed ZFS with Oracle database.
    After some really simple tests I noticed that the default installation of S10U3 is actually faster than S10U4, and a lot faster.
    My configuration: a default Update 3, LiveUpgraded to Update 4, with a ZFS filesystem on a dedicated disk. I'm doing something as simple as $ time dd if=file.dbf of=/dev/null in a few parallel tasks; on Update 3 it takes somewhere close to 11m32s and on Update 4 around 12m6s. And that is reading from either compressed or uncompressed ZFS; the numbers are only a little bit higher (a couple of seconds) with compressed, which is impressive by itself, but the difference is the same.
    I'm really surprised by these results; has anyone else noticed this?

  • Gdal compression and pyramids

    Hi!
    I'm having problems importing raster-data using gdal with compression.
    DB-Version: 11.2.0.1.0
    gdal-Version: 1.72
    gdal-Statement: gdal_translate -of georaster \\Path\file.tif geor:user/pw@server,A_TABLE_GK2,GEORASTER -co compress=DEFLATE -co nbits=1 -co "INSERT=VALUES(100,'file.tif','GK2',SDO_GEOR.INIT('RDT_A_TABLE_GK2'))"
    The import works fine and the data is loaded into my table. I can validate the file using
    1. sdo_geor.validateblockMBR(georaster),
    2. SDO_GEOR_UTL.calcRasterStorageSize(georaster),
    3. substr(sdo_geor.getPyramidType(georaster),1,10) pyramidType, sdo_geor.getPyramidMaxLevel(georaster) maxLevel
    4. SELECT sdo_geor.getCompressionType(georaster) compType,sdo_geor.calcCompressionRatio(georaster) compRatio
    5. SELECT sdo_geor.getCellDepth(georaster) CellDepth,substr(sdo_geor.getInterleavingType(georaster),1,8) interleavingType,substr(sdo_geor.getBlockingType(georaster),1,8) blocking
    and all results are true (or feasible).
    Now my problem:
    DECLARE
      gr sdo_georaster;
    BEGIN
      SELECT georaster INTO gr
        FROM A_TABLE_GK2
       WHERE georid = 11 FOR UPDATE;
      sdo_geor.generatePyramid(gr, 'resampling=CUBIC');
      UPDATE A_TABLE_GK2 SET georaster = gr WHERE georid = 11;
      COMMIT;
    END;
    Error report:
    ORA-01403: no data found
    ORA-06512: at line 4
    01403. 00000 - "no data found"
    *Cause:
    *Action:
    The pyramid cannot be calculated. Leaving out the parameter -co compress=DEFLATE allows me to generate pyramids (though this results in an exploding tablespace, as 2 GB of data in the file system grows to about 120 GB in the database without compression - and 2 GB is only a small fraction of the data needed).
    I already noticed that GDAL needs the parameter -co compress=DEFLATE in upper case to allow validation of the GeoRaster - but this doesn't change my problems calculating pyramids.
    Anybody having an idea?
    NilsO

    We definitely need a color depth of 1 bit, as the input files are b/w. Importing with 8 bits blows up the file size by a factor of 8 (surprise ;-) ) and our customer has a lot of data he can't handle at 8 bits.
    The georid in the import statement is only a dummy. We're using a trigger to insert the georid (at the moment we're around georid 7000), but all the data I gave is taken from the same GeoRaster object. I already ran a series of tests using nbits, compression and srid statements in gdal. Importing using srid and nbits works fine with validation and pyramids. Using the compression parameter (with or without srid, nbits) doesn't.
    The current workaround is to import without compression, and every 50 files we compress the data and shrink the tablespace. Slow performance, and I needed to write a tool to create a set of gdal import statements combined with a function call on Oracle using sqlplus. It works for the moment, but it is no solution for the future....
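    For what it's worth, a hedged sketch of that in-database compression step (the georids used here are placeholders; the table and RDT names are the ones from the import above), rewriting an uncompressed GeoRaster with DEFLATE via SDO_GEOR.changeFormatCopy before pyramids are generated:
    DECLARE
      gr_src SDO_GEORASTER;
      gr_dst SDO_GEORASTER;
    BEGIN
      -- source object loaded uncompressed by gdal_translate
      SELECT georaster INTO gr_src FROM A_TABLE_GK2 WHERE georid = 100;
      -- initialize a target object and rewrite the raster with DEFLATE compression
      INSERT INTO A_TABLE_GK2 (georid, georaster)
        VALUES (101, SDO_GEOR.init('RDT_A_TABLE_GK2'))
        RETURNING georaster INTO gr_dst;
      SDO_GEOR.changeFormatCopy(gr_src, 'compression=DEFLATE', gr_dst);
      UPDATE A_TABLE_GK2 SET georaster = gr_dst WHERE georid = 101;
      COMMIT;
    END;
    /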
    C:\Program Files (x86)\gdal172>gdalinfo georaster:user/pw@db,A_TABLE_GK2,GEORASTER,GEORID=100 -mdd oracle
    Driver: GeoRaster/Oracle Spatial GeoRaster
    Files: none associated
    Size is 15748, 15748
    Coordinate System is `'
    Metadata (oracle):
    TABLE_NAME=A_TABLE_GK2
    COLUMN_NAME=GEORASTER
    RDT_TABLE_NAME=RDT_A_TABLE_GK2
    RASTER_ID=13209
    METADATA=<georasterMetadata xmlns="http://xmlns.oracle.com/spatial/georaster">
    <objectInfo>
    <rasterType>20001</rasterType>
    <isBlank>false</isBlank>
    </objectInfo>
    <rasterInfo>
    <cellRepresentation>UNDEFINED</cellRepresentation>
    <cellDepth>1BIT</cellDepth>
    <totalDimensions>2</totalDimensions>
    <dimensionSize type="ROW">
    <size>15748</size>
    </dimensionSize>
    <dimensionSize type="COLUMN">
    <size>15748</size>
    </dimensionSize>
    <ULTCoordinate>
    <row>0</row>
    <column>0</column>
    </ULTCoordinate>
    <blocking>
    <type>REGULAR</type>
    <totalRowBlocks>62</totalRowBlocks>
    <totalColumnBlocks>62</totalColumnBlocks>
    <rowBlockSize>256</rowBlockSize>
    <columnBlockSize>256</columnBlockSize>
    </blocking>
    <interleaving>BIP</interleaving>
    <pyramid>
    <type>NONE</type>
    </pyramid>
    <compression>
    <type>DEFLATE</type>
    </compression>
    </rasterInfo>
    <layerInfo>
    <layerDimension>BAND</layerDimension>
    </layerInfo>
    </georasterMetadata>
    Image Structure Metadata:
    INTERLEAVE=PIXEL
    COMPRESSION=DEFLATE
    NBITS=1
    Corner Coordinates:
    Upper Left ( 0.0, 0.0)
    Lower Left ( 0.0,15748.0)
    Upper Right (15748.0, 0.0)
    Lower Right (15748.0,15748.0)
    Center ( 7874.0, 7874.0)
    Band 1 Block=256x256 Type=Byte, ColorInterp=Gray
    With -checksum it's
    Can't see the beginning anymore on console...
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    More than 1000 errors or warnings have been reported. No more will be reported from now.
    Checksum=12669
    regards,
    NilsO
