Zlib compression and loading

Hello.
I'm working on a little 3D visualization engine, and I need to compress some data for delivery over the Internet. The exporter is being written in C++, and I want to use Java only for the Internet viewer, using JOGL. I know Java can decompress the zlib format with the java.util.zip package, but I think I once read somewhere that this has a little overhead, right? So I have two solutions in mind:
a) Use java.util.zip and load everything as byte arrays (to avoid endianness conversion)
b) Use some JNI code for the decompression
The problem with the second option is that I would need a signed applet, and that is something I don't know how to do (I'm not a systems engineer, I'm doing this as a hobby, and I don't know JNI perfectly either).
Is there any high-performance solution based on pure Java Standard Edition 6? I need to load all my data as different ByteBuffers to use them with JOGL. I'm planning to use zlib for the C++ exporter.
Bye!

RaulHuertas wrote:
Well... no :( . I just want some suggestions before starting on this. (To be clear, the package I meant above is java.util.zip.)

I suggest avoiding JNI if at all possible, because it makes things much more difficult, and you lose one of Java's key strengths: platform independence.
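For what it's worth, here is a minimal sketch of the pure java.util.zip route (the class and method names are just illustrative, and it assumes the C++ exporter writes little-endian data): it inflates a zlib stream and hands the result to JOGL as a direct ByteBuffer.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import java.util.zip.InflaterInputStream;

    public class ZlibLoader {

        // Inflate a zlib-compressed stream into a direct ByteBuffer.
        // JOGL wants direct buffers, so the inflated bytes are copied
        // into one instead of being kept as a plain array.
        public static ByteBuffer loadCompressed(InputStream in) throws IOException {
            InflaterInputStream inflater = new InflaterInputStream(in);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = inflater.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
            inflater.close();
            byte[] raw = out.toByteArray();
            ByteBuffer buf = ByteBuffer.allocateDirect(raw.length);
            buf.order(ByteOrder.LITTLE_ENDIAN); // assumption: the C++ exporter writes little-endian
            buf.put(raw);
            buf.flip();
            return buf;
        }

        // Example: view the decompressed bytes as floats for vertex data.
        // The FloatBuffer view honours the byte order set above.
        public static FloatBuffer asVertices(ByteBuffer buf) {
            return buf.asFloatBuffer();
        }
    }

Inflater itself is a thin JNI wrapper over the zlib code that ships inside the JVM, so the overhead compared to rolling your own JNI wrapper is usually just an extra buffer copy - and you stay unsigned and platform independent.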

Similar Messages

  • ZLIB compression

    Hello all,
    Who can tell me about Essbase's zlib compression mode?
    When using this compression algorithm, is any database setting excluded by this mode?
    For example, does it need specific transaction settings (isolation level, synchronization point)?
    Thanks in advance

    OLAP Server supports zlib compression; this method is used in packages like PNG, Zip, and gzip.
    First, I'd set the isolation level to committed access.
    To set zlib as the compression type for a database, use MAXL:
    alter database DBS-NAME set compression zlib;

  • Gdal compression and pyramids

    Hi!
    I'm having problems importing raster-data using gdal with compression.
    DB-Version: 11.2.0.1.0
    gdal-Version: 1.7.2
    gdal-Statement: gdal_translate -of georaster \\Path\file.tif geor:user/pw@server,A_TABLE_GK2,GEORASTER -co compress=DEFLATE -co nbits=1 -co "INSERT=VALUES(100,'file.tif','GK2',SDO_GEOR.INIT('RDT_A_TABLE_GK2'))"
    The import works fine and the data is loaded into my table. I can validate the file using
    1. sdo_geor.validateBlockMBR(georaster),
    2. SDO_GEOR_UTL.calcRasterStorageSize(georaster),
    3. substr(sdo_geor.getPyramidType(georaster),1,10) pyramidType, sdo_geor.getPyramidMaxLevel(georaster) maxLevel,
    4. sdo_geor.getCompressionType(georaster) compType, sdo_geor.calcCompressionRatio(georaster) compRatio,
    5. sdo_geor.getCellDepth(georaster) CellDepth, substr(sdo_geor.getInterleavingType(georaster),1,8) interleavingType, substr(sdo_geor.getBlockingType(georaster),1,8) blocking
    and all results are true (or feasible).
    Now my problem:
    DECLARE
    gr sdo_georaster;
    BEGIN
    SELECT georaster INTO gr
    FROM A_TABLE_GK2 WHERE georid = 11 FOR UPDATE;
    sdo_geor.generatePyramid(gr, 'resampling=CUBIC');
    UPDATE A_TABLE_GK2 SET georaster = gr WHERE georid = 11;
    COMMIT;
    END;
    Error report:
    ORA-01403: no data found
    ORA-06512: at line 4
    01403. 00000 - "no data found"
    *Cause:
    *Action:
    The pyramid cannot be calculated. Leaving out the parameter -co compress=DEFLATE allows me to generate pyramids (though this results in an exploding tablespace, as 2 GB of data in the file system grows to about 120 GB in the database without compression - and 2 GB is only a small fraction of the data needed).
    I have already noticed that gdal needs the parameter -co compress=DEFLATE in upper case for the georaster to validate - but this doesn't change my problem with calculating pyramids.
    Does anybody have an idea?
    NilsO

    We definitely need a color depth of 1 bit, as the input files are b/w. Importing with 8 bits blows up the file size by a factor of 8 (surprise ;-) ), and our customer has a lot of data he can't handle at 8 bits.
    The georid in the import statement is only a dummy. We're using a trigger to insert the georid (at the moment we're around georid 7000), but all the data I gave is taken from the same georaster object. I have already run a series of tests using nbits, compression, and srid statements in gdal. Importing using srid and nbits works fine with validation and pyramids. Using the compression parameter (with or without srid, nbits) doesn't.
    The current workaround is to import without compression, and every 50 files we compress the data and shrink the tablespace. Slow performance, and I needed to write a tool to create a set of gdal import statements combined with a function call on Oracle using sqlplus. It works for the moment, but it is no solution for the future...
    C:\Program Files (x86)\gdal172>gdalinfo georaster:user/pw@db,A_TABLE_GK2,
    GEORASTER,GEORID=100 -mdd oracle
    Driver: GeoRaster/Oracle Spatial GeoRaster
    Files: none associated
    Size is 15748, 15748
    Coordinate System is `'
    Metadata (oracle):
    TABLE_NAME=A_TABLE_GK2
    COLUMN_NAME=GEORASTER
    RDT_TABLE_NAME=RDT_A_TABLE_GK2
    RASTER_ID=13209
    METADATA=<georasterMetadata xmlns="http://xmlns.oracle.com/spatial/georaster">
    <objectInfo>
    <rasterType>20001</rasterType>
    <isBlank>false</isBlank>
    </objectInfo>
    <rasterInfo>
    <cellRepresentation>UNDEFINED</cellRepresentation>
    <cellDepth>1BIT</cellDepth>
    <totalDimensions>2</totalDimensions>
    <dimensionSize type="ROW">
    <size>15748</size>
    </dimensionSize>
    <dimensionSize type="COLUMN">
    <size>15748</size>
    </dimensionSize>
    <ULTCoordinate>
    <row>0</row>
    <column>0</column>
    </ULTCoordinate>
    <blocking>
    <type>REGULAR</type>
    <totalRowBlocks>62</totalRowBlocks>
    <totalColumnBlocks>62</totalColumnBlocks>
    <rowBlockSize>256</rowBlockSize>
    <columnBlockSize>256</columnBlockSize>
    </blocking>
    <interleaving>BIP</interleaving>
    <pyramid>
    <type>NONE</type>
    </pyramid>
    <compression>
    <type>DEFLATE</type>
    </compression>
    </rasterInfo>
    <layerInfo>
    <layerDimension>BAND</layerDimension>
    </layerInfo>
    </georasterMetadata>
    Image Structure Metadata:
    INTERLEAVE=PIXEL
    COMPRESSION=DEFLATE
    NBITS=1
    Corner Coordinates:
    Upper Left ( 0.0, 0.0)
    Lower Left ( 0.0,15748.0)
    Upper Right (15748.0, 0.0)
    Lower Right (15748.0,15748.0)
    Center ( 7874.0, 7874.0)
    Band 1 Block=256x256 Type=Byte, ColorInterp=Gray
    With -checksum the output is (I can't see the beginning anymore on the console...):
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    More than 1000 errors or warnings have been reported. No more will be reported from now.
    Checksum=12669
    regards,
    NilsO

  • "Speed bump" problem: when previewing on the iPad, the first page of each new chapter or section sticks and loads slowly.  How can I fix it?

    I am building a book with text and photographs in iBooks Author, basing the design on my publisher's layout for the hard copy version.  Whenever I introduce a new section or chapter, I hit a "speed bump" when previewing on the iPad.  As you read, the first page of the new chapter or section is very hard to turn and sticks as it loads.  Within each section the pages flow very easily, but when you reach the next section you hit another speed bump. I have experimented with many possible solutions to no avail.  It does not seem to matter whether the new section has images or just text; the speed bump occurs regardless.

    Joel Pickford wrote:
    However, I am finding that using fewer sections with more content in them minimizes the problem (i.e. fewer section transitions means fewer speed bumps). This is contrary to the support page's suggestion of keeping chapters and sections smaller. I am not having any problem within each section, even with lots of full-page photos.
    That recommendation was made, presumably, to reduce the length of the delay for each chapter. More content in the chapter means more delay when changing chapters. Of course, if you have lots of little chapters, the delay can still be annoying because, even with only four or five pages in a chapter, the delay is still noticeable. Just pick something that works for your book and don't worry too much about the delay. Your readers will be familiar with it if they've seen more than one iBooks Author book.
    Will the choice of color space, sRGB or Adobe RGB (1998), affect file size?
    Should I try to compress them more - say, level 5 or level 4 in Save For Web?
    Should I size the images for the iPad 2 instead of the iPad 3 Retina display, since there are more 2s out there?
    Which color space you use does not affect file size. Apple recommends sRGB, so I would stick with that. As to compression and sizing, only you can decide. Try to borrow an iPad 2 and judge at what level of compression and resolution you think the images start to look bad. Your high-res images for the new iPad will be down-sampled for display on the older models, which will introduce additional artifacts.
    For my book, I simply used images at 1024 x 768 resolution. When viewed on a new iPad, they look just fine. The lower resolution is basically unnoticeable (at least for my purposes). You can view a sample of my book here: "Djembe Construction: A Comprehensive Guide". If you think the images there are good enough for your purposes, you could down-size your images to half their resolution, which will reduce the file size by around a factor of three.
    Michi.

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    The data is rolled up into the aggregates based on the request. So once the data is loaded, the request is rolled up into the aggregates to fill them with the new data; after compression, the request is no longer available.
    Whenever you load data, you do a rollup to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into the aggregates before the compression is done.
    Hope this helps.
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • Compress and uncompress data size

    Hi,
    I have checked the total used size from dba_segments, but I need to check the compressed and uncompressed data sizes separately.
    DB is 10.2.0.4.

    Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
    You haven't posted ANYTHING that suggests that ANY of your data might be compressed. In 10g, compression is only performed on NEW data, and ONLY when that data is inserted using bulk inserts. See this white paper:
    http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
    "However, data which is modified without using bulk insertion or bulk loading techniques will not be compressed."
    1. Who compressed the data?
    2. How was it compressed?
    3. Have you actually performed any BULK INSERTS of data?
    SB already gave you the answer - if data is currently uncompressed, it will NOT have a 'compressed size'. And if data is currently compressed, it will NOT have an 'uncompressed size'.
    Now our management wants to know how much of the data is compressed and how much is uncompressed.
    1. Did that 'management' approve the use of compression?
    2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression ratio and any expected DML performance changes that compression might cause.
    The time to test the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.

  • Compress and rollup

    Hello,
    It seems that for non-cumulative InfoProviders, the order of processing between compression and rollup is important (for example 0IC_C03).
    We need to compress before rolling up into the aggregates.
    However, in the process chain, if I try to compress before rolling up, the two processes end in error (RSMPC011 and RSMPC015).
    In the management of the InfoProvider, the "compress after rollup" option is unchecked.
    Please can you tell me how to do this?
    Thank you everybody.
    Best regards.
    Vanessa Roulier

    Hi
    We can use either option.
    Aggregates are compressed automatically following a successful rollup. If, subsequently, you want to delete a request, you first need to deactivate all the aggregates.
    This process is very time consuming.
    If you compress the aggregates first, even if the InfoCube is compressed, you are able to delete requests that have been rolled up, but not yet compressed, without any great difficulty.
    Just try checking that option and see if the load works.
    Thanks
    Tripple k

  • Compress and Aggregates

    Hi experts,
    Please, what is the difference between aggregates and compression?
    Can anybody give the steps for each?
    Thanks in advance.
    With regards,
    Raghu

    Dear Raghu,
    Both compression and aggregates are used to increase reporting speed.
    To understand how compression works, you have to know BW's extended star schema. From a technical point of view, InfoCubes consist of fact tables and dimension tables. Fact tables store all your key figures; dimension tables tell the system which InfoObject identifications are being used with the key figures. Now, every InfoCube has two fact tables, a so-called F-table and an E-table. The E-table is an aggregation of the F-table's records, as the request ID is removed. Therefore an E-table normally has fewer records than an F-table. When you load data into an InfoCube, it is first stored only in the F-table. By compressing the InfoCube, you update the E-table and delete the corresponding records from the F-table.
    Aggregates are, from a technical point of view, InfoCubes themselves. They are related to your "basis" InfoCube, but you have to define them manually. They consist of a subset of all the records in your InfoCube. In principle there are two ways to select the relevant records for an aggregate: either you include only some of the InfoObjects of your InfoCube, or you choose fixed values for certain InfoObjects. Like compression, updating aggregates is a task which takes place after the loading of your InfoCube.
    When a report runs, BW automatically takes care of the F- and E-tables and any existing aggregates.
    Further information and instructions can be found in the SAP Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/91/270f38b165400fe10000009b38f8cf/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    Greetings,
    Stefan

  • Compress and Ship Archive Logs to DR

    I am currently working on an Oracle 10.2.0.3, Linux 5.5 DR setup. The DR site is at a remote location, and automatic synchronous archive shipping is enabled. As the DR site is remote, transferring 500MB archive logs puts a load on the bandwidth. Is there any way I can compress the archive logs and send them to the DR site? I am aware of the 11g parameter COMPRESSION=ENABLE, but I guess there's nothing along similar lines for 10g.
    Kindly help!
    Regards,
    Bhavi Savla.

    AliD wrote:
    What is "automatic synchronous archive shipping"? Do you mean your standby is in MAXPROTECTION or MAXAVAILABILITY mode? I fail to see how transferring only 500MB of data to anywhere is a problem. Your system should be next to idle!
    Compression is an 11g feature and is licensed (it requires the Advanced Compression option). Also, if I'm not mistaken, 10.2.0.3 is out of support. You should plan to upgrade it as soon as possible.
    Edit - It seems you mean your archive logs are 500MB each, not a total of 500MB for the day. In that case, to eliminate the peaks in transfer, use LGWR in ASYNC mode.
    Edited by: AliD on 10/05/2012 23:24

    It is a problem, as we have archive logs generating very frequently!

  • 2LIS_03_UM initilization and loading to BW

    Hi All,
    Currently we are installing the DataSource 2LIS_03_UM and are going to use the revaluated value for the stock. We have already filled the setup table for DataSource 2LIS_03_BF, loaded it into the cube, and done the compression, checking the "no marker update" flag.
    Now the question is: does 2LIS_03_UM also need its setup table filled, and does it have to be loaded into the same cube and compressed with the "no marker update" flag checked? Or is there some other procedure to follow?
    Kindly provide me an idea of how to proceed.
    Thanks,
    Muruganand.K

    Hi,
    Yes, you need to set up and load revaluations (2LIS_03_UM) as well, if any value changes are required.
    Same as for "BF": setup, loading, and "no marker update" are required.
    After executing the menu entry, a dialog box appears in which you must specify
    whether you want to carry out a setup for material movements (to be extracted using
    the DataSource 2LIS_03_BF, Report RMCBNEUA, transaction OLI1BW) or for
    revaluations (DataSource 2LIS_03_UM, Report RMCBNERP, transaction OLIZBW).
    More info: [How To… Handle Inventory Management in BW|http://sap.seo-gym.com/inventory.pdf]
    Srini

  • Is there an easy way to compress and upload movies to iWeb?

    Hi,
    I upload pictures and movies through iPhoto to iWeb. Easy! However, my relatives tell me that they can't watch movies because it takes FOREVER  to download the file. Any tips to compress the movies simply? I'm hoping for an easy solution!
    Thanks!

    If the movie is in .mov format you can compress it to MP4.
    If you have QuickTime Pro, export it with H.264 compression and select "Fast Start" which allows the movie to start playing before it is downloaded.
    The main problem with adding movies to your page in this way is that they have to fully download along with the page files.
    A better way is to use a poster image and some code to load the movie with the plugin of your choice...
    http://www.iwebformusicians.com/Website-Movie-Video/Poster-Movie.html
    http://www.iwebformusicians.com/Website-Movie-Video/JW-Media-Player.html
    http://www.iwebformusicians.com/Website-Movie-Video/Fallback-To-Flash.html
    ... or upload it to Vimeo for a better viewing experience than YouTube.
    http://www.iwebformusicians.com/Website-Movie-Video/Vimeo.html
    http://www.iwebformusicians.com/Website-Movie-Video/YouTube.html
    "I may receive some form of compensation, financial or otherwise, from my recommendation or link."

  • FCP X is loading and loading with the colour wheel revolving forever for the past few days.

    FCP X is loading and loading with the colour wheel revolving forever, for the past few days.
    FCP X was working fine on my new 15" MacBook Pro (Retina display with Mountain Lion 10.8.2) for the past month. Nowadays, each time I try to open FCP X, the same non-stop colour wheel rotates, trying to restore one of my projects. It continues even for hours. Finally I force quit the application.
    How do I get out of this? Will I be able to recover my project? I didn't do any backup in particular.
    Is backing up on a regular basis required?
    I tried the following guidance from Apple:
    In the Finder, hold down the Option key and choose Library from the Go menu.
    In the Library folder that opens, go to Application Support/Final Cut Pro.
    Drag the Layouts folder from the Final Cut Pro folder to the Trash.
    Still my problem continues.
    Help me.

    This is my pet checklist for questions regarding FCP X performance - you may have already addressed some of the items but it's worth checking.
    Make sure you're using the latest version of the application - FCP X 10.0.5 runs very well on my 2009 MacPro 2 x 2.26 GHz Quad-Core Intel Xeon with 16 GB RAM and ATI Radeon HD 5870 1024 MB. I run it with Lion 10.7.5.
    First, check that you have at least 20% free space on your system drive.
    For smooth playback without dropping frames, make sure 'Better Performance' is selected in the FCP X Preferences - Playback Tab.
    If you have not already done so, move your Projects and Events to a fast (Firewire 800 or faster) external HD. Make sure the drive's formatted OS Extended (journalling's not required for video). You should always keep at least 20% free space on the Hard Drives that your Media, Projects and Events are on.
    Check the spec of your Mac against the system requirements:
    http://www.apple.com/finalcutpro/specs/
    Check the spec of your graphics card. If it's listed here, it's not suitable:
    http://support.apple.com/kb/HT4664
    If you are getting crashes, there is some conflict on the OS. Create a new (admin) user account on your system and use FCP X from there - if it runs a lot better, there's a conflict and a clean install of the OS would be recommended.
    Keep projects to 20 mins or less. If you have a long project, work on 20 min sections then paste these into a final project for export.
    If your playback in FCP X is not good, I strongly recommend you use ProRes 422 Proxy - it edits and plays back like silk because the files are small but lightly compressed (not much packing and unpacking to do) - but remember to select 'Original or Optimised Media' (FCP X Preferences > Playback) just before you export your movie, otherwise it will be exported at low resolution.
    The downside of 'Proxy' is that it looks awful. DON'T use Proxy when you're assessing things like video quality - especially focus.
    If you have plenty of processor power, for the ultimate editing experience, create Optimised Media - most camera native files are highly compressed and need a great deal of processor power to play back - particularly if you add titles, filters or effects. ProRes 422 takes up much more hard drive space but is very lightly compressed. It edits and plays back superbly.
    Personally, I work with XDCAM EX and h.264 from a Canon DSLR. Both of these run fine with my system, but I do transcode multicam clips.
    Hide Audio Waveforms at all times when you don't need them (both in Browser and Storyline / Timeline). They take up a lot of processor power. (Use the switch icon at the bottom-right of your timeline to select a format without waveforms if you don't need them at the moment, then switch back when you do).
    Create folders in the Project and Events libraries and put any projects you are not working on currently in those folders. This will help a lot. There's a great application for this, called Event Manager X - for the tiny cost, it's invaluable.
    http://assistedediting.intelligentassistance.com/EventManagerX/
    Unless you cannot edit and playback without it, turn off Background Rendering in Preferences (under Playback) - this will help general performance and you can always render when you need to by selecting the clip (or clips) and pressing Ctrl+R.
    The biggest single improvement I saw in performance was when I upgraded the RAM from 8 GB to 16.
    If none of this helps, you might have incompatible or corrupt media or project files. Move ALL your Events and Projects to temporary folders so that FCP X doesn't find them on launch. If it launches OK, re-introduce the Events and Projects one at a time, re-launching each time, so that you can track down the corrupt or incompatible files.
    Andy

  • I used to shoot video on my PowerShot S3 IS (MVI_.AVI) and load it into iPhoto, and I used QuickTime Pro to convert it for emailing to family members. That was when I was running Leopard; now with Mt. Lion nothing works. Videos do not open at all?

    I used to shoot video on my PowerShot S3 IS (format MVI_.AVI) and load the clips into iPhoto, and I used QuickTime Pro to convert them for emailing to family members. That was when I was running Leopard; now with Mt. Lion nothing works. Videos do not open at all. Any simple suggestions? I am not real keen on iMovie productions and not real certain how they compress emailed attachments. Is there something simple here that I am missing? Thanks so much. I am a grandpa trying to share with family members far away (quickly & simply).

    I am surprised that Apple did not build a converter into Mt. Lion.
    Apple does have a converter built into Mountain Lion. It is called "QuickTime." However, in order to use this converter, you must first make sure the compression formats are playback compatible with QT. Basically, there are three levels of QT compatibility. The lowest is "Playback": these files can be played by QT but may not be conversion or edit compatible with QT. The second level is QT "Conversion" compatible: these files are playback compatible and can be converted to other compression formats using QT. "Edit" compatible media files are fully compatible with QT, since they can be played, converted, and/or edited by QT.
    The main problem here is that AVI is a "legacy" file format that has not been officially supported by Microsoft for more than 11 years, since it was replaced by the Windows Media multimedia file/compression formats. Many of the original compression formats used in AVI files have never been transcoded for the Mac platform, for use beyond System 9, or for use beyond PPC platforms. In addition, some commonly used AVI codecs are proprietary or use non-standard (hybrid) profile and level combinations. In short, there is little wonder that Apple has been distancing itself from this outdated file type as it rewrites and upgrades its own QT structure to support more standardized, more scalable, more modern high-definition file types and compression formats. It is really unfortunate that users continue to use this outmoded file type simply because it is freely available, easy to use, or they are simply too lazy to move on to more modern or more efficient file types and/or compression formats.
    I tested Wondershare Video Converter Ultimate for the Mac, which seems to be the "state of the art".  I may be purchasing a new camera which might create a whole new set of variables.  This program seems to cover all bases and is great for novices.
    There are many third-party apps available if you wish to search for them. Many are even available in the App Store. Most do their job well and it is usually a matter of personal user preference as to which is best.
    HandBrake seems more suited to folks with more experience and knowledge.
    I mentioned HandBrake primarily because it is free and easy to use when you employ the included conversion preset options. (The TV options can normally be used for almost any situation, depending on the source file and output requirements.) It is also excellent for more experienced users, but it has a somewhat limited choice of output options, as it does not access the user's system QT codec component configuration.

  • Cube Compression and Aggregation

    Hello BW Gurus,
    Can I first compress my InfoCube data and then load data into the aggregates?
    The reason I ask is that when the InfoCube is compressed, the request IDs are removed.
    Are the request IDs necessary for data to be transferred to the aggregates, and later on for aggregate compression?
    Kindly suggest.
    regards,
    TR PRADEEP

    Hi,
    just to clarify this:
    1) You can compress your InfoCube and then INITIALLY fill the aggregates. The request information is then no longer needed.
    2) But you can NOT compress requests in your InfoCube when your aggregates are already filled and these requests are not yet "rolled up" into the aggregates (this action is prohibited by the system anyway).
    Hope this helps,
    Klaus

  • Difference between Compression and Aggregation

    Hi,
    Can anybody explain the difference between compression and aggregation? Performance-wise, which is better? Please explain in detail.
    Thanks,
      Chinna

    Hi,
    Suppose you have three characteristics in a cube, say X, Y, Z.
    Records which have the same combination of these characteristics, but were loaded with different requests, do not get aggregated.
    So when you compress the cube, the system deletes the request number and aggregates the records which have the same combination of characteristics.
    Coming to aggregates: if you build an aggregate on the characteristic X, it aggregates the records which have the same value for that particular characteristic.
    For example, say you have the records:
    x1, y1, z1, ... (some key figures)
    x1, y2, z1, ...
    x1, y1, z1, ...
    x3, y3, z3, ...
    If you compress them, you will get three records.
    If you go for an aggregate based on the characteristic X, you will get two records.
    So aggregates give a more aggregated level of data than compression (see the sketch after this post).
    regards,
    haritha.
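
    As a toy illustration of the above (this is not SAP code - the rows and names below are invented for the sketch), compression and aggregation behave like two different group-by operations: compression groups by all characteristics once the request ID is dropped, while an aggregate on X groups by X alone.

        import java.util.Map;
        import java.util.TreeMap;

        public class CubeToy {
            public static void main(String[] args) {
                // Each row: request ID, characteristics X, Y, Z, and a key figure.
                String[][] rows = {
                    {"1", "x1", "y1", "z1", "10"},
                    {"2", "x1", "y2", "z1", "20"},
                    {"3", "x1", "y1", "z1", "30"},
                    {"4", "x3", "y3", "z3", "40"},
                };
                // "Compression": drop the request ID and group by (X, Y, Z).
                Map<String, Double> compressed = new TreeMap<String, Double>();
                // "Aggregate on X": group by X alone.
                Map<String, Double> aggregate = new TreeMap<String, Double>();
                for (String[] r : rows) {
                    String key = r[1] + "," + r[2] + "," + r[3];
                    double amount = Double.parseDouble(r[4]);
                    Double c = compressed.get(key);
                    compressed.put(key, (c == null ? 0.0 : c) + amount);
                    Double a = aggregate.get(r[1]);
                    aggregate.put(r[1], (a == null ? 0.0 : a) + amount);
                }
                System.out.println("compressed:     " + compressed); // three records
                System.out.println("aggregate on X: " + aggregate);  // two records
            }
        }

    Running it prints three compressed groups but only two aggregate groups, matching the worked example above.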
