Compression and Aggregates

Hi experts,
What is the difference between aggregates and compression?
Can anybody give the steps for how to do each?
Thanks in advance.
With regards,
raghu

Dear Raghu,
Both compression and aggregates are used to increase reporting speed.
To understand how compression works, you have to know BW's extended star schema. From a technical point of view, InfoCubes consist of fact tables and dimension tables. Fact tables store all your key figures; dimension tables tell the system which InfoObject values are used with those key figures. Now, every InfoCube has two fact tables, a so-called F-table and an E-table. The E-table is an aggregation of the F-table's records in which the request ID is removed, so an E-table normally has fewer records than an F-table. When you load data into an InfoCube, it is stored only in the F-table. By compressing the InfoCube you move the data into the E-table and delete the corresponding records from the F-table.
Aggregates are, from a technical point of view, InfoCubes themselves. They are based on your "basis" InfoCube, but you have to define them manually. They consist of a condensed subset of the records in your InfoCube. In principle there are two ways to restrict the records for an aggregate: either you include only some of the InfoObjects contained in the InfoCube, or you fix certain InfoObjects to specific values. Like compression, updating aggregates (the roll-up) is a task that takes place after loading the InfoCube.
When a report runs, BW automatically decides whether to read from the F-table, the E-table, or an existing aggregate.
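Conceptually, compression behaves like the SQL sketch below. This is only an illustration, not the SQL that BW actually generates: the real fact tables are created and managed by BW itself (named along the lines of /BIC/F<cube> and /BIC/E<cube>), and BW merges into existing E-table rows rather than blindly inserting. Table and column names here are made up.

-- Collapse the request dimension: aggregate F-table rows into the E-table ...
INSERT INTO e_sales (dim_time, dim_product, quantity, revenue)
SELECT dim_time, dim_product, SUM(quantity), SUM(revenue)
FROM   f_sales
WHERE  request_id <= :last_request_to_compress
GROUP  BY dim_time, dim_product;

-- ... and remove the compressed requests from the F-table.
DELETE FROM f_sales
WHERE  request_id <= :last_request_to_compress;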
Further information and instructions can be found in the SAP Help:
http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/91/270f38b165400fe10000009b38f8cf/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
Greetings,
Stefan

Similar Messages

  • How to compress and decompress a pdf file in java

    I have a PDF file.
    What I want is a lossless compression technique to compress and decompress that file.
    Please help me to do so;
    I have always been worried about this topic.

    Here is a simple program that does the compression bit (braces restored; it needs the imports java.io.File, java.io.FileInputStream, java.io.FileOutputStream, java.io.IOException, java.util.zip.ZipEntry and java.util.zip.ZipOutputStream). Since ZIP is lossless, the PDF comes back byte for byte when you decompress it.
    // Writes the file at filePathTobeCompressed into a new zip archive at compressedFilePath.
    static void compressFile(String compressedFilePath, String filePathTobeCompressed) {
        try {
            ZipOutputStream zipOutputStream =
                    new ZipOutputStream(new FileOutputStream(compressedFilePath));
            File file = new File(filePathTobeCompressed);
            byte[] readBuffer = new byte[4096];                // fixed-size buffer; no need to load the whole file
            int bytesIn;
            FileInputStream fileInputStream = new FileInputStream(file);
            ZipEntry zipEntry = new ZipEntry(file.getName());  // getName() so the entry does not store the full path
            zipOutputStream.putNextEntry(zipEntry);
            while ((bytesIn = fileInputStream.read(readBuffer)) != -1) {
                zipOutputStream.write(readBuffer, 0, bytesIn);
            }
            fileInputStream.close();
            zipOutputStream.close();
        } catch (IOException e) {                              // FileNotFoundException is a subclass of IOException
            e.printStackTrace();
        }
    }
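    For the decompression side, here is a minimal sketch along the same lines (class and method names, paths and the 4 KB buffer are arbitrary choices; in real code you would also validate entry names before writing them to disk):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class ZipUtil {

        // Extracts every entry of the given zip archive into targetDirectory.
        static void decompressFile(String compressedFilePath, String targetDirectory) {
            try {
                ZipInputStream zipInputStream =
                        new ZipInputStream(new FileInputStream(compressedFilePath));
                byte[] readBuffer = new byte[4096];
                ZipEntry zipEntry;
                while ((zipEntry = zipInputStream.getNextEntry()) != null) {
                    if (zipEntry.isDirectory()) {              // skip directory entries
                        zipInputStream.closeEntry();
                        continue;
                    }
                    // Strip any directory part of the entry name and write the file out.
                    File outFile = new File(targetDirectory, new File(zipEntry.getName()).getName());
                    FileOutputStream fileOutputStream = new FileOutputStream(outFile);
                    int bytesIn;
                    while ((bytesIn = zipInputStream.read(readBuffer)) != -1) {
                        fileOutputStream.write(readBuffer, 0, bytesIn);
                    }
                    fileOutputStream.close();
                    zipInputStream.closeEntry();
                }
                zipInputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }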

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    Data is rolled up into the aggregates request by request. So once data is loaded, the request is rolled up to fill the aggregates with the new data; after compression the request is no longer available for this.
    Whenever you load data, you do a roll-up to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all data is rolled up into the aggregates before the compression is done.
    hope this helps
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • When the CD jewel case song list is printed from a playlist in iTunes, the list is compressed and unreadable. No problem before the latest software update. How do I fix this?

    When the song list is printed from a playlist in iTunes for inserting into a CD jewel case, the list is compressed and indecipherable. I did not have this problem prior to the latest software update. How can I fix this?

    Can you play the song in iTunes?
    If you can't, the song file is probably corrupt and needs to be replaced.

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; I just want to increase report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, I am considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    1. Mark the bitmap indexes unusable.
    2. Set the compression attribute.
    3. Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
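    As a sketch of that three-step sequence (the object names SALES_FACT, SALES_FACT_BIX and SALES_2012_Q1 are made up for illustration; adapt to your own objects and check the syntax against your version's documentation):

    -- 1. Mark the bitmap index unusable.
    ALTER INDEX sales_fact_bix UNUSABLE;

    -- 2. Set the compression attribute on the partition (future direct-path loads are compressed) ...
    ALTER TABLE sales_fact MODIFY PARTITION sales_2012_q1 COMPRESS;

    -- ... or compress the rows that are already there by rebuilding the partition.
    ALTER TABLE sales_fact MOVE PARTITION sales_2012_q1 COMPRESS;

    -- 3. Rebuild the index for the affected partition.
    ALTER INDEX sales_fact_bix REBUILD PARTITION sales_2012_q1;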

  • Basic questions re zip, compression, and e-mailing large attachments

    Hi, I have never really understood what is meant by "zip" and compressing a file. 
    I believe these things make it possible to make a file smaller for sending via e-mail, and then when the person at the other end opens up the attachment, it returns to its full size in all its glory.  
    Is this true? 
    Reason I ask is that I have a couple of .mov files that I want to send as attachments to a friend, but they are both mammoth in size--1 gig each.
    I know it's probably a lost cause, but can compressing and zipping help me out here? Thanks for your patient replies.

    Yes, that's right. Read this to see how. http://docs.info.apple.com/article.html?path=Mac/10.6/en/8726.html. You'll have to experiment to see if you save enough space to send by email. Different Internet providers have different max file limits. Just try a test email attachment. Your sig says 10.5.8 and the article is for 10.6 but hopefully it is the same.

  • Compress and Encrypt a Folder

    I would like to use Automator to compress and encrypt a folder.
    I've tried using Automator to create an encrypted compressed file (.ZIP) but don't appear to have the options, e.g., encryption. Can someone suggest a workflow to encrypt a file?
    Thanks in advance!

    When I want to encrypt a file, I save it as an encrypted .pdf file.
    I found that here:
    http://docs.info.apple.com/article.html?path=Mac/10.4/en/mh1035.html
    Too bad it can only encrypt one file and not a folder.
    PowerBook G4   Mac OS X (10.4.4)  

  • Compress and uncompress data size

    Hi,
    I have checked the total used size from dba_segments, but I need to check the compressed and uncompressed data sizes separately.
    DB is 10.2.0.4.

    I have checked the total used size from dba_segments, but I need to check the compressed and uncompressed data sizes separately.
    DB is 10.2.0.4.
    Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
    You haven't posted ANYTHING that suggests that ANY of your data might be compressed. For 10g, compression will only be performed on NEW data, and ONLY when that data is inserted using BULK INSERTS. See this white paper:
    http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
    "However, data which is modified without using bulk insertion or bulk loading techniques will not be compressed."
    1. Who compressed the data?
    2. How was it compressed?
    3. Have you actually performed any BULK INSERTS of data?
    SB already gave you the answer - if data is currently 'uncompressed' it will NOT have a 'compressed size'. And if data is currently 'compressed' it will NOT have an 'uncompressed size'.
    Now our management wants to know how much of the data is compressed and how much is uncompressed.
    1. Did that 'management' approve the use of compression?
    2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression and any expected DML performance changes that compression might cause.
    The time for testing the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.
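    If you just want to see which tables have compression enabled at all, and how big their segments are, a query along these lines works on 10.2 (the schema name is a placeholder; note that DBA_TABLES only shows whether compression is enabled for the segment, not how many rows were actually inserted in compressed form, and for partitioned tables you need DBA_TAB_PARTITIONS instead):

    SELECT t.owner,
           t.table_name,
           t.compression,                       -- ENABLED / DISABLED
           ROUND(SUM(s.bytes) / 1024 / 1024) AS size_mb
    FROM   dba_tables   t
    JOIN   dba_segments s
           ON  s.owner        = t.owner
           AND s.segment_name = t.table_name
    WHERE  t.owner = 'YOUR_SCHEMA'              -- placeholder
    GROUP  BY t.owner, t.table_name, t.compression
    ORDER  BY size_mb DESC;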

  • How to find data compression and speed

    1. What's the command/way to see how much space the data takes in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    2. The time taken for execution of the same SQL on HANA varies (I see that when I am F8-ing the same query repeatedly), so it's not given in terms of pure CPU cycles, which would have been more absolute.
    I always thought that there must be a better way of checking the speed of execution, like checking a log which records all data about executions, rather than just looking at the query execution in the output window.

    Rajarshi Muhuri wrote:
    1. What's the command/way to see how much space the data takes in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
    To calculate the compression factor, we check the required storage after compression and compare it to what would be required to store the same amount of data uncompressed (you know, length of data x number of occurrences of each distinct value of a column).
    One thing to note here: compression factors must always be seen for one column at a time. There is no such measure as a "table compression factor".
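    For example, a query along the following lines shows the per-column sizes and the resulting ratio; this assumes your revision exposes the UNCOMPRESSED_SIZE and MEMORY_SIZE_IN_TOTAL columns in the monitoring view M_CS_ALL_COLUMNS (check the view on your system first; schema and table names are placeholders):

    SELECT schema_name,
           table_name,
           column_name,
           uncompressed_size,
           memory_size_in_total,
           ROUND(uncompressed_size / NULLIF(memory_size_in_total, 0), 1) AS compression_factor
    FROM   m_cs_all_columns
    WHERE  schema_name = 'MY_SCHEMA'
      AND  table_name  = 'MY_TABLE'
    ORDER  BY uncompressed_size DESC;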
    > 2. The time taken for execution of the same SQL on HANA varies (I see that when I am F8-ing the same query repeatedly), so it's not given in terms of pure CPU cycles, which would have been more absolute.
    >
    > I always thought that there must be a better way of checking the speed of execution, like checking a log which records all data about executions, rather than just looking at the query execution in the output window.
    Well, CPU cycles wouldn't be an absolute measure either.
    Think about the time that is not spent on the CPU.
    Wait time for locks, for example.
    Or time lost because other processes used the CPU.
    In reality you're usually not interested so much in the perfect execution of one query that has all resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
    In the end, the actual response time is what means money to business processes.
    So that's what we're looking at.
    And there are some tools available for that. The performance trace for example.
    And yes, query runtimes will always differ and never be totally stable all the time.
    That is why performance benchmarks take averages for multiple runs.
    regards,
    Lars

  • The detailed algorithms of OLTP table compression and basic table compression?

    I'm doing research on the detailed algorithms of OLTP table compression and basic table compression. Anyone who knows, please tell me - and also the difference between them. Thank you.

    http://www.oracle.com/us/products/database/db-advanced-compression-option-1525064.pdf
    Edited by: Sanjaya Balasuriya on Dec 5, 2012 2:49 PM
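    In short (a hedged sketch, with made-up table definitions): basic compression only compresses blocks filled by direct-path/bulk loads (CTAS, INSERT /*+ APPEND */, SQL*Loader direct path), while OLTP compression, part of the separately licensed Advanced Compression option, also compresses blocks filled by conventional DML, compressing a block in batch once it fills up.

    -- Basic table compression: effective for direct-path loads only.
    CREATE TABLE sales_basic (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    ) COMPRESS BASIC;

    -- OLTP compression: also compresses conventional INSERT/UPDATE activity
    -- (requires the Advanced Compression option).
    CREATE TABLE sales_oltp (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    ) COMPRESS FOR OLTP;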

  • Compress and rollup

    Hello,
    It seems that for non-cumulative InfoProviders, the order of processing between compression and roll-up is important (for example 0IC_C03).
    We need to compress before rolling up into the aggregates.
    However, in the process chain, if I try to compress before rolling up, the two processes end in error (RSMPC011 and RSMPC015).
    In the management screen of the InfoProvider, "compress after rollup" is unchecked.
    Can you please tell me how to do this?
    Thank you everybody.
    Best regards.
    Vanessa Roulier

    Hi
    We can use either option.
    Aggregates are compressed automatically following a successful roll-up. If, subsequently, you want to delete a request, you first need to deactivate all the aggregates.
    This process is very time consuming.
    If you compress the aggregates first, then even if the InfoCube is compressed you are able to delete requests that have been rolled up, but not yet compressed, without any great difficulty.
    Just try checking that option and loading, and see if it works.
    Thanks
    Tripple k

  • Compress and Ship Archive Logs to DR

    I am currently working on an Oracle 10.2.0.3, Linux 5.5 DR setup. The DR is at a remote location and automatic synchronous archive shipping is enabled. As the DR is remote, transferring 500MB archive logs puts a load on the bandwidth. Is there any way I can compress the archive logs and ship them to the DR? I am aware of the 11g attribute COMPRESSION=ENABLE, but I guess there's nothing along similar lines for 10g.
    Kindly help!
    Regards,
    Bhavi Savla.

    AliD wrote:
    What is "Automatic Synchronous Archive shipping"? Do you mean your standby is in MAXPROTECTION or MAXAVAILABILITY mode? I fail to see how transfering only 500MB of data to anywhere is a problem. Your system should be next to idle!
    Compression is an 11g feature and is licensed (advanced compression license). Also if I'm not mistaken, 10.2.0.3 is out of support. You should plan to upgrade it as soon as possible.
    Edit - It seems you mean your archivelogs are 500MB each not total of 500MB for the day. In that case to eliminate the peaks in transfer, use LGWR in async mode.
    Edited by: AliD on 10/05/2012 23:24It is a problem as we have archives generating very frequently!!!
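    For the "use LGWR in async mode" suggestion, a hedged example of what the destination setting can look like on 10.2 (SERVICE and DB_UNIQUE_NAME are placeholders for your standby; adjust the attributes to your Data Guard configuration):

    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=drstby LGWR ASYNC NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=drstby' SCOPE=BOTH;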

  • OSB Unused block compression and Undo backup compression

    Dear all.
    I know that OSB backup software is faster than other vendors' software, especially in the Oracle database backup area.
    There are two special methods (unused block compression and undo backup compression).
    But normally, comparable products using the same RMAN API (Veritas/Legato etc.) also back up only used blocks and skip unused blocks, right?
    I'm confused: what is the difference between a used-block-only backup and unused block compression?
    Please explain in detail about unused block compression and undo backup compression.
    Sorry about my poor knowledge.
    Thanks.

    This is explained in detail in the OSB technical white paper:
    http://www.oracle.com/technetwork/products/secure-backup/learnmore/osb-103-twp-166804.pdf
    Let me know if you have any questions
    Donna

  • Gdal compression and pyramids

    Hi!
    I'm having problems importing raster-data using gdal with compression.
    DB-Version: 11.2.0.1.0
    gdal-Version: 1.72
    gdal-Statement: gdal_translate -of georaster \\Path\file.tif geor:user/pw@server,A_TABLE_GK2,GEORASTER -co compress=DEFLATE -co nbits=1 -co "INSERT=VALUES(100,'file.tif','GK2',SDO_GEOR.INIT('RDT_A_TABLE_GK2'))"
    The import works fine and the data is loaded into my table. I can validate the file using
    1. sdo_geor.validateblockMBR(georaster),
    2. SDO_GEOR_UTL.calcRasterStorageSize(georaster),
    3. substr(sdo_geor.getPyramidType(georaster),1,10) pyramidType, sdo_geor.getPyramidMaxLevel(georaster) maxLevel
    4. SELECT sdo_geor.getCompressionType(georaster) compType,sdo_geor.calcCompressionRatio(georaster) compRatio
    5. SELECT sdo_geor.getCellDepth(georaster) CellDepth,substr(sdo_geor.getInterleavingType(georaster),1,8) interleavingType,substr(sdo_geor.getBlockingType(georaster),1,8) blocking
    and all results are true (or feasible).
    Now my problem:
    DECLARE
      gr sdo_georaster;
    BEGIN
      SELECT georaster INTO gr
      FROM A_TABLE_GK2 WHERE georid = 11 FOR UPDATE;
      sdo_geor.generatePyramid(gr, 'resampling=CUBIC');
      UPDATE A_TABLE_GK2 SET georaster = gr WHERE georid = 11;
      COMMIT;
    END;
    Error report:
    ORA-01403: no data found
    ORA-06512: at line 4
    01403. 00000 - "no data found"
    *Cause:
    *Action:
    The pyramid cannot be calculated. Leaving out the parameter -co compress=DEFLATE allows me to generate pyramids (though this results in an exploding tablespace, as 2GB of data in the file system rises to about 120GB in the database without compression - and 2GB is only a small amount of the data needed).
    I already noticed that gdal needs the parameter -co compress=DEFLATE in upper case to allow validation of the GeoRaster, but this doesn't change my problems calculating pyramids.
    Anybody have an idea?
    NilsO

    We definitely need a color depth of 1 bit, as the input files are b/w. Importing with 8 bits blows up the file size by a factor of 8 (surprise ;-) ), and our customer has a lot of data he can't handle at 8 bits.
    The georid in the import statement is only a dummy. We're using a trigger to insert the georid (at the moment we're around georid 7000), but all the data I gave is taken from the same GeoRaster object. I already ran a series of tests using nbits, compression and srid statements in gdal. Importing using srid and nbits works fine with validation and pyramids. Using the compression parameter (with or without srid, nbits) doesn't.
    The current workaround is to import without compression, and every 50 files we compress the data and shrink the tablespace. Performance is slow, and I needed to write a tool to create a set of gdal import statements combined with a function call on Oracle using SQL*Plus. It works for the moment, but it is no solution for the future....
    C:\Program Files (x86)\gdal172>gdalinfo georaster:user/pw@db,A_TABLE_GK2,
    GEORASTER,GEORID=100 -mdd oracle
    Driver: GeoRaster/Oracle Spatial GeoRaster
    Files: none associated
    Size is 15748, 15748
    Coordinate System is `'
    Metadata (oracle):
    TABLE_NAME=A_TABLE_GK2
    COLUMN_NAME=GEORASTER
    RDT_TABLE_NAME=RDT_A_TABLE_GK2
    RASTER_ID=13209
    METADATA=<georasterMetadata xmlns="http://xmlns.oracle.com/spatial/georaster">
    <objectInfo>
    <rasterType>20001</rasterType>
    <isBlank>false</isBlank>
    </objectInfo>
    <rasterInfo>
    <cellRepresentation>UNDEFINED</cellRepresentation>
    <cellDepth>1BIT</cellDepth>
    <totalDimensions>2</totalDimensions>
    <dimensionSize type="ROW">
    <size>15748</size>
    </dimensionSize>
    <dimensionSize type="COLUMN">
    <size>15748</size>
    </dimensionSize>
    <ULTCoordinate>
    <row>0</row>
    <column>0</column>
    </ULTCoordinate>
    <blocking>
    <type>REGULAR</type>
    <totalRowBlocks>62</totalRowBlocks>
    <totalColumnBlocks>62</totalColumnBlocks>
    <rowBlockSize>256</rowBlockSize>
    <columnBlockSize>256</columnBlockSize>
    </blocking>
    <interleaving>BIP</interleaving>
    <pyramid>
    <type>NONE</type>
    </pyramid>
    <compression>
    <type>DEFLATE</type>
    </compression>
    </rasterInfo>
    <layerInfo>
    <layerDimension>BAND</layerDimension>
    </layerInfo>
    </georasterMetadata>
    Image Structure Metadata:
    INTERLEAVE=PIXEL
    COMPRESSION=DEFLATE
    NBITS=1
    Corner Coordinates:
    Upper Left ( 0.0, 0.0)
    Lower Left ( 0.0,15748.0)
    Upper Right (15748.0, 0.0)
    Lower Right (15748.0,15748.0)
    Center ( 7874.0, 7874.0)
    Band 1 Block=256x256 Type=Byte, ColorInterp=Gray
    With -checksum the output is (I can't see the beginning any more on the console):
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    More than 1000 errors or warnings have been reported. No more will be reported f
    rom now.
    Checksum=12669
    regards,
    NilsO

  • TIFF compression and a resolution of 300

    I know all of the different schools of thought on this, but some of the clients, and even some older designers, are going to call and want to know what I am doing giving them a small file with a resolution of 72 even if it is at 48 inches. Some just do not get it and want the safe 300-res files that they are used to. These same clients are going to balk at paying me for an extra CD/DVD because I cannot use LZW compression and fit the whole shoot on one CD/DVD. Am I missing something? Several of the things I assumed were missing, I have learned from this group, were just hidden a little too well in Aperture.

    Use a batch EXIF editor (like ExifTool) to set the DPI on all files before you import them into Aperture. Whatever goes in is what comes out currently.
    Another option is to set the DPI on all exported files in a similar way. Since all you are doing is changing an EXIF tag it's pretty quick...
    And file an enhancement request asking for the ability to set DPI in export options.
