Aggregate compression

Hi all,
  I have a small and easy question. Is it true that when I compress the cube from the Collapse tab, the aggregates get compressed as well? It is kind of a silly question, because I don't think it is possible to roll up without the request number.
Baseer

Hi Baseer,
The aggregates are compressed automatically when the cube is compressed, provided the aggregates are in the activated state. If the aggregates are deactivated, there is no impact on them. A small sketch of this behaviour follows below.
Refer to these links for more info:
Re: Aggregate Rollup and Compression
http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
Bye
Dinesh
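
To make this concrete, here is a minimal Python sketch (plain Python, not an SAP API; all names are illustrative) of the behaviour Dinesh describes: compressing the cube also compresses every aggregate that is currently active, while deactivated aggregates are untouched.

    def compress_cube(cube, aggregates):
        # Compress the cube itself ...
        cube["compressed"] = True
        # ... and along with it every aggregate that is still active.
        for aggr in aggregates:
            if aggr["active"]:
                aggr["compressed"] = True
            # Deactivated aggregates are not touched at all.

    aggrs = [
        {"name": "AGGR_A", "active": True,  "compressed": False},
        {"name": "AGGR_B", "active": False, "compressed": False},
    ]
    compress_cube({"compressed": False}, aggrs)
    print(aggrs)  # AGGR_A is now compressed, AGGR_B is unchanged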

Similar Messages

  • What do you mean by aggregate compression?

    Hi all,
    I know how to compress InfoCubes, but I have heard about aggregate compression. Can anyone explain the significance of aggregate compression?
    Regards,
    Hari

    Hi hari,
    Compression is a process that deletes the request IDs, and this saves space.
    When and why use InfoCube compression in real time?
    InfoCube compression consolidates the fact data by eliminating duplicate rows: the contents of the F fact table are moved into the E fact table. Compressed InfoCubes require less storage space and are faster to read. The catch is that once you compress, you can no longer back out the compressed requests, so you are safe as long as you don't have any error in your modeling.
    This compression can be done through a process chain and also manually.
    Check these links:
    http://www.sap-img.com/business/infocube-compression.htm
    Compression is done to increase the performance of the cube:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
    InfoCube compression and aggregate compression are mostly independent.
    Usually, if you decide to keep the requests in the InfoCube, you can still compress the aggregates. If you then need to delete a request, you just have to rebuild the compressed aggregates. So there is no problem in compressing aggregates, unless rebuilding them takes a lot of time.
    It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up the data of the different requests. As a result you get rid of the request number; all other attributes stay the same. If there is more than one record per combination of characteristics, the key figures are aggregated according to their aggregation behaviour (SUM, MIN, MAX, etc.). This reduces the number of records in the cube.
    Example:
    requestid date 0material 0amount
    12345 20061201 3333 125
    12346 20061201 3333 -125
    12346 20061201 3333 200
    will result in (the request ID is gone after compression):
    date 0material 0amount
    20061201 3333 200
    In this case two records are saved.
    But once the request ID is lost (due to compression), you cannot get it back.
    Therefore, once you have compressed the InfoCube, there is no point in keeping the aggregates uncompressed. But as long as your InfoCube is uncompressed, you can always compress the aggregates without any problem other than the rebuild time of the aggregates. The sketch below illustrates what compression does to the example records.
    Hope this helps.
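
    As a minimal illustration of the logic described above (plain Python, not an SAP API), compression drops the request ID and then sums the key figures per remaining characteristic combination, exactly as in the worked example:

        from collections import defaultdict

        # F-table records from the example: (requestid, date, 0material, 0amount)
        f_table = [
            (12345, "20061201", "3333", 125),
            (12346, "20061201", "3333", -125),
            (12346, "20061201", "3333", 200),
        ]

        def compress(records):
            # Group by every characteristic except the request ID and SUM
            # the key figure, as compression does for additive key figures.
            e_table = defaultdict(int)
            for _requestid, date, material, amount in records:
                e_table[(date, material)] += amount
            return dict(e_table)

        print(compress(f_table))
        # {('20061201', '3333'): 200} -- three F rows collapse into one E row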

  • Aggregate = compress infocube?

    I've heard that an aggregate has E and F fact tables, which is a property of compressed InfoCubes.
    So what is the difference between an aggregate and a compressed InfoCube?
    Joseph

    Whenever we have a huge volume of data, it is preferable to use aggregates; there are some other parameters to consider as well. In short: an aggregate is a summarized copy of the cube's data with its own E and F fact tables, while compression is the operation that moves requests from the F to the E fact table, and it applies to cubes and aggregates alike.
    Please refer to the following link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/content.htm

  • Aggregate Compression - how many requests to keep uncompressed

    I'm creating a new aggregate for one of my InfoCubes and I am filling it right now.  I'm working on adding the rollup operation to my process chain and I notice that by default, the rollup operation will compress the requests in the aggregate, and you can specify a certain number of requests in the aggregate that you want to leave uncompressed in case they need to be backed out.
    What is a standard/good practice for how many requests you should leave uncompressed in the aggregate for a daily delta load?  Is a week's worth a good number?  A month?
    For that matter, should this number match how many requests you leave uncompressed in the InfoCube as well?
    I want to be careful about this as I see the documentation says that if you need to back out a compressed request from an aggregate you have to empty and refill the aggregate all over again.
    Thanks,
    Chris

    Since for an aggregate you rarely back out requests, consider letting it compress everything. This eliminates queries on the aggregate's F fact table altogether. If the time it takes to refill the aggregate is a major concern, then leave a week uncompressed, or whatever the likely window is for something to be backed out. A sketch of such a retention rule follows below.
    If the base cube is compressed regularly and has just a couple of weeks of uncompressed requests, then maybe it's easier to just follow that pattern. There is always something to be said for not making things more complicated than they need to be.
    Pizzaman
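
    To make the retention policy concrete, here is a minimal Python sketch (illustrative data structures, not an SAP API) that selects which requests to compress while keeping a back-out window of N days uncompressed:

        from datetime import date, timedelta

        KEEP_UNCOMPRESSED_DAYS = 7  # e.g. one week's worth of daily deltas

        requests = [
            {"id": 101, "loaded_on": date(2024, 1, 1)},
            {"id": 102, "loaded_on": date(2024, 1, 8)},
            {"id": 103, "loaded_on": date(2024, 1, 9)},
        ]

        today = date(2024, 1, 10)
        cutoff = today - timedelta(days=KEEP_UNCOMPRESSED_DAYS)

        # Compress only requests that have fallen out of the back-out window.
        to_compress = [r["id"] for r in requests if r["loaded_on"] < cutoff]
        print(to_compress)  # [101]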

  • Cube compression vs aggregate compression

    Hi, I am doing a rollup (with "compress after rollup"), but when I then click Collapse, it fails.

    Job log of the compression:
    Status: Collapse finished with errors
    Message: Collapse of InfoCube XX up to request ID xx is cancelled

  • Proc Chain - Delete Overlapping Requests fails with aggregates

    BW Forum,
    Our weekly/daily load process chain loads several full (not delta) transaction infopackages. Those infopackages are intended to replace prior full loads and are then rolled up into aggregates on the cubes.
    The problem is that the process chains fail to delete the overlapping requests. I have to manually remove the aggregates, remove the InfoPackages, then rebuild the aggregates. It seems that the delete-overlapping-requests step fails due to the aggregates or a missing index on the aggregates, but I'm not certain. The lengthy job log contains many references to the aggregate prior to failing with the messages below.
    11/06/2004 13:47:53 SQL-END: 11/06/2004 13:47:53 00:00:00                                                 DBMAN        99
    11/06/2004 13:47:53     SQL-ERROR: 1,418 ORA-01418: specified index does not exist                        DBMAN        99
    11/06/2004 13:47:59 ABAP/4 processor: RAISE_EXCEPTION                                                       00        671
    11/06/2004 13:47:59 Job cancelled                                                                           00        518
    The RAISE_EXCEPTION is a short dump with exception condition "OBJECT_NOT_FOUND" raised. The termination occurred in the ABAP program "SAPLRRBA" in "RRBA_NUMBER_GET_BW". The main program was "RSPROCESS".
    I've looked for OSS notes. I've tried to find a process type that deletes aggregates prior to loading/deleting overlapping requests. In the end, I've had to intervene manually each time we execute the process chain, so I've got to resolve the issue.
    Do others have this problem? Are the aggregates supposed to be deleted prior to loading full packages that will require deletion of overlapping requests? I presume not, since there doesn't seem to be a process type for this. Am I missing something?
    We're using BW 3.3 SP 15 on Oracle 9.2.0.3.
    Thanks for your time and consideration!
    Doug Maltby

    Are the aggregates compressed after the rollup? If you compress the aggregate completely, the request you are trying to delete is no longer identifiable once it is in the compressed E fact table (compression throws away the request ID).
    So you need to change the aggregate settings so that the most recent requests remain in the uncompressed F fact table. Then the request deletion should work.
    I thought that if the aggregate was fully compressed and you then wanted to delete a request, the system would recognize that the request was unavailable due to compression and would automatically refill the aggregate - but I'm not sure where I read that. Maybe it was a Note; maybe that doesn't happen in a process chain. Just not sure.
    The better solution, when you regularly back out a request, is simply not to fully compress the aggregate: let it follow the compression of the base cube, which I assume you have set to compress requests older than XX days. The sketch below shows why deletion fails once a request is fully compressed.
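
    Here is a minimal Python sketch of the failure mode described above (illustrative only, not an SAP API): a request can be selectively deleted only while its rows still carry a request ID in the uncompressed F fact table; once compressed into the E fact table, the ID is gone and nothing matches.

        # Requests still sitting, uncompressed, in the F fact table:
        f_table_request_ids = {12347, 12348}
        # The E fact table stores request ID 0 for everything, so a
        # compressed request can no longer be identified.

        def can_delete_request(request_id):
            # Deletion selects rows by request ID; only F-table rows match.
            return request_id in f_table_request_ids

        print(can_delete_request(12348))  # True  -- still in F, deletable
        print(can_delete_request(12345))  # False -- compressed; the aggregate
                                          # must be emptied and refilled instead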

  • Problem in Process chain due to Aggregate Roll-up

    Hi,
    I have an InfoCube with aggregates built on it. I have loaded data into the InfoCube from 2000 to 2008 and have rolled up and compressed the aggregates for this.
    I have also loaded the 2009 data into the same InfoCube using Prior Month and Current Month InfoPackages, for which I only roll up the aggregates; no compression of aggregates is done. The current and prior month load runs through a process chain four times per day. The process chain is built in such a way that it deletes the overlapping requests when it loads for the second/third/fourth time in a day.
    The problem is that when the overlapping requests are deleted, the process chain is also taking the aggregates' compressed requests (the 2000 to 2008 data), decompressing them, deactivating the aggregates, activating the aggregates again, and refilling and compressing the aggregates again. This takes nearly 1 hour for the process chain to run, when it should take no more than 3 minutes.
    So, what could be done to tackle this problem?  Any help would be highly appreciated.
    Thanks,
    Murali

    Hi all,
    Thanks for your reply.
    Arun: The problem with the solution you gave is that until I roll up the aggregates for the Current & Prior Month InfoPackage, the "ready for reporting" symbol does not appear for that particular request.
    Thanks,
    Murali

  • Aggregation, archiving and compression

    Hello,
    How important is it to begin setting up aggregation, archiving and compression before go-live of a BW environment?
    Is it possible to do this after the cubes have been filled?
    We are going live after the weekend and haven't given this much consideration... The plan is to start thinking about this now, just after the go-live...
    Need I worry?
    Best regards,
    Fredrik

    Hi Fredrik,
    Aggregates, compression and archiving are done to improve performance.
    Yes, you can do compression and rollup (after an aggregate is created) after the cubes have been filled. Normally this is included in a process chain; there are process types for it. Compression and rollup are done after the daily data has been loaded into the InfoCube.
    Normally you can set the cube up for compression.
    Aggregates are created based on an examination of query runtimes.
    Check the aggregate and query performance doc:
    Business Intelligence Performance Tuning [original link is broken]
    As for archiving, you won't need it so soon after go-live;
    archiving is done for historical data that is no longer used.
    You can start by including the compression in the process chain,
    and evaluate your most-used queries to see if aggregates are needed.
    Hope this helps.

  • Business Warehouse - Aggregates

    Hi all,
    I have a question from the Business Warehouse area.
    In a cube we can set up aggregates and a flag via
    Cube - Manage - Rollup - Aggregates - Compress After Rollup.
    I suppose this information (the "Compress After Rollup" flag) should be saved in some system table. Does anybody know the name of this table?
    Thanks a lot in advance
    Martin

    Hi,
   Check the tables below.
    RSDDAGGR           
    RSDDAGGRCOMP       
    RSDDAGGRDIR        
    RSDDAGGREF         
    RSDDAGGRMODSTATE   
    RSDDAGGRT          
    RSDDSTATAGGR

  • Aggregate Table Name

    Hello All
    Is there a table name for an aggregate?
    I know that when you create aggregates, a 6-digit technical number is generated by default. Is this the table name for that particular aggregate?
    Like /BIC/*******
    I am a little bit confused on this issue.
    Please help me out.
    Regards
    Balaji

    As others have mentioned:
    Aggregate tables
    /BIC/F1xxxxx
    /BIC/E1xxxxx
    You might have data in just F, just E, or both depending on your aggregate compression practices.
    /BIC/D1xxxxx
    These are the dimension tables for the aggregate. If a dimension of the aggregate is defined with the same characteristics as the corresponding dimension on the base cube, the aggregate uses the base cube's dimension table rather than creating what would be a duplicate.
    If the dimension is a line item dimension, the aggregate creates a view (named /BIC/D1xxxxx) instead of a transparent dimension table.
    Table RSDDAGGRDIR provides the linkage between the base cube and its aggregates. DD02L will tell you whether a dimension table is a transparent table or a view. The sketch below spells out the naming pattern.
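
    A minimal Python sketch of that naming convention (the pattern is described above; the helper function itself is hypothetical and purely illustrative):

        def aggregate_table_names(aggr_number: str) -> dict:
            # The 6-digit technical number generated when the aggregate
            # is created determines all of its table/view names.
            assert len(aggr_number) == 6 and aggr_number.isdigit()
            return {
                "f_fact": f"/BIC/F{aggr_number}",  # uncompressed requests
                "e_fact": f"/BIC/E{aggr_number}",  # compressed data
                "dim":    f"/BIC/D{aggr_number}",  # dimension tables/views
            }

        print(aggregate_table_names("100012"))
        # {'f_fact': '/BIC/F100012', 'e_fact': '/BIC/E100012', 'dim': '/BIC/D100012'}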

  • Does the InfoCube compression process lock the InfoCube?

    Hi all,
    First of all, thanks for your active support and co-operation.
    Does the compression process lock the cube? My doubt is: while the compression process is running on a cube, if I try to load data into the same cube, will it be allowed or not? Please reply as soon as you can.
    Many Thanks in Advance.
    Jagadeesh.

    Hi,
    Compression is a process that deletes the request IDs, and this saves space.
    When and why use InfoCube compression in real time?
    InfoCube compression consolidates the fact data by eliminating duplicate rows: the contents of the F fact table are moved into the E fact table. Compressed InfoCubes require less storage space and are faster to read. The catch is that once you compress, you can no longer back out the compressed requests, so you are safe as long as you don't have any error in your modeling.
    This compression can be done through a process chain and also manually.
    Check these links:
    http://www.sap-img.com/business/infocube-compression.htm
    Compression is done to increase the performance of the cube:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
    InfoCube compression and aggregate compression are mostly independent.
    Usually, if you decide to keep the requests in the InfoCube, you can still compress the aggregates. If you then need to delete a request, you just have to rebuild the compressed aggregates. So there is no problem in compressing aggregates, unless rebuilding them takes a lot of time.
    It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up the data of the different requests. As a result you get rid of the request number; all other attributes stay the same. If there is more than one record per combination of characteristics, the key figures are aggregated according to their aggregation behaviour (SUM, MIN, MAX, etc.). This reduces the number of records in the cube.
    Example:
    requestid date 0material 0amount
    12345 20061201 3333 125
    12346 20061201 3333 -125
    12346 20061201 3333 200
    will result in (the request ID is gone after compression):
    date 0material 0amount
    20061201 3333 200
    In this case two records are saved.
    But once the request ID is lost (due to compression), you cannot get it back.
    Therefore, once you have compressed the InfoCube, there is no point in keeping the aggregates uncompressed. But as long as your InfoCube is uncompressed, you can always compress the aggregates without any problem other than the rebuild time of the aggregates.
    Hope it helps.

  • Cube Compression and Aggregation

    Hello BW Gurus,
    Can I first compress my InfoCube data and then load data into the aggregates?
    The reason I ask is that when the InfoCube is compressed, the request IDs are removed.
    Are the request IDs necessary for data to be transferred to the aggregates and, later on, for aggregate compression?
    Kindly suggest.
    regards,
    TR PRADEEP

    Hi,
    Just to clarify this:
    1) You can compress your InfoCube and then INITIALLY fill the aggregates. The request information is then no longer needed.
    2) But you can NOT compress requests in your InfoCube when your aggregates are already filled and these requests have not yet been rolled up into the aggregates (the system prohibits this action anyway). A sketch of this check follows below.
    Hope this helps,
    Klaus
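
    A minimal Python sketch of the rule in point 2 (data structures are illustrative, not an SAP API): a request in the InfoCube may only be compressed once it has been rolled up into every filled aggregate.

        def can_compress_cube_request(request_id, rolled_up_per_aggregate):
            # rolled_up_per_aggregate maps each filled aggregate to the set
            # of request IDs already rolled up into it.
            return all(request_id in rolled_up
                       for rolled_up in rolled_up_per_aggregate.values())

        status = {"AGGR1": {501, 502}, "AGGR2": {501}}
        print(can_compress_cube_request(501, status))  # True  -- rolled up everywhere
        print(can_compress_cube_request(502, status))  # False -- not yet in AGGR2,
                                                       # so compression is blocked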

  • Non Cumulative Cube selective deletion and Reload

    Hi,
    We have a typical scenario wherein the company code displays # when we execute the inventory reports at plant level.
    Upon analysis, we found that the plant-to-company-code mapping was not maintained in R/3. This has now been fixed, so we are planning to do a selective reload for that particular plant alone.
    Keeping in mind the loading sequence/scenario for inventory, can anyone advise whether we need to do a stock init again for this plant and reload the data after the selective deletion?
    Or can we load directly from the material movement DataSource? Will there be any impact, e.g. on the marker update?
    Note: We already have two years of data in this inventory cube.
    The cube's aggregates are compressed.
    Thanks
    Ramesh

    Any inputs, please?
    Ramesh

  • Query time-out

    Hi SDN,
    I am getting a timeout for one query when executing it in the PRD system. That query's filter field setting is "Only values in InfoProvider". In DEV I am able to execute the same query, but in that system the field setting is "Only posted values in navigation".
    Thanks in advance

    Hi Kirun,
    I'm sure it works in DEV because the data volume is not as big as in production.
    You need to tune your InfoProvider (InfoCube/ODS).
    I once had the same problem: the query ran in DEV but not in PRD.
    Since I was using an ODS, I tuned it by creating an index, and after that it ran well.
    If it's an InfoCube, you can create an aggregate or compress it.
    Hope it helps.
    Br,
    Daniel N.

  • In SAP BI, why is performance tuning considered for the cube and not the ODS?

    Details about indexes, partitions, aggregates, compression and rollup:
    how do these help to increase system performance,
    and why is ODS not suitable?

    Hi,
    Generally we do performance tuning on the cubes, as most of the reporting is done on cubes. However, in the case of a DSO we can also increase performance by creating indexes or by deactivating the SID generation flag if there is no reporting on the DSO.
    The following two links show different aspects of performance tuning:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/404544e7-83c9-2e10-7b80-a24d5099ce3f
    For line item dimensions and high cardinality:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/005f3197-d3da-2e10-1a94-a5c62342f2aa
    You may create aggregates on the cube if some InfoObjects are used very frequently in queries, to enhance query execution performance.
    Navesh
