Compression or Aggregates.

We have an accounts payable cube. Data is loaded into it daily, and 11 reports are built on it. There is a proposal to compress the data in the cube to improve query performance.
Could anyone please let me know whether it is feasible to go straight for compression, or should we first build aggregates on the cube and then compress the data?

Hi,
First build the aggregates, then run the compression.
If you compress first, the request IDs are deleted and the records are moved from the F table to the E table, so those requests can no longer be rolled up into the aggregates individually. The preferred order is therefore: first roll up the aggregates, then compress.
Ali.

Similar Messages

  • Deleting an InfoCube request (already rolled up and compressed in aggregates)

    Dear All,
    I found one request that contains incorrect data. This request was loaded into the cube and rolled up and compressed in the aggregates. I changed the request status to red and deleted it; the deletion job has now been running for 20 hours. The job log shows that it dropped all the aggregate tables and is refilling them. Since the cube holds around 120 million (12 crore) records, this is taking very long to complete. Can I load further data while the deletion is running? What would be the best strategy to handle this situation?
    regards:
    Jitendra

    Hi
    If your data is not compressed in the cube, the best way is:
    1) Deactivate the aggregates
    2) Delete the request from the cube
    3) Reload the data to the cube
    4) Fill the aggregates again
    If the data is compressed in the cube, you cannot do a request-based deletion; the only way is a selective deletion.
    While the deletion is in progress, you cannot load data into the same target; the target will be locked.
    Regards,
    Venkatesh

  • Cube compression vs aggregate compression

    Hi, I am doing a roll-up (with 'compress after roll-up' set), but when I then start a collapse, it fails.

    Job log of the compression:
    Status:  Collapse finished with errors
    Message: Collapse of InfoCube XX up to request ID XX is cancelled

  • What is the difference between aggregates and compression in BI 7.0?

    Hi all,
    Could you please tell me the difference between compression and aggregates in BI?

    Dear Ganesh,
    Aggregates allow you to improve the performance of BI queries when data is read from an InfoCube. The data of a BI InfoCube is saved in relational aggregates in an aggregated form. Relational aggregates are useful if you want to improve the performance of one or more specific BI queries, or specifically improve the performance of reporting on characteristic hierarchies.
    Compression is the process whereby the entire contents of an InfoCube's F table are moved to its E table; the data is aggregated over identical characteristic combinations as part of this move.
    Why do we compress?
    When you load data into the InfoCube, entire requests can be added at the same time. Each of these requests has its own request ID, which is included in the fact table in the package dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (where all characteristics are the same except the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data and affects system performance when you analyze data, since each time you execute a query, the system has to perform aggregation using the request ID.
    You can eliminate these disadvantages by compressing data and bringing data from different requests together into one single request (request ID 0).
    For more info on aggregates
    http://help.sap.com/search/highlightContent.jsp
    and info on Compression
    http://help.sap.com/search/highlightContent.jsp
    Hope this helps.
    Best Regards,
    VVenkat..

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube receives a delta load from infosource 1 and a full load from infosource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management screen of the cube, on the Roll-up tab, 'Compress After Roll-up' is deactivated, so compression of the aggregates should take place only when the cube data itself is compressed. (I am not sure whether this influences how the roll-up is done via the Adjust process type in the process chain: will the deselected checkbox really prevent compression of the aggregates after roll-up, or does the checkbox influence only a manually started roll-up?)
    Nevertheless, let's assume here, that aggregates will not be compressed until the compression will run on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not going to be compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by request ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during the next process chain run during Delete Overlapping Requests if new Full with RNR 111 will already be loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way as the infopackage-based deletion of similar requests does when running on non-compressed requests. Since the newest 10 requests are not compressed and the only overlapping request is the full load with RNR 111, I assume it should simply delete the RNR 110 data from the aggregates by request ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates. Am I right? Please confirm or deny. Thanks! If Delete Overlapping Requests would still lead to a rebuild of the aggregates, the only option would be to set the infopackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The 'Compress After Roll-up' option applies to the aggregates of the cube, not to the cube itself. When it is selected, the aggregates are compressed if and only if a roll-up has been done on them; this does not affect the compression of the cube itself, i.e. moving the data from the F to the E fact table. When it is deselected, cube compression is likewise unaffected, but the system no longer checks the roll-up status of the aggregates in order to compress them.
    > Will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?
    The checkbox has no influence even on a manually started roll-up: compression of the aggregates will not start automatically after your roll-up; it has to be done together with the compression of the cube itself.
    As for the second question: I believe the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up. The same happens with a manual deletion: if you need to delete a request that has been rolled up and whose aggregates are compressed, you have to deactivate the aggregates and refill them afterwards. In detail: as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request and can be deleted without deactivating the aggregates.
    So in your case I see no need to remove the step from the chain.
    Correct me if you find any issue.
    rgds,

  • How to compress aggregate only

    Hi
    We have a process chain that failed because of an ODS activation. I activated the ODS manually; this ODS updates two cubes further on.
    Both cubes have the 'Compress After Roll-up' setting, as we only compress the aggregates of those cubes.
    After the further update, the data went from the ODS into the cubes.
    I then did a manual aggregate roll-up.
    For one cube, roll-up and compression of the aggregate were both successful.
    For the other, the roll-up succeeded but the compression of the aggregate failed.
    Could you please let me know how I can compress the aggregate for the other cube manually?
    Waiting for a reply.

    The only "standard" options are:
    - Delete, reconstruct, and roll up this request, making sure the compress indicator is SET.
    - Reactivate and refill this aggregate (also making sure the compress indicator is SET); how long this takes depends, of course, on how big your InfoCube is...
    Another option would be to run the function module RSDDK_AGGREGATE_CONDENSE; however, it works only under certain circumstances and is quite tricky... I recommend going with the standard approach...
    Should you decide to go with the FM, please try it first in a DEV system.
    Hope this helps...
    Olivier.

  • What do you mean by aggregates compression?

    Hi all,
    I know how to compress InfoCubes, but I have heard about aggregate compression. Can anyone explain the significance of compressing aggregates?
    regds
    hari

    Hi hari,
    Compression is a process that removes the request IDs from the fact data, and this saves space.
    When and why do we use InfoCube compression in real life?
    InfoCube compression moves the data from the F fact table to the E fact table and eliminates duplicates along the way. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is: once you compress, you can no longer delete the compressed requests by request ID. You are safe as long as you don't have any error in your modeling.
    This compression can be done through Process Chain and also manually.
    Check these Links:
    http://www.sap-img.com/business/infocube-compression.htm
    compression is done to increase the performance of the cube...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
    Infocube compression and aggregate compression are mostly independent.
    Usually, if you decide to keep the requests in the InfoCube, you can still compress the aggregates. If you then need to delete a request, you just have to rebuild an aggregate if it is compressed. Therefore there is no problem in compressing aggregates, unless rebuilding the aggregates takes a lot of time.
    It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up all the data of the different requests. As a result you get rid of the request number; all other attributes stay the same. If there is more than one record per combination of characteristics, the key figures are combined according to their aggregation behaviour (SUM, MIN, MAX, etc.). This reduces the number of records in the cube.
    Example:
    request ID | date     | 0MATERIAL | 0AMOUNT
    12345      | 20061201 | 3333      | 125
    12346      | 20061201 | 3333      | -125
    12346      | 20061201 | 3333      | 200
    will result in:
    request ID | date     | 0MATERIAL | 0AMOUNT
    0          | 20061201 | 3333      | 200
    In this case two records are saved.
    But once the requestid is lost (due to compression) you cannot get it back.
    Therefore, once you compressed the infocube, there is no sense in keeping the aggregates uncompressed. But as long as your Infocube is uncompressed you can always compress the aggregates, without any problem other than rebuild time of the aggregates.
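    The arithmetic of the example above can be sketched in a few lines of Python. This is an illustration only; the table and field names mirror the example, not SAP's internal implementation:

```python
# Toy model of InfoCube compression: rows from the F fact table are
# grouped by their characteristic values (date, 0MATERIAL), the request
# ID is discarded (compressed rows get request ID 0), and the key
# figure 0AMOUNT is summed per group.
from collections import defaultdict

f_table = [
    # (request_id, date, material, amount)
    (12345, "20061201", "3333", 125),
    (12346, "20061201", "3333", -125),
    (12346, "20061201", "3333", 200),
]

def compress(rows):
    e_table = defaultdict(int)
    for _req_id, date, material, amount in rows:
        e_table[(date, material)] += amount  # request ID is dropped here
    return [(0, date, mat, amt) for (date, mat), amt in e_table.items()]

print(compress(f_table))  # [(0, '20061201', '3333', 200)]
```

    Three F-table rows collapse into one E-table row, and the request ID is gone for good.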
    Hope this helps.

  • Compressing Aggregates in Historical Cubes

    Hi,
    I have a historical cube to which we no longer load data. How do I compress the aggregates built on it? The cube itself is completely compressed, but the daily change run has inserted some records into the aggregates' F tables.
    I used to know the answer to this question but can't recall it now... can somebody get to it before I find out?
    Thanks

    Wow, all three are the same answer...
    But how can I roll up, since I don't have any new requests to roll up? I thought that checkmark compresses the aggregates only after a roll-up,
    and I don't have any new requests to roll up since we don't load the cube any more.
    AND
    how will that setting compress after the change run, given that the change run is the reason I am getting entries in the F tables of the aggregates?
    Any more answers?
    thanks

  • Compress aggregates via a process chain

    I am rolling up aggregates without compressing them. I would like to do this in a process chain for anything older than 10 days. Can anyone advise on how I can do this?
    Thanks

    S B Deodhar wrote:
    > Hi, thanks for the input.
    >
    > My scenario is this:
    >
    > We have had to drop and reload the contents of a cube because it was quicker than dropping specific requests that had been rolled up and compressed.
    I guess you can't delete a request that is already compressed; the system doesn't allow you to delete those requests. For a problematic request you can do a selective deletion if required.
    >
    > So what I would like to know is as follows:
    >
    > 1. Can I only compress an aggregate at the same time as I carry out a roll-up, i.e. if I do not check the compress flag for a request at the time I roll up, am I then unable to compress that request going forward?
    > 2. If I choose to compress data in the cube, is it specifically the cube, or will it also take into consideration the compression of requests in aggregates which are not compressed?
    If an InfoCube is compressed, keeping the aggregates uncompressed won't help you, as the request ID is already lost.
    You can also try Collapse -> select the radio button 'Calculate Request IDs' -> only compress those requests that are older than a certain number of days. Please note that compressing a request also compresses all the requests below (older than) it.
    Hope it helps.
    regards
    laksh
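    The "only compress requests older than a certain number of days" selection described above can be sketched like this (illustrative names and structures, not an SAP API; just the selection logic):

```python
from datetime import date, timedelta

def requests_to_compress(requests, older_than_days, today):
    """Return the request IDs whose load date is on or before the cut-off."""
    cutoff = today - timedelta(days=older_than_days)
    return [req_id for req_id, load_date in requests if load_date <= cutoff]

requests = [
    (101, date(2024, 1, 1)),   # old enough to compress
    (102, date(2024, 1, 8)),   # inside the 10-day window, stays in the F table
    (103, date(2024, 1, 15)),  # newest request, also stays
]
print(requests_to_compress(requests, older_than_days=10, today=date(2024, 1, 16)))
# [101]
```

    Everything on or before the cut-off date gets collapsed; the newest requests stay uncompressed and remain deletable by request ID.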

  • Proc Chain - Delete Overlapping Requests fails with aggregates

    BW Forum,
    Our weekly/daily load process chain loads several full (not delta) transaction infopackages. Those infopackages are intended to replace prior full loads and are then rolled up into aggregates on the cubes.
    The problem is the process chains fail to delete the overlapping requests. I manually have to remove the aggregates, remove the infopackages, then rebuild the aggregates. It seems that the delete overlapping request fails due to the aggregates or a missing index on the aggregates, but I'm not certain. The lengthy job log contains many references to the aggregate prior to it failing with the below messages.
    11/06/2004 13:47:53 SQL-END: 11/06/2004 13:47:53 00:00:00                                                 DBMAN        99
    11/06/2004 13:47:53     SQL-ERROR: 1,418 ORA-01418: specified index does not exist                        DBMAN        99
    11/06/2004 13:47:59 ABAP/4 processor: RAISE_EXCEPTION                                                       00        671
    11/06/2004 13:47:59 Job cancelled                                                                           00        518
    The raise_exception is a short dump with Exception condition "OBJECT_NOT_FOUND" raised.
    The termination occurred in the ABAP program "SAPLRRBA " in
    "RRBA_NUMBER_GET_BW".                                    
    The main program was "RSPROCESS ".                        
    I've looked for OSS notes. I've tried to find a process to delete aggregates prior to loading/deletion of overlapping requests. In the end, I've had to manually intervene each time we execute the process chain, so I've got to resolve the issue.
    Do others have this problem? Are the aggregates supposed to be deleted prior to loading full packages which will require deletion of overlapping requests? I presume not since there doesn't seem to be a process for this. Am I missing something?
    We're using BW 3.3 SP 15 on Oracle 9.2.0.3.
    Thanks for your time and consideration!
    Doug Maltby

    Are the aggregates compressed after the roll-up? If you compress the aggregate completely, the request you are trying to delete is no longer identifiable once it is in the compressed E fact table (since compression throws away the request ID).
    So you need to change the aggregate so that the most recent requests remain in the uncompressed F fact table. Then the request deletion should work.
    I thought what was supposed to happen, if the aggregate was fully compressed and you then wanted to delete a request, is that the system would recognize the request was unavailable due to compression and would automatically refill the aggregate - but I'm not sure where I read that. Maybe it was a Note; maybe it doesn't happen in a process chain; I'm just not sure.
    The better solution, when you regularly back out a request, is simply not to fully compress the aggregate, letting it follow the compression of the base cube, which I'm assuming you have set to compress requests older than XX days.
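    Why request-based deletion only works while the request still sits in the uncompressed F fact table can be shown with a small sketch (hypothetical structures, not SAP code): once rows are compressed into request ID 0, there is nothing left to match the request ID against.

```python
def delete_request(f_table, req_id):
    """Request-based deletion: only possible while rows still carry their ID."""
    remaining = [row for row in f_table if row[0] != req_id]
    if len(remaining) == len(f_table):
        # no F-table row carries this ID: it was compressed into request 0
        raise ValueError(f"request {req_id} is already compressed; "
                         "selective deletion or aggregate rebuild is needed")
    return remaining

f_table = [(110, "20061201", "3333", 50)]   # still uncompressed
e_table = [(0, "20061201", "3333", 200)]    # request 109 was merged in here

f_table = delete_request(f_table, 110)      # works: the row is in the F table
# delete_request(f_table, 109) would raise ValueError: 109 now only exists
# as part of the anonymous request 0 in the E table.
```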

  • Problem in Process chain due to Aggregate Roll-up

    Hi,
    I have a Infocube with Aggregates built on it.  I have loaded data in the Infocube from 2000 to 2008, Rolled up & Compressed the aggregates for this.
    I have also loaded the 2009 data into the same InfoCube using Prior Month and Current Month infopackages, for which I am only rolling up the aggregates; no compression of aggregates is done. The current and prior month loads run through a process chain four times a day. The process chain is built in such a way that it deletes the overlapping requests when loading for the second/third/fourth time on a day.
    The problem is that when the overlapping requests are deleted, the process chain also takes the compressed aggregate requests (the 2000 to 2008 data), decompresses them, deactivates the aggregates, activates them again, and refills and compresses the aggregates once more. This makes the process chain run for nearly an hour, when it should take no more than 3 minutes.
    So, what could be done to tackle this problem? Any help would be highly appreciated.
    Thanks,
    Murali

    Hi all,
    Thanks for your reply.
    Arun: The problem with the solution you gave is that until I roll up the aggregates for the Current & Prior Month infopackages, the 'ready for reporting' symbol does not appear for the particular request.
    Thanks,
    Murali

  • Compress and rollup

    Hello,
    It seems that for non-cumulative InfoProviders (for example 0IC_C03), the order of the compress and roll-up processes is important:
    we need to compress before rolling up into the aggregates.
    However, in the process chain, if I try to compress before rolling up, the two processes end in error (RSMPC011 and RSMPC015).
    In the Management screen of the InfoProvider, 'Compress After Roll-up' is unchecked.
    Can you please tell me how I can do this?
    Thank you everybody.
    Best regards.
    Vanessa Roulier

    Hi
    You can use either option.
    With 'Compress After Roll-up' set, aggregates are compressed automatically following a successful roll-up. If you subsequently want to delete a request, you first need to deactivate all the aggregates,
    which is a very time-consuming process.
    If you compress only the aggregates first, then even if the InfoCube is compressed you can delete requests that have been rolled up but not yet compressed in the cube without any great difficulty.
    Just try checking that option and load; see if it works.
    Thanks
    Tripple k

  • Compress and Aggregates

    Hi experts,
    What is the difference between aggregates and compression?
    Can anybody give the steps for doing each?
    Thanks in advance.
    with regards..
    raghu

    Dear Raghu,
    Both, compression and aggregates, are used to increase reporting speed.
    To understand how compression works, you have to know BW's extended star schema. From a technical point of view, InfoCubes consist of fact tables and dimension tables. Fact tables store all your key figures; dimension tables tell the system which InfoObject values are used with the key figures. Every InfoCube has two fact tables, a so-called F table and an E table. The E table is an aggregation of the F table's records, with the request ID removed; therefore an E table normally has fewer records than an F table. When you load data into an InfoCube, it is first stored in the F table only. By compressing the InfoCube you update the E table and delete the corresponding records from the F table.
    Aggregates are, from a technical point of view, InfoCubes themselves. They are related to your "basis" InfoCube, but you have to define them manually. They consist of a subset of all the records in your InfoCube. In principle there are two ways to select the relevant records for an aggregate: either you include only some of the InfoObjects contained in your InfoCube, or you choose fixed values for certain InfoObjects. Like compression, updating aggregates is a task that takes place after the loading of your InfoCube.
    When a report runs BW automatically takes care of F- and E-tables and existing aggregates.
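    A minimal sketch of the idea (illustrative Python, not how BW stores aggregates internally): an aggregate is a pre-summarised copy of the cube that keeps only a subset of the characteristics, so a query on that subset reads far fewer rows.

```python
from collections import defaultdict

cube = [
    # (date, material, amount) -- rows of the "basis" InfoCube
    ("20061201", "3333", 125),
    ("20061202", "3333", 75),
    ("20061201", "4444", 40),
]

def build_aggregate(rows, key_index):
    """Pre-aggregate the cube over one kept characteristic."""
    agg = defaultdict(int)
    for row in rows:
        agg[row[key_index]] += row[-1]  # sum the key figure per group
    return dict(agg)

material_agg = build_aggregate(cube, key_index=1)
# A "total per material" query now reads 2 rows instead of 3.
print(material_agg)  # {'3333': 200, '4444': 40}
```

    On real data volumes the row reduction is what makes aggregates pay off; on this toy cube it is only 3 rows down to 2.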
    Further information and instructions can be found in the SAP Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/91/270f38b165400fe10000009b38f8cf/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    Greetings,
    Stefan

  • Compress Aggr manually

    HI all;
    I just want to roll up the aggregates in our process chain; in my cube the checkbox for compressing aggregates is unchecked, which is working fine, since I don't want the compression to happen automatically. On a weekly basis I want to go and compress some aggregates manually. How can I do this? I tried to put the check mark on compress under the Roll-up tab and give a request number that I want to compress, but it's not working...
    Any help would be awarded.
    Krishma.
    Message was edited by: Krishma Pandey

    Hi,
    You need to select the radio button for Request ID and then give the request number, then execute. That should work unless there are no requests left to compress. Also, if the job fails, see what the job log says it failed for.
    Cheers,
    Kedar

  • Does the InfoCube compression process lock the InfoCube?

    Hi all,
    First of all, thanks for your active support and co-operation.
    Does the compression process lock the cube? My doubt is: while the compression process is running on a cube, if I try to load data into the same cube, will that be allowed or not? Please reply as soon as you can.
    Many thanks in advance.
    Jagadeesh.

    hi,
    Compression removes the request IDs and moves the data of the InfoCube from the F fact table to the E fact table, aggregating identical records; this saves space and speeds up queries. It can be run through a process chain or manually. See the earlier thread on the difference between aggregates and compression for the full explanation and a worked example.
    Check this link:
    http://www.sap-img.com/business/infocube-compression.htm
    hope it helps..
