Compressing Aggregates in Historical Cubes

Hi,
  I have a historical cube to which we no longer load data. How do I compress the aggregates built on it? The cube itself is completely compressed, but the daily change run has inserted some records into the F table of the aggregates.
I used to know the answer to this but can't recall it now. Can somebody answer before I find it out myself?
Thanks

Wow, all three the same answer...
But how can I roll up, since I don't have any new requests to roll up? I thought that checkbox compresses the aggregates only after a roll-up, and I don't have any new requests to roll up since we no longer load the cube.
Also, how will that setting compress after the change run? The change run is exactly why I am getting entries in the F table of the aggregates.
Any more answers?
thanks

Similar Messages

  • How to compress aggregate only

    Hi
    we have a process chain that failed because of ODS activation. I activated the ODS manually; this ODS updates two cubes further.
    Both cubes have the setting "Compress After Rollup", as we only compress the aggregates of those cubes.
    After the further update, the data went from the ODS to the cubes, and I then started the aggregate roll-up manually.
    For one cube, roll-up and aggregate compression were both successful.
    For the other cube, the roll-up was successful but the aggregate compression failed.
    Could you please let me know how to compress the aggregates of the other cube manually?
    Waiting for a reply.

    The only "standard" options are then:
    - delete, reconstruct and roll up this request, making sure the compress indicator is SET;
    - reactivate and refill this aggregate (also make sure the compress indicator is SET); whether this is feasible depends of course on how big your InfoCube is.
    Another option would be to call the function module RSDDK_AGGREGATE_CONDENSE; however, it works only under certain circumstances and is quite tricky. I recommend staying with the standard options.
    Should you decide to go with the FM, please try it first in a DEV system.
    Hope this helps,
    Olivier.

  • Can we create aggregates on the cube which has been compressed

    Hello Gurus,
    Can we create aggregates on a cube that has already been compressed? We have an AP cube with 60 million records. If we compress it now, can I still create aggregates on that cube going forward, and how will performance be affected?
    Answers will be rewarded.

    Hi,
    This is a tablespace problem; it is better to contact the Basis team. Since the number of records is huge, that is where the problem arises, and they can solve the issue.
    Anyway, check this:
    http://help.sap.com/saphelp_46c/helpdata/en/2e/d9507794f911d283d40000e829fbbd/content.htm
    Regards,
    Debjani

  • Compress aggregates via a process chain

    I am rolling up aggregates without compressing them. I would like to do this in a process chain for anything older than 10 days. Can anyone advise on how I can do this?
    Thanks

    S B Deodhar wrote:
    > Hi,
    > Thanks for the input.
    >
    > My scenario is this:
    >
    > We have had to drop and reload the contents of a cube, because it was quicker than dropping specific requests that had been rolled up and compressed.
    I guess you can't delete a request that is already compressed; the system doesn't allow you to delete such requests. For a problematic request you can do a selective deletion if required.
    >
    > So what I would like to know is as follows:
    >
    > 1. Can I only compress an aggregate at the same time as I carry out a roll-up, i.e. if I do not check the compress flag for a request at the time I roll up, am I unable to compress that request later?
    > 2. If I choose to compress data in the cube, does that affect only the cube, or will it also compress the corresponding requests in aggregates that are not yet compressed?
    If an InfoCube is compressed, keeping the aggregates uncompressed won't help you, because the request ID is lost.
    You can also try Collapse -> select the radio button "Calculate Request IDs" -> compress only those requests that are older than a certain number of days. Please note that for the request you select, all requests below it are compressed as well.
    Hope it helps.
    Regards,
    laksh

  • Compression status of all cubes at a time

    Hi Experts ,
    How can I check the compression status of each cube without going into the Manage screen?
    Regards,
    Puru

    Hi Sven,
    Maybe this is a helpful answer from Srini:
    Open two screens. In table RSMDATASTATE, get the SID from the field COMPR; this is the last compressed request in the cube.
    Then in table RSICCOMP, enter the cube name and the SID (from RSMDATASTATE), choose "greater than", and click on "Number of Entries". This shows the number of requests that are not yet compressed.
    Hi Srini, thank you very much for your help.
    Regards,
    Puru
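The check described above can be sketched as a small model: compare each request SID against the last-compressed SID. This is only a conceptual illustration in Python (the table and field names come from the post; the SID values are invented, not real system data):

```python
# Conceptual model: RSMDATASTATE-COMPR holds the SID of the last request
# compressed into the cube; RSICCOMP lists the rolled-up request SIDs.
# Requests with a SID greater than COMPR are not yet compressed.

def uncompressed_requests(last_compressed_sid, request_sids):
    """Return the request SIDs newer than the last compressed request."""
    return sorted(sid for sid in request_sids if sid > last_compressed_sid)

# Illustrative values only:
rsmdatastate_compr = 100                  # last compressed request SID
rsiccomp_sids = [98, 99, 100, 101, 102]   # request SIDs for this cube

pending = uncompressed_requests(rsmdatastate_compr, rsiccomp_sids)
print(pending)       # -> [101, 102]
print(len(pending))  # -> 2 requests not yet compressed
```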

  • Problems of using of Aggregates for Transactional Cube

    Hi,
    Are there problems or disadvantages in using aggregates for a transactional cube?
    Tx.
    AndyML

    Hi,
    have a look at SAP's docu: http://help.sap.com/saphelp_nw04/helpdata/en/c0/99663b3e916a78e10000000a11402f/frameset.htm
    /manfred

  • Creating aggregates on partitioned Cube?

    Hello Everyone,
    We have a cube partitioned by Fiscal year/period. Will creating aggregates help at all after the partitioning is done? Essentially, do you create aggregates if a cube is partitioned, or is it either partitioning or aggregates?
    Thanks

    Hi,
    These two concepts are independent: you can create an aggregate or not.
    - If your aggregate contains 0FISCPER, it will be partitioned as well.
    - You can create an aggregate without 0FISCPER; it won't be partitioned.
    Of course it makes sense to create aggregates on partitioned cubes, and of course it will help.
    Imagine an aggregate with 0FISCYEAR compared to the same aggregate with 0FISCPER: it will have potentially 12 times fewer records (if your fiscal year has 12 periods), so queries using it will be definitely faster.
    In addition, you can create an aggregate for a particular fiscal year (e.g. 2007).
    Please note that partitioning will only help if your cube is collapsed; partitioning only affects the E fact table.
    hope this helps....
    Olivier.

  • Do I need to run "Change run" if I have no aggregates in the cubes

    Hello
    Do I need to run "Change run" if I have no aggregates in the cubes?
    If yes, why?
    thanks

    Hi there,
    As said, the change run activates the modified master data records of a characteristic.
    If that characteristic is used in any aggregate, the change run also rearranges (adjusts) the aggregates.
    Diogo.
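Why the change run has to touch aggregates can be sketched in a few lines. This is a conceptual Python model only (the materials, groups and amounts are invented), not SAP code: an aggregate grouped by a navigational attribute stores totals that depend on the current attribute values, so when master data changes, those totals go stale until the change run rebuilds them.

```python
from collections import defaultdict

def build_aggregate(fact_rows, material_to_group):
    """Aggregate fact rows by the material's (navigational) group attribute."""
    agg = defaultdict(float)
    for material, amount in fact_rows:
        agg[material_to_group[material]] += amount
    return dict(agg)

facts = [("M1", 10.0), ("M2", 5.0), ("M3", 7.0)]
groups = {"M1": "A", "M2": "A", "M3": "B"}
print(build_aggregate(facts, groups))  # -> {'A': 15.0, 'B': 7.0}

# Master data change: M2 moves from group A to group B. Until the change
# run rebuilds the aggregate, its stored totals no longer match the
# active master data.
groups["M2"] = "B"
print(build_aggregate(facts, groups))  # -> {'A': 10.0, 'B': 12.0}
```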

  • When do we compress the request in cube. ?

    Hello all,
    We have been live for a few months now. After how many days/weeks/months should we compress the requests in a cube, and how often do we have to compress the data?
    I know that once a request is compressed we cannot delete the data by request, but if there is an issue and we need to delete data completely from the cube, that is still possible, right?
    Thanks in advance

    How often compression needs to be done depends on the volume of data loaded every day. Unless the daily volume of records is unusually high, compressing once a week should be good enough.
    You are also right about losing the ability to delete by request in the InfoCube after compressing. But you don't necessarily have to delete the whole cube in case of issues; there is always the option of a selective deletion.
    Next, when doing the compression you have the option of specifying a date range. Choose the option of compressing only requests older than, say, 4 days (arbitrary) to keep a buffer for deleting a request in case of issues.
    Hope this helps!
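The "older than N days" buffer described above amounts to a simple date filter. A minimal sketch in Python (the request IDs, dates and the 4-day buffer are illustrative, not from a real system):

```python
from datetime import date, timedelta

def requests_to_compress(requests, today, buffer_days=4):
    """requests: list of (request_id, load_date).
    Select only requests older than the buffer, so the newest
    requests stay deletable by request ID."""
    cutoff = today - timedelta(days=buffer_days)
    return [rid for rid, load_date in requests if load_date < cutoff]

today = date(2009, 8, 7)
requests = [
    (101, date(2009, 8, 1)),
    (102, date(2009, 8, 4)),
    (103, date(2009, 8, 6)),
]
print(requests_to_compress(requests, today))  # -> [101]; 102 and 103 stay deletable
```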

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube receives a delta load from InfoSource 1 and a full load from InfoSource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management screen of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so compression should take place only when the cube data is compressed (but I don't know whether this influences how the roll-up is done via the Adjust process type in a process chain: will the deselected checkbox really avoid compression of the aggregates after roll-up, or does the checkbox influence the manual start of the roll-up only?).
    Nevertheless, let's assume here that the aggregates will not be compressed until compression runs on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the full load by request ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during the next process chain run, in the Delete Overlapping Requests step, if a new full load with RNR 111 has already been loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way as the deletion of similar requests (based on the InfoPackage setup) when it runs on non-compressed requests. Since the newest 10 requests are not compressed and the only overlapping request is the full load with RNR 111, I assume it should simply delete the RNR 110 data from the aggregates by request ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates. Am I right? Please confirm or deny, thanks! If Delete Overlapping Requests would still lead to rebuilding the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear.
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I get your question correctly:
    The "Compress After Roll-up" option applies to the aggregates of the cube, not to the cube itself.
    When it is selected, the aggregates are compressed if and only if a roll-up has been done on them; this does not affect the compression of the cube itself, i.e. moving the data from the F to the E fact table.
    If it is deselected, that also does not affect the compression of the cube, but then the roll-up status of the aggregates is not checked before the aggregates are compressed.
    Will the deselected checkbox really avoid compression of the aggregates after roll-up, or does the checkbox influence the manual start of the roll-up only?
    This checkbox has no influence even on a manual start of the roll-up; compression of the aggregates does not start automatically after the roll-up, it has to happen together with the compression of the cube itself.
    As for the second question: I guess the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up.
    The same happens with manual deletion: if you need to delete a request that has been rolled up and whose aggregates are compressed, you have to deactivate the aggregates and refill them.
    In detail: as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request, and we can delete it without deactivating the aggregates.
    In your case, I guess there is no need to remove the step from the chain.
    Correct me if you find any issue.
    Regards,

  • How to delete the rollup and compress request from the cube

    Hi Experts,
    I have a requirement: one request was updated into the cube, rolled up, and then compressed. As the request is compressed, we cannot delete it by request ID.
    So it should be possible with a selective deletion. Before performing the selective deletion, do I need to deactivate the aggregates?
    Help me out on this.
    Regards
    Prasad

    Hi,
    You have to deactivate the aggregates, because a deletion by request is only possible for uncompressed requests, i.e. from the F fact table; after compression the data moves from the F fact table to the E fact table and can no longer be deleted by request. So first deactivate the aggregates, then do the selective deletion, then activate the aggregates again and start the aggregate filling job manually in RSA1. However, I would suggest deleting the request instead, because you have to deactivate the aggregates anyhow, and with selective deletion you have to be very precise. If your load does not take much time, delete the request and repeat the load.
    Regards,
    Debjani

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress the cube and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    Data is rolled up to the aggregates request by request. So once the data is loaded, the request is rolled up to the aggregates to fill them with the new data; after compression the request is no longer available.
    Whenever you load data, you do a roll-up to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data has been rolled up into the aggregates before the compression is done.
    hope this helps
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM
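The ordering constraint described above can be stated as a one-line invariant. A minimal conceptual sketch in Python (not SAP code; the request IDs are invented): a request may only be compressed into the cube once it has been rolled up into the aggregates, because the request ID is dropped on compression and the aggregates could never receive that request afterwards.

```python
def can_compress(request, rolled_up_requests, cube_has_aggregates):
    """A request is compressible if the cube has no aggregates,
    or if the request has already been rolled up into them."""
    return (not cube_has_aggregates) or (request in rolled_up_requests)

rolled_up = {101, 102}
print(can_compress(101, rolled_up, cube_has_aggregates=True))   # -> True
print(can_compress(103, rolled_up, cube_has_aggregates=True))   # -> False: roll up first
print(can_compress(103, rolled_up, cube_has_aggregates=False))  # -> True
```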

  • Use of compression & Aggregates

    Hi All,
    Please explain the appropriate use of compression and aggregates.
    1. What is the use of compression, and how does it relate to aggregates?
    2. How can we delete data after compression?
    3. How do aggregates improve performance?
    Regards
    Dude

    Hi Dude,
    1. What is the use of compression, and how does it relate to aggregates?
    Ans: As we keep loading data into a data target, the number of requests in it grows. This consumes more database space and also affects query performance. To avoid these problems it is recommended to compress the requests.
    When we compress the requests, all the requests disappear from the cube and are shown as one request; all the requests are collapsed into one.
    2. How can we delete data after compression?
    Ans: When we compress the requests, the data is moved from the F fact table to the E fact table.
    Once a request is compressed, we can no longer delete or modify it. So before compressing we have to make sure that no modifications are necessary. If we need to delete data after compression, we can do so with a selective deletion, which we can perform from the Manage screen of the cube.
    3. How do aggregates improve performance?
    Ans: Data loaded into a cube is kept in the F fact table, and we create aggregates to improve query performance; a roll-up loads the newly loaded requests into the aggregates. Compression then moves the data from the F fact table to the E fact table, where it is stored in a more organized format with a proper indexing mechanism, which improves query performance.
    I think it is clear. Please go through the links below, which may help you:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/c5/40813b680c250fe10000000a114084/frameset.htm
    Regards,
    Ramki.
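The F-to-E move described above can be modeled in a few lines. This is a conceptual Python sketch only (the dimension keys and amounts are invented, not real fact-table content): F-table rows carry a request ID; compression sums them into the E table with the request ID dropped, leaving one row per dimension key.

```python
from collections import defaultdict

def compress(f_table):
    """f_table: list of (request_id, dim_key, amount) rows.
    Returns the E table: one summed row per dimension key."""
    e_table = defaultdict(float)
    for _request_id, dim_key, amount in f_table:
        e_table[dim_key] += amount  # the request ID is lost here
    return dict(e_table)

f_rows = [
    (101, ("2009", "M1"), 10.0),
    (102, ("2009", "M1"), 5.0),   # same dim key, different request
    (102, ("2009", "M2"), 7.0),
]
print(compress(f_rows))
# -> {('2009', 'M1'): 15.0, ('2009', 'M2'): 7.0}
```

This also shows why deletion by request ID becomes impossible after compression: the merged E-table rows no longer record which request contributed what.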

  • Bex reprort is not fetching data from Aggregates scanning whole cube data

    Hi All,
    I am facing a problem with aggregates.
    For example, when I run a report using transaction RSRT2, the BW report is not fetching the data from the aggregate; instead of going to the aggregate it scans the whole cube data.
    FYI, I checked that the characteristics exactly match the aggregate,
    and it gives the message:
    Characteristic 0G_CWWPTY is compressed but is not in the aggregate/query
    Can somebody explain this error message? Please let me know the solution ASAP.
    Thankyou in advance.
    With regards,
    Hari

    Hi,
    Let me start from the basics.
    1) Check that the aggregates are active.
    2) Check whether the roll-up checkbox in the InfoCube Manage screen is checked.
    Hope this helps.
    Assign points if useful
    Regards,
    venkat

  • Compression in a Planning cube

    I was trying to compress 2009-2010 data in a planning cube; when I collapsed the request, the job got scheduled but the compression didn't happen.
    When I look at the log of the batch job, it says:
    No requests needing to be aggregated have been found in InfoCube DE_RBQTPL
    Compression not necessary; no requests found for compressing
    Job finished
    Can you please help me in resolving this issue.
    Thanks,
    Sravani

    Please check that all the requests have loaded properly into the InfoCube.
    Is there any yellow request in the cube?
    Then wait for some time and do the compression.
    Thanks,
    Saveen
