Roll Up if No Aggregates in Cube

Hi,
Do we need to roll up a cube if we have not used any aggregates?
Also, can we use compression even if aggregates are not there?
Thanks in advance.
Regards,
Priyanka

Activation is the process of generating SIDs. By definition, a cube has DIM IDs, and the dimension tables hold the SIDs.
Hence, when you load data into a cube the SIDs are generated; thus you do not need to activate data in a cube for reporting, as you do in a DSO.
Since a DSO is more of a staging layer that is not used for reporting, you have the option of skipping the SID generation step...
Arun

Similar Messages

  • Help! Creating an Aggregate for Cube 0IC_C03 Causes an Error

    When I create an aggregate for cube 0IC_C03 with a fixed value on characteristic '0PLANT', the following error occurs. What can I do?
    InfoCube 0IC_C03 contains non-cumulatives: Ref. char. 0PLANT not summarized
    Message no. RSDD430
    Diagnosis
    InfoCube 0IC_C03 contains key figures that display non-cumulative values. The non-cumulative values, however, can only be correctly calculated, if the aggregate contains all characteristic values of the reference characteristics (the characteristics for the time slice).
    System response
    The aggregation level was set to 'not aggregated' for the reference characteristics.

    Hi Shangfu,
    InfoCube 0IC_C03 contains non-cumulative key figures, which briefly means the values are aggregated at run time based on time characteristics.
    Include the time characteristic "0CALDAY" in the aggregate in addition to "0PLANT" to avoid this problem.
    There are many OSS Notes explaining the purpose and usage of Non-cumulative key figures.
    Cheers
    Bala Koppuravuri

  • Aggregates for cube

    HI,
    How should we go about creating aggregates for a cube?
    For example, when there is a large amount of data in the cube, or in other cases.
    Reg,
    Paiva

    Hi,
    An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form in the database.
    Advantages: It speeds up query performance, because a query first looks in the aggregate for its data; only if the data is not found there is it fetched from the cube.
    Disadvantages: We need to maintain the aggregates each time a data load is done, and the aggregates occupy space in the database.
    Whether or not to go for aggregates depends primarily on query performance. If query performance and response times are good, then aggregates are not needed.
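    The lookup order described above (aggregate first, then the cube) can be sketched conceptually. This is an illustrative Python sketch of the materialized-view idea, not actual BW code; the field names and data are invented.

```python
from collections import defaultdict

# Invented detail "cube" rows: (plant, calday, material, quantity)
cube_rows = [
    ("P100", "20080101", "M1", 10),
    ("P100", "20080102", "M1", 5),
    ("P200", "20080101", "M2", 7),
]

def build_aggregate(rows, group_by):
    """Materialize a summarized view of the cube, keyed by the
    chosen characteristics (a subset of the cube's fields)."""
    agg = defaultdict(int)
    for plant, calday, material, qty in rows:
        record = {"plant": plant, "calday": calday, "material": material}
        key = tuple(record[g] for g in group_by)
        agg[key] += qty
    return dict(agg)

# Aggregate summarized by plant only (stored redundantly, like an aggregate)
agg_by_plant = build_aggregate(cube_rows, ["plant"])

def query_total(plant):
    # The query looks in the aggregate first ...
    if (plant,) in agg_by_plant:
        return agg_by_plant[(plant,)]
    # ... and only falls back to scanning the detail rows if needed.
    return sum(q for p, _, _, q in cube_rows if p == plant)

print(query_total("P100"))  # 15
```

    The aggregate answers the query from far fewer rows; the trade-off, as noted above, is the extra storage and the need to refresh it after every load.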
    For more info refer:
    http://help.sap.com/saphelp_nw04/helpdata/EN/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    Check these links; they will give you an idea about aggregates:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/2299d290-0201-0010-1a8e-880c6d3d0ade
    Also refer to this thread for more info:
    https://www.sdn.sap.com/sdn/collaboration.sdn?contenttype=url&content=https%3A//forums.sdn.sap.com/topusers.jspa%3FforumID%3D132
    Re: Performance ....InfoCube and Aggregates
    -Shreya

  • Aggregates in CUBES

    Hi,
    I'm trying to create aggregates on cubes. I right-click and go to "Maintain Aggregates". When I choose the option for the system to generate them for me, a window pops up: "Specify Statistics Data Evaluation". What dates am I supposed to put in there? For "From" I am putting today's date; for "To" do I put 12/31/9999? What is this for?
    Also what can i do to improve a DSO's performance?
    Thanks

    Hi,
    Do you have secondary indexes on your ODS?
    As mentioned above, that would be the best way.
    I think you want to improve DSO performance in order to improve query performance. If so, a good way to proceed would be to base your reporting on InfoCubes, or on MultiProviders built on those InfoCubes.
    But this will involve a bit of development work: you will have to move the queries from the ODS to the cubes, deal with workbooks, etc.
    Then you can also create aggregates, because you cannot create aggregates on an ODS.
    Check OSS Note 444287.
    Using an ODS is a performance hit in terms of activation time and report execution time (tabular reporting). It is better to load the data from these ODS objects into individual cubes, create a MultiProvider on top of them, and report from the MultiProvider, since reporting from a multidimensional structure is faster than reporting from a flat table.
    Also check these links:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Debjani.......
    Edited by: Debjani  Mukherjee on Sep 26, 2008 8:07 AM

  • Aggregates for Cubes in Production

    Hi,
    My client has a few cubes in the production system whose performance they would like to improve. I was planning to build aggregates for them. Is it okay to build aggregates for cubes that are already in the production system? Do I have to do anything before I transport the aggregates into production (such as deleting all data from production)?
    Thanks

    Hi Dave,
    Depending on your PROD rights, you should be able to do it directly on the PROD box.
    In most cases you would start off by looking at the aggregate proposals from the system. Proposals based on DEV data will most likely not apply to PROD, as the one is test data and the other live data.
    Most clients apply the aggregates directly in PROD. You don't need to reload or delete data. Just fill the created aggregates manually before the batch runs at night, for safety's sake.
    And don't collapse yet! You can always delete the aggregates, but once the collapse is switched on as well, the compressed data cannot be altered or reverted afterwards.
    Martin

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
                            A. activating an aggregate?
                            B. switching an aggregate off/on?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and the free characteristics of that query, or is there anything else we need to include?
    5. Does the database statistics option in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If your aggregate status is red and you are filling up the aggregate, it is an initial fill; filling up means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding an aggregate means reloading the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate? What is the difference between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching an aggregate off/on?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; it is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading the data into the aggregate.
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and the free characteristics of that query, or is there anything else we need to include?
    Run the query in RSRT, look at the SQL view of the query, check the characteristics that are used, and then include the same ones in your aggregate.
    5. Does the database statistics option in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
    Up-to-date statistics improve the execution plans on the database, and hence possibly query performance, but it cannot be taken for granted that refreshing statistics will improve query performance.
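    The distinction between roll-up (apply only the new requests) and rebuild (drop and reload everything) can be sketched as follows. This is a simplified Python illustration with invented request IDs, not BW internals.

```python
# Each request is (request_id, amount); the cube accumulates requests.
# The aggregate keeps a running total plus a pointer to the last
# request it has rolled up.
cube_requests = []
aggregate = {"total": 0, "rolled_up_to": 0}

def load_request(req_id, amount):
    cube_requests.append((req_id, amount))

def roll_up():
    """Roll-up: apply only the requests newer than the pointer."""
    for req_id, amount in cube_requests:
        if req_id > aggregate["rolled_up_to"]:
            aggregate["total"] += amount
            aggregate["rolled_up_to"] = req_id

def rebuild():
    """Rebuild: drop the aggregate's data and reload it all from the cube."""
    aggregate["total"] = 0
    aggregate["rolled_up_to"] = 0
    roll_up()

load_request(1, 100)
load_request(2, 50)
roll_up()                  # aggregate now reflects requests 1 and 2
load_request(3, 25)
roll_up()                  # only request 3 is applied this time
print(aggregate["total"])  # 175
```

    This is why a roll-up is usually quick (it touches only new requests) while a rebuild can take hours on a large cube.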

  • Technical content: aggregate statistics cube info

    Hi Experts,
    Can anyone let me know which cube should be used for aggregate statistics in BI7 (0TCT*),
    similar to the statistics from cube 0BWTC_C04?
    Many Thanks,
    Neeraj.

    Hi ,
    Please refer to the below link,
    http://help.sap.com/saphelp_nw04/helpdata/en/72/e91c3b85e6e939e10000000a11402f/frameset.htm
    Hope it helps,
    Regards,
    Amit Kr.
    Edited by: Amit Kr on Oct 13, 2009 2:39 PM

  • Using Aggregate storage cubes to aggregate and populate DWHSE

    Has anyone ever used an ASO cube to aggregate data and put it into a data warehouse? We are exploring the option of using Essbase ASO cubes to aggregate data from a fact table into summary form, then loading the required data via dataexport in version 9.3.1 or 9.5, whichever version supports ASO and dataexport.

    Hi Whiterook72,
    Conventionally, in an enterprise, the flow is:
    heterogeneous data sources -> ETL -> warehouse -> Essbase/OLAP -> MIS and analyses
    That is, we have Essbase or an OLAP engine after the warehouse: a level of aggregation happens in the warehouse, and for the multidimensional view we push the data into OLAP.
    In your case, by contrast:
    heterogeneous data sources -> ETL -> Essbase/OLAP -> warehouse -> MIS and analyses
    You want to bring Essbase in before you load the data into the warehouse. This would make Essbase feed from the operational data sources, and there we have a little problem.
    Example: for a bank, operational data holds information at the customer level, i.e. you have individual customer names and their respective info such as addresses and transaction details.
    So, to feed this info into an Essbase cube (with the objective of aggregation), you would have to have millions of members (i.e. all customers) in your outline, which, as I see it, is not the objective of Essbase.
    Just my thoughts; hope they help you.
    Sandeep Reddy Enti
    HCC

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    I just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send function. However, it is very slow, especially when loading many lines of data; block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, so that not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so that only a single user is actually writing at a time. At least this is better than earlier versions, where other users couldn't even read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Compression in a Process Chain

    Hi all
    Can anyone tell me how to create a compression process in the process chain that loads this cube (and performs a roll-up of its aggregates), so that all requests older than 60 days are compressed?
    Thanks
    Kishore

    If you create a variant for the compression process chain step, you have to enter the InfoCube's name. At the bottom of that screen you can specify that only requests older than XXX days should be compressed.
    In your case XXX should be 60. Is this what you are looking for?
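    The "older than XXX days" selection amounts to a simple date cut-off. A hypothetical Python sketch of that filter (invented request IDs and dates, not the actual variant logic):

```python
from datetime import date, timedelta

def requests_to_compress(requests, today, older_than_days=60):
    """Return the IDs of requests whose load date lies more than
    `older_than_days` days before `today`."""
    cutoff = today - timedelta(days=older_than_days)
    return [req_id for req_id, load_date in requests if load_date < cutoff]

requests = [
    (1, date(2008, 6, 1)),   # old request -> eligible for compression
    (2, date(2008, 9, 20)),  # recent request -> stays uncompressed
]
print(requests_to_compress(requests, today=date(2008, 9, 26)))  # [1]
```

    With the variant set to 60, recent requests stay uncompressed (and deletable), while only sufficiently old ones are collapsed.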
    Greetings,
    Stefan

  • Creating aggregate based on plan cube

    Hi,
    For a standard basic cube, we can use aggregates to improve query performance,
    and when a new request comes in, it is ready for reading only after it has been rolled up into the aggregate.
    Now I have created an aggregate and done the initial fill-up for it; all the requests with green status have been rolled up into the aggregate. There is no problem with that.
    But I have one question: the most recent request usually has yellow status, because this cube is ready for input, and a yellow request cannot be rolled up into the aggregate until it turns green. Yet from our testing, the query can read the required data from both the aggregate and this yellow request; that is to say, a query based on this plan cube seems to summarize data from both the aggregate and the yellow request.
    Can anyone confirm that our testing is correct and that this is a specific property of the plan cube?
    Many Thanks
    Jonathan

    Hi Jonathan,
    The OLAP processor knows whether requests are already contained in an aggregate or not. Depending on the 'actual data' setting (cf. Note 1136163), the query is also able to read data from the yellow request; this is automatically the case for input-ready queries.
    In fact, even the OLAP cache may be involved in reading the data; cf. Note 1138864 for more details.
    Regards,
    Gregor

  • Is report data from the cube or from aggregates?

    Hello Friends,
    Can any one please tell.
    The data in the BW report comes from a cube with aggregates built on it. Data is loaded into the cube daily, but the aggregates were last rolled up a month ago; for the past month, no roll-up has taken place.
    When I run the report now, which data do I see: only the rolled-up data, which is a month old, or the freshly updated data in the cube?
    Please tell me in detail.
    Thanks in advance..
    Tony

    Hi Tony,
    If a request has not been rolled up, it is not available for reporting; check the roll-up flag in the Manage screen of the cube.
    So the data is coming from the aggregates, provided your query can access them (i.e. the drilldown data is available in the aggregates); otherwise it comes from the cube.
    But the data is kept in sync between the cube and the aggregates.
    You can also check RSRT ('display aggregates found') to see whether your query is accessing the aggregates or not.
    Let me know if you have more doubts.
    Gaurav
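    The visibility rule in this answer (with aggregates on the cube, a request becomes reportable only after roll-up) can be sketched as follows. A conceptual Python illustration with invented request IDs, not BW internals.

```python
def reportable_requests(requests, has_aggregates, rolled_up_ids):
    """With aggregates on the cube, a request is visible to reports
    only after it has been rolled up; without aggregates, every
    loaded request is visible immediately."""
    if not has_aggregates:
        return list(requests)
    return [r for r in requests if r in rolled_up_ids]

requests = [101, 102, 103]   # loaded over the past month
rolled_up = {101}            # roll-up stopped a month ago
print(reportable_requests(requests, True, rolled_up))   # [101]
print(reportable_requests(requests, False, set()))      # [101, 102, 103]
```

    So in Tony's scenario the report shows only the month-old, rolled-up data; the fresher requests stay invisible until the roll-up runs again.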

  • Problem in Process chain due to Aggregate Roll-up

    Hi,
    I have an InfoCube with aggregates built on it. I have loaded data into the InfoCube from 2000 to 2008, and rolled up and compressed the aggregates for this data.
    I have also loaded the 2009 data into the same InfoCube using Prior Month and Current Month InfoPackages, for which I only roll up the aggregates; no compression of the aggregates is done. The current and prior month loads run through a process chain four times a day. The process chain is built in such a way that it deletes the overlapping requests when it loads for the second/third/fourth time on a day.
    The problem is that when the overlapping requests are deleted, the process chain also takes the compressed aggregate requests (the 2000 to 2008 data), decompresses them, deactivates the aggregates, activates them again, and refills and compresses the aggregates once more. This takes nearly an hour, whereas the process chain should take no more than 3 minutes.
    So, what can be done to tackle this problem? Any help would be highly appreciated.
    Thanks,
    Murali

    Hi all,
    Thanks for your reply.
    Arun: The problem with the solution you gave is that until I roll up the aggregates for the Current & Prior Month InfoPackages, the 'Ready for Reporting' symbol does not appear for that particular request.
    Thanks,
    Murali

  • 'Request for reporting available' is not appearing in the cube

    Hi All,
    I have Cube & DSO.
    I added fields in the DSO and the cube.
    The cube has aggregates built on it, and I added 5 InfoObjects to it. Now when I load data from the DSO to the cube, 'Request for reporting available' does not come up, so I cannot report on it. Can anyone help?
    thanks in advance,
    Kiran.

    Hi,
    Have you done the roll-up?
    Since there are aggregates on that cube, until and unless you do the roll-up, that request will not be available for reporting.
    Regards,
    Debjani....

  • Simultaneous data activation in cube - request for reporting available

    Hi,
    I'm on BW 3.5.
    I am loading several million records to a cube, processing is to PSA first and subsequently to data target.
    I have broken the load up into 4 separate loads to prevent caches from filling up and causing huge performance issues.
    When I load all the data in a single load, it takes 10 hours to load.  When I break it up into 4 loads it takes 3 hours.
    My problem is that during the loading from PSA to data target, the first data load becomes green and ready for reporting before the last one has finished loading, and so the users get inaccurate report results if they happen to run a report before the last request activates.
    Is it possible to get all 4 requests to activate simultaneously?
    I have tried adding an aggregate to the cube, no good.
    I have tried loading the 4 loads to the PSA in sequential order in the process chain, and then loading from PSA to data target simultaneously (side by side), no good.
    Does anyone have a solution?
    Many thanks,
    Paul White

    Hi,
    Have you done the roll-up?
    Since there are aggregates on that cube, until and unless you do the roll-up, that request will not be available for reporting.
    Regards,
    Debjani....
