Aggregates in CUBES

Hi,
I'm trying to create aggregates on cubes. I right-click and go under "Maintain Aggregates". When I choose the option for the system to generate them for me, a window pops up titled "Specify Statistics Data Evaluation". What dates am I supposed to put in there? For "From" I am putting today's date, and for "To" do I put 12/31/9999? What is this screen for?
Also, what can I do to improve a DSO's performance?
Thanks

Hi,
Do you have secondary indexes on your ODS? As mentioned above, that would be the best way.
I think you want to improve DSO performance in order to improve query performance. If so, a good way to proceed would be to base your reporting on InfoCubes, or on MultiProviders built on those InfoCubes. But this will be a bit of development work: you will have to move the queries from the ODS to the cubes, deal with workbooks, etc.
Then you can also create aggregates, because you cannot create aggregates on an ODS.
Check OSS Note 444287.
Using an ODS is a performance hit in terms of activation time and report execution time (tabular reporting). It is better to load the data from these ODS objects into individual cubes, create a MultiProvider on top of them, and report from the MultiProvider, since reporting from a multidimensional structure is faster than reporting from a flat table.
Also check these links:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Debjani
Edited by: Debjani  Mukherjee on Sep 26, 2008 8:07 AM

Similar Messages

  • Help! Creating Aggregates For Cube 0IC_C03 Causes an Error

    When I create an aggregate for cube 0IC_C03 with a fixed value on characteristic 0PLANT, the following error occurs. What can I do?
    'acna': InfoCube '0IC_C03' contains non-cumulatives: Ref. char. 0PLANT not summarized
    Message no. RSDD430
    Diagnosis
    InfoCube 0IC_C03 contains key figures that display non-cumulative values. The non-cumulative values, however, can only be correctly calculated, if the aggregate contains all characteristic values of the reference characteristics (the characteristics for the time slice).
    System response
    The aggregation level was set to 'not aggregated' for the reference characteristics.

    Hi Shangfu,
    InfoCube 0IC_C03 contains non-cumulative key figures, which means, briefly, that the values are aggregated at runtime based on the time characteristics.
    Include the time characteristic 0CALDAY in the aggregate, in addition to 0PLANT, to avoid this problem.
    There are many OSS Notes explaining the purpose and usage of Non-cumulative key figures.
    Cheers
    Bala Koppuravuri
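    To illustrate the point above, here is a toy Python sketch (not SAP code; the function name and data are invented for illustration) of how a non-cumulative key figure such as stock is derived at query time from a reference (marker) value plus the movements per 0CALDAY. This is why the aggregate must keep every value of the time-slice reference characteristic:

```python
# Toy illustration (not SAP code): a non-cumulative key figure such as
# stock quantity is not stored per day; it is derived at query time from
# a reference (marker) value plus the inventory movements (deltas) that
# occurred after the requested date.

def stock_on_date(marker: float, movements: dict, date: str) -> float:
    """Reconstruct stock on `date` by backing out later movements.

    marker    -- current stock level (the reference point)
    movements -- {calday: delta} inventory changes
    date      -- day for which stock is requested (ISO string)
    """
    later = sum(d for day, d in movements.items() if day > date)
    return marker - later

# Marker says 100 units today; movements keyed by 0CALDAY:
moves = {"2008-09-01": +30, "2008-09-10": -10, "2008-09-20": +5}
print(stock_on_date(100, moves, "2008-09-05"))  # 100 - (-10 + 5) = 105
```

    If any day were missing from the stored deltas, the reconstruction would be wrong, which is why the aggregate cannot summarize away the reference characteristic's values.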

  • Aggregates for cube

    Hi,
    How can we go ahead with creating aggregates for a cube?
    For example, when there is a large amount of data in the cube, or in other cases.
    Reg,
    Paiva

    Hi,
    An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form into the database.
    Advantages: it speeds up query performance, because an executed query first looks for the data in the aggregate; only if the data is not found there is it fetched from the cube.
    Disadvantages: we need to maintain the aggregates each time a data load is done, and the aggregates occupy space on the database.
    Whether or not to go for aggregates primarily depends on query performance. If the query performance and response time are already good, then aggregates are not needed.
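    As a toy sketch (plain Python, not BW internals; all names and data are invented), the idea of an aggregate as a materialized, pre-summarized view that the query checks first can be pictured like this:

```python
# Toy sketch (not BW internals): an aggregate is a pre-summarized copy of
# the cube's data. A query is served from the small aggregate when it
# covers the requested characteristics, instead of summarizing the
# (much larger) detail data in the cube.

from collections import defaultdict

cube = [  # detail records: (customer, product, calmonth, amount)
    ("C1", "P1", "2008-01", 10.0),
    ("C1", "P2", "2008-01", 5.0),
    ("C2", "P1", "2008-02", 7.0),
]

def build_aggregate(records, keys):
    """Materialize a summarized view grouped by the given key columns."""
    cols = {"customer": 0, "product": 1, "calmonth": 2}
    agg = defaultdict(float)
    for rec in records:
        agg[tuple(rec[cols[k]] for k in keys)] += rec[3]
    return dict(agg)

# Aggregate maintained on (customer, calmonth) only.
aggregate = build_aggregate(cube, ("customer", "calmonth"))

def query_total(customer):
    # Served from the small aggregate instead of scanning the cube.
    return sum(v for (cust, _), v in aggregate.items() if cust == customer)

print(query_total("C1"))  # 15.0
```

    The trade-off described above is visible here: the aggregate answers faster, but it is redundant data that has to be refreshed after each load.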
    For more info refer:
    http://help.sap.com/saphelp_nw04/helpdata/EN/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    Check these links; they will give you an idea about aggregates:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/2299d290-0201-0010-1a8e-880c6d3d0ade
    Also refer to this thread for more info:
    https://www.sdn.sap.com/sdn/collaboration.sdn?contenttype=url&content=https%3A//forums.sdn.sap.com/topusers.jspa%3FforumID%3D132
    Re: Performance ....InfoCube and Aggregates
    -Shreya

  • Aggregates for Cubes in Production

    Hi,
    My client has a few cubes in the production system that they would like to improve performance in. I was planing to build aggregates for them. Is it okay to build aggregates for Cubes already in the Production System, do i have to do anything before i transport the aggregates in the Production system (such as delete all data from Production)?
    Thanks

    Hi Dave,
    Depending on your Prod authorizations, you should be able to do it directly on the PROD box.
    In most cases you would start off by looking at aggregate proposals from the system. Proposals based on DEV data will most likely not apply to PROD, as the one is test data and the other live data.
    Most clients apply aggregates directly in PROD. You do not need to reload or delete data. Just fill the created aggregates manually before the batch runs at night, for safety's sake.
    And do not collapse yet! You can always delete the aggregates, but once the collapse is switched on as well, the compressed data cannot be altered or reverted afterwards if required.
    Martin

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
                            A. activating an aggregate?
                            B. switching an aggregate off/on?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and free characteristics of that query, or is there anything else we need to include?
    5. Does 'database statistics' in the 'Manage' tab of a cube only show statistics, or does it also do something to improve the load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If the aggregate status is red and the aggregate is being filled up, it is an initial fill of the aggregate: filling up means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding an aggregate means reloading the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, which in turn means dropping the data and reloading it.
    B. switching an aggregate off/on?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; it is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading the data into the aggregate.
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and free characteristics of that query, or is there anything else we need to include?
    Run the query in RSRT and do an SQL view of the query, check the characteristics that are used in the query, and include those in your aggregate.
    5. Does 'database statistics' in the 'Manage' tab of a cube only show statistics, or does it also do something to improve the load/query performance on the cube?
    Updated statistics improve the execution plans on the database. Making sure that statistics are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing statistics will improve query performance.
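    The rollup bookkeeping described above can be pictured with a toy Python sketch (not BW code; the class and names are invented): only the requests that have not yet been rolled up are applied to the aggregate, and a switched-off aggregate still participates in rollup but is skipped by the query engine:

```python
# Toy sketch (not BW code) of the rollup bookkeeping described above:
# each load into the cube creates a request; rollup applies only the
# requests that have not yet been rolled up into the aggregate, rather
# than refilling the whole aggregate.

class Aggregate:
    def __init__(self):
        self.total = 0.0
        self.rolled_up = set()   # request IDs already in the aggregate
        self.active = True       # switched off => OLAP processor skips it

    def rollup(self, requests):
        """requests: {request_id: amount loaded into the cube}"""
        for req_id, amount in requests.items():
            if req_id not in self.rolled_up:   # only new requests
                self.total += amount
                self.rolled_up.add(req_id)

    def usable_by_olap(self):
        return self.active  # switched-off aggregates still roll up

agg = Aggregate()
agg.rollup({"REQ1": 100.0, "REQ2": 50.0})
agg.rollup({"REQ1": 100.0, "REQ2": 50.0, "REQ3": 25.0})  # only REQ3 applied
print(agg.total)  # 175.0
```

    Rebuilding, by contrast, would correspond to clearing `total` and `rolled_up` and reloading everything from the cube.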

  • Technical content : aggregate statistics cube info

    Hi Experts,
    Can anyone let me know which cube should be used for aggregate statistics in BI7 (0TCT*)?
    I am looking for something similar to the statistics from cube 0BWTC_C04.
    Many Thanks,
    Neeraj.

    Hi ,
    Please refer to the below link,
    http://help.sap.com/saphelp_nw04/helpdata/en/72/e91c3b85e6e939e10000000a11402f/frameset.htm
    Hope it helps,
    Regards,
    Amit Kr.
    Edited by: Amit Kr on Oct 13, 2009 2:39 PM

  • Using Aggregate storage cubes to aggregate and populate DWHSE

    Has anyone ever used an ASO cube to aggregate data and put it into a data warehouse? We are exploring the option of using Essbase ASO cubes to aggregate data from a fact table into summary form, then loading the required data via dataexport, in version 9.3.1 or 9.5 --
    whichever version supports both ASO and dataexport.

    Hi Whiterook72,
    Heterogenous data sources -> ETL -> warehouse -> essbase/OLAP -> MIS and analyses
    Conventionally, in an enterprise, Essbase (or another OLAP engine) sits after the warehouse: a level of aggregation happens in the warehouse, and for the multidimensional view we push the data into OLAP.
    Contrariwise, in your case:
    Heterogenous data sources -> ETL -> essbase/olap -> warehouse -> MIS and analyses
    you want to bring Essbase in before you load the data into the warehouse. This would make Essbase feed from the operational data sources, and that is where we have a little problem.
    For example, for a bank, the operational data holds information at the customer level, i.e. you have individual customer names and their respective information such as address, transaction details, and so on.
    So, to feed this information into an Essbase cube (with the objective of aggregation), you would have to have millions of members (i.e. all customers) in your outline.
    That, as I see it, is not what Essbase is meant for.
    Just my thoughts , hope they help you
    Sandeep Reddy Enti
    HCC

  • Roll Up if No Aggregates in Cube

    Hi,
    Do we need to Roll Up on a cube if we have not used any aggregates?
    Also, can we use compression even if aggregates are not there?
    Thanks in advance.
    Regards,
    Priyanka

    Activation is the process of generating SIDs. By design, a cube's fact table holds DIM IDs, and the dimension tables point to the SIDs.
    Hence, when you load data into a cube, the SIDs are generated during the load itself; thus you do not need to activate data in a cube for reporting, as you do in a DSO.
    Since a DSO is more of a staging layer that is not used for reporting, you have the option of skipping the SID-generation step there.
    Arun
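    A toy illustration in Python (not the actual BW implementation; the tables and names are invented) of the point above: SIDs are surrogate keys generated on the fly while loading the cube, so no separate activation step is needed afterwards.

```python
# Toy illustration (not the actual BW implementation): SIDs are surrogate
# integer keys for characteristic values. When data is loaded into a cube,
# any value without a SID gets one on the fly, so no separate activation
# step is needed afterwards.

sid_table = {}          # characteristic value -> SID
fact_rows = []          # cube fact rows keyed by SIDs

def get_or_create_sid(value):
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1   # next surrogate key
    return sid_table[value]

def load_into_cube(records):
    """records: (material, plant, quantity) tuples from the source."""
    for material, plant, qty in records:
        fact_rows.append((get_or_create_sid(material),
                          get_or_create_sid(plant), qty))

load_into_cube([("MAT1", "PL01", 5), ("MAT2", "PL01", 3)])
print(sid_table)   # {'MAT1': 1, 'PL01': 2, 'MAT2': 3}
print(fact_rows)   # [(1, 2, 5), (3, 2, 3)]
```

    In a DSO used purely for staging, the analogous step can be skipped because nothing reports on it.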

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    I just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send. However, it is very slow, especially when loading many lines of data; block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database, or is this an architectural issue about which not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users could not even read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • No new data in cube after a incoming maintenance job

    Hi,
    I thought I had read about this topic before, so I tried searching the forum but I didn't find any relevant threads.
    Anyways,
    I have a cube(partitioned along month) that I want to add new data to every day. I maintain the cube using AWM and accept all the default values for the maintenance job except for in the "cube data processing options" where I select "aggregate the cube for only the incoming data values".
    But after I do this, none of the new data is shown.
    I have mapped the cube to a view where I only keep the data from the last couple of days, to limit the load. When I do a select count against the view I get 150961 rows, and in the xml_load_log I see: Processed 150961 Records. Rejected 0 Records.
    Some of the data in the view already exists in the cube and some is new, and it is the new data that is missing.
    I used OX to check the prt_topvar, and the data isn't there either.
    When I do the maintenance it takes about 15-20 minutes so everything seems normal.
    Anyone have any ideas what might be wrong? any suggestions would be appreciated.
    regards Ragnar
    edit: I just did a similar maintenance, but this time I limited the view to only new values, and it took just 15 seconds from when it started the load of measures until it was done with the solve.
    Message was edited by:
    rhaug

    Once again, thank you for your reply.
    Regarding the 10.2.0.3A patch: unfortunately the only access I have to the server is with developer tools like AWM, OWB, SQL*Plus, etc., so I'm not able to check what is located in the DB home (I guess this is where the executable objects would be). Whenever support wants me to apply a new patch I have to ask my DBA, but he is on vacation at the moment, which makes it a bit more difficult. Hopefully he still has the log file and I can check it once he gets back. So for now I'm not doing anything more about checking whether the patch is properly installed.
    Regarding maintenance of incoming values: I found a nice workaround for this one.
    To describe a little more how it was when it wouldn't work: the cube is partitioned along the months of the time dimension, and for every new day I would add that leaf in the time dimension as well. If I created a report today, the highest date I would see would be Wed July 18. After another maintenance of incoming values tomorrow, the highest value would be Thu July 19, and so on. The logic in the source views for this works perfectly for me, but could easily be a source of error as well if you are not 100% sure how it behaves for you.
    What I had to do in order to make the cube load the incoming values: on my first initial load of the cube, I loaded every day up until 31-12-2007. After I did this, maintenance of incoming data works fine, and I suspect it will throughout the year. This might be the way it should be; I'm not sure. It seems you have to include all the days of the partition when you do a full aggregation in order for the maintenance of incoming values to pick them up. What I noticed was that it would update days that were already partially loaded, but not days that were added by the incoming maintenance.
    Hope this made some sense to read.
    regards Ragnar

  • Moving data to archive cube

    Hi Experts,
    We are extracting data from R/3 to BW. We keep two years of data in DSO1 and move it into Cube1; we have 2007 and 2008 data in the DSO and the cube. We extracted 1999 to 2006 data from R/3 into a history DSO (DSO2) and sent it on to an InfoCube (Cube2). The flow is as follows:
    Current data (2007 and 2008):
    R/3 ---> DSO1 ---> Cube1 (deltas are running)
    History data (1999 to 2006):
    R/3 ---> DSO2 ---> Cube2
    Now I want to move the 2007 data into the history data (history DSO and cube).
    I have two options to get this job done:
    1. Move selective data from DSO1 to DSO2, and from DSO2 to Cube2.
    2. Move selective data from Cube1 to Cube2. If so, I can't see item-wise data; I can only see aggregated data in the cube.
    Is there any better approach than these two options? If not, which of the two would be best?
    Once I move the data into the history cube, I need to delete it from the current DSO and current cube via selective deletion. If I delete the data in the DSO and cube, is there any impact on the current delta load?
    I need to do this every year, because we want to keep only two years of data in the current-data InfoCube. Could anyone throw some light on the above?
    Thansks,
    Rani.

    Hi Rani,
    Your first question: is there a better approach than (1) moving selective data from DSO1 to DSO2 and on to Cube2, or (2) moving selective data from Cube1 to Cube2, and if not, which of the two is best?
    ANS: One thing to clarify first: data in an InfoCube does not get aggregated until you aggregate (compress) the cube, so you can still see the data item-wise in the cube if you do not compress it.
    Anyway, I think the best option is to follow the flow: move the selective data from DSO1 to DSO2, and from DSO2 to Cube2.
    Secondly, you asked: once the data has been moved into the history cube and selectively deleted from the current DSO and cube, is there any impact on the current delta load?
    ANS: Selective deletion of data is not going to affect the delta loads, so you can do the selective deletion.
    The delta load does get affected if someone deletes a delta request without first setting the QM status to red; in that case the init flag is not set back, and data will be lost.
    Hope this helps.
    Regards,
    Debjani
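    The recommended yearly flow (selectively copy one year's records from the current store to the history store, then selectively delete them from the current one, leaving the delta mechanism untouched) can be sketched in plain Python (not BW code; the names and data are invented):

```python
# Toy sketch (plain Python, not BW) of the archiving flow recommended
# above: selectively copy one year's records from the current store to
# the history store, then selectively delete them from the current one.
# The delta pointer (init flag) is separate bookkeeping and is untouched.

current = [("2007-03-01", "ITEM1", 10), ("2008-05-01", "ITEM2", 4)]
history = [("2006-07-01", "ITEM9", 2)]
delta_init_flag = True   # delta state; selective ops leave it alone

def move_year(year, src, dst):
    selected = [r for r in src if r[0].startswith(year)]
    dst.extend(selected)                                    # DSO1 -> DSO2
    src[:] = [r for r in src if not r[0].startswith(year)]  # selective delete

move_year("2007", current, history)
print(current)          # [('2008-05-01', 'ITEM2', 4)]
print(len(history))     # 2
print(delta_init_flag)  # True
```
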

  • Loading one partition in AWM tries to aggregate all partitions

    I believe I'm seeing a difference in behavior between maintenance of a cube with a compressed composite vs. a cube with all dimensions sparse but not compressed.
    With the compressed composite cube, I execute cube maintenance (AWM 10.2.0.2), select 'Aggregate the cube for only the incoming data values', and click Finish. According to XML_LOAD_LOG, the records are loaded, and the aggregation occurs in only the partition(s) in which the data was loaded.
    With a fully-sparse cube (but NOT using a compressed composite), I follow all of the same steps, and use the same fact table. According to XML_LOAD_LOG, the records are loaded in the same amount of time as before, but the aggregation starts to loop over every partition (I'm partitioned along the Period-level of a Day-Wk-Pd-Qtr-Year time dimension). The log shows an entry for every period, not just the single period in the fact table, and it's taking about 90 seconds per period. For 48 periods, that adds over an hour of processing time unnecessarily.
    I tried it twice, thinking perhaps I clicked the wrong radio button, but that's not the case. I'm seeing very different (and detrimental) behavior.
    Has anyone else seen this?

    Addendum: The cube maintenance process also behaves badly when requesting the use of more than 1 processor. I have two cubes that I want to maintain at the same time. The server has 4 processors. I submit the job to the Oracle Job Queue and specify the use of 2 processors.
    The load log shows 48 3-line entries like these:
    Attached AW TEST1_SALESAW in MULTI Mode
    Started load of measures: Sales, Cost, Units from Cube LY4.CUBE. PD07 04 Partition
    Finished load of measures: Sales, Cost, Units from Cube LY4.CUBE. PD07 04 Partition. Processed 0 records.
    There is only one fiscal period in the fact table. But the multi-threaded process dutifully rolls through all 48 fiscal periods, and in doing so adds a significant amount of time to the process.
    NOTE: This is only an issue when using multiple processors. If I request only 1 processor, the "looping through all partitions" behavior does not occur.

  • Slow query on aggregate.

    Hello Experts,
    We have a very specific issue with query performance. We have a query that used to execute within 3 minutes. We created aggregates on the underlying cube, and the same query now partially uses the aggregate and partially the cube. After creating the new aggregate, this particular query takes more than 25 minutes to execute.
    If we switch off the aggregate and execute the query, it takes only 3 minutes. The query uses the Count function in a few formulas. Can you please suggest whether there is any option for this particular query to ignore the aggregate and use only the InfoCube?
    Regards
    Christopher Francis

    Hi Francis,
    First of all, this is not a common issue seen across SAP systems.
    According to your description, the execution time on the aggregate is more than on the cube.
    According to my analysis, the reason could be that the characteristics on which the aggregate was built are different from the ones the query needs.
    Example: you want an aggregate on cust_no, but the aggregate might have been created on a different characteristic, say prod_no.
    So when the query hits the aggregate it does not find any record for cust_no, and it searches again in the cube.
    It therefore takes more time, searching the aggregate as well as the cube.
    That is why I think execution on the aggregate may be taking more time.
    Please check the required characteristics on which you have created the aggregate.
    Regards,
    Sidhartha

  • Line Item 0BATCH is used In Aggregate

    Dear Masters
          I have created an aggregate on a cube. When I execute the query in Query Designer, it takes much time to display the data. When I checked the aggregate, it gives the warning message below:
    Line item 'Batch number' is used in aggregate
    System Response
    If you are dealing with a line item that has many characteristic values, the aggregate can become as large as the InfoCube. This means that there is probably no advantage in having an aggregate in this case.
    This message does not appear in the expert mode.
    Procedure
    Check the number of characteristic values in the line item and delete the line item from the aggregate, if including it means that the aggregate is too large.
    What should I do to avoid the above warning message?
    Should I uncheck the line-item checkbox for 0BATCH, or delete that field from the aggregate?
    Thanks in advance
    Raja.S

    Hi Raja,
    You can remove the line item from the aggregate's dimension.
    Otherwise, allow the system to propose an aggregate from the query.
    A prerequisite for the system to propose an aggregate is that the cube should have at least one query on it.
    Check the performance in RSRT and RSRCACHE.
    In RSRT, compare the execution time with and without the aggregate, and consider deleting the aggregate.
    Bye, and reward points if helpful.

  • Building Aggregate on a Infocube

    Hi all,
    We are planning to build aggregates on an InfoCube to improve query performance. The problem is, I couldn't check the query runtimes in ST03 because statistics are not recorded for this cube, so I don't know which query is taking more time and which characteristics I should choose for an aggregate. Please help me answer the questions below:
    1. Can I build only one aggregate on an InfoCube, or more than that? How do we decide how many aggregates can be built on an InfoCube?
    2. How do I identify the characteristics to be added to the aggregate?
    3. There are also navigational attributes in my InfoCube; should I consider those as well when I build aggregates?
    Please advise me. Thanks
    Regards,
    Murali

    Hi,
    Check the query performance in RSRT; the query does not have to be a statistics query.
    1. Can I build only one aggregate on an InfoCube, or more than that? How do we decide how many aggregates can be built on an InfoCube?
    You can have any number of aggregates on a cube, but build them based on requirements. Decide the characteristics based on the ones you have used in the rows of your reports.
    2. How do I identify the characteristics to be added to the aggregate?
    The characteristics in the rows of the report are the ones mostly used for aggregates.
    3. There are also navigational attributes in my InfoCube; should I consider those as well when I build aggregates?
    If you have used navigational attributes in the report, then you can include those also.
    Assign points if useful.
    Ramesh
