Large Aggregates

Hello Experts,
We have received an EarlyWatch report from SAP which has highlighted large aggregates for certain InfoCubes. When I check the aggregates, I notice that they have high utilization, indicated by the +++ in the valuation column.
As large aggregates have a bearing on the attribute change run (ACR) and rollup, I am trying to find ways of creating new aggregates and deactivating the existing ones.
Currently the aggregates are built with certain navigation attributes fixed to '*', but I find that most of the other characteristics from other dimensions are also present, resulting in more records being inserted into the aggregates. Most reports are designed to view sales data aggregated at the navigational attributes of profit center. Would it be OK if we had only Sales as one dimension and the navigation attribute as another dimension in the aggregate?
Could you please suggest the best way to create these aggregates.
Thank you,
Solomon

Solomon,
Large aggregates in the EWA (EarlyWatch Alert) usually refer to aggregates with a poor aggregation factor, i.e. aggregates that summarize very little.
By definition an aggregate should be smaller than the cube, but if the aggregate starts approaching the cube size then the purpose of the aggregate is defeated.
The reason SAP flags these is that you could reduce your DB space consumption: if the aggregate is nearly the same size as the cube, you will get similar performance running the query against the cube itself.
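To put a number on it, the aggregation factor is simply cube rows divided by aggregate rows. A rough sketch with made-up row counts (the aggregate names and figures below are hypothetical; the real ones come from aggregate maintenance):

CUBE_ROWS = 50_000_000  # hypothetical InfoCube row count

aggregates = {  # aggregate -> fact-table rows (made-up values)
    "100012 (profit center nav. attrs)": 45_000_000,
    "100034 (sales org by month)": 2_500_000,
}

for name, rows in aggregates.items():
    factor = CUBE_ROWS / rows
    # A common rule of thumb is to want a factor of roughly 10 or more;
    # near 1, the aggregate barely summarizes anything but still costs
    # rollup and change-run time.
    verdict = "worth keeping" if factor >= 10 else "candidate to deactivate"
    print(f"{name}: aggregation factor {factor:.1f} -> {verdict}")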
What you can do is:
1. Run the query in RSRT with "Display statistics" and "Do not use aggregates" selected in the execute-and-debug options.
2. Run the same query with "Display statistics" selected but without the "Do not use aggregates" option.
Compare the DB time for both runs: if there is an improvement, keep the aggregates; otherwise you can switch them off and still achieve similar performance. Also compress your cube if possible, because aggregates are usually compressed, which contributes to their better performance.
The EWA alert is just a warning and can be ignored if you find that the aggregates have to stay.

Similar Messages

  • Trade-off between multiple small optimised aggregates vs. a few large aggregates

    Hi SDN community
    I am asking a purely theoretical question to the sdn.sap.com community: have any projects done verifiable tests of the trade-off in performance between a small number of cubes with large unoptimised aggregates and multiple cubes with small optimised aggregates?
    Will queries run faster across more cubes with more, smaller aggregates, as opposed to fewer cubes with larger aggregates?
    We are currently trying to improve performance, and wish to get feedback on whether to change design to cater for ongoing performance issues.
    Thank you for your assistance.
    Simon

    Hi Ravi,
    Thank you for your reply.
    The reason why we need to consider smaller aggregates is such:
    - We have two value types, Budget and Forecasts, in the same cube.
    Because of this, we cannot restrict the aggregates to two smaller-sized aggregates to gain performance.
    - If we separate the data into the Forecasts cube,
    we have the potential to create 12 Version-specific aggregates.
    - If we create fiscal year aggregates in addition, we split the size of the aggregates.
    Now although the rollup times will be longer, we have very targeted small aggregates, so our reports should run faster.
    We have consistent performance problems, so I am proposing the above as the last major piece of performance tuning that can be thought of; but will the performance gain be worth the expenditure?
    Thank you.
    Simon

  • Logical SQL firing larger aggregate table instead of smaller one

    Hi
    When we process a request containing one particular column alone, along with some time dimension (say month or year), the logical SQL hits the larger aggregate table instead of the smaller one. Please help us resolve this issue.
    The OracleBI version we are using is 10.1.3.4.1
    Thanks.

    Hi,
    Try posting in the OLAP forum.
    OLAP
    Thanks, Mark

  • OBIEE bypasses smaller aggregate table and queries largest aggregate table

    Hello,
    Currently we are experiencing something strange regarding queries that are generated.
    Scenario:
    We have 1 detail table and 3 aggregate tables in the RPD. For this scenario I will only refer to 2 of the Aggregates.
    Aggregate 1 (1 million rows):
    Contains data - Division, Sales Rep, Month, Sales
    Aggregate 2 (13 million rows):
    Contains data - Division, Product, Month, Sales
    Both tables are set at the appropriate dimension levels in the Business Model. Row counts have been updated in the physical layer in the RPD.
    When we create an Answers query that contains Division, Month and Sales, one would think that OBIEE would query the smaller and faster of the two tables. However, OBIEE queries the table with 13 million records, completely bypassing the smaller table. If we make the larger aggregate inactive, then OBIEE queries the smaller table. We can't figure out why OBIEE goes straight to the larger table.
    Has anyone experienced something such as this? Any help would be greatly appreciated.

    Have you tried changing the sort order of the logical table sources in your logical table?
    (See http://gerardnico.com/wiki/_media/temp/obiee_logical_table_sources_sort.jpg)
    Set Aggregate 1 first.
    Cheers
    Nico

  • Largest Aggregates

    Hi BW Experts,
    I need your help on this:
    "Large aggregates need high runtime for maintenance, like change runs and rollup of new data."
    Can anyone help me with how to split these large aggregates?

    Hi Lakshmi,
    You have to keep an eye on the aggregates all the time: whether they are being used by queries or not, how much time the rollup and change run take, and how effective the aggregates are in improving query performance. If an aggregate contains almost all of the cube's characteristics, its record count will be almost equal to the cube's, which is a bad design. The number of records summarized per aggregate record gives you a very good view of how effective an aggregate is. Look into all these points and design your aggregates accordingly.
    Sriram

  • Comparing load times w/ and w/o BIA

    We are looking at the pros/cons of BIA for implementation. Does anyone have data comparing plain load times, loads with compression, and BIA indexing time?

    I haven't seen numbers comparing load times. Loads to your cubes and compression continue whether you have BIA or not. Rollup time would be eliminated, as you would no longer need aggregates. No aggregates should also reduce change run time, perhaps a lot or only a little, depending on whether you have large aggregates with navigational attributes in them. All of that is offset to some degree by the time to update the BIA indexes.
    Make sure you understand all the licensing costs, not just SAP's, but the hardware vendor's per-blade licensing costs. I talked to someone just the other day who was not expecting per-blade licensing; list price of the license per blade was $75,000.

  • Multi-fact Query

    Hi all,
    Our software group is looking at creating a tool that will query multi-fact star schema environment. What are my options?
    So far, we have 7 fact tables and 6 dimensions (all shared by the facts). There are no aggregate tables.
    My first thought was one large aggregate table; however, the dimensions are all normalized to their particular fact tables. So a customer dimension, for instance, would repeat customers if they fall into multiple fact tables. The dimensions are more complicated than that, but you get the idea. There are only two that are truly normalized... and yes, one is TIME. :)

    If you have data that shares all the same dimensions, it almost always makes sense to keep it together in a single fact table. The general guideline is only to split out a new fact table if the dimensionality of the data is different.
    Unions and joins typically just end up slowing things down.
    The only exception I've really seen to this rule is when one set of facts has orders of magnitude more data points than the others, i.e. if you have actuals data with 100 million rows but budget data (with the same dimensionality) of only 200,000 rows. This doesn't happen often, but I suppose it could potentially occur.
    Scott

  • Scripting IP Management

    I am a scripting novice/intermediate, familiar with shell/bash (e.g. sed, awk) and learning Python; I can also read some Perl.
    Does anyone know of/use any good scripts that do any or all of the below?
    - Discover/report the subnetting structure (e.g. parse 'show ip route <network> <mask> longer' and report)
    - Discover IP assignments/report utilization (e.g. parse 'show ip arp' and pair it with the subnetting discovery)
    I am looking for the logic to associate the 'show ip arp' output with a subnet, and report subnet usage utilization for reporting purposes. If readily available, both the specific subnet and the larger aggregate, but I can extrapolate the latter. Something quick and easy to report to ARIN, as opposed to buying a commercial product. (Context: I inherited a network that uses spreadsheets for IP management, which have not been maintained, and I'd like to audit.)
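    Not aware of a canned script, but the core logic is small with Python's standard library. A minimal sketch (the file-based workflow and the IOS-style output it parses are assumptions; adjust the regexes for your platform):

    import ipaddress
    import re
    import sys

    # Read saved "show ip route" and "show ip arp" captures from two files
    # and report per-subnet address utilization.
    PREFIX = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}/\d{1,2})")
    ADDR = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})")

    def load_subnets(route_file):
        # Pull x.x.x.x/nn prefixes out of the routing-table capture.
        with open(route_file) as f:
            return [ipaddress.ip_network(m.group(1), strict=False)
                    for line in f for m in [PREFIX.search(line)] if m]

    def load_arp_ips(arp_file):
        # Pull IP addresses out of the ARP-table capture.
        with open(arp_file) as f:
            return [ipaddress.ip_address(m.group(1))
                    for line in f for m in [ADDR.search(line)] if m]

    def report(subnets, ips):
        for net in sorted(set(subnets)):
            used = sum(1 for ip in ips if ip in net)
            usable = max(net.num_addresses - 2, 1)  # crude; ignores /31 and /32
            print(f"{net}\t{used}/{usable}\t{used / usable:.0%}")

    if __name__ == "__main__":
        report(load_subnets(sys.argv[1]), load_arp_ips(sys.argv[2]))

    Note that each ARP entry is counted in every subnet that contains it; matching only the longest prefix, then rolling results up to the larger aggregates, would be the natural next refinement.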

  • Test plan

    Hi friends,
    Can anyone tell me what steps are involved in test plan preparation in BW?

    Hi Siri Raj,
    Integration testing - It is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
    Integration testing takes as its input modules that have been checked out by unit testing, groups them in larger aggregates, applies tests defined in an Integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
    Unit testing - one part or the whole of the transfer rules, update rules, etc.
    Integration testing - the whole data flow cycle is to be tested.
    This link will give you a detailed description:
    http://en.wikipedia.org/wiki/Software_testing
    Stress testing in BI..
    /people/mike.curl/blog/2006/12/05/how-to-stress-test-bw-the-easy-way
    Refer to this regarding CATT:
    http://help.sap.com/saphelp_erp2005/helpdata/en/d7/e21221408e11d1896b0000e8322d00/frameset.htm
    Check this doc on Unit Testing
    unit testing
    Look at the threads below :
    Testing Methods in BW
    Unit Testing in BW
    How to do testing in BW
    Hi...BW testing
    Re: Hi...BW testing
    Hi...BW testing
    Please refer to the following links...
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e210c8408e11d1896b0000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/3c/aba235413911d1893d0000e8323c4f/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e2123b408e11d1896b0000e8322d00/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/721d6a73-0901-0010-47b3-9756a0a7ff51
    https://service.sap.com/upgrade-bw
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7dc0cc90-0201-0010-4fa7-d557f2bd65ef .
    https://websmp204.sap-ag.de/~sapdownload/011000358700009385902004E
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/67acb63d-0401-0010-b685-b1b87dd78892
    Hope it helps you!
    ****Assign Points If Helpful****
    Regards,
    Ravikanth

  • Can anyone explain the SAP testing process in an implementation project

    Can anyone explain the SAP testing process to be carried out by a BW consultant in an implementation project which is in the testing phase?

    Hi Bharat,
    Two types of testing are possible in BW:
    unit testing
    integration testing
    Integration testing - It is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
    Integration testing takes as its input modules that have been checked out by unit testing, groups them in larger aggregates, applies tests defined in an Integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
    Unit testing - one part or the whole of the transfer rules, update rules, etc.
    Integration testing - the whole data flow cycle is to be tested.
    This link will give you a detailed description:
    http://en.wikipedia.org/wiki/Software_testing
    Stress testing in BI..
    /people/mike.curl/blog/2006/12/05/how-to-stress-test-bw-the-easy-way
    Refer to this regarding CATT:
    http://help.sap.com/saphelp_erp2005/helpdata/en/d7/e21221408e11d1896b0000e8322d00/frameset.htm
    Check this doc on Unit Testing
    unit testing
    Look at the threads below :
    Testing Methods in BW
    Unit Testing in BW
    How to do testing in BW
    Hi...BW testing
    Re: Hi...BW testing
    Hi...BW testing
    Please refer to the following links...
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e210c8408e11d1896b0000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/3c/aba235413911d1893d0000e8323c4f/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e2123b408e11d1896b0000e8322d00/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/721d6a73-0901-0010-47b3-9756a0a7ff51
    https://service.sap.com/upgrade-bw
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7dc0cc90-0201-0010-4fa7-d557f2bd65ef .
    https://websmp204.sap-ag.de/~sapdownload/011000358700009385902004E
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/67acb63d-0401-0010-b685-b1b87dd78892
    Hope it helps you!
    ****Assign Points If Helpful****
    Regards,
    yunus

  • Docs on integration testing

    Please send me docs on integration testing, with at least one complete scenario.

    Hi Karunakar,
    Integration testing - It is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
    Integration testing takes as its input modules that have been checked out by unit testing, groups them in larger aggregates, applies tests defined in an Integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
    Unit testing - one part or the whole of the transfer rules, update rules, etc.
    Integration testing - the whole data flow cycle is to be tested.
    This link will give you a detailed description:
    http://en.wikipedia.org/wiki/Software_testing
    Stress testing in BI..
    /people/mike.curl/blog/2006/12/05/how-to-stress-test-bw-the-easy-way
    Refer to this regarding CATT:
    http://help.sap.com/saphelp_erp2005/helpdata/en/d7/e21221408e11d1896b0000e8322d00/frameset.htm
    Check this doc on Unit Testing
    unit testing
    Look at the threads below :
    Testing Methods in BW
    Unit Testing in BW
    How to do testing in BW
    Hi...BW testing
    Re: Hi...BW testing
    Hi...BW testing
    Please refer to the following links...
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e210c8408e11d1896b0000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/3c/aba235413911d1893d0000e8323c4f/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d7/e2123b408e11d1896b0000e8322d00/frameset.htm
    Hope it helps you!
    ****Assign Points If Helpful****
    Regards,
    Ravikanth

  • What is "autodelete_base" snapshot on aggr0 and how to disable it?

    Hello colleagues,
    Could anybody explain the purpose of the "autodelete_base" snapshot in aggr0 and how to remove it permanently? As I noticed, snapshot autodelete is permanently turned ON in Data ONTAP 8.2.x 7-Mode, and the only way to disable this snapshot is to set the aggregate snap reserve to 0. Is there any other solution? I noticed this only on single systems and HA pairs, but not on MetroCluster gateway filers with SyncMirror aggregates. Thanks in advance for feedback.
    Regards,
    Petr Jaros

    The purpose of "autodelete_base" is this: if aggregate snapshot autodelete would leave you with no snapshots, the storage system automatically creates a snapshot called autodelete_base at the aggregate level when a snapshot is removed as a result of autodeletion. This ensures that there is always a new snapshot in the aggregate after a snapshot is deleted.
    If aggregate snapshots are enabled and there is not adequate space reserved for them, a situation may develop where a very large aggregate snapshot is created. This may occur if large volumes are deleted.
    If SyncMirror triggers an aggregate-level snapshot and there is not enough space for two snapshots, the existing snapshot is deleted by the autodeletion feature and instantly replaced by the autodelete_base snapshot. However, there is still not enough space for the SyncMirror-triggered snapshot. This causes a loop of autodeletion which can only be interrupted by manually deleting the autodelete_base snapshot.
    Steps:
    1. Disable automatic aggregate Snapshot copy creation:
       aggr options aggr_name nosnap on
       (aggr_name is the name of the aggregate for which you want to disable automatic Snapshot copy creation.)
    2. Delete all Snapshot copies in the aggregate:
       snap delete -A -a aggr_name
    3. Set the aggregate Snapshot reserve to 0 percent:
       snap reserve -A aggr_name 0

  • What is the roll-up hierarchy means?

    Hello experts,
    What is the roll-up hierarchy means?
    I would greatly appreciate your help.
    Thanks
    Padma

    The system arranges its own hierarchy of aggregates to speed up rollup and change run processing. Essentially, some aggregates can be derived from other aggregates. Larger aggregates likely include numerous characteristics, while smaller ones include fewer. The smaller ones are very often "subsets" of the larger "superset" aggregates.
    When a roll-up is performed the "superset" aggregates are filled first from data in the InfoCube.  Next, "subset" aggregates are filled from data in the "superset" aggregates.  The system arranges this hierarchical relationship internally in order to optimize the performance of the rollup and change run.
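    As a toy illustration of the superset/subset idea (hypothetical data, not from any real cube), a "subset" aggregate computed from a "superset" aggregate reads far fewer rows than the cube and gives the same answer:

    from collections import defaultdict

    # InfoCube fact rows: (division, sales_rep, month, sales).
    cube = [
        ("D1", "R1", "2024-01", 100),
        ("D1", "R2", "2024-01", 150),
        ("D1", "R1", "2024-02", 120),
        ("D2", "R3", "2024-01", 200),
    ]

    def rollup(rows, key):
        # Group-and-sum the last column by the chosen characteristics.
        out = defaultdict(int)
        for row in rows:
            out[key(row)] += row[-1]
        return dict(out)

    # "Superset" aggregate (division, month): filled from the cube.
    superset = rollup(cube, key=lambda r: (r[0], r[2]))

    # "Subset" aggregate (division): filled from the superset aggregate,
    # which has fewer rows than the cube itself.
    subset = rollup([(d, m, s) for (d, m), s in superset.items()],
                    key=lambda r: (r[0],))

    # Identical result to rolling up from the cube directly, but cheaper.
    assert subset == rollup(cube, key=lambda r: (r[0],))
    print(subset)  # {('D1',): 370, ('D2',): 200}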
    Thanks for any points you choose to assign.
    Regards -
    Ron Silberstein
    SAP

  • Best way to aggregate large data

    Hi,
    We load actual numbers and run aggregation monthly.
    The data file grew from 400k lines to 1.4 million lines. The aggregation time grew proportionately and it takes now 9 hours. It will continue growing.
    We are looking for a better way to aggregate data.
    Can you please help in improving performance significantly?
    Any possible solution will help: ASO cube and partitions, different script of aggregation, be creative.
    Thank you and best regards,
    Some information on our environment and process:
    We aggregate using CALC DIM(dim1,dim2,...,dimN).
    Windows server 64bit
    We are moving from 11.1.2.1 to 11.1.2.2
    Block size: 70,000 B
    Bold and underlined dimensions are aggregated.
    Dimension        Type     Members   Sparse Members
    Account          Dense    2523      676
    Period           Dense    19        13
    View             Dense    3         1
    PnL view         Sparse   79        10
    Currency         Sparse   16        14
    Site             Sparse   31        31
    Company          Sparse   271       78
    ICP              Sparse   167       118
    Cost center      Sparse   161       161
    Product line     Sparse   250       250
    Sale channels    Sparse   284       259
    Scenario         Sparse   10        10
    Version          Sparse   32        30
    Year             Sparse   6         6

    Yes I have implemented ASO. Not in relation to Planning data though. It has always been in relation to larger actual reporting requirements. In the new releases of Planning they are moving towards having integrated ASO reporting cubes so that where the planning application has large volumes of data you can push data to an ASO cube to save on aggregation times. For me the problem with this is that in all my historical Planning applications there has always been a need to aggregate data as part of the calculation process, so the aggregations were always required within Planning so having an ASO cube would not have really taken any time away.
    So really the answer is yes, you can go down the ASO route. But having data aggregate in an ASO application would need to fit your functional requirements. The biggest question would be: can you do without aggregated data within your Planning application? Also, it's worth pointing out that even though you don't have to aggregate in an ASO application, it is still recommended to run aggregations on the base-level data; otherwise your users will start complaining about poorly performing reports. They can be quite slow, and if you have many users then this will only be worse. Aggregations in ASO are different, though: you can run them in a number of different ways, but the end goal is to have aggregations that cover the most commonly run reporting combinations, so you are not aggregating everything and they are therefore quicker to run. But more data will still result in more time to run an aggregation.
    In your post you mentioned that your actuals have grown, the aggregations have grown with them, and they will continue to grow. I don't know anything about your application, but is there a need to keep loading and aggregating all of your actuals each month? Why not load just the current year's actuals (or the periods of actuals that are changing) each month and aggregate only those? Are all of your actuals really changing all the time, requiring you to aggregate all of the data each time? Normally I would only load the actuals required to support the planning and forecasting exercise. Any previous years' data (actuals, old forecasts, budgets, etc.) I would archive, keeping an aggregated static copy of the application.
    Also, you mentioned that you had CALC PARALLEL set to 3 and then moved to 7. But did you set CALCTASKDIMS at all? I ask because if you didn't, your CALC PARALLEL likely gave you no improvement at all. If you don't set it to the optimal value, then by default Essbase will try to parallelize using only the last dimension (in your case Year), so it is not really breaking up the calc (this is a very common mistake when CALC PARALLEL is used). Setting this value in older versions of Essbase is a bit of trial and error, but the rule of thumb is that it should cover at least the last sparse aggregating dimension to get any value. So in your case the minimum would be CALCTASKDIMS 4, but it's worth trying higher: try 4, then 5, then 6. As I say, trial and error. But I will say one thing: by getting your CALC PARALLEL setup correct you will save much more than 10% on aggregations. You say you are moving to 11.1.2.2, so I assume you haven't run this aggregation on that environment yet? In that release the CALCTASKDIMS setting is not required; Essbase calculates the best value for you, so you only need to set CALC PARALLEL.
    Is it possible for you to post your script? Also I noticed in your original email that for Company and ICP your member numbers on the right are significantly smaller than the left numbers, why is this? Do you have dynamic members in those dimensions?
    I will say that six aggregating dimensions is always challenging, but 9 hours does sound a little long to simply aggregate actuals, even for the 1.4 million records.

  • Aggregate function on "large value type"

    We are constrained to use a model that stores records in a main table and various attributes of each record in a second table. Something like this: a main record table CAR and an attribute table CAR_DETAILS, where CAR_DETAILS has key/value pairs like "color"/"blue" and "doors"/"4".
    For reports we need to flatten the one-to-many nature of this to get CAR with color and doors, by using an aggregate function like:
    car_id, max(car_detail.value) filtered on car_detail.key = "color", max(car_detail.value) filtered on car_detail.key = "doors".
    This works on other tables, but the car_detail table (shall we say) appears to store its values as blobs. In any case, when we attempt to aggregate we get "The query uses an aggregate function on a large value type expression. Large value type expressions can not be aggregated."
    Since we can't change the model, we would need to use another function to convert this to a smaller string (or date; these are actually mostly dates), but none of the very limited set of functions available seems to make this aggregation possible (and there is no "first" or anything else similar).
    The list of functions available in Report Model Queries can be found at https://msdn.microsoft.com/en-us/library/ee210538.aspx
    Appreciate any ideas on how to solve this.

    I found a workaround: create a second dataset reversing the direction:
    car_detail.key, car_detail.value, car_id
    with this filtered by car_detail.key = "color" (or doors etc)
    Then use a lookup from the first dataset to the second to display the data:
    =Lookup(Fields!car_Id.Value,Fields!Icar_Id.Value,Fields!car_detail.Value, "DataSet2")
    where the first field is in the main report dataset, the second and third are in the lookup dataset, and the fourth parameter is the name of the lookup dataset.
    I would still really like a solution that allows aggregation, for ease of use and efficiency.
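    For what it's worth, the flattening described above is the classic entity-attribute-value pivot. A toy sketch of the idea (hypothetical data, outside the Report Model):

    # Flatten key/value attribute rows into one row per entity.
    car_details = [  # (car_id, key, value)
        (1, "color", "blue"),
        (1, "doors", "4"),
        (2, "color", "red"),
    ]

    wanted = ("color", "doors")
    flat = {}
    for car_id, key, value in car_details:
        if key in wanted:
            # This is what the MAX(value)-filtered-per-key trick achieves:
            # each (car_id, key) pair occurs once, so "max" just picks it.
            flat.setdefault(car_id, {})[key] = value

    for car_id, attrs in sorted(flat.items()):
        print(car_id, [attrs.get(k) for k in wanted])
    # 1 ['blue', '4']
    # 2 ['red', None]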
