ASO Aggregations

Hi, I'm having a difficult time finding the right balance between aggregation, cube size, and retrieval time. I have an ASO cube that holds 3 years of data, totaling 160 MB. It has 8 dimensions, 6 of which contain multiple rollups, so the data is obviously being aggregated at many different points. The problem is that if I set the aggregation size higher, retrieval times improve, but at the expense of making the cube much larger. For example, an aggregation of 400 MB retrieves in 14.79 seconds, whereas an aggregation of 1500 MB retrieves in 7.66 seconds. My goal is to get retrieval times down to 1-3 seconds, where the user would notice almost no delay at all.

Does anyone have any suggestions on how to find a good balance between faster retrievals and the different aggregation sizes? FYI, I've also tried changing whether members are tagged as Dynamic or Multiple Hierarchies Enabled, but I didn't find any noticeable difference. Is there anything else I should consider to make the retrievals faster?

Thanks in advance.

Given that disk space is relatively inexpensive, this is often something you shouldn't worry about as much as query performance, especially since even a highly aggregated ASO cube is only a fraction of the footprint of a BSO cube. Maximize as much aggregation as you can to get the best retrieval time possible and don't worry about the disk space too much. Unfortunately, in current releases of Essbase you cannot customize aggregations, but you can use query tracking to improve them by focusing on the areas most often queried. So I would suggest enabling that, letting your users get in there, and then aggregating again based on the results of the tracking. Of course, remember that by their very nature ASO cubes are very dynamic and some queries are just going to take a little longer.

Something else to consider is how complex the query is and whether the amount of time it takes to retrieve is appropriate. Are you pulling back thousands and thousands of members? If you are, then you have to expect a certain amount of time just to bring over the metadata. Try turning on "navigate without data": if your queries still take a long time to come back even though you are not pulling data, it just means you have a large result set coming back and that's just the way it is. Also check whether your result set returns members with member formulas; MDX formulas can take a while to run depending on how complex they are and how well they are written.
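
As a rough illustration of that query-tracking workflow, a minimal MaxL sketch might look like the following (the Sample.Sales application/database name is made up, and the exact grammar should be checked against the MaxL reference for your release):

/* turn on query tracking so Essbase records which slices users actually query */
alter database Sample.Sales enable query_tracking;

/* ...let users run their normal reports for a while... */

/* re-select and materialize aggregate views weighted by the tracked queries,
   capping aggregate size at roughly 1.5 times the input-level data */
execute aggregate process on database Sample.Sales stopping when total_size exceeds 1.5 based on query_data;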

Similar Messages

  • ASO- Aggregation

    I am designing an ASO cube. Once I design an aggregation for the cube, do I need to aggregate the data after every data load, or is it aggregated automatically? Also, what is the purpose of aggregate view selection?
    Thanks in advance.

    Hi,
    The aggregation design wizard allows you to create the aggregate view selection file. Based on this file you can materialize the aggregation (literally, execute the aggregation process). Each time you load new data the process has to be repeated so that the new data is aggregated as well.
    Aggregate view selection is the process of selecting aggregate views (combinations of levels from different dimensions) whose aggregations will be materialized to disk.
    The purpose is to optimize your query performance. The options you have are:
    1. Rely on the automated selection process - the combinations with the biggest query cost that fit within the disk space consumption criteria are selected.
    2. Use query tracking - the view selection optimizes performance for the types of queries most often executed against the cube.
    3. Use query hints - specify combinations to aggregate at design time.
    Cheers,
    Patryk.
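    A minimal MaxL sketch of that select-then-materialize cycle, using made-up application, database, and view file names, might look like this:
    /* choose aggregate views and dump the selection to a view file */
    execute aggregate selection on database Sample.Sales force_dump to view_file def_sel;
    /* materialize the selected views; re-run this step after each data load */
    execute aggregate build on database Sample.Sales using view_file def_sel;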

  • ASO aggregation Issue. Not rolling up to parents.

    hg
    Edited by: user7805819 on Jan 13, 2011 4:08 PM

    I don't have anything open to test syntax right now but something like
    Case
    When
    currentmember([Scenario]) is [Actual] and
    currentmember([Version]) is [Fina;] and
    IsLevel([Total Product].currentmember, 0) and
    Not ([Total Product].currentmember is [XXXXX])
    Then
    20
    When
    currentmember([Scenario]) is [Actual] and
    currentmember([Version]) is [Fina;] and
    Not IsLevel([Total Product].currentmember, 0)
    Then
    Sum([Total Product].currentmember.children, [measure])
    End
    where [measure] is the member name you are performing the calculation on (where the 20 gets put). It has been a while, so you might need additional braces ({}) around the tuple to make it a set.

  • Problem with MIN MDX Function on ASO

    Hello
    I have a measure in an ASO cube that should aggregate to a minimum. I tried to use the MIN function, but it does not work as expected: Essbase first sums the data and then calculates the minimum.
    This is the function used:
    IIF ( NOT IsLevel ( [Hospital].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Hospital] ),[L.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Marketing].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Marketing]),[A.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Listini].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Listini]),[C.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Bravo].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Bravo]),[D.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Product Cluster].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Product Cluster]),[E.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Area].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Area]),[F.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Area2].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Area2]),[G.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Area3].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Area3]),[H.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Area4].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Area4]),[I.Unknown].Level),[PrzListinoMINInput]),
    IIF ( NOT IsLevel ( [Customer].CurrentMember, 0 ),
    MIN( Descendants(CurrentMember ( [Customer]),[N.Unknown].Level),[PrzListinoMINInput]),
    0))))))))))
    How should I proceed in these cases?
    Thank you

  • [Essbase] - using TopCount MDX  function

    Hi,
    I would like to use the TopCount MDX function in an ASO cube.
    I did manage to write the MDX query, but I did not find how to "convert" it into an MDX member formula on the MEASURE dimension (well, I am guessing that is where it belongs).
    I thank you for your help!
    Thomas
    MDX query :
    IND_HITS is from the MEASURE dimension
    A2008 is from the YEAR dimension
    M1013210 is from the MESSAGE dimension
    SELECT
    [MOIS].Generations(1).Members
    ON COLUMNS,
    TopCount([USERS].Children,10, [IND_HITS] )
    ON ROWS
    FROM Cube_Log.Cube_Log
    WHERE ([A2008], [M1013210], [PARAMETRES],[HEURES],[TYPE])

  • Why we use ESSCMD?

    Hi,
    I am new to Hyperion and am working on MaxL command assignments.
    My understanding is that whatever task we can do with MaxL commands, we can also do using ESSCMD.
    So what is the difference between MaxL & ESSCMD?
    When should I go for startMaxL or ESSCMD?
    Is there any particular reason to use MaxL or ESSCMD?
    Reply please.

    MaxL is the replacement for Esscmd. I don't know of anything you can do in Esscmd that can't be done in MaxL (I guess I disagree with John). However, there are a lot of things that MaxL can do that Esscmd can't, for example ASO buffered data loads, ASO aggregation, and Essbase Studio deploy commands.
    There is one exception to my statement above: EsscmdQ, a special version of Esscmd, can do ASO outline compaction that MaxL can't.
    If you have Esscmd scripts you can convert them using a utility in the bin directory; I think it is called Cmd2MaxL.exe.
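    For illustration, an ASO buffered data load in MaxL (one of the MaxL-only features mentioned above) might look roughly like this; the application, database, and file names are made up, and the exact grammar should be checked against the MaxL reference for your release:
    /* create an ASO data load buffer */
    alter database Sample.Sales initialize load_buffer with buffer_id 1;
    /* load a file into the buffer (repeat for additional files) */
    import database Sample.Sales data from server data_file 'salesQ1.txt' to load_buffer with buffer_id 1 on error abort;
    /* commit the buffer contents to the cube in a single operation */
    import database Sample.Sales data from load_buffer with buffer_id 1;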

  • ASO cube aggregation behavior

    Hi All,
    We have an ASO cube with millions of records and 10 dimensions (1 Account dimension). A few fact values change after aggregation, whereas without aggregation we get correct values.
    We are loading data from the source system (Teradata), where values are rounded off to 7 decimal places. But when we look at the data at report level (WA) it displays non-zero digits beyond the 7th decimal place for a few facts.
    This makes a huge difference for us because we calculate a var% value on the report. The denominator for this var% is 0 for certain line items, but due to aggregation it shows a small value (0.0000000198547587458), which results in very large var% values.
    Is this the tool's behavior, or is there some problem with the aggregation?
    Any input will be really appreciable.
    Thanks,

    What doesn't make sense to me is that you don't need aggregations on level zero members, as that is where the data is stored. I'm guessing you mean level zero members of one dimension and higher level members of other dimensions. Are those other dimensions dynamic or stored? Do you have a lot of calculations going on that are being retrieved? Have you materialized aggregations on the cube?

  • Adding commentary in ASO Cube and aggregating it to TOP Level

    Hi Gurus,
    I have a peculiar problem. We are adding commentary in a BSO Planning cube, and now I have a couple of problems related to it.
    a) This commentary, which is entered at the lower levels, needs to be pushed to the ASO (reporting) cube.
    b) At the top level, the commentary needs to be aggregated, or rather concatenated.
    e.g.
    ProfitCentre has two children, P1 and P2, and the user enters commentary in BSO for P1 and P2 as "Market Risk Deviation" and "Standard Output".
    Then in the HSPgetval Smart View report the content will look like:
    Profit Centre          Market Risk Deviation + Standard Output
    P1                        Market Risk Deviation
    P2                        Standard Output.
    Any thoughts / suggestions / input on ways to achieve this?
    Thanks
    Anubhav

    Apart from what Glenn suggested
    This is not available out of the box; you are looking at a Java API + SQL based solution here.
    Here are my thoughts:
    Use a SELECT query to get the text values and IDs from the HSP_CELL_TEXT (or HSP_TEXT_CELL) table.
    Create a Java API program which can import a text list into the ASO cube; the ID is going to be what you get from the table.
    Load the data to ASO from Planning.
    Now for the aggregation/concatenation part, you'll have to add the concatenated entries to the Smart List as well. This can be done by looking at the HSP_CELL_TEXT (or HSP_TEXT_CELL) table; there is an ID associated with each text, so get the ID associated with it.
    So, for example, if Market Risk Deviation is 1 and Standard Output is 2, then you should add "Market Risk Deviation + Standard Output" as 3; however, you'll have to make sure that there is no entry from Planning for 3.
    It is complicated
    Regards
    Celvin Kattookaran

  • ASO Build Rule - "Level Usage for Aggregation"

    When building ASO (9.3.1) dimensions via load rules, we wish to set the “Level Usage for Aggregation” property of a member from the data file.
    You can set a rule field property to Aggregate Level Usage, though it is not clear what value should be inserted in the data file to specify one of the four options below for a member:
    •     Consider All Levels
    •     Do Not Aggregate
    •     Consolidate Top Level Only
    •     Never Aggregate to Intermediate Levels
    Inserting the full name does not work, and we cannot find any reference to the Aggregate Level Usage property in the DBAG.
    Can anyone advise on the correct property values required?
    Much appreciated.

    From the DBAG, here are your options for dimension builds with ASO applications:
    % Express as a percentage of the current total in a consolidation (applies only to members of a dynamic hierarchy)
    * Multiply by the current total in a consolidation (applies only to members of a dynamic hierarchy)
    + Add to the current total in a consolidation (applies only to members of a dynamic hierarchy)
    - Subtract from the current total in a consolidation (applies only to members of a dynamic hierarchy)
    / Divide by the current total in a consolidation (applies only to members of a dynamic hierarchy)
    ~ Exclude from the consolidation (applies only to members of a dynamic hierarchy or members beneath Label Only members in a stored hierarchy)
    O Tag as label only
    N Never allow data sharing
    C Set member as top of a stored hierarchy (applies to dimension member or generation 2 member)
    D Set member as top of a dynamic hierarchy (applies to dimension member or generation 2 member)
    H Set dimension member as multiple hierarchies enabled (applies to dimension member only)
    Brian Chow

  • Data Visible At Aggregated Level but not at Leaf Node Level in ASO

    Hi,
    I am facing an issue in Essbase version 7. I have a BSO - ASO partition and 4 dimensions: Customer, Accounts, Product and Time. When I try to view data across Customer, Time and Accounts, the data is visible at the leaf node level and at the aggregated level. But when I include Customer in my analysis, the data is visible at an aggregated level for the customer but not at the leaf node level. What could be the cause of this? I am not getting any errors during my data load in ASO, or when I run the aggregation in ASO...
    Any inputs on this issue are highly appreciated....

    Without having complete information, I'll guess you are trying to look at the data in the BSO cube. I would look at the partition definition. One of two things is most likely happening:
    1. You only have the partition defined to look at the top level of customers.
    2. The member names of the lower levels of customers are not consistent between the two cubes and you don't map member names.
    You can prove that it is a partition definition problem by doing the same retrieves from your ASO cube. If you get back data, you know it is a partition definition problem. If you don't get back the proper data, you have a different problem, one that would not seem logical unless you had odd formulas on your ASO cube.

  • ASO Cube, increase aggregation or improve retrieval performance

    We've been using an Essbase cube to create reports in OBIEE.
    When we use a level-0 member filter, it takes quite a long time to get the results.
    Any ideas to improve the performance?
    Is there any way I can increase the number of aggregations that occur at a time? Thank you.

    What doesn't make sense to me is that you don't need aggregations on level zero members, as that is where the data is stored. I'm guessing you mean level zero members of one dimension and higher level members of other dimensions. Are those other dimensions dynamic or stored? Do you have a lot of calculations going on that are being retrieved? Have you materialized aggregations on the cube?
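    If no aggregate views have been materialized yet, one simple starting point is the default aggregation process in MaxL; a minimal sketch with hypothetical application/database names (adjust the size factor to taste):
    /* let Essbase select and materialize a default set of aggregate views,
       capping the aggregate data at roughly 1.3 times the level-0 size */
    execute aggregate process on database Sample.Sales stopping when total_size exceeds 1.3;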

  • Error aggregating ASO cube

    That's what happens in my production environment... :-(
    alter database myappl.mycube clear aggregates;
    OK
    execute aggregate selection on database myappl.mycube force_dump to view_file def_sel;
    OK
    execute aggregate build on database myappl.mycube using view_file def_sel;
    1270032 The specified view list is invalid or the views were selected using a different outline
    If I try an "execute aggregate process" it works and builds just the views that were defined in my csc file def_sel.
    Any idea? Thanks in advance.
    In my test environment it works...

    Could you have a fragmented outline?
    If so, see this thread for ways to reduce that fragmentation: Re: ASO too large
    See Glenn's blog for an alternate way to get rid of fragmentation: http://glennschwartzbergs-essbase-blog.blogspot.com/2010/06/aso-outline-compaction.html
    Regards,
    Cameron Lackpour
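    Depending on your release, outline compaction can also be scripted directly in MaxL; a minimal sketch reusing the myappl.mycube names from above (older versions may only offer the EsscmdQ route):
    /* compact the ASO outline file to remove fragmentation left by repeated member changes */
    alter database myappl.mycube compact outline;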

  • Can we load data for all levels in ASO?

    Hi All,
    I'm creating a cube in ASO.
    Can I load data to all levels in ASO?
    We can load data at all levels in BSO, but for ASO I need confirmation.
    And one more thing: what is the "Consider all levels" option in ASO used for? What is its purpose?
    Can anyone help? It would be appreciated.
    Thanks

    In an ASO cube you can only load to level zero.
    The "Consider all levels" option is used for aggregation hints. It allows you to tell the aggregation optimizer to look at all levels when deciding whether aggregation needs to be done on the dimension.

  • Aggregation script is taking long time - need help on optimization

    Hi All,
    Currently we are working to build a BSO solution (version 11.1.2.2) for a customer, and we are facing a performance issue in aggregating the database. The most common activity in the solution will be to generate data for different scenarios from Actual and Budget (Actual vs Budget difference data in one scenario), to be used mainly for reporting.
    We are aggregating the data to the top level using the AGG command for sparse dimensions. While doing this, we found that it creates a lot of page files, filling up the physical space on the drive (to the tune of 70 GB). Moreover, it takes a long time to aggregate. The number of stored members is as follows:
    Dimension - Type - Stored member (Total members)
    Account - Dense- 1597 (1845)
    Period - Dense - 13 (19)
    Year - Sparse - 11 (12)
    Version - Sparse - 2 (2)
    CV - Sparse- 5 (6)
    Scenario - Sparse - 94 (102)
    EV - Sparse - 120 (122)
    FC - Sparse- 118 (121)
    CP - Sparse - 1887 (2049)
    M1 - Sparse - 4873 (4874)
    Entity - Sparse - 12020 (32349) - Includes two alternate hierarchies for rolling up the data
    The other properties are as follows:
    Index Cache - 152000
    Data File Cache - 32768
    Data cache - 153600
    ACR = 0.65
    We are using Buffered I/O
    The level 0 data file is about 3 GB (2 years of Budget and 1 year 2 months of Actuals data).
    The customer is going to use Smart View to retrieve the data and has a Planning Plus license only, so we could not go for an ASO solution. We could not reduce the members of the huge sparse dimensions M1 and CP either. To improve the data retrieval time, we had to make upper-level members stored, which resolved the retrieval issue.
    I am seeking for help on the following:
    1. How can we optimize the time taken? Currently each dimension takes about an hour to aggregate. CALC DIM takes even longer, hence we opted for AGG.
    2. Will changing the dense and sparse settings help our cause? The ACR is on the lower side. Please note that most calculations are on either the Period dimension or FC; there are no such calculations on the Account dimension.
    3. Will changing a few non-level-0 members from stored to dynamic calc help? Will this slow down calculations in the cube?
    4. What should be the best performance order for this cube?
    Appreciate your help in these regard,
    Regards,
    Sukhamoy

    Please provide the following information:
    1) Block size and other statistics
    2) Aggregation script
    >>Index Cache - 152000
    >>Data File Cache - 32768
    >>Data cache - 153600
    Try these settings:
    Index Cache - 1120000
    Data cache - 3153600
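    If you want to script those cache changes in MaxL rather than set them in EAS, a rough sketch with hypothetical application/database names would be something like the following (the figures above are the KB values shown in EAS; double-check the size-string units for your release):
    /* increase BSO index and data caches before running the aggregation */
    alter database Plan.Budget set index_cache_size 1120000KB;
    alter database Plan.Budget set data_cache_size 3153600KB;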

  • Error While doing aggregate operaion in ASO cube using MaxL

    Hi,
    When I try to run a MaxL script against an ASO application to aggregate the cube, I end up with the error message shown below:
    ERROR - 1270102 - Merge and view build operations cannot be performed while data load buffers exist.
    The MaxL I used is "execute aggregate process on database app_name.db_name stopping when total_size exceeds 1.5".
    Please guide me on what caused this issue.
    Thanks
    Sathish

    Is it working now?
    If not, maybe you have other buffers that still exist. You can also destroy buffers, e.g. alter database ASOSamp.Sample destroy load_buffer with buffer_id 1;
    Restarting the database should also clear the buffers, so try restarting and then aggregating to see if it was a buffer issue.
    Cheers
    John
    http://john-goodwin.blogspot.com/
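    A short MaxL sketch of that check-then-clean-up sequence, reusing the ASOSamp.Sample example above (verify the exact statements against the MaxL reference for your release):
    /* see whether any ASO load buffers are still outstanding */
    query database ASOSamp.Sample list load_buffers;
    /* destroy a leftover buffer, then re-run the aggregation */
    alter database ASOSamp.Sample destroy load_buffer with buffer_id 1;
    execute aggregate process on database ASOSamp.Sample stopping when total_size exceeds 1.5;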
