RSRT aggregate recommendations

How can I use RSRT to get aggregate recommendations? I am running a query that takes too long, and I want to see if RSRT can give me any recommendations, but I am not sure how.

Hi Uday.
When you execute the query from RSRT, choose "Execute + Debug" and select "Display aggregates found". This will list the InfoObjects, hierarchies, etc. that were used at query execution. Alternatively, in aggregate maintenance you can generate an aggregate based on a query, which will propose an aggregate with the same characteristics and hierarchies. Just choose "Propose" -> "Propose from query".
Another idea is to check why and where the query is taking so long: database, front end, etc. You can check this in transaction ST03 (or the RSDDSTAT table) -> Expert mode -> BW system load -> Today -> switch the aggregation to query instead of cube.
Hope it helps.
BR
Stefan

Similar Messages

  • RSRT - Aggregates

    Dear Experts,
    A report was taking a long time to execute.
    I used transaction RSRT with the "Aggregates" option.
    It displayed a few columns in red.
    What does that mean?
    As per my understanding, those calculations are taking more time to execute.
    So I created aggregates with the "Propose by system" option.
    My issue was solved.
    I need clarification on:
    1. What do the red columns mean, and why are the characteristics not displayed in red?
    2. How do I know whether aggregates are necessary for a report?
    3. What is the correct procedure to find this out?
    Kindly clarify.
    Regards
    Sridhar

    Hi,
    1. What do the red columns mean, and why are the characteristics not displayed in red?
    Only key figures are aggregated based on characteristics, hence only the columns are in red. Characteristics cannot be aggregated.
    2. How do I know whether aggregates are necessary for a report?
    3. What is the correct procedure to find this out?
    To get an answer to above , check the link below:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2?quicklink=index&overridelayout=true
    -Vikram

  • Splits for aggregates in rsrt

    When I go to RSRT and Execute+Debug my query with aggregates selected, it shows splits like split 1, split 2, etc. What does a split mean?

    Each split is like a small sub-query telling you the parameters used in that particular part of the query. Depending on how your aggregates are set up, one aggregate may be sufficient for a number of splits.
    If you have 'Fixed' parameters, you may find one aggregate is sufficient for most splits, but if you have hard-coded values then the aggregate is less likely to be useful for many splits.
    You need to find a balance between putting in place appropriate aggregates, without building aggregates that are so big they become almost useless.
    The number of splits you see in RSRT is determined by your query definition, not by your aggregates.
    Hope I have understood your question
    Dave
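Dave's rule of thumb can be sketched as a simple coverage check: an aggregate can serve a split only if it contains every characteristic that split uses. A minimal Python sketch with made-up split and aggregate definitions (this models the idea, not any SAP API):

```python
# Illustrative model (not an SAP API): each split is a sub-query with its
# own set of characteristics, and an aggregate can serve a split only if
# it contains every characteristic the split uses.
def aggregates_for_split(split_chars, aggregates):
    """Return the names of aggregates whose characteristics cover the split."""
    return [name for name, chars in aggregates.items() if split_chars <= chars]

# Hypothetical aggregates: names and characteristic sets are made up.
aggregates = {
    "AGGR1": {"0CALMONTH", "0CUSTOMER"},
    "AGGR2": {"0CALMONTH", "0CUSTOMER", "0MATERIAL"},
}
```

With these definitions, a split that only needs month and customer is covered by both aggregates, while a split that also needs material is covered only by the larger one.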

  • Questions on cache and general RSRT settings for plancube

    Hi,
    we would like to:
    1) set request status 1 in RSRT for our planning queries, in order to automatically refresh the query after executing a planning function (the problem we have now is that the results of a planning function are not automatically updated in the query; only when doing something else, like executing another function, saving, or checking locks, are the results visible).
    2) activate the delta cache for our planning queries
    we have read OSS note 1136163 on RSRT settings. It says:
    Aggregation level "A" is implemented internally by the automatically created query "P/!!1P" (plan buffer query). This query acts like an InfoProvider. It reads the data of the database or provides the data from InfoProvider "P", it adds the data of the delta buffer "Dp" (or the delta buffers Dpi if P is a MultiProvider with several PartProviders Pi that can be planned) and transfers the data to the data manager as data of the InfoProvider "A" of type "ALVL". The query "P/!!1P" can use aggregates and the cache; this is exactly like any normal query on "P". If "P" is a MultiProvider, it is useful to set PARTITIONMODE to "1".
                  For the query "P/!!1P" that is created automatically for an aggregation level or for all aggregation levels using the InfoProvider P, we recommend the following setting:
                  Read mode "H", request status "1", cache mode "1" or higher, delta cache "true" and SP grouping "1".
                  Furthermore, the selection to use the structure element (KIDSEL) should be "true".
                  The input-ready queries in "A" should not, and cannot, use a cache. The request status is irrelevant since queries in "A" are automatically set to current data. The delta buffer does not currently support hierarchy processing. Therefore, aggregation level "A" cannot completely support read mode "H". For input-ready queries at A:
                  Read mode "X", request status "0", cache mode "0".
                 The delta cache and SP grouping are not visible
    Problems we have:
    1) query P/!!1P (PCA_AGQF/!!1PCA_AGQF in our example) does not allow changing the request status (greyed out); it now has the value 0 instead of 1. It also does not allow activating the delta cache flag. How can we change this? In RSDIPROP we have set PARTITIONMODE to 1 for the MultiProvider and activated the delta cache flag...
    2) can we use the cache / delta cache principle for our planning queries? If so, how do we ensure these settings remain activated in RSRT?
    regards
    Dries

    Hi,
    To change the cache settings for your cube:
    Open the cube in RSA1 and click 'Change'.
    - Click the 'Environment' menu;
    - expand 'InfoProvider Properties';
      - select the option 'Change'.
    You will be able to set the cache mode for this provider.
    I don't think it is possible to use the cache for a MultiProvider.
    Regards,
    Amit

  • Query for functioning of Aggregate

    Hello experts,
    Could you please clarify one fundamental question on the working of an aggregate?
    I have a cube with the following information:
    CHAR1        CHAR2     CHAR3          KF1
    I have built an aggregate with:
    CHAR1         CHAR2         KF1
    My doubt is here.
    For example:
    If I create a query with CHAR1 and CHAR2 in rows and KF1 in columns, then the data is fetched from the aggregate instead of the cube, thereby reducing the time. Is this correct?
    OR
    If I create a query with CHAR1, CHAR2 and CHAR3 in rows and KF1 in columns,
    then the data is read from the cube and the aggregate never comes into the picture.
    Could you please clarify how exactly the aggregate works?

    Hi,
    Go to transaction RSRT -> give the report name -> Execute+Debug -> a popup screen will appear with multiple checkboxes -> select the "Display aggregates found" option. It will show all the aggregates that the query hits.
    To propose aggregates, follow the procedure below to improve the query performance:
    First execute the query in RSRT, the one for which you need to build aggregates. Check how much time it takes to execute, and whether it is necessary to build an aggregate for this query at all. To get this information, go to SE11 -> give the table name RSDDSTAT_DM (BI 7.0) or RSDDSTAT (BW 3.x) -> Display -> Contents -> give today's date as the from and to date values, your user name, and the query name -> execute.
    Now you will get a list with fields like object name (report name), time read, InfoProvider name (MultiProvider), PartProvider name (cube), aggregate name, etc. A time read of less than 100,000,000 microseconds (100 sec) is acceptable. If the time read is more than 100 sec, it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
    Again go to RSRT -> give the query name -> Execute+Debug -> in the popup, select the "Display aggregates found" checkbox -> continue. If any aggregates exist for the query, they are displayed first. When you press the Continue button, it displays which fields are coming from which cube; copy this list of objects, on which the aggregate can be created, into a text file.
    Then select that particular cube in RSA1 -> context menu -> Maintain Aggregates -> Create by yourself -> click the Create Aggregate button at the top left -> give a description for the aggregate -> continue -> take the first object from the list and click the Find button in the aggregate creation screen -> give the object name and search -> drag and drop that object into the aggregate on the right side (drag and drop all the fields like this into the aggregate).
    Activate the aggregate; it will take some time. Once the activation finishes, make sure the aggregate is in switched-on mode.
    Execute the query from RSRT again, find the time read, and compare it with the first time read. If it is less than the first time read, you can propose this aggregate to increase the performance of the query.
    I hope this helps. Go through the links below to understand aggregates more clearly.
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Hope it helps you.
    Regards,
    Ramki.
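The 100-second rule of thumb above can be written as a tiny check. A minimal Python sketch; it assumes, as the post implies, that the time-read value from RSDDSTAT/RSDDSTAT_DM is stored in microseconds:

```python
MICROSECONDS_PER_SECOND = 1_000_000
THRESHOLD_SECONDS = 100  # rule of thumb quoted in the post above

def needs_aggregate(time_read):
    """Return True when the statistics runtime (assumed to be in
    microseconds, matching '100,000,000 = 100 sec' in the post)
    exceeds the 100-second rule of thumb."""
    return time_read / MICROSECONDS_PER_SECOND > THRESHOLD_SECONDS
```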

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
                            A. activating an aggregate?
                            B. switching an aggregate off/on?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and free characteristics of that query, or is there anything else we need to include?
    5. Does 'DB statistics' in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If the aggregate status is red and the aggregate is being filled, it is an initial fill: filling it means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding an aggregate means reloading the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate, and what is the difference (conceptually and in time consumption) between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching an aggregate off/on?
    Switching off an aggregate means it will not be used by the OLAP processor, but it still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading data into the aggregate.
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and free characteristics of that query, or is there anything else we need to include?
    Run the query in RSRT, look at the SQL view of the query, check the characteristics used in the query, and include the same in your aggregate.
    5. Does 'DB statistics' in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
    Up-to-date statistics lead to better execution plans on the database and hence possibly better performance, but it cannot be taken for granted that refreshing statistics will improve query performance.

  • Aggregate build suggestions

    Hello Experts,
    We are trying to do performance tuning on a BW3.5 setup ... Currently, we are focusing on queries and looking at the feasibility of building aggregates for performance improvement. We find that most queries see data at the very granular level as follows:
    Rows
    Profit centre group -> Profit Centre-> Product hierarchy level 1
    Columns
    Actual, Planned, Difference.
    Now the proposal generated for the queries does not give profit centre group, but only profit centre, in the first dimension. Please note that profit centre group is a navigational attribute of profit centre (there is a message that says profit centre cannot be in the aggregate because of the presence of profit centre group).
    Now my question is: in order for these queries to hit the aggregate, should I introduce both profit centre group and profit centre in the aggregate? In doing so, do I risk creating a bigger aggregate?
    I am not sure if I am making sense, but please feel free to ask questions and I will explain more if needed. My question, to be precise, is whether all the characteristics required to view the data in a query need to be present in the aggregate.
    Many thanks in advance for all your inputs..
    Regards,
    Solomon

    Hi Solomon,
    If your query is to hit the aggregate, then all the characteristics in the selections, filters, default values, rows, columns, restricted key figures, and exception aggregation must be present in the aggregate.
    You can execute the query in RSRT -> Execute+Debug -> with the "Display aggregates found" option.
    This will tell you exactly which characteristics must be present so that your default view hits the aggregate.
    Needless to say, if you plan to drill down the report with any characteristic from the free characteristics, that must also be present in the aggregate.
    Now coming to your confusion about profit centre and profit centre group: since profit centre group is a navigational attribute of profit centre, you need not (and cannot) place it in the aggregate when profit centre is already present.
    However, if the query is executed with this navigational attribute, it will still hit the aggregate (this can be checked in RSRT).
    Thanks,
    Krishnan
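Krishnan's coverage rule can be sketched as a set-inclusion check: collect every characteristic the query touches and test whether the aggregate contains them all. A minimal Python sketch with a hypothetical query definition (not an SAP API):

```python
def query_hits_aggregate(query_parts, aggregate_chars):
    """A query can use an aggregate only if every characteristic it touches
    (rows, columns, filters, RKFs, drilldowns, ...) is in the aggregate."""
    used = set().union(*query_parts.values())
    return used <= aggregate_chars

# Hypothetical query definition grouped by where each characteristic is used.
query = {
    "rows": {"0PROFIT_CTR"},
    "filters": {"0CALYEAR"},
    "free_drilldown": {"0COMP_CODE"},
}
```

Dropping even one of the used characteristics from the aggregate (here, the free-characteristic drilldown) makes the check fail, which is exactly why drilldown characteristics must be included too.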

  • Report using aggregates or not

    Dear All,
    how can I identify whether my report is using aggregates or getting data directly from the cube?
    regards,
    Anil

    Hi,
    Go to RSRT -> Execute and Debug -> check 'Display statistics data' -> execute the query -> go back.
    If the InfoProvider column shows an aggregate's (numeric) technical name, the query hit the aggregate.
    You can also check this in table RSDDSTAT_DM.
    -Mayuri

  • Transaction RSRT and aggregates

    When I execute transaction RSRT with Execute + Debug and select the option to show aggregates, the process generates a dump with the following text:
    If the error occures in a non-modified SAP program, you may be able to  
    find an interim solution in an SAP Note.                                
    If you have access to SAP Notes, carry out a search with the following  
    keywords:                                                                               
    "RAISE_EXCEPTION" " "                                                   
    "SAPLRRSV" or "LRRSVF03"                                                
    "RAISE"                                                                               
    or                                                                               
    "SAPLRRSV" "IOBJ_VALUE_NOT_VALID"                                                                               
    or                                                                               
    "WRITEQUERY " "IOBJ_VALUE_NOT_VALID"                                    
    Query is on multiprovider.
    Thanks for your help.

    Hi,
    Please check the OSS Note:
    877308
    -Vikram

  • How to create suitable aggregates for queries on multiprovider ?

    hi all,
    My goal is to reduce the DB time of a query and improve performance.
    I have queries on a MultiProvider with 5 cubes underneath. I am having a performance issue with one of the cubes: it has a high selection/transfer ratio, and the same cube has 94% DB time. All the BW and DB indexes and statistics are green. I chose the path of aggregates. When I tried the proposal function, it asked for a query time and date range; I gave the last 3 days and a query time of 150 sec. It suggests a huge number of aggregates, around 150, and the number is not reduced much when I try the optimize functionality.
    The problem cube has nearly 9 million records and 4 years of data.
    1. Generally, how many aggregates do we need to create on a cube?
    2. How do I use "Propose from last navigation"? It is not creating any aggregates.
    3. Is there a way for the system to propose fewer aggregates?
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that? I created an aggregate with time characteristic 0CALQUARTER and dragged in 0CALDAY and 0CALMONTH, activated and filled it, but the query does not hit it when I do a monthly selection. I tried bringing in all the other dimensions except line-item dimensions; no use, the query does not hit the manual aggregates in RSRT. The selection on 0CALQUARTER is *.
    5. Should I change it to a fixed value, bring in the line items too, and create it?
    6. I wanted to try the "Propose aggregate from query" option, but my query is on a MultiProvider, and I am not able to copy it to a cube. Please help me find suitable aggregates for a query on a MultiProvider.
    7. Should I create new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? But in that case the SELECT statement changes with each drilldown; how do I handle that?
    8. How can I make sure the aggregates improve the runtime of all queries?
    9. Please suggest other approaches, if any, with procedures.
    This is an urgent problem; please help.
    Thanks in advance; points will be assigned for inputs.

    1. Generally, how many aggregates do we need to create on a cube?
    It depends on your specific needs; you may need none or several.
    2. How do I use "Propose from last navigation"? It is not creating any aggregates.
    Can you elaborate?
    3. Is there a way for the system to propose fewer aggregates?
    In one of the menus of the aggregate creation screen there is an option for the system to propose aggregates for one specific query; I am not sure it works with MultiProviders.
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that?
    You should delete 0CALDAY from the aggregates in order to accumulate data at any larger time unit. Another solution for time is to look into partitioning the cube.
    5. Should I change it to a fixed value and bring in the line items too?
    Can you elaborate?
    6. I wanted to try the "Propose aggregate from query" option, but my query is on a MultiProvider and I am not able to copy it to a cube.
    Answered before; maybe you can create a query using only the data of that cube that appears in the MultiProvider query, in order to propose an aggregate on that cube.
    7. Should I create new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? In that case the SELECT statement changes with each drilldown; how do I handle that?
    It is not recommended to create new indexes on multidimensional structures. Try to avoid selections on navigational attributes; if necessary, add the navigational attribute as a dimension characteristic, and put filters in the filter section in BEx.
    8. How can I make sure the aggregates improve the runtime of all queries?
    Try transaction ST03.
    9. Please suggest other approaches, if any, with procedures.
    Some other approaches are covered in the answers above.
    Good luck

  • Query to read more than one aggregate at same time - is it possible?

    Hi,
    We have 2 aggregates set up. One contains data for 2007, the other for 2006. These are restricted by setting fixed values for 0CALYEAR.
    When running the query for a single year, the correct aggregate is picked up depending on which year is input.
    When running the query for a range that spans both years, no aggregate is picked up and the cube is read instead.
    Is there a system setting somewhere (the RSADMIN table, possibly?) that allows multiple aggregates to be used when the query is executed?
    Your help would be greatly appreciated.
    Thanks,
    Steve

    Hi Rodolphe,
    Thanks for the answer.
    Unfortunately we have too many queries against the cube in question to create and utilise restricted key figures per year.
    I'm looking for a quick-fix solution here, if one exists - i.e. possibly a parameter setting.
    I of course always use RSRT when testing the use of aggregates.
    When I enter the range of years I get the following message:
    Not possible because -
    '____Characteristic/attribute 0CALYEAR does not have aggregation level */%'
    This is displayed against both the aggregates for 2006 and 2007.
    When using only one year then it correctly finds the aggregate for that year.
    Thanks,
    Steve
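The symptom described in this thread is consistent with a simple matching rule: the whole selection must fit inside a single aggregate's fixed values, and two aggregates are not combined to answer one selection. A minimal Python sketch of that rule, with hypothetical aggregate names (this models the observed behaviour, not the actual OLAP code):

```python
def aggregate_for_selection(years_selected, aggregates):
    """Return the first aggregate whose fixed 0CALYEAR values cover the
    whole selection, or None. The sketch assumes one aggregate must cover
    the entire selection on its own, matching the behaviour reported in
    the thread: a range spanning two fixed-value aggregates hits neither."""
    for name, fixed_years in aggregates.items():
        if years_selected <= fixed_years:
            return name
    return None

# Hypothetical aggregates fixed to single years, as in the question.
aggregates = {"AGGR_2006": {2006}, "AGGR_2007": {2007}}
```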

  • Performance ISSUE related to AGGREGATE

    Hi gems, can anybody give a list of issues we can face related to aggregate maintenance in a support project?
    It's very urgent; please respond to my issue.
    Any link or anything, please send it to me.
    My mail id is
        [email protected]

    Hi,
    Try this.
    The "---" signs show the valuation of the aggregate; for example, -3 is a rating of the aggregate's design and usage. "++" means that its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio and access are not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Use program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Check SE11 -> table RSDDAGGRDIR. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    /people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Thanks,
    JituK

  • How does a query read an aggregate?

    Experts!
    I have a problem with my aggregate design.
    I have created an aggregate on one of my cubes. When I check in RSRT debug mode, it looks like the query is not hitting the aggregate I just created; it goes to another aggregate.
    Now, does this mean the query will always go to the same aggregate? Or when users pull different characteristics from the free characteristics, might my query jump to other aggregates?
    Or does the query stick to whatever aggregate it hits at the beginning?
    How does the process work?
    thanks

    When you're not sure how to design a good aggregate, let the system propose one for you. However, you have to use the cube in question for some time first, because the system needs to gather statistics before it can propose a good one.
    Designing an aggregate (drag and drop) is easy, but designing a good one is not as easy as it looks. It requires some skill, but the good news is that skill can be learned.
    When you execute a query, OLAP Processor will look for data (based on the criteria) in the following order.
    Local OLAP Cache
    Global OLAP Cache
    Aggregate
    Cube
    The goal is for the OLAP processor to hit one of the first 3; then bingo, a good hit. But if all of them are missed, it has to go to the cube to fetch the data, which defeats the purpose of the aggregate.
    Remember, the main purpose of an aggregate is to speed up data retrieval, but there is associated overhead. You should check the ratings and delete bad aggregates.
    Cheers.
    Jen
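Jen's lookup order can be sketched as a simple fallback chain. A minimal Python sketch in which each source is modelled as a dict keyed by the query selection (purely illustrative, not how the OLAP processor is implemented):

```python
def read_data(selection, local_cache, global_cache, aggregate, cube):
    """Illustrative fallback chain: local OLAP cache -> global OLAP cache ->
    aggregate -> cube. Returns which source answered and its data."""
    for source_name, source in [("local cache", local_cache),
                                ("global cache", global_cache),
                                ("aggregate", aggregate)]:
        if selection in source:
            return source_name, source[selection]
    # All three missed: read the InfoCube itself (the slow path).
    return "cube", cube[selection]
```

The "good hit" Jen describes is any return from the first three sources; only a complete miss falls through to the cube read.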

  • How can I choose characteristics while building aggregates on zsd-c03

    Hi Experts,
    I would like to build aggregates on my sales InfoCube. This InfoCube has 6 dimensions.
    When building aggregates, how would I know which characteristics to choose for the aggregates? Is there any transaction code to measure the size of a dimension?
    Can you please share your valuable ideas?
    Regards
    Siri

    Hi
    Generally, when we create aggregates, we follow some guidelines:
    1. The aggregation ratio is more than 10.
    2. The DB ratio is more than 30.
    We can check these in the following transaction codes:
    ST03, RSDDSTAT and RSRT.
    When you create an aggregate, you get a popup asking whether to create it yourself or from system proposals. You can get information from the system proposals as well.
    Assign points if useful.
    Regards,
    Hari
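Hari's two guidelines can be combined into one rule-of-thumb check. A minimal Python sketch; the interpretation of the two ratios (records read per record delivered, and DB time as a percentage of total runtime) is a common reading of these guidelines, and the parameter names are illustrative:

```python
def aggregate_worthwhile(records_read, records_transferred, db_time, total_time):
    """Rules of thumb from the post above: an aggregation ratio (records
    read from the DB per record delivered) above 10, and a DB share of
    the total runtime above 30%, suggest an aggregate will pay off.
    The exact interpretation of 'DB ratio' is an assumption here."""
    aggregation_ratio = records_read / records_transferred
    db_share_percent = 100.0 * db_time / total_time
    return aggregation_ratio > 10 and db_share_percent > 30
```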

  • Best way to aggregate large data

    Hi,
    We load actual numbers and run aggregation monthly.
    The data file grew from 400k lines to 1.4 million lines. The aggregation time grew proportionately and now takes 9 hours. It will continue to grow.
    We are looking for a better way to aggregate data.
    Can you please help in improving performance significantly?
    Any possible solution will help: ASO cube and partitions, different script of aggregation, be creative.
    Thank you and best regards,
    Some information on our environment and process:
    We aggregate using CALC DIM(dim1,dim2,...,dimN).
    Windows server, 64-bit
    We are moving from 11.1.2.1 to 11.1.2.2
    Block size: 70,000 B
    Dimensions (Type, Members, Sparse Members); bold and underlined dimensions are aggregated:
    Account: Dense, 2523, 676
    Period: Dense, 19, 13
    View: Dense, 3, 1
    PnL view: Sparse, 79, 10
    Currency: Sparse, 16, 14
    Site: Sparse, 31, 31
    Company: Sparse, 271, 78
    ICP: Sparse, 167, 118
    Cost center: Sparse, 161, 161
    Product line: Sparse, 250, 250
    Sale channels: Sparse, 284, 259
    Scenario: Sparse, 10, 10
    Version: Sparse, 32, 30
    Year: Sparse, 6, 6

    Yes, I have implemented ASO, though not in relation to Planning data; it has always been for larger actuals reporting requirements. In the newer releases of Planning they are moving towards integrated ASO reporting cubes, so that where the planning application has large volumes of data you can push data to an ASO cube to save on aggregation times. For me the problem with this is that in all my historical Planning applications there has always been a need to aggregate data as part of the calculation process, so the aggregations were always required within Planning, and having an ASO cube would not really have saved any time.
    So really the answer is yes, you can go down the ASO route, but having data aggregate in an ASO application would need to fit your functional requirements. The biggest question would be: can you do without aggregated data within your Planning application? Also, it's worth pointing out that even though you don't have to aggregate in an ASO application, it is still recommended to run aggregations on the base-level data; otherwise your users will start complaining about poor-performing reports. They can be quite slow, and if you have many users then this will only be worse. Aggregations in ASO are different, though. You can run them in a number of different ways, but the end goal is to have aggregations that cover the most commonly run reporting combinations, so you are not aggregating everything and the aggregations are therefore quicker to run. But more data will still result in more time to run an aggregation.
    In your post you mentioned that your actuals have grown, that the aggregations have grown with them, and that they will continue to grow. I don't know anything about your application, but is there a need to keep loading and aggregating all of your actuals each month? Why don't you just load the current year's actuals (or the periods of actuals that are changing) each month and only aggregate those? Are all of your actuals really changing all the time, requiring you to aggregate all of the data each time? Normally I would only load the actuals required to support the planning and forecasting exercise. Any previous years' data (actuals, old forecasts, budgets, etc.) I would archive, keeping an aggregated static copy of the application.
    Also, you mentioned that you had CALC PARALLEL set to 3 and then moved to 7, but did you set TASK DIMS at all? The reason I ask is that if you didn't, your CALC PARALLEL would likely give you no improvement at all. If you don't set it to the optimal value, then by default Essbase will try to parallelize using the last dimension (in your case Year), so the calc is not really broken up (this is a very common mistake when CALC PARALLEL is used). Setting this value in older versions of Essbase is a bit trial and error, but the saying goes that it should be set to at least the last sparse aggregating dimension to get any value. So in your case the minimum value should be TASK DIM 4, but it's worth trying higher, so 6. Try 4, then 5, then 6; as I say, trial and error. But I will say one thing: by getting your CALC PARALLEL correct you will save much more than 10% on aggregations. You say you are moving to 11.1.2.2, so I assume you haven't run this aggregation on that environment yet? There the TASK DIM setting is not required; Essbase will calculate the best value for you, so you only need to set CALC PARALLEL.
    Is it possible for you to post your script? Also, I noticed in your original message that for Company and ICP the member numbers on the right are significantly smaller than those on the left. Why is this? Do you have dynamic members in those dimensions?
    I will say that 6 aggregating dimensions is always challenging, but 9 hours does sound a little long to simply aggregate actuals, even for 1.4 million records.
