CALC DIM or AGG - Can we aggregate from higher levels?

Hello everyone,
We have a custom dimension called LOCATION, under which we have the continents ASIA and EUROPE. Under ASIA and EUROPE we in turn have countries (India and England), under which there are states. Data may be entered at the continent level or at the country level. If we do a normal AGG or CALC DIM operation, the higher-level data gets overwritten, since aggregation is done bottom-up from the level-0 members.
Can we aggregate from a higher level (say Country or Continent) in the dimension up to the LOCATION level, i.e. override the default level-0-up aggregation?
PS: Creating a dummy member would not be an elegant solution: I have presented the problem in a much simplified form, there are many levels of hierarchy, and a dummy would be required under each hierarchy level.
Any suggestion would be of great help.
Thanks..
Sayantan

I know you are against creating dummy members, but if you don't create them, there is no way to know for sure that you are aggregating correctly. Most people want to see the dummy members just so they know why the parent doesn't equal the sum of all the children.
We have a customer dimension with over a million members, and quite a few of them are loaded at the parent level. Our parent-child hierarchy (not the dim) gets built from the fact table plus the dim table. So if the ETL finds a record in the fact table for a member that has children in the dim table, it creates a dummy member to load the data to. This has worked well for us. Good luck.
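To make that pattern concrete, here is a minimal calc-script sketch of the dummy-member idea (this is not the poster's actual ETL; the member names "India" and "India_Input" and the dimension name "Location" are illustrative assumptions):
/* Preserve data loaded at the parent by moving it to a dummy leaf member, */
/* then let the normal bottom-up aggregation roll it back up. */
DATACOPY "India" TO "India_Input";
CLEARDATA "India";
AGG("Location");
After the AGG, "India" equals the sum of its real children plus "India_Input", so the value loaded at the parent is preserved instead of being overwritten.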

Similar Messages

  • Can you explain about Higher-level item category and item usage?

    Hi all,
    Can you explain higher-level item category and item usage?
    Thanks, all

    Higher-level item category and item usage are used in item category determination.
    Let me take the example of item category TANN (free-of-charge item).
    Item category determination: Sales document type + Item category group + Usage + Higher-level item category
    TA + NORM + FREE + TAN = TANN
    Higher-level item category: the category on which this item category is dependent. In this example TANN is a free item belonging to TAN; therefore TAN is the higher-level item category.
    Item category usage: it controls the system response during document processing. Each line item has a specific usage, and the system responds according to that usage: FREE for free-of-charge items, TEXT for text items, and so on.
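    For comparison, the main item itself is typically determined with blank usage and blank higher-level item category, i.e. TA + NORM + (blank) + (blank) = TAN (standard delivered configuration; individual systems may differ).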
    Regards
    AK
    Reward points if helpful

  • Does anyone have a sample implementation plan that can be shared?  High level?

    Does anyone have a sample implementation plan that can be shared?  High level?

    You will probably need to inquire with a VMware consultant to get this kind of information. VMware depends on these people to keep the reputation of the software at a very high level.
    They will have access to various free tools to help with large- and small-scale deployments, such as the VMware Health Check Script and the ESX deployment tool.
    If you find this information useful, please award points for "correct" or "helpful".
    Wes Hinshaw
    www.myvmland.com

  • Calc Dim vs Agg

    My understanding is that when you want to aggregate a dense dimension you use CALC DIM in the calc script, and when you want to aggregate a sparse dimension you use the AGG statement.
    What is the difference between the two processes?
    Please advise.

    Does the definition of AGG from the documentation not help?
    "The AGG command performs a limited set of high-speed consolidations. Although AGG is faster than the CALC commands when calculating sparse dimensions, it cannot calculate formulas; it can only perform aggregations based on the database structure. AGG aggregates a list of sparse dimensions based on the hierarchy defined in the database outline. If a member has a formula, it is ignored, and the result does not match the relationship defined by the database outline."
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • APO DP: Disaggregation to product&plant level from higher levels.

    Hi.
    We do demand planning on groups of products and for country/region in general; we have around 48,000 CVCs in our current setup. It works very well.
    A new situation has arisen where we need to have the forecast split down to product and plant level.
    As is, we simply don't have the information at this level of granularity.
    I don't see how we can add, for instance, product to our setup; we have around 20,000 products, so the number of CVCs in DP would become massive if we did this.
    I was thinking that perhaps something could be done by exporting the relevant key figures to a new DP setup with fewer characteristics (to keep the number of CVCs down) via some InfoCubes; perhaps some disaggregation could be done via tables and the BW update rules. This still leaves the issue of how to get the figures properly disaggregated to plant and product, though.
    Does anyone have experience with getting figures split to lower levels from DP when you're planning at a higher level?

    Simon,
    One approach, as you mentioned, can be creating a Z table in which you set up the disaggregation proportions from product group level to product level or product/location level, for example:
    Product Group X  100    Product A@loc1  10
                            Product B@loc1  90
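    With those proportions, a group value of 500 for Product Group X would be disaggregated into 50 for Product A@loc1 and 450 for Product B@loc1 (10% / 90%).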
    Download your planning area data into InfoCube C, and then use BW routines to convert the data from group level in InfoCube C to the lower level, referring to the Z table, into another InfoCube.
    SAP also provides standard functionality for splitting an aggregate demand plan into a detailed SNP plan, through functionality such as location split or product split.
    Essentially you would be using the same concept in your BW solution, or you may also want to consider releasing your DP to an SNP planning area as a solution for disaggregating the data to a lower level.
    Regards,
    Manish

  • Where can I find various high level examples of workflows being used

    I am about to start a project with TCS 3.5 and have been participating in the Adobe webinars to help me learn the components and specific techniques, but what I am lacking is an understanding of the various workflows I could model my project after or take bits from. Why start with FrameMaker in this workflow versus RoboHelp or even Word? Questions like this, I think, come from experience with the process, and what I am getting myself into is a chess game with all these pieces; I don't want to paint myself into a corner by traveling down one route. I have seen several workflow graphics [images not shown], but they are too generic and do not contain enough information to really understand the decision-making process one must go through on various projects.
    Can we have a series of webinars made, all with the underlying theme of defining a working process or workflow, by having guests describe how they have used or are using this suite in real life on their own projects? One that might include a graphic showing the routes taken through the suite, with the reasons why?
    My project hopes to make a single-source internal site that will tie together various 3D portable industrial coordinate metrology systems (hardware and software). It would be used as a dispersal site for help, communications between users and SMEs, OEM information, QA requirements, established processes, scripting snippet downloads, statistics, and training (including SOJT). Portable industrial metrology has 8 different software packages in use and, right now, about 8 different instruments. These include laser trackers and radars, articulated arms, scanners, and structured white and blue light, to name a few. The software includes Spatial Analyzer, Veriserf, CompIT, eMscon, and AXYZ, to name a few there as well. I want to be able to participate in and add content to an internal SharePoint site, push content to users of stand-alone workstations, publish to ePub, capture knowledge leaving the company through attrition, develop easy graphic-rich job aid sheets, and aid in evaluations of emergent software and hardware. I would also like to leave open the option of using the finished product as a Rosetta Stone-like translator between the software packages: doing this in one package is the equivalent of doing this in these other packages, for example.

    PDF is definitely a format I want to include, to collaborate with other divisions and SMEs for one reason, but also for the ease of including 3D interactive target models within it, and for portability. I plan on being able to provide individual PDFs that are very specific in their topics and to also use them to disperse user guides, cheat sheets or job aids... something the user may want to laminate on their own and keep with them for reference, printed out. Discussion in these sheets would be drastically reduced to only the elements, relying heavily on bullet points or steps, useful graphs, charts and tables... and of course illustrative images. I am thinking that these should be downloadable buttons to print on each topic section, not in a general appendix or such. They would hopefully be limited to one page, double-sided, 8x10.
    The cheat sheet would have a simple flow chart of how or where this specific topic fits in the bigger picture,
    The basic steps,
    Illustrations, equipment, setup
    Software settings for various situations in a table or chart,
    Typical result graph to judge with,
    Applicable QA, FAA regulation settings or concerns,
    Troubleshooting table,
    Topic SME contact info
    On the back, a screen shot infographic of software process
    The trouble here is that I have read that FM sometimes has problems successfully transferring highly structured or formatted material to RoboHelp. Does this then mean that I would take it from FM straight to PDF?
    Our OEM material is very high-level stuff... basically for engineers and not shop-floor users... but that is not to say they don't have some good material that could be useful. Our internal content is spread out across many different divisions and continents, with various ways of saying the same thing. This leads QA to interpret the information differently depending on where the systems are put to work. We also have FAA requirements that need to be addressed and of which the user must be reminded.
    Our company is also starting to see an exodus of the most knowledgeable users through retirement. Capturing the knowledge and soft-skill packages they have developed working here for 20-30 years is something I am really struggling with. I have only come up with two ideas so far:
    An internal user web-based forum
    Interviews (some SMEs do not want to make the effort of transferring knowledge by participating in anything if it requires an effort they don't see as benefiting themselves), to get video, audio or transcription records

  • In release strategy unrelease should happen from higher level

    Dear Users,
    In our release strategy we want to make a change: whenever a user wants to un-release, it should happen from the higher hierarchy level down to the bottom level, i.e. the reverse of the release process. If there is any solution for this, please provide your suggestions.
    Thanks,
    Manoj

    Hi,
    For restricting un-release from the higher hierarchy level to the bottom level, refer to this similar blog:
    Release Strategy: Restrict lower users to revoke PO after the complete release by superior
    Regards,
    Biju K

  • Can I get a higher level of os than what you can download?

    Can you change CPUs to get a higher level of OS X? I'm new to Macs, and this laptop was given to me; the software I want to load requires a higher level of OS X than 10.4.11. Is there anything I can physically do to update it?

    10.5.8 is the latest OS that will run on a PPC Mac. You would need to purchase it on the open market, as Apple no longer sells this version of OS X.
    Some source links:
    http://hardcoremac.stores.yahoo.net/
    http://www.welovemacs.com/apsyso.html
    http://www.buycheapr.com/us/result.jsp?ga=us14&q=leopard+10.5+os+x
    http://oldermac.hardsdisk.net/oldmac.html#hard

  • Transferring PIR from a higher level.

    Hi,
    I am having a problem here. Mine is a discrete manufacturing process, and I am using VC (variant configuration). A few of my components are common to 3 FERTs in the BOM, and one of the FERTs is a VC material. I don't want the requirements to be transferred from the VC material to the raw materials, but I do want the requirements transferred from the other two FERTs to the raw materials.
    How do I restrict the transfer of requirements for only the one material?

    Hi,
    Maintain the following material master settings for your FERT VC material:
    1. To not transfer dependent requirements to the lower level of the FERT:
    maintain 1 (Materials for dependent requirements are not planned)
    in Material Master --> MRP 4 view --> Dep. Requirements field.
    2. To transfer dependent requirements to the lower level of the FERT:
    maintain blank (Materials for dependent requirements are planned)
    in Material Master --> MRP 4 view --> Dep. Requirements field.
    Save the above settings for the respective FERT materials and then try the MRP run.
    Hope this resolves your problem.
    Regards,
    Tejas

  • Inheritance for the AAP Plans from a high level Org Unit code

    Hello,
    Does anyone know how I can create inheritance for AAP plans from a high-level org unit? We have a lot of reorgs and would like to turn on this functionality if it is possible.
    Thank you in advance.


  • Which is the optimal one among AGG and CALC DIM?

    Hi friends,
    I know the difference between AGG and CALC DIM, but I'm not sure which is optimal. Is there any specific situation in which we should go for one or the other?
    Thanks

    Hi John,
    I am responding to this post after a long period, but I found that it is perfectly relevant to the situation I am facing.
    We have been using CALC DIM for aggregation. Recently we replaced it with AGG, as I read that AGG is more optimal when you are aggregating sparse dimensions with fewer than 6 levels, and all the dimensions we are aggregating have fewer than 6 levels. But after using the AGG command my database size increased dramatically, from 53 GB to 90 GB. This was the only change I made in the process. Then I cleared all data, imported level 0, and did the aggregation using CALC DIM again. Now the database size is back to normal.
    I am not able to understand why this happened in my case. The AGG runs were even taking more time as compared to CALC DIM.
    It would be great if you could throw some light on this.
    Regards
    Vikas

  • About consolidation: CALC DIM, AGG, @IDESCENDANTS, @ANCESTORS

    I have some questions (and a few doubts) about commands and functions for data consolidation.
    1) CALC DIM is available for both dense and sparse dimensions and calculates member formulas. Is that right?
    2) AGG is available for sparse dimensions only and doesn't calculate member formulas, but it's faster than CALC DIM. Is that right?
    3) @IDESCENDANTS("X") consolidates from the bottom level up to the "X" member (included). Is that right? Is this function available for both dense and sparse dimensions, or for sparse only? Does it calculate member formulas?
    4) @ANCESTORS("X") consolidates from the "X" member (included) to the top of the hierarchy. Is that right? Is this function available for both dense and sparse dimensions, or for sparse only? Does it calculate member formulas?

    1. Correct.
    2. Correct.
    3. Correct; it works on both sparse and dense, including member formulas.
    4. Almost correct: @IANCESTORS("X") includes "X", while @ANCESTORS("X") only does the ancestors, not "X". It works on both sparse and dense, including member formulas.
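    For concreteness, a minimal calc-script sketch of the subset consolidation discussed in point 3 (the member name "East" and the FIX on "Actual" are assumptions, not from the original post):
    FIX("Actual")
        @IDESCENDANTS("East");   /* calculates East and everything below it */
    ENDFIX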

  • AGG and CALC DIM Essbase script recently started to grow our pag files

    We have an Essbase script that does nothing but AGG and CALC DIM, and for months it ran fine, in that it did not grow our Workforce cube. Starting in late January it began to grow its .pag files. The Workforce cube was 7 GB in Dec 2010; it has grown to 10 GB today. When I tested it, it grew our .pag files by 170 MB on the second run and by 70 MB on the third run. Has anyone seen this?

    Thanks a million, Cameron.
    1) I do dense restructures every night; apparently that does not remove all fragmentation (a way to check this is sketched after question 5 below).
    Last questions:
    2) I exported level 0, cleared all data, then imported the level-0 data. That should clear up all fragmentation, shouldn't it?
    3) After importing the level-0 data, I ran a simple CALC DIM calc script on the Accounts dim only, on this Workforce BSO cube that is only 400 MB. It took over 30 minutes. On my second and third runs of the same calc script, it took 9 minutes. My BSO cube grew a few MB. Can I assume that the blocks were built by the first run and that all subsequent runs will stay around 9 minutes, since the blocks have now been built?
    Here is the calc script
    SET CACHE HIGH;
    SET UPDATECALC OFF;
    SET CLEARUPDATESTATUS OFF;
    SET LOCKBLOCK HIGH;
    SET AGGMISSG ON;
    SET CALCPARALLEL 3;
    FIX (febscenario,Working)
    FIX(@RELATIVE(TTC,0),@RELATIVE(TCI,0),@LEVMBRS("Project",0),@RELATIVE("Total Employees",0))
    FIX(FY11, FY12, "Jan":"Dec")
    FIX("HSP_InputValue","Local","USD")
    CALC DIM ("Account");
    CALC TWOPASS;
    ENDFIX
    ENDFIX /* &YearNext */
    ENDFIX
    ENDFIX
    4) When I calc only FY11, it takes 3 seconds on the first through fourth runs of the above calc. However, when I calc FY12, it takes over 30 minutes the first time and 9 minutes subsequently. Why is that? Should I use SET CREATENONMISSINGBLK in my calc script?
    5) I am running the calc as the Essbase admin user. The level-0 text file I loaded is only 460 MB, and after the calc the BSO cube's .pag files are only 420 MB. We are thinking of calc'ing older scenarios for historical purposes, but I am not sure if that will degrade calc performance. My experience has been that increasing the size of the BSO cube by calc'ing degrades future calc times. Is that your experience?
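    One way to check how much of this is fragmentation (question 1 above) is the average clustering ratio. A minimal MaxL sketch, with the application and database names assumed:
    query database Workforce.Workforce get dbstats data_block;
    A ratio well below 1 suggests the .pag files are fragmented, and an export/clear/reload of level 0 (as in question 2) should shrink them.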

  • How to speed up CALC DIM and/or AGG

    Hello Everyone,
    I am new to Essbase and apologize for such a generic query. I came across a calculation script that takes more than 10 hours to execute. The crux of the script is CALC DIM (DIM), where DIM is a dense dimension with 11 levels (and a lot of calculated members). Can anyone guide me on the approach to adopt to optimize the script? I was fiddling with the CALCCACHEHIGH parameter in the essbase.cfg file and the SET CACHE HIGH declaration in the script. Will that help?
    Some details of the original script are outlined below. The basic optimization parameters are in place (e.g., the dense dimensions are at the end of the FIX list).
    Thanks.
    Sayantan
    Script details:
    SET AGGMISSG ON;
    SET CREATENONMISSINGBLK ON;
    SET UPDATECALC OFF;
    SET FRMLBOTTOMUP ON ;
    SET CACHE ALL;
    SET CALCPARALLEL 5;
    SET CALCTASKDIMS 2;
    SET LOCKBLOCK HIGH;
    FIX (&pln_scenario,"BU Budget",&plan1_yr,@RELATIVE("Product",0), @RELATIVE("YearTotal",0),... etc
    CALC DIM ("Account");
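    For reference, SET CACHE HIGH in a script only takes effect if a corresponding CALCCACHEHIGH value is defined in essbase.cfg; a minimal sketch (the byte values below are illustrative and need to be sized against your own block and index sizes):
    CALCCACHEHIGH 199999999
    CALCCACHEDEFAULT 10000000
    Note that the Essbase server must be restarted for essbase.cfg changes to take effect.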

    Hi,
    Thanks for your suggestions. I will definitely try to implement them and post the results. Meanwhile, here's another script that should not take long to run; however, it runs for hours. Any suggestions would be great (this one does not have the cache options implemented yet). I have added some more details about the dimensions involved below.
    Outline of the script:
    /*Script Begins*/
    SET UPDATECALC OFF;
    SET CACHE ALL;
    SET CALCPARALLEL 5;
    SET CALCTASKDIMS 2;
    SET NOTICE LOW;
    SET MSG SUMMARY;
    FIX (@ATTRIBUTE("Existing"),@Relative("DC",0),@RELATIVE("PCPB",0),&pln_scenario,"BU Budget",&plan1_yr,@RELATIVE("YearTotal",0))
    FIX(@RELATIVE ("RM",0))
    SET CREATENONMISSINGBLK ON;
    "RMQ" = "DC wise Sales Volume"->"UBOM" * "RMQ per Unit SKU"->"All DC"->"YearAgg";
    ENDFIX;
    ENDFIX;
    /*Script Ends*/
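    A hedged observation, not from the original thread: SET CREATENONMISSINGBLK ON forces top-down calculation within its FIX scope, so Essbase must consider every potential block in the fixed region rather than only the existing ones. With the sparse counts listed below, the upper bound is roughly 3 x 8 x 7 x 9 x 20 x 416 x 938 ≈ 1.2e10 potential combinations (an overestimate, since the FIX statements and the member counts include non-level-0 members), which by itself could account for a multi-hour run.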
    Dimension details (the evaluation order has been thought through):
    Dimension   Members   Density
    Account     352       Dense
    Period      35        Dense
    Version     3         Sparse
    Scenario    8         Sparse
    DC          7         Sparse
    Year        9         Sparse
    Entity      20        Sparse
    Product     416       Sparse
    BOM         938       Sparse

  • What is wrong with this? AGG on sparse and Calc Dim on Dense?

    I have this:
    FIX(@IDESC(Sparse parent member))
    CALC DIM(Dense,Dense,Dense);
    Agg(Sparse,Sparse);
    ENDFIX
    There are no formulas in the outline; I just followed the simple performance-improvement technique that, as they say, AGG is faster on sparse. Syntactically this is absolutely right, but the result is not right. Why?
    What's the catch? Do I need to attend some basic classes? :-P

    Thanks Cameron and Tim for your replies!
    Perhaps I should have listed more details; my bad. Anyway, my idea was to find out whether the two fixes below produce the same result:
    Fix(Sparse)
    Calc dim(Dense,Dense,Dense,Sparse,Sparse);
    Endfix
    Fix(Sparse)
    Calc dim(Dense,Dense,Dense);
    Agg(Sparse,Sparse);
    Endfix
    Coming back to the FIX I mentioned earlier: that is the concluding FIX of my allocation script. I have two children of the Version dim; one is used for allocation and the other to hold adjustments. So in that FIX I am just aggregating all dims for the Version dim (allocation + adjustment).
