Data loading at different levels

We are using Oracle 10g and the AWM 10.2 tools to rewrite our current Oracle Express application. We have created a couple of standard cubes in AWM and they work fine. The design confusion is in one particular case, explained below.
We are planning to load targets into one of our AWs. The details are as follows:
Dimension hierarchies
1. Time: Year ==> Quarter ==> Month
2. Point Of Sale (POS): Worldwide ==> Region ==> Country ==> Territory
3. Origin: Worldwide ==> Country ==> Territory ==> City
4. Destination: Worldwide ==> Country ==> Territory ==> City
Measure
rev_tgt
The challenge is that the targets are set in the warehouse at different Point Of Sale levels; i.e., for some POS the targets are set at country level, for others at territory level. For example, for India the revenue targets are set only at country level, whereas for France targets are set for all the territories of France. The requirement is to show targets at all the POS levels above the one where they are set. The bottom line is that the data is not available at a fixed level, yet the dimension levels are fixed and cannot be ragged.
Query:
How can we design a cube that loads data at different levels and then aggregates it across all dimensions, including the higher levels of the POS dimension?
Solution in the existing Oracle Express
In the current Oracle Express application, since we have control over the aggregation program, we have created two aggmaps: one with POS and one without. We then do a two-step aggregation: first we limit POS to country level and do a rollup using the aggmap that includes POS; then we limit POS to ALL and aggregate using the aggmap without POS.
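In OLAP DML terms, the two-step aggregation looks roughly like this (a minimal sketch only; the aggmap, relation, and dimension names are illustrative, not our production names):
" tgt_agg_all   : aggmap whose RELATION lines cover TIME, POS, ORIGIN, DESTINATION
" tgt_agg_nopos : aggmap whose RELATION lines cover TIME, ORIGIN, DESTINATION only
" pos_levelrel  : relation giving each POS member's level
" Step 1: keep POS members at country level and below in status,
" then roll up using the aggmap that includes POS
LIMIT pos TO pos_levelrel EQ 'COUNTRY'
LIMIT pos ADD pos_levelrel EQ 'TERRITORY'
AGGREGATE rev_tgt USING tgt_agg_all
" Step 2: put every POS member back in status and roll up the
" remaining dimensions using the aggmap without POS
LIMIT pos TO ALL
AGGREGATE rev_tgt USING tgt_agg_nopos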
Can you please assist us on how to implement the above with the AWs.

Hi,
Check the wiki link below:
http://wiki.sdn.sap.com/wiki/display/BI/Aggregates--SAPBWQueryPerformance

Similar Messages

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005 NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as one of the children of its aggregate node. This enables repeatable calculation as well as auditability of the resultant value (a sketch of the data movement appears at the end of this reply).
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February, for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE to push the data down to the leaves so that the aggregate values can be calculated correctly during aggregation.
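    For case 2, here is a minimal DML sketch of the 'self' node data movement, using the objects from the example above and assuming the _SELF members have already been added to T_TIME and T_TIME_PARENTREL:
    " Move the value loaded at the year down to its _SELF child, clear
    " the year, then let AGGREGATE recompute the year from its children
    PRUEBA2_IMPORTE_STORED(T_TIME '2005_SELF') = PRUEBA2_IMPORTE_STORED(T_TIME '2005')
    PRUEBA2_IMPORTE_STORED(T_TIME '2005') = NA
    AGGREGATE PRUEBA2_IMPORTE_STORED USING OBJ262568349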

  • Multiplying data at two different levels

    I have Actual/Forecast Bias data generated at a particular level (%Bias). I want to multiply this against other data at a different level in the same cube, so as to adjust the forecast in line with actual performance. The Bias % data is at Battery Part# level. The data I want to multiply it against is at Appliance/Battery level. When I run it, the multiplication operates against a rolled-up consumption level for all appliances that use a particular battery. I need the Bias % to operate at the battery level, not the rolled-up level.
    I'm a newbie at this, so any help is welcome.
    See the code below
    FIX (Batteries, @IDESCENDANTS ("A Size Family"),appliances,ww,@idescendants(NA));
    "New forecast %" = #Missing;
    "Bias %" = #Missing;
    "Bias %" = ("trade units" -> Actual/"trade units");
    ENDFIX;
    FIX("wif 1")
    FIX (@LEVMBRS(appliances,1),@levmbrs(Batteries,0), @IDESCENDANTS ("A Size Family"),ww,@idescendants(NA));
    "New forecast %" = consumption*"Bias %";
    ENDFIX;
    ENDFIX;
    FIX("wif 1")
    CALC DIM (WW, Supplies, Printers);
    ENDFIX;

    Hi Johnnie
    I assume the important bit is the bit within the second fix:
    "New Forecast" = Consumption * "Bias %";
    The way you have your FIX statement set up, it is looking at level 0 members in your Appliances and Batteries dimensions (I'm reading your post as if they are separate dimensions). I assume your issue is that one piece of data is at level 0 intersections of appliances and batteries, but the other piece is not: it is possibly at level 0 of one dimension and level 1 or higher of the other, Bias % being the one at the higher level?
    If that is the case then I think you need to look at a way of getting from the level 0 member to the correct intersection point(s).
    Have you had a look through the technical reference at functions like @PARENTVAL, @ANCESTVAL, or the multidimensional versions of the same, @MDPARENTVAL and @MDANCESTVAL? It sounds like your formula needs to be something along the following lines:
    "New Forecast" = Consumption * @PARENTVAL(Appliances, "Bias%");
    What the above would do differently from your original code is get the Bias % value from the parent of the current level 0 member of the Appliances dimension being calculated. While this might not be exactly what your requirement is, hopefully it will steer you in the right direction.
    Hope this helps
    Stuart

  • Process chains - different data loads at different points of time

    Hi friends,
    I want to load master data at 4:00 am and then transaction data at 4:25 am.
    How can I achieve this using process chains?
    How can I load other data at different times as well, and where do we specify these different timings?
    Thanks in advance

    Hi Venkat,
    Create a process chain for your data and follow the steps below.
    In the process chain, double-click on the Start button, then:
    1) Select the direct scheduling option.
    2) Click "Maintain Selections".
    3) Give the date and time when you want the job to run.
    4) The job will start according to the time you entered in the selections.
    Regards,
    Lakshman kumar Ghattamaneni

  • Regarding master data loading for different source systems

    Hi Friends,
    I have an issue regarding master data loading.
    we have two source systems one is 4.6c and another is ecc 6.0.
    First I am loading the master data from 4.6C to BI 7.0.
    Now this 4.6C is upgraded to ECC 6.0.
    In 4.6C and ECC 6.0 the master data is changing.
    After some time there will be no 4.6C; only ECC 6.0 will remain.
    Now if I load master data from ECC 6.0 to BI 7.0, what will happen?
    Is it possible?
    Could you please tell me?
    Regards,
    ramnaresh.

    Hi ramnaresh porana,
    Yes, it's possible. You can load data from ECC.
    The data will not change. You may get more fields in the DataSource on the R/3 side, but on the BW/BI side there is no change: the mappings and structures are the same, so the data is also the same.
    You need to take care of deltas before and after the upgrade.
    Hope it Helps
    Srini

  • Displaying data on a different level than the allocation check

    Hi
    We are creating sales orders in R/3 and executing a sales order check based on a planning area in APO.
    This check is done on product/sold-to, because that is the level where the forecast is entered.
    However, one sold-to can contain different ship-tos, so because of disaggregation it is spread randomly over the ship-tos.
    What we want is that when the allocation check is done and the order is in APO via the CIF, the data is displayed on product/sold-to/ship-to as in the sales order, while the check for sufficient quantity can remain on product/sold-to.
    Do you have any ideas?
    Tommy

    Tommy,
    Please elaborate on your problem.
    I assume you are talking about the Planning book used for Allocations; please confirm.  If the only CVCs in your Allocation planning book and your Product Allocation Group are 'Product' and 'Sold To', then 'ShipTo' (and ShipTo disaggregation) is irrelevant.  Product Allocation only considers the CVCs that have been created.  In this case, multiple ShipTos against a single SoldTo are 'first come first served' until the SoldTo Incoming Orders qty reaches the SoldTo Allocation Qty.
    It is possible to include ShipTo in your Allocation Group and Allocation Planning Area in addition to SoldTo; this is a fairly common solution.  If you do so, you will THEN have to consider ShipTo Disaggregation issues.  Since this seems to be a negative issue for you, I would recommend against it.
    Best Regards,
    DB49

  • How to load data at non-leaf levels directly?

    Hi,
    We have a requirement where by Fact data for different scenarios gets loaded at different levels of ragged/recursive hierarchies.
    For example, the Actual sales are loaded at Product Leaf level but the Budgets data is available for loading only at Product Group level (which is one level above Product Leaf in Product Hierarchy).
    Please let me know if this is possible in Essbase.
    This is rather urgent.
    Thanks & Regards,
    Gurudatta

    Yes, as Gary said, performance is one factor.
    There are other, less prominent reasons to keep "aggregate missing" on, but the overall reason for me is that it just makes more sense, especially when you consider the user operations involved in zooming in on data. When you do a zoom to bottom level, a well-designed cube with aggregate missing on will show you that the sum of the parts is equal to the whole. With aggregate missing off, this may or may not be the case, and would NOT be the case if you load at non-leaf members. In that scenario, you end up with a "where did the data go?" issue that adds time to any hunt for answers, as well as the likelihood that more questions will be brought up that ultimately need answering. It's not obvious, but one scenario can be described as follows:
    (For convenience, I will use a standard time dimension hierarchy of YEAR|QUARTER|MONTH|DAY to illustrate the issue)
    2008 = 12000
        2008Q1 = 3000
            Jan 2008 = 1000 (input)
                1/1/2008 = #Missing
                1/2/2008 ...
            Feb 2008 = 1000 (input)
            Mar 2008 = 1000 (input)
        2008Q2 = 3000
        ... and so on
    In the above example, there are two ways that confusion and errors can occur. The first is that somebody decides to upload some data to the days (level 0) instead of the month, and it doesn't add up to the total for the month. The second is that somebody decides to upload to the quarter, and thus the months no longer add up to the quarter amount. I treat these two conditions separately because in the first, the total is still represented by the amounts loaded to the monthly values, whereas the second case changes the grand total. If you were troubleshooting, you would have to go level by level to see where the "breakdown" occurs, comparing each and every parent value to the sum of its children. It's a valid condition that is almost impossible to find quickly. There is no quick "show me all the inputs" approach, and no easy way to enforce an "input only at these levels" condition (there is, but there isn't, as it's only practical in a perfect environment, and trust me, you won't have it for long even if you could create it to begin with).
    So, long story short, data consistency goes out the window in a "load at upper level" cube, and the simple answer of providing an input member for the required generation(s) is much easier and more reliable in the long run.
    Of course, there will always be exceptions, I just like to avoid them at all cost on this issue.

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and have worked only on BSO cubes. Now I have a requirement to create a rule file to load data into an ASO Essbase cube. I created the data load rule file just as I would for a BSO cube, and it validates correctly. However, when I do the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loading at upper levels or on members calculated through formulas. After this I made sure that I am loading the data into level 0 members that are not calculated through formulas. But I am still not able to do the data load, and I get the same warning.
    Could you please help me and let me know if there is anything else I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"This is only a warning message that means only those many cells were skipped might be for some reasons like any member pointing to those cells will be missing.
    If you want to copy the Data of your BSO cube to an ASO Application why dont you use an PARTIONING it will copy your whole data from BSO to ASO (If Outline is common in both then copy any member of Sparse dimension like "Scenario 1" from Source i.e. BSO, to same member like "Scenario 1" in Target i.e ASO ),
    This is only an alternate wayThanks
    Avneet Singh Bhatia

  • "master data deletion for requisition" before master data loading

    Hello Gurus,
    In our BW system, the process chains for loading master InfoObjects all include the "master data deletion for requisition" ABAP process, except for one process chain. My question is:
    Why is that process chain for master data loading different from the others in lacking "master data deletion for requisition"?
    So does it not matter whether you include the "master data deletion for requisition" ABAP process in a process chain for master data loading?
    Many thanks.

    Hi,
    An ABAP process means some ABAP program is being executed in this particular step.
    It's possible that for all of your process chains except that one, the requirement was to do some ABAP program processing.
    You can check which program is executed as follows:
    Open your process chain in planning view -> double-click on that particular ABAP process -> here you can see the program name as well as the program variant.
    Hope this helps!
    Regards,
    Nilima

  • Data loaded at all levels in a hierarchy and need to aggregate

    I am relatively new to Essbase and I am having problems with the aggregation of a cube.
    Cube outline
    Compute_Date (dense)
         20101010 ~
         20101011 ~
         20101012 ~
    Scenario (dense)
    S1 ~
    S2 ~
    S3 ~
    S4 ~
    Portfolio (sparse)
    F1 +
      F11 +
        F111 +
        F112 +
        F113 +
      F12 +
        F121 +
        F122 +
      F13 +
        F131 +
        F132 +
    Instrument (sparse)
    I1 +
    I2 +
    I3 +
    I4 +
    I5 +
    Accounts (dense)
    AGGPNL ~
    PNL ~
    Portfolio is a ragged hierarchy.
    Scenario is a flat hierarchy.
    Instrument is a flat hierarchy.
    PNL values are loaded for instruments at different points in the portfolio hierarchy.
    I then want to aggregate the PNL values up the portfolio hierarchy into AGGPNL; the loaded PNL values should remain unchanged. This is not working.
    I have tried defining the following formula on AGGPNL, but it is not working:
    IF (@ISLEV ("Portfolio", 0))
    PNL;
    ELSE
    PNL + @SUMRANGE("PNL", @RELATIVE(@CURRMBR("Portfolio"), @CURGEN("Portfolio") + 1));
    ENDIF;
    using a calc script:
    AGG (Instrument);
    AGGPNL;
    Having searched for a solution, I have seen that Essbase does implicit sharing when a parent has a single child. I can disable this, but I don't think it is the sole cause of my issues.

    "The children of F11 are aggregated, but the value which is already present at F11 is overwritten and the value in F11 is ignored in the aggregation."
    ^^^That's the way Essbase works.
    How about something like this:
    F1 +
    ===F11 +
    ======F111 +
    ======F112 +
    ======F113 +
    ======F11A +
    ===F12 +
    ======F121 +
    ======F122 +
    ===F13 +
    ======F131 +
    ======F132 +
    Value it like this:
    F111 = 1
    F112 = 2
    F113 = 3
    F11A = 4
    Then F11 = 1 + 2 + 3 + 4 = 10.
    Loading at upper levels is something I try to avoid whenever possible. The technique used above is incredibly common, practically universal, as it allows the group-level value to be loaded as well as the detail, and to aggregate up correctly. Yes, you can load to upper-level members, but you have hit upon why it isn't done all that frequently.
    NB -- What you are doing is only possible in BSO cubes. ASO cube data must be at level 0.
    Regards,
    Cameron Lackpour

  • Can I use one interface to load data into 2 different tables

    Hi Folks,
    Can I use one interface to load data into 2 different tables (in the same schema or different schemas) from one source table with the same structure?
    Please give me advice
    Thanks
    Raj

    Hi Lucky,
    Thanks for your reply.
    What I am trying to do is load the data from three legacy tables into Oracle staging tables. But I need to load the same source data into two staging tables (these staging tables are in two different schemas).
    Can I load this source data into two staging tables using a single standard interface (there is some business logic involved)?
    If I can, please give me some suggestions on how to do that.
    Thanks in advance
    Raj

  • Load one text file with 12 periods' data into 12 different periods at once?

    Hi guys,
    In one swoop, can we load one .txt file with 12 periods of data into 12 different periods?
    The scenario:
    Budget data is required for monthly comparative reporting with actuals, so we have 12 periods in our Budget version.
    From a non-SAP system we get one .txt file containing 12 periods' worth of budget data.
    - It is in the correct format (and we don't want to create risk by opening and editing it), so it is loaded into each period while the other 11 periods of irrelevant data are simply ignored.
    Some extra tasks (such as validation and cash flow calculations) are then performed per period.
    Because this has to be repeated 12 times, it can be a time-consuming process.
    We now have the multiperiod monitor functionality (from EHP2), and I see how it works for the automatic tasks (very well).
    I'm aware that the guide says manual tasks will be ignored during the automatic run; this is true.
    However, if I remember correctly, EC-CS used to allow this with upload files, so I was expecting it in BCS 6.02.
    - Is there any way to load a file containing 12 periods' worth of data into 12 individual periods all at once?
    (NB: we still have an improvement on the previous situation; the user can scroll between periods more quickly, load the file 12 times, then go back to the start and run all the automatic tasks at once.)
    One thought was to use a file server location with a hardcoded filename, but this would require work beyond my expertise.
    All suggestions welcome

    Hi
    I will suggest that mapping the path based on a common location is the best way, and then mapping it logically in the upload method.
    If you are using Citrix, then it becomes all the more easy to have a common location.
    Rgds
    Dheeraj

  • Stage a tab-delimited CSV file and load the data into a different table

    Hi,
    I'm pretty new to writing PL/SQL packages.
    We are using Application Express for our development. We get CSV files which are stored as BLOB content in a table. I need to write a trigger that gets executed once the user uploads the file, parses through the BLOB content, and uploads or stages the data in a different table.
    I would like to know if there is a tutorial or article that explains the above process with an example or sample code. Any help in this regard will be highly appreciated.

    Hi,
    This is slightly unusual but at the same time easy to solve. You can read through a BLOB using the DBMS_LOB package, which is one of the Oracle-supplied packages. This is presumably the bit you are missing; once you know how to read a LOB, the rest is programming 101.
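    For what it's worth, here is a rough sketch of the DBMS_LOB approach (a sketch only, assuming a tab-delimited file in a single-byte character set; the table, column, and procedure names are hypothetical):
    -- Rough sketch: read a tab-delimited file held in a BLOB column,
    -- split it into lines, and stage the first two fields of each line.
    -- Table, column, and procedure names are hypothetical.
    CREATE OR REPLACE PROCEDURE stage_csv_blob (p_file_id IN NUMBER) IS
      l_blob BLOB;
      l_text CLOB;
      l_line VARCHAR2(32767);
      l_nl   PLS_INTEGER;  -- position of the next newline
      l_tab  PLS_INTEGER;  -- position of the first tab in the line
    BEGIN
      SELECT blob_content INTO l_blob
        FROM uploaded_files                      -- hypothetical upload table
       WHERE file_id = p_file_id;
      -- Convert the BLOB to text in 2000-byte chunks
      FOR i IN 0 .. TRUNC((DBMS_LOB.GETLENGTH(l_blob) - 1) / 2000) LOOP
        l_text := l_text ||
                  UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(l_blob, 2000, i * 2000 + 1));
      END LOOP;
      -- Walk the text one line at a time
      WHILE l_text IS NOT NULL LOOP
        l_nl := INSTR(l_text, CHR(10));
        IF l_nl > 0 THEN
          l_line := RTRIM(SUBSTR(l_text, 1, l_nl - 1), CHR(13));
          l_text := SUBSTR(l_text, l_nl + 1);
        ELSE
          l_line := RTRIM(l_text, CHR(13));
          l_text := NULL;
        END IF;
        IF l_line IS NOT NULL THEN
          -- Split on the first tab (assumes every line has at least one tab)
          l_tab := INSTR(l_line, CHR(9));
          INSERT INTO staging_table (col1, col2)  -- hypothetical staging table
          VALUES (SUBSTR(l_line, 1, l_tab - 1), SUBSTR(l_line, l_tab + 1));
        END IF;
      END LOOP;
    END stage_csv_blob;
    /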
    Alternatively, you could write the lob out to a file on the server using another built in package called utl_file. This file can be parsed using an appropriately defined external table. External tables are the easiest way of reading data from flat files, including csv.
    I say unusual because: why are you loading a CSV file into a BLOB? A CLOB is almost understandable, but if you can load into a column in a table, why not skip this bit and just load the data straight into the right table as it comes in?
    All of what I have described is documented functionality, assuming you are on 9i or greater. But you didn't provide a version so I can't provide a link to the documentation ;)
    HTH
    Chris

  • Entering budget data on different levels

    Dear experts,
    I have a question concerning posting budget data at different levels in BCS. In Former Budgeting there was an automatic check for budget entry which did not allow greater amounts on lower-level commitment items than on higher levels.
    Now with BCS there is no hierarchical structure when entering budget data, so you can enter a greater amount on subordinate commitment items than on superior ones.
    Do you have an idea whether the budget entered at the lowest level can be totaled up automatically to the higher levels of the hierarchy, or whether consistency checks should be defined in order to control budget amounts at the different levels?
    Thanks in advance.
    Best regards,
    sappsm

    Dear SAPPSM,
    In the Budget Control System (BCS) there exists no hierarchical budget structure which would take into account some funds center hierarchy, for example.
    This was a standard function in Former (or Classical) Budgeting, but it is not available in BCS. Only from EhP 603 on will you be able to have a multi-level budget structure, and that is still in the development phase.
    As a consequence, when you enter budget, budget values are only created at the level of budget entry (for example, at the 3rd level). Furthermore, BCS does not provide functions like "Distribute budget" or "Total up budget".
    However, you can "simulate" a totalling up of budget values via reporting. For example, for Report Writer reports like FMRP_RW_BUDGET you can use funds center groups (FM menu, Master Data, Funds Center), created by transaction FM_SETS_FICTR1. You can use such funds center groups with the Report Writer reports, which then automatically sum up the budget values according to the funds center group.
    BCS is a complete system with more possibilities than Former Budgeting, and you will be able to define strategy derivation for controlling budget or posting addresses according to business process. Please check the documentation available on help.sap.com and in the BCS node in SPRO configuration.
    I hope I could help you
    Kind Regards,
    Vanessa.

  • Extract data in Power Pivot from MDX with different levels of hierarchy in MDX

    Hi All,
    My requirement is to extract the data from a tabular model and put it in Power Pivot; as all the measure calculations are present in the model, we are just extracting the aggregated data.
    The issue appears on rolling up the data, so is there any way to extract the data so that it displays the right data at each level? As per the example below, at Level 2 the data is getting summed up, which is not correct.
    Ex:
    Level 1: A = 10%, B = 20%
    Level 2 (rollup): 30%
    Thanks Chandan

    Hi Chandan,
    According to your description, you created a SQL Server Analysis Services tabular model and extract data into Power Pivot from MDX; the problem is that the hierarchy data rolls up based on the hierarchy level, right?
    I have tested this in my local environment and we cannot reproduce the issue. So, as per my understanding, the issue may be caused by the hierarchy settings in your tabular model. Here are some links about how to create and manage hierarchies in a tabular model,
    please refer to the links and check if the settings are correct.
    http://msdn.microsoft.com/en-in/library/hh213003.aspx
    http://www.youtube.com/watch?v=qKKNY1Vj_2c
    If the issue persists, please provide us more information about your hierarchy creation steps, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

    To elaborate, I'm on Mac OS Mavericks with PSE-10. I can upload to both flickr and facebook, but it uploads the original (.CR2) file. When exporting to Flickr/Facebook, my photos show up as originals, not the edited version, even though the edited ve