Adding budget Facts at higher levels to other Facts

Hi,
Hoping someone can advise.
I have a table of budget figures at the week and vessel level, and I need to incorporate them into my business model, but I'm not sure how to do it. I have a time dimension (year - period - week - day) and a corridor dimension (seacorridor - corridor - vessel - route). I want the budget figure to be able to sum up to the levels above where it is available.
I have tried setting the measure's content levels to vessel and week, but it doesn't seem to be working.
I would appreciate any suggestions.
Thanks
Patricia

Pete,
Thanks a million, you have been extremely helpful - this has worked. I just have a couple of related queries that you might be able to help with.
1) I have a report that uses some journey facts plus budget facts, i.e. 'No of Car Journies' (from JourneyFacts) with 'Car Volume' (from budgetFacts). I need to filter 'No of Car Journies' to just checked-in journeys - the checked-in field is in a Journey Details dimension table with no hierarchy associated. If I add this to the report, the 'Car Volume' budget figures become null. Can you help? Perhaps I need to add a checked-in flag to my budget table and join this to the Journey Details table? I can add this filter to the actual columns in the report or at the BMM layer, but this would involve a bit of work on my part as I have quite a few facts and there would be other variations of filters. I just thought you might know another way.
2) I have an AGO function using the YearWeek level (I needed this so that I can compare week numbers as opposed to week date ranges, which is how the year-ago function works). If I have a total on year or period in my pivot table I get the below error -
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 22036] Level in AGO function (YearWeek) must be a rollup of (or equivalent to) the measure level (Year). (HY000)
3) If I use period in my report it defaults back to the previous wrong answers (i.e. multiplies by the number of days), and in my NQQuery log I can see it picking this up from the normal calendar. If I use YearPeriod (which is the chronological key) it works fine. Therefore I have to use YearPeriod for my drilldown. This isn't really a big deal - it is just that for display purposes it would be nice to use period.
Thanks again for your time,
Patricia
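For what it's worth, the week-to-period rollup being asked about here can be sketched outside OBIEE in a few lines of plain Java. The key format "2023-P01-W02" and the class name are invented for illustration; inside OBIEE this aggregation is what the content-level settings drive:

```java
import java.util.Map;
import java.util.TreeMap;

/** Illustration only: roll week-level budget figures up to period totals.
 *  Week keys are assumed to look like "2023-P01-W02" (year-period-week). */
public class BudgetRollup {
    public static Map<String, Double> rollupToPeriod(Map<String, Double> weekBudgets) {
        Map<String, Double> periodTotals = new TreeMap<>();
        for (Map.Entry<String, Double> e : weekBudgets.entrySet()) {
            String period = e.getKey().substring(0, 8); // "2023-P01"
            periodTotals.merge(period, e.getValue(), Double::sum);
        }
        return periodTotals;
    }
}
```

The same merge-by-prefix idea extends one level up from period to year.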

Similar Messages

  • High level language

    When will a language be called a high-level language?
    Is it true that if the language supports graphics then it is high level?
    Or what characteristics should a language possess to be called high level?
    Fortran? Is it high level?
    In fact I don't know on what basis or features I would call a language high level. Do you know the answer?
    Thank you

    Hi..
    I just asked about this with Albert Einstein (not sure about the spelling), you know, the guy with the weird hair.
    And he said that it is relative.
    Ex :-
    Compared to Assembly, Fortran is high level. Compared to C, C++ is high level, and C is also high level compared to Assembly.
    But if you are talking about generations of languages, then there are well-defined boundaries.
    For example:
    1GL - Machine code (1010101010101010101010)
    2GL - The language has a corresponding code for each executable code that the processor understands (Assembly), so compiling is a one-to-one translation of codes.
    3GL - Each language statement results in multiple processor instructions once compiled.
    4GL - Same as 3GL, but lots of coding and debugging tools are available (IDEs).
    Note: a 3GL language can later become a 4GL language. Ex :- C
    Some experts argue that object-oriented languages also belong to the 4th generation, but some say object orientation is the 5th generation.
    5GL - Object-oriented languages: Java, C++
    6GL - (provided that the 5th is OOP) "natural-like languages" where the code can be written in a flexible manner
    Correct me if I am wrong.

  • OBIEE Query not hitting the other fact table

    Hi All,
    I am trying to create a report based on two fact columns and one dimension. The dimension is joined to both fact tables. When I create a report using one column from the dimension and one column from each of the facts, I get two scenarios...
    For example let say..
    D1 is dimension and F1 and F2 are two fact tables.
    First I used a column from one fact which has an aggregation rule, and one column from the other fact which is also aggregated.
    That is report like...
    D1.c1,Agg(F1.c2),Agg(F2.c3)
    When I run the report I get the data from the dimension and only from the first fact table. When I check the query, it contains only one fact table and doesn't hit the other one.
    Now in the second scenario I used one column from the dimension, one column from the first fact which has an aggregation rule, and one column from the second fact which doesn't have any aggregation rule.
    like...
    D1.c1,Agg(F1.c2),F2.c3
    When I run the report I get an error. It says:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: F1 -C2 . Please fix the metadata consistency warnings. (HY000).
    But there is no warning in RPD.
    I am amazed that it is not taking both fact columns, even though the dimension is a conformed dimension and is joined to both fact tables.
    As I have just started to learn OBIEE, I find it a bit difficult to understand how OBIEE selects the tables and forms the physical query.
    Waiting for your help.
    Regards
    Suhail

    Aadi-Wasi,
    Rule of thumb: the OBIEE BMM layer must contain a simple star schema.
    Does your BMM layer satisfy that condition? I suspect not.
    My guess is that your BMM layer contains 3 logical tables, i.e. a dimension & 2 logical facts... which is not a simple star.
    Thus, to make it a simple star, collapse the 2 logical fact tables into 1 logical fact table. As you mentioned the dimension is linked to both facts, collapsing the 2 logical fact tables into 1 logical fact table will provide the result for your query 1.
    Regarding your second error:
    All aggregations must be contained within fact tables, with few exceptions.
    Let us know if this resolves your issue.
    mark posts promptly...
    J
    -bifacts
    http://www.obinotes.com
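    As a side note, the result the poster wants can also be pictured as a "drill-across": each fact is aggregated on its own grain, and the two result sets are then stitched together on the conformed dimension key. A rough Java sketch of that shape (all names are illustrative; this is not what OBIEE literally generates):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

/** Sketch: stitch two per-fact aggregates together on a shared dimension key,
 *  so each D1.c1 value shows Agg(F1.c2) and Agg(F2.c3) side by side. */
public class DrillAcross {
    public static Map<String, double[]> stitch(Map<String, Double> f1Agg,
                                               Map<String, Double> f2Agg) {
        Set<String> keys = new TreeSet<>(f1Agg.keySet());
        keys.addAll(f2Agg.keySet());                  // full outer join on the key
        Map<String, double[]> out = new TreeMap<>();
        for (String k : keys) {
            out.put(k, new double[] { f1Agg.getOrDefault(k, 0.0),
                                      f2Agg.getOrDefault(k, 0.0) });
        }
        return out;
    }
}
```

    A key present in only one aggregate still appears in the output, with 0.0 for the missing measure.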

  • SUS - Added Data Type Enhancement and Higher level Proxies are not active

    Hello,
    I've added a field to our current data type enhancement Z_Purchase_Order_Item. Once I regenerate the proxy on the enhancement and activate it, the field appears as it should in the high-level items that use the enhancement (PurchaseOrderRequest_In). But those proxies have become inactive, and when I try to activate them I get this message:
    Interface II_BBPX1_SUS_PO was repaired before the Modification Assistant was enabled. 
    All Modification Assistant functions only apply to future modifications, not to those already
    undertaken.  This means:
    -The modification overview only displays future modifications.
    -When resetting to the standard, the system will reset all objects to their current version, since
    the actual standard can no longer be identified by the Modification Assistant.
    -Support for adjustment after an upgrade will only be available for future modifications. 
    Modifications that already exist must be re-made manually using version management.
    The next message says:
    Object can only be created in SAP package.
    Then the status bar shows "Proxy Activated". But when I close and reopen the proxy, I see that it is once again inactive.
    Does anyone know what I need to do to activate this proxy?
    Thanks,
    Matt

    In SPROXY you can open your proxy and then view the Activation Log under the GoTo menu.  The log will explain better what the problems might be.  In my case I needed to activate another data type enhancement first.
    Thanks,
    Matt

  • Adding higher level key words?

    Greetings fellow LR4 users -
    I already have several keywords set up in Lightroom 4.2 (Win7). I want to add a 'higher level' keyword to include some of those keywords already present. For example - if I already have
    Dog
    Cat
    Bird
    How can I add 'Animals' so that the result is -
    Animals
         Dog
         Cat
         Bird
    I know that I can add 'lower-level' keywords to existing keywords. Perhaps this is not possible? I can't find any options in the dialog boxes that will allow this.
    Thanks in advance for your thoughts/suggestions.
    Ron

    Create a keyword that says "Animal". Then drag "Dog" onto the "Animal" keyword; it becomes a sub-keyword of "Animal". Repeat for the other desired sub-keywords.

  • How to automat. copy configuration of higher level material to sub-item

    Hi Experts,
    I have Make to Order Scenario. 
    My customer wants the configuration of the non-phantom sub-items in the sales order to be automatically copied from the configuration of the higher-level material.
    I have added the ZCOPY function module (to copy the configuration) to the configuration profile of the sub-item, but the program flow does not go through the function module because the sub-item has a link to the higher-level material.
    If I go to the configuration of the sub-item in VA02, a message appears: "The item configuration can only be changed by that higher level material". All chosen characteristics of the higher-level material are shown - actually the configuration of the higher-level material is shown, because of the link. But in CU44 for that sub-item and sales order, no characteristics are selected.
    Any suggestion how and where to place the automatic copy of the configuration of the higher-level material to the sub-item?
    Kind regards,
    Danijela Zivanovic

    Vivek & Madhava
        Thank you for your comments. I know that adding A's routing to B is the simplest way to solve the problem. But as a small part of our global firm, our plant can't change master data related to cost calculation so easily. Every change must be verified by the global cost control team. And the routing of B is already very complex; we don't want to make it more complex.
        Likewise, creating another material code is not an option either, because this would have to be considered by the global team.
        Currently, we are creating a production order for A separately, and A is treated as a component in B's production order. So there are 2 production orders to deal with. In fact, these two orders should be done at the same time. The shop floor complains that there is too much order confirmation work to do. The warehouse complains too. And further, this way disturbs production planning very much, because the system will not always plan these 2 orders together.
    Regards
    Robbie

  • Error: Maintain settlement rule of the sender for a higher level WBS

    Hi,
    I don't want to maintain the settlement rule for a higher-level WBS. How can I configure this in such a way that I don't get the following error: "Maintain settlement rule of the sender" while doing CJ88? Maintaining a separate settlement profile for a higher-level WBS is an option, but we are looking at whether something else could be done. The problem is that there are no actuals booked against, say, a level 2 WBS, but when I execute CJ88, I get the aforesaid error. How can I ensure that only the lowest-level WBS asks for the settlement rule and not the levels above it? I have already removed the investment profile from the higher-level WBS but am still getting the same error.
    Regards,
    DPil

    Hi,
    It is a Capex-type WBS and the Billing element is not checked. In fact I get a warning while doing the settlement: WBS is neither a billing element nor an account assignment element.
    Diagnosis
    WBS element  is not indicated as either an account assignment element or as a billing element in the master record.
    System Response
    The WBS element cannot be assigned to an account.
    Procedure
    Correct your entries or add the missing indicator to the master record for the WBS element.
    But this is just a warning. On pressing Enter I get the error: "Maintain settlement rule of the sender".

  • Parent-Child hierarchy without facts in top-level

    Hello folks,
    I've got a fact table which looks like:
    fact_id | costs | some_id
    1 | 5 | 1
    2 | 10 | 1
    and a hierarchy which looks like:
    id | super_id | some_id
    1 | null | null
    2 | 1 | 1
    3 | 1 | 1
    In the business logic layer I defined two keys for the dimension table (id and some_id) and built the hierarchy on the first (id).
    I also defined a foreign key from the fact table to the dimension on the second.
    As you can see, no facts are assigned to the first entry (= top level) in the hierarchy (id = 1).
    However the children of 1 have got facts.
    What I want is to show the costs for the children (which is the sum of 5 and 10) and the total sum for the parent (30).
    However, when I drag the hierarchy and the costs into an analysis I get "No results available", as there are no facts for the top level.
    How can I fix this?
    Regards,
    Matthias

    Hi Dhar,
    thanks for your reply!
    What you write coincides with what I thought would happen (but didn't).
    Hence there must be some error in my rpd file.
    I created my hierarchy by right-clicking on the dimension table and only setting the mandatory settings (that is, the definition of parent/member keys and the setup for the hierarchy table).
    I didn't touch what the Admin Tool created inside the hierarchy, which is: a "Total" level, where "Grand total level" is checked, and a "Detail" level, where "Supports rollup to higher level of aggregation" is checked.
    What other settings in the Admin Tool will I have to touch to step forward?
    @anton: I'm using OBIEE 11.1.1.5.0
    Regards,
    Matthias
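    For anyone sanity-checking the expected numbers above: each child picks up the facts matching its some_id (5 + 10 = 15) and the parent rolls up both children (30). That rollup can be sketched in a few lines of Java, with the example's ids as test data (nothing OBIEE-specific here, just the arithmetic):

```java
import java.util.List;
import java.util.Map;

/** Sketch: roll costs up a parent-child hierarchy where the top node
 *  carries no facts of its own, only the rollup of its children. */
public class PcRollup {
    public static double cost(int node,
                              Map<Integer, List<Integer>> children,
                              Map<Integer, Integer> someIdOf,
                              Map<Integer, Double> factTotalBySomeId) {
        double c = 0.0;
        Integer someId = someIdOf.get(node);          // null for the top node
        if (someId != null) {
            c += factTotalBySomeId.getOrDefault(someId, 0.0);
        }
        for (int child : children.getOrDefault(node, List.of())) {
            c += cost(child, children, someIdOf, factTotalBySomeId); // recurse
        }
        return c;
    }
}
```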

  • Errors in the high-level relational engine on Schedule Refresh Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e

    The connections are all valid and work when I set up the refresh, but when the scheduled refresh occurs I get this error:
    Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The Data Transfer Service has encountered a fatal error when performing the data upload. The remote server returned
    an error: (400) Bad Request. The remote server returned an error: (400) Bad Request. Transfer client has encountered a fatal error when performing the data transfer. The remote server returned an error: (400) Bad Request. The remote server returned an error:
    (400) Bad Request.;transfer service job status is invalid Response status code does not indicate success: 400 (Bad Request).. The current operation was cancelled because another operation in the transaction failed.
    It is trying to refresh 3 simple tables with less than 9,000 rows each.
    Also, I'd like to add that the refresh works fine from Excel as well...
    Another fact just in: it seems to work on one out of the 3 tables sometimes, so the first table gets a success in the log, but sometimes it fails (it succeeded twice and failed once with the above error). The second table never succeeds and gets the error above.
    The 3rd table never even gets attempted.
    Am I running into some sort of timeout perhaps?
    loading - Failure
    Correlation ID: 7b159044-c719-41f9-8d0f-da6f73576d6e
    04/01/2015 at 01:50 AM - 04/01/2015 at 01:53 AM (00:03:14)
      Power Query - Sendout_Records Not tried
      Power Query - Positions Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The Data Transfer Service has encountered a fatal error when performing the data upload. The
    remote server returned an error: (400) Bad Request. The remote server returned an error: (400) Bad Request. Transfer client has encountered a fatal error when performing the data transfer. The remote server returned an error: (400) Bad Request. The remote
    server returned an error: (400) Bad Request.;transfer service job status is invalid Response status code does not indicate success: 400 (Bad Request).. The current operation was cancelled because another operation in the transaction failed. 
      Power Query - Position_Activities Success.

    This is not because of the number of rows; instead it is the execution time. The query takes more than 7 minutes to execute, and it seems this causes the refresh process to fail.
    Thank You

  • How to perform high-level planning without doing full costctr-level rollup

    I am wondering what my options are for this effort - we want to start our 2009 budget season by first preparing a high-level 2009 plan. To do the high-level plan we'd like to avoid building and interfacing workbooks at the cost center level; instead, to get started, we want to look at groups of cost centers that represent the different lines of business of the Bank (retail, credit, etc).
    Here's my question:is it possible to build planning workbooks not at the cost center level, but at a cost center rollup level? That is, we'd like to avoid having 10 workbooks at the cost center level that make up the rollup "Mortgage Lending"; instead we want one workbook for Mortgage lending that includes data for all 10 of those CC's.
    Please let me know if you have any ideas as to how to do so, thanks, W


  • Entering budget data on different levels

    Dear experts,
    I have a question concerning posting budget data at different levels in BCS. In Former Budgeting there was an automatic check for budget entry which did not allow greater amounts on lower-level commitment items than on higher levels.
    Now with BCS there is no hierarchical structure when entering budget data, so you can enter a greater amount on subordinate commitment items than on superior ones.
    Do you have an idea whether the budget entered at the lowest level can be totaled up automatically to the higher levels of the hierarchy, or should consistency checks be defined in order to control budget amounts at the different levels?
    Thanks in advance.
    Best regards,
    sappsm

    Dear SAPPSM,
    In the Budget Control System (BCS) there exists no hierarchical budget structure which would take into account some funds center hierarchy, for example.
    This was a standard function in Former (or Classical) Budgeting, but it is not available in BCS. Only from EhP 603 on will you be able to have a multi-level budget structure, and that is still in the development phase.
    As a consequence, when you enter budget, budget values are only created at the level of budget entry (for example, at the 3rd level). Furthermore, BCS does not provide functions like "Distribute budget" or "Total up budget".
    However, you can "simulate" a totalling up of budget values via reporting. For example, for Report Writer reports like FMRP_RW_BUDGET you can use funds center groups (FM menu, Master Data, Funds Center), created by transaction FM_SETS_FICTR1. You can use such funds center groups with the Report Writer reports, which then automatically sum up the budget values according to the funds center group.
    BCS is a complete system with more possibilities than Former Budgeting, and you will be able to define strategy derivation for controlling budget or posting addresses according to your business process. Please check the documentation available on help.sap.com and in the BCS node in SPRO configuration.
    I hope I could help you
    Kind Regards,
    Vanessa.
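    The "simulate totalling via groups" idea amounts to this: each leaf budget value is added to its own funds center and to every ancestor in the group hierarchy. A rough Java sketch of that summation (the funds center names and the parent map are invented for illustration; the real reporting happens in Report Writer as described above):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: total leaf-level budget values up a funds-center hierarchy,
 *  the way a report grouped by funds center groups would sum them. */
public class BudgetTotals {
    public static Map<String, Double> totalUp(Map<String, String> parentOf,
                                              Map<String, Double> leafBudget) {
        Map<String, Double> totals = new HashMap<>();
        for (Map.Entry<String, Double> e : leafBudget.entrySet()) {
            String node = e.getKey();
            while (node != null) {                       // walk up to the root
                totals.merge(node, e.getValue(), Double::sum);
                node = parentOf.get(node);               // null once past the top
            }
        }
        return totals;
    }
}
```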

  • DI job causing high levels of I/O on database server

    We have a DI job that is loading a SQL Server 2005 database. When the facts are loaded it's causing a high level of I/O on the database server, causing the DI job to slow down. No more than 5 facts are loaded concurrently. The fact dataflows all have a SQL transform to run the select query against the DB, a few query transforms to do lookups to get dimension keys, and all do inserts to the target. The DBA says there are too many DB connections open and DI is not closing them. My thinking was that DI would manage the open connections for lookups, etc., and would close them properly when the dataflow is complete.
    Any thoughts on what else would cause high levels of DB I/O?
    Additional Info:
    - Running the DI job (source and target tables are in SQL Server) takes 5 hours.
    - Running the same DI job again, on the same data set, takes 12+ hours. This run has high levels of DB I/O.
    - But if SQL Server is stopped and restarted, the job will again take 5 hours the first time it runs.
    Edited by: Chris Sam on Apr 15, 2009 3:43 PM

    There are a lot of areas of a DI job that can be tuned for performance, but given that your job runs fine after the database is restarted, it sounds like a problem with the database server and not the Data Integrator job.
    There are a lot of resources out there for dealing with SQL Server disk I/O bottlenecks. As a minimum first step, all of them will recommend putting your .mdf and .ldf files on separate drives and using RAID 10 for the .mdf file.

  • Why does OWB 9.2 generate UK's on higher levels of a dimension?

    When you specify levels in a dimension, OWB 9.2 generates unique key constraints in the table properties for every level, but only the UK on the lowest level is visible in the configuration properties. Why then are these higher-level UKs generated? Is this a half-baked attempt to implement the possibility of generating a snowflake model in OWB?
    Jaap.

    Piotr, Roald and others,
    This is indeed a topic we have spent a lot of our time on these past months. We are addressing this because (in my old days as a consultant I had the same problem) we know that this is a common problem.
    So the solution is one that goes in 2 directions:
    - Snowflake support
    - Advanced dimension data loading
    Snowflake is obvious, may not be desired for various reasons but we will start supporting this and loading data for it in mapping.
    If you want a star table, you will know that a completely flattened table with day at the lowest level will not be able to get you a unique entry for month. So what people tend to do is one of the following:
    - Proclaim the first of the month the month entry point (this stays closest to the star table and simply relies on semantics on both the ETL and query side).
    - Create extra day-level entries which symbolize the month, so you have a day level with extra entries.
    - Create views, extra tables etc. to cover the extra data.
    - Create a data set within the tables that solves the key problem.
    We have opted for the last one. What you need to do for this is a set of records that uniquely identify any record at any level. Then you add a key which links to the dimension at the same point (a dimension key), so all facts always use this surrogate key to link (makes life in query tools easier).
    For a time dimension you will have a set of day records with their months etc. in them (the regular star). Then you add a set of records with NULL in the day, carrying month and up. And so you go up the hierarchy. For this we will have the ETL logic (in other words, you as a designer do not worry about this!). On the query tool you must be a little cautious with counts, but this is doable and minor.
    As you can see, none of the solutions are completely transparent, but we believe this is one that solves a lot of problems and gives you the best of all worlds. We will also support the same data structure in the OLAP dimensions for the database as well as in the relational dimension. NOTE that there are some disclaimers with this, as we are doing software here...
    In principle, however, we will solve your problem.
    Hope this explains some of our plans in this area.
    Jean-Pierre
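    To make the "extra records per level" idea concrete, here is a small Java sketch of the kind of dimension rows described above: day rows carry both a day and a month, the added month-level row carries NULL in the day column, and every row gets its own surrogate key. The data and class names are invented; OWB's ETL logic is what actually generates such rows:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: a time dimension holding day-level rows plus one month-level
 *  row per month (day == null), each with its own surrogate key. */
public class LevelRows {
    public static class DimRow {
        public final int key;       // surrogate key, usable by any fact
        public final String day;    // null marks a month-level row
        public final String month;
        DimRow(int key, String day, String month) {
            this.key = key; this.day = day; this.month = month;
        }
    }
    public static List<DimRow> buildMonth(String month, List<String> days) {
        List<DimRow> rows = new ArrayList<>();
        int key = 0;
        for (String d : days) rows.add(new DimRow(key++, d, month)); // day level
        rows.add(new DimRow(key++, null, month));                    // month level
        return rows;
    }
}
```

    A month-grain fact links to the NULL-day row's key; a day-grain fact links to a day row's key, so both grains share one dimension table.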

  • Changing foreground/ Background color in a High-Level GUI MIDlet

    I have created a MIDlet, and all the screens in the MIDlet are high-level GUIs (elements from the palette); I didn't use any Canvas.
    Now, I want to give color customization capability to my application.
    I know it's possible to change colors in a Canvas.
    But my MIDlet doesn't use any Canvas.
    So, is there any possibility to achieve this? (Even if it takes adding a Canvas to my MIDlet, no problem - I just want it accomplished somehow.)
    Please give me any slightest of ideas to achieve this.

    I am not sure whether this will help you or not... anyway, if you just want to add some animation or graphics to a Form, it is possible through javax.microedition.lcdui.CustomItem.
    You can see it as a small canvas inside your form.
    import javax.microedition.lcdui.*;
    public class MyCustomItem extends CustomItem {
        int w, h;
        public MyCustomItem(String label, int width, int height) {
            super(label);
            w = width;
            h = height;
        }
        protected int getMinContentWidth() {
            return w;
        }
        protected int getMinContentHeight() {
            return h;
        }
        protected int getPrefContentWidth(int i) {
            return getMinContentWidth();
        }
        protected int getPrefContentHeight(int i) {
            return getMinContentHeight();
        }
        protected void paint(Graphics g, int i, int i0) {
            // i and i0 are the width and height of the content area
            g.setColor(100, 0, 0);
            g.fillRect(0, 0, i, i0);
            g.setColor(0, 255, 0);
            g.drawString("Hello Thank God Its Working", 10, 10, Graphics.TOP | Graphics.LEFT);
        }
    }
    You can use MyCustomItem in your MIDlet:
    import javax.microedition.midlet.MIDlet;
    import javax.microedition.lcdui.*;
    import javax.microedition.midlet.MIDletStateChangeException;
    public class CustomItemDemo1 extends MIDlet {
        private Display disp;
        private Form frm;
        private MyCustomItem cust;
        protected void startApp() throws MIDletStateChangeException {
            disp = Display.getDisplay(this);
            frm = new Form("CustomItem Demo1");
            cust = new MyCustomItem("This is Cool...", 100, 100);
            cust.setLayout(CustomItem.LAYOUT_NEWLINE_BEFORE | CustomItem.LAYOUT_CENTER | CustomItem.LAYOUT_NEWLINE_AFTER);
            frm.append("hi this is Before Adding");
            frm.append(cust);
            frm.append("This is After Adding");
            disp.setCurrent(frm);
        }
        protected void pauseApp() {
        }
        protected void destroyApp(boolean b) throws MIDletStateChangeException {
        }
    }
    Just like Canvas, we can even add key press and other events inside CustomItem.
    You can even add traversal features by overriding:
        protected void traverseOut() {
            // do something here
        }
        protected boolean traverse(int dir, int viewportWidth, int viewportHeight, int[] visRect_inout) {
            // do something here...
            return false; // or true
        }
    For the rest, test and discover for yourself....
    Hope it helps..
    Edited by: Zac-Mathews on Jun 15, 2008 10:52 PM
    Edited by: Zac-Mathews on Jun 15, 2008 10:56 PM

  • Content level settings in fact LTS

    Hi all,
    I just want to get a very clear picture of the content level settings of an LTS. When we create a dimensional hierarchy, say a time hierarchy (Year, Quarter, Month, Day), and we want a measure to aggregate at the month level, we go to that particular measure column in the fact and set its content level to month. Now I want to know the significance of the content level in the LTS of the fact table: suppose I set the content level of the whole fact table to "month" - do all the measures in the fact then sum to month level, or how is it going to impact the result? In what scenario do we use the content setting in the fact LTS?
    Many Thanks,
    Sreekanth

    Hi,
    @Kishore
    Thank you for the quick reply. To get more clarity, let's assume we have one fact table with daily granular data and one aggregate table with monthly granular data. When I drag and drop the aggregated measures into the BMM it creates another LTS for the aggregate table, and in that aggregate LTS we would set the content level to "MONTH", so whenever a report is constructed from month columns and measures, the BI Server picks the aggregate table since its LTS is at month level. Is my understanding correct?
    Now, i have a connecting question.. we have monthly aggregate table which has 2 columns:
    1. MONTH_CODE (1999-01,1999-02,...)
    2. SALES_AMT
    now if i join TIME_DIM.MONTH_CODE = AGGREGATE_TABLE.MONTH_CODE
    the time dimension (TIME_DIM) has records at day granularity, so there will be 31 rows with "1999-01" in TIME_DIM. This will result in summing the same measure value 31 times, producing inflated numbers. So in this case, what would be the ideal join condition between the aggregate table and the time dimension, or what is the ideal join column in the aggregate table for joining the time dimension at day level?
    Much appreciated...
    Sreekanth
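    The 31x inflation described above is easy to reproduce: joining a day-grain dimension to a month-grain aggregate repeats the monthly measure once per day row. A Java sketch of the problem and of the usual fix, which is to join through one row per month (e.g. a distinct-month view or a dedicated month-level dimension table; the data here is invented):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

/** Sketch: fan-out when a day-grain time dimension joins a month-grain
 *  aggregate, versus joining through distinct month values. */
public class FanOut {
    // naive join: one output row per matching day row, so the monthly
    // measure is counted once per day (31x for January)
    public static double naiveSum(List<String> dimMonthCodes, Map<String, Double> aggSales) {
        double sum = 0.0;
        for (String m : dimMonthCodes) sum += aggSales.getOrDefault(m, 0.0);
        return sum;
    }
    // fix: collapse the dimension to one row per month before joining
    public static double distinctSum(List<String> dimMonthCodes, Map<String, Double> aggSales) {
        double sum = 0.0;
        for (String m : new TreeSet<>(dimMonthCodes)) sum += aggSales.getOrDefault(m, 0.0);
        return sum;
    }
}
```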
