Dimension's levels and level attributes

Hi guys,
Let's say I have the following situation:
Creating dimension CUSTOMERS with unique key CUS_ID.
Levels: L_CUSTOMER and L_COUNTRY.
One hierarchy: H_CUSTOMER_REGIONAL with levels L_COUNTRY -> L_CUSTOMER.
Now the question: What level attributes should I create?
For level L_CUSTOMER obviously CUS_ID, which is a key level attribute, and CUS_NAME.
For level L_COUNTRY what attributes should I create?
I see two variants:
1. COUN_ID (key level attribute) and COUN_NAME
2. just one COUN_NAME and this will be my key level attribute.
What guidelines should I follow here? I intend to use the model in Discoverer later, if that influences the design.
Please advise.
Thanks,
Alex

Alexandre,
It depends on whether you plan to use COUN_ID in a join with a summarized fact table. If yes, I would suggest keeping the ID; if not (pure star schema), there may not be much use for it.
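For example, here is a rough sketch of the two variants as a relational dimension table plus a summarized fact (illustrative DDL with made-up table names, not what OWB would generate):

CREATE TABLE customers_dim (
  cus_id    NUMBER PRIMARY KEY,  -- L_CUSTOMER key attribute
  cus_name  VARCHAR2(100),
  coun_id   NUMBER,              -- L_COUNTRY key attribute (variant 1)
  coun_name VARCHAR2(100)        -- the only L_COUNTRY attribute in variant 2
);

-- Variant 1 pays off when a summarized fact table at country grain joins on the ID:
CREATE TABLE sales_by_country_fact (
  coun_id NUMBER,                -- joins to customers_dim.coun_id
  amount  NUMBER
);

In a pure star schema, where the facts join only on CUS_ID, variant 2 (COUN_NAME alone as the level key) is usually enough.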
Regards:
Igor

Similar Messages

  • Facts at different Logical Dimension Level and Default Interaction (Drill)

    I have one Geography Logical Dimension that consists of 3 physical tables snowflaked. The 3 tables are City, State and Country. I have created 3 Level Based Hierarchies.
    I have 2 Facts. Fact_State is at the State grain only (Set the appropriate Content Level). Fact_City is at the City grain (Set the appropriate Content Level).
    When I pull a query with Country and a metric from Fact_State, the default interaction allows me to drill down. I drill into the State values. I would assume that the Drill interaction would stop here since this metric (Fact_State) is only at this level.
    What actually occurs is that it allows a drill into City; when you look at the physical SQL, it is trying to hit the other fact table (Fact_City) and pulls back no metrics, so I get no data.
    Does anyone know if this is the expected behaviour? Is there any way to stop the drill at the State level when metrics are only pulled from the State fact?
    Please let me know if more information is needed.

    You need the metric sourced from both Fact_State and Fact_City, with both facts set up as logical table sources of the logical fact table.
    Make sure Fact_State is mapped to the proper (State) level and Fact_City to the detail level.
    When you pull Country or State with the metric, the query should hit Fact_State; when you drill down below State, the query should come from Fact_City with the proper data.

  • Creating dimension in OMB script with only 1 level and no hierarchy

    Hi all,
    I have created some dimensions in the OWB client that map straight one-to-one to a table. Those dimensions have only one level, 'TOTAL', and no hierarchies.
    I am trying to create those dimensions using an OMB script, but it seems that when I bind the dimension level to a table, it also expects me to bind hierarchy levels to the table! However, I don't have (and don't want/need) a hierarchy.
    The error message is:
    OMB00001: Encountered DIMENSION_KEY at line: 1, column: 2309. Was expecting one of: <EOF>
    The OMB code I have is:
    IMPLEMENTED BY SYSTEM STAR \
    DIMENSION_KEY BOUND TO COLUMN 'DIM_CAUSE_KEY' \
    LEVEL 'TOTAL' BOUND TO TABLE 'DIM_CAUSE' \
    LEVEL_ATTRIBUTE 'DIM_CAUSE_KEY' OF LEVEL 'TOTAL' BOUND TO COLUMN 'DIM_CAUSE_KEY' \
    LEVEL_ATTRIBUTE ...
    LEVEL_ATTRIBUTE ...
    After the LEVEL_ATTRIBUTEs the statement ends, so I say nothing about any hierarchies.
    If anybody could give me a hint, that would be much appreciated!!
    Ed

    It is finally solved...
    The keyword "SYSTEM" in "IMPLEMENTED BY SYSTEM STAR" shoudn't be there...
    Unfortunately I was overlooking that in the oh-so-clear syntax diagram in the OWB API and Scripting Reference..... )-;
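    For reference, the same fragment without SYSTEM is what should deploy cleanly (attribute lines abbreviated exactly as above):
    IMPLEMENTED BY STAR \
    DIMENSION_KEY BOUND TO COLUMN 'DIM_CAUSE_KEY' \
    LEVEL 'TOTAL' BOUND TO TABLE 'DIM_CAUSE' \
    LEVEL_ATTRIBUTE 'DIM_CAUSE_KEY' OF LEVEL 'TOTAL' BOUND TO COLUMN 'DIM_CAUSE_KEY' \
    LEVEL_ATTRIBUTE ...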
    Let's move on.
    Cheers,
    Ed

  • Dimensions, Levels and Keys

    Folks,
    I am trying to deploy and load a dimension table composed of the following columns:
    Company Id
    Company_Name
    Corporation_Id
    Corporation_Name
    Cost_Center_Id
    Cost_Center_Name
    I have created three levels: Company, Corporation and Cost Center. Each Level is composed of the corresponding id and name.
    I have also created a single hierarchy, composed of the three levels: Company, Corporation and Cost Center.
    When I deploy the script, it creates a Unique Key constraint composed of the key at the lowest level - Cost_Center_Id.
    However, it is the combination of Company_Id, Corporation_Id and Cost_Center_Id that is required to uniquely identify a row in the dimension table.
    Any clue as to what I am doing wrong?
    Thanks. Bruce

    Hi,
    I believe hierarchies in OWB are one-to-many. In your example, a company can have many corporations and a corporation can have many cost centers, but a cost center can only belong to one corporation and a corporation can only belong to one company. If that were the case, then Cost_Center_Id would be the unique key. You're saying that the combination of Company_Id, Corporation_Id and Cost_Center_Id is the unique key, which leads me to believe that you don't actually have a hierarchy (the way OWB sees it). One way to handle this is to put all the columns in a single level and add a unique identifier for each possible combination of the three.
    If you look at "best practices", I think you'll find that you should always use a single synthetic numeric value as your primary key in a dimension table. This ensures the most efficient join to the fact table (if you have a three-column key, you need all three columns in the fact table, wasting space and creating a more complex join). Also, you should try to avoid using keys from an OLTP system as keys in your dimension.
    Hope this helps,
    Roald

  • Time Dimension with Hourly base level

    Hi all
    I need to analyze data at Hour, Day, Month, and Year levels. The data in the fact and dimension tables are at the 'Hour' level with DATE datatype, such as:
    02-SEP-10 10:00:00 AM
    02-SEP-10 11:00:00 AM
    To use Time-Series type calculations, I understand that I have to create an OLAP dimension of type 'TIME' (and not 'USER') and map it to the populated relational time dimension table. My questions are:
    1) Can I have the primary key for 'Hour' level as the actual base level value of datatype DATE (eg. 02-SEP-10 10:00:00 AM) ?
    2) For the END_DATE and TIME_SPAN attributes at the 'Hour' level, what should I use?
    The documentation is only available for minimum 'Day' level hierarchies, which allows setting END_DATE and TIME_SPAN to the actual 'Day' value and 1, respectively.
    3) For the END_DATE and TIME_SPAN attributes at the 'Month' level, do I need to supply the last-date-of-each-month and number-of-days-in-that-month, respectively?
    Please bear in mind that I am relatively new to Oracle OLAP. Any assistance will be appreciated.
    Cheers.

    Thank you Szilard and Adnan for the very prompt and informative responses.
    I managed to follow the advice on the oracleolap.blogspot link and created a time dimension with members at Hour level loaded into the dimension in character format: TO_CHAR(hour_id, 'DD-MON-YYYY HH24')
    The problem now is that maintenance (loading) of the dimension is taking an abnormally long time (over 1 hour), as opposed to when the members were being loaded in DATE format (5 minutes). The mapping table only has 10,000 entries.
    Why is there such a big difference? Is it normal? Is there a way to speed up the maintenance time?
    FYI, I have not created any indexes on any of the attributes.
    My platform is:
    11.1.0.7.0 DB
    11.1.0.7.0B Client
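    On the original END_DATE/TIME_SPAN question, here is a minimal sketch of how such a relational time-dimension table might be populated, assuming the usual convention that END_DATE marks the end of each period and TIME_SPAN is its length in days (table and column names are hypothetical, and the 1/24 hour span is my assumption, not taken from the documentation):
    INSERT INTO time_dim (hour_id, hour_end_date, hour_time_span,
                          month_id, month_end_date, month_time_span)
    SELECT TO_CHAR(t.hour_ts, 'DD-MON-YYYY HH24'),       -- character member key, as loaded above
           t.hour_ts,                                    -- END_DATE for the hour
           1/24,                                         -- TIME_SPAN in days
           TO_CHAR(t.hour_ts, 'YYYY-MM'),
           LAST_DAY(TRUNC(t.hour_ts, 'MM')),             -- last date of the month
           TO_NUMBER(TO_CHAR(LAST_DAY(t.hour_ts), 'DD')) -- number of days in the month
    FROM   source_hours t;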

  • Issue Processing Account Dimension When Number of Levels Decreases

    Hello.
    We have an issue processing the account dimension. It started at 8 levels, then decreased to 6 levels; however, the processing errors out. If we add some dummy members to bring it back to 8 levels, it processes fine.
    Any ideas?
    Thanks, Nancy

    Hi Nancy,
    the number of levels is not important.
    I think the changes made to your dimension are wrong, and adding the dummy members is just correcting that error.
    To understand the issue, I need to know which version of BPC you are using, and it would also be good to see the dimension.
    Do you have more than one hierarchy?
    Regards
    Sorin Radulescu

  • Loading Hierarchy levels as Navigational Attributes of 0MATERIAL.

    Hello Guys,
    A bit of background about the issue:
    I am loading the 0MATERIAL hierarchy from ECC using the 0MATERIAL_LKLS_HIER DataSource (IS Retail). This job runs very fast.
    We have an 8-level hierarchy in ECC/BI and the lowest level of the hierarchy is 0MATERIAL. I am also loading 0MATERIAL_ATTR from ECC. While loading 0MATERIAL_ATTR into BI, I need to populate each hierarchy level of an article as navigational attributes of 0MATERIAL.
    For this requirement I wrote a start routine in the update rules, reading hierarchy levels and node names from the /BI0/HMATERIAL table. The problem is that the performance of the extractor is very, very poor. It runs for more than 10 hours every day and is causing many problems for our transactional data loads.
    Here is the code I am using in my start routine.
    I need suggestions on improving the performance of this extractor asap. Are there any other best practices to load article hierarchy levels as attributes apart from using ABAP lookups? Any help will be appreciated.
    TYPES: BEGIN OF TY_HIER,
             MATERIAL TYPE /BI0/OIMATERIAL,
             HIER_LEVEL1 TYPE RSSHNODENAME,
             HIER_LEVEL2 TYPE RSSHNODENAME,
             HIER_LEVEL3 TYPE RSSHNODENAME,
             HIER_LEVEL4 TYPE RSSHNODENAME,
             HIER_LEVEL5 TYPE RSSHNODENAME,
           END OF TY_HIER.
    DATA: GT_HIER TYPE STANDARD TABLE OF TY_HIER,
          WA_HIER TYPE TY_HIER.
    DATA: LT_MAT_HIER TYPE STANDARD TABLE OF /BI0/HMATERIAL,
          WA_MAT_HIER TYPE /BI0/HMATERIAL.
    DATA: L_NODENAME TYPE RSSHNODENAME.
    * Retrieving the whole hierarchy table into an internal table
      SELECT * FROM /BI0/HMATERIAL INTO TABLE LT_MAT_HIER.
    * Looping at the data package
      LOOP AT DATA_PACKAGE.
        WA_HIER-MATERIAL = DATA_PACKAGE-MATERIAL.
    *   Sort internal table for binary search
        SORT LT_MAT_HIER BY IOBJNM NODENAME.
        READ TABLE LT_MAT_HIER
             INTO WA_MAT_HIER WITH KEY IOBJNM = '0MATERIAL'
                                       NODENAME = DATA_PACKAGE-MATERIAL
             BINARY SEARCH.
        SORT LT_MAT_HIER BY NODEID.
        WHILE WA_MAT_HIER-TLEVEL > 1.
          READ TABLE LT_MAT_HIER
               INTO WA_MAT_HIER WITH KEY NODEID = WA_MAT_HIER-PARENTID
               BINARY SEARCH.
    *     IF SY-SUBRC = 0.
          SHIFT WA_MAT_HIER-NODENAME BY 3 PLACES LEFT.
          CASE WA_MAT_HIER-TLEVEL.
            WHEN '07'.
              CONTINUE.
            WHEN '06'.
              WA_HIER-HIER_LEVEL5 = WA_MAT_HIER-NODENAME.
            WHEN '05'.
              WA_HIER-HIER_LEVEL4 = WA_MAT_HIER-NODENAME.
            WHEN '04'.
              WA_HIER-HIER_LEVEL3 = WA_MAT_HIER-NODENAME.
            WHEN '03'.
              WA_HIER-HIER_LEVEL2 = WA_MAT_HIER-NODENAME.
            WHEN '02'.
              WA_HIER-HIER_LEVEL1 = WA_MAT_HIER-NODENAME.
            WHEN OTHERS.
              CONTINUE.
          ENDCASE.
        ENDWHILE.
        APPEND WA_HIER TO GT_HIER[].
        CLEAR WA_HIER.
      ENDLOOP.
      SORT GT_HIER BY MATERIAL.
    Thanks in advance
    Cheers
    POPS

    Any other thoughts, guys?
    Is there a standard function module where we can pass the lowest-level article as input and get back the level 1 - 5 values?
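    If no suitable function module turns up, one low-cost change to the routine above is to stop re-sorting LT_MAT_HIER twice for every record in the data package: keep two copies of the hierarchy table, sort each of them once before the loop, and keep the BINARY SEARCH reads. A sketch based on the original code (same fields; only the sorting is moved, and the level-assignment CASE block is elided):
    * Two copies of the hierarchy table, each sorted once for its lookup
      DATA: LT_BY_NAME TYPE STANDARD TABLE OF /BI0/HMATERIAL,
            LT_BY_ID   TYPE STANDARD TABLE OF /BI0/HMATERIAL.
      SELECT * FROM /BI0/HMATERIAL INTO TABLE LT_BY_NAME.
      LT_BY_ID = LT_BY_NAME.
      SORT LT_BY_NAME BY IOBJNM NODENAME.
      SORT LT_BY_ID BY NODEID.
      LOOP AT DATA_PACKAGE.
        WA_HIER-MATERIAL = DATA_PACKAGE-MATERIAL.
    *   Leaf lookup by material, binary search on the pre-sorted table
        READ TABLE LT_BY_NAME INTO WA_MAT_HIER
             WITH KEY IOBJNM = '0MATERIAL'
                      NODENAME = DATA_PACKAGE-MATERIAL
             BINARY SEARCH.
    *   Walk up the parent chain via NODEID
        WHILE SY-SUBRC = 0 AND WA_MAT_HIER-TLEVEL > 1.
          READ TABLE LT_BY_ID INTO WA_MAT_HIER
               WITH KEY NODEID = WA_MAT_HIER-PARENTID
               BINARY SEARCH.
    *     ... same SHIFT / CASE logic as in the original routine ...
        ENDWHILE.
        APPEND WA_HIER TO GT_HIER.
        CLEAR WA_HIER.
      ENDLOOP.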

  • [38073] dimension has multiple leaf levels which are not identical

    Hi,
    I am trying to implement a time dimension with 2 source tables.
    The time dimension goes from minute to year, where the logical table includes 2 source tables:
    Source table #1: Year, Month, Week, Day
    Source table #2: Hour, Minute (Minute is the leaf level and the chronological key)
    When I try to check in, I get the following error: [38073] dimension has multiple leaf levels which are not identical.
    Thank you.
    Michael

    Hi RM
    I am able to create multiple hierarchies in the time dimension and have assigned time_id as the chronological key, which is at the dimension detail level and acts as a shared child level for both of them.
    I have created AGO function for calendar year as
    AGO(Test."Fact".amount_sold, Test."Time Dim"."Calendar Year", 1)
    but when I use the AGO column in Answers I get the error below:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 22046] To use AGO function, the query level ('Calendar Year, Fiscal Year') must be a static level. (HY000)
    Any suggestions on how to solve this would be appreciated.
    Thanks & regards

  • Scoping Dimension Members, Incorrect TOTAL Levels

    I have successfully applied security on my analytical workspace using PERMIT_READ and FGAC. The guide that I had followed can be read at the following link:
    http://www.oracle.com/technology/products/bi/olap/olap10g_applying_aw_security.doc
    My problem is that at the TOTAL level of the DIMENSION on which the security/limitation is applied for a certain USER, it still shows the aggregated TOTAL of all the members, instead of computing the TOTAL only for the members visible to the current user, regardless of whether the pre-summarizations on the "Summarize To" tab are checked or unchecked.
    I'm guessing that the solution is the manipulation of the PERMIT_READ program, but I'm new to all of these and have a rather tight schedule.
    I was wondering if anyone might know what to do..
    If the above explanation is not quite clear, please see the following:
    =======================================================
    ] 1) A REGULAR user sees the following:
    ] Member 1 : 200
    ] Member 2 : 500
    ] Member 3 : 300
    ] Member 4 : 700
    ] TOTAL : 1700
    ] 2) A LIMITED user sees the following:
    ] Member 2 : 500
    ] Member 3 : 300
    ] TOTAL : 1700
    See the problem? The limited user should have its TOTAL calculated from only the members it sees.

    When you say "regardless of checking or unchecking the pre-summarizations on the "Summarize To" tab", does this mean you unchecked the various levels and then completely rebuilt the cube? If not, then you will have to completely rebuild the cube you are querying, because the summarizations for the cube are stored directly in the AW. Unchecking the levels does not delete the stored data.
    What you need is a cube that has not been summarized across the dimensions where you are applying the PERMIT_READ. It should then work correctly, or at least that is my experience.
    Hope this helps
    Keith

  • SGE2010 and SRW2008: Noise level and power consumption?

    Hello!
    I am looking for information on the actual noise level and power consumption produced by the aforementioned models.
    Many thanks in advance,
    Luca

    Luca,
    Power consumption and other key hardware product attributes can be found in the following document (in the documents section of this community):
    https://www.myciscocommunity.com/docs/DOC-8669
    WRT noise levels, here they are:
    SGE2010 - 51.4 dB
    SRW2008 - N/A - there is no fan inside this switch; it uses an external power supply
    Thanks,
    Ivor

  • Difference b/w mat level and Sku level in AFS

    hi
    In AFS terminology, what do the terms material level and SKU level mean? And what does SKU value mean?

    Hi
    AFS materials are attached to grids, in which dimensional values are entered using characteristics.
    For example, a shirt is manufactured in different sizes, say L, XL, S, M, and colours, say blue, green and white.
    Size and colour are used as dimensions (characteristics), and L, XL, blue, ... are used as dimensional values.
    Say the stock of the shirt is 25 pcs, of which L/Blue is 10 pcs and XL/Green is 15 pcs.
    When we look at stock at material level, we say 25;
    at grid level we say the L/Blue shirt is 10 pcs, which is the SKU level.
    Hope I am clear.
    Regards,
    Raj.K

  • MCHB,MARD - Stock available at plant level and storage location level

    Hi ,
    How do I determine how much stock is available at storage location level and at plant level (no lot/batch is maintained)?
    In MCHB, stock at lot (batch) level is shown.
    In MARD, stock at storage location level is shown, including
    the stock maintained at lot level.
    I want to know the stock that is not maintained in a lot, i.e. stock maintained directly at storage location / plant level.
    Do I need to subtract the MCHB value from the MARD value?
    Is there any other way to find the stock that is not maintained at lot level?
    Regards
    Prasath

    Hi Prasath,
    What do you mean by stock maintained at SL level without batch level? Do you mean that some materials are subject to batch management and some are not?
    If that is the case, then for all batch-managed materials you can see the stock at batch level, and the same stock is summed up at SL level. But whatever you get from the MARD table is stock at SL level only.
    If you have activated batch management for a material, its stock will always be shown at lot level as well.
    Hope this clarifies the query.
    Award points if found useful.
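    If you still want to compare the two tables directly, here is a rough SQL sketch for unrestricted-use stock only (it assumes the standard fields MARD-LABST and MCHB-CLABS; other stock categories have their own columns, and the client field is ignored):
    SELECT m.matnr, m.werks, m.lgort,
           m.labst              AS sloc_level_stock,   -- MARD: storage-location level
           NVL(SUM(b.clabs), 0) AS batch_level_stock   -- MCHB: summed over batches
    FROM   mard m
    LEFT JOIN mchb b
           ON  b.matnr = m.matnr
           AND b.werks = m.werks
           AND b.lgort = m.lgort
    GROUP BY m.matnr, m.werks, m.lgort, m.labst;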

  • How to improve the query performance in to report level and designer level

    How can I improve query performance at the report level and at the designer (universe) level?
    Please let me know in detail.

    First, it is all based on the design of the database, the universe and the report.
    At the universe level, you have to check your contexts very carefully to get optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter, try to match it to the key fields in the database.
    Good luck
    Amr

  • Dunning: old dunning level and dunning level

    Hi experts,
    I have an issue in dunning. The dunning history for a particular customer shows the old dunning level as 3 and the dunning level as 2.
    I have checked the customer master; there was a change in the dunning procedure. But the document and master data show the correct dunning level with the new procedure.
    So I don't understand what the OLD dunning level and the dunning level are for the same customer. The issue is only with one customer.
    I have also checked the old procedure for the same customer in table MHNK; some notices were issued there. But how come the current invoice shows the old dunning level for the same invoice?
    Can anyone explain how to proceed?
    Regards
    Ashok

    Hi
    Thanks for the reply. So we need to apply this note to make the old dunning level and the dunning level equal after we run the program.
    One more doubt: I have observed that the dunning run, after the parameters are entered, looks like this:
    Parameters were maintained
    Dunning selection executed, job deleted
    Dunning notice printed, job deleted.
    The above doesn't mean that they will update the old dunning level, right?
    Regards
    Ashok

  • Table relationship between hierarchy level and merchandise category

    1) I want to find the merchandise category from a merchandise hierarchy level that is attached to the same merchandise category.
    I have data/tables as below.
    My input: M_WWG1C_class.
    I want to fetch matkl in T023 or matkl in MARA for the same article within the same merchandise category.
    How can I make the link?
    2) My input is CAWN_atwrt (Characteristic Value); how can I reach matkl in T023 or matkl in MARA for the same article within the same merchandise category?

    Oracle does not support the REPEATABLE READ transaction isolation level. It only supports the SERIALIZABLE, READ COMMITTED and READ ONLY isolation levels.
    The default is READ COMMITTED.
    While READ COMMITTED can access all data committed up to the point of execution within the transaction, SERIALIZABLE can only access data committed up to the point the transaction started.
    Another difference between the two, with respect to row-level locking, is:
    Both read committed and serializable transactions use row-level locking, and both will wait if they try to change a row updated by an uncommitted concurrent transaction. The second transaction that tries to update a given row waits for the other transaction to commit or roll back and release its lock. If that other transaction rolls back, the waiting transaction, regardless of its isolation mode, can proceed to change the previously locked row as if the other transaction had not existed.
    However, if the other blocking transaction commits and releases its locks, a read committed transaction proceeds with its intended update. A serializable transaction, however, fails with the error "Cannot serialize access", because the other transaction has committed a change that was made since the serializable transaction began.
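    For reference, the isolation level can be set per transaction or per session with standard Oracle SQL (a quick sketch):
    -- Per transaction (must be the first statement of the transaction)
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- or the default:
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    -- Read-only transaction
    SET TRANSACTION READ ONLY;
    -- Per session
    ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;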
    Read the following to clarify your understanding of transaction isolation levels and locking mechanisms:
    http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96524/c21cnsis.htm#2414
    Chandar
