Better Practice - split dimension table or split hierarchy?

Hi All,
I have an Org dimension table that contains both geography and organizational structure (columns state, city, etc. and columns division, department, etc.). Is it better to create one dimensional hierarchy with two trees (one being a geography sub-hierarchy from ALL, the other being ORG), or is it better to split the table at the BMM layer and create separate logical Geo and Org tables? What do you think? Thank you

Great help, thank you. I was also leaning towards the same decision, but now I'm sure. Venkat, when you say that OBIEE cannot handle 2 hierarchies, do you mean that a Fiscal Year / Calendar Year model wouldn't work? I think that's how it is in the vanilla OBI marketing app. I just want to make sure I understood you correctly. If yes, are those 2 expected to have the same number of levels (i.e. I might have a Week level in Calendar, but skip it for Fiscal)? Thanks

Similar Messages

  • Best Practice loading Dimension Table with Surrogate Keys for Levels

    Hi Experts,
    how would you load an Oracle dimension table with a hierarchy of at least 5 levels, with surrogate keys at each level and a unique dimension key for the dimension table?
    With OWB this is an integrated feature: it uses surrogate keys at every level of a hierarchy, and you don't have to care about the parent-child relation. The load process of the mapping generates the right keys and maintains the relation between parent and child inside the dimension key.
    I tried to use one interface per level and created a surrogate key with a native Oracle sequence.
    After that I put all the interfaces into one big interface with a union data set per level and added lookups for the right parent-child relation.
    I think it is a bit too complicated to build the interface like that.
    I would be more than happy for any suggestions. Thank you in advance!
    negib

    Hi,
    I do like the level keys feature of OWB - it makes aggregate tables very easy to implement if you're sticking with a star schema.
    Sadly there is nothing off the shelf among the built-in knowledge modules of ODI. It doesn't support creating dimension objects in the database by default, but there is nothing stopping you coding up your own knowledge module (maybe use flex fields on the datastore to tag column attributes as needed).
    Your approach is what I would have done; possibly use a view (if you don't mind having it external to ODI) to make the interface simpler, as in the sketch below.
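    For illustration, a minimal Oracle SQL sketch of loading one level with a sequence-generated surrogate key and a lookup of the parent's surrogate key; all table and column names (STG_DEPT, DIM_ORG, DIM_ORG_SEQ) are assumptions, not from the original post:
    -- Load the department level; divisions (the parents) are assumed to be
    -- loaded in a prior step so their surrogate keys can be looked up.
    INSERT INTO dim_org (dim_key, level_code, level_name, parent_key)
    SELECT dim_org_seq.NEXTVAL,
           s.dept_code,
           s.dept_name,
           p.dim_key                        -- parent's surrogate key
    FROM   stg_dept s
    JOIN   dim_org p ON p.level_code = s.division_code;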

  • Dimension table to support Hierarchy and the aggregation functions

    Hello expert,
    Now I seem to know why those aggregation functions, e.g. SUM and COUNT, fail whenever the report is executed.
    I have a fact table REJECT_FACT contains all the information of rejects:
    Reject ID
    Reject Category
    Reject Code
    Reject Desc
    Site Desc
    Site Code
    Region Desc
    Age Group
    Reject Date
    So I created an alias REJECT_DIM based on REJECT_FACT. After several trials, I think the aggregation functions do not work with the alias, because after I removed REJECT_DIM the aggregation seems to work.
    Is my concept right? Or am I missing something? I thought the data model for a data warehouse should be simple; why do we need to create many dimension tables to support the hierarchy?

    Hello expert,
    Thank you very much for your reply.
    Actually the data model is very simple. There is only one physical table REJECT_FACT. The structure is as follows:
    Reject ID (NUMBER)
    Reject Category (VARCHAR2)
    Reject Code (VARCHAR2)
    Reject Code Desc (VARCHAR2)
    Site Desc (VARCHAR2)
    Site Code (VARCHAR2)
    Region Desc (VARCHAR2)
    Age Group (VARCHAR2)
    Reject Date (DATE)
    The hierarchy required is as follows:
    Reject Category -> Reject Code Desc -> Site Desc -> Region Desc -> Age Group -> Reject Date.
    I want to produce a count at each hierarchy level.
    How do I populate the hierarchy structure effectively?
    Thanks......
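    A minimal Oracle SQL sketch of one way to get a count at every level of that hierarchy straight from REJECT_FACT; the snake_case column names are assumptions based on the list above:
    SELECT reject_category,
           reject_code_desc,
           site_desc,
           region_desc,
           COUNT(*) AS reject_count
    FROM   reject_fact
    -- ROLLUP produces one subtotal row per prefix of the column list,
    -- i.e. a count at each hierarchy level plus a grand total.
    GROUP  BY ROLLUP (reject_category, reject_code_desc,
                      site_desc, region_desc);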

  • Trying to use 2 different dimension tables and make a hierarchy on columns which are split across these dimensions - how do I do that?

    If you need to build a hierarchy in an Attribute view, all the relevant fields/columns must be in the same dimension table.

  • Split hierarchy - syndicator

    Hi,
    I am trying to do the split hierarchy by root/leaf operations in the Syndicator. For some of the lookup hierarchy tables I see that root/leaf is grayed out, not allowing me to do the operation. The same is true for By Value and the others. I tried it on the repository TE07_products_XXX provided by SAP as a sample repository; it's the same.
    Any idea why this is happening? Is there any chance that I can download ONLY the hierarchy lookup table records, NOT the main-table records that use the lookup table, by using "split hierarchy"?
    Thanks

    Hi,
    You cannot perform the Split Hierarchy operation on hierarchy fields that are multi-valued. In the Products repository, you cannot split the hierarchy fields Class Hierarchy and Category Hierarchy because Multi-Valued is set to Yes for both fields.
    Regards,
    Jitesh Talreja

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a fact source and a dim source (2 different units) and it works fine. But I can see that there will be trouble when I have more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    The best solution seems to be to make an alias of the fact/transaction table and have 2 "copies" of it in the physical layer (one for the fact and one for the dimension table). The only bad thing is that there will then always be 2 lookups against the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure the MVs are refreshed when the source is updated. See the sketch below.
    -Domnic
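    A minimal sketch of that suggestion, assuming Oracle and a hypothetical TRANSACTIONS table that serves as both fact and dimension source:
    -- Materialize the dimension attributes once instead of scanning the
    -- transaction table twice per query; refresh it after each load.
    CREATE MATERIALIZED VIEW dim_period_mv
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT DISTINCT period_id, period_name, fiscal_year
    FROM   transactions;
    -- From the load job, e.g.:
    -- EXEC DBMS_MVIEW.REFRESH('DIM_PERIOD_MV');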

  • Problem creating hierarchy based on 2 physical dimension tables

    I'm having a problem creating 1 logical dimension with a drill-down hierarchy, based on two separate physical dimension tables. The errors I receive when navigating the drill-down hierarchy is:
    "Cannot find logical table source coverage for logical columns" &
    "Missing join between logical tables".
    I'm using OBIEE 10.1.3.4
    Here are the details of what I have set up so far:
    Physical layer:
    Dimension table DIM_ORG with columns:
    -dimension_key
    -org_total_code
    -org_total_description
    -org_detail_code
    -org_detail_description
    Dimension table DIM_DEPT with columns:
    -dimension_key
    -dept_total_code
    -dept_total_description
    -dept_detail_code
    -dept_detail_description
    Fact table FACT_SALES with columns:
    -fk_org
    -fk_dept
    -sum_sales
    Physical Joins:
    FACT_SALES.fk_org = DIM_ORG.dimension_key
    FACT_SALES.fk_dept = DIM_DEPT.dimension_key
    Business Model & Mapping layer:
    I created a logical dimension ORG_DEPT. It contains two logical table sources (DIM_ORG & DIM_DEPT) and the following logical columns:
    - All Departments (mapped to dept_total_code)
    - Organisation (mapped to org_detail_description)
    - Organisation Number (mapped to org_detail_code)
    - Department (mapped to dept_detail_description)
    - Department Code (mapped to dept_detail_code)
    The business logical key is based on the combination of Organisation Number & Department Code
    The hierarchy I need is All Departments -> Organisation -> Department, so I created the following hierarchy for ORG_DEPT:
    - Total Level containing: All Departments
    - Organisation Level containing: Organisation Number (defined as the Logical level key) & Organisation (defined as the Drill level key)
    - Detail Department Level containing: Department Code (defined as the Logical level key) and Department (defined as the Drill level key).
    In the LTS of the dimension ORG_DEPT I've set the Content levels for the sources:
    DIM_ORG : Organisation Level
    DIM_DEPT: Detail Department Level
    In the LTS, no (inner) joins have been added against related physical tables.
    I created a logical fact table SALES (based on the physical fact table) and joined it against the logical dimension table ORG_DEPT.
    In the LTS the Content level for ORG_DEPT is set to the Detail Department Level. No (inner) joins have been added against related physical tables.
    When I create a report in Answers to test the hierarchy and select only 'All Departments', I get the correct dimension value returned. When I try to drill to the next level I get the following ODBC error:
    "Cannot find logical table source coverage for logical columns: [All Departments]. Please check more detailed level keys are mapped correctly".
    When I create a report in Answers and select both 'All Departments' and 'Sales' I get the correct result. When I try to drill to the next level I get a different ODBC error:
    "Missing join between logical tables DIM_DEPT and DIM_DEPT: There must be at least one physical join link between the underlying physical tables".
    Any suggestions are welcome!
    Thanks!

    Hello Robert,
    Your suggestions were known to me, but I still wanted to combine the two physical dimension tables in one logical dimension. So I've played around a bit more and found the solution: in my original setup I had two separate logical table sources (one for each physical dimension table). The solution was to combine the two logical table sources into one logical table source. I achieved that by joining the DIM_DEPT table to the FACT_SALES table and subsequently to DIM_ORG within the one LTS, using inner joins.
    Then I created the logical table key (a combination of org_detail_code & dept_detail_code). After that I could create the hierarchy with no problem.

  • Can OBIEE handle hierarchy built from different dimension tables?

    Can I build a hierarchy with values at various levels coming from other dimension tables in the model?
    Thanks

    Hi,
    if you want to drill in Answers from an attribute of one dimension to an attribute of another dimension, you can use the Preferred Drill Path inside a level of a hierarchy.
    For example:
    Dim Time
    -Year
    --Month
    ---Day
    Dim Geography
    -Continent
    --Nation
    ---City
    If you set the Preferred Drill Path of the Month level of Dim Time to Nation, then when you drill from Month in Answers you see Nation.
    I hope it helps.
    Regards,
    Gianluca

  • Should my dimension table be this big

    I'm in the process of building my first product dimension for a star schema and not sure if I'm doing this correctly. A brief explanation of our setup:
    We have products (dresses) made by designers for specific collections, each of which has a range of colors, and each color can come in many sizes. For our UK market this equates to some 1.9 million product variants. Flattening the structure out to product+designer+collection gives about 33,000 records, but when you add all the colors and then all the color sizes it pumps that figure up to 1.9 million. My "rolled my own" incremental ETL load runs OK just now, but we are expanding into the US market and our projections indicate that our product variants could multiply 10-fold. I'm a bit worried about performance of the ETL now and in the future.
    Is 1.9m records not an awful lot to incrementally load (well, analyze) nightly for a dimension table as it is, never mind what that figure may grow to when we go to the US?
    I thought of somehow reducing this by using a snowflake, but would that not just reduce the number of columns in the dimension and not the row count?
    I then thought of separating the colors and sizes into their own dimensions, but this doesn't seem right as they are attributes of products, and I would also lose the relationship between product, size & color, i.e. I would have to go through the fact table (which I've read is not a good choice) for any analysis.
    Am I correct in thinking these are big numbers for a dimension table? Is it even possible to reduce the number somehow?
    Still learning, so I welcome any help.
    Thanks

    Hi Plip71,
    In my opinion, it is always good to reduce the dimension volume as much as possible for better performance.
    Is there any hierarchy in your product dimension? Going for a snowflake for this problem is a bad idea.
    Solution 1:
    From the details given, it is good to split color and size out as separate dimensions. This will reduce the volume of the dimension and increase the column count in the fact (a separate WID has to be maintained in the fact table), but it will improve the performance of the cube. Before doing this, please check the layout requirement with the business.
    Solution 2:
    Check the distinct count of item variants used in the fact table. If it is much smaller, try creating a linear product dimension, i.e. create a view of the product dimension doing an inner join with the fact table, so that only the used dimension members are loaded into the cube's product dimension; volume is reduced with an improvement in the performance and stability of the cube. A sketch of such a view follows below.
    Thanks in advance. Sorry for the delayed reply ;)
    Anand
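    A minimal SQL sketch of Solution 2; all object and column names (DIM_PRODUCT, FACT_SALES, PRODUCT_WID) are hypothetical:
    -- Expose only the product members that actually occur in the fact table.
    CREATE VIEW dim_product_used AS
    SELECT d.*
    FROM   dim_product d
    WHERE  EXISTS (SELECT 1
                   FROM   fact_sales f
                   WHERE  f.product_wid = d.product_wid);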

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables - before data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (split the previous partition into 2). Best practice is to create the partitions on transaction timestamps, so load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table), as sketched below. Make sure both tables (Table and Table_IN) are on the same filegroup.
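    A minimal T-SQL sketch of that switch-in step; the constraint, column name, and partition number are assumptions:
    -- Prove the staged rows fit the target partition's range, then switch
    -- them in (a metadata-only operation, so it is effectively instant).
    ALTER TABLE dbo.Table_IN WITH CHECK
      ADD CONSTRAINT ck_table_in_range
      CHECK (txn_ts >= '20240101' AND txn_ts < '20240102');
    ALTER TABLE dbo.Table_IN SWITCH TO dbo.[Table] PARTITION 42;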
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or by using the LOCK_ESCALATION option of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13. This screen also includes some additional options.
    [Figure 3.13: Selecting a partitioning column.]
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned; the options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to the desired filegroups; either an existing partition scheme should be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of the partitions.
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date).
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here's how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query's WHERE clause filters on customer name looking for 'System%', the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here's an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1
      (Xn_Hst_ID int, Xn_Type char(10))
      ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn
      (Xn_Hst_ID int, Xn_Type char(10))
      ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1
      SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of a LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we're back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter's data into the spot of partition1.
    Tip
    Use the $PARTITION system
    function to determine where a partition function places values within a range of partitions.
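    Put together, steps 2 through 4 might look like the following T-SQL sketch; the function, scheme, filegroup, table names, and boundary dates are assumptions:
    -- Tell the scheme where the new partition created by SPLIT will live.
    ALTER PARTITION SCHEME ps_quarters NEXT USED fg_quarters;
    -- Step 2: split the empty trailing partition so a new quarter can land.
    ALTER PARTITION FUNCTION pf_quarters() SPLIT RANGE ('2013-01-01');
    -- Step 3: switch the oldest populated partition out to a staging table
    -- for archiving or purging (metadata-only, no data movement).
    ALTER TABLE dbo.TransactionHistory
      SWITCH PARTITION 2 TO dbo.TransactionHistory_Archive;
    -- Step 4: merge the now-empty boundary away, restoring the original
    -- partition count.
    ALTER PARTITION FUNCTION pf_quarters() MERGE RANGE ('2012-01-01');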
    Some best practices to consider for using a sliding window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the splits done when loading new data, and the merges done after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don't load a partition until its range boundary is met. For example, don't create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks, Shiven :)

  • Non Unique Indexes on Dimension Table

    Hello ,
    I have a Material dimension that has Product Main Group, Brand, and SPC Code broken out from the product hierarchy.
    As per the business requirement, we don't want to use the product hierarchy; it needs to be split into those 3 pieces.
    Since the cube has already reached 13 dimensions, we cannot add more dimensions to keep these 3 fields in separate dimensions.
    I heard from a Basis guy that we can have an index on the three fields in the same dimension table, and I have read about some negative impact on the aggregate definition.
    Is that true? Which one is true? I am not sure.
    Which option has the worse impact, and which is more useful?
    Could some experts please throw some light on this?
    Cheers
    Martin

    Martin -
    Not sure what you mean by "read some negative impact on aggregate definition".
    But as far as adding additional indices on other columns of a dimension table, that certainly is doable. I have never done this as part of an actual intentional design, but it seems like a valid approach if you are limited by dimensions. I'm assuming that when you say you already have 13, the 13 does not include the three standard dimensions for time, request, and (drawing a blank, is it currency?), so you really have 16 dimensions, 13 of which are user-defined.
    We have added dimension table indices in our shop when we have found large dimension tables that, either from poor initial design or changes to the data and/or queries, have resulted in full table scans against large dimension tables. In some cases, the query cost of the full scan of the dimension table was more than the cost of the access to the fact table itself.
    These indices must be added by your DBA, as they cannot be added through the Admin Workbench. You should also keep a record of any of these indices you create, because if for some reason you delete the tables and reactivate the cube, you'll probably lose them.
    Pizzaman

  • Multiple columns from the same dimension table as row labels performing slowly

    (Working with SSAS tabular)
    I'm trying to figure out what the approach should be for the following scenario:
    Let's say we have a Customer table. The table has columns such as account number, department number, name, salesperson, account manager, number of customers, delivery route, etc.
    A user of the model could want to see any permutation of that information as the row labels. How should that be handled?
    What we've been doing so far is that the user adds each column they want into the "ROWS" section in Excel. This works fine with smaller tables (for example, a "Department" table with a "Department Code" and "Department Name"), but on large tables this quickly chokes. I understand why this is happening; I just haven't found a better way to accomplish the same thing.
    I can add a calculated column to the model through VS, but obviously this is unsupportable and unscalable when each person needs their own permutations of the data. Can something similar be done in Excel?
    This question seems to be what I need:
    http://social.msdn.microsoft.com/Forums/en-US/97d1157a-1402-4227-b96a-79524401ddcd/mdx-query-performance-when-selecting-multiple-attributes-from-same-dimension?forum=sqlanalysisservices
    However, I can't find any information on how to add those properties (is it a multidimensional-only thing?).

    Thanks for the help. Sorry, but I'm a self-taught developer, and I may be missing some basics :)
    Anyway i've done what you suggested but i get this error:
    [nQSError: 15011] The dimension table source Dimension Services.DM_D_SERVIZI_SRV has an aggregate content specification that specifies the level Product. But the source mapping contains column COD_PRODUCT with a functional dependency association on a more detailed level.
    where:
    - DM_D_SERVIZI_SRV is the physical alias for the Service dimension (and the name of the LTS too)
    - COD_PRODUCT is the leaf of the hierarchy and the physical primary key, but it doesn't have to be included in the hierarchy
    Do I have to add another level with the primary key and hide it from the users?
    I tried to solve this by going to the logical table source properties, on the Content tab, and setting "logical level" to null for the hierarchy, but I don't know if this is correct.
    Thanks

  • Static and small dimension table - should I use lookup?

    Hi Gurus,
    I am creating an interface to load my fact table.
    Generally, to replace codes with surrogate keys, I use a dimension table lookup in the interface.
    But I have a question: what if the dimension table is very small and has only static values, like
    Sur_key, Code, Description
    1, C, Create
    2, R, Redeem
    In such a case, in the interface for the fact table, can I just write the expression "if source_table.column = C then 1 else 2"?
    Please advise whether this is a better way or not.
    Thanks in advance.

    I agree - it is much better practice to use lookups to populate surrogate keys. Using hard-coded values requires the developer to always remember that dependency on the dimensional data, and it can cause unexpected results down the line if the keys are updated or are inconsistent across environments. The sketch below contrasts the two approaches.
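    A minimal SQL sketch of the two approaches; the staging and dimension names (STG_TRANSACTIONS, DIM_TXN_TYPE) are hypothetical:
    -- Hard-coded (fragile if keys ever change or differ across environments):
    --   CASE WHEN s.txn_code = 'C' THEN 1 ELSE 2 END
    -- Lookup-based (picks up new or changed codes automatically):
    SELECT s.txn_id,
           d.sur_key AS txn_type_key
    FROM   stg_transactions s
    JOIN   dim_txn_type d ON d.code = s.txn_code;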

  • How to Create a Dimension Table in OBIEE11g

    Hello,
    Chapter 9, page 221 (of the PDF) of the Build Repository manual for OBIEE 11g explains the following.
    Creating Dimensions in Level-Based Hierarchies
    After creating a dimension, each dimension can be associated with attributes
    (columns) from one or more logical dimension tables and level-based measures from
    logical fact tables. After you associate logical columns with a dimension level, the
    tables in which these columns exist appear in the Tables tab of the Dimension dialog.
    To create a dimension with a level-based hierarchy:
    1. In the Business Model and Mapping layer of the Administration Tool, right-click a
    business model and select New Object > Logical Dimension > Dimension with
    Level-Based Hierarchy.
    Note that this option is only available when there is at least one dimension table that has no dimension associated with it.
    I am still learning OBIEE, and I cannot see the dimension menu item as per point 1 because I do not have a dimension table associated with it. All I have done at this point is create logical tables based on a few physical tables in the physical layer.
    How do I create dimension tables? Using the BI Admin tool?
    Thanks, R

    Hi Rich,
    If you are new to Oracle BI, you had better start with the Oracle Learning Library: http://apex.oracle.com/pls/apex/f?p=9830:41:0::NO:RIR:IR_PRODUCT,IR_PRODUCT_SUITE,IR_PRODUCT_COMPONENT,IR_RELEASE,IR_TYPE,IRC_ROWFILTER,IR_FUNCTIONAL_CATEGORY:,BI,,,,,
    You create dimensions based on your logical tables. In a plain star schema you would have one logical fact table and at least one or more logical dimension tables attached (joined) to the fact table. Just follow the documentation and you should be good to go.
    Good Luck,
    Daan Bakboord
    http://obibb.wordpress.com

  • [39008] Logical dimension table X has a source that does not join to any...

    Hi Gurus,
    I have the warning:
    "[39008] Logical dimension table X has a source that does not join to any fact source"
    where X is:
    PROMOTIONS, PRODUCTS, CUSTOMERS, CHANNELS
    but all the logical tables are mapped to the fact.
    I can even create reports for them.
    (The metadata was derived from OWB 11gR2, SALES_EE.)
    Thanks
    Laszlo
    Thanks for your earlier answer as well!

    Hi,
    Did you create a hierarchy on that dimension?
    Add a content level in the fact source, set to the hierarchy's detail level, for the fact that joins the dimension.
    Look at this for best practice: http://gerardnico.com/wiki/dat/obiee/hierarchy?s
    Thanks,
    Srikanth
