Fact & Dimension Table Concept

Hi All,
I am new to OBIEE.
I installed OBIEE successfully and it is working fine. I also created a repository successfully.
Now I am facing a challenge: creating the BMM (Business Model and Mapping) layer.
I am quite confused about the BMM. Could anybody provide a simple, clear example to help me understand it?
Thanks
K.L.Y

Hi,
Please refer to the following links, which give full details on building the BMM layer in a repository:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/bi/bi1113/biadmin11g_01/biadmin11g.htm
http://download.oracle.com/docs/cd/E21764_01/bi.1111/e10540/busmodlayer.htm#i1045226
http://download.oracle.com/docs/cd/E21764_01/bi.1111/e10540/dimensions.htm#BABHIDCC
http://download.oracle.com/docs/cd/E21764_01/bi.1111/e10540/lts.htm#CIHFGDIB
For more information, please read the following thread:
Design of BMM layer
Thanks,
Satya

Similar Messages

  • How to identify fact and dimension tables

    Hi ,
    We have a list of parent-child relationships for each of our database tables. Based on those parent-child relationships, we need to identify which tables are facts and which are dimensions. Could you please help me with how to identify the fact tables and the dimension tables?
    Thanks in advance

    Hi,
    Refer to this link:
    http://www.oraclebidwh.com/2007/12/fact-dimension-tables-in-obiee/
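    As a rough heuristic (a sketch in Oracle SQL against the standard data-dictionary views; adapt it to your RDBMS): tables that declare many foreign keys are usually facts, while tables that are mostly referenced by other tables' foreign keys are usually dimensions.
    -- Count outgoing foreign keys per table; high counts suggest fact tables.
    SELECT c.table_name, COUNT(*) AS fks_out
    FROM   user_constraints c
    WHERE  c.constraint_type = 'R'
    GROUP  BY c.table_name
    ORDER  BY fks_out DESC;
    Please mark if it helps you.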

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where you will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (by splitting the previous partition in two). Best practice is to partition on the transaction timestamp, so load the incremental data into an empty staging table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
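    A minimal Transact-SQL sketch of that daily pattern (the object names here are hypothetical, assuming a RANGE RIGHT partition function pf_TxnDate on the timestamp column, bound to partition scheme ps_TxnDate):
    -- 1. Split the rightmost partition so an empty one exists for the new day.
    ALTER PARTITION SCHEME ps_TxnDate NEXT USED daily_fg;
    ALTER PARTITION FUNCTION pf_TxnDate() SPLIT RANGE ('2012-06-16');
    -- 2. Bulk load the increment into the empty staging table Table_IN
    --    (same schema and filegroup as the main table), then switch it in.
    ALTER TABLE Table_IN SWITCH TO [Table] PARTITION $PARTITION.pf_TxnDate('2012-06-15');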
    Refer to the content below for detailed info.
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management are much quicker because only subsets of the data are touched, while the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it either with the Lock Escalation option on the Database Options page in SSMS or with the LOCK_ESCALATION option of the ALTER TABLE statement.
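    In T-SQL, for example (the table name is a hypothetical placeholder):
    -- AUTO lets the engine escalate locks to the partition level on partitioned tables.
    ALTER TABLE dbo.TransactionHistory SET (LOCK_ESCALATION = AUTO);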
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
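    A compact sketch of those four steps in Transact-SQL (all object names are hypothetical assumptions, not taken from the text above):
    -- 1. Filegroup and file to hold a partition.
    ALTER DATABASE SalesDB ADD FILEGROUP fg_2012;
    ALTER DATABASE SalesDB ADD FILE
        (NAME = f_2012, FILENAME = 'C:\Data\f_2012.ndf', SIZE = 100MB)
        TO FILEGROUP fg_2012;
    -- 2. Partition function: map rows to partitions by creation date.
    CREATE PARTITION FUNCTION pf_CreateDate (datetime)
        AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01');
    -- 3. Partition scheme: map the partitions to filegroup(s).
    CREATE PARTITION SCHEME ps_CreateDate
        AS PARTITION pf_CreateDate ALL TO (fg_2012);
    -- 4. Create the table on the scheme.
    CREATE TABLE dbo.Orders (OrderID int, CreateDate datetime)
        ON ps_CreateDate (CreateDate);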
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance. It can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which column to partition by, reviewing all the columns available in the Available Partitioning Columns section of the Select a Partitioning Column dialog box, as displayed in Figure 3.13. This screen also includes some additional options.
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the tables being partitioned to the desired filegroups: either an existing partition scheme can be used or a new one created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroup to be used for each partition and then enter a range for the values of each partition in the grid.
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16 GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here’s how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query’s WHERE clause filters on customer name looking for 'S%', the query engine knows that it needs only the third partition to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
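    For instance (hypothetical names, assuming the table is partitioned on CustomerName), the optimizer can turn this predicate into a range seek on the partitioning column so that only the matching partition needs to be read:
    SELECT * FROM dbo.Customers WHERE CustomerName LIKE 'S%';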
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or merge partitions. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter’s data into the spot of partition1 (see the sketch after these steps).
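    A minimal Transact-SQL sketch of one quarterly slide, assuming a RANGE RIGHT partition function pf_Quarter bound to scheme ps_Quarter (all object names are hypothetical):
    -- Step 2: split the empty rightmost partition so a new empty one exists.
    ALTER PARTITION SCHEME ps_Quarter NEXT USED fg_quarters;
    ALTER PARTITION FUNCTION pf_Quarter() SPLIT RANGE ('2013-01-01');
    -- Step 3: switch the oldest populated partition out to a staging table.
    ALTER TABLE dbo.TransactionHistory SWITCH PARTITION 2 TO dbo.TransactionHistory_Unload;
    -- Step 4: merge the now-empty boundary away to restore the partition count.
    ALTER PARTITION FUNCTION pf_Quarter() MERGE RANGE ('2012-01-01');
    -- Step 5: switch the newly loaded quarter in from its staging table.
    ALTER TABLE dbo.TransactionHistory_Load SWITCH TO dbo.TransactionHistory PARTITION $PARTITION.pf_Quarter('2012-10-01');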
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
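    For example (function name hypothetical):
    -- Returns the partition number that would hold the given value.
    SELECT $PARTITION.pf_Quarter('2012-06-15') AS PartitionNumber;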
    Some best practices to consider for using a sliding window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition split (when loading new data) and merge (after unloading old data) do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks, Shiven :) If this answer is helpful, please vote.

  • In Answers I am seeing "Folder is Empty" for Logical Fact and Dimension Tables

    Hi All,
    I am working in OBIEE Answers. All of a sudden, when I clicked on a Logical Fact table it showed "folder is empty". I restarted all the services and tried again, but it still shows the same for the Logical Fact and Dimension tables, although I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette:
    http://forums.oracle.com/forums/ann.jspa?annID=939
    Reply to, or mark as answer, the posts that users gave you.
    As for your question, you should check the log for a possible corrupt catalog:
    OracleBIData_Home\web\log\sawlog0.log

  • Help on Setting Logical Levels in Fact Tables and on Dimension Tables

    Hi all
    Can anybody point me to blogs or any kind of material on what exactly levelling is?
    After creating the dimensional hierarchies, we need to set the logical levels for the LTS of the fact tables, right? So what is the difference between setting logical levels on fact tables and setting levels on dimension tables?
    Any kind of help is appreciated
    Thanks
    Xavier.

    I have read these blogs, but my question is this:
    I understood setting the logical levels in the LTS of fact tables.
    But we can also set logical levels for dimensions, right? I didn't understand why we set logical levels for dimensions. Is there a reason why we go with levelling on dimensions?
    Thanks
    Xavier

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different logical tables), and it works fine. But I can see that there will be trouble when I have more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    It seems the best solution is to have an alias of the fact/transaction table, so there are 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from both the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practices on this, or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are refreshed when the source is updated.
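    For instance, a minimal Oracle SQL sketch (table and column names are hypothetical): a dimension "copy" built from the transaction table, refreshed by the ETL after each source load.
    CREATE MATERIALIZED VIEW period_dim
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      AS SELECT DISTINCT period_id, period_name FROM transactions;
    -- then, after each load of the source table (SQL*Plus syntax):
    EXEC DBMS_MVIEW.REFRESH('PERIOD_DIM');
    -Domnic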

  • Issue using 2 Fact tables with one Dimension table

    Hi,
    I have 1 dimension table X and 2 fact tables A and B.
    X is joined to both A and B: for Loan Amount (with A) and for Collateral Amount (with B). When I select X.Product_Name, A.Loan_Amt, B.Collateral Amount, it gives an error message:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 15018] Incorrectly defined logical table source (for fact table EIP Collateral FACT) does not contain mapping for [EIP Reporting FACT.PD ID]. (HY000)
    Any clues?
    Is there an inner or outer join which needs to be created or set in the RPD to get the desired results?

    OK.
    I have one table, Portfolio Details, which has Portfolio Name, Product Category, Product Name, Product ID, and Product Source Code. This is my dimension table.
    I have another 2 fact tables: EIP Reporting FACT and EIP Collateral FACT.
    These two tables are joined to the Portfolio Details table.
    EIP Reporting FACT gives the portfolio-wise Loan Amount,
    and EIP Collateral FACT gives the portfolio-wise Collateral Amount details for the same set of customers.
    Now, I am selecting Portfolio Name, Product Category, Product Name, SUM(EIP Reporting FACT.LOAN_AMOUNT), and SUM(EIP Collateral FACT.Collateral_Amt) in a report.
    On selecting these columns I am getting that error message, which is related to mapping.
    If I take any column from the Portfolio Details table and any column from EIP Reporting FACT, it works.
    If I take any column from the Portfolio Details table and any column from EIP Collateral FACT, it works.
    But if I take any column from the Portfolio Details table and columns from both FACT tables, it gives the mapping error.
    Hope I am able to explain the issue in a better way now.

  • Consistency Warning - [39008] Dimension Table not joined to Fact Source

    I have a schema in which I have the following tables:
    A) Patient Transaction Fact Table (i.e. supplies used, procedures performed, etc.)
    B) Demographic Dimension table (houses info like patient location code)
    C) Location Dimension table (tells me what Hospital each unique Location maps to)
    So table A is the fact, and table B is a dimension table joined to table A based on Patient ID, so I can get general info on the patient. This would allow me to apply logic to just see patient transactions where the patient was FEMALE, or was in the Emergency Room, by applying conditions to these fields in table B.
    Table C is a simple lookup table joined to table B by Location Code, so I can identify which hospital's emergency room the patient was located in for instance.
    So the schema is: A<---B<---C, where B and C are both dimension tables.
    The query works as desired, but my consistency check gives me the following WARNING:
    *[39008] Logical dimension table LOCATION MASTER D has a source LOCATION MASTER D that does not join to any fact source.*
    How do I resolve this WARNING, or at least suppress it?

    Hi,
    What you need to do is add the (physical) Location dimension table to the logical table source of the Demographic dimension, for example by dragging it from the physical layer onto the logical table source of the Demographic logical dimension table in the BMM layer.
    Regards,
    Stijn

  • Multiple Fact Tables and Dimension Tables

    I have been having some problems trying to model the data from Oracle E-Business Suite maintenance. I will try to give the best description of how the data is held in the tables. The structure is such that a work order can have multiple operations and an operation can have multiple resources as well. I believe the problem comes in the fact that an operation doesn't necessarily need to have a resource. I could not attach an image so I have written out an example below. I am not saying this is right or that it works, but just to give you an idea of what I am thinking. The full dimension would be Organization -> WorkOrder -> Operation -> Resource. Now, the fact tables all hold factual data for the three different levels, with the facts being at each corresponding level. This causes an obvious problem in combining the tables into one large fact table through the ETL process.
    Can anyone tell me if they think this can be done? Am I way off? I am sure that there is a solution as there always is but I have been killing myself trying to figure this one out. I currently have the entire solution in different Business Models. I would like however to be able to compare facts from multiple areas such as the Work Order level and the Resource level.
    Any help is greatly appreciated. I realize that the solution may also require additional work on the ETL side so I am open to any and all suggestions.
    Thank you in advance for anyone's time. :)
    Dimension Tables
    WorkOrder_D
    Operation_D
    Resource_D
    Organization_D
    Fact Tables
    WorkOrder_F
    Operation_F
    Resource_F
    Joins
    WorkOrder_D -> Operation_D
    Operation_D -> Resource_D
    WorkOrder_D -> WorkOrder_F
    Operation_D -> Operation_F
    Resource_D -> Resource_F
    Organization_D -> WorkOrder_D
    Organization_D -> Operation_D
    Organization_D -> Resource_D

    Hi,
    Currently the dimension table is modeled as a simple logical table in the RPD, as it does not have any levels or hierarchy.
    It is a flat dimension. Can you guide me on how to implement a flat dimension in OBIEE? Because this dimension is a simple logical table,
    I am not able to set the appropriate level for the fact tables. This dimension does not appear in the list of dimensions.

  • [39008] Logical dimension table has a source that does not join to any fact

    Dear reader,
    After deleting a fact table from my physical layer and from my business model, I'm getting an error: [39008] Logical dimension table TABLE X has a source TABLE X that does not join to any fact source. I do have another fact table in the same physical model and in the same business model, which is joined to TABLE X both in the physical and in the business model.
    I cannot figure out why I'm getting this error; even after deleting all joins and rebuilding them, I'm still getting it. When I look into the Joins Manager, the joins exist in both the physical and the logical model, but the consistency check still warns me with [39008]. When I ignore the warning, go to Answers, and try to show TABLE X (not the fact, but the dim), it gives me the following error:
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: TABLE X.column X Please fix the metadata consistency warnings. (HY000)
    SQL Issued: SELECT TABLE X.column X saw_0 FROM subject area ORDER BY saw_0
    There is one "special" thing about this join: it is a complex join in the physical layer, because I need to do a BETWEEN on dates and a less-than-or-equal comparison against the current date, like this: dim.date BETWEEN fact.date_begin AND fact.date_end AND dim.date <= CURRENT_DATE. In the business model I've got another complex join.
    Any help is kindly appreciated!

    Hi,
    Have you specified the content level of the fact table and mapped it to the dimension in question? Ideally this should be done by default, since one of the main features of Oracle BI is its ability to determine which source to use, and specifying the content level is one of the main ways to achieve this.
    Another quick thing you might try is creating a dimension (hierarchy) in case one is not already present. I had a similar issue a few days back and the warning was miraculously gone after doing this.
    Regards

  • Regarding Dimension Tables and Fact Tables

    Hello,
    I have some basic doubts regarding the star schema.
    Let me first explain my understanding of the star schema.
    The fact table contains key figures and DIM IDs, OK?
    These DIM IDs are connected to my dimension tables. The dimension tables contain characteristics and these DIM IDs, OK.
    Then my basic doubts:
    1. How is a DIM ID linked to the SID tables?
    2. If I have not maintained any master data, texts, or hierarchies, will the SID tables be generated or not?
    3. If they are generated, what is the use of the SID, given that we have not maintained master data?
    4. I have 18 characteristics which are in no way related to each other. In that scenario, how should the dimensions be identified? Do we need to include all the characteristics in one dimension, or create separate dimensions for each of them (the maximum is 13 dimensions)?
    5. If the dimension table contains DIM IDs and characteristics, then where are the values for the characteristics stored?
    (For example, Sales Rep is a characteristic; we will be giving it values, i.e. names. Where are these values stored?)

    hi Vasu,
    e.g. we have an InfoCube with
    - dimension 'location' -> characteristics 'sales rep', 'country'
    - dimension 'partner'.
    fact table
    dim-id (location)   dim-id (partner)   revenue
    1001                9001               500
    1002                9002               300
    1003                9004               200
    dimension table 'location'
    dim-id   sid-id (sales rep)   sid-id (country)
    1001     3001                 5001
    1002     3004                 5004
    1003     3005                 5001
    'sales rep' SID table
    sid    sales rep
    3001   abc
    3004   pqr
    3005   xyz
    'country' SID table
    sid    country
    5001   country1
    5004   country2
    So, from the link between dim-id and sid, we get:
    "sales rep report"
    sales rep   revenue
    abc         500
    pqr         300
    xyz         200
    "country report"
    country    revenue
    country1   700
    country2   300
    hope it's clear.

  • Fact Tables without Dimension Tables.

    Hi,
    I am currently working on a project where a data warehouse is being developed using an application database as the source. Since the application database is already normalised, the tables are being loaded in as they are, and the consultant involved is creating data marts from them. I have asked why he is not creating separate dimension and fact tables for the data marts, and he has told me that he will create a fact table with the dimensions as column names, without any dimension tables created as you might expect in a star/snowflake schema.
    Is this right? I always thought you had to have dimension and fact tables, especially if you want to start using SSAS.
    Regards,
    W.

    create a fact table with the dimensions as column names without any dimension tables created as you might expect as per star/snowflake schema.
    This is the trouble with denormalization: where is the stopping point?
    If the data warehouse is the source for SSAS OLAP cubes, it is better to go with the usual star schema.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014

  • Fact Table and Dimension Tables

    Hi Experts, I'm creating custom InfoCubes for data coming from non-SAP source systems. I have two InfoCubes. The data comes from about 10 tables. I have 10 DataSources created for this, and the data will be consolidated in a standard DSO before it flows into the 2 InfoCubes.
    Now the client wants to know beforehand how much data will be in each InfoCube's fact table and dimension tables. I have the total size of all 10 source tables, given to me by the DBA. I am not sure how to convert that information into fact table and dimension table sizes, as I have not yet created these InfoCubes.
    Please help me with how I should address this.

    hi,
    An exact figure will be hard to give; however, you can arrive at a rough one in your case.
    You are consolidating the data from the tables, which means there are relations between the tables. Arrive at a rough figure based on those relations and on the activity you perform while consolidating the data.
    For example, let us say we want to combine data for sales orders and deliveries in a DSO.
    Say Sales Order has 1000 records and Delivery has 2000 records, and both tables have a common link (Sales Order). In the DSO you are combining the data, meaning the data will be at the most granular level, consisting of delivery data, so the maximum number of records the consolidated DSO can have is 2000.
    regards,
    Arvind.

  • How to resolve many fact tables and dimension tables

    Hi,
    The scenario is that we have many fact and dimension tables. Based on some conditions, a measure from one fact will be divided by a measure from another fact. I have encountered many errors like "Unable to navigate ...". How do I resolve these errors and reduce the many tables to a few? (I assume by creating logical tables, but are there any other alternatives?)
    thanks
    Suresh

    Suresh,
    I assume that you know how to create a single logical fact table from n physical facts, which works only if the fact tables are related. Then join all the conformed dimensions to this single logical table using joins in the Business Model layer. Remember to set the mappings in the LTS. Also, if you have any hierarchies, please set the aggregation level for those.
    - Red

  • Resolving loops in a star schema with 5 fact tables and 6 dimension tables

    Hello
    I have a star schema, i.e. 5 fact tables and 7 dimension tables. All fact tables share the same dimension tables; some fact tables share 3 dimensions, while others share 5 dimensions.
    I did adopt the best practices and, as recommended in the book, tried to resolve the loops using contexts, as that is the recommended option over aliases in a star schema setting. The contexts are resolved, but I still have loops. I also cleared the Multiple SQL Statements for each context option, but no luck. I need to get this resolved ASAP.

    Hi Patil,
    It is not clear what exactly the problem is. As a starting point, you could set the contexts up so that each one covers only the joins from one fact to its dimensions:
    Fact A joins Dim 1, Dim 2, Dim 3, and Dim 4
    Fact B joins Dim 1, Dim 2, Dim 3, Dim 4, and Dim 5
    Fact C joins Dim 1, Dim 2, Dim 3, Dim 4, and Dim 6
    Fact D joins Dim 1, Dim 2, Dim 3, Dim 4, and Dim 7
    Fact E joins Dim 1, Dim 2, Dim 4, and Dim 6
    If each of these contexts is defined to cover just the joins from fact to dim, then you should not get loops.
    If you can lay out your joins like the above, it may be possible to specify the contexts/aliases that should work.
    Regards
    Alan
