Logical level in Fact tables - best practice

Hi all,
I am currently working on a complex OBIEE project where I report straight against the production tables, so the fact (and dimension) tables are fairly complex and I use multiple sources per logical table to improve performance. What I often struggle with is the Logical Levels (on the Content tab) where the level of each dimension has to be set. In a plain star schema (one-to-many) this is straightforward and easy to set up, but when the Business Model (and the physical model) gets more complex I sometimes struggle with the aggregates - getting them to work and appear with the different dimensions. (Using "More" > "Get Levels" does not always give the best result - far from it.) I also have some combinations of left and right outer joins, which makes things even more complicated for the BI Server.
For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at the Detail or the Total level? I can see the use of logical levels when aggregate fact tables (on quarter, month, etc.) are involved, but is it better simply to skip the logical level setup when no aggregate tables are used? Sometimes that seems like the easiest approach...
Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

Hi User,
For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
It is not necessary to connect every dimension; it depends on the reports you are creating. But as a best practice, once you have defined the join conditions in the physical layer, you should map each dimension at its Detail level.
For example, for a Sales fact table: if you want to report at the ProductDimension.ProductName level, use the Detail level of the Product dimension; otherwise use the Total level (and likewise for the Product and Employee dimensions).
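To make that concrete - a minimal sketch, assuming a hypothetical daily SALES_FACT, a monthly aggregate SALES_FACT_MONTH_AGG mapped as a second logical table source with its Calendar content level set to Month, and hypothetical TIME_DAY/TIME_MONTH physical tables - the content levels are what let the BI Server pick the right source, generating physical SQL roughly like this:

  -- a month-level request can be answered from the aggregate source
  SELECT t.month_id, SUM(f.sales_amount) AS sales
  FROM   sales_fact_month_agg f
  JOIN   time_month t ON t.month_id = f.month_id
  GROUP  BY t.month_id;

  -- a day-level request falls back to the detail source
  SELECT t.day_id, SUM(f.sales_amount) AS sales
  FROM   sales_fact f
  JOIN   time_day t ON t.day_id = f.day_id
  GROUP  BY t.day_id;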
Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
Source: Admin Guide (Get Levels definition)
thanks,
Saichand.v

Similar Messages

  • Help on Setting logical Levels in Fact tables and on Dimension tables

    Hi all
    Can anybody provide any blogs or any kind of material on what exactly levelling is?
    After creating the dimensional hierarchies we need to set the logical levels for the LTS of the fact tables, right? So what is the difference between setting logical levels on fact tables and setting levelling on dimension tables?
    Any kind of help is appreciated
    Thanks
    Xavier.

    I have read these blogs, but my question is this:
    Setting the logical levels in the LTS of fact tables I understand.
    But we can also set logical levels for the dimensions, right? I don't understand why we set logical levels for dimensions. Is there a reason why we go with levelling on dimensions?
    Thanks
    Xavier

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[##...] prefix is required.
    However, when I use this in the new report I am working on, I get runtime errors.
    I also tried this
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but the data returned was not what I expected.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
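    (For reference, a minimal T-SQL sketch - the column list is hypothetical - of why the fully qualified name resolves: global temp tables (##name) are created in tempdb under the dbo schema, so they can be referenced from any database context.)
      -- hypothetical columns, for illustration only
      CREATE TABLE ##Scott_xwPaySheetDtlForN (pay_sheet_id INT, amount DECIMAL(10,2));
      -- the short name works from any database...
      SELECT * FROM ##Scott_xwPaySheetDtlForN;
      -- ...and so does the fully qualified form, since that is where the table lives
      SELECT * FROM tempdb.dbo.[##Scott_xwPaySheetDtlForN];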
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear, but it looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a DataSet and then set the report's location to the DataSet. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks the temp table to its creating session, so that user has exclusive rights on it.
    Thank you
    Don

  • Multiple granular levels for fact table

    My fact table has to incorporate both transaction-level and accumulative data; my basic design for the transaction level is as follows:
    CUSTOMER_KEY, LOAN_KEY, TIME_KEY, LOAN_AMT, TOTAL_DUE, LOAN_STATUS, TRANSACTION
    9000,1000,1,200,200,Open, Advance
    9000,1000,1,200,0,Close, Payment
    If I aggregate the values then the query will take time to execute. How can I provide cumulative information from this fact table? Should I go for one more fact table for the accumulative information?
    Please suggest.
    Thanks,
    Hesh.

    Hi ,
    Is this a question about OLAP cube generation using your fact table design? If not, this is the wrong forum. If yes, then it should be straightforward with your fact and dimension design, with no need for another fact table holding aggregates, because the OLAP cube aggregates this and should ideally be tremendously fast.
    Thanks,
    DxP
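    If the data stays relational rather than going into a cube, another option - a sketch only, reusing the columns from the question; the table name is hypothetical - is to keep the fact at transaction grain and derive the accumulative position with an analytic SUM instead of building a second fact table:
      SELECT customer_key, loan_key, time_key, loan_status, loan_amt,
             SUM(loan_amt) OVER (PARTITION BY loan_key
                                 ORDER BY time_key
                                 ROWS UNBOUNDED PRECEDING) AS cumulative_amt
      FROM   loan_transaction_fact;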

  • Building tables - Best practices?

    Hi all,
    I hope this is a good place to ask a general question like this. I'm trying to improve myself as a DB designer/programmer and am wondering what current practices are used when deploying a database that must be kept running at the highest possible performance (both for selecting data and for keeping the database clean).
    Basically, here are the specific topics of concern for me:
    - table sizing
    - index sizing
    - oracle parameter tuning
    - maintenance work required to be done on tables/indexes
    The things I've studied were all based on Oracle 8i, and I'm wondering whether much has changed for 9i and/or 10g.
    Thanks.
    Peter

    Actually I'm not very new to database work now, but I still consider myself not quite sufficient in certain aspects of a typical DBA's job. For that reason, I'm trying to keep my questions very general, as though I'm learning them afresh.
    It does seem that I'm trying to ask something that is too broad to bring up in forum discussions... I'll go back and do some independent studies then come back to the forum with better questions. :)
    Looking through the 10g bug reports in Metalink made me uncomfortable about some of the issues people have been running into (it has been a while since I did the initial evaluation, and I forget which specific issues I looked at). I realize that Oracle 10g provides a lot of conveniences with the new web-based EM and EM Server (I am especially interested in the new reports and built-in automation that Oracle provides), and also with grid deployments for high-availability systems, but we have been held back by many reasons from going forward with 10g at this time. Having said that, moving to 10g is still planned for the future, so I am continuing the evaluation in several aspects that are specific to our design, to determine what we can use and/or abandon in our existing deployment processes.
    Thanks for everyone's time, best wishes.
    Peter

  • Join large external tables (best practice?)

    Hi there,
    I have three external tables in a master-detail relation: table A (10,000,000 rows) is master of table B (20,000,000 rows), and table B is master of table C (100,000,000 rows). Can you tell me the best way:
    - directly join the external tables, or
    - copy the external tables into tables, create an index, and join them
    What is more efficient, and why?
    Thanks for your help and ideas.
    Gerhard.

    In general, if the joins you are doing can benefit from indexes, you will want to copy the data to database tables. If the joins will end up doing full table scans anyway, it will matter far less.
    For data integrity, you will likely also want to be able to enforce foreign key constraints.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
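    A sketch of the copy-and-index route described above (the external table and join column names are assumptions):
      -- materialize the external data once, then index the join keys
      CREATE TABLE b_internal AS SELECT * FROM table_b_ext;
      CREATE TABLE c_internal AS SELECT * FROM table_c_ext;
      CREATE INDEX b_internal_a_fk_ix ON b_internal (a_id);
      CREATE INDEX c_internal_b_fk_ix ON c_internal (b_id);
      -- the master-detail join can now benefit from the indexes
      SELECT a.*, b.*, c.*
      FROM   table_a_ext a
      JOIN   b_internal  b ON b.a_id = a.a_id
      JOIN   c_internal  c ON c.b_id = b.b_id;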

  • Using 2 fact tables with different granularity against calendar dimension

    Hello gurus,
    I have a requirement to provide a report to show the consumption of available capacity per month and also YTD.
    I have two fact tables:
    Fact table ‘Capacity’ with columns:
    - Site_id
    - Month_id
    - Capacity
    Ie.
    001, 2010M01, 50
    001, 2010M02, 50
    001, 2010M12, 75
    002, 2010M01, 60
    002, 2010M02, 65
    002, 2010M12, 80
    Etc
    Fact table ‘Consumption’ with columns
    - Site_id
    - Day_id
    - Consumption
    Ie
    001, 20100101, 2
    001, 20100102, 3
    001, 20100131, 1
    001, 20100201, 5
    001, 20100212, 6
    001, 20100228, 4
    Etc
    As can be seen above, my ‘Capacity’ table contains monthly volumes, and the ‘Consumption’ table contains daily volumes.
    My Calendar dimension is straightforward:
    Year
    Quarter_id
    Month_id
    Day_id
    Ie
    2010, 2010Q1, 2010M01, 20100101
    2010, 2010Q1, 2010M01, 20100102
    2010, 2010Q1, 2010M01, 20100103
    2010, 2010Q1, 2010M01, 20100104
    Etc
    The MfgSite dimension is also simple:
    Site_id
    Site_name
    Group
    These are the steps I have taken so far:
    - Imported the four tables
    - Created following joins:
         MfgSite.Site_id = Capacity.Site_id
         MfgSite.Site_id = Consumption.Site_id
         Calendar.Month_id = Capacity.Month_id
         Calendar.Day_id = Consumption.Day_id
    - Created Business Model Diagram in BMM
    - Created Calendar hierarchy:
         Year, Quarter, Month, Day
    - Created MfgSite hierarchy:
         Group, SiteName
    - Setup Logical Table Source / Content settings as follows:
         Fact table Capacity:
              Dimension MfgSite: Logical Level = Site
              Dimension Calendar: Logical Level = Month
         Fact table Consumption:
              Dimension MfgSite: Logical Level = Site
              Dimension Calendar: Logical Level = Day
    - Set Default Aggregation Rule to Sum on Logical Columns:
    Capacity.Capacity
    Consumption.Consumption
    - Created following YTD Logical Columns:
         YTDCapacity = TODATE(Capacity.Capacity, Calendar.Year)
         YTDConsumption = TODATE(Consumption.Consumption, Calendar.Year)
    - Created Presentation layer
    I then built a few reports to test it out and found that I have an issue with the Capacity object: When I build a simple report to show capacity per month:
    SiteName, Month, Capacity
    the capacity for each month is multiplied by the number of calendar days in that month, so I get
    Site      Month      Capacity
    001      2010M01      1550 (= 31 x 50)
    001      2010M02     1400 (= 28 x 50)
    Etc
    In addition, when I add YTDCapacity to my report, the report fails with the following message:
    Unable to navigate requested expression: ToDate(Capacity:[DAggr(Capacity.Capacity by [ Calendar.Year, Calendar.Month_id, MfgSite.Site_id, MfgSite.SiteName] )], [Level Year]). Please fix the metadata consistency warnings. (HY000)
    Did I miss any steps? Any help is greatly appreciated!
    Thanks!
    Randall

    Hi, in the Capacity fact table's LTS, remove the logical level set for the Calendar dimension and see.
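    For reference, the 31 x 50 result is a classic fan-out: the Calendar table has one row per day, so joining Capacity to it on Month_id repeats each monthly capacity row once per calendar day. A sketch of the physical SQL the month-level report ultimately needs (tables taken from the post, with the calendar collapsed to month grain before the join):
      SELECT s.site_name, m.month_id, SUM(cap.capacity) AS capacity
      FROM   capacity cap
      JOIN   mfgsite  s ON s.site_id = cap.site_id
      JOIN   (SELECT DISTINCT month_id FROM calendar) m
             ON m.month_id = cap.month_id
      GROUP  BY s.site_name, m.month_id;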

  • RAID Level Configuration Best Practices

    Hi Guys ,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on the RAID configuration for SQL data, log, tempdb, and backup files.
    Files and their RAID level:
    SQL data file -->
    SQL log files -->
    Tempdb data -->
    Tempdb log -->
    Backup files -->
    Any other configuration best practices are more than welcome,
    like memory settings at the OS level and LUN settings.
    Also, best practices to configure SQL Server in Hyper-V with clustering.
    Thank you
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi,
    If you can spend a few extra bucks, go for RAID 10 for all files. Also, as a best practice, keeping the database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive, depending on usage; it is always good to use a dedicated drive for tempdb.
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the counters below, taken from this link:
    SQLServer:Buffer Manager--Buffer Cache hit ratio (BCHR): if your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily come down to 60 or 70 or even less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It is a common misconception to take 300 as a baseline for PLE, but it is not. I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the maximum RAM one would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is no longer correct. He also gave a (tentative) formula for how to calculate it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size (the 'max server memory' sp_configure option in SQL Server) divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
    SQLServer:Buffer Manager--Checkpoint pages/sec: this counter is important for spotting memory pressure, because if the buffer cache is low then lots of new pages need to be brought into and flushed out of the buffer pool; under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to increase it, either by increasing buffer pool memory or by adding physical RAM and then adjusting the buffer pool size accordingly. Technically this value should be low; on a perfmon line graph it should stay near the baseline for a stable system.
    SQLServer:Buffer Manager--Free pages: this value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: if you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: this is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: this is the memory SQL Server has currently acquired.
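    As a convenience, all of the counters above can also be read from inside SQL Server through sys.dm_os_performance_counters (a sketch; on a named instance the object_name prefix will differ from 'SQLServer:'):
      SELECT [object_name], counter_name, cntr_value
      FROM   sys.dm_os_performance_counters
      WHERE  counter_name IN ('Buffer cache hit ratio',
                              'Page life expectancy',
                              'Checkpoint pages/sec',
                              'Free pages',
                              'Memory Grants Pending',
                              'Target Server Memory (KB)',
                              'Total Server Memory (KB)');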
    For other settings I would suggest you discuss with the vendor; storage questions, IMO, should be directed to the vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • How to determine the logical Keys of logical fact table in BMM layer?

    Hello guys
    I'd like to know how to decide which columns of the logical fact table should be used as the key in the BMM layer. How does OBIEE behave differently when different keys are selected?
    In my situation, I have 1 fact table that physically joins to 20 dimensions in the physical layer. Therefore, in the physical layer, this fact table has about 20 foreign keys but no primary key.
    In the logical layer, this fact table is logically joined to the same group of dimension tables. So in this case, how do I know which columns in the logical fact table I should use as keys?
    Please advise.
    Thanks

    thanks for the reply...
    The interesting thing I notice is that when I start using fragmentation with the fact table's LTS, if no primary key is defined on the logical fact table, I get errors:
    [nQSError: 10058] A general error has occurred. [nQSError: 14023] None of the fact sources for columns are compatible with the detail filter []. (HY000)
    When I create the key by choosing arbitrary columns from the fact table as the logical key columns, this error goes away and the correct SQL is generated according to the fragmentation.
    Because of this behavior, I'd really like to know how the logical key actually works in the BMM layer.

  • Details from fact table as dimension - solution review needed

    Hi experts,
    I found a convenient way to display details from a fact table as a dimension, and I need your opinion. Do you see any potential issues with this solution?
    Context
    - I use vertical federation in OBIEE 11.1.1.6
    - all aggregated measures are in OLAP
    - there is a need that user drill down to lowest level details (single fact row) -> this information is in relational database
    - we decided to use dedicated presentation hierarchy column and let users drill to lowest level detail (just 2 hierarchy levels "All" -> "Detail")
    - we always add this column at the right side of the report table; after all other dimension drills are in place (the other table columns are also presentation hierarchies), the user can drill to the contract ID (the detail)
    Solution (I tested it and it seems to work OK)
    - create "dummy" relational table with just 1 row ("All" level in hierarchy), linked to fact table (to "dummy" ID column in fact table)
    - create "dummy" dimension in Analytic Workspace
    - in BMM create logical table with 2 data sources: OLAP for hierarchy level "All" (linked to dummy OLAP dimension) and relational for hierarchy level "Detail" (using relational fact table)
    - so in the BMM we have an additional logical dimension with 2 levels: one "All" level that reads from OLAP and does not impact performance, and a second "Detail" level that is the only information obtained from the relational database, fetched when the user expands "All"
    - Why the "dummy" dimension? Because OBIEE won't let me create a logical dimension from fact table columns, but it will let me create a dummy dimension with just an "All" level and add a fact table column to it
    Any comments are welcome.
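    For reference, a minimal sketch of the one-row "dummy" table described above (all names are illustrative):
      CREATE TABLE dummy_all_dim (
        dummy_id  NUMBER PRIMARY KEY,
        all_label VARCHAR2(10)
      );
      INSERT INTO dummy_all_dim VALUES (1, 'All');
      -- the relational fact table carries a constant dummy_id, so every
      -- detail row rolls up to the single "All" member
      ALTER TABLE contract_fact ADD (dummy_id NUMBER DEFAULT 1);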

    hi,
    The exact figure will be hard to give, but you can arrive at a rough number in your case.
    You are consolidating the data from the tables, which means there is a relation between the tables. Arrive at a rough figure based on that relation and the activity you perform while consolidating the data.
    For example, let us say we want to combine data for sales orders and deliveries in a DSO.
    Say the sales order table has 1,000 records and the delivery table has 2,000 records, and both tables have a common link (the sales order). In the DSO you are combining the data, which means the data will be at the most granular level, i.e. the delivery data, so the maximum number of records the consolidated DSO can have is 2,000.
    regards,
    Arvind.

  • Best practice for placing module

    Hello
    I have an enterprise application project, with :
    1) EJB Module
    2) Web Module .
    I want to add the Hibernate JARs, classes, data, servlets, utilities, etc.
    Should I create a new web module for Hibernate, or should I add it to the existing EJB module, which holds my business logic?
    What is the best practice for that?
    Thanks !

    Thanks for the reply, Jiri.
    I believe that the answer is 'yes' to both of your questions above. I do have a question about the formatting of the meta tag itself, as my main question is how to extract the needed information from a particular CDF so that it is displayed in the <head> section of the page template.
    I have a Region Definition for the content section of my page with a 'Title' field, which a contributor can fill in, save, and create a Contributor Data File with their own specified title. Let's also say that my page template, which formats the entire page, looks like this (rough):
    <html>
      <head>
      <title>Web page</title>
      </head>
      <body>
      <!-- placeholder for header section -->
      <!-- placeholder for content section -->
      <!-- placeholder for footer section -->
      </body>
    </html>
    What would my <meta> tag look like for the following:
      <meta property="og:title" content="Title of the content section"/>
    Let's name our templates/definitions:
    Page Template: PT_META
    Region Template: RT_META
    Region Definition: RD_META
    Contributor Data File: CDF_META
    In other words, how do I extract the information included in the Contributor Data File (as well as its metadata) for use in the Page Template? Is there some sort of Idoc Script call that can be used?
    Thank you so much,
    Josh

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In document, i can only find "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread that states a primary key on the fact table is necessary:
    Primary Key on Fact Table.
    But if the business does not require uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect of having no primary key on the fact table? And are there any benefits from not creating one?

    Well, the natural combination of the dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction or some general event, and if you're loading data you want to be able to identify the records being processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or OWB insert/update functionality, they won't work, since there's no PK. Defining a PK for every table is good practice; not defining one is asking for a load of problems, from performance to functionality and data quality.
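    For illustration, a sketch of the two options against a hypothetical SALES_FACT (Oracle syntax; the composite-key and surrogate-key variants are alternatives, not meant to be applied together):
      -- option 1: composite primary key over the dimension foreign keys
      ALTER TABLE sales_fact
        ADD CONSTRAINT sales_fact_pk PRIMARY KEY (customer_key, product_key, time_key);
      -- option 2: surrogate key, with a unique constraint on the natural combination
      ALTER TABLE sales_fact ADD (sales_fact_id NUMBER);  -- populated from a sequence
      ALTER TABLE sales_fact
        ADD CONSTRAINT sales_fact_sk_pk PRIMARY KEY (sales_fact_id);
      ALTER TABLE sales_fact
        ADD CONSTRAINT sales_fact_nk_uk UNIQUE (customer_key, product_key, time_key);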

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables; in the BMM they are of course separated into a fact and a dimension source (two different logical tables) and it works fine. But I can see that there will be trouble when there are more fact tables and I have, for example, a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution is to have an alias of the fact/transaction table, so there are two "copies" of the transaction table in the physical layer (one for the fact and one for the dimension table). The only bad thing is that there will then always be two lookups on the same table when fetching data from both the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is more complex. I hope this was understandable (trying to keep the story short).
    Any best practices on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic
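    A sketch of the materialized-view route mentioned above (Oracle syntax; the table and column names are assumptions, and a fast-refresh MV would additionally need a materialized view log on the source table):
      CREATE MATERIALIZED VIEW transaction_dim_mv
        BUILD IMMEDIATE
        REFRESH ON DEMAND
      AS
      SELECT transaction_id, customer_id, product_id, transaction_type
      FROM   transactions;
      -- refresh after the source table has been updated (SQL*Plus syntax)
      EXEC DBMS_MVIEW.REFRESH('TRANSACTION_DIM_MV');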

  • Logical level for logical fact table sources

    It is clear that for fact aggregates we should use the Content tab of the Logical Table Source dialog to assign the correct logical level to each dimension.
    The question is: is it mandatory to assign the logical level for each dimension even for non-aggregate fact tables (where it would normally be set to the most detailed level of each dimension)? Are there any known issues if the logical levels in the Content tab are not set?
    The reason I'm asking is a strange bug I have (which I'm not going to discuss here) whose only workaround seems to be NOT setting the logical levels (on the Content tab) for the logical fact table sources.
    thank you !

    If the levels are not set, they are treated by default as the lowest (most detailed) level of each dimension, so it should not matter whether you set them or not. Generally we set them explicitly on fact sources when we are using aggregate tables.
    Your current issue might be a case-by-case thing; I would suggest checking the implicit fact column, any table mapped into the source to force a join, etc.
    Mark if helps
    Let me know how it helps
    Any updates on this?

  • Best practice to create a multi-tier (at least 3-level) table

    What is the best practice to create a multi-tier (minimum 3 levels) table? Could anyone provide a sample structure?
    Thanks.

    Can you be more specific as to what you are trying to do? What do you mean by a 3-level table?
