Why is the EA Star Schema used?

Hello,
I have a simple question: why is the Star Schema used? Getting an answer from the wiki is not what I wanted; some response from the gurus would be great to hear. Every time there is a metadata update in HFM during monthly change and maintenance, a new Star Schema has to be created, and only then can I run the Dimension Update job in Essbase. What is the real reason for that? If you say that it pushes the new dimension changes from HFM into Essbase, then for that we already have EAL taking care of updating the Essbase metadata in tandem with HFM.
Any comments.
Thanks !!

I was on an Extended Analytics project in 2008. There was no EAL. EA was it.
Regards,
Cameron Lackpour
P.S. Thankfully, I can't remember the functionality other than it was like being cooked, slowly, over a roaring fire.

Similar Messages

  • Error with creating star schema using HsvStarSchemaACM

    Hi,
    I am trying to create a star schema using the HsvStarSchemaACM API. But when calling the Create function of the API, I get the exception below: Exception from HRESULT: 0x80040251 (A general error occurred while trying to obtain a database Reader/Writer lock). Is this something to do with the parameters passed and their connections? (The connection works fine, and CreateStarSchema runs fine through Workspace.)
    Any solutions to this are most welcome.
    Thanks,
    Logu

    Can you provide details on the HsvStarSchemaACM API? How is it related to ODI?

  • Why is the Star Transformation using two indexes for the same dimension?

    Hi,
    Recently, I have made an investigation about the Star Transformation feature. I have found a strange test case, which plays an important role in my strategy for our overall DWH architecture. Here it is:
    The Strategy:
    I would like to have the classical Star Transformation approach (single column Bitmap Indexes for each dimension foreign key column in the fact table), together with additional Bitmap Join Indexes for some of the dimension attributes, which would benefit from the materialization of the join (bitmap merge operation will be skipped/optimized).
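    For context, the relevant index definitions presumably look something like this (a sketch reconstructed from the query and plan below; the exact DDL, partitioning and storage clauses are not shown here):
    -- plain single-column bitmap index on the fact foreign key (the classic star transformation input)
    create bitmap index fact_li__p_part_dim_key_bix
      on fact_line_item (part_dk)
      local;
    -- bitmap join index materializing the join to the dimension attribute
    -- (dim_part.dk is assumed to carry a primary/unique key constraint, which Oracle requires here)
    create bitmap index fact_li__p_part_mfgr_bjx
      on fact_line_item (dim_part.mfgr)
      from fact_line_item, dim_part
      where fact_line_item.part_dk = dim_part.dk
      local;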
    The query:
    select dp.brand, ds.region_name, dc.region_name
         , count(*), sum(f.extended_price)
      from fact_line_item  f
         , dim_part        dp
         , dim_supplier    ds
         , dim_customer    dc
     where dp.mfgr         = 10          -- dimension selectivity = 1/10 --> actual/fact selectivity = 6/10
       and f.part_dk       = dp.dk
       and ds.region_name  = 'REGION #1' -- dimension selectivity = 1/9
       and f.supplier_dk   = ds.dk
       and dc.region_name  = 'REGION #1' -- dimension selectivity = 1/11
       and f.customer_dk   = dc.dk
     group by dp.brand, ds.region_name, dc.region_name
    The actual plan:
    | Id  | Operation                              | Name                        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
    |   0 | SELECT STATEMENT                       |                             |      1 |        |  3247 (100)|      1 |00:01:42.05 |     264K|    220K|
    |   1 |  HASH GROUP BY                         |                             |      1 |      2 |  3247   (1)|      1 |00:01:42.05 |     264K|    220K|
    |*  2 |   HASH JOIN                            |                             |      1 |  33242 |  3037   (1)|    217K|00:01:29.67 |     264K|    220K|
    |*  3 |    TABLE ACCESS FULL                   | DIM_SUPPLIER                |      1 |   1112 |   102   (0)|   1112 |00:00:00.01 |     316 |      4 |
    |*  4 |    HASH JOIN                           |                             |      1 |  33245 |  2934   (1)|    217K|00:01:29.10 |     264K|    220K|
    |*  5 |     TABLE ACCESS FULL                  | DIM_CUSTOMER                |      1 |    910 |   102   (0)|    910 |00:00:00.08 |     316 |      8 |
    |*  6 |     HASH JOIN                          |                             |      1 |  33248 |  2831   (1)|    217K|00:01:28.57 |     264K|    220K|
    |*  7 |      TABLE ACCESS FULL                 | DIM_PART                    |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       6 |      0 |
    |   8 |      PARTITION RANGE ALL               |                             |      1 |  36211 |  2827   (1)|    217K|00:01:28.01 |     264K|    220K|
    |   9 |       TABLE ACCESS BY LOCAL INDEX ROWID| FACT_LINE_ITEM              |      6 |  36211 |  2827   (1)|    217K|00:01:33.85 |     264K|    220K|
    |  10 |        BITMAP CONVERSION TO ROWIDS     |                             |      6 |        |            |    217K|00:00:07.09 |   46980 |   3292 |
    |  11 |         BITMAP AND                     |                             |      6 |        |            |     69 |00:00:08.33 |   46980 |   3292 |
    |  12 |          BITMAP MERGE                  |                             |      6 |        |            |    193 |00:00:02.09 |    2408 |   1795 |
    |  13 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   4330 |00:00:04.66 |    2408 |   1795 |
    |  14 |            BUFFER SORT                 |                             |      6 |        |            |     60 |00:00:00.01 |       6 |      0 |
    |* 15 |             TABLE ACCESS FULL          | DIM_PART                    |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       6 |      0 |
    |* 16 |            BITMAP INDEX RANGE SCAN     | FACT_LI__P_PART_DIM_KEY_BIX |     60 |        |            |   4330 |00:00:02.11 |    2402 |   1795 |
    |* 17 |          BITMAP INDEX SINGLE VALUE     | FACT_LI__P_PART_MFGR_BJX    |      6 |        |            |   1747 |00:00:06.65 |     890 |    888 |
    |  18 |          BITMAP MERGE                  |                             |      6 |        |            |    169 |00:00:02.78 |   16695 |    237 |
    |  19 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   5460 |00:00:01.56 |   16695 |    237 |
    |  20 |            BUFFER SORT                 |                             |      6 |        |            |   5460 |00:00:00.02 |     316 |      0 |
    |* 21 |             TABLE ACCESS FULL          | DIM_CUSTOMER                |      1 |    910 |   102   (0)|    910 |00:00:00.01 |     316 |      0 |
    |* 22 |            BITMAP INDEX RANGE SCAN     | FACT_LI__P_CUST_DIM_KEY_BIX |   5460 |        |            |   5460 |00:00:02.07 |   16379 |    237 |
    |  23 |          BITMAP MERGE                  |                             |      6 |        |            |    170 |00:00:03.65 |   26987 |    372 |
    |  24 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   6672 |00:00:02.23 |   26987 |    372 |
    |  25 |            BUFFER SORT                 |                             |      6 |        |            |   6672 |00:00:00.01 |     316 |      0 |
    |* 26 |             TABLE ACCESS FULL          | DIM_SUPPLIER                |      1 |   1112 |   102   (0)|   1112 |00:00:00.01 |     316 |      0 |
    |* 27 |            BITMAP INDEX RANGE SCAN     | FACT_LI__S_SUPP_DIM_KEY_BIX |   6672 |        |            |   6672 |00:00:02.74 |   26671 |    372 |
    The Question:
    Why is the Star Transformation using both indexes FACT_LI__P_PART_DIM_KEY_BIX and FACT_LI__P_PART_MFGR_BJX for the same dimension criterion (dp.mfgr = 10)? The introduction of the additional Bitmap Join Index actually makes Oracle do the work twice!
    Anybody, any idea?

    Dom, here is the plan with the predicates:
    | Id  | Operation                              | Name                        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
    |   0 | SELECT STATEMENT                       |                             |      1 |        |  3638 (100)|      1 |00:06:41.17 |     445K|    236K|
    |   1 |  HASH GROUP BY                         |                             |      1 |      2 |  3638   (1)|      1 |00:06:41.17 |     445K|    236K|
    |*  2 |   HASH JOIN                            |                             |      1 |  33242 |  3429   (1)|    217K|00:08:18.02 |     445K|    236K|
    |*  3 |    TABLE ACCESS FULL                   | DIM_SUPPLIER                |      1 |   1112 |   102   (0)|   1112 |00:00:00.03 |     319 |    313 |
    |*  4 |    HASH JOIN                           |                             |      1 |  33245 |  3326   (1)|    217K|00:08:17.47 |     445K|    236K|
    |*  5 |     TABLE ACCESS FULL                  | DIM_CUSTOMER                |      1 |    910 |   102   (0)|    910 |00:00:00.01 |     319 |    313 |
    |*  6 |     HASH JOIN                          |                             |      1 |  33248 |  3223   (1)|    217K|00:08:16.63 |     445K|    236K|
    |*  7 |      TABLE ACCESS FULL                 | DIM_PART                    |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       6 |      0 |
    |   8 |      PARTITION RANGE ALL               |                             |      1 |  36211 |  3219   (1)|    217K|00:08:16.30 |     445K|    236K|
    |   9 |       TABLE ACCESS BY LOCAL INDEX ROWID| FACT_LINE_ITEM              |      6 |  36211 |  3219   (1)|    217K|00:08:40.89 |     445K|    236K|
    |  10 |        BITMAP CONVERSION TO ROWIDS     |                             |      6 |        |            |    217K|00:00:32.00 |   46919 |  19331 |
    |  11 |         BITMAP AND                     |                             |      6 |        |            |     69 |00:00:34.50 |   46919 |  19331 |
    |  12 |          BITMAP MERGE                  |                             |      6 |        |            |    193 |00:00:00.58 |    2353 |      1 |
    |  13 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   4330 |00:00:00.10 |    2353 |      1 |
    |  14 |            BUFFER SORT                 |                             |      6 |        |            |     60 |00:00:00.01 |       6 |      0 |
    |* 15 |             TABLE ACCESS FULL          | DIM_PART                    |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       6 |      0 |
    |* 16 |            BITMAP INDEX RANGE SCAN     | FACT_LI__P_PART_DIM_KEY_BIX |     60 |        |            |   4330 |00:00:00.07 |    2347 |      1 |
    |* 17 |          BITMAP INDEX SINGLE VALUE     | FACT_LI__P_PART_MFGR_BJX    |      6 |        |            |   1747 |00:01:23.64 |     882 |    565 |
    |  18 |          BITMAP MERGE                  |                             |      6 |        |            |    169 |00:00:09.14 |   16697 |   7628 |
    |  19 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   5460 |00:00:02.19 |   16697 |   7628 |
    |  20 |            BUFFER SORT                 |                             |      6 |        |            |   5460 |00:00:00.01 |     316 |      0 |
    |* 21 |             TABLE ACCESS FULL          | DIM_CUSTOMER                |      1 |    910 |   102   (0)|    910 |00:00:00.01 |     316 |      0 |
    |* 22 |            BITMAP INDEX RANGE SCAN     | FACT_LI__P_CUST_DIM_KEY_BIX |   5460 |        |            |   5460 |00:00:08.78 |   16381 |   7628 |
    |  23 |          BITMAP MERGE                  |                             |      6 |        |            |    170 |00:00:21.46 |   26987 |  11137 |
    |  24 |           BITMAP KEY ITERATION         |                             |      6 |        |            |   6672 |00:00:10.29 |   26987 |  11137 |
    |  25 |            BUFFER SORT                 |                             |      6 |        |            |   6672 |00:00:00.01 |     316 |      0 |
    |* 26 |             TABLE ACCESS FULL          | DIM_SUPPLIER                |      1 |   1112 |   102   (0)|   1112 |00:00:00.01 |     316 |      0 |
    |* 27 |            BITMAP INDEX RANGE SCAN     | FACT_LI__S_SUPP_DIM_KEY_BIX |   6672 |        |            |   6672 |00:00:20.94 |   26671 |  11137 |
    Predicate Information (identified by operation id):                                                                                                  
       2 - access("F"."SUPPLIER_DK"="DS"."DK")                                                                                                           
       3 - filter("DS"."REGION_NAME"='REGION #1')                                                                                                        
       4 - access("F"."CUSTOMER_DK"="DC"."DK")                                                                                                           
       5 - filter("DC"."REGION_NAME"='REGION #1')                                                                                                        
       6 - access("F"."PART_DK"="DP"."DK")                                                                                                               
       7 - filter("DP"."MFGR"=10)                                                                                                                        
      15 - filter("DP"."MFGR"=10)                                                                                                                        
      16 - access("F"."PART_DK"="DP"."DK")                                                                                                               
      17 - access("F"."SYS_NC00017$"=10)                                                                                                                 
      21 - filter("DC"."REGION_NAME"='REGION #1')                                                                                                        
      22 - access("F"."CUSTOMER_DK"="DC"."DK")                                                                                                           
      26 - filter("DS"."REGION_NAME"='REGION #1')                                                                                                        
      27 - access("F"."SUPPLIER_DK"="DS"."DK")                                                                                                           
    Note                                                                                                                                                 
       - star transformation used for this statement                                                                                                     

  • Star Schema Using OWB 10g

    Hello,
    I would like to know whether it is possible to build a star schema using Oracle OWB 10g?
    If so, how could I build one? Any help is highly appreciated.
    Regards

    Hi ,
    You can use Oracle SQL Developer Data Modeler .
    SQL Developer Data Modeler provides a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), Relational (database design), Data Type and Multi-dimensional modeling, full forward and reverse engineering and DDL code generation. The Data Modeler imports from and exports to a variety of sources and targets, provides a variety of formatting options and validates the models through a predefined set of design rules.
    Oracle SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent.
    http://www.oracle.com/technology/products/database/datamodeler/index.html
    Thanks,
    Sutirtha

  • How to join 2 star schemas using a dimensional table (like a Bridge Table)

    How to join 2 star schemas using a dimensional table (like a Bridge Table) in OBIEE?

    Complex joins and Content levels are all you need. Have you tried the forum search?

  • Understanding Star Schema Operations

    Hi All,
    I worked on Oracle Answers and have knowledge of Data Warehousing concepts. I have enough Oracle documentation on OLAP, OBIEE and Data Warehousing.
    But I'm not able to understand the star schema concept inside out. I'm lost in all the documents and not getting a simple answer to the following topics:
    1. Is it necessary to create dimensions using CREATE DIMENSION, or can it be a simple relational table structure with CREATE TABLE statements?
    2. What are the steps to create a very basic/simple star schema using plain SQL statements, not using any tool?
    3. Where can I get a very basic, step-by-step tutorial to create a very simple star schema with SQL -> load test data with SQL commands -> then what queries can we use to get BI reports, or use OBIEE to create a simple RPD?
    Please give me some idea on this, as I have to create a small OBIEE repository where I have to build the warehouse, the ETL process, everything.
    Please advise.
    Thanks in Advance.
    Sudipta

    No, my question was about creating the star schema. If I have to create a very simple star schema, can I create it with the following relational tables?
    create table dim_products (
      product_id number primary key,
      product_name varchar2(32),
      brand varchar2(32),
      category_name varchar2(32),
      sub_category_name varchar2(32)
    );
    create table dim_time (
      time_id number primary key,
      selling_date date,
      selling_day number(2),
      selling_month varchar2(12),
      selling_quarter varchar2(6),
      year number(4)
    );
    create table dim_region (
      region_id number primary key,
      store_name varchar2(32),
      city_name varchar2(32),
      state_name varchar2(32),
      country_name varchar2(32),
      region_name varchar2(32)
    );
    create table fact_sales (
      sale_id number primary key,
      region_id number,
      time_id number,
      product_id number,
      unit_sold number,
      constraint fact_sales1 foreign key (region_id) references dim_region (region_id),
      constraint fact_sales2 foreign key (time_id) references dim_time (time_id),
      constraint fact_sales3 foreign key (product_id) references dim_products (product_id)
    );
    Or do I have to create DIMENSION objects explicitly, in order to build the hierarchy levels?
    Regards
    Sudipta
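    For reference: the CREATE TABLE statements above are all a star schema needs; Oracle DIMENSION objects are optional and only matter if you want the optimizer to exploit the hierarchy (e.g. for query rewrite against materialized views). A rough sketch on top of dim_products, with the hierarchy itself assumed for illustration:
    create dimension products_dim
      level product      is (dim_products.product_id)
      level sub_category is (dim_products.sub_category_name)
      level category     is (dim_products.category_name)
      hierarchy prod_rollup (
        product child of sub_category child of category
      )
      attribute product determines (dim_products.product_name, dim_products.brand);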

  • Why do we need SSIS and star schema of Data Warehouse?

    If SSAS in MOLAP mode stores the data, what is the application of SSIS, and why do we need a Data Warehouse and the ETL process of SSIS?
    I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from the OLTP database to a Data Warehouse database that contains fact and dimension tables.
    After that I want to create cubes using SSAS from the Data Warehouse data.
    I know that MOLAP stores data. Do I need a Data Warehouse with fact and dimension tables at all?
    Isn't it better to avoid creating a Data Warehouse and create cubes directly from the OLTP database?

    Another thing to note is that data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in the OLTP system because the storage required is minimal, but presenting them as-is would not make any sense to the user, as they would not know what each bit value represents. In such cases we apply some transformations and convert the data into useful information for users to understand. This is also done in the warehouse, so that the information in the warehouse can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would put load on the report server. In comparison, bringing the data onto a common layer (the warehouse) and pre-building aggregates is beneficial for report performance.
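    As a toy illustration of the flag point (table and column names here are made up, not from this thread):
    -- the OLTP system stores a terse bit flag; the ETL into the warehouse translates it
    -- into a value users can read directly in reports
    select order_id,
           case priority_flag
                when 1 then 'Express'
                else 'Standard'
           end as shipping_method
      from orders;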
    I think (not sure) we join tables in SSAS queries and calculate aggregations in it.
    I think SSAS stores these values and joined tables, so we do not need to evaluate those values again, and this behaviour is like a Data Warehouse.
    Isn't it?
    So if I do not need historical data, can I avoid creating a Data Warehouse?
    On the backend, SSAS uses queries only to extract the data.
    By the way, I was not explaining SSAS. I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build cubes (OLAP structures) on top of the data warehouse. A star schema is easier for defining relationships and building aggregations inside SSAS, as it is simple and requires minimal lookups to be performed. Also, data is held at the lowest granularity level, which can easily be aggregated to the required levels inside the OLAP cubes. Cube processing is very resource intensive, and using the OLTP system would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds to the complexity. Pre-creating a layer (the data warehouse) holding data in the required format makes cube processing easier and simpler, as it just has to cross-join tables and aggregate data based on the relationships defined and the level needed inside the cube.
    Visakh (http://visakhm.blogspot.com/)

  • Newbie question: why is a star schema fast and efficient?

    Hi all,
    just a stupid question, but I haven't been able to find a proper answer so far...
    Why is a star schema a good design for Data Marts and DWHs? What is the underlying reason that makes it attractive performance-wise?
    Why wouldn't just one big table with all the data in it, and with the proper indexes, be enough?
    Thanks all!!
    Regards
    Vincent

    There are several reasons to use star schemas, particularly in Oracle.
    A flat table like you asked about looks attractive but has several flaws, i.e. massive data redundancy, no logical groupings, no aggregation (or additional redundant data aggregated), etc.
    A star schema is semi-denormalized to allow easy reporting. A truly normalized system is difficult to report against because you may have to join many tables to return just 2 pieces of related data. A star schema enables you to join only a single dimension table to the fact table to return the same 2 pieces of data. If you're returning many pieces of data, a star schema keeps access very simple. Most third-party reporting tools recognize star schemas and will build your where clauses behind the scenes, making them a lot more useful to end users.
    Oracle is adding optimizations to the CBO for star schemas. Using dimensions, materialized views, partitions, IOTs, etc. greatly enhances performance for queries against massive amounts of data. It does make loading the data more difficult, but the trade-off at query time is worth it.
    A flat table structure, besides having a lot of redundant data, is hard to optimize. When you have terabytes of data, a flat table structure gets scary even with indexes.
    This is just my opinion, hope that helps.
    Lewis
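    To make the "join only a single dimension table" point concrete, a minimal sketch (table and column names are illustrative, not from the post):
    -- one narrow join per piece of descriptive data needed, instead of a chain of joins
    select d.product_name,
           sum(f.sales_amount) as total_sales
      from sales_fact f
      join product_dim d
        on d.product_key = f.product_key
     group by d.product_name;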

  • Using two facts of two different star schemas and conformed dimensions

    Hi,
    I've been working as a developer and database designer for years and I'm new to Business Objects. Some people say you cannot use two facts from two different star schemas in the same query because of conformed dimensions and loop problems in BO.
    For example, I have a CUSTOMER_SALE_FACT table containing customer_id and date_id as FKs, and some other business metrics about sales. And there is another fact table, CUSTOMER_CAMPAIGN_FACT, which also contains customer_id and date_id as FKs, and some other business metrics about customer campaigns. So I have two stars like below:
    DIM_TIME -- SALE_FACT -- DIM_CUSTOMER
    DIM_TIME -- CAMPAIGN_FACT -- DIM_CUSTOMER
    Business metrics are loaded into the fact tables, and facts can be used together along conformed dimensions. This is one of the fundamentals of dimensional modeling. Is it really impossible to use SALE_FACT and CAMPAIGN_FACT together? If the answer is no, what is the solution?
    Saying "you cannot do that because of loops" is very interesting.
    Thank you..

    When you join two facts together with a common dimension you have created what is called a "chasm trap" which leads to invalid results because of the way SQL is processed. The query rows are first retrieved and then aggregated. Since sales fact and campaign fact have no direct relationship, the rows coming from either side can end up as a product join.
    Suppose a customer has 3 sales fact rows and 2 campaign fact rows. The result set will have six rows before any aggregation is performed. That would mean that sales measures are doubled and campaign measures are tripled.
    You can report on them together, using multiple SQL passes, but you can't query them together. Does that distinction make sense?
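    A sketch of that fan-out, using the table names from the post (the measure column names are assumed):
    -- with 3 sale rows and 2 campaign rows for one customer, the join below produces
    -- 3 x 2 = 6 rows before the GROUP BY, so sale measures come out doubled and
    -- campaign measures come out tripled
    select c.customer_id,
           sum(s.sale_amount)   as total_sales,     -- inflated
           sum(p.campaign_cost) as total_campaigns  -- inflated
      from dim_customer c
      join customer_sale_fact     s on s.customer_id = c.customer_id
      join customer_campaign_fact p on p.customer_id = c.customer_id
     group by c.customer_id;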

  • What is the use of SIDs in the extended star schema rather than linking master data

    Hi BW gurus,
    What is the use of SIDs in the extended star schema, rather than linking master data directly with dimension tables?
    Thanks in advance,
    I will assign points,
    srinivas

    Hi,
    The SIDs are used instead of the data in order to avoid redundancy
    and reduce the data storage size.
    The data is present in the SID table, and it is linked using the corresponding SID in the dimension.
    Regards
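    A much simplified illustration of the idea (this is not the actual BW-generated DDL, just a sketch): the SID table maps each characteristic value to a surrogate integer, and the InfoCube's dimension table stores only that SID, so the master data itself stays outside the cube and can be shared.
    create table s_customer (            -- SID table
      customer     varchar(10) primary key,
      customer_sid integer not null
    );
    create table d_cube1_customer (      -- dimension table of one InfoCube
      dimid        integer primary key,
      customer_sid integer not null      -- joins to s_customer, not to the master data tables directly
    );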

  • Do we use direct star schema concept anywhere in sap bw

    I know about the extended star schema and where SAP uses this concept.
    My question is: do we use the normal star schema concept anywhere in SAP BW, apart from the extended star schema?
    If yes, please explain briefly.
    Thanks in advance.
    With regards,
    yash.b

    Hi,
    If I'm not mistaken, an Analytic View in HANA is more like the normal star schema; it is definitely not extended, and it can be consumed by BW for OLAP processing.
    Regards,
    Michael Devine

  • Why was the extended star schema scrapped?

    Hi,
    BW on HANA takes out the dimension tables and gives us back the original star schema structure.
    Now my questions are:
    - How did the original extended star schema help the old BW on Oracle or DB2 systems? I mean, what was the need for such a design?
    - If it offered some kind of optimization, then why was it scrapped in BW on HANA? Couldn't it have added to the performance that the HANA DB brought?
    Regards,
    Sam

    Can anyone please explain.. ? Krishna Tangudu Thomas Jung Shyam Uthaman .. Any thoughts?

  • Master data sharing using ext. star schema

    Hi,
    I have understood the concept of the extended star schema.
    My understanding is that one of the advantages of the extended star schema is that the master data can be shared: since the master data is stored separately, other star schemas can also share it, provided it is of the same relevance (with the same InfoObjects used in that star schema).
    Please confirm whether my understanding is right, and if so,
    any idea or suggestion on how to demonstrate this?
    Points will be given for good answers.
    thanks.
    bwlearner

    Hey,
    You could map this to programming:
    MASTER DATA tables are like a GLOBAL declaration, and InfoCubes (fact tables) are like a LOCAL declaration. Any number of InfoCubes can access one particular master data table.
    Assume sales data coming from 5 regions. Here 0CUSTOMER and its attributes will update the same master data table. The data fields will be stored in the InfoCube. So all 5 InfoCubes access the same master data tables using the dimension and SID tables.
    Clear?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Trouble with star schema

    Hi All,
    I have a star schema in place with 8 dimension and 1 fact table.
    But due to some specific requirements, I need to denormalize the schema. I want to copy all fields from all dimension tables into the fact table.
    I know this sounds bad, but I have to do it; please don't ask why.
    Now, the same can be done using a materialized view, but the problem with the MV is that there are fields present in 2 or more tables with the same column name, due to which I can't create the MV.
    Is there some other way to achieve this goal?
    BRK.

    "I have too many records which is affecting the performance of the database. ... But now it's raising performance problems."
    Your application had a performance problem. Somebody guessed that a star schema would solve the problem. But it hasn't. So now you intend to implement some spavined variant on the star schema because somebody has suggested that complete de-normalisation might solve the problem.
    Is this a demonstrable fact (you have benchmarks and explain plans to justify it), or just a guess?
    I know how difficult this sort of thing can be, because I've been working through a similar scenario for a while now. The important thing is to get some decent metrics on your application. Use Statspack. Use the wait interface. Find out where your application is spending its time and figure out what you need to do to reduce the waits. Benchmark some alternatives. This may result in you having to re-write your code, but at least you'll be doing so in the knowledge that the change addresses the real problem.
    Cheers, APC
    Blog : http://radiofreetooting.blogspot.com/

  • Star schema design, metrics dimension or not.

    Hello Guys,
    I just heard from one of my colleagues that it's wise to
    have a "KPI" or "metrics" dimension in my DWH star schema (later used in OBIEE).
    Now, we have quite a lot of data, 100,000 rows per day (bottom level, non-aggregated; the aggregations are obviously far smaller than that, let's say 200 rows per day), and
    we have built pre-aggregated data marts for each of the 5 very static reports (OBIEE Publisher).
    The table structure is very simple
    e.g.
    Date,County,NumberofCars,RevenuePerCar, ExpensesPerCar, BreakEvenPerCar, CarType
    One could exclude the metrics "NumberofCars","RevenuePerCar", "ExpensesPerCar", "BreakEvenPerCar"
    and put them into a metrics dimension.
    MetricID Metric
    1 NumberofCars
    2 RevenuePerCar
    3 ExpensesPerCar
    4 BreakEvenPerCar
    and hence the fact table design would be simpler.
    Date,County,MetricID,Metric, CarType
    Disadvantages: a join is required;
    we would have to redesign our tables;
    the tables are no longer aggregated for specific metric types;
    if we notice performance is bad, we would need to go back to the old design.
    Advantages: should new metrics appear, we don't have to change the design of the tables;
    it's probably best practice.
    Note: date, country and car type are already dimensions; we are just missing one to differentiate the metrics/KPIs.
    So I struggle a bit: what should I do? Redesign, or stick to the way I have done it, keeping
    performance optimization in mind?
    Thanks
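    To spell out the two alternatives as DDL (a sketch; column names and types are assumed from the description above):
    -- current "wide" design: one column per metric at the chosen grain
    create table fact_cars_wide (
      sales_date         date,
      country            varchar2(32),
      car_type           varchar2(32),
      number_of_cars     number,
      revenue_per_car    number,
      expenses_per_car   number,
      break_even_per_car number
    );
    -- alternative design with a metrics dimension: one row per metric value
    create table dim_metric (
      metric_id   number primary key,
      metric_name varchar2(32)
    );
    create table fact_cars_tall (
      sales_date   date,
      country      varchar2(32),
      car_type     varchar2(32),
      metric_id    number references dim_metric (metric_id),
      metric_value number
    );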

    "Usually the date is stored in sales table or product table.
    ut here why they created separate Dimension table for date(Dim_date)? "
    You should provide the link.
    A good place to start with the basic concepts is :
    http://www.ralphkimball.com/
    Pick up some of his books and start going through them.
    My recommendation would be
    The Data Warehouse Toolkit, 2nd Edition: The Complete Guide to Dimensional Modeling
    John Wiley & Sons, 2002 (436 pages
    Good Luck.,
