Star schema or Snowflake schema

Hi Gurus,
I have the following dimension and fact tables. Please let me know whether I should go ahead with a star schema or a snowflake schema while building the cube.
1. Country table
2. Workgroup table --> each country has N workgroups
3. User table --> each workgroup has N users
4. Time table
5. Fact table

This is a similar thread that discusses the design approach of star versus normalized tables:
https://social.technet.microsoft.com/Forums/sqlserver/en-US/7bf4ca30-a1bc-415d-97e6-ce0ac3137b53/normalized-3nf-vs-denormalizedstar-schema-data-warehouse-?forum=sqldatawarehousing
In my experience, the majority of cases I've come across also use a star schema for data marts, where tables are more denormalized rather than built on the principles of normalization. And I believe that, as long as it's through SSAS cubes that you expose the OLAP model,
it is much easier to implement relationships using a denormalized approach.
What you may do is keep a normalized data warehouse if you want, and then build the data marts over it using denormalized tables (star schema) for the cube.
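For the tables in the original question, the denormalized (star) user dimension over the normalized country/workgroup/user tables could be built roughly like this. A sketch only; the table and column names (users, workgroups, countries, and their keys) are hypothetical:

    -- Flatten the Country -> Workgroup -> User hierarchy into one dimension
    CREATE TABLE dim_user AS
    SELECT u.user_id,
           u.user_name,
           w.workgroup_id,
           w.workgroup_name,
           c.country_id,
           c.country_name
    FROM   users u
    JOIN   workgroups w ON w.workgroup_id = u.workgroup_id
    JOIN   countries  c ON c.country_id   = w.country_id;

The fact table then joins to dim_user alone, and the workgroup and country attributes come along for free.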
Visakh

Similar Messages

  • Star schema versus snowflake schema

    I have a question regarding dimensional data modeling: when would a star schema model be useful, and when would a snowflake schema model be useful?
    In a star schema, we have only the fact table, and it is connected directly to the dimensions. In a snowflake schema, we normalize a dimension into one more level. Say we have the dimension Product: it can be normalized into another table called Supplier. Take another example, the Customer dimension: it can be normalized into Country…
    The advantage of a star schema is that queries are easy to write, since there are fewer tables and you do not need to join as many of them; this can sometimes improve performance.
    With a snowflake schema, queries are a little more complex to write, since we have to join multiple tables, but performance might also sometimes improve because the joined tables are smaller…
    My question is: under what circumstances should we use a star schema, and when a snowflake schema? I am not able to pin down the word "sometimes".
    Any help is highly appreciated…

    Hi,
    There is a trade-off between simplicity and the complexity of the analytics you need.
    A star schema is good if your functional requirements are really simple, for example when the dimension is not SCD Type 2 (slowly changing dimension) and you don't need to do "AS IS" vs. "AS WAS" reporting.
    In modern analytics, dimensions in any domain tend to be SCD Type 2, since the business keeps evolving. In a star schema structure this causes an explosion of data if there are frequent changes at the higher levels of the dimensional hierarchy, which in turn hurts performance.
    As far as my experience goes, it is better to have snowflaked dimensions at the data model level; while managing the metadata (in a BI reporting tool) you can then consolidate the snowflaked dimensions into star schema structures. That makes ad hoc analytics much simpler for the business users.
    Many performance measures can then be taken to improve the end-user experience.
    In short, the trend in BI analytics demands a snowflaked structure rather than a simple star schema structure.
    Hope this helps.
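    To make the SCD Type 2 point concrete, here is a minimal sketch (all names and columns hypothetical) of a snowflaked customer dimension. The Type 2 history lives only in dim_customer; the country attributes are stored once and referenced by key, so a customer change does not replicate them, whereas a pure star would repeat country_name on every historical customer row:
      CREATE TABLE dim_country (
        country_key   NUMBER PRIMARY KEY,
        country_name  VARCHAR2(100)
      );
      CREATE TABLE dim_customer (
        customer_key  NUMBER PRIMARY KEY,   -- surrogate key, new row per change
        customer_id   NUMBER,               -- business key
        customer_name VARCHAR2(100),
        country_key   NUMBER REFERENCES dim_country (country_key),
        valid_from    DATE,
        valid_to      DATE                  -- SCD Type 2 validity interval
      );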

  • Creation of a star schema from a snowflake schema in the BMM layer

    Hi,
    This is my situation: I have a fact table which joins to Dim1, and Dim1 is in turn joined to Dim2 and Dim3:
    Fact
      |
    Dim1
      |-- Dim2
      |-- Dim3
    Now, in the BMM layer, how can I turn this snowflake schema into a star schema? I heard about making changes in the Logical Table Source. And what will the presentation layer look like?
    Any help is appreciated, guys.

    In the physical layer, you have joins between Dim1 and Dim2, between Dim1 and Dim3, and between Fact and Dim1. In the BMM, for Dim1, add Dim2 and Dim3 in the sources. You may add both these dimensions in one single LTS if the data is not duplicated across the tables; in case the data is duplicated, add them as separate LTSs in the sources for Dim1. Refer to this post for reference -- Logical Table source source query
    In the BMM you need a join between Dim1 and Fact. Basically, your Dim1 is now sourced from three different tables, which are your dimensions; this transforms your snowflake into a star. In your presentation layer you will have all the columns from your dimensions and facts, except for the duplicates (let's say you have column A in both Dim1 and Dim2: you should map this column in the column mapping tab so as to enable the BI Server to pick the most economical source).
    Hope this clears your question.
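    The LTS change is metadata-only, but its effect is roughly equivalent to the denormalizing view below. A sketch only; the key and attribute columns are hypothetical:
      CREATE OR REPLACE VIEW dim1_star AS
      SELECT d1.dim1_key,
             d1.dim1_attr,
             d2.dim2_attr,
             d3.dim3_attr
      FROM   dim1 d1
      JOIN   dim2 d2 ON d2.dim2_key = d1.dim2_key
      JOIN   dim3 d3 ON d3.dim3_key = d1.dim3_key;
    The BI Server can then treat Dim1 as a single star dimension against the fact table.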

  • Efficiency of data warehouse SQL and star/snowflake schema

    Hi,
    We are using 11.2.0.3 and need to improve the query performance of reports on a data warehouse star/snowflake schema.
    In addition to indexing, partitioning, having star_transformation enabled, etc., I am considering the impact of the following on query performance.
    The central fact table (over a billion rows) joins to a customer dimension (a few hundred thousand rows), which in turn is joined to the latest version of the dimension (which has circa 30,000 rows).
    The customer dimension (the table with a few hundred thousand rows) must always be queried, as data is stored against the version of the customer applicable at the time; we then join to latest_customer because users want to see
    the latest version of the customer attributes, to stop data being fragmented across several rows in the report.
    I am considering whether it would be more efficient to create a dimension which is the equivalent of customer but also stores the latest version of the customer attributes on the same row. The customer dimension would then have far more columns, but queries could avoid the additional lookup of the 30k-row table.
    Thoughts are - would this be a material benefit?
    At the moment users query latest_customer to, say, get all customers belonging to a certain multiple chain.
    With the change above, they would instead query the customer dimension with a few hundred thousand rows directly.
    Thoughts?
    Thanks

    We are using 11.2.0.3 and need to improve the query performance of reports on a data warehouse star/snowflake schema.
    That is NOT a realistic or even meaningful goal.
    And until you identify and document an actual PROBLEM or specific goal you should not even be considering possible solutions.
    Anything you do to improve one report might degrade the performance of several other reports.
    You need to start over and gather information about WHAT Oracle is doing for the reports now and HOW that work is being done, and capture metrics that validate how the reports are currently performing.
    Your first step should be to document the performance you are getting now for each report.
    The second step would be to identify which of those reports is a possible target for tuning.
    The third step is to prioritize the reports: which is most important to tune, which is next, etc.
    Then you need to generate the execution plans for those reports to identify EXACTLY how Oracle is executing the queries now.
    At this point you should have enough information to know what your possible options are.
    So then you create a prioritized list of options. The top of the list should be additions to what you already have.
    1. New indexes - regular or bitmapped (if appropriate)
    2. Dropping indexes that aren't being used.
    3. Report-ready summary tables or materialized views.
    IMHO modifying your basic architecture should be your LAST resort and undertaken only if you can't solve your (unstated) problem using solutions that have less impact and risk.
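    For the execution-plan step, one standard way to capture a plan is EXPLAIN PLAN plus DBMS_XPLAN; a sketch with a hypothetical report query:
      -- Capture and display the plan Oracle would use for one report query
      EXPLAIN PLAN FOR
      SELECT c.customer_name, SUM(f.amount)
      FROM   fact_sales f
      JOIN   dim_customer c ON c.customer_key = f.customer_key
      GROUP BY c.customer_name;

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);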

  • Question on integrating star and snowflake schemas into one data warehouse

    Dear Reader,
    I am facing the following problem:
    I have two data warehouses; one uses a star schema, the other a snowflake schema. I would like to integrate both of them into one data warehouse. What strategy should these two data warehouses adopt in order to be integrated into one?
    Should I scrap both data warehouses and build a new one instead, or scrap one of them and use the other?
    What factors should be considered in order for me to more easily resolve the differences between the two data warehouses?
    Please advise. Thank you very much.

    Hi Mallis,
    This is a very broad question and the answer depends on many factors. Please go through the following articles to get an understanding of what the differences are and when to use which.
    When to use a star schema and when to use a snowflake schema -
    http://www.information-management.com/news/10000243-1.html
    Star vs Snowflake Schemas – what’s your belief? –
    http://www.networkworld.com/community/blog/star-vs-snowflake-schemas-%E2%80%93-what%E2%80%99s-your-belie
    Hope this helps!

  • Multiple star or snowflake schemas for a universe

    Hi,
    I would like to know the following:
    1. Can we use more than one star or snowflake schema for a universe? And how?
    2. Is using multiple schemas for one universe good practice or not?
    Regards,
    Manjunath

    Manjunath,
    This is exactly where BusinessObjects excels.
    When dealing with multiple fact tables, you use contexts.
    Contexts are very simple to understand and you must take your time to do so if you are going to successfully develop universes based on more than one fact table. No matter what universe you build, the rule for contexts is always the same; there are no different circumstances based on, say, industry.
    Each context starts with a base table, typically a fact table, where all of its joins are at the many end of the relationship. The joins are then followed out through the joined tables, up any further joins where those tables are in turn at the many end, as in a snowflake schema.
    For example, consider the very basic schema below:
    D1 -< D2 -< F1 >- D3 -< F2 >- D4
    There are two tables that only have many joins attached to them - F1 and F2.
    Starting with F1, I can move to D2. I can also move from there to D1. In the other direction, I can move from F1 to D3. However, I cannot move from D3 to F2 because the join cardinalities are in the wrong direction. So, I've found all the joins that belong in the first context. They are D1-D2, D2-F1 and F1-D3. By the same process, I will get to the second context containing joins D3-F2 and F2-D4.
    It doesn't work well with many-to-many joins, but you really shouldn't be facing these in a well-designed multi-star.
    As for one-to-one joins, set the cardinality as one-to-many in the direction that you know the relationship should be, so that the ownership works out correctly.
    The process that I've described above is essentially how the Detect Contexts algorithm works.
    The only remaining thing for you to read up on is the SQL generation parameters, but essentially you need to be able to select multiple contexts and generate multiple SQL statements, one per context. Otherwise, what's the point in defining them?!
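    For the schema above, the two contexts would each generate their own statement, along these lines (a sketch; the keys and measure columns are hypothetical), and the tool then stitches the two result sets together on the shared D3 values:
      -- Context 1 (F1): joins D1-D2, D2-F1, F1-D3
      SELECT d3.d3_attr, SUM(f1.measure1)
      FROM   f1 JOIN d3 ON d3.d3_key = f1.d3_key
      GROUP BY d3.d3_attr;

      -- Context 2 (F2): joins D3-F2, F2-D4
      SELECT d3.d3_attr, SUM(f2.measure2)
      FROM   f2 JOIN d3 ON d3.d3_key = f2.d3_key
      GROUP BY d3.d3_attr;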
    Hope that clears it up for you.
    Regards,
    Mark

  • How do I move a table from one schema to another schema on Oracle XE?

    How do I move a table from one schema to another schema on Oracle XE?

    Hi,
    I tried to use the INSERT/SELECT statement that you had given, but it did not work.
    The error is ORA-00913: too many values.
    What I finally did was go into the SYSTEM schema where the table was, generate the DDL through the utilities, and then import it into the schema that I am currently working on. That solved the problem!
    However, I am still curious to know why the INSERT/SELECT statement did not work. Do you know any site/tutorial which gives a real-world example?
    Thank you
    Skye
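    For the record, ORA-00913 on an INSERT ... SELECT usually means the two column lists don't match, for example when the source table has more columns than the target. Listing the columns explicitly on both sides avoids it; a sketch with hypothetical columns:
      INSERT INTO my_table (id, name, created_at)
      SELECT id, name, created_at
      FROM   other_schema.my_table;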

  • How do I move a table from one schema to another schema?

    How do I move a table from one schema to another schema?

    Grant access to the table from the source schema to the destination schema:
      GRANT SELECT ON <TABLE_NAME> TO <DESTINATION_SCHEMA>;
    A simple way would then be to use CREATE TABLE with the SELECT syntax (in the destination schema):
      CREATE TABLE <TABLE_NAME> AS SELECT * FROM <SOURCE_SCHEMA>.<TABLE_NAME>;
    However, you would be in trouble when the table has indexes, constraints, and triggers.
    So you are better off grabbing the DDL statement of the table (and any additional components) and then creating the table in the destination schema. You can use SQL Developer, Toad, or APEX's Object Browser for this.
    After the table is created, insert the records using SELECT:
      INSERT INTO <TABLE_NAME> SELECT * FROM <SOURCE_SCHEMA>.<TABLE_NAME>;
    This question is discussed in great detail in this AskTom thread.
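    If you prefer scripting over a GUI, DBMS_METADATA can extract the DDL. A sketch, assuming a hypothetical table EMP owned by SCOTT (in SQL*Plus you may also need SET LONG to see the full output):
      -- DDL for the table itself
      SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;
      -- DDL for dependent objects such as indexes
      SELECT DBMS_METADATA.GET_DEPENDENT_DDL('INDEX', 'EMP', 'SCOTT') FROM dual;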

  • How can I access all the objects of one schema from another schema

    Dear All,
    How can I access all the objects (tables, views, triggers, procedures, functions, packages, etc.) of one schema, and make modifications to them, from another schema (without using synonyms)?
    Thanks in advance,
    Mahi

    First of all, synonyms only help you reference the object easily; they have no bearing on object privileges.
    As long as you have the proper privileges on the target object, you can access it with or without synonyms.
    Assuming you have the proper privileges on the objects, you can use the following command to assume the schema owner:
    ALTER SESSION SET CURRENT_SCHEMA = schema_owner;
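    A minimal sketch of the two halves, assuming a hypothetical owner schema HR and a user APP_USER that receives the privileges:
      -- As the owner (or a DBA): grant the needed object privileges
      GRANT SELECT, INSERT, UPDATE, DELETE ON hr.employees TO app_user;

      -- As APP_USER: resolve unqualified names against HR from now on
      ALTER SESSION SET CURRENT_SCHEMA = hr;
      SELECT COUNT(*) FROM employees;  -- actually reads hr.employees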

  • Grant access to all the views created in user schema to another schema

    How do I grant access to all the views created in my own HAGGIS schema to the COMQDHB schema on the HAGGIS database?
    Oracle Grant Privileges
    ===============
    Object privileges assign the right to perform a particular operation on a specific object.
    I read that we can use: select 'grant select on ' || view_name || ' HAGGIS' from user_views where owner = 'COMQDHB'
    Is this right?
    Oracle System Privileges
    ===============
    System privileges should be used only in cases where security isn't important, because a single grant statement could remove all security from the table.
    Role-based security
    ============
    Role security allows you to gather related grants into a collection; since a role is a predefined collection of privileges that are grouped together, privileges are easier to assign to users.
    [http://www.dba-oracle.com/art_builder_grant_sec.htm]
    Can we grant select and update on all the views at a time to the other schema?
    Are there any other ways to secure the data other than creating users and assigning roles?
    Thank you

    I think what was suggested was that you use SQL to generate the grants on each and every view; that is, you use SQL to generate SQL, where the SQL being generated is "grant select on view_name to role".
    If your users connect to Oracle directly, you have to create usernames for them, though if the users only connect via an application, the application might run as just one user and access to the application is controlled via application security. The control on the application can be via directory services such as OID or MS Active Directory. User access to Oracle can also be controlled via OID.
    To connect to Oracle you can use OS authentication (not recommended), usernames with passwords, or the Advanced Security Option, which supports single sign-on products like Kerberos or Oracle Internet Directory, etc.
    Example using SQL to generate SQL
    How do I find out which users have the rights, or privileges, to access a given object ?
    http://www.jlcomp.demon.co.uk/faq/privileges.html
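    A minimal sketch of the generate-grants idea for this case, assuming the views are owned by HAGGIS and should be granted to COMQDHB (run the generated statements afterwards):
      SELECT 'GRANT SELECT ON haggis.' || view_name || ' TO comqdhb;'
      FROM   all_views
      WHERE  owner = 'HAGGIS';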
    HTH -- Mark D Powell --

  • Context, physical schema, and logical schema

    Hi,
    How are the context, physical schema, logical schema, and agent interrelated?
    Please explain
    Thanks
    Jack

    Hi Jack,
    Context:
    A context is a set of resources allowing the operation or simulation of one or more data processing applications. Contexts allow the same jobs (reverse-engineering, data quality control, packages, etc.) to be executed against different databases and/or schemas.
    It is used to run the same object (process) against different databases.
    Physical schema:
    The physical schema is a decomposition of the data server, allowing the datastores (tables, files, etc.) to be classified. Objects stored in data servers with this mode of classification can be accessed by specifying the name of the schema attached to the object name.
    For example, Oracle classifies its tables by "schema" (or user): each table is linked to a schema, thus SCOTT.EMP represents the table EMP in the schema SCOTT.
    Logical schema:
    A logical schema is an alias that allows a unique name to be given to all the physical schemas containing the same datastore structures.
    The aim of the logical schema is to ensure the portability of procedures and models across the different physical schemas. In this way, all developments in ODI Designer are carried out exclusively on logical schemas.
    Thanks
    Madha

  • Schema 1 to Schema 2 migration: Delegated Admin problem

    I want to migrate Messaging Server 6.2 (JES 2005Q1) from Schema 1 to Schema 2.
    I have installed Access Manager and Delegated Admin.
    With commdirmig I migrated the domain and schema, and messaging works correctly.
    However, I have a problem with the Delegated Admin web interface.
    Delegated Admin doesn't show my domain. If I add the sundelegatedorganization objectclass I can see my domain in Delegated Admin, but I cannot view users and groups.
    Any ideas?
    TIA
    Bye, Giovanni

    There are two very different products called "Delegated Admin". The old iPlanet Delegated Admin (iDA) only works with Schema 1. The current Delegated Admin, which comes with JES3, only works with Schema 2.
    If you're using the old iDA that worked with Schema 1, it won't work with Schema 2. You have to install the new DA for that.
    It doesn't work with groups/lists, only with users and domains.

  • Can't execute stored proc of one schema in another schema from Java app

    I am posting my problem in this forum as I thought it could be server-independent.
    I am working with Apache Tomcat and the Spring framework against an Oracle DB (schema/user A).
    We access the Oracle DB from our Java application through a JNDI data source, and it works fine. We have SQL statements, stored procs, and functions, and they all run fine.
    Now we created a role (DBROLE) with all permissions on the original DB schema/user (A). We created another empty schema B and assigned that role (DBROLE) to user B.
    (We granted all kinds of permissions on the tables/packages of schema A to the role DBROLE and also created synonyms.)
    The intention is to access schema A through schema B from the application, avoiding direct access.
    In our Spring application, we replaced the database settings with schema B.
    Things work fine when a plain SQL statement is run from the Java code, but the stored procs won't run and we get a
    'Wrong number of arguments/data types' error.
    All stored procs are in packages. To execute a stored proc from the Java code, we use SimpleJdbcCall.
    I also checked running the stored proc from schema B directly, and it works. It is only from the web app that it doesn't work.
    Please suggest what should be done to make this work, or if there is another alternative.
    Thanks

    Instead of importing a schema into another schema, specify the schemas in the external-schemaLocation property:
    SAXParser saxParser = new SAXParser();
    saxParser.setProperty("http://apache.org/xml/properties/schema/external-schemaLocation", "xmlschema1.xsd, xmlschema2.xsd");

  • Best LKM to move data from with in Oracle from one schema to another Schema

    Hi Gurus,
    What is the best KM to move data from one schema to another schema within the same Oracle database?
    Thanks in advance

    Dear,
    If your source and target are on the same database server then you don't need an LKM.
    You have to:
    1. Create one data server for the database server.
    2. Create one physical schema for your source and another physical schema for your target under the data server created above.
    3. Create models for each physical schema created above.
    In this case you just need an IKM knowledge module.
    Please refer to http://oditrainings.blogspot.in/2012/08/odi-interface-source-target-on-same.html
    If your source and target are on different servers then you must create two different data servers in the topology, and you have to use an LKM.
    The best LKM to use is LKM Oracle to Oracle (DBLINK), but you need the proper grants to use it.
    If your source has very few records you can go with LKM SQL to Oracle; otherwise use LKM Oracle to Oracle (DBLINK).
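    Under the hood, LKM Oracle to Oracle (DBLINK) moves rows across a database link; conceptually it amounts to something like this sketch (the link name, credentials, and table names are hypothetical):
      -- On the target server: create a link to the source database
      CREATE DATABASE LINK src_link
        CONNECT TO src_user IDENTIFIED BY src_password
        USING 'SRC_TNS_ALIAS';

      -- Pull the rows across the link
      INSERT INTO target_schema.my_table
      SELECT * FROM source_schema.my_table@src_link;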

  • To kill a session in one schema from another schema

    Hi Team,
    I have a problem: a table in one of my schemas has been locked. I am getting 'ORA-00054: resource busy and acquire with NOWAIT specified' when trying to delete rows from that table, or even when trying to truncate it.
    Let the table be 'T1', present in schema 'VIEW'.
    I tried to kill the session which is active for that schema with the queries below:
    select sid, serial#, status from v$session where username = 'VIEW' and status = 'ACTIVE';
    alter system kill session '681,2586';
    But I couldn't do the above, as I don't have the DBA privilege for that. I do have the DBA privilege for another schema; let it be 'ADMIN'.
    Now, how can I kill the session in schema 'VIEW' from schema 'ADMIN'?
    Can anyone get me a solution?
    Thanks in advance
    11081985

    I have a problem: a table in one of my schemas has been locked. I am getting 'ORA-00054: resource busy and acquire with NOWAIT specified' when trying to delete rows from that table, or even when trying to truncate it.
    Before you do anything, why don't you actually find out WHY that table has been locked?
    You generally should NOT be killing sessions without knowing what is causing the problem to begin with.
    Then you also need to determine whether you should use KILL SESSION or instead use DISCONNECT SESSION, as well as whether the use of IMMEDIATE is appropriate.
    Each of those choices acts differently. Many people use KILL when they should really use DISCONNECT.
    See the DISCONNECT SESSION Clause and KILL SESSION Clause in the ALTER SYSTEM chapter of the SQL Language doc:
    http://docs.oracle.com/cd/E11882_01/server.112/e17118/statements_2014.htm#i2282145
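    To find out who is holding the lock before deciding anything, a starting point is the blocking_session column (a sketch; requires SELECT privilege on v$session):
      -- Who is blocking whom?
      SELECT sid, serial#, username, blocking_session, event
      FROM   v$session
      WHERE  blocking_session IS NOT NULL;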
