Implementing hierarchical structure in a data warehouse

I want to create a data warehouse for a credit card application. Each user can have a credit card and multiple supplementary credit cards. Each credit card has a main limit, which can be subdivided into sub-limits assigned to supplementary credit cards as requested by the user. Consider the following example:
User “A” has a credit card “CC” with limit “L” of $100,000.
User “A” requested a supplementary credit card “CC1”, which is assigned limit “L1” = $50,000. He then requested another supplementary credit card “CC2”, which is assigned limit “L2” = $100,000.
Source tables contain data like this:
1. src_client_card_trans: contains transaction data of client/user credit card usage (client_id, credit_card_number, balance_acquired)
Client_id     Credit_card_number     Balance_acquired
A     CC1     $20,000
A     CC2     $50,000
A     CC     $70,000
2. src_card_limits: contains client’s credit cards linked to credit limits.
Credit_card_number     Limit_id
CC1     L1
CC2     L2
CC     L
3. src_limit_structure: contains the relationship of limits and sub-limits.
Limit_id     Sub_Limit_id
L     L1
L     L2
I have designed two dimensions and one fact table. Dimensions are:
1. LIMITS: contains the limit_id.
2. CLIENTS: contains credit card users' information.
The fact table is LIMIT_BALANCES_FACT, which has fact columns keyed to the above dimensions.
How can I implement this limit-hierarchy scenario in the data warehouse? I need your suggestions.
Thanks in advance

Much depends on how you want to analyze the data and there are a few options:
1) Use credit limit as an attribute of the customer dimension. This would allow you to create query filters that show only those customers with a $100,000 credit limit. This would return a list of credit cards (since the attribute would be assigned to each credit card), and then you can simply add, or just keep, the parents of that result set.
However, this assumes you do not want to measure data specifically relating to credit card limit. For example, it would not be possible to view the total amount spent by all customers who had a credit limit of $100,000.
In this case the attribute, credit limit, is simply used to filter a result set.
2) Create a separate dimension called Credit Limit and create three levels:
All
Range
Credit Limit
The level Range would contain groupings of credit limits such as 100-500, 501-1,200, 1,201-2,000, and so on.
This would allow you to analyse your data by customer and by credit limit over time, letting you slice and dice quickly and easily.
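For illustration, a Credit Limit dimension along these lines might be loaded as follows. This is only a sketch: the table, sequence, and source names (credit_limit_dim, limit_seq, src_limit_amounts) are invented, since the post does not show where the limit amounts are stored.

    -- Hypothetical Credit Limit dimension with All -> Range -> Credit Limit levels.
    CREATE TABLE credit_limit_dim (
        limit_key    INTEGER      PRIMARY KEY,   -- surrogate key
        limit_id     VARCHAR2(10) NOT NULL,      -- natural key, e.g. 'L1'
        limit_amount NUMBER       NOT NULL,      -- leaf level, e.g. 50000
        limit_range  VARCHAR2(20) NOT NULL,      -- 'Range' level member
        all_limits   VARCHAR2(10) DEFAULT 'All'  -- single 'All' level member
    );

    -- Populate the Range level with CASE-based buckets during the load.
    INSERT INTO credit_limit_dim (limit_key, limit_id, limit_amount, limit_range)
    SELECT limit_seq.NEXTVAL,
           limit_id,
           limit_amount,
           CASE
               WHEN limit_amount <=  50000 THEN '0-50,000'
               WHEN limit_amount <= 100000 THEN '50,001-100,000'
               ELSE 'Over 100,000'
           END
    FROM src_limit_amounts;  -- assumed source holding limit amounts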
3) A second customer hierarchy could be added to the customer dimension. This would allow you to drill down through different credit limits, to customers, to individual credit cards. It would be advisable to follow the same approach as option 2 and create some groupings for the credit limits, to make the drill-down easier for your business users to navigate (see the sketch after this list):
All
Range
Credit Limit
Customer
Credit Card
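As a rough illustration of option 3, the parent/child rows in src_limit_structure can be denormalized into one row per credit card, carrying both the card's own limit and its parent (main) limit; the dimension load could then start from something like the query below. This is only a sketch against the source tables shown in the question, and deriving the client through src_client_card_trans is an assumption about where that relationship lives.

    -- One row per credit card with its own limit and its parent limit
    -- (NULL parent for the main card), ready for a card-level dimension.
    SELECT t.client_id,
           cl.credit_card_number,
           cl.limit_id AS card_limit,
           ls.limit_id AS parent_limit
    FROM   src_card_limits cl
    LEFT JOIN src_limit_structure ls
           ON ls.sub_limit_id = cl.limit_id
    LEFT JOIN (SELECT DISTINCT client_id, credit_card_number
               FROM src_client_card_trans) t
           ON t.credit_card_number = cl.credit_card_number;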
Hope this helps
Keith Laker
Oracle EMEA Consulting
BI Blog: http://oraclebi.blogspot.com/
DM Blog: http://oracledmt.blogspot.com/
BI on Oracle: http://www.oracle.com/bi/
BI on OTN: http://www.oracle.com/technology/products/bi/
BI Samples: http://www.oracle.com/technology/products/bi/samples/

Similar Messages

  • Advice on implementing Oracle Streams on a RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way schema-level replication within the same database, using Oracle Streams on a RAC 11.2 data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh

  • Business Process Oriented Development of Data Warehouse Structures

    I've recently read an interesting paper by Michael Böhnlein and Achim Ulbrich-vom Ende:
    Business Process Oriented Development of Data Warehouse Structures.
    It discusses the derivation of data warehouse structures from business process models, as opposed to deriving relevant datasets from the underlying operational data sources.
    The Semantic Object Model (SOM) methodology is also introduced. The SOM approach can be useful for the modeling of business systems as well as analysis and design.
    Cheers, Davide

  • Data warehouse implementation misconceptions

    In their book "Mastering the SAP Business Information Warehouse" the authors identify 5 common misconceptions of data warehouse implementations as:
    1) Data warehouse implementations are IT projects
    2) "Quick-win" iterative implementations will lead to a successful data warehouse
    3) Business content is a proper solution to accommodate all BI demands
    4) Governance can be introduced later
    5) Operations is not so important
    I'd like those of you who have experience in data warehouse implementation to rank these five misconceptions in terms of their importance. You may rank them by their numbers, for example: 5, 1, 3, 2, 4.
    Please also indicate:
    1) your job title
    2) the number of years of experience you have in data warehousing
    3) the number of years of experience you have in SAP BW/BI
    Thank you!

    Hi Mark,
    I have to say, your questions below seem very strange. May I ask what your purpose is?
    Please also indicate:
    1) your job title
    2) the number of years of experience you have in data warehousing
    3) the number of years of experience you have in SAP BW/BI

  • Strategy in Data Warehouse Table Structure

    I'm building a relational data warehouse, and there are two approaches that seem almost interchangeable to me, despite being quite different from each other. 
    The first approach is rather simple.  I have a "User" table with a bunch of foreign keys, and then I have a bunch of other tables containing user attributes.  One table for "department," another for "payroll type,"
    another for "primary location," and so on for 20 different user attributes.
    The second approach, instead of using 20+ tables, combines this down into far fewer. I would have an "Attribute Type" table and an "Attribute" table. These two, in conjunction with a bridge table, could accommodate as many attributes as necessary within three tables. If the business wants to track a new user-related attribute, I don't need any new tables. I would simply add the new attribute into the "Attribute Type" table as, say, "attribute 21," and begin tracking it. All the work could be done without ever adding new tables or columns.
    Both approaches seem to maintain (at least) 3NF.  Is one approach better in certain circumstances, and the other approach more appropriate at other times?  Any insight is appreciated!
    BrainE

    Hi Brian,
    The second approach with three tables is not really good here. The Query Optimizer in SQL Server has a few enhancements for star/snowflake schemas in a DW environment, and the 3-table schema would not be able to benefit from them. It would also be harder to maintain, load, and query. Finally, your attributes could have different data types, which you would need to store.
    I would suggest going with the first solution (multiple dimension tables) and following a few extra rules (see the sketch after this list):
    Avoid nullable attributes
    Choose attribute data types as narrow as possible
    Avoid string attributes; if needed, create separate dimension tables for them
    Use columnstore indexes
    Upgrade to SQL Server 2014 if at all possible - there are multiple enhancements in batch-mode processing there
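    A minimal sketch of what those rules produce, using invented names (DimDepartment, FactUser); SQL Server syntax:

        -- Narrow dimension table: small surrogate key, short non-null strings.
        CREATE TABLE dbo.DimDepartment (
            DepartmentKey  int         NOT NULL PRIMARY KEY,
            DepartmentCode varchar(10) NOT NULL,
            DepartmentName varchar(50) NOT NULL
        );

        -- Fact table holds only narrow keys, one per attribute dimension.
        CREATE TABLE dbo.FactUser (
            UserKey        int NOT NULL,
            DepartmentKey  int NOT NULL,  -- references dbo.DimDepartment
            PayrollTypeKey int NOT NULL   -- references dbo.DimPayrollType
        );

        -- Columnstore index enables batch-mode processing on scans
        -- (clustered columnstore requires SQL Server 2014 or later).
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactUser ON dbo.FactUser;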
    Thank you!
    Dmitri V. Korotkevitch (MVP, MCM, MCPD)
    My blog: http://aboutsqlserver.com

  • Best practice for a metadata table in a data warehouse environment?

    Hi gurus,
    In a data warehouse, we have 1. a stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema, we create only the tables required from a reporting perspective. Some of the tables in the staging schema have also been created in the DWH schema with different table names and column names. The naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables, we are creating a metadata table in the DWH schema, say for example:
    Stage       DWH_schema
    Table_1     Table_A
    Table_2     Table_B
    Table_3     Table_C
    Table_4     Table_D
    My question is how we handle the column names for each of these tables. The stage column names (stage_1, stage_2, stage_3) have been renamed in the DWH schema, where they are part of Table_A, Table_B, and Table_C.
    As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. Many of the column names have been renamed in the DWH schema from the stage tables, and some of the tables have 200 columns.
    So my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table, not column names?
    Any ideas will be greatly appreciated.
    Thanks!
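    One possible shape for such a metadata table, sketched here with invented names, is a column-level mapping from which the table-level mapping can be derived, so you do not have to maintain two lists:

        -- One row per stage column and its DWH counterpart.
        CREATE TABLE meta_column_map (
            stage_table  VARCHAR2(30) NOT NULL,
            stage_column VARCHAR2(30) NOT NULL,
            dwh_table    VARCHAR2(30) NOT NULL,
            dwh_column   VARCHAR2(30) NOT NULL,
            CONSTRAINT pk_meta_column_map
                PRIMARY KEY (stage_table, stage_column)
        );

        -- The table-level mapping is then just a view over the column map.
        CREATE VIEW meta_table_map AS
        SELECT DISTINCT stage_table, dwh_table
        FROM   meta_column_map;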

    Hi,
    This seems like quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture, so we have three layers.
    L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, depending on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there, and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, such as calculated ones, are business driven, and metadata is standard driven.
    Datamodeler fits perfectly for modeling the L1 layer.
    L2 is the dimensional schema; business names are used for tables and columns, eventually rewritten at the presentation layer (the front-end tool).
    Hope this helps, D.

  • Data warehouse database

    Today I came across one very interesting question:
    "A data warehouse can only be deployed in a relational database."
    Is the above statement true or false?
    If we look at it simply, or go back 7-8 years, the answer may well be false, as I found out after doing some research on it:
    "A data warehouse can be normalized or denormalized. It can be a relational database, multidimensional database, flat file, hierarchical database, object database, etc. Data warehouse data often gets changed. And data warehouses often focus on a specific activity or entity." - Larry Greenfield
    "The data warehouse is normally (but does not have to be) a relational database. It must be organized to hold information in a structure that best supports not only query and reporting, but also advanced analysis techniques, like data mining. Most data warehouses hold information for at least 1 year and sometimes can reach half a century, depending on the business/operations data retention requirement. As a result these databases can become very large." - en.wikipedia.org
    But I think when we look at the complexity of design and the functionality we expect from a data warehouse today, plus the concepts used in designing the data warehouse structure, like star schema, snowflake, etc., it cannot be done in any type of database that doesn't follow the relational database concept. We may call it a multidimensional database or an ORDBMS, and we may talk about cubes, measures, dimensions and so on, but at its base it has to follow the relational database.
    Let me know if anybody has anything to say about it.
    Regards,
    Raj
    www.oraclebrains.com
    POWERED by the people, to the people and for the people WHERE ORACLE IS PASSION.

    Thanks Justin!
    I agree with you that the concept existed before the relational database was invented. As far as I know they first called it EIS, then they named it MIS, and now data warehouse. But what I am talking about is the modern data warehouse technique. If you really think that any other type of database can support it without following the relational database concept, let me know which one.
    I am still searching for my answer and have discussed this with a lot of people, but when I ask whether they have seen any implementation of a data warehouse without using a relational database, the answer I get is always negative.
    Raj
    www.oraclebrains.com
    POWERED by the people, to the people and for the people WHERE ORACLE IS PASSION.

  • Connect to MS SQL Server 2000 data warehouse

    Hi,
    I use an MS SQL Server 2000 database for my web application, where I use JSP. I am supposed to create a data warehouse using MS SQL Server's Data Transformation Services. But I don't know whether it's possible to connect to an MS SQL Server data warehouse using JSP. So I want to know: is it possible to connect to the data warehouse using JSP, and if it is, how do I do it? Thank you.

    You can certainly connect to M$ SQL Server using the JDBC driver:
    http://www.microsoft.com/downloads/details.aspx?FamilyID=4f8f2f01-1ed7-4c4d-8f7b-3d47969e66ae&displaylang=en
    Connecting to a data warehouse is no different from any relational database. (My understanding is that a data warehouse usually means a star schema implemented in a relational database.) This will connect you.
    If you're not familiar with JDBC, you might need the tutorial:
    http://java.sun.com/docs/books/tutorial/jdbc/

  • Tablespaces and block size in Data Warehouse

    We are preparing to implement a data warehouse on Oracle 11g R2, and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that. The question is: what is the general advice in such considerations regarding tablespaces and block size? I did some research and it is hard to find a clear answer; some resources advise that block size is not important and can be left small (8 KB), while others state that it is crucial and should be as big as possible (64 KB).
    The other question is which parts of the data should be placed where. Many resources state that keeping indexes apart from their data is a myth and a bad practice, as it may lead to a decrease in performance; others say that although there is no performance benefit, index tablespaces do not need to be backed up, and that's why they should be split off. The next idea is to have separate tablespaces for big tables, small tables, and tables accessed frequently and infrequently. How should I organize partitions in terms of tablespaces? Is it a good idea to have "old" (read-only) data partitions on separate tablespaces?
    Any help highly appreciated, and thank you in advance.

    Wojtus-J wrote:
    > We are preparing to implement a data warehouse on Oracle 11g R2 and currently I am trying to set up a storage strategy - unfortunately I have very little experience with that.
    With little experience, the key feature is to avoid big mistakes - don't try to get too clever.
    > The question is what is the general advice in such considerations regarding tablespaces and block size?
    If you need to ask about block sizes, use the default (i.e. 8KB).
    > I did some research and it is hard to find a clear answer.
    But if you get contradictory advice from this forum, how would you decide which bits to follow? A couple of sensible guidelines when researching on the internet: look for material that is datestamped with recent dates (the last couple of years), or that references recent - or at least relevant - versions of Oracle. Give preference to material that explains WHY an idea might be relevant, and greater preference to material that DEMONSTRATES why an idea might be relevant. Check that any explanations and demonstrations are relevant to your planned setup.
    > The other question is which parts of the data should be placed where? [...]
    It is often convenient, and sometimes very important, to separate data into different tablespaces based on some aspect of functionality. The performance argument was mooted (badly) in an era when discs were small and (disk) partitions were hard; but all your other examples of why to split are potentially valid for administrative reasons: big/small, table/index, old/new, read-only/read-write, fact/dimension, etc.
    For data warehouses a fairly common practice is to identify some sort of aging pattern for the data, and try to pick a boundary that allows you to partition the data so that a large fraction of it can eventually be made read-only. Using tablespaces to mark time boundaries can be a great convenience - note that the tablespace boundary need not match the partition boundary, e.g. daily partitions in a monthly tablespace. If you take this type of approach, you might have a "working" tablespace for recent data, and then copy the older data to a "time-specific" tablespace, packing it and making it read-only as you do so.
    Tablespaces are (broadly speaking) about strategy, not performance. (Temporary tablespaces / tablespace groups are probably the exception to this thought.)
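    A minimal sketch of that aging pattern, with invented names (the sales table, partition p_2009, tablespace sales_2009); the exact mechanics depend on your design:

        -- Move an aged partition into its "time-specific" tablespace,
        -- rebuild the local index partition it invalidates, then freeze it.
        ALTER TABLE sales MOVE PARTITION p_2009
            TABLESPACE sales_2009 COMPRESS;

        ALTER INDEX sales_local_idx REBUILD PARTITION p_2009
            TABLESPACE sales_2009;

        ALTER TABLESPACE sales_2009 READ ONLY;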
    Regards
    Jonathan Lewis

  • Are there any TimesTen installations for a data warehouse environment?

    Hi,
    I wonder if there is a way to install TimesTen as an in-memory database for a data warehouse environment?
    The DW today consists of a large Oracle database, and I wonder if and how a TimesTen implementation could be done.
    What kind of application changes would be involved with such an implementation, and so on?
    I know the answer is probably complex, but if anyone knows about such an implementation and has some information about it, it would be great to learn from that experience.
    Thanks,
    Adi

    Adi,
    It depends on what you want to do with the data in the TimesTen database. If you know the "hot" dataset that you want to cache in TimesTen, you can use Cache Connect to Oracle to cache a subset of your Oracle tables into TimesTen. The key is to figure out what queries you want to run and see if the queries are supported in TimesTen.
    Assuming you know the dataset you need to cache and you have control of your application code to change the connection to TimesTen (using ODBC or JDBC), you can give it a try. If you are using a third party tool, you need to see if the tool supports JDBC or ODBC access to the database and change the tool to point to your TimesTen database instead of the Oracle database.
    If you are using the TimesTen Cache Connect to Oracle product option, data synchronization between Oracle and TimesTen is handled automatically by the product.
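    For reference, caching a "hot" Oracle table with Cache Connect looks roughly like the sketch below; the schema, table, and column names are invented, and options such as the autorefresh interval depend on the deployment:

        -- Read-only cache group pulling a hot Oracle table into TimesTen,
        -- refreshed from Oracle every five minutes. Names are illustrative.
        CREATE READONLY CACHE GROUP hot_sales
        AUTOREFRESH INTERVAL 5 MINUTES
        FROM dw.sales_summary (
            region_id   NUMBER       NOT NULL,
            sales_date  DATE         NOT NULL,
            total_sales NUMBER(12,2),
            PRIMARY KEY (region_id, sales_date)
        );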
    Without further details of what you'd like to do, it's difficult to provide more detailed recommendation.
    -scheung

  • Why do we need SSIS and a star schema for the Data Warehouse?

    If SSAS in MOLAP mode stores data, what is the application of SSIS and why do we need a Data Warehouse and the ETL process of SSIS?
    I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from the OLTP database to a Data Warehouse database that contains fact and dimension tables.
    After that I want to create cubes from the Data Warehouse data using SSAS.
    I know that MOLAP stores data. Do I need a Data Warehouse with fact and dimension tables?
    Isn't it better to avoid creating a Data Warehouse and create cubes directly from the OLTP database?

    Another thing to note is that data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in OLTP, as the storage required is minimal, but presenting them as-is would make no sense to users, as they would not know what each bit value represents. In such cases we apply transformations and convert the data into information that users can understand. This is also done in the warehouse, so that information in the warehouse can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would be a hit on the report server. In comparison, bringing the data onto a common layer (the warehouse) and prebuilding aggregates is beneficial for report performance.
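    As a hypothetical example of such a transformation, an ETL step might decode an OLTP bit flag into a readable attribute while loading a dimension (all names here are invented):

        -- Decode OLTP bit flags into user-readable values during the load.
        INSERT INTO dw.DimAccount (AccountId, AccountStatus)
        SELECT a.AccountId,
               CASE a.IsClosed
                   WHEN 1 THEN 'Closed'
                   ELSE 'Open'
               END AS AccountStatus
        FROM oltp.Account AS a;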
    I think (though I am not sure) that we join tables in SSAS queries and calculate aggregations in them.
    I think SSAS stores these values and joined tables so we do not need to evaluate those values again, and this behavior is like a Data Warehouse.
    Isn't it?
    So if I do not need historical data, can I avoid creating a Data Warehouse?
    On the backend, SSAS uses queries only to extract the data.
    By the way, I was not explaining SSAS; I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build cubes (OLAP structures) on top of the data warehouse. A star schema makes it easier to define relationships and build aggregations inside SSAS, as it is simple and requires minimal lookups. Data is also held at the lowest granularity level, which can easily be aggregated to the required levels inside OLAP cubes. Cube processing is very resource intensive, and using the OLTP system would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds complexity. Pre-creating a layer (the data warehouse) that holds data in the required format makes cube processing easier and simpler, as it just has to cross-join tables and aggregate data based on the relationships defined and the level needed inside the cube.
    Please Mark This As Answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    We wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; we just want to increase report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, PCTFREE on the table is currently 10%.
    As we only insert into the table, we are considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
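    A rough sketch of that sequence on a hypothetical partitioned fact table with a local bitmap index (names invented; check the syntax against your version):

        -- Mark the bitmap index unusable before introducing compression.
        ALTER INDEX fact_sales_cust_bix UNUSABLE;

        -- Compress the existing rows in a partition (also sets the attribute).
        ALTER TABLE fact_sales MOVE PARTITION p2012 COMPRESS;

        -- Rebuild the invalidated index partition.
        ALTER INDEX fact_sales_cust_bix REBUILD PARTITION p2012;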

  • Is OBIEE used to create data warehouses dynamically?

    Management where I work wants to use the OBIEE Administrator to source a 3NF normalized database and create a "virtual data warehouse" in the Business Modeling and Mapping layer of OBI Administrator, since a star schema model is required by the OBI Business Model layer. They claim they were told by an Oracle sales rep that the Administrator tool could do this.
    Is this possible? As OBI issues only SQL and not PL/SQL, how can one "create" dimensions, lookup tables, and fact tables dynamically? And even if it could, the performance hit of recreating the virtual data warehouse each time a query is issued would be huge.
    Having used Prism Warehouse Builder and DataStage in the past to create data warehouses, I am aware that one needs a procedural programming language to create and maintain the star schema tables (surrogate key maintenance, controlling workflows, maintaining slowly changing dimensions, intermediate lookup tables, etc.). SQL was not meant to do this heavy-lifting programming. After all, isn't this why Oracle Warehouse Builder, and previously Informatica, is shipped with the OBIEE suite - because OBI is not an ETL tool for creating dimensional models? One uses an ETL tool to create the dimensional data model for OBI to access and pass along the metadata to OBI Answers.
    So is it normal practice to use the Administrator's Business Modeling/Mapping layer to create virtual star schema logical tables from physical tables that are in 3NF? Or is the tool used to access already-denormalized tables in the physical layer that were created using Informatica, OWB, or another ETL tool?

    I asked an "expert" in OBIEE. Here are snippets of his response:
    "Be aware though that the transformation ability is fairly limited, and will only really work with data that is very close to a star schema, i.e. the data can be easily transformed through a couple of denormalizations and table joins. If your source data is very normalized and cannot easily be transformed into a star schema, you would need to use a tool such as Informatica, OWB or similar to extract data from your source systems, load and then transform it into a data warehouse or data mart, and report off of that. The more your data needs to be transformed (i.e. the closer it is to a 3NF model), the more likely it is that you'll need to use an ETL tool, and a data warehouse or data mart, to host your data."
    And in response to my noting the lack of documentation on how to model 3NF to a star schema, his response was:
    "No, you're right, the documentation doesn't really go into "how to" turn a 3NF model into a dimensional model. If you look back to when OBIEE was a Siebel product, the documentation was really aimed at either Siebel consultants or customers who had been on the training; they didn't want customers "off the street" to try and implement OBIEE, as it would hit their services revenue. That's where the blog posts we do, things like the Oracle-by-example training courses on OTN and so on, come in. Otherwise, as you say, there's little out there on the best way to transform your model - it's mostly passed on by word of mouth or built up from experience working on projects."

  • What are the disadvantages of Management Data Warehouse (data collection)?

    Hi All,
    We plan to implement Management Data Warehouse on our production servers.
    Could you please explain the disadvantages of Management Data Warehouse (data collection)?
    Thanks in advance,
    Tirumala

    >We plan to implement Management Data Warehouse on our production servers
    It appears you are referring to production server performance.
    BOL: "You can install the management data warehouse on the same instance of SQL Server that runs the data collector. However, if server resources or performance is an issue on the server being monitored, you can install the management data warehouse on a different computer."
    Management Data Warehouse
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
