ODI SCD Type2 questions

Guys,
I am implementing ODI SCD Type 2 on a dimension.
When we use the Type 2 ODI IKM, do we need to select a CKM as well?
We have a 3-tier ODI architecture, i.e. the staging area is on a different database from the target, so we need to select the 'Staging area different from target' option. Will that work with the Type 2 SCD IKM?
Can you please clarify these doubts?
Cheers
Jinni

Hi Jinni
No, you do not need to select a CKM.
Yes, this should work with the staging area different from the target.
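For reference, here is a minimal sketch of the kind of dimension table the Type 2 IKM works against: a surrogate key, a natural key, and the start/end/current-flag columns you map to the SCD behaviors in the model. All names below are illustrative assumptions, not ODI-generated DDL.

    -- Illustrative SCD Type 2 dimension layout (names are assumptions)
    CREATE TABLE dim_customer (
        customer_sk   INTEGER      NOT NULL,  -- surrogate key
        customer_nk   VARCHAR(30)  NOT NULL,  -- natural key
        customer_name VARCHAR(100),           -- 'add row on change' column
        start_date    DATE         NOT NULL,  -- starting timestamp
        end_date      DATE,                   -- ending timestamp
        current_flag  CHAR(1)      NOT NULL   -- current record flag
    );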
Cheers
David

Similar Messages

  • Expert - Generating SCD Type2 Mapping for a Table Operator - To Share

    Hi All,
    I have created an Expert which generates an SCD Type 2 Mapping for a Table Operator being used as a Dimension.
    Let me know how I can share this expert. This forum has helped answer many of my questions and I would like to contribute.
    Thanks,
    Sam.

    Hi David,
    I am trying to follow the steps on that page, but I am stuck at step 5, uploading the zip file: I cannot find the "Add an Attachment" link in the Page Toolbox.
    The steps on that page are
    1) Use the link in the "Page Toolbox" (right-hand column) to add a new page, with a title describing the expert or script.
    2) In the keywords for the page, enter "OWB USER CONTRIBUTED EXPERT".
    3) On that page, describe the expert or other script, the version of OWB it was developed for (including patch level), how to install and use it, and who to contact with any questions.
    4) Package the Expert files in a Zip file.
    5) Use the "Add an Attachment" link in the Page Toolbox to upload your Zip file as an attachment to the page for your expert. Alternatively, post a link to your own web site where the expert can be downloaded.
    Thanks,
    Sam.

  • Building SCD Type2 changes. Any record deletion in Source does not expire the Target Record

    Building SCD Type 2 changes: a record deletion in the source does not expire the target record. When I delete a record in the source table, I expect the corresponding target record to be 'expired', i.e. its End_Date populated and Active = 'N'.
    BTW: in Table Comparison I have checked 'Detect deleted row(s) ...', and '... largest generated key' is selected by default.
    This is not happening! My updates and inserts work fine.

    Hi
    Do you have detect deletes set on Table Comparison?
    I also add Map operations to the output of History Preserving and manage the insert/update/delete streams separately, controlling the record start/effective and end/expiry dates with variables according to each stream's requirements; e.g. the update that ends the previous record gets its end date set to a business or run date variable minus 1.
    You only need Key Generation for inserts (including the final state of a deleted record).
    Use a Merge to bring the streams back together.
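    To make the delete stream concrete, here is a minimal SQL sketch of the expiry it effectively performs (table and column names are assumptions; in Data Services this is done through the transforms above, not hand-written SQL):

        -- Expire current target rows whose natural key no longer exists in the source
        UPDATE dim_target t
           SET t.end_date = TRUNC(SYSDATE) - 1,  -- run date variable minus 1, as above
               t.active   = 'N'
         WHERE t.active   = 'Y'
           AND NOT EXISTS (SELECT 1
                             FROM src_table s
                            WHERE s.natural_key = t.natural_key);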

  • ODI - SCD Type 2 - Insert new row error

    Hi All,
    For the dimension I have a surrogate key, a natural key, a column set to "overwrite on change", plus start_date, end_date and current_record_ind. When I run the interface with the default SCD Type 2 IKM for SQL Server, it runs fine. But when I change that one column from "overwrite on change" to "insert new row", it fails on the update step. What should I be looking at and fixing?
    Thanks for your time and help.

    DB: SQL Server 2008.
    IKM = IKM MSSQL Slowly Changing Dimension
    Error Message: ODI-1228 - Incorrect syntax near the keyword from.
    Code: update T
    set
    from database.dbo.Dim_type as T,
    database.dbo.I$_Dim_type as S
    where T.Cd = S.Cd
    and T.Current_rec_ind = 1
    and IND_UPDATE = 'U'
    To work around the issue I commented out the update step in the Knowledge Module, and the insert then works. That is acceptable for this interface, but I have requirements where one column must be overwritten while changes to other columns must insert a new record. How do I handle both?
    Thanks for your time.
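    For context, the update step fails because once no column is left on "overwrite on change", the IKM generates an UPDATE with an empty SET clause, as in the code above. A valid historization update for this pattern would look roughly like the sketch below (illustrative only; the real statement is generated by the IKM, and the SET columns are assumptions):

        -- Close the current version of each changed row before the new version
        -- is inserted (names taken from the failing statement above)
        UPDATE T
           SET T.Current_rec_ind = 0,
               T.End_date        = GETDATE()
          FROM database.dbo.Dim_type AS T
          JOIN database.dbo.I$_Dim_type AS S
            ON T.Cd = S.Cd
         WHERE T.Current_rec_ind = 1
           AND S.IND_UPDATE = 'U';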

  • Cluster with 2 linux machines and ODI console - some questions

    Hello,
    I need to set up a domain with the ODI plugins (console etc.) in a clustered environment. The OS is Oracle Linux 6.3.
    I've read this documentation: http://docs.oracle.com/cd/E13222_01/wls/docs81/adminguide/createdomain.html#CreateClusteredDomain and I have some additional questions:
    - I know I need to install WebLogic on 2 machines. Should I install Oracle Data Integrator on both machines as well?
    - Creating domains, starting domains etc.: I assume I should do this on the first server (for example, via ssh)? Or will I need to log in via the cluster IP address?
    - Multicast address: this is not entirely clear to me. Should this IP already exist in my environment, i.e. do I need to configure my network interfaces somehow? Or do I simply provide any IP from 224.0.0.0 to 239.255.255.255 and it will work?

    MukeshNegi wrote:
    Which version of weblogic you are using?
    WebLogic 11g, 64-bit.
    MukeshNegi wrote:
    if you are using shared filesystem between your machines then you don't need to install again on second server, Simply register ORACLE_HOME for ODI and oracle_common with oraInventory on second server.
    What do you mean by "shared file system"? Let's say I have 2 separate physical machines that exist in the same LAN. And I assume ORACLE_HOME is the WebLogic home directory, but what is "oracle_common"? Can you describe all of this in more detail?
    MukeshNegi wrote:
    Simply go to $ODI_ORACLE_HOME/common/bin on server1
    run config.sh and select following from domain template
    - Oracle Enterprise Manager Plug-in for ODI
    - Oracle Enterprise Manager Plug-in
    - Oracle Data Integrator Console
    - Oracle Data Integrator Agent
    - Oracle JRF
    Should I do the same on the second server machine? If not, how will WebLogic know about the other physical machine in my network and that it should be available to join my cluster? There is no domain and no admin server set up on the second server; shouldn't I do this? There are a lot of tutorials describing how to set up a cluster via config.sh or the Enterprise Manager console, but:
    - they describe how to add a managed server to my cluster, whereas I need to know about the physical machine servers. So, should I somehow create a managed server on the second machine? And what about my domains: should they be re-created the same way on the second server? I can't find any information about this; there are only Enterprise Manager screenshots showing how to create a managed server on the same physical machine and how to join managed servers into one cluster. None of this tells me what I should do to complete my scenario.
    - cluster IP address: I still don't understand this. The end user should be able to access the ODI console via the cluster address, am I right? So, is any system network configuration required? How is this IP address created?
    - I have set up all of this (ODI/WebLogic/domain) on a single machine, and I have a second server with only the operating system installed (the same Oracle Linux). What's the simplest way to join this second physical machine and make all of this work as a clustered environment? Is there a step-by-step instruction/tutorial describing ALL the steps?
    Sorry for the basic questions; I'm really a newbie with this and I hope you are patient enough to answer all of this ;)

  • ODI and Essbase - question about updating structure (temp otls)

    Hi,
    versions:
    ODI 11.1.1
    essbase 11.1.2.2.1 (linux)
    I'm running an interface that updates a dimension structure with data from the corresponding dimension in relational Oracle. It is actually a pretty simple interface using the KM "IKM SQL to Hyperion Essbase (METADATA)". The execution runs fine with no errors and the structure is updated as expected; however, we noticed that .otl files are being created in a tmp folder and never deleted. This folder is on the Essbase server (/tmp). If I run this interface many times in a day, I end up with as many files in that folder as executions I did. So my question is whether anybody knows why those files are created and why they are not removed when the interface execution ends.
    Thinking ahead, I'll have to create a shell script to clean up this folder so we never run into storage issues with those temporary otl files.
    Thanks in advance for any contributions
    Eduardo

    Agreed.
    I also think those files were created by the Java APIs. Just for testing purposes, I ran the rule manually and those otls were not created, which is one more reason to believe this theory.
    However, what makes me wonder is: has no one else hit this issue? What are you all doing with those files?
    Maybe there is some setup I need to do so those files are no longer created.
    Thanks,
    Eduardo

  • ODI Reverse-Engineering Question

    We've been using ODI 10.3.5.1 for about a year now. The current load process was set up for us during the implementation of Hyperion Planning. We are now trying to add a new dimension to our hierarchy, and I mistakenly went in and added the columns manually to all the models I could think of. I also added the columns manually to the source and target datastores in the interfaces.
    Naturally, the load did not work. I keep getting the error "invalid column name 'Cost_Center'". I have been told by the installer (over email) that I should have reverse-engineered the models and let the program build the datastores. I tried to do the reverse-engineering and am now getting an error for the HYP_PLANNING model, while the HYP_ESSBASE model runs and runs without doing anything noticeable.
    How long should a reversal run? I don't believe our hierarchy is extraordinarily large.
    This is all in the TEST environment right now, but I need to move things to LIVE for the August financials, so I have until 9/9/09 to figure this out. Any insights would be greatly appreciated.

    Thanks for the quick response.
    Like I said, it was done by the consultant, so I don't even know what an agent is.
    As for the error, it was:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 41, in ?
    com.hyperion.odi.planning.ODIPlanningException: Failed to sync with user provisioning. Check Planning log for details
         at com.hyperion.odi.planning.wrapper.PlanningWrapper.login(Unknown Source)
         at com.hyperion.odi.planning.ODIPlanningConnection.connect(Unknown Source)
         at com.hyperion.odi.common.ODIModelImporter.importModels(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx0.f$0(<string>:41)
         at org.python.pycode._pyx0.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.hyperion.planning.HspRuntimeException: Failed to sync with user provisioning. Check Planning log for details
         at com.hyperion.planning.HspJSImpl.synchronizeUserWithProvisioning(Unknown Source)
         at com.hyperion.planning.HspJSImpl.login(Unknown Source)
         at com.hyperion.planning.HspJSImpl.login(Unknown Source)
         at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
         at com.hyperion.planning.HyperionPlanningBean.Login(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:85)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:58)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
         at java.lang.reflect.Method.invoke(Method.java)
         at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:279)
         at sun.rmi.transport.Transport$1.run(Transport.java:164)
         at java.security.AccessController.doPrivileged1(Native Method)
         at java.security.AccessController.doPrivileged(AccessController.java)
         at sun.rmi.transport.Transport.serviceCall(Transport.java:160)
         at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:505)
         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.handleRequest(TCPTransport.java:837)
         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:911)
         at java.lang.Thread.run(Thread.java:570)
         at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(Unknown Source)
         at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source)
         at sun.rmi.server.UnicastRef.invoke(Unknown Source)
         at com.hyperion.planning.HyperionPlanningBean_Stub.Login(Unknown Source)
         ... 34 more
    com.hyperion.odi.planning.ODIPlanningException: com.hyperion.odi.planning.ODIPlanningException: Failed to sync with user provisioning. Check Planning log for details
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)

  • ODI/OWB scripting question

    Hi all,
    I'm new to ODI and would appreciate some guidance. I have developed a substantial ETL environment using OWB (10g) and have now been directed to convert it to ODI (11g).
    In OWB we had the OMB+ and Tcl scripting capability to create and modify repository objects. Anything that could be created or modified in the GUI (such as OWB Mappings or Process Flows) could be done with OMB+ scripting. With this, I could perform mass changes to the mappings.
    Does ODI support the same sort of design-time scripting capability? I understand that ODI uses Jython and Java, but so far everything I have seen on these refers to run-time scripting. I would like to be able to create or modify Interfaces and Packages using a scripting API.
    Can someone point me in this direction?
    Thanks in advance,
    Philip

    Have a look at the Groovy console in 11.1.1.6; there are some improvements that make scripting easier.
    Some recent postings;
    https://blogs.oracle.com/warehousebuilder/entry/odi_11g_getting_scripting_with
    https://blogs.oracle.com/warehousebuilder/entry/odi_11g_scripting_the_model
    https://blogs.oracle.com/warehousebuilder/entry/odi_11g_expert_accelerator_for
    Cheers
    David

  • What is the difference between CDC and SCD type2

    I am a new user of BODS and have used SCD Type 2 (deltas), capturing and loading only the changed data to targets.
    I am trying to understand the difference between CDC and SCD Type 2.
    CDC says "capture changed data", so I assume both are the same. Is that true?
    Thanks for the helpful info.

    CDC is Change Data Capture.
    The CDC methods enable you to extract and load only the new or changed records from the source, rather than loading all records. This is also called a delta or incremental load.
    SCD Type 2 (Slowly Changing Dimension Type 2)
    This lets you store/preserve the history of changed records for selected dimensions. The transaction/source table will mostly hold only the current value; SCD Type 2 is used where the history of a certain dimension is required for analysis.
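    To make the contrast concrete, here is a minimal SQL sketch (table, column and bind-variable names are assumptions): CDC controls what you read from the source, while SCD Type 2 controls how you write history into the target.

        -- CDC-style incremental extract: read only rows changed since the last run
        SELECT * FROM src_orders WHERE last_modified > :last_run_ts;

        -- SCD Type 2-style load: close the current version, then insert the new one
        UPDATE dim_customer
           SET end_date = SYSDATE, current_flag = 'N'
         WHERE customer_nk = :nk AND current_flag = 'Y';
        INSERT INTO dim_customer (customer_nk, customer_name, start_date, end_date, current_flag)
        VALUES (:nk, :new_name, SYSDATE, NULL, 'Y');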
    Regards,
    Suneer

  • SCD Type 2 Conversion

    Hello, I am converting to OWB. The existing ETL has some support for SCD Type 2. It does not support triggering attributes; however, if the ETL determines that the business identifier no longer exists in the source table, it sets a status attribute to 'I' (for inactive) in the target table.
    I assume I can program this logic in OWB using an SCD Type 1. However, if possible I would like to take advantage of OWB's native support for SCD Type 2. Is it possible to define an SCD as Type 2 without identifying any triggering attributes? If so, can OWB detect when the business identifier no longer exists in the source table and update the expiration date in the target table? And can I modify the logic that updates the expiration date so that it also sets the status attribute to 'I' (for backwards compatibility)?
    I may have to resort to an SCD Type 1 and program the update of the status attribute myself. However, if I could use OWB's native support for SCD Type 2, I would be in a position to introduce triggering attributes as the business need arose.
    Thank you for your help.
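    For illustration, the "business identifier no longer in source" expiry described above boils down to something like this sketch (all names are assumptions; OWB generates its own logic):

        -- Expire and inactivate current rows whose business identifier
        -- has disappeared from the source
        UPDATE dim_target t
           SET t.expiration_date = SYSDATE,
               t.status          = 'I'
         WHERE t.expiration_date IS NULL
           AND NOT EXISTS (SELECT 1
                             FROM src_table s
                            WHERE s.business_id = t.business_id);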

    Hi,
    several years ago Oracle published a whitepaper describing the steps for designing OWB mappings that load SCD (type 1, 2 and 3) tables.
    Recently (after a website reorganization) this whitepaper was removed, although you can still find it by googling "scdwhitepaper",
    or you can download it directly from http://sourceforge.net/projects/owbland/files/Stuffs_from_Oracle_site/SCDWhitePaper.zip/download
    There are also OWB experts available for download which implement generation of SCD2 mappings according to this whitepaper:
    http://odi-ee.blogspot.com/2009/02/scd-type2-expert-table-operator.html
    http://owbexpert.blogspot.com/2008/12/hallo-hallo.html
    Also look at this thread {message:id=4336731} discussing a bug which forces updates to all rows in the target table, even those that have not changed between mapping executions.
    Regards,
    Oleg

  • SCD type 2 and 3 for Relational Dimension?

    Thanks for your replies.
    I searched this forum and the web for SCD 2 and 3 implementations using a relational dimension (table) in OWB.
    I found threads that discuss implementing Oracle dimensional structures, which have levels, hierarchies etc.
    If we design our dimension in a purely relational fashion, how do we go about using OWB for SCD 2 and 3?
    Are there any new SQL features in 11g to help us here?
    Thanks for helping.
    RI

    Hi,
    several years ago Oracle published a whitepaper describing the steps for designing OWB mappings that load SCD (type 1, 2 and 3) tables.
    Recently (after a website reorganization) this whitepaper was removed, although you can still find it by googling "scdwhitepaper",
    or you can download it directly from http://sourceforge.net/projects/owbland/files/Stuffs_from_Oracle_site/SCDWhitePaper.zip/download
    There are also OWB experts available for download which implement generation of SCD2 mappings according to this whitepaper:
    http://odi-ee.blogspot.com/2009/02/scd-type2-expert-table-operator.html
    http://owbexpert.blogspot.com/2008/12/hallo-hallo.html
    Also look at this thread {message:id=4336731} discussing a bug which forces updates to all rows in the target table, even those that have not changed between mapping executions.
    Regards,
    Oleg

  • Dimensional Modeling Question

    Hello:
    I am new to dimensional modeling and am trying to design a model based on an existing OLTP system.
    Below are the links to the ERDs for the existing OLTP system and the dimensional model I have developed. Can someone expert in this area take a quick look and offer suggestions? It won't take much time as the model is simple. I am especially worried about the inclusion of the bridge table STORE_DEPARTMENT and about dimension tables referencing other dimension tables. Is this normal, or am I doing it wrong? I am trying to check whether the model can answer some of the DSS questions, but your suggestions would really help if I am going in the wrong direction.
    Thanks in advance for your help :)
    Regards,
    Ramesh

    Hi,
    There is a trade-off between simplicity and analytic flexibility.
    A star schema is good if your functional requirements are really simple, e.g. the dimensions are not SCD Type 2 (slowly changing dimensions) and you don't need "AS IS" vs "AS WAS" reporting.
    In modern analytics, dimensions in any domain tend to be SCD Type 2, as the business keeps evolving. In a star schema structure this causes an explosion of data if there are frequent changes at the higher levels of the dimensional hierarchy, which in turn hurts performance.
    In my experience it is better to have snowflaked dimensions at the data model level; when managing the metadata (in a BI reporting tool) you can then consolidate the snowflaked dimensions into star schema structures. That makes ad hoc analytics much simpler for business users.
    Many performance measures can then be taken to improve the end-user experience.
    In short, the trend in BI analytics favors a snowflaked structure over a simple star schema.
    Hope this helps.

  • Lookup SCD Use Most Current not working

    Hello Forum,
    Can I get some advice on how to set up/configure the Type 2 History Lookup? The ETL developer and I thought that by just selecting the "Use the most current record" option, OWB would automatically use the dimension key of the row that has no expiration date populated. Instead, the mapping is loading both the dimension key for the previous (expired) version of the record and the one for the new version. The data being added to my fact table now has two rows per daily insert instead of one, which is not a good thing.
    I'm thinking we need to use the effective date somehow in the lookup, so that the new items being added to my cube (fact table) point to the new version of the record in our SCD II dimension.
    What's the trick to setting up the Type 2 History Lookup tab?
    And what does the "Use most current record" radio button actually do?
    Thanks,
    Ron

    There are several outstanding issues (bugs) against the SCD Type 2 dimension functionality in OWB.
    It is possible your issue is related to one of these:
    5372855 - Key Lookup on SCD2 - Parent Values are not Returned (Dimension is populated twice)
    5845656 - Need Effective Dates For Initial Version and New Version of Records
    6017185 - Dimension Operator Property 'Type2 Gap' Not Working
    These bugs are scheduled to be fixed in the next major release of OWB, as the SCD functionality is being improved and better documented.
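    The effective-date lookup suggested in the question would, in SQL terms, look something like this sketch (illustrative names; OWB generates its own lookup logic):

        -- Resolve the dimension key whose validity window covers the fact's
        -- load date, so exactly one version per natural key matches
        SELECT f.*, d.dimension_key
          FROM fact_staging f
          JOIN scd2_dim d
            ON d.natural_key = f.natural_key
           AND f.load_date >= d.effective_date
           AND (f.load_date < d.expiration_date OR d.expiration_date IS NULL);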

  • SCD - source sets where multiple changes exist for the same natural key

    I am using the Oracle SCD Type 2 method (end-date the old record and insert the new record), which was working fine until my source set contained multiple changes for the same natural key. Does anyone know of a way to handle this?
    Thanks

    Hello,
    I think the way of handling multiple changes to a natural key depends entirely on the business requirements of your project.
    There are a number of possible approaches, for example:
    - all changes but the last are considered active during one day, each placed sequentially right after the previously existing SCD record;
    - all changes but the last are considered "active" depending on their occurrence within the "fact" data (insofar as you detect the NK change from some attribute set, some of those attributes probably also show up in the "fact" data);
    - etc.
    So, agree on this with your customer, and the ETL design decision will follow immediately.
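    As a concrete illustration of the first approach, a window function can sequence multiple changes per natural key within one batch so that each version ends where the next begins (a minimal sketch; names are assumptions):

        -- Derive start/end dates for stacked changes to the same natural key
        SELECT natural_key,
               attribute_value,
               change_ts AS start_date,
               LEAD(change_ts) OVER (PARTITION BY natural_key
                                     ORDER BY change_ts) AS end_date  -- NULL = still current
          FROM staging_changes;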
    Sergey

  • Star schema versus snowflake schema

    I have a question regarding dimensional data modeling: when is a star schema model useful, and when is a snowflake schema model useful?
    In a star schema, we have only the fact table connected to dimensions. In a snowflake schema, we normalize dimensions one more level. Say we have the dimension product: product can be normalized into another table called supplier. To take another example, the customer dimension can be normalized into country...
    The advantage of a star schema is that queries are easy to write, since there are fewer tables and you do not need to join as many of them; this can sometimes improve performance.
    With a snowflake schema it is a little more complex to write queries, since we have to join multiple tables, but performance might sometimes improve because we join smaller tables...
    My question is: under what circumstances should we use a star vs. a snowflake schema? I am not able to pin down the word "sometimes".
    Any help is highly appreciated...
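    To picture the product/supplier example above in DDL terms, a minimal sketch (all names are assumptions):

        -- Star: one denormalized product dimension
        CREATE TABLE dim_product_star (
            product_sk    INTEGER PRIMARY KEY,
            product_name  VARCHAR(100),
            supplier_name VARCHAR(100),  -- supplier attributes folded in
            supplier_city VARCHAR(100)
        );

        -- Snowflake: supplier normalized into its own table
        CREATE TABLE dim_supplier (
            supplier_sk   INTEGER PRIMARY KEY,
            supplier_name VARCHAR(100),
            supplier_city VARCHAR(100)
        );
        CREATE TABLE dim_product_snow (
            product_sk    INTEGER PRIMARY KEY,
            product_name  VARCHAR(100),
            supplier_sk   INTEGER REFERENCES dim_supplier(supplier_sk)
        );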

    Hi,
    There is a trade-off between simplicity and analytic flexibility.
    A star schema is good if your functional requirements are really simple, e.g. the dimensions are not SCD Type 2 (slowly changing dimensions) and you don't need "AS IS" vs "AS WAS" reporting.
    In modern analytics, dimensions in any domain tend to be SCD Type 2, as the business keeps evolving. In a star schema structure this causes an explosion of data if there are frequent changes at the higher levels of the dimensional hierarchy, which in turn hurts performance.
    In my experience it is better to have snowflaked dimensions at the data model level; when managing the metadata (in a BI reporting tool) you can then consolidate the snowflaked dimensions into star schema structures. That makes ad hoc analytics much simpler for business users.
    Many performance measures can then be taken to improve the end-user experience.
    In short, the trend in BI analytics favors a snowflaked structure over a simple star schema.
    Hope this helps.
