Creating Dimensions Best Practices

I am going through the different options for creating my dimensions and wanted to get some insight into what would be considered the "best" way. I understand that this could depend on context, but thought I would see what others had to say.
Say you had, for example, 3 dimension tables and 2 fact tables:
Dim1 → Fact1, Fact2
Dim2 → Fact1, Fact2
Dim3 → Fact1
The way I see it, if the primary key of Dim1 is a subset of the keys of Dim2 and Dim3, and the primary key of Dim2 is a subset of Dim3's, you have two choices. Create 3 dimensions, one for each dimension table, and use preferred drill paths to jump between them. Or create one dimension in a logical table that uses the data from the three physical dimension tables. In my testing, I believe the better approach is the single logical table, so that one dimension can be built. I have noticed that level-based measures don't work quite as well when using drill paths.
If anyone has any comments to add it would be greatly appreciated. Before building my solutions I like to do my best to see if I am missing anything that could bite me later.
Thank you in advance for any help :)

Best practices depend on each project and each architect :)
I wouldn't use drill paths if they are not necessary. I would create the different tables at the DB level and import them into the repository as they are.
J.-

Similar Messages

  • Discoverer Adm. Best Practices

    Hi All,
    I'm doing a consultancy job for this Company and when they heard I know Oracle Discoverer, they asked me to create some kind of document to evaluate what they have developed here using Discoverer. Plus they want me to create a "Best Practices" document for creating business areas.
    Well... I must confess I don't know discoverer that well ;-).
    So, I'd like to hear from experienced users how you guys do this kind of job. What should I look for to see what is right, what is wrong, what we can do to make it better... the usual stuff.
    I've been looking at security issues (they grant privileges to users, instead of having roles, etc.). But I guess I won't be able to review any reports without talking to the end-users (those who develop reports using Discoverer Desktop/Plus/whatever...).
    Please tell me what you think about it and what else I should look for. At this moment, I'm not sure where I should point my efforts...
    Thanks in advance,
    Marcos

    Hey Marcos.
    Big #1 question - are they running Oracle Apps (E-Business Suite) or not? I'm guessing not, as you mentioned roles instead of responsibilities.
    Otherwise - regardless of whether it's roles or responsibilities (and this is all my experienced opinion):
    1. Do not grant privs or share business areas with users - only roles or responsibilities. Why? There are lots of reasons, but for a few: it's way cleaner to have job assignments (roles, etc.) so that when someone moves jobs, gets promoted, etc., all the existing privs and security keep on working, PLUS you don't have to share workbooks with numerous people, just with their role / resp. Also, when someone leaves, you can have a problem getting rid of any workbooks they've created - as not even the admin can see a user's report in the database if it's not shared with them. (For the database side of this, see the sketch after this post.)
    2. Are they using Disco Desktop, Plus or Viewer?
    Now I know some may prefer it differently (had this discussion before on this forum), but if there's lots of users, I try to give Disco Viewer to the vast majority of end users because end users (99% of the time - and I'm sure I'll get slammed for this) - SHOULD NOT BE WRITING REPORTS. Most of the time, they'll screw up and choose too many folders and wonder what a fan trap means, do dumb conditions (such as multiple not ins, multiple queries with %, etc.) and their reports run like a dog.
    For the couple in each department who are tech savvy and want to help others in their department, these ones become power users and they can have Disco Plus or Desktop to create the reports.
    3. Create one user called something like: CORP. This user is (for example) you. With your Discoverer brilliance, you are in charge of creating workbooks that are the CORP standard - that work correctly, look pretty (with corp approved logos, colors, etc.) and are shared with the correct roles / resps. If a power user creates a good report, get them to share it with CORP; you CORP-orize it, check it, and it becomes the CORP standard. This alone has worked great in all the Disco contracts I've been on - once the company buys into it.
    4. In the EUL, you try and set up business areas that are logical (ie: either by geographical area (ie: US AR, MX AR, etc.), or by business job (ie: AR, AP, GL), or a combination of both (usually what it evolves to over time).
    5. In the EUL - if possible - identify the indexes. This makes a huge difference if users can be taught to limit workbooks, wherever possible, to indexed items in conditions. I learned this long ago working for years with NoetixViews (a Disco view product), where indexed items in folders (ie: pointing to views in the database) are identified with a prefix of A$.
    6. Always, always ... sort your items in the folders in the business area. Sure, if you're using 10+, you can do it automatically, but if on an earlier product (ie: 4i), then do it manually. It sounds dumb, but it makes a huge difference for you - and end users - creating reports, as you find items way, way quicker.
    That's the first 6. I'm sure more'll be added.
    Russ
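
    As referenced in point 1 above, a minimal sketch of the role-based approach at the database level (role, user, and table names are hypothetical); the business area itself is then shared with the role in Discoverer Administrator rather than with individual users:

    -- Hypothetical: one role per job function, granted to whoever holds the job
    CREATE ROLE disco_ar_clerk;
    GRANT SELECT ON ar.invoices TO disco_ar_clerk;  -- underlying data access
    GRANT disco_ar_clerk TO jsmith;                 -- job change? just revoke/grant the role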

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In the documentation, I can only find: "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread stating that a primary key on the fact table is necessary.
    Primary Key on Fact Table.
    But if no business rule requires uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect from having no primary key on the fact table? And are there any benefits from not creating a primary key?

    Well, the combination of the dimension keys connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or the OWB insert / update functionality, they won't work, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
    Edited by: Cortanamo on 16.12.2010 07:12
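
    To illustrate the surrogate-key option, a minimal Oracle DDL sketch (all table and column names are hypothetical): the surrogate key serves as the primary key, while a unique constraint on the combination of dimension keys still enforces one row per business event.

    -- Hypothetical sales fact: surrogate PK plus a unique composite key
    CREATE TABLE sales_fact (
      sales_fact_id  NUMBER       NOT NULL,  -- surrogate key, populated from a sequence
      date_key       NUMBER       NOT NULL,  -- FK to date_dim
      product_key    NUMBER       NOT NULL,  -- FK to product_dim
      store_key      NUMBER       NOT NULL,  -- FK to store_dim
      sales_amount   NUMBER(12,2),
      CONSTRAINT sales_fact_pk PRIMARY KEY (sales_fact_id),
      CONSTRAINT sales_fact_uk UNIQUE (date_key, product_key, store_key),
      CONSTRAINT sales_fact_dt_fk FOREIGN KEY (date_key)    REFERENCES date_dim (date_key),
      CONSTRAINT sales_fact_pr_fk FOREIGN KEY (product_key) REFERENCES product_dim (product_key),
      CONSTRAINT sales_fact_st_fk FOREIGN KEY (store_key)   REFERENCES store_dim (store_key)
    );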

  • Best Practices for creating PDFs using PLPDF?

    Does anyone have any suggestions for Best Practices in making PDF files using PLPDF?
    I have been using it for about a month now, and the best I have come up with is to use MS Access to prototype the layout of a report. Once I have all the graphics areas and text areas lined up how I want them, I then write PL/SQL code to create a procedure which is called from an HTMLDB page. MS Access is handy in that it provides the XY coordinates for each text area and graphics area. It also provides the dimensions of the respective cells. So long as I call plpdf.Init('P', 'in', 'letter') at the beginning of the procedure, both my MS Access prototype and my plpdf code are using inches - this makes the translation relatively easy.
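    For reference, a minimal PL/SQL sketch of that pattern. Only plpdf.Init comes from this post; the other calls (NewPage, PrintCell) and their signatures are assumed names for illustration only - check them against the PLPDF documentation for your version:

    CREATE OR REPLACE PROCEDURE my_pdf_report IS
    BEGIN
      plpdf.Init('P', 'in', 'letter');  -- portrait, units in inches, letter paper
      plpdf.NewPage;                    -- assumed call: start the first page
      -- X/Y position and cell size copied straight from the MS Access prototype
      plpdf.PrintCell(1.0, 1.0, 3.0, 0.25, 'Report Title');  -- assumed call and signature
      -- ... then return/store the document as your PLPDF version provides
    END my_pdf_report;
    /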
    Has anybody found anything else easier/better?
    Regards,


  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when I have more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table, i.e. 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic
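
    A minimal sketch of that suggestion (table and column names are hypothetical): expose the dimension attributes of the transaction table through a materialized view, so the physical layer sees a separate "dimension table" without a second lookup against the base table at query time.

    -- Hypothetical: dimension-style MV over the transaction table
    CREATE MATERIALIZED VIEW trans_dim_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND  -- refresh right after each load of the source
    AS
    SELECT trans_id, trans_type, trans_status
      FROM transactions;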

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I log in as portal and created a private connection. I then from
    Oracle Portal create a portlet and add a discoverer worksheet using the private
    connection that I created as the portal user. This works fine...users access
    the portal they can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the
    worksheet. The worksheet is displayed properly from Portal; when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it in at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make it happen by creating a private connection for the 40 users via capi script, and when creating the portlet select the 2nd option in the Users Logged In section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering the password or not in ASC in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Creating Billing Unit in CRM V1.2007 Best Practices C05

    Hi,
    in C05 (Org Model with HR Integration) for Best Practices V1.2007 I have to create a Billing Unit.
    That means, I have to create a Corporate Account.
    For creation of the Corporate Account I need a Number Range and a Grouping.
    My Question:
    Maintaining Number Range and Grouping for Business Partners is described in C03.
    In the Solution Builder C03 comes after C05.
    So do I first have to finish C03 manually via SPRO, or at least maintain a Number Range and a Grouping, so that I'm able to create the Billing Unit as a Corporate Account and then proceed with C05?
    Regards
    Andreas

    Hi Padma,
    We are facing the same issue while installing the Baseline Best Practices:
    "Transport numbers not fullfill the requirement"
    We are trying to activate the full solution.
    I have already created a new workbench and customizing request, but still it's giving "Transport numbers not fullfill the requirement".
    I am not able to find a solution for this on the service marketplace.
    Thanks & Regards,

  • Best practice for creating RFC destination entries for 3rd parties(Biztalk)

    Hi,
    We are on SAP ECC 6 and we have been creating multiple RFC destination entries for the external 3rd party applications such as Biz-talk and others using TCP/IP connection type and sharing the programid.
    The RFC connections with IDoc as the data flow have been made using synchronous mode for the time-critical ones (few) and asynchronous mode for the majority of the others. RFC destination entries have been created for many interfaces, each with a unique RFC destination and its corresponding port defined in SAP.
    We have both inbound and outbound connectivity. With the large number of RFC destinations being added, we wanted to review the approach. We wanted to check with others who have encountered a similar situation and are keen to learn from their experiences.
    We also wanted to know if there are any best practices to optimise on number of RFC destinations.
    Here were a few suggestions we had in mind to tackle the same.
    1. Create unique RFC destinations for every port defined in SAP for external applications such as Biztalk, for as many connections as needed. (This would mean one for inbound, one for outbound.)
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    I have done checks on the SAP best practices website, SAP OSS notes, and help pages, but could not find the specific information I was after.
    I do understand we can have an unlimited number of RFC destinations and maximum connections using the appropriate profile parameters for the gateway, RFC, client connections, and additional app servers.
    I would appreciate it if you could suggest the best architecture or practice for setting up RFC destinations in an optimized manner.
    Thanks in advance
    Sam

    Not easy to give a perfect answer
    1. Create unique RFC destinations for every port defined in SAP for external applications such as Biztalk, for as many connections as needed. (This would mean one for inbound, one for outbound.)
    -> Be careful if you have multiple clients (for example in acceptance): RFCs are client-independent but ports are not! You could run into trouble.
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    -> This could be the best solution... it's easier to create partner profiles, and the control record will contain the correct partner.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    -> Rather consider option 2.
    We send to our message broker with 1 RFC destination, sending multiple IDoc types, different partners, different ports.

  • Best practice to create multi tier (atleast 3 level) table

    What is the best practice to create a multi-tier (minimum 3 levels) table? Could anyone provide a sample structure?
    Thanks.

    Can you be more specific as to what you are trying to do? What do you mean by a 3-level table?

  • Best Practice : Creating Custom Renderer for Standard Component

    I've been reading the docs and a few threads about Custom Renderers. The best practice seems to be to create a Custom Component where you need a Custom Renderer. Is this the case?
    See [this post|http://forums.sun.com/thread.jspa?forumID=427&threadID=520422]
    I've created several Custom Renderers to override the HTML provided by the Standard Components, however I can't see the benefit in also creating a Custom Component when the behaviour of the standard component is just fine.
    Thanks,
    Damian.

    It all depends on what you are trying to accomplish. Generally speaking if all you need is for the user interface output to be changed then a renderer will work just fine. A new component is usually made in order to provide some fundamental change in server side functionality not related to the user interface. - Ponderator

  • Best Practice while creating Contract, Purchase Requisition, Purchase Order

    Hi
    What is the best practice with respect to Contract & Purchase Requisition?
    In T-code ME31K, there is a button "Reference to PReq", meaning that we can create a contract with reference to a Purchase Requisition. (We have done the same.)
    While creating the Purchase Order using Contract, we could find Purchase Requisition reference in the Purchase Order; similarly when we created the Purchase Order using Purchase Requisition, we could find Contract reference in the Purchase Order.
    I have done the following:
    1. Create a Contract.
    2. Create a Purchase Requisition
    3. Assign Requisition and Create Purchase Order using T Code ME57.
    4. Create Purchase Order
    Here in this case we could find references of both Contract & Purchase Requisition.
    I just want to know what practice we should adopt / advise while creating the Contract, Purchase Requisition & Purchase Order?
    Regards,

    Hi,
    In the ME51N screen, enter the Contract number in the "AGREEMENT" TAB, and if there are more items in the contract, you can mention the item number in the TAB next to the AGREEMENT TAB, then proceed with creating the PR by entering the other details. You can find this TAB in the 4th column from the extreme right end of the PR screen.
    Regards,
    Biju K
    Edited by: Bijay Kumar Barik on Sep 10, 2009 2:23 PM

  • Best Practice for creating Excel report from SSIS.

    I have a requirement to create an Excel report on a daily basis which pulls data from SQL. I have attempted to resolve this by creating a stored procedure to save the results in SQL, a template in Excel to hold the graphs & pivot tables and an SSIS package
    to copy the data to the template.
    Problem 1: When the data turns up in Excel it is saved as text rather than numbers.
    Problem 2: When the data turns up in Excel it appends the data rather than overwriting it.
    I resolved problem 1 by having another sheet which converts the text to numbers (=int(sheet1!A1))
    I resolved problem 2 by adding some VB script to my SSIS package which clears the existing cells before copying the data
    The job runs fine, however when I schedule the job to run overnight it complains "System.UnauthorizedAccessException: Retrieving the COM class factory for component with CLSID". A little googling tells me that running the client side commands in
    my vb script (workSheet1.Range("A2:F9999").Clear(), workBook.Save(), workBook.Close() etc) from a server side task is bad practice.
    So, I am left wondering how people usually get around this problem; copy a SQL table into an existing Excel file and overwrite the data, without having the numbers turn up as text. My requirements are that the report must display pivot charts with selectable
    options and be automatically updated overnight.
    Help appreciated,
    Bish.
    Office 2013 on my PC, Office 2010 on the server, Windows Server 2008R2 Enterprise, SQL Server 2008R2.

    I think the best practice in a case like this is to link the Excel file to a view or directly to a table, so you don't have to struggle with changing the template, overnight packages, etc. If the data are too complex and the demands too excessive, then I tend to create a Cube and that's it... dashboard, graphs, and everyone is happy. In your case, if the request is not too complex, try not to use SSIS but instead build a view and point Excel directly at SQL.
    SSIS is really strong for ETL, for running heavy stored procedures, for working to a scheduled cut-off time, etcetera, etcetera, etcetera... I love it. But sometimes we need to find the easier solution...
    I hope this post helped you
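
    A minimal sketch of the suggested approach (view, table, and column names are hypothetical): publish the data through a view and point the Excel data connection at it, so the workbook refreshes itself instead of being rewritten by a package, and numbers arrive typed as numbers because the view casts them explicitly.

    -- Hypothetical reporting view for an Excel data connection
    CREATE VIEW dbo.v_DailySales
    AS
    SELECT s.SaleDate,
           s.Region,
           CAST(s.Amount AS decimal(12, 2)) AS Amount  -- explicit numeric type
    FROM dbo.Sales AS s;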

  • Best practice to create views

    Hi,
    I've a question about best practice to develop a large application with many complex views.
    Typically at each time only one views is displayed. User can go from a view to another using a menu bar.
    Every view is build with fxml, so my question is about how to create views and how switch from one to another.
    Currently I load the FXML every time the view is required:
    FXMLLoader loader = new FXMLLoader();
    InputStream in = MyController.class.getResourceAsStream("MyView.fxml");
    loader.setBuilderFactory(new JavaFXBuilderFactory());
    loader.setLocation(OptixController.class.getResource("MyView.fxml"));
    BorderPane page;
    try {
        page = (BorderPane) loader.load(in);
    } finally {
        if (in != null) {
            in.close();
        }
    }
    // appController = loader.getController();
    Scene scene = new Scene(page, MINIMUM_WINDOW_WIDTH, MINIMUM_WINDOW_HEIGHT);
    scene.getStylesheets().add("it/myapp/Mycss.css");
    stage.setScene(scene);
    stage.sizeToScene();
    stage.centerOnScreen();
    stage.show();
    My questions:
    1- Is it good practice to reload the FXML every time to build the view?
    2- Is it good practice to create a new Scene every time, or to have a single Scene in the app and each time clear all its elements and set the new view?
    3- Should the views be kept in memory to avoid performance issues, or is that a mistake? I think each view should be destroyed in order to free memory.
    Thanks very much
    Edited by: drenda81 on 21-mar-2013 10.41

    > My questions:
    > 1- Is it good practice to reload the FXML every time to build the view?
    > 2- Is it good practice to create a new Scene every time, or to have a single Scene in the app and each time clear all its elements and set the new view?
    > 3- Should the views be kept in memory to avoid performance issues, or is that a mistake? I think each view should be destroyed in order to free memory.
    In choosing between 1 and 3 above, I think either is fine. Loading the FXML on demand every time will be slightly slower, but assuming you are not doing something unusual such as loading over a network connection it won't be noticeable to the user. Loading all views at startup and keeping them in memory uses more memory, but again, it's unlikely to be an issue. I would choose whichever is easier to code (probably loading on demand).
    In choosing between reusing a Scene or creating a new one each time, I would reuse the Scene. "Clearing all elements in it" only needs you to call scene.setRoot(...) and pass in the new view. Since the Scene has a mutable root property, you may as well make use of it and save the (small) overhead of instantiating a new Scene each time. You might consider exposing a currentView property somewhere (say, in your main controller, or model if you have a separate model class) and binding the Scene's root property to it. Something like:
    public class MainController {
      private final ObjectProperty<Parent> currentView ;
      public MainController() {
        currentView = new SimpleObjectProperty<Parent>(this, "currentView");
      }
      public void initialize() {
        currentView.set(loadView("StartView.fxml"));
      }
      public ObjectProperty<Parent> currentViewProperty() {
        return currentView ;
      }
      // event handler to load View1:
      @FXML
      private void loadView1() {
        currentView.set(loadView("View1.fxml"));
      }
      // similarly for other views...
      private Parent loadView(String fxmlFile) {
        try {
          Parent view = FXMLLoader.load(getClass().getResource(fxmlFile));
          return view ;
        } catch (IOException e) {
          e.printStackTrace();
          return null;
        }
      }
    }
    Then your application can do this:
    @Override
    public void start(Stage primaryStage) throws IOException {
       FXMLLoader loader = new FXMLLoader(getClass().getResource("Main.fxml"));
       loader.load(); // load first, otherwise getController() returns null
       MainController controller = (MainController) loader.getController();
       Scene scene = new Scene(controller.currentViewProperty().get()); // Scene needs an initial root
       scene.rootProperty().bind(controller.currentViewProperty());
       // set scene in stage, etc...
    }
    This means your Controller doesn't need to know about the Scene, which maintains a nice decoupling.

  • The best practice for creating reports and dashboard

    Hello guys
    I am trying to put together a list of best practices on how to create reports and dashboards using the OBIEE presentation service. I know a lot of those dos and don'ts are just corporate words that don't apply consistently in real-world environments, but still I'd like to know whether Oracle has any officially defined best practices or not.
    The only best practice I can think of when it comes to building reports and dashboards is:
    Each subject area should contain only one star schema that holds data for a specific business information
    Is there anything else?
    Please advise
    Thanks

    Read this book to understand what a dashboard is, and what it should do and look like to be used by end users. Very enlightening:
    Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few. (There are a couple of other books by Stephen, and although I haven't read them yet, I anticipate them to be equally helpful.)
    This book was also helpful to me:
    http://www.amazon.com/Performance-Dashboards-Measuring-Monitoring-Managing/dp/0471724173
    I also found this book helpful in Best Practices...
    http://www.biconsultinggroup.com/knowledgebase.asp?CategoryID=337

  • Best Practice on Creating Queries in Production

    We are a fairly new BI installation. I'm interested in the approach other installations take to creating queries in the production environment. Is it standard to create most queries directly in the production system? Or is it standard to develop the queries in the development system and transport them through to production?

    Hi,
    Best practices apply to all developments, whether it is R/3, BI modelling, or Reporting, and as per best practice we do development in the Development system, testing in the testing box, and finally deploy successful developments to production. Yes, for analysis purposes users can do ad hoc analysis, or in some scenarios they create user-specific custom queries (sometimes referred to as X-queries, created by a super user).
    So it is always best to do all your development in the Development box and then transport to Production after successful QA testing.
    Dev
