Best Practices - Dynamic Ranking, Dimension Values to Return, etc.

The pinned post says non-technical questions belong on the Business Forum. I can't find an Endeca-specific business forum. If there is one, please tell me where to find it.
My question is about dynamic ranking and the initial display of only the top N dimension values with a "More..." option to see the rest of them.
What's the current wisdom on this from a usability point of view? Use it, or don't? If using it, how many values do you show initially?
Or, if not using it, do you instead set up a hierarchy of dimensions so that the user never has to look at 50 choices for something?
This is not a technical question; I'm after the current wisdom and best practices.
Thanks!

Dynamic ranking is a good choice only if the choices cannot be further grouped. In my experience, most content can be normalized and restricted to a very limited set of options. Dynamic ranking with "More" is an easy way out and seems like a lazy take on content management.

Similar Messages

  • Best Practice: Dynamically changing Item-Level permissions?

    Hi all,
    Can you share your opinion on the best practice for Dynamically changing item permissions?
    For example, given this scenario:
    An Item Creator can create an initial item.
    After the item creator creates it, the item becomes read-only for him. Other users can create items, but they can only see their own entries (Created By).
    At any point in time, other users can be given Read access (or any other access) to a specific item by an Administrator.
    Edit permission on the item is then given to a Reviewer and an Approver. Reviewers can only edit, and Approvers can only approve.
    After the item has been reviewed, it becomes read-only for everyone.
    I read that a List / Library can only carry a certain number of unique permissions before performance issues start to set in. Given the requirements above, it looks like item-level permission is unavoidable.
    Do you have any ideas on how best to approach this?
    Thank you!

    Hi,
    According to your post, my understanding is that you want to change item-level permissions.
    There is no out-of-the-box way to accomplish this with SharePoint.
    You can create a custom permission level using Visual Studio that allows users to add and view items, but not edit them.
    Then create a group with that custom permission level. Users in this group would be able to create and add items, but they could not edit them.
    On CodePlex there is a set of custom workflow activities, but by default it only has four permission levels:
    Full Control, Design, Contribute and Read.
    You should also customize some permission levels for your scenario.
    What's more, when using SharePoint Designer 2013, you should use only the 2010 platform to create workflows with these activities:
    https://spdactivities.codeplex.com/wikipage?title=Grant%20Permission%20on%20Item
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Best Practice: Migrating transports to Prod (system down etc.)

    Hi all
    This is more of a process and governance question as opposed to a ChaRM question.
    We use ChaRM to migrate transports to Production systems. For example, we have a Minor BAU Release (every 2 weeks), a Minor Initiative Release (every 4 weeks) and a Major Release (every 3 months).
    We realise that some of the major releases may require SAP to be taken offline. But what is SAP Best practice for ANY release into Production? i.e. for our Minor BAU Release we never shut down any Production systems, never stop batch jobs, never lock users etc.
    What does SAP recommend when migrating transports to Prod?
    Thanks
    Shaun

    Have you checked out the "Two Value Releases Per Year" whitepaper for SAP recommendations?  Section 6 is applicable.
    Lifetime Support by SAP » Two Value Releases per Year
    The "real-world" answer is going to depend on how risk-adverse versus downtime adverse your company is.  I think most companies would choose to keep the systems running except when SAP forces an outage or there is a real risk of data corruption (some data conversions and data loads, for example).
    Specific to your minor BAU releases, it may be wise to make a process whereby anything that requires a production shutdown, stopped batch jobs, locked users, etc. needs to be in a different release type. But if you don't have the kind of control, your process will need to allow for these things to happen with those releases.
    Also, with regards to stopping batch jobs in the real world, you always need to balance the desire to take full advantage of the available systems versus the pain of managing the variations.  If your batch schedule is full, how are you going to make sure the critical jobs complete on time when you do need to take the system down?  If it isn't full, why do you need that time?  Can you make sure only non-critical batch jobs run during those times?  Do you have a good method of implementing an alternate batch schedule when need be?

  • Best practice/pattern for flag values

    I'm writing a class that has several public static final int fields that can be combined bitwise. I'm debating whether to number all the default values zero, have zero represent the defaults but NOT have public fields numbered zero, or number every field with a power of two.
    I've thought of the following pros and cons:
    The advantages of the first pattern include allowing the defaults to be set explicitly in user code, making the coder's intent apparent. The disadvantages include it being an error to test flags against the default (zero) values, and that error not being apparent to the user.
    The advantages of the second pattern include there being no default values to erroneously test against. Disadvantages include the inability to set defaults explicitly.
    The advantages of the third pattern include being able to test against defaults and set defaults explicitly. Disadvantages include the possibility of erroneously setting two mutually exclusive flags.
    I'm leaning toward the third pattern, because I can make it my own problem to test for mutually exclusive flags. Is there anything else I haven't considered?
    Edited by: kjkrum on Oct 2, 2010 9:50 AM
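    For concreteness, here is a minimal sketch of the three numbering schemes described above; the constant names and values are hypothetical:
    // Hypothetical constants illustrating the three patterns side by side.
    public class Flags {
        // Pattern 1: defaults are explicit zero-valued public constants.
        // The trap: (flags & OPTION_DEFAULT) == OPTION_DEFAULT is always true.
        public static final int OPTION_DEFAULT = 0;
        public static final int OPTION_A       = 1;

        // Pattern 2: zero still means "all defaults", but no public constant
        // names it, so there is nothing to erroneously test against.
        public static final int OPTION_B = 2;

        // Pattern 3: every value, including each default, is a distinct power
        // of two, so defaults can be set and tested explicitly, at the cost
        // of having to police mutually exclusive combinations.
        public static final int MODE_DEFAULT = 4;
        public static final int MODE_ALT     = 8; // mutually exclusive with MODE_DEFAULT
    }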

    kjkrum wrote:
    "I'm writing a class that has several public static final int fields that can be combined bitwise."
    Perfect, do it.
    public class Person {
        // female is 0, male is 1
        public static final int FLAG_GENDER   = 1;
        public static final int FLAG_EMPLOYEE = 2;
        public static final int FLAG_CLIENT   = 4;

        protected int flags = 0;

        public String getGender() {
            return (isFlag(FLAG_GENDER) ? "male" : "female");
        }

        public boolean isEmployee() {
            return isFlag(FLAG_EMPLOYEE);
        }

        public boolean isClient() {
            return isFlag(FLAG_CLIENT);
        }

        public void setGender(String gender) {
            // use equals(), not ==, to compare string contents
            setFlag(FLAG_GENDER, "male".equals(gender));
        }

        public void setEmployee(boolean value) {
            setFlag(FLAG_EMPLOYEE, value);
        }

        public void setClient(boolean value) {
            setFlag(FLAG_CLIENT, value);
        }

        protected boolean isFlag(int flag) {
            return ((flags & flag) == flag);
        }

        protected void setFlag(int flag, boolean value) {
            if (value) {
                flags |= flag;
            } else {
                flags &= ~flag;
            }
        }
    }
    Done, that took me a whole 10 minutes with rewrites.
    What's more important... you sitting here trying to make a philosophical decision, or getting the class definition written?
    Edited by: pierrot_2 on Oct 3, 2010 3:28 PM
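    Since the third pattern's open risk is callers combining mutually exclusive flags, here is a minimal sketch of the validation the original poster plans to own; the flag names and grouping are hypothetical:
    // Hypothetical power-of-two flags where the two WRAP_* values are mutually exclusive.
    public class TextOptions {
        public static final int WRAP_NONE = 1;
        public static final int WRAP_WORD = 2;
        public static final int READ_ONLY = 4;

        private static final int WRAP_MASK = WRAP_NONE | WRAP_WORD;

        // Rejects any combination that sets more than one flag from the exclusive group.
        public static void validate(int flags) {
            if (Integer.bitCount(flags & WRAP_MASK) > 1) {
                throw new IllegalArgumentException(
                    "WRAP_NONE and WRAP_WORD are mutually exclusive");
            }
        }
    }
    For example, validate(WRAP_WORD | READ_ONLY) passes, while validate(WRAP_NONE | WRAP_WORD) throws.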

  • Best practice for reading GUI values?

    I am using a Matisse GUI. This GUI has a number of variables and buttons that will invoke methods from other classes. How do I pass, or otherwise make available, the GUI variables (jText..., jLabel...) to the receiving function without passing each variable individually?

    You should define objects for each Panel.
    For example, your application will consist of lots of Panels.
    Each panel will have two methods: one accepts an object and sets all the GUI fields; the other returns a new object built from the values in the GUI fields.
    E.g.
    public class CirclePanel extends JPanel {
        JLabel radiusLabel = new JLabel("Circle Radius");
        JTextField radiusTF = new JTextField();

        public CirclePanel() {
            // .. layout GUI components
        }

        public void setFields(Circle c) {
            radiusTF.setText(String.valueOf(c.radius)); // setText() needs a String
        }

        public Circle getCircle() {
            return new Circle(Double.parseDouble(radiusTF.getText()));
        }
    }
    Now, let's say you want to have this Panel in a dialog.
    You should put your OK and Cancel buttons in the dialog.
    Don't put the buttons in the panel, because you may want to use that panel somewhere else that doesn't need the buttons.
    public class CircleDialog extends JDialog implements ActionListener {
        JButton okButton = new JButton("OK");
        JButton cancelButton = new JButton("Cancel");
        CirclePanel panel = new CirclePanel();
        ActionListener a = null;
        Object src = null;

        public CircleDialog() {
            // ... add circle panel
            // ... add buttons
            okButton.addActionListener(this);
            cancelButton.addActionListener(this);
        }

        public void actionPerformed(ActionEvent evt) {
            a.actionPerformed(evt); // forward button clicks to the caller's listener
        }

        public Object getSource() {
            return src;
        }

        public void show(Object src, ActionListener a) {
            this.a = a;
            this.src = src;
            super.show();
        }
    }
    Suppose you have a menubar that launches the dialog:
    public class CircleMenuBar extends JMenuBar implements ActionListener {
        JMenuItem circleItem = new JMenuItem("Make Circle");
        CircleDialog circleDialog = new CircleDialog();

        public CircleMenuBar() {
            circleItem.addActionListener(this);
        }

        public void actionPerformed(ActionEvent evt) {
            Object src = evt.getSource();
            if (src == circleItem) {
                circleDialog.show(src, this);
            } else if (src == circleDialog.okButton) {
                Circle c = circleDialog.panel.getCircle();
                // do something with it
                // like call CircleAPI.add(c); // adds circle to database
                circleDialog.setVisible(false); // JDialog has no close(); hide it instead
            } else if (src == circleDialog.cancelButton) {
                circleDialog.setVisible(false);
            }
        }
    }
    This architecture basically allows your dialogs to be non-modal.
    Typically for a GUI application, I create a Dialogs class that has
    all the dialogs the GUI uses. The dialogs are created only if the
    user asks for the dialog. Basically the singleton pattern
    for each dialog (see the sketch below).
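    A minimal sketch of the lazily created Dialogs holder described above; the class is hypothetical and reuses the CircleDialog from the reply:
    // Hypothetical holder: each dialog is created only on first request
    // (the singleton pattern, one instance per dialog type).
    public class Dialogs {
        private static CircleDialog circleDialog;

        public static CircleDialog getCircleDialog() {
            if (circleDialog == null) {
                circleDialog = new CircleDialog();
            }
            return circleDialog;
        }
    }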

  • Best practice to pass a value into a sub-process

    Hi
    I'm new to Oracle workflow and have the following problem/question.
    I have three identical subprocesses (one for each category) which should run in parallel, and I need to know for which category I'm running inside the activities of these subprocesses.
    Of course, I would like to define the subprocess only once and reuse it for the other two parallel paths.
    What's the best way of doing this, and how should I pass the category into the subprocess?
    I can think of the following
    - use an item attribute to pass the CATEGORY
    - use a process attribute to pass the CATEGORY into the subprocess
    - use the <process_name> of WF_ENGINE.GET_ACTIVITY_LABEL (each subprocess having a different process_name)
    Is there a better way of doing this?
    Thanks
    Guido

    Unfortunately there is a limitation that Oracle Workflow does not support using a subprocess activity multiple times within a process hierarchy.
    See http://download-west.oracle.com/docs/cd/B10501_01/workflow.920/a95265/defcom36.htm#pact
    You could create one subprocess and then make copies with different names, and include logic in the main process to transition to the appropriate one.

  • Best-practice on naming dimensions to match Oracle data source

    Hi all,
    I have a DW that will consist of a number of Oracle datamarts and multiple Essbase hypercubes. Can anyone advise on the naming of dimension fields that I will want to federate with the Oracle datamarts in OBIEE?
    I.e. should they have the same name? If my column in an Oracle table is CLIENT_ID, should the dimension in Essbase have the same name? Does it make a difference?
    Thanks in advance

    Hi There,
    As long as your member names don't clash with the reserved keywords in Essbase (please refer to the DBAG for details), you shouldn't have any problem. Any name goes. One of the few important rules is that a member name can't exceed 80 characters.
    As far as naming standards are concerned, professionals generally focus on giving the right name to the Essbase application/database. That's half the game when we have many data marts. While it would be nice to have custom names in Essbase, it's not recommended: retaining the members in their natural state avoids complaints from users in the future and, most importantly, avoids manual intervention by developers/support. The ideal scenario is to let whatever is at the source flow through to Essbase via your marts.
    Since you have the data models and requirements, you're better placed than anybody else to decide on the naming conventions :D.
    - Natesh

  • Best Practice for Time Dimension

    I am designing a new outline. I need to set up a time dimension for every day, perpetually. In the past I would set up one dimension for the years, i.e. 2010, 2011, 2012..., and a second dimension with the days, i.e. Jan 01, Jan 02.
    Is there a better way than this? I would like to do it in one dimension if possible.

    If you create a new ASO database (remember, this does not work in BSO), right-click in the outline (okay, the completely blank outline).
    You should see a pop-up menu with "Create date-time dimension..."
    I should also note that you have to click on the words "Outline: dbname" to do this.
    I wonder if you are trying to do this in a BSO app, as that will make the menu item show up disabled.
    Regards,
    Cameron Lackpour

  • Best practice for editing Captions, photo names, places, etc.

    Hello - I find the small windows in iPhoto a bit clunky for doing a lot of meta-tagging. Anyone have any thoughts or suggestions on how best to do this? Where I can, I do batch changes. If I upload photos to MobileMe galleries, can you edit this data there and have it sync back?
    Thanks.
    Steve

    You mean adding metadata? Keywords can be added in the Info Window or using the Keyword Manager window, or with Quick Groups.
    Titles and Descriptions (as iPhoto calls Captions) can be added via the Info Window or by the Batch Change command, if you’re working in batches.
    You can also add titles and keywords simply by editing them directly under the image in Browse mode.
    Those suggestions cover about all your options.
    Regards
    TD

  • Value or binding, best practice.

    I was using a lot of value attribute before. Played with Sun Creator lately and realized that it is using binding very heavily.
    What's the best practice to use value and binding?
    Using binding is more like Java Swing: you can access the UI component inside the page bean. Using value, the user input is converted to the right type by the framework, which is very convenient.
    thanks.

    Note that you can use both at the same time.
    I think that if you're comfortable with the concept of coding a "model" for your data - instead of just, say, writing JDBC calls - then the best practice is to use "value", and use "binding" only where you need to specifically access the component for component-ish things. This'll keep your model cleanly isolated from your view, generally considered a good thing.
    But if you've got a strong personal preference for just talking to APIs like JDBC directly, and don't want the extra abstraction of a model layer, then by all means use "binding". Sun's JS Creator is targeted at that sort of developer.
    Hope this helps.
    -- Adam Winer (EG member)
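    To make the reply's distinction concrete, here is a minimal sketch of a backing bean using both styles; the bean name, property names, and page snippets are hypothetical:
    import javax.faces.component.html.HtmlInputText;

    // Hypothetical backing bean contrasting "value" and "binding".
    public class CustomerBean {
        // "value" style: the page uses <h:inputText value="#{customerBean.name}"/>.
        // The framework converts the submitted string and stores it here; the
        // bean never touches the component itself.
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // "binding" style: the page uses <h:inputText binding="#{customerBean.nameInput}"/>.
        // The component instance itself is handed to the bean, so you can call
        // component-ish methods (rendered state, styling) directly.
        private HtmlInputText nameInput;
        public HtmlInputText getNameInput() { return nameInput; }
        public void setNameInput(HtmlInputText nameInput) { this.nameInput = nameInput; }
    }
    As the reply suggests, the value-style property keeps the model cleanly separated from the view; reach for binding only when you genuinely need the component.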

  • Server Core 2008 R2 SP1 - AD DS Best Practice Analyzer Scans Don't Produce Any Output

    Hi,
    This is a re-post moving this discussion to the recommended forum "Server Core" from here:
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/cc33d429-88e0-4450-a73c-361e395fd217.
    I am having problems producing any output for any AD DS Best Practice Analyzer Scans on a Windows Server Core 2008 R2 SP1 Domain Controller.
    I have imported the "ServerManager" and "BestPractices" PS modules on that Server by running the following commands:
    Import-Module ServerManager
    Import-Module BestPractices
    I've then run
    Get-BPAModel, to find out what best practice scan models are available; this returns the following output:
    Id                                   LastScanTime
    Microsoft/Windows/DirectoryServices  Never
    Microsoft/Windows/DNSServer          Never
    I then run all the BPA scans on that box:
    Get-BPAModel | Invoke-BPAModel
    This returns the following output:
    ModelId                              Success  Detail
    Microsoft/Windows/DirectoryServices  True     (InvokeBpaModelOutputDetail)
    Microsoft/Windows/DNSServer          True     (InvokeBpaModelOutputDetail)
    Since the BPA invocation results weren’t displayed automatically, I entered the following command to see them:
    Get-BPAModel | Get-BPAResult | Out-File "D:\Source\BPA.txt"
    This command will create a text file with the scan results but I only see the results of the DNSServer scan, not the DirectoryServices scan.
    I have also tried to view the results in a HTML format by running the following command but still only see the DNSServer scan results:
    Get-BPAModel | Get-BPAResult | ConvertTo-Html | Set-Content d:\Source\BPA.htm
    I have also tried executing the scan ONLY for the "Microsoft/Windows/DirectoryServices" model, but can't get any results to be returned.  I have also connected using Server Manager from a full install of Server 2008 R2 SP1, but that doesn't seem to show any results under the "Best Practices Analyzer" section when the "Active Directory Domain Services" node is selected; all 4 tabs ("Noncompliant", "Excluded", "Compliant" and "All") show zero (0).  However, the summary text above the tabs does show when the last scan was performed, which seems to be correct.
    Is there something special that needs to be done to produce the BPA results for the "Microsoft/Windows/DirectoryServices" BPA model on Server Core 2008 R2 SP1?
    BTW: The Forest/Domain is W2K3R2 Native, this is the first W2K8R2 DC in the environment and I have installed .NET 4 framework (Server Core) to support Powershell 3, also installed.
    Thanks, Paul.
    belpad

    Hi Diana,
    OK, pretty sure I've now found the root cause of the issue I've described above.
    I was also looking into Windows Update Agent issues for these W2K8R2 Server Core DCs, where no updates would be applied via WSUS (configured via GPO) and updates would fail with "FATAL: CBS called Error with 0x8000ffff windows update agent server core".
    Yesterday, I managed to get WSUS updates working again on one of the W2K8R2 Server Core DCs by removing one of the .NET 4 Framework security updates (KB2600211), which was manually applied when the server was initially set up.  .NET 4 (Server Core Edition, http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=22833) was installed as a prerequisite for PowerShell 3.  Once this update was removed and the affected Server Core DC restarted, WSUS updates started to get applied.
    So I followed the same procedure on the other Server Core DC, but this did not resolve the WSUS issue that time.  Next, I did further investigation into the Windows Update Agent problem.  This led me to the following article:
    http://blogs.technet.com/b/brad_rutkowski/archive/2008/07/03/windows-update-fails-with-8000ffff-e-unexpected.aspx which described an issue with NTFS permissions being set incorrectly on the C: drive, with the "BUILTIN\Users" group completely missing from the C: drive.
    I found the affected Server Core DC also had this issue. When "BUILTIN\Users" was assigned permissions on the C: drive as described above and the Windows Update Agent was restarted, the Server Core DC started to install all required updates configured via WSUS.
    Next, I ran the Directory Service BPA, which now produces the desired output either locally or remotely via Server Manager.
    Therefore, I can only assume that the Directory Service BPA also uses "Network Service" much like WUAUSERV (Windows Update Agent), which requires access to the C: drive via the "BUILTIN\Users" assignment.
    So this has subsequently led me to check the C: drive (%systemdrive%) permissions across multiple W2K8R2 machines, all of which showed differing assigned permissions, as follows:
    1. W2K8R2 Server Core DC - With Directory Services BPA and Windows Update Agent Not Working
    C:\>icacls c:\
    c:\ BUILTIN\Administrators:(OI)(CI)(F)
        CREATOR OWNER:(OI)(CI)(IO)(F)
        NT AUTHORITY\INTERACTIVE:(OI)(CI)(RX)
        NT AUTHORITY\SYSTEM:(OI)(CI)(F)
    2. W2K8R2 Server Core DC - With Directory Services BPA and Windows Update Agent Working OK
    C:\>icacls c:\
    c:\ NT AUTHORITY\SYSTEM:(OI)(CI)(F)
        BUILTIN\Administrators:(OI)(CI)(F)
        BUILTIN\Users:(OI)(CI)(RX)
        BUILTIN\Users:(CI)(AD)
        BUILTIN\Users:(CI)(IO)(WD)
        CREATOR OWNER:(OI)(CI)(IO)(F)
    3. W2K8R2 Full DC - With Directory Services BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT SERVICE\TrustedInstaller:(F)
       NT SERVICE\TrustedInstaller:(CI)(IO)(F)
       NT AUTHORITY\SYSTEM:(M)
       NT AUTHORITY\SYSTEM:(OI)(CI)(IO)(F)
       BUILTIN\Administrators:(M)
       BUILTIN\Administrators:(OI)(CI)(IO)(F)
       BUILTIN\Users:(RX)
       BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
       CREATOR OWNER:(OI)(CI)(IO)(F)
    4. W2K8R2 Server Core DHCP Server (Migrated from W2K3 with Server Migration Tools) - With DHCP BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT AUTHORITY\SYSTEM:(OI)(CI)(F)
       BUILTIN\Administrators:(OI)(CI)(F)
    5. W2K8R2 Server Core DHCP Server (Migrated from W2K3 with netsh) - With DHCP BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT AUTHORITY\SYSTEM:(OI)(CI)(F)
       BUILTIN\Administrators:(OI)(CI)(F)
       BUILTIN\Users:(OI)(CI)(RX)
       BUILTIN\Users:(CI)(AD)
       BUILTIN\Users:(CI)(IO)(WD)
       CREATOR OWNER:(OI)(CI)(IO)(F)
    None of the above servers have a Group Policy or any in-house scripts defined that configure C: drive permissions.  It seems odd that there should be such a variance in the C: (%systemdrive%) drive permissions across the above servers, with only scenarios 2 and 5 having matching permissions.  I can only imagine that some software or software update might be causing this.
    By reviewing the above output, it seems there is also a difference between the C: drive permissions of W2K8R2 Server Core and W2K8R2 Full.  Not sure if this is by design?
    Is there any Microsoft documentation describing what the default %systemdrive% NTFS permissions should be for W2K8R2 Server Core and Full?  Furthermore, do these permissions change when the various infrastructure roles are installed and enabled, i.e. Domain Controller, DHCP, etc.?  I ask since I would like to use the correct set of permissions for %systemdrive% in each scenario. Please advise if I should be asking this question in a different forum.
    belpad

  • iBook to desktop syncing best practices

    Trying to keep my iBook in sync with my G5 desktop: client projects in addition to the Entourage data files, etc. I've come across numerous scenarios and recommendations. Anyone with any best-practice suggestions (software, syncing scenarios, automation, etc.) would be greatly appreciated.

    Hello Hugh
    The settings that you are looking for are in iTunes. You can choose to sync only unlistened podcasts.
    If you go to the podcast section in iTunes, there is a field that says Keep, and you can choose from the following options:
    All episodes
    All unplayed episodes
    Most recent episode
    Then when you connect your iPod you will see an option to sync only the unlistened podcasts, and you should be all set.

  • Best practices for nested virtualization

    Can someone please provide best practices for creating vSwitches (ports, port groups, etc.)?
    I'm learning vSwitches and just want to not overthink the process. I don't have networking experience, but I understand the basics.

    Hi,
    When implementing any LO module, generally the following points should be kept in mind.
    1. At the base level, have an ODS to load data from R/3. This ensures that you have exactly the same data content as in R/3. This can be write-optimized.
    2. At the second level, have an ODS with the transformation and modification of data based on the business and functional requirements and enhancements.
    3. Finally, have a cube to consolidate the data and make it available for reporting. Create all the reports based on the cube.
    Hope this gives an idea.
    Regards,
    akhan
    Edited by: Akhan_BI on Sep 5, 2009 12:41 AM

  • Best Practice for Extracting a Single Value from Oracle Table

    I'm using Oracle Database 11g Release 11.2.0.3.0.
    I'd like to know the best practice for doing something like this in a PL/SQL block:
    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;
    Of course, the problem here is that when there is no hit, the NO_DATA_FOUND exception is raised, which halts execution.  So what if I want to continue in spite of the exception?
    Yes, I could create a nested block with EXCEPTION section, etc., but that seems clunky for what seems to be a very simple task.
    I've also seen this handled like this:
    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;
    But this still seems like killing an ant with a sledgehammer.
    What's the best way?
    Thanks for any help you can give.
    Wayne

    Do not design in order to avoid exceptions. Do not code in order to avoid exceptions.
    Exceptions are good. Damn good. As it allows you to catch an unexpected process branch, where execution did not go as planned and coded.
    Trying to avoid exceptions is just plain bloody stupid.
    As for you specific problem. When the SQL fails to find a row and a value to return, what then? This is unexpected - if you did not want a value, you would not have coded the SQL to find a value. So the SQL not finding a value is an exception to what you intend with your code. And you need to decide what to do with that exception.
    How to implement it. The #1 rule in software engineering - modularisation.
    E.g.
    create or replace function FindSomething( name varchar2 ) return foo.col1%type is
      id foo.col1%type;
    begin
      select col1 into id from foo where col2 = upper(name);
      return( id );
    exception when NO_DATA_FOUND then
      -- the caller decides what a null "not found" result means
      return( null );
    end;
    And that is your problem. Modularisation. You are not considering it.
    And not the only problem mind you. Seems like your keyboard has a stuck capslock key. Writing code in all uppercase is just as bloody silly as trying to avoid exceptions.

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension table, i.e. in the BMM they are of course separated into Fact and Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and I e.g. have a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution is to have an alias of the fact/transaction table, i.e. two "copies" of the transaction table (one for the fact and one for the dimension) in the physical layer. The only bad thing is that there will then always be two lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a datawarehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic
