Drawbacks of large data volumes in Dimension Tables

Experts,
Below are counts for DIM and FACT tables.
Can somebody please help me understand the implications of a large volume of data in the Dimension tables versus the FACT tables?
What are the drawbacks, and how can DIM tables with a large volume of data hurt us in data warehousing?
Also, in the near future we are targeting cube design on top of these DIM & FACT tables.
Your thoughts/responses are much appreciated.
Thanks in advance
Please do let us know your feedback. Thank You - KG, MCTS

Hi gk1393, 
Incremental loading is fundamental when facing the volumes you are showing. But, besides the ETL considerations, you may want to rethink the design of this data warehouse. 
Are these dimensions actually dimensions? In some businesses dimensions are very big, but it is not that common to find many dimensions bigger than the facts (at least not in a mature data warehouse). 
Building cube dimensions on top of these big tables presents some performance problems, both when processing and when querying the cube. Disabling attributes for processing, removing extra text attributes that are not used in aggregations, and reducing data type sizes (if
you can store data in a smallint data type, don't use bigint, for example) are useful techniques to mitigate these drawbacks. 
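A small, purely illustrative sketch of the data type point (table and column names are hypothetical):

    -- oversized: 8-byte keys and long free text on a multi-million-row dimension
    CREATE TABLE dim_product_wide (
        product_wid   BIGINT        NOT NULL,
        size_code     BIGINT,
        color_code    BIGINT,
        long_comment  VARCHAR(4000)   -- never used in aggregations
    );

    -- right-sized: smaller integer types, unused text attribute dropped
    CREATE TABLE dim_product_slim (
        product_wid   INT           NOT NULL,
        size_code     SMALLINT,
        color_code    SMALLINT
    );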
But, again, I'd think about the data warehouse design itself to check whether these cardinalities make sense for your business needs. 
Regards.
Pau

Similar Messages

  • Should my dimension table be this big

    I'm in the process of building my first product dimension for a star schema and am not sure if I'm doing this correctly. A brief explanation of our setup:
    We have products (dresses) made by designers for specific collections, each of which has a range of colors, and each color can come in many sizes. For our UK market this equates to some 1.9 million
    product variants. Flattening the structure out to Product + Designer + Collection gives about 33,000 records, but adding all the colors and then all the color sizes pumps that figure up to 1.9 million. My "rolled my own" incremental ETL load runs
    OK just now, but we are expanding into the US market and our projections indicate that our product variants could multiply 10-fold. I'm a bit worried about the performance of the ETL now and in the future.
    Is 1.9m records not an awful lot to incrementally load (well, analyse) for a dimension table nightly as it is, never mind what that figure may grow to when we go to the US?
    I thought of somehow reducing this by using a snowflake, but would this not just reduce the number of columns in the dimensions and not the row count?
    I then thought of separating the colors and sizes into their own dimensions, but this doesn't seem right as they are attributes of products, and I would also lose the relationship between products, size & color, i.e. I would have to go through the
    fact table (which I've read is not a good choice) for any analysis.
    Am I correct in thinking these are big numbers for a dimension table? Is it even possible to reduce the number somehow?
    Still learning, so I welcome any help.
    Thanks

    Hi Plip71,
    In my opinion, it is always good to reduce the dimension volume as much as possible for better performance.
    Is there any hierarchy in your product dimension? Going for a snowflake for this problem is a bad idea.
    Solution 1:
    From the details given, it is good to split Colour and Size out as separate dimensions. This will reduce the volume of the dimension and increase the column count in the fact (separate WIDs have to be maintained in the fact table), but it will improve the
    performance of the cube. Before doing this, please check the layout requirement with the business.
    Solution 2:
    Check the distinct count of item variants used in the fact table. If it is very low, then try creating a linear product dimension, i.e. create a view on the product dimension doing an inner join with the fact table, so that only the used dimension members will
    be loaded into the cube's product dimension. The volume is thereby reduced, with an improvement in the performance and stability of the cube.
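    A minimal sketch of the Solution 2 view, assuming an Oracle-style source (table and column names are hypothetical):

        -- expose only dimension members that are actually referenced by the fact table
        CREATE VIEW v_dim_product_used AS
        SELECT d.*
          FROM dim_product d
         WHERE EXISTS (SELECT 1
                         FROM fact_sales f
                        WHERE f.product_wid = d.product_wid);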
    Thanks in advance. Sorry for the delayed reply ;)
    Anand
    Please vote as helpful or mark as answer if it helps. Regards, Anand

  • Incremental load into the Dimension table

    Hi,
    I have a problem doing the incremental load of the dimension table. Before loading into the dimension table, I would like to check the data already in the dimension table.
    In my dimension table I have one not-null surrogate key and the other, nullable, dimension columns. The not-null surrogate key I am populating with the Sequence Generator.
    To do the incremental load I have done the following:
    I made a lookup into the dimension table and looked for a key. The key from the lookup table I passed to the expression operator. In the expression operator I created one field and hard-coded a flag based on the key from the lookup table. I passed this flag to the filter operator along with the rest of the fields from the source.
    By doing this I am not able to pass the new records to the dimension table.
    Can you please help me?
    I have another question also:
    How do I update a not-null key in the fact table?
    Thanks
    Vinay
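    For reference, a minimal SQL sketch of what the incremental insert is meant to achieve (table and column names are hypothetical; in OWB this would be modelled with the lookup, expression and filter operators rather than hand-written SQL):

        -- insert only source rows whose natural key is not yet in the dimension
        INSERT INTO dimension_table (dimension_table_key, natural_key, attribute1)
        SELECT dim_seq.NEXTVAL, s.natural_key, s.attribute1
          FROM src_dimension_table s
         WHERE NOT EXISTS (SELECT 1
                             FROM dimension_table d
                            WHERE d.natural_key = s.natural_key);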

    Hi Mark,
    Thanks for your help in solving my problem. I thought I'd share more information by giving the SQL.
    Below are the 2 SQL statements I would like to achieve through OWB.
    Both the following tasks need to be accomplished after loading the fact table.
    task1:
    UPDATE fact_table c
    SET c.dimension_table_key =
        (SELECT nvl(dimension_table.dimension_table_key, 0)
           FROM src_dimension_table t,
                dimension_table dimension_table
          WHERE c.ssn = t.ssn(+)
            AND c.date_src_key = to_number(t.date_src(+), '99999999')
            AND c.time_src_key = to_number(substr(t.time_src(+), 1, 4), '99999999')
            AND c.wk_src = to_number(concat(t.wk_src_year(+), concat(t.wk_src_month(+), t.wk_src_day(+))), '99999999')
            AND nvl(t.field1, 'Y') = nvl(dimension_table.field1, 'Y')
            AND nvl(t.field2, 'Y') = nvl(dimension_table.field2, 'Y')
            AND nvl(t.field3, 'Y') = nvl(dimension_table.field3, 'Y')
            AND nvl(t.field4, 'Y') = nvl(dimension_table.field4, 'Y')
            AND nvl(t.field5, 'Y') = nvl(dimension_table.field5, 'Y')
            AND nvl(t.field6, 'Y') = nvl(dimension_table.field6, 'Y')
            AND nvl(t.field7, 'Y') = nvl(dimension_table.field7, 'Y')
            AND nvl(t.field8, 'Y') = nvl(dimension_table.field8, 'Y')
            AND nvl(t.field9, 'Y') = nvl(dimension_table.field9, 'Y'))
    WHERE c.dimension_table_key = 0;
    The fact table in the above SQL is fact_table.
    The dimension table in the above SQL is dimension_table.
    The source table for the dimension table is src_dimension_table.
    dimension_table_key is a not-null key in the fact table.
    task2:
    UPDATE fact_table cf
    SET cf.key_1 =
        (SELECT nvl(max(p.key_1), 0)
           FROM dimension_table p
          WHERE p.field1 = cf.field1
            AND p.source = 'YY')
    WHERE cf.key_1 = 0;
    The fact table in the above SQL is fact_table.
    The dimension table in the above SQL is dimension_table.
    key_1 is a not-null key in the fact table.
    Is it possible to achieve the above tasks through Oracle Warehouse Builder (OWB)? I created the mappings for loading the dimension table and the fact table and they are working fine, but the above two queries I am not able to achieve through OWB. I would be thankful if you could help me out.
    Thanks
    Vinay

  • Trying to use 2 different Dimension tables and make a hierarchy on some columns which are split into these dimensions .. how do I do that


    If you need to make a hierarchy in an Attribute View, you need to have all the relevant fields/columns in the same dimension table.

  • Identify high volume characteristics in dimension tables

    Hi,
    We have identified some cubes where the dimension table to fact table ratio is more than 20%.
    The next step is to find the characteristics in those dimension tables which we can either make into line-item dimensions or
    move to a new dimension. Is there any function module or table which can identify such characteristics for a particular cube?
    Regards,
    Shital

    Hi,
    LISTSCHEMA won't give you the size of the tables nor the option to analyze the dimensions in detail.
    Similarly, SAP_INFOCUBE_DESIGNS won't give you the detail of what's inside a dimension. It just gives you the size of the dimension.
    The only way (as far as I know) to do it is as I suggested in earlier posts - give your dimension table to your DBA. It will take them 2 minutes to find out.
    Please correct me if I am wrong.
    -RMP
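    If you do go the DBA route, a query along these lines shows which characteristics drive the dimension's cardinality (the /BIC/D* table and SID column names below are hypothetical placeholders); a SID column whose distinct count approaches the dimension's row count is a candidate for a line-item dimension or for moving into its own dimension:

        SELECT COUNT(*)                      AS dim_rows,
               COUNT(DISTINCT sid_0material) AS distinct_material,
               COUNT(DISTINCT sid_0customer) AS distinct_customer
          FROM "/BIC/DCUBENAME2";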

  • What is '#Distinct values' in Index on dimension table

    Gurus!
    I have loaded my BW Quality system (master data and transaction data) with almost the same volume as in Production.
    I am comparing the sizes of dimension and fact tables of one of the cubes in Quality and PROD.
    I am taking one of the dimension tables into consideration here.
    Quality:
    /BIC/DCUBENAME2 Volume of records: 4,286,259
    Index /BIC/ECUBENAME~050 on the E fact table /BIC/ECUBENAME for this dimension key KEY_CUBENAME2 shows #Distinct values as  4,286,259
    Prod:
    /BIC/DCUBENAME2 Volume of records: 5,817,463
    Index /BIC/ECUBENAME~050 on the E fact table /BIC/ECUBENAME for this dimension key KEY_CUBENAME2 shows #Distinct values as 937,844
    I would like to know why the distinct value is different from the dimension table count in PROD.
    I am getting this information from the SQL execution plan, if I click on the /BIC/ECUBENAME table in the code. This screen gives me all the details about the fact table volumes, indexes, etc.
    The indexes and statistics on the cube are up to date.
    Quality:
    E fact table:
    Table   /BIC/ECUBENAME                    
    Last statistics date                  03.11.2008
    Analyze Method               9,767,732 Rows
    Number of rows                         9,767,732
    Number of blocks allocated         136,596
    Number of empty blocks              0
    Average space                            0
    Chain count                                0
    Average row length                      95
    Partitioned                                  YES
    NONUNIQUE  Index   /BIC/ECUBENAME~P:
    Column Name                     #Distinct                                       
    KEY_CUBENAMEP                                  1
    KEY_CUBENAMET                                  7
    KEY_CUBENAMEU                                  1
    KEY_CUBENAME1                            148,647
    KEY_CUBENAME2                          4,286,259
    KEY_CUBENAME3                                  6
    KEY_CUBENAME4                                322
    KEY_CUBENAME5                          1,891,706
    KEY_CUBENAME6                            254,668
    KEY_CUBENAME7                                  5
    KEY_CUBENAME8                              9,430
    KEY_CUBENAME9                                122
    KEY_CUBENAMEA                                 10
    KEY_CUBENAMEB                                  6
    KEY_CUBENAMEC                              1,224
    KEY_CUBENAMED                                328
    Prod:
    Table   /BIC/ECUBENAME
    Last statistics date                  13.11.2008
    Analyze Method                      1,379,086 Rows
    Number of rows                       13,790,860
    Number of blocks allocated       187,880
    Number of empty blocks            0
    Average space                          0
    Chain count                              0
    Average row length                    92
    Partitioned                               YES
    NONUNIQUE Index /BIC/ECUBENAME~P:
    Column Name                     #Distinct                                                      
    KEY_CUBENAMEP                                  1
    KEY_CUBENAMET                                 10
    KEY_CUBENAMEU                                  1
    KEY_CUBENAME1                            123,319
    KEY_CUBENAME2                            937,844
    KEY_CUBENAME3                                  6
    KEY_CUBENAME4                                363
    KEY_CUBENAME5                            691,303
    KEY_CUBENAME6                            226,470
    KEY_CUBENAME7                                  5
    KEY_CUBENAME8                              8,835
    KEY_CUBENAME9                                124
    KEY_CUBENAMEA                                 14
    KEY_CUBENAMEB                                  6
    KEY_CUBENAMEC                                295
    KEY_CUBENAMED                                381

    Arun,
    The cubes in QA and PROD are compressed. Index building and statistics are also up to date.
    But I am not sure what other jobs are run by BASIS as far as this cube in production is concerned.
    Is there any other T-code/function module etc. which can give information about the #distinct values of this index or dimension table?
    One basic question: as the DIM key is the primary key in the dimension table, there can't be duplicates.
    So how could the index on the E fact table for this dimension key show #distinct values lower than the number of entries in that dimension table?
    Should the entries in the dimension table not exactly match the #Distinct entries shown in
    index /BIC/ECUBENAME~P for this DIM KEY?
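    One way to cross-check the sampled optimizer statistics is to count directly on the database (the table and column names below follow the pattern from the post and are placeholders). The index's #Distinct is estimated from a sample and only reflects DIM IDs actually referenced by fact rows, so it can legitimately be lower than the dimension table's row count:

        -- DIM IDs actually referenced by the E fact table
        SELECT COUNT(DISTINCT key_cubename2) AS referenced_dim_ids
          FROM "/BIC/ECUBENAME";

        -- rows in the dimension table itself
        SELECT COUNT(*) AS dim_rows
          FROM "/BIC/DCUBENAME2";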

  • Oracle Data Compression on SID tables and Dimension Tables

    Hello Community,
    We have had great success with Oracle compression on ODS tables that are no longer loaded.
    We'd now like to move on to other types of BW tables that are very large.
    OSS Note 701235 provides answers to questions concerning the possible use of Oracle compression together with SAP BW.
    But the Note does not give suggestions for (or against) Oracle compression on SID tables or Dimension tables.
    I believe both table types would exhibit the same behaviour: mostly inserts of new SID IDs and new DIM IDs, but few updates to existing SID or dimension records. If this is true, then both are good candidates for Oracle compression. 
    Do you also agree that this is the typical behaviour for SID tables and dimension tables, and that these types of tables are good candidates for Oracle compression in a large BW system?
    Thanks kindly!
    Keith Helfrich

    Hi all,
    Although this is an old thread I stumbled on during my own investigations I can provide some answers to your questions.
    Table candidates for compression are found by these criteria:
           - Table size big enough?
           - Long lifetime of the object planned?
           - No or only rare structural changes to the table?
           - "Update" rate low: is your data mostly "read only"?
             For the widely used rolling-window partitioning technique for tables in BW
             this is not a problem: mostly INSERTs into the current partition, not
             affecting other partitions.
    BW tables that can benefit from compression (see SAP Notes 105047, 701235):
           - PSA tables with data that must be saved for a longer time
           - ODS change log (no updates of old data, only inserts of new data)
           - "historical" cubes which get no changes in table structure anymore
    Limitations:
           - Normal INSERT or UPDATE statements are ALWAYS stored in uncompressed
             format and must be compressed separately (<= Oracle 10g)
           - Slight CPU overhead of compression, but CPU consumption is more than
             compensated by doing less I/O, as for bulk loads or parallel processing.
             SAP BW transformations took a significant amount of CPU of the overall
             load time into cubes, caused by the application server, not the database.
           - The table must not have more than 255 fields
           - Adding columns with an initial value or dropping columns requires
             uncompression of the complete table (the strongest limitation)
    Considering all of the above, you can decide that tables that go through UPDATEs are
    not good candidates for compression, nor are tables that can change their structure (like
    fact or DIM tables).
    Now, my questions to you:
    Which Oracle version do you use?
    Which tool do you use for Oracle compression?
    BRSPACE (can you give an example?), or
    ALTER ... MOVE COMPRESS
    bye
    yk
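    For reference, a minimal sketch of the ALTER ... MOVE COMPRESS approach mentioned above (table and index names are hypothetical; on releases up to Oracle 10g the move rewrites the table and leaves existing indexes unusable, so they must be rebuilt):

        ALTER TABLE "/BIC/DCUBENAME2" MOVE COMPRESS;
        -- a MOVE marks existing indexes UNUSABLE, so rebuild them afterwards
        ALTER INDEX "/BIC/DCUBENAME2~0" REBUILD;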

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and I e.g. have a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table, so there are 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are updated when the source is updated.
    -Domnic
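    A minimal sketch of the materialized view approach suggested above (table and column names are hypothetical; the refresh strategy has to match how the transaction table is loaded):

        CREATE MATERIALIZED VIEW period_dim_mv
          BUILD IMMEDIATE
          REFRESH COMPLETE ON DEMAND
        AS
        SELECT DISTINCT period_key, period_name, period_start_date
          FROM transaction_table;

    The dimension side of the BMM can then be mapped to period_dim_mv while the fact side keeps using transaction_table, which avoids the double lookup on the same physical table.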

  • Copy selected values from a table control into another table control

    Hi there,
    As seen in the subject, I need to copy selected values from a table control into another table control on the same screen. As I don't know much about table controls, I made 2 table controls with the wizard and started to change the code... right now I'm totally messed up. Nothing works anymore and I don't know where to start over.
    I looked through the forums and Google, but there is nothing to help me with this problem (or I am bad at searching the internet for solutions).
    I have 2 buttons: one to push the selected data from the top table control into the bottom TC, and the other button to push selected data from the bottom TC into the top TC. Does somebody have sample code to do this?

    You're funny.
    I still don't get it... I can't believe there is no tutorial or sample code around for how to copy multiple selected rows from a TC.
    Here's my code; maybe you can tell me exactly where I have to change it:
    tc1 = upper table control
    tc2 = lower table control
    SCREEN 0100:
    PROCESS BEFORE OUTPUT.
      MODULE status_0100.
      MODULE get_nfo.                " gets data from the dictionary table
      MODULE tc1_change_tc_attr.
      LOOP AT   it_roles_tc1
           INTO wa_roles_tc1
           WITH CONTROL tc1
           CURSOR tc1-current_line.
      ENDLOOP.
      MODULE tc2_change_tc_attr.
      LOOP AT   it_roles_tc2
           INTO wa_roles_tc2
           WITH CONTROL tc2
           CURSOR tc2-current_line.
      ENDLOOP.
    PROCESS AFTER INPUT.
      LOOP AT it_roles_tc1.
        CHAIN.
          FIELD wa_roles_tc1-agr_name.
          FIELD wa_roles_tc1-text.
        ENDCHAIN.
        FIELD wa_roles_tc1-mark
          MODULE tc1_mark ON REQUEST.
      ENDLOOP.
      LOOP AT it_roles_tc2.
        CHAIN.
          FIELD wa_roles_tc2-agr_name.
          FIELD wa_roles_tc2-text.
        ENDCHAIN.
        FIELD wa_roles_tc2-mark
          MODULE tc2_mark ON REQUEST.
      ENDLOOP.
      MODULE ok_code.
      MODULE user_command_0100.
    INCLUDE PAI:
    MODULE tc1_mark INPUT.
      IF tc1-line_sel_mode = 2
      AND wa_roles_tc1-mark = 'X'.
        LOOP AT it_roles_tc1 INTO g_tc1_wa2
          WHERE mark = 'X'.          " --> big problem here: no entry has an 'X' there
          g_tc1_wa2-mark = ''.
          MODIFY it_roles_tc1
            FROM g_tc1_wa2
            TRANSPORTING mark.
        ENDLOOP.
      ENDIF.
      MODIFY it_roles_tc1
        FROM wa_roles_tc1
        INDEX tc1-current_line
        TRANSPORTING mark.
    ENDMODULE.                    "TC1_MARK INPUT
    MODULE tc2_mark INPUT.
      IF tc2-line_sel_mode = 2
      AND wa_roles_tc2-mark = 'X'.
        LOOP AT it_roles_tc2 INTO g_tc2_wa2
          WHERE mark = 'X'.          " --> same here, it doesn't get any data
          g_tc2_wa2-mark = ''.
          MODIFY it_roles_tc2
            FROM g_tc2_wa2
            TRANSPORTING mark.
        ENDLOOP.
      ENDIF.
      MODIFY it_roles_tc2
        FROM wa_roles_tc2
        INDEX tc2-current_line
        TRANSPORTING mark.
    ENDMODULE. 
    Thanks to anybody who can help with this!

  • Multiple columns from the same dimension table as row labels performing slowly

    (Working with SSAS tabular)
    I'm trying to figure out what the approach should be for the following scenario:
    Let's say we have a Customer table. The table has columns such as account number, department number, name, salesperson, account manager, number of customers, delivery route, etc.
    A user of the model could want to see any permutation of that information as the row labels. How should that be handled?
    What we've been doing so far is that the user adds each column they want into the "ROWS" section in Excel. This works fine with smaller tables (for example, a "Department" table with a "Department Code" and "Department Name"),
    but on large tables this quickly chokes. I understand why this is happening, I just haven't found a better way to accomplish the same thing.
    I can add a calculated column to the model through VS, but obviously this is unsupportable and unscalable when each person needs their own permutations of the data. Can something similar be done in Excel? 
    This question seems to be what I need:
    http://social.msdn.microsoft.com/Forums/en-US/97d1157a-1402-4227-b96a-79524401ddcd/mdx-query-performance-when-selecting-multiple-attributes-from-same-dimension?forum=sqlanalysisservices
    However, I can't find any information on how to add those properties (is it a multidimensional-only thing?)

    Thanks for the help. Sorry, but I'm a self-taught developer and I may be missing some basics :)
    Anyway, I've done what you suggested but I get this error:
    [nQSError: 15011]The dimension table source Dimension Services.DM_D_SERVIZI_SRV has an aggregate content specification that specifies the level Product. But the source mapping contains column COD_PRODUCT with a functional dependency association on a more detailed level .
    where:
    - DM_D_SERVIZI_SRV is the physical alias for the Service dimension (and the name of the LTS too)
    - COD_PRODUCT is the leaf of the hierarchy, the physical primary key, but it doesn't have to be included in the hierarchy
    Do I have to add another level with the primary key and hide it from the users?
    I tried to solve this by going to the logical table source properties, on the Content tab, and setting the "logical level" to null for the hierarchy, but I don't know if this is correct.
    Thanks

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach in ODI to implement the Surrogate Key Mapping Table in the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with natural and surrogate keys for each dimension table. These mapping tables will be populated automatically (inserts only) during the E-LT execution, right after inserting into the dimension table.
    Does someone have any idea how to implement this in ODI?
    Thanks,
    Danilo
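    For illustration only, a minimal sketch of such a key-mapping table in SQL, following Kimball's description of one natural-key column per source (all object names are hypothetical):

        CREATE TABLE map_customer_key (
            customer_wid     NUMBER       NOT NULL,  -- surrogate key used by the dimension
            natural_key_erp  VARCHAR2(30),           -- natural key from source system 1
            natural_key_crm  VARCHAR2(30)            -- natural key from source system 2
        );

        -- inserts only, executed right after the dimension load
        INSERT INTO map_customer_key (customer_wid, natural_key_erp, natural_key_crm)
        SELECT d.customer_wid, d.src_erp_id, d.src_crm_id
          FROM dim_customer d
         WHERE NOT EXISTS (SELECT 1
                             FROM map_customer_key m
                            WHERE m.customer_wid = d.customer_wid);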

    Hi,
    First of all, please avoid bolding. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key.
    Personally, you could use a lookup table:
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
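    A small sketch of the outer-join variant described above, with the Active_Flag filter placed in the join condition rather than the WHERE clause so unmatched rows are kept (all names are hypothetical):

        SELECT s.natural_key,
               NVL(m.customer_wid, 0) AS customer_wid
          FROM stg_customer s
          LEFT OUTER JOIN map_customer_key m
            ON m.natural_key_erp = s.natural_key
           AND m.active_flag     = 'Y';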

  • Multiple Fact Tables and Dimension Tables

    I have been having some problems trying to model the data from Oracle E-Business Suite maintenance. I will try to give the best description of how the data is held in the tables. The structure is such that a work order can have multiple operations, and an operation can have multiple resources as well. I believe the problem comes from the fact that an operation doesn't necessarily need to have a resource. I could not attach an image, so I have written out an example below. I am not saying this is right or that it works, but just to give you an idea of what I am thinking. The full dimension would be Organization -> WorkOrder -> Operation -> Resource. Now, the fact tables all hold factual data for the three different levels, with the facts being at each corresponding level. This causes an obvious problem in combining the tables into one large fact table through the ETL process.
    Can anyone tell me if they think this can be done? Am I way off? I am sure that there is a solution, as there always is, but I have been killing myself trying to figure this one out. I currently have the entire solution in different Business Models. I would like, however, to be able to compare facts from multiple areas, such as the Work Order level and the Resource level.
    Any help is greatly appreciated. I realize that the solution may also require additional work on the ETL side, so I am open to any and all suggestions.
    Thank you in advance for anyone's time. :)
    Dimension Tables
    WorkOrder_D
    Operation_D
    Resource_D
    Organization_D
    Fact Tables
    WorkOrder_F
    Operation_F
    Resource_F
    Joins
    WorkOrder_D -> Operation_D
    Operation_D -> Resource_D
    WorkOrder_D -> WorkOrder_F
    Operation_D -> Operation_F
    Resource_D -> Resource_F
    Organization_D -> WorkOrder_D
    Organization_D -> Operation_D
    Organization_D -> Resource_D

    Hi,
    Currently the dimension table is modelled as a simple logical table in the RPD, as it does not have any levels or hierarchy.
    It's a flat dimension. Can you guide me on how I can implement a flat dimension in OBIEE? Because this dimension is taken as a simple logical table,
    I am not able to set the appropriate content level for the fact tables. This dimension does not appear in the list of dimensions.

  • [39008] Logical dimension table has a source that does not join to any fact

    Dear reader,
    After deleting a fact table from my physical layer and deleting it from my business model, I'm getting an error: [39008] Logical dimension table TABLE X has a source TABLE X that does not join to any fact source. I do have another fact table in the same physical model and in the same business model which is joined to TABLE X both in the physical and the business model.
    I cannot figure out why I'm getting this error; even after deleting all joins and rebuilding them I'm still getting it. When I look into the Joins Manager these joins, both in the physical as well as the logical model, do exist, but the consistency check still warns me about [39008]. When I ignore the warning, go to Answers and try to show TABLE X (not the fact, but the dim), it gives me the following error:
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: TABLE X.column X Please fix the metadata consistency warnings. (HY000)
    SQL Issued: SELECT TABLE X.column X saw_0 FROM subject area ORDER BY saw_0
    There is one *"special"* thing about this join. It is a complex join in the physical layer, because I need to do a between on dates and a smaller or equal than current date like this example dim.date between fact.date_begin and fact.date_end and dim.date <= current_date. In the business model I've got another complex join
    Any help is kindly appreciated!

    Hi,
    Have you specified the content level of the fact table and mapped it to the dimension in question? Ideally this should be done by default, since one of the main features of Oracle BI is its ability to determine which source to use, and specifying the content level is one of the main ways to achieve this.
    Another quick thing that you might try is creating a dimension (hierarchy) in case one is not already present. I had a similar issue a few days back and the warning was miraculously gone after doing this.
    Regards

  • Dimension Table populating data

    Hi
    I am in the process of creating a data mart with a star schema.
    The star schema has been defined with the fact and dimension tables and the primary and foreign keys.
    I have written the script for one of the dimensions and would like to know: when the job runs on a daily basis, should it truncate and rebuild the dimension table every day, or should it only add new records to the table? If it should only add
    new records to the table, how is this done?
    I assume that the fact table job is run once a day and only new data is added to it?
    Thanks

    It will depend on the volume of your dimensions. In most of our projects we do not truncate; we update only changed rows based on a fingerprint (to make the comparison faster than column by column), and insert new rows (SCD1). For SCD2 we apply a
    similar approach for updates and inserts, and handle expirations in batch (one UPDATE for all applicable rows at the end of the package/ETL). 
    If your dimension is very large, you can consider truncating all data, or deleting only the affected modified rows (based on the business key) and later reloading those, but you have to be careful to maintain the same surrogate keys referenced by your
    existing facts.
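    As an illustration of the fingerprint-based SCD1 pattern described above, an Oracle-style MERGE (table, column and sequence names are hypothetical; row_hash stands for the pre-computed fingerprint):

        MERGE INTO dim_customer d
        USING stg_customer s
           ON (d.business_key = s.business_key)
        WHEN MATCHED THEN
            UPDATE SET d.name     = s.name,
                       d.city     = s.city,
                       d.row_hash = s.row_hash
            WHERE d.row_hash <> s.row_hash          -- touch only rows whose fingerprint changed
        WHEN NOT MATCHED THEN
            INSERT (customer_wid, business_key, name, city, row_hash)
            VALUES (dim_customer_seq.NEXTVAL, s.business_key, s.name, s.city, s.row_hash);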
    HTH,
    Please, mark this post as Answer if this helps you to solve your question/problem.
    Alan Koo | "Microsoft Business Intelligence and more..."
    http://www.alankoo.com

  • Fact Table and Dimension Tables

    Hi Experts, I'm creating custom InfoCubes for data coming from non-SAP source systems. I have two InfoCubes. The data is coming from about 10 tables. I have 10 DataSources created for this, and the data will be consolidated in a standard DSO before it flows into the 2 InfoCubes.
    Now the client wants to know beforehand how much data will be in the fact table and dimension tables of both InfoCubes. I have the total size of all the 10 source tables, given to me by the DBA. I am not sure how to convert that information into fact table and dimension table sizes, as I have not yet created these InfoCubes.
    Please help me with how I should address this.

    hi,
    The exact figure will be hard to give; however, you can arrive at a rough figure in your case.
    You are consolidating the data from the tables, which means there is a relation between the tables. Arrive at a rough figure based on that relation and the activity you are performing while consolidating the data of the tables.
    For example, let us say we want to combine data for sales orders and deliveries in a DSO.
    Let Sales Order have 1,000 records and Delivery have 2,000 records. Both tables have a common link (Sales Order). In the DSO you are combining the data, which means the data will be at the most granular level, consisting of delivery data, so the maximum number of records which the consolidated DSO can have is 2,000.
    regards,
    Arvind.
