Dimension table indexes

All dimension table non-key columns
should have individual bitmap indexes.
All fact table foreign key columns must
have individual bitmap indexes on them.
Will doing the above improve join performance?
Rakesh

Well, do remember that bitmap indexes are best on columns without too many unique values; in other words, where the cardinality, or number of unique elements, is not extremely high. Everyone has differing opinions on what that number ought to be.
In general, bitmap indexes will improve performance, provided the CBO is actually using them.
Look at this post; it is a pretty good exploration of this sort of thing. It tends to be a design process, where you test with your given dataset and optimize from there.
http://www.rittmanmead.com/2007/07/27/playing-around-with-star-transformations-and-bitmap-indexes/
-Greg
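
For illustration, a minimal sketch of that setup (table and column names are hypothetical, and star transformation must be enabled for the CBO to combine the bitmaps this way):

    -- Bitmap indexes on each fact table foreign key column
    CREATE BITMAP INDEX fact_sales_cust_bix ON fact_sales (customer_id);
    CREATE BITMAP INDEX fact_sales_time_bix ON fact_sales (time_id);
    -- Let the optimizer consider a star transformation using these bitmaps
    ALTER SESSION SET star_transformation_enabled = TRUE;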

Similar Messages

  • Dimension table, indexing business key

    Hi.
    In my dimension table the business key has a unique key constraint, and the business key is also a (unique) index key.
    My question is: why is it important for a dimension to have an index?
    In my mapping I first load data into the dimension, and to merge data I use the business key. OK, the business key is unique, so I'm merging by the constraint (UK).
    I have a fact table mapping where I use the previous dimension for a lookup, and I do the lookup by the business key.
    I understand why we have indexes on fact tables, but on a dimension?
    When and why does the business key index help me?

    Hi,
    There are many reasons why we use indexes on dimensions.
    1) The source key (natural key) may or may not be unique in the case of SCD2, hence it is faster to look up by the index key rather than the source key.
    2) It is not a good idea to have all the source keys from your dimensions in your fact table. We have to have the index keys anyway to speed up querying.
    3) The index key is always unique, so you will always land on only one record on any lookup.
    Regards
    Bharath
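
    To illustrate point 3 with a hedged sketch (all names here are hypothetical): under SCD2 the natural key repeats across row versions, so a natural-key lookup needs an extra filter, while a surrogate-key lookup lands on exactly one row.

        -- Natural-key lookup: the key is duplicated across SCD2 versions,
        -- so we must also filter on the current-row flag
        SELECT customer_sk FROM dim_customer
        WHERE customer_nk = :natural_key AND current_flag = 'Y';
        -- Surrogate-key join from the fact: always exactly one matching row
        SELECT f.amount, d.customer_name
        FROM fact_sales f JOIN dim_customer d ON d.customer_sk = f.customer_sk;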

  • What is '#Distinct values' in Index on dimension table

    Gurus!
    I have loaded my BW Quality system (master data and transaction data) with almost the same volume as in Production.
    I am comparing the sizes of dimension and fact tables of one of the cubes in Quality and PROD.
    I am taking one of the dimension tables into consideration here.
    Quality:
    /BIC/DCUBENAME2 Volume of records: 4,286,259
    Index /BIC/ECUBENAME~050 on the E fact table /BIC/ECUBENAME for this dimension key KEY_CUBENAME2 shows #Distinct values as  4,286,259
    Prod:
    /BIC/DCUBENAME2 Volume of records: 5,817,463
    Index /BIC/ECUBENAME~050 on the E fact table /BIC/ECUBENAME for this dimension key KEY_CUBENAME2 shows #Distinct values as 937,844
    I would like to know why the distinct value is different from the dimension table count in PROD.
    I am getting this information from the SQL execution plan, by clicking on the /BIC/ECUBENAME table in the code. This screen gives me all the details about the fact table volumes, indexes etc.
    The index and statistics on the cube are up to date.
    Quality:
    E fact table:
    Table   /BIC/ECUBENAME                    
    Last statistics date                  03.11.2008
    Analyze Method               9,767,732 Rows
    Number of rows                         9,767,732
    Number of blocks allocated         136,596
    Number of empty blocks              0
    Average space                            0
    Chain count                                0
    Average row length                      95
    Partitioned                                  YES
    NONUNIQUE  Index   /BIC/ECUBENAME~P:
    Column Name                     #Distinct                                       
    KEY_CUBENAMEP                                  1
    KEY_CUBENAMET                                  7
    KEY_CUBENAMEU                                  1
    KEY_CUBENAME1                            148,647
    KEY_CUBENAME2                          4,286,259
    KEY_CUBENAME3                                  6
    KEY_CUBENAME4                                322
    KEY_CUBENAME5                          1,891,706
    KEY_CUBENAME6                            254,668
    KEY_CUBENAME7                                  5
    KEY_CUBENAME8                              9,430
    KEY_CUBENAME9                                122
    KEY_CUBENAMEA                                 10
    KEY_CUBENAMEB                                  6
    KEY_CUBENAMEC                              1,224
    KEY_CUBENAMED                                328
    Prod:
    Table   /BIC/ECUBENAME
    Last statistics date                  13.11.2008
    Analyze Method                      1,379,086 Rows
    Number of rows                       13,790,860
    Number of blocks allocated       187,880
    Number of empty blocks            0
    Average space                          0
    Chain count                              0
    Average row length                    92
    Partitioned                               YES
    NONUNIQUE Index /BIC/ECUBENAME~P:
    Column Name                     #Distinct                                                      
    KEY_CUBENAMEP                                  1
    KEY_CUBENAMET                                 10
    KEY_CUBENAMEU                                  1
    KEY_CUBENAME1                            123,319
    KEY_CUBENAME2                            937,844
    KEY_CUBENAME3                                  6
    KEY_CUBENAME4                                363
    KEY_CUBENAME5                            691,303
    KEY_CUBENAME6                            226,470
    KEY_CUBENAME7                                  5
    KEY_CUBENAME8                              8,835
    KEY_CUBENAME9                                124
    KEY_CUBENAMEA                                 14
    KEY_CUBENAMEB                                  6
    KEY_CUBENAMEC                                295
    KEY_CUBENAMED                                381

    Arun,
    The cubes in QA and PROD are compressed. Index building and statistics are also up to date.
    But I am not sure what other jobs BASIS runs as far as this cube in production is concerned.
    Is there any other Tcode / function module etc. which can give information about the #Distinct values of this index or dimension table?
    One basic question: as the DIM key is the primary key of the dimension table, there can't be duplicates.
    So how could the index on the E fact table show fewer #Distinct values for this dimension key than there are entries in the dimension table?
    Should the entries in the dimension table not exactly match the #Distinct entries shown in index /BIC/ECUBENAME~P for this DIM key?

  • Non Unique Indexes on Dimension Table

    Hello ,
      I have a Material dimension that has Product MainGroup, Brand, and SPC code broken out from the Product Hierarchy.
      As per the business requirement, we don't want to use the Product Hierarchy; it needs to be split into these 3 pieces.
      Since the cube has already reached 13 dimensions, we cannot add dimensions to keep these 3 fields in separate dimensions.
      I heard from a Basis guy that we can have an index on the three fields in the same dimension table, but I have also read about some negative impact on aggregate definition.
      I am not sure which is true, and which impact is worse or which approach is more useful.
      Could some experts please throw some light on this?
    Cheers
    Martin

    Martin -
    Not sure what you mean by "read some negative impact on aggregate definition".
    But as far as adding additional indices on other columns of a dimension table, that certainly is doable. I have never done this as part of an actual intentional design, but it seems like a valid approach if you are limited by dimensions. I'm assuming when you say you already have 13, that the 13 does not include the three standard dimensions for time, request, (drawing a blank, is it currency), so that you really have 16 dimensions, 13 of which are user defined.
    We have added dimension table indices in our shop when we have found large dimension tables that, either from poor initial design or changes to the data and/or queries, have resulted in full table scans against large dimension tables. In some cases, the query cost of the full scan of the dimension table was more than the cost of the access of the fact table itself.
    These indices must be added by your DBA, as they cannot be added through the Administrator Workbench. You should also keep a record of any of these indices you create, because if for some reason you delete the tables and reactivate the cube, you'll probably lose them.
    Pizzaman
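
    As a hedged illustration only (the table and column names below are hypothetical; real BW dimension tables are named /BIC/D<cube><n> with SID_* columns), such a database-level index might look like:

        -- Secondary index added directly in the database, outside the
        -- Administrator Workbench; it may be lost if the cube is reactivated
        CREATE INDEX "/BIC/DCUBENAME1~Z01"
            ON "/BIC/DCUBENAME1" (SID_BRAND, SID_SPC_CODE);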

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with natural and surrogate keys for each dimension table. These mapping tables will be populated automatically (inserts only) during the E-LT execution, right after inserting into the dimension table.
    Someone have any idea on how to implement this on ODI?
    Thanks,
    Danilo

    Hi,
    first of all, please avoid bolding text. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key is needed.
    Beyond that, personally you could use a lookup table
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
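
    To make the outer-join point concrete, a hedged sketch (hypothetical table and column names): the Active_Flag filter belongs in the join condition, not the WHERE clause, otherwise the outer join silently becomes an inner join.

        SELECT s.natural_key, m.surrogate_key
        FROM stg_source s
        LEFT OUTER JOIN map_customer m
            ON m.natural_key = s.natural_key
            AND m.active_flag = 'Y';  -- filter inside the join keeps unmatched rows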

  • Is it convenient to create a dimension table in this situation...?

    I have Oracle 9.2.0.5
    I have a table “FACT_OPERATIONS” with 10,000,000 records and these fields:
    TABLE “FACT_OPERATIONS”
    Category VARCHAR2(40) (PK)
    Type VARCHAR2(35) (PK)
    Source VARCHAR2(50) (PK)
    Description VARCHAR2(100) (PK)
    CUSTOMER_ID NUMBER(6) (PK)
    MODEL_ID NUMBER(6) (PK)
    STATE_ID NUMBER(6) (PK)
    TIME_ID NUMBER(8) (PK)
    Quantity (PK)
    I load it from multiple fact tables with a MERGE statement.
    I have the dimension tables Dim_Customers, Dim_Models, Dim_States and Dim_Time.
    The field Source holds the name of the fact table the information is loaded from.
    The field Type is the parent of the field Source,
    and the field Category is the parent of the field Type.
    For example:
    Category: EVENT
    Type: SALES
    Source: SALES_VI_ZU
    Category: EVENT
    Type: SALES
    Source: SALES_VI_AU
    Category: EVENT
    Type: MOVS
    Source: MOV_SALES
    The question is:
    To improve query performance, is it convenient to create a Source dimension (containing the fields Category, Type, Source and so on), then put a SOURCE_ID field in FACT_OPERATIONS and delete the fields Category, Type and Source?
    Today I run queries like these, for example:
    select Category, Type, Source, count(*)
    from FACT_OPERATIONS
    group by Category, Type, Source
    or
    select *
    from FACT_OPERATIONS
    where CATEGORY = 'EVENT'
    or
    select *
    from FACT_OPERATIONS
    where TYPE = 'SALES' and SOURCE = 'SALES_VI_AU'
    Will these changes improve performance, or will it stay the same?
    Thanks!

    Your queries would probably run more slowly if you moved Category and Type to a dimension table, but having said that, it is still the correct thing to do. If you need faster performance for those three queries, then it would be appropriate to use a bitmap join index, or to create a materialized view to aggregate the data; in fact, query 1 could be satisfied very quickly using a materialized view.
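
    A hedged sketch of both suggestions, reusing the names from the post (DIM_SOURCE is the proposed Source dimension; a bitmap join index requires a unique or primary key constraint on its join column, and query rewrite must be enabled for the MV to be used transparently):

        -- Bitmap join index: filter the fact by a dimension attribute
        -- without a run-time join
        CREATE BITMAP INDEX fact_ops_cat_bjix
            ON FACT_OPERATIONS (DIM_SOURCE.CATEGORY)
            FROM FACT_OPERATIONS, DIM_SOURCE
            WHERE FACT_OPERATIONS.SOURCE_ID = DIM_SOURCE.SOURCE_ID;
        -- Materialized view that pre-aggregates query 1
        CREATE MATERIALIZED VIEW mv_ops_by_source
            ENABLE QUERY REWRITE AS
            SELECT Category, Type, Source, COUNT(*) cnt
            FROM FACT_OPERATIONS
            GROUP BY Category, Type, Source;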

  • To find the size of the fact table and dimension table

    Hi experts,
    Can anyone please tell me: if I want to find the size of the fact table and the size of the dimension table (to determine cardinality and decide on line-item dimensions), do we first build statistics and then find the size via transaction DB02, or is there any other method?
    Thanks in advance

    Hi,
    Please go to Tcode DB02 > Space > Table and Indexes. Give your table name or a pattern (like /BIC/F* for getting all the fact tables).
    This will give you the sizes of all the tables.
    Also, if you want a list like the top 30 fact tables and dimension tables, please use Tcode ST14; this will give the desired output with all the required details.
    -Vikram

  • Where can we see Fact Table And Dimension Table in DataWarehouse Workbench?

    Hi Experts,
    Where can we see the fact table and dimension table in the DataWarehouse Workbench?
    Best Regards
    nvnkmr12

    Hi
    Refer to the link below and you will get a comprehensive explanation of how data is stored and modeled in BI. It explains the fact table, dimension table, SID table, P and Q, and X and Y tables.
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    Cheers
    Umesh

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan to partition the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (split the previous partition into 2). Best practice is to partition on transaction timestamps: load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure your tables (Table and Table_IN) are on the same filegroup.
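    A hedged sketch of that daily cycle (hypothetical function and scheme names; partition boundaries depend on your design, and Table_IN needs a CHECK constraint matching the target range):

        -- Make a fresh empty partition on the leading edge for the new day
        ALTER PARTITION SCHEME ps_TxnDate NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION pf_TxnDate() SPLIT RANGE ('20120602');
        -- Switch the staged rows into the partition they belong to (metadata only)
        ALTER TABLE Table_IN SWITCH TO [Table] PARTITION $PARTITION.pf_TxnDate('20120601');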
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, meanwhile ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or by using the LOCK_ESCALATION option of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
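    As a hedged sketch of those four steps in Transact-SQL (hypothetical names and quarterly boundary values; step 1, adding filegroups and files with ALTER DATABASE, is omitted here):

        -- 2. Partition function: map rows by a date column into quarterly ranges
        CREATE PARTITION FUNCTION pf_Quarter (datetime)
            AS RANGE RIGHT FOR VALUES ('20120101', '20120401', '20120701', '20121001');
        -- 3. Partition scheme: map those partitions to filegroups (here, all to PRIMARY)
        CREATE PARTITION SCHEME ps_Quarter
            AS PARTITION pf_Quarter ALL TO ([PRIMARY]);
        -- 4. Create the table on the scheme, naming the partitioning column
        CREATE TABLE TxnHistory (TxnID int, TxnDate datetime) ON ps_Quarter (TxnDate);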
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance, and can be invoked by right-clicking any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function where the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA will conduct a mapping of the rows of the tables being partitioned to a desired filegroup: either an existing partition scheme can be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroups to be used for each partition and then enter a range for the values of the partitions. The ranges and settings on the grid include the following:
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available. Here’s how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query’s WHERE clause filters on customer name looking for ‘System%’, the query engine knows that it needs only partition three to answer the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered. You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH Transact-SQL statement. Both options enable you to ensure partitions are well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes, index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties. Clustered and nonclustered indexes must be identical. ROWGUID properties and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats. Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1. We then create a new, nonpartitioned table identical to the partitioned table, residing on the same filegroup. We finish up by switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
        ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
        ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3 holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural for most people to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter’s data into the spot of partition1.
    Tip
    Use the $PARTITION system function to determine where a partition function places values within a range of partitions.
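    A hedged sketch of steps 2 through 4 (hypothetical function, scheme, and table names; the partition numbers and boundary values depend on your own partition function):

        -- 2. SPLIT the empty trailing partition so a new empty one exists
        ALTER PARTITION SCHEME ps_Quarter NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION pf_Quarter() SPLIT RANGE ('20130101');
        -- 3. SWITCH the oldest quarter out to a staging table for archive or purge
        ALTER TABLE TxnHistory SWITCH PARTITION 2 TO TxnHistory_Unload;
        -- 4. MERGE the now-empty boundary away, restoring the partition count
        ALTER PARTITION FUNCTION pf_Quarter() MERGE RANGE ('20120101');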
    Some best practices to consider for using a sliding window partition include the following:
    • Load the newest data into a heap, and then add indexes after the load is finished. Delete the oldest data or, when working with very large data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits (when loading in new data) and merges (after unloading old data) do not cause data movement.
    • Do not split or merge a partition already populated with data, because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is one to two months old before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Integration Services - Dimension Table

    I'm trying to build a dimension table for a star schema to be used with Essbase Integration Services. I need to know how to structure the table when the data in the fact table is at a more granular level than that of the Essbase leaf member. The typical structure of my dimension tables is: [Leaf Node Name], [Leaf Node Alias], [Gen01], [Gen02], etc., where the leaf node name is also the level of detail in my fact table.


  • In Answers I am seeing "Folder is Empty" for Logical Fact and Dimension Table

    Hi All,
    I am working on OBIEE Answers. All of a sudden, when I clicked on a logical fact table, it showed "folder is empty". I restarted all the services and tried again; it still shows the same for logical fact and dimension tables, but I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me out to resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette:
    http://forums.oracle.com/forums/ann.jspa?annID=939
    Mark as answer, or rate, the posts that other users gave.
    And for your question, you must check the log for a possible corrupt catalog:
    OracleBIData_Home\web\log\sawlog0.log

  • Help on Setting logical Levels in Fact tables and on Dimension tables

    Hi all
    Can anybody provide any blogs or any kind of material on what exactly levelling is?
    Like, after creating the dimensional hierarchies, we need to set the logical levels for the LTS of the fact tables, right? So what is the difference between setting logical levels on fact tables and setting levelling on dimension tables?
    Any kind of help is appreciated
    Thanks
    Xavier.

    I have read these blogs, but here is my question:
    I understood setting the logical levels in the LTS of fact tables.
    But we can also set logical levels for dimensions, right? I didn't understand why we set logical levels for dimensions. Is there any reason why we go with levelling at the dimensions?
    Thanks
    Xavier

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a fact and a dimension source (2 different units), and it works fine. But I can see that there will be trouble when having more fact tables and I, e.g., have a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution is to have an alias of the fact/transaction table, i.e. 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are updated when the source is updated.
    -Domnic
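
    For instance, a hedged sketch in Oracle SQL (hypothetical table and column names): a materialized view that exposes the transaction table's dimension attributes and is refreshed whenever the source is loaded.

        -- MV log so the view can fast-refresh from changes to the source table
        CREATE MATERIALIZED VIEW LOG ON transactions WITH PRIMARY KEY;
        -- Dimension-style view over the transaction table
        CREATE MATERIALIZED VIEW dim_transaction_mv
            REFRESH FAST ON DEMAND
            AS SELECT transaction_id, period_id, status FROM transactions;
        -- Refresh after each source load, e.g. from the ETL:
        -- EXEC DBMS_MVIEW.REFRESH('DIM_TRANSACTION_MV');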

  • How to obtain the table index in Word using the LabVIEW Report Generation Toolkit for Microsoft Office

    I created a Word template and it has several tables. When I use the "Word Edit Cell" function in the LabVIEW Report Generation Toolkit for Microsoft Office, the function needs a "table index", and I didn't find any function to get or set the table index in the Word document. How can I write a value to a specified table cell using the "Word Edit Cell" function?
    Thanks for reply!
    YangAfreet

    Hi yangafreet
    You do not need to get the table index for the Word Edit Cell.vi from anywhere. LabVIEW will automatically index all the tables in the document. See the attached VI for an example.
    Rich
    Attachments:
    Table Edit.vi 23 KB

  • To Use Cursor or TYPE table Index by PLS_integer

    Hi All,
    Suppose I have a table with 19,26,20,000 records.
    If I want to loop through all the records, which would be the more optimized approach: a cursor, or a TYPE ... TABLE INDEX BY PLS_INTEGER collection?
    Please guide.
    Thanks.

    What is it you want to do to/with the rows you're looping through?
    Ideally you want to avoid looping, as that's row-by-row (aka slow-by-slow) processing, and it's expensive time-wise.
    If you're doing DML (insert/update/delete), then you're best off doing it in one SQL statement rather than looping.
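
    A hedged sketch of the contrast (hypothetical table and column names):

        -- Row-by-row (slow by slow): a context switch per row
        BEGIN
            FOR r IN (SELECT id FROM big_table) LOOP
                UPDATE big_table SET processed = 'Y' WHERE id = r.id;
            END LOOP;
        END;
        /
        -- Set-based equivalent: a single SQL statement does the same work
        UPDATE big_table SET processed = 'Y';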
