Indexes for OWB fact tables

Hi All,
I have completed development of my OWB project with 10 fact tables; each fact table has indexes named like SA_PROD_S_IDX1_1.
The client has now given us the production database (tablespaces and indexes) with default tablespace USERS and unlimited quota on MASTER_DATA, MASTER_IDX, TRANS_DATA, and TRANS_IDX.
My problem is that I can set the tablespace for my fact tables in the table properties, but how do I point my indexes at the index tablespaces?
Please help me in this regard.
Regards,
Kumar.
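For reference, once the objects are deployed the goal is simply that each index ends up in the index tablespace. A minimal SQL sketch, outside OWB itself, of moving an already-created index (using the index name quoted above; TRANS_IDX is one of the quota tablespaces mentioned):

ALTER INDEX sa_prod_s_idx1_1 REBUILD TABLESPACE trans_idx;
-- To find indexes still sitting in the default tablespace:
SELECT index_name FROM user_indexes WHERE tablespace_name = 'USERS';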

Hi,
I have another problem.
I created process flows and, using email notification, I can send SUCCESS / ERROR / WARNING notifications.
In the process flows I have 3 maps.
When there is an error or warning in any of the map executions, I only get the message that I put in the email subject and body; I do not see exactly which map raised the error or warning.
Is there any way to send the execution details of the map through the mail notification?
Or please suggest any other solution.
Regards,
Kumar

Similar Messages

  • Indexes on a fact table

    Hi All,
    We are trying to build a data warehouse. The data marts will be accessed by a Cognos reporting layer. In the data marts we have around 9 dimension tables and 1 fact table. For each month we will have around 21-25 million records in the fact table. Of the 9 dimensions, dim1 and dim2 have 21 million and 10 million records respectively. The remaining 7 dimensions are very small, with fewer than 10k records each.
    In the Cognos reports they are joining some dimension tables to the fact table to populate the reports, and these take around 5-6 minutes.
    I have around 8 B-tree indexes on this fact table with all possible combinations of columns. I believe that having this many indexes is not improving performance, so I decided to create an aggregated table with the measures. But in Cognos there are some reports that return detailed information from the fact table, and those take around 8 minutes.
    Please advise what types of indexes can be created on fact tables.
    I read that we can create bitmap join indexes based on join conditions, but the documentation says they can include columns only from dimension tables and not fact tables. Should the indexed columns be keys in the dimension tables?
    I have observed that the fact table is around 1.5 GB, but each index is around 1.9-2 GB. I was surprised by that figure. Does it imply that an index scan plus table lookup would take more time than a full table scan, and hence the optimizer is not using the indexes?
    Any help is greatly appreciated.
    Thanks
    Hari

    What sort of queries are you running? Do you have an example (with a query plan)?
    Are indexes even useful? Or are you accessing too much data to make indexes worthwhile?
    Are you licensed to use partitioning? If so, are your fact tables partitioned? Are the queries doing partition pruning?
    Are you using parallelism? If so, is parallel query actually being invoked?
    If creating aggregate tables is a potentially useful strategy, you would want to use materialized views with query rewrite.
    Justin
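    To make the last two suggestions concrete, a minimal Oracle SQL sketch with all table and column names assumed for illustration: a bitmap join index stored on the fact table but keyed by a dimension attribute, and an aggregate materialized view that the optimizer can substitute via query rewrite.
    -- Hypothetical bitmap join index: the indexed column comes from the dimension,
    -- the index itself belongs to the fact table
    CREATE BITMAP INDEX sales_cust_region_bjx
    ON sales_fact (customer_dim.region)
    FROM sales_fact, customer_dim
    WHERE sales_fact.customer_key = customer_dim.customer_key;
    -- Hypothetical aggregate with query rewrite, so summary reports can be
    -- redirected to it without changing the report SQL
    CREATE MATERIALIZED VIEW sales_month_mv
    BUILD IMMEDIATE REFRESH ON DEMAND ENABLE QUERY REWRITE AS
    SELECT t.month_key, p.product_key, SUM(s.amount) AS total_amount, COUNT(*) AS row_cnt
    FROM sales_fact s, time_dim t, product_dim p
    WHERE s.time_key = t.time_key AND s.product_key = p.product_key
    GROUP BY t.month_key, p.product_key;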

  • Logical level for logical fact table sources

    It is clear that for fact aggregates we should use the Content tab of the Logical Table Source dialog to assign the correct logical level to each dimension.
    The question is: is it mandatory to assign the logical level for each dimension even for non-aggregate fact tables (which normally should be set to the most detailed level of each dimension)? Is there any known issue if the logical levels in the Content tab are not set?
    The reason I am asking is a strange bug I have (which I am not going to discuss here); the only workaround seems to be NOT setting the logical levels (on the Content tab) for the logical fact table sources.
    thank you !

    If levels are not set, by default they are treated as the lowest (most detailed) level,
    so it should not matter whether you set them or not.
    Generally we set them explicitly for facts when we are using aggregate tables.
    Your current issue might be case by case; I would suggest checking for an implicit fact column, or any table mapped to the source to force a join, etc.
    Mark as helpful if this helps, and let me know how it goes.
    Edited by: Srini VEERAVALLI on Feb 5, 2013 8:33 AM
    Any updates on this?
    Edited by: Srini VEERAVALLI on Feb 14, 2013 9:09 AM

  • Content tab for a fact table

    Hi
    Please help me understand the use of the Content tab for a fact table in the OBIEE repository.
    Thanks.

    If you have multiple LTSs then you should set the content level appropriately, otherwise you can get errors during consistency checks. I am not able to find a link that talks only about content levels; see these links and let us know if you have any doubts:
    http://kr.forums.oracle.com/forums/thread.jspa?threadID=604637
    Content tab is also handy when you are using aggregate tables.
    Regards,
    Sandeep

  • "Select" Physical table as LTS for a Fact table

    Hi,
    I am very new to OBIEE, still in the learning phase.
    Scenario 1:
    I have a "Select" Physical table which is joined (inner join) to a Fact table in the Physical layer. I have other dimensions joined to this fact table.
    In BMM, I created a logical table for the fact table with 2 Logical Table Sources (the fact table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the fact table and the select table, I don't see any data for the select table column.
    Scenario 2:
    In this scenario, I created an inner join between "Select" physical table and a Dimension table instead of the Fact table.
    In BMM, I created a logical table for the dimension table with 2 Logical Table Sources (the dimension table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the dimension table and the select table, I see data for all the columns.
    What am I missing here? Why is it not working in first scenario?
    Any help is greatly appreciated.
    Thanks,
    SP

  • Index for a PSA table is not in the "customer" namespace

    Hi,
    While loading data through InfoSource 0CO_OM_WBS_1 to
    data target 0IMFA_1, the load fails. The reason given is that the system reads data from PSA table /BIC/B0001060000 of 0IM_FA_IQ_9, and the index generated for this table returned code 14.
    I found no notes on the subject with the syntax:
    Index for a PSA table is not in the "customer" namespace.
    Thanks in advance for your help.

    Hi,
    I think you need to speak to your Basis guys / DBA, as the index created on the PSA table is not in your namespace. They should be able to help you.
    Vikash

  • How can I create an index (unique index) for an existing table?

    Hi guys,
    I want to create an index for an existing table, and it should be a unique index. I did it using SE11: "Indexes", then selecting the radio button "unique index", and of course the fields as well. But when I activate it, I get a warning and therefore cannot create this index. If I select the radio button "non-unique index", it works.
    However, I have to create a unique index.
    How can I do it?
    Thanks in advance
    Regards,
    Liying

    Hi Wang,
    You can create your index via SE11: enter the table name, click Change, then choose Goto > Indexes. Here create your index with the key fields that you want. For the index to be used, your SELECT statement's WHERE clause must contain the key fields of the index in the order in which they appear in the index. The optimizer will choose the index depending on the fields in your WHERE clause.
    One thing to remember is that when you create indexes on tables, updates or inserts on those tables may respond more slowly than before; I for one have never seen a big problem with this as of yet.
    These documents may help you on primary and secondary indexes:
    http://jdc.joy.com/helpdata/EN/cf/21eb20446011d189700000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
    If it helps, reward the points.
    Regards,
    Rk
    Message was edited by:
            Rk Pasupuleti
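    To illustrate the point about the WHERE clause matching the index key fields, a minimal plain-SQL sketch with assumed table and field names (the same idea applies to the database index that SE11 generates):
    -- Hypothetical secondary index on two fields
    CREATE INDEX ztab_idx1 ON ztab (carrid, connid);
    -- A query whose WHERE clause lists the index key fields in order,
    -- so the optimizer can consider the index
    SELECT * FROM ztab WHERE carrid = 'LH' AND connid = '0400';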

  • ETL for loading Fact tables

    Hi all
    I have a fundamental question regarding loading records into fact tables. As you know, we always load dimensions before loading facts. In our data model, for most of the fact tables we have a special dimension that holds the basic attributes, along with the source keys of the records that are valid in terms of our data warehouse. The source of the fact table includes the same source tables that are involved in loading the dimension.
    For example, for the Student fact table we have a Student dimension which is loaded before the Student fact and which therefore holds the most recent changes (the source keys of the records that are to be loaded into the DW). Now my question is which of the options below is more appropriate for loading the Student fact table.
    Option 1 - Load the Student fact table using the same source tables that are used to load the Student dimension, ignoring that the same logic has already been applied in the dimension to get the valid records from the source, and doing it again in the fact job.
    Option 2 - Inner join the Student dimension in the Student fact ETL job, so that the source records (coming from the source system) are limited to the validated records we want to load into the DW. This option makes loading the fact table faster: by letting the dimension contribute to loading its corresponding fact, we do not have to pull all records from the source and validate them against the rules again (that has already been done in the dimension job, and now we can use the validated record IDs in the fact job). A sketch of this option follows Jeff's reply below.
    As we are new to building jobs for a dimensional model, I thought it would be good to ask for your opinions and experience.
    Thanks in advance,
    Tootia

    I don't see any problem with re-using your "which students do we care about?" logic and rules by borrowing the list of students from the dimension table. You can join to it, do a lookup() into it and then filter out nulls (assuming you return a null on a no-match), or use a Validation transform with the "Exists in table" option; which to use is a question of style and performance, and any of them would work. Indeed, when migrating data into SAP using IDocs, I use a similar idea: I always join the IDoc base segment "map" table into the load of any child segment table to automatically filter out child segment records not present in the base, re-using the selection logic as you suggest, which then only needs to be built in the base segment. Keeps things tidy.
    Best wishes,
    Jeff Prenevost
    Data Services Practice Manager
    itelligence
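    A minimal SQL sketch of Option 2, with all table and column names assumed for illustration (in an ETL tool this would typically be a join or lookup against the dimension):
    -- Only students already validated into the dimension survive the inner join,
    -- and the join also returns the surrogate key for the fact row
    INSERT INTO student_fact (student_key, course_key, enrolment_date, credits)
    SELECT d.student_key, c.course_key, s.enrolment_date, s.credits
    FROM src_enrolments s
    JOIN student_dim d ON d.student_source_id = s.student_id
    JOIN course_dim c ON c.course_source_id = s.course_id;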

  • Issue with Multiple LTS for a fact table and filters

    Hello,
    I am facing an issue with OBIEE 10g.
    In my model, I have a huge fact table F1 (partitioned and indexed). The average response time for the queries that target it was ~30-60 seconds, which was not really convincing for our end users.
    So we decided to create a materialized view which removes some dimensions that are not used by default, but might be used if the end user adds some filters. I added the materialized view in the Physical layer and in the corresponding Logical Table Source.
    I then tried to see whether it works, but I was a bit surprised by the result. Indeed:
    -> If the report does not reference a truncated dimension, it targets the materialized view. -> Perfect
    -> If the report references a truncated dimension in the columns, it targets the fact table. -> Perfect
    -> If the report references a truncated dimension in the filters, it targets the materialized view. For this reason, the filter is never resolved and no join to the dimension table is applied, even though it exists in the generated logical SQL. -> KO
    A suggestion could be to add the filtered columns to the report, but I am not satisfied with that answer because the materialized view would then never be used.
    Another suggestion could be to use query rewrite, but I would like to keep full control over the generation of the queries.
    Does anyone know whether the filters are evaluated when determining which LTS to use? How can I force this evaluation?
    Regards,

    Hi,
    If I understand your description correctly, then your materialized view skips some dimensions (infrequent ones). However, when you reference these skipped dimensions in filters, the queries are hitting the materialized view and failing as these values do not exist. In this case, you could resolve it as follows
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources set the content tabs properly. (Yes, I think this is it).
    When you skipped some dimensions, the grain of the new fact source (the materialized view in this case) is changed. For example:
    Say a fact is available with the keys for Product, Customer, Promotion dimensions. The grain for this is Product * Customer * Promotion
    Say another fact is available with the keys for Product, Customer. The grain for this is Product * Customer (In fact, I would say it is Product * Customer * Promotion Total).
    So in the second case, the grain of the table is changed. So setting appropriate content levels for these sources would automatically switch the sources.
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar

  • ROWNUM is indexed in the fact table - How to optimize performance with this?

    Hi,
    I have a scenario where there is an index on the Rownum column.
    The main fact table is partitioned based on the job number (daily and monthly). As there can be multiple entries for a single job ID, the primary key is made up of the Job ID and the Rownum (RECID in the DDL below).
    This fact table in turn is joined with another fact table based on this job number and Rownum. This second fact table is also partitioned on job ID.
    I have a few reference tables that are joined with the first fact table via B-tree indexes.
    Though in a normal DW scenario we would use bitmap indexes, here we can't do that, as a lot of other applications run DML queries against the data and bitmap indexes would make that slow. So I am using the STAR_TRANSFORMATION hint to have the normal indexes used as bitmap indexes.
    Up to here it is fine. The problem is that when I simply do a count for a specific partition, joining a reference table and the fact table, it uses all the required indexes as bitmaps with very low cost, but it also uses the Rownum index, which has a very high cost.
    I am relatively new to Oracle tuning and I am not able to understand what exactly it is doing. Could you please suggest how I can get rid of this Rownum index to make my query perform faster? The index cannot be dropped. Is there a way, in a hint, to instruct the optimizer not to use this primary key index?
    Or, even if it is used, is there a way to make the performance faster?
    I will highly appreciate any help in this regard.
    Regards
    ...
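    For what it's worth, Oracle does have a NO_INDEX hint that asks the optimizer to ignore a named index for one query. A minimal sketch with hypothetical table, column, and index names (verify the plan afterwards; a hint is only a request):
    SELECT /*+ STAR_TRANSFORMATION NO_INDEX(f fact_pk_idx) */ COUNT(*)
    FROM fact_table f
    JOIN ref_table r ON r.ref_key = f.ref_key
    WHERE f.jobid = '01110500';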

    Just sending the portion with the partitioning and primary index information, as the entire script is too big.
    CREATE TABLE FACT_TABLE
    JOBID VARCHAR2(10 BYTE) DEFAULT '00000000' NOT NULL,
    RECID VARCHAR2(18 BYTE) DEFAULT '000000000000000000' NOT NULL,
    REP_DATE VARCHAR2(8 BYTE) DEFAULT '00000000' NOT NULL,
    LOCATION VARCHAR2(4 BYTE) DEFAULT ' ' NOT NULL,
    FUNCTION VARCHAR2(6 BYTE) DEFAULT ' ' NOT NULL,
    AMT.....................................................................................
    TABLESPACE PSAPPOD
    PCTUSED 0
    PCTFREE 10
    INITRANS 11
    MAXTRANS 255
    STORAGE (
    INITIAL 32248K
    LOGGING
    PARTITION BY RANGE (JOBID)
    PARTITION FACT_TABLE_1110500 VALUES LESS THAN ('01110600')
    LOGGING
    NOCOMPRESS
    TABLESPACE PSAPFACTTABLED
    PCTFREE 10
    INITRANS 11
    MAXTRANS 255
    STORAGE (
    INITIAL 32248K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    BUFFER_POOL DEFAULT
    PARTITION FACT_TABLE_1191800 VALUES LESS THAN ('0119190000')
    LOGGING
    NOCOMPRESS
    TABLESPACE PSAPFACTTABLED
    PCTFREE 10
    INITRANS 11
    MAXTRANS 255
    CREATE UNIQUE INDEX "FACT_TABLE~0" ON FACT_TABLE
    (JOBID, RECID)
    TABLESPACE PSAPFACT_TABLEI
    INITRANS 2
    MAXTRANS 255
    LOCAL (
    PARTITION FACT_TABLE_11105
    LOGGING
    NOCOMPRESS
    TABLESPACE PSAPFACT_TABLEI
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    STORAGE (
    INITIAL 64K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    BUFFER_POOL DEFAULT
    ......................................................

  • Error for the fact table while processing the cube - attribute key cannot be found when processing

    Please help, as I am new to SSAS and this is an urgent requirement. This is a MOLAP cube, and below is the error that I receive when processing the cube. The cube is set to Process Full. Several similar errors pop up for various dimensions.
    "Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'Fact_Table', Column: 'ID', Value: '1'. The attribute is 'Id'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because
    the attribute key was not found. Attribute Id of Dimension: 17 - Ves - PoC Cont from Database: DB, Cube: IPNCube, Measure Group: iSrvy, Partition: Partition1, Record: 1."
    Thanks in advance.

    Thanks for the recommendations David.
    It will be really great if you can clear some of my doubts:
    To my knowledge, all the dimensions need to be processed first and then the fact table is processed.
    So if the IDs are not present in the dimension tables, then they should not be present in the fact table either.
    Here we found null values in the dimension table while the IDs were present in the fact table. What might be the reasons causing such a situation?
    Also, how frequently does the cube need to be processed? Currently the ETL which processes the cube is scheduled in a SQL Server Agent job on an hourly basis every day.
    Is there any possibility that the cube might still be in a processing state while the SQL job for the next run gets executed, trying to access and process the cube while it is still processing?
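    One quick check, sketched here with assumed object names, is to look for fact keys that have no matching dimension member; those orphaned keys are what produce the "attribute key cannot be found" error during processing:
    -- Hypothetical names: list fact IDs missing from the dimension
    SELECT f.ID, COUNT(*) AS orphan_rows
    FROM Fact_Table f
    LEFT JOIN Dim_Table d ON d.ID = f.ID
    WHERE d.ID IS NULL
    GROUP BY f.ID;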

  • Distinct count for multiple fact tables in the same cube

    I'm fairly new to working with SSAS, but have been working with DW environments for many years.
    I have a cube which has 4 fact tables. The central fact table is Encounter, and I also have Visit, Procedure and Medication. Visit, Procedure and Medication all join to Encounter on the Encounter key. The relationships between Encounter and Procedure and between Encounter and Medication are both optional 1-to-1. The relationship between Encounter and Visit is an optional 1-to-many.
    Each of the fact tables joins to the Patient dimension on the Patient key. The users are looking for a distinct count of patients across all 4 fact tables.
    What is the best way to accomplish this so that my cube does not take all day to process? Please let me know if you need any more information about my cube in order to answer this.
    Thanks for the help,
    Andy

    Hi Andy,
    Each distinct count measure causes an ORDER BY clause in the SELECT sent to the relational data source during processing. In SSAS 2005 or later, a new measure group is created for each distinct count measure (a strategy for improving performance).
    Besides, please take a look at the following distinct count optimization techniques:
    Create Customized Aggregations
    Define a Processing Plan
    Create Partitions of Equal Size
    Use Partitions Comprised of a Distinct Range of Integers
    Distribute the Hash of Your UserIDs
    Modulo Function
    Hash Function
    Choose a Partitioning Strategy
    For more detail information, please refer to the article below:
    Analysis Services Distinct Count Optimization:
    http://www.microsoft.com/en-us/download/details.aspx?id=891
    In addition, here is a good article about SSAS Best Practices for your reference:
    http://technet.microsoft.com/en-us/library/cc966525.aspx
    If you have any feedback on our support, please click
    here.
    Hope this helps.
    Elvis Long
    TechNet Community Support
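    As a small illustration of the partitioning techniques listed above, a sketch with assumed object names: each SSAS partition's source query covers a distinct, non-overlapping range of the distinct-count key, so a given patient can only fall into one partition.
    -- Hypothetical source query for partition 1
    SELECT EncounterKey, PatientKey
    FROM Fact_Encounter
    WHERE PatientKey BETWEEN 1 AND 1000000;
    -- Partition 2 would use BETWEEN 1000001 AND 2000000, and so on.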

  • Pulling values from Variables for my Fact Table

    Hi All,
    The Load_Date in my fact table is currently current_timestamp - 1, as the data is one day old when it is loaded. I want to get that date from a variable, because if I need to rerun with an earlier date, say 20 or 30 days before the current date, I do not want to touch the mapping; instead I want to modify the variable and run the package. Please advise on how to do this.
    Thanks for your time and help.

    As Bhabani said, create a project variable with the code you would like to run to generate the earlier date. For example, "select current_timestamp - 30". Then, add this variable to the Package before the Interface, setting it to refresh. In the Interface, change the mapping from what it is now (current_timestamp - 1) to the variable (#VARIABLE_NAME). The variable will now be refreshed and the value added to the mapping.
    If you want to change it to "current_timestamp - 20", just update the variable.
    Regards,
    Michael Rainey
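    A minimal sketch of the two pieces, with the variable name assumed for illustration: the refreshing query behind the ODI project variable, and the target mapping that references it.
    -- Hypothetical refreshing query for a project variable named LOAD_DATE_VAR
    select current_timestamp - 30
    -- In the interface, the Load_Date target mapping then becomes the variable reference:
    -- #LOAD_DATE_VAR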

  • Find the partition for the fact table

    Oracle version : Oracle 10.2
    I have one fact table with daily partitions.
    I am inserting some test data in this table for an old date, 20100101.
    I am able to insert this record in the table as below:
    insert into fact_table values (20100101,123,456);
    However, I observed that the partition for this date does not exist in the table (all_tab_partitions); moreover, I am not able to select the data using
    select * from fact_table partition(d_20100101)
    but I am able to extract the data using
    select * from fact_table where date_id=20100101
    Could someone please let me know how to find the partition into which this data was inserted,
    and, if the partition for date 20100101 is not present, why does the insert for that date work?

    user507531 wrote:
    However I observed that the partition for this date does not exist in the table (all_tab_partitions) moreover I am not able to select the data using
    select * from fact_table partition(d_20100101)
    Wrong approach.
    but I am able to extract the data using
    select * from fact_table where date_id=20100101
    Correct approach.
    could someone please let me know the way to find the partition in which this data might be inserted
    and if the partition for date 20100101 is not present then why is the insert for that date working?
    Who says that the date is invalid? This is a range partition, which means that each partition covers a range. And if you bothered to read up in the SQL Reference Guide on how a range partition is defined, you would notice that each partition is defined with the end value of the range it covers. There is no start value, as the previous partition's end value is the border between this and the prior partition.
    I suggest that before you use a database feature you first familiarise yourself with it. Otherwise, incorrect use and wrong assumptions are the likely result.
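    A quick way to see which range partition would hold a given key is to list the partition boundaries; range partitions are defined only by their upper bound. The table name below is taken from the post; adjust the view (user_/all_/dba_) as needed:
    SELECT partition_name, high_value
    FROM user_tab_partitions
    WHERE table_name = 'FACT_TABLE'
    ORDER BY partition_position;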

  • Creating index for a big table

    Hello,
    We have Oracle 9.2
    I have two tables, TableA and TableB. TableB was created by copying part of the content of TableA. In order to speed up the copying process, I dropped the primary index of TableB. Now TableB has 94 million records.
    Now I am trying to create the primary index for TableB with the following script:
    CREATE UNIQUE INDEX USR3."TableB~0"
    ON USR3.TableB(ID,COUNTER)
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    TABLESPACE INDEX1
    STORAGE(FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT)
    LOGGING
    NOPARALLEL
    NOCOMPRESS
    And I get the Oracle catch-all error (ORA-00600: internal error....).
    The only additional error message I have is: "kcbvmap: unable to find victim buffer".
    The tablespace is a locally managed tablespace.
    Please help: how do I fix this problem?

    CREATE UNIQUE INDEX USR3."TableB~0"
    What's with the wacky name?
