How to design a fact table to keep track of active dimensions?

I would like to design a classic OLAP fact table using a star schema. The SQL model of the fact table should be independent of any concrete RDBMS technology and portable between different systems.
The problem is this: users should be able to select subsets of the facts based on conjunctive queries over the dimension values defined for the facts. However, the program that presents this interface to the user should only show those dimensions where anything is still selectable at all. For example, if a user selected year 2001 and, for the dimension contract code, there is only a single value across all records in the fact table for that year, this dimension should no longer be shown to the user. This should be solved in a generic way: for n dimensions in total, if the current set of facts is constrained by j dimensions, I want to know for which of the remaining n-j dimensions there is still something to select from, and only show those.
The obvious way is to run a count of the distinct foreign key values of each dimension on the fact table, using the same WHERE clause. That means one would need (n-j) such queries over the whole fact table, which sounds like an awful waste of resources given that the original query for selecting the facts could have computed this internally, "on the fly".
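To make the options concrete, here is a minimal sketch of both variants; the table and column names (sales_fact, year_key, contract_key, region_key) are purely illustrative and not from any actual schema:
-- Per-dimension approach: one query per remaining dimension, repeating the current WHERE clause
SELECT COUNT(DISTINCT contract_key) FROM sales_fact WHERE year_key = 2001;
SELECT COUNT(DISTINCT region_key)   FROM sales_fact WHERE year_key = 2001;
-- Single-pass alternative: count every remaining dimension in one scan of the filtered facts
SELECT COUNT(DISTINCT contract_key) AS contract_values,
       COUNT(DISTINCT region_key)   AS region_values
FROM   sales_fact
WHERE  year_key = 2001;
A dimension whose count comes back as 0 or 1 would then be hidden in the interface.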
How can this be achieved in the most performant way? Is there a "classical" way of how to approach this problem? Is there tool support for doing this efficiently?
Any help or pointers to where one could find out more about this would be greatly appreciated - thank you!

> Did you get the counts for each value of each dimension by doing a separate query with the current "WHERE" clause on each dimension?
My method doesn't apply to your use case. I wrote a Java class to create my own bit-mapped indexes on CSV files, so each attribute value was a one-million-bit raw binary value.
I don't know, and don't want to know, what your particular requirements are. But I can show you a basic process that will work for large numbers of rows. Get a simple process working and then explore whether it will meet your particular needs. I'm not going to answer questions here about anything other than my example code.
1. Assume a single fact table with one primary key column and multiple single-value attribute columns.
2. The table is not subject to DML operations AT ALL - truncate and reload if you want to apply changes. That means it is useful for research purposes on archived data.
3. The purpose of the table is to select the fact table ROWIDs for records of interest. So the only value selected is a result set of ROWIDs that can then be used to get any of the normal fact table data and other linked data as needed.
Create the table, insert some records, create a bitmap index on each dimension column and collect the statistics:
ALTER TABLE SCOTT.STAR_FACT
DROP PRIMARY KEY CASCADE;
DROP TABLE SCOTT.STAR_FACT CASCADE CONSTRAINTS;
create table star_fact (
    fact_key       varchar2(30) DEFAULT 'N/A' not null,
    age            varchar2(30) DEFAULT 'N/A' not null,
    beer           varchar2(30) DEFAULT 'N/A' not null,
    marital_status varchar2(30) DEFAULT 'N/A' not null,
    softdrink      varchar2(30) DEFAULT 'N/A' not null,
    state          varchar2(30) DEFAULT 'N/A' not null,
    summer_sport   varchar2(30) DEFAULT 'N/A' not null,
    constraint star_fact_pk PRIMARY KEY (fact_key)
);
INSERT INTO STAR_FACT (FACT_KEY) SELECT ROWNUM FROM ALL_OBJECTS;
create bitmap index age_bitmap on star_fact (age);
create bitmap index beer_bitmap on star_fact (beer);
create bitmap index marital_status_bitmap on star_fact (marital_status);
create bitmap index softdrink_bitmap on star_fact (softdrink);
create bitmap index state_bitmap on star_fact (state);
create bitmap index summer_sport_bitmap on star_fact (summer_sport);
exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'STAR_FACT', NULL, CASCADE => TRUE);
Now if you run the 'complex' query for the example from my first reply you will get:
SQL> set serveroutput on
SQL> set autotrace on explain
SQL> select rowid from star_fact where
  2   (state = 'CA') or (state = 'CO')
  3  and (age = 'young') and (marital_status = 'divorced')
  4  and (((summer_sport = 'baseball') and (softdrink = 'pepsi'))
  5  or ((summer_sport = 'golf') and (beer = 'coors')));
no rows selected
Execution Plan
Plan hash value: 1934160231
| Id  | Operation                      | Name                  | Rows  | Bytes |
|   0 | SELECT STATEMENT               |                       |     1 |    30 |
|   1 |  BITMAP CONVERSION TO ROWIDS   |                       |     1 |    30 |
|   2 |   BITMAP OR                    |                       |       |       |
|*  3 |    BITMAP INDEX SINGLE VALUE   | STATE_BITMAP          |       |       |
|   4 |    BITMAP AND                  |                       |       |       |
|*  5 |     BITMAP INDEX SINGLE VALUE  | AGE_BITMAP            |       |       |
|*  6 |     BITMAP INDEX SINGLE VALUE  | MARITAL_STATUS_BITMAP |       |       |
|*  7 |     BITMAP INDEX SINGLE VALUE  | STATE_BITMAP          |       |       |
|   8 |     BITMAP OR                  |                       |       |       |
|   9 |      BITMAP AND                |                       |       |       |
|* 10 |       BITMAP INDEX SINGLE VALUE| SOFTDRINK_BITMAP      |       |       |
|* 11 |       BITMAP INDEX SINGLE VALUE| SUMMER_SPORT_BITMAP   |       |       |
|  12 |      BITMAP AND                |                       |       |       |
|* 13 |       BITMAP INDEX SINGLE VALUE| BEER_BITMAP           |       |       |
|* 14 |       BITMAP INDEX SINGLE VALUE| SUMMER_SPORT_BITMAP   |       |       |
Predicate Information (identified by operation id):
   3 - access("STATE"='CA')
   5 - access("AGE"='young')
   6 - access("MARITAL_STATUS"='divorced')
   7 - access("STATE"='CO')
  10 - access("SOFTDRINK"='pepsi')
  11 - access("SUMMER_SPORT"='baseball')
  13 - access("BEER"='coors')
  14 - access("SUMMER_SPORT"='golf')
SQL>
As you can see, Oracle is combining bitmap indexes on columns in a single table to implement the same AND/OR complex conditions I showed earlier. It doesn't need any other table to do this.
In 11g you can create virtual columns and then index them.
So if you find that the condition 'young' and 'divorced' is used frequently, you could create a VIRTUAL 'young_divorced' column and create an index on it.
alter table star_fact add (young_divorced AS (case
   when (age = 'young' and marital_status = 'divorced') then 'TRUE' else 'N/A' end) VIRTUAL);
create bitmap index young_divorced_ndx on star_fact (young_divorced);
exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'STAR_FACT', NULL, CASCADE => TRUE);
Now you can query using the name of the virtual column:
SQL> select rowid from star_fact where young_divorced = 'TRUE'
  2  and  (state = 'CA') or (state = 'CO')
  3  /
no rows selected
Execution Plan
Plan hash value: 2656088680
| Id  | Operation                    | Name               | Rows  | Bytes | Cost
|   0 | SELECT STATEMENT             |                    |     1 |    28 |
|   1 |  BITMAP CONVERSION TO ROWIDS |                    |       |       |
|   2 |   BITMAP OR                  |                    |       |       |
|*  3 |    BITMAP INDEX SINGLE VALUE | STATE_BITMAP       |       |       |
|   4 |    BITMAP AND                |                    |       |       |
|*  5 |     BITMAP INDEX SINGLE VALUE| STATE_BITMAP       |       |       |
|*  6 |     BITMAP INDEX SINGLE VALUE| YOUNG_DIVORCED_NDX |       |       |
Predicate Information (identified by operation id):
   3 - access("STATE"='CO')
   5 - access("STATE"='CA')
   6 - access("YOUNG_DIVORCED"='TRUE')
SQL>
Notice that at line #6 the new index was used. The VIRTUAL column itself doesn't create data in the fact table; the definition only exists in the data dictionary.
The YOUNG_DIVORCED_NDX index is real and does consume space. The tradeoff is additional space for the index, but it makes the query easier to write because you don't have to recreate the complex condition every time.
Oracle can work with the complex condition and combine the indexes so this really only helps the query writer. Your UI should be able to hide the query construction from the user so I would avoid the use of VIRTUAL columns and an additional index until you demonstrate you really need it.
If you provide users with their own RESULT table to store custom query results you could just store the query name and the set of primary keys from the result set. I used ROWIDs in the example but don't use rowid for a real application - use a primary key value that won't change.
So your UI would let users construct complex dimension queries for 'young_sportsters' and get a result set of primary keys for that. They could save the label 'young_sportsters' and the primary keys in their own work table. Then you can let them run queries that use the primary keys to query data from your active data warehouse to get any other data it contains.
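A minimal sketch of such a per-user result table, reusing the star_fact example above (the names user_query_result and query_name are illustrative, not from the original reply):
CREATE TABLE user_query_result (
    query_name VARCHAR2(30) NOT NULL,   -- e.g. 'young_sportsters'
    fact_key   VARCHAR2(30) NOT NULL,   -- primary key of the matching fact row
    CONSTRAINT user_query_result_pk PRIMARY KEY (query_name, fact_key)
);
INSERT INTO user_query_result (query_name, fact_key)
SELECT 'young_sportsters', fact_key
FROM   star_fact
WHERE  age = 'young' AND marital_status = 'divorced';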
> Did you get the counts for each value of each dimension by doing a separate query with the current "WHERE" clause on each dimension?
For an Oracle implementation you would need to do a count select for each dimension. I haven't tried it, but you might be able to do multiple dimensions in a single query. One query would look like this:
-- get the dimension counts
SQL> select beer, count(*) from star_fact group by beer;
BEER                             COUNT(*)
N/A                                 56977
Execution Plan
Plan hash value: 1692670403
| Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Ti
|   0 | SELECT STATEMENT         |             |     1 |    12 |     3   (0)| 00
|   1 |  SORT GROUP BY NOSORT    |             |     1 |    12 |     3   (0)| 00
|   2 |   BITMAP CONVERSION COUNT|             | 56977 |   667K|     3   (0)| 00
|   3 |    BITMAP INDEX FULL SCAN| BEER_BITMAP |       |       |            |
SQL>
Notice that Oracle uses only the index to gather the data.
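Applying the same idea to the original question, a single pass over the filtered rows can return the remaining-value count for every unconstrained dimension at once. This is only a sketch against the star_fact example above, and I have not checked which access path Oracle picks for it:
SELECT COUNT(DISTINCT beer)         AS beer_values,
       COUNT(DISTINCT softdrink)    AS softdrink_values,
       COUNT(DISTINCT summer_sport) AS summer_sport_values
FROM   star_fact
WHERE  age = 'young'
AND    marital_status = 'divorced';
Any dimension whose count comes back as 0 or 1 can then be hidden in the UI.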

Similar Messages

  • How to update a fact table when a dimension table is reloaded

    We have implemented BI Apps 7.9.6. Insertion into the W_EMPLOYEE_D table, which stores all the employee information, had stopped one year back because a company security policy restricted the Informatica workflows from picking up the data (PER_ALL_PEOPLE_F is an HRMS table containing sensitive information like SSN and salary; it was inaccessible to the user that Informatica uses, and the SDE mapping used to return 0 rows).
    Now we have the approval to see those rows and the dimension table is loaded with some 100 new employees who joined in last one year.
    The ROW_WID of W_EMPLOYEE_D is referenced in lot of fact tables and for all those missing employees the WID in the fact table is 0.
    Now that we have all the employees, how do we make the fact table point to the correct WID instead of storing 0? Has anyone faced this problem before? Writing an update statement will be a tedious task as there are so many fact tables that join to W_EMPLOYEE_D. Also, our company uses the Sales, Procurement and Finance modules of BI Apps (which constitutes at least 20 fact tables).
    Any guidance is appreciated. Thanks in advance
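    For reference, the general shape of such an update is sketched below with purely hypothetical names: some_fact and employee_integration_id are placeholders, since each BI Apps fact resolves the dimension through its own lookup columns, which vary by table.
    UPDATE some_fact f
    SET    f.employee_wid = (SELECT d.row_wid
                             FROM   w_employee_d d
                             WHERE  d.integration_id = f.employee_integration_id)
    WHERE  f.employee_wid = 0;
    A statement of this form would have to be repeated (and adapted) for every affected fact table, which is why it is tedious.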

    Hello Kostis,
    thank you for your answer. I don't fully understand you. Can you show me a short example, please? I created alias tables for the time dimension in the Physical Layer - the original table is TimeDayDim and I created the aliases TimeDayDim1, TimeDayDim2, TimeDayDim3, TimeDayDim4. Then I created the foreign keys Fact.Time1 -> TimeDayDim1, Fact.Time2 -> TimeDayDim2, Fact.Time3 -> TimeDayDim3, Fact.Time4 -> TimeDayDim4. And what now? Must I bring these alias tables into the Business Model and create new time dimensions in the Business Model?
    I need ONE Time dimension in Answers. I think I must split my fact table into four tables ... (time1, place1 ...) (time2, place2 ...) (time3, place3 ...) (time4, place4 ...) and then link those tables to the Time dimension (but I don't know where I should split those tables - in the Physical Layer or in the Business Layer).
    I suppose that I will then have one time dimension and four fact tables in Answers and will be able to query them (for example: Time.Days, Fact1.Place1, Fact3.Speed, Fact4.Count; Criteria: Time.Year = 2008).
    Best Regards Vlada

  • Mapping the Fact table to different levels of a dimension

    Hi,
    I have a fact table which stores the data for 4 levels of the dimensions. The aggregation was handled by PL/SQL, and the fact table holds the data for all 4 levels. When I try to map all the levels to a column in the fact table using OEM, it generates the foreign key constraints referencing the columns mapped for the various levels of the dimension.
    The problem is that I'm using a denormalised table for maintaining the values of the dimension, so the columns mapped for the levels (except for the lowest) can't have a unique key defined on them. The cube is not getting created because of the error in creating the foreign key.
    Can you please suggest how to map this fact table.
    Thanks,
    Manohar Vanama

    I am not exactly clear on your schema, but I believe you are trying to map tables which are not a strict star or snowflake. This means that you cannot use CWM1 (and OEM) unless you change the structure of the tables. You might be able to map the tables with CWM2. The document below will assist you:
    Oracle9i OLAP User's Guide
    Chapter 4. Designing Your Database for OLAP
    Chapter 5. Creating OLAP Catalog Metadata

  • Two FACT Tables, Some Common and Non-Common Dimensions

    Hello all, a question i am sure you have faced in the past but still wanted to get your feedback.
    I have a few FACT tables and some dimensions that are shared (common dimensions). Rest of the dimensions are related to one or the other FACT tables.
    What is the best way to present a view where users can pull information from both the FACT tables?
    I am successful in pulling the shared (common) dimensions across BOTH FACT tables having the same grain, but this view breaks down when I pull information from one dimension that has not much to do with the other FACT.
    What is the best way to present this? Should this be broken in three subject areas?
    Subject Area 1 --> Some Dims --> FACT Table A
    Subject Area 2 --> Some Dims --> FACT Table B
    AND
    Subject Area 3 --> ***Only Common Dims*** --> FACT Table A & FACT Table B?
    Your feedback is always appreciated.
    Regards,
    Edited by: user10679130 on Oct 12, 2009 3:27 PM

    Please check the forum first for similar threads/questions.
    Joining two fact tables with different dimensions into single logical table
    http://108obiee.blogspot.com/2009/08/joining-two-fact-tables-with-different.html
    This solution keeps both fact tables in the same subject area in the single logical fact table, with common and not-common dimensions.
    Regards
    Goran
    http://108obiee.blogspot.com

  • Any standard table that keep tracks of changes in user attributes

    Hi,
    We have an HR system and we are trying to find a standard table that keeps track of changes (when it was changed, who changed it, what has been changed) in user details like email, address, etc.
    Please let me know the solution
    Thanks
    Bala Duvvuri

    CDHDR
    CDPOS

  • How to combine multiple fact tables and dimensions in one worksheet?

    Hello Forum,
    I am encountering a reporting problem when trying to create a worksheet that uses more than one cube/fact table and common dimensions. I have used Oracle Warehouse Builder 10gR2 to design and deploy a pretty simple ROLAP data mart. We are using Discoverer Plus for OLAP as our reporting tool. We have 5 dimension tables using a star schema and 3 fact tables. When I create the worksheet I bring in our sales measure from our sales item table, then Store_Name from my Stores dimension and Day from my time dimension; everything looks good at this stage, we're just trying to get a sum of all sales for that store on that day. Then I bring in a measure from our advertising cost table and a join window pops up asking which join to use; if I choose either the Store or the Time dimension I get correct data for the first fact table (sales) and grossly incorrect data for the ad cost measure from the second fact table (advertising costs)... any help would be appreciated

    You have encountered one of the key limitations of Discoverer... which I complained about to the Discoverer product manager at OpenWorld in 2001....
    Anyhow, to get around this, you are going to have to deal with it either in the database, (views, materialized views, tables), or within the admin tool by creating a custom folder.
    Discoverer also calls this the "fan trap", but never really had a solution to the problem. [The solution only worked if you joined to one and only one dimension!]
    What you want (using Sales_Fact and Inventory_Fact as an example) is to join Sales to Time, Store, and Product, and save that result. Then join Inventory to Time, Store, and Product, save that result, then do a double outer join between the two intermediate temporary tables in order to calculate something useful like inventory turns by store and product line.
    This is also known a "multipass SQL", and is supported by some (but not many) other tools.
    So, to accomplish this with Discoverer, you'll either need to create a view, or table, or materialized view that has already put Sales and Inventory into a single (virtual?) fact table. Alternatively you can write the SQL for how to do this linkage (don't forget to handle missing data), and use the Discoverer admin tool to create a custom folder that uses your SQL.
    Hope this helps!
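    As a rough illustration of the multipass approach described above (every table and column name here is hypothetical), each fact is first aggregated to the shared Time/Store/Product grain and the two intermediate results are then joined with a full outer join:
    WITH sales AS (
      SELECT time_key, store_key, product_key, SUM(sales_amount) AS sales_amount
      FROM   sales_fact
      GROUP BY time_key, store_key, product_key
    ), inv AS (
      SELECT time_key, store_key, product_key, SUM(on_hand_qty) AS on_hand_qty
      FROM   inventory_fact
      GROUP BY time_key, store_key, product_key
    )
    SELECT COALESCE(s.time_key, i.time_key)       AS time_key,
           COALESCE(s.store_key, i.store_key)     AS store_key,
           COALESCE(s.product_key, i.product_key) AS product_key,
           s.sales_amount,
           i.on_hand_qty
    FROM   sales s
    FULL OUTER JOIN inv i
           ON  s.time_key    = i.time_key
           AND s.store_key   = i.store_key
           AND s.product_key = i.product_key;
    This is the sort of statement a view, materialized view, or Discoverer custom folder could be built on.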

  • How can I make fact table?

    Hi all
    I would like to make a fact table in Oracle 11g using SQL syntax. How can I do so?
    Is Analytic Workspace Manager available by default in Oracle 10g or 11g?
    Do you recommend a good book for implementing a star schema or ROLAP? I want the steps and SQL syntax examples.
    thanks

    Hi,
    You can make a fact table in any number of ways, viz:
    - CREATE TABLE syntax; your table design would correspond to the dimensional model (star schema / snowflake schema) specifications - see the sketch after this list
    - You can alternatively use an ETL tool like OWB to design your star schema
    As far as I know, Analytic Workspace Manager does not come by default with an Oracle DB installation
    - Read any book you find on dimensional modeling to get a hang of building star schemas and OLAP concepts. You would definitely find one such book in "The Oracle documentation library"
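    For example, a minimal sketch of the plain CREATE TABLE approach mentioned above, with purely illustrative table and column names and assuming the referenced dimension tables already exist:
    CREATE TABLE sales_fact (
        time_key     NUMBER       NOT NULL REFERENCES time_dim (time_key),
        store_key    NUMBER       NOT NULL REFERENCES store_dim (store_key),
        product_key  NUMBER       NOT NULL REFERENCES product_dim (product_key),
        sales_amount NUMBER(12,2) NOT NULL,
        units_sold   NUMBER       NOT NULL,
        CONSTRAINT sales_fact_pk PRIMARY KEY (time_key, store_key, product_key)
    );
    Each foreign key points at the primary key of a dimension table, and the grain of the fact is fixed by the composite primary key.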

  • How to resolve many fact tables and Dimensions tables

    Hi,
    The scenario is that we have many fact and dimension tables. Based on some conditions, one measure from one fact will be divided by another fact's measure. I have encountered many errors like "Unable to navigate ...". How do I resolve these errors and reduce many to few? (I assume by creating logical tables, but are there any other alternatives?)
    thanks
    Suresh

    Suresh,
    I assume that you know how to create a single logical fact from n physical facts, i.e. only if the fact tables are related. Then join all the conformed dimensions to this single logical table using a join in the Business Model layer. Remember to set the mappings in the LTS. Also, if you have any hierarchies, please set the aggregation level for those.
    - Red

  • How to create logical fact table in BMM layer ?

    Hello,
    I have 3 dimension tables - 2 are in one schema and the last is in another schema. Using these 3 dimension tables, I need to create a logical fact table.
    So, my question is whether we can create this fact table by joining these 3 dimension tables, which are in 2 different schemas?
    Thanks

    Fiaz,
    you are correct. We can use tables from different subject areas to create a report. However, my question was related to the RPD design. Sorry, I was not very clear about the queries earlier.
    Here is the whole scenario in the physical layer of the rpd
    Table name   Database name   Connection pool name   Schema name
    AV           AV_PXRPAM       AVAILABILITY           CRMODDEV
    OUTAGE       AV_PXRPAM       AVAILABILITY           CRMODDEV
    COMPANY      PXRPAM          PXRPAM_POOL            CRMODDEV
    AV and OUTAGE are already joined. I want to make a join between COMPANY and OUTAGE. Then I want to include a column from each of the above tables in the logical fact table in the BMM layer, and then build a star schema between the logical fact table and the above 3 tables in the BMM layer.
    Thanks

  • OBIA How to truncate all fact tables automatically before running a full load

    With every run, data gets appended to the fact tables. How do we configure it so that with every run the data gets deleted before we start the load?
    We have OBIA 11g

    If you are using DAC for scheduling the ETL jobs, you can list out the fact tables that are being used in your execution plan and set the refresh dates for those tables to NULL. This makes the DAC execution plan load the data freshly into the fact tables.
    Steps to make Refresh date as NULL in DAC:
    To make the refresh dates as NULL for your fact tables in DAC, go to Setup tab -> Physical Data Sources
    Now select the connection Datawarehouse(In my case) -> Refresh Dates
    Query your fact tables and go to Refresh Date column and click on the calendar Icon
    Click on NULL button to make the refresh date as NULL for that particular table and click on ok and save.
    In the similar manner do it for all the fact tables you want.
    After the above process, once you run the execution plan then data will be loaded freshly.
    Regards,
    Obul

  • Design of Fact Table

    Hello,
    I am relatively new to BI and am wondering about the pros and cons of a particular design of a fact table and associated dimensions. In the first case below, the fact table would have one row with fields T, N, M, for which the codes would come from a dimension (the Tis etc. just demonstrate the idea - normally an integer key would be in place).
    In the second case I have two dimensions - one for the criterion (T, N, or M) and another table for the actual criterion values. This approach would require numerous rows for each ID - again, the fact table would just be populated with keys from the two dimensions rather than the actual values seen below.
    Is either of these just plain wrong, or are both valid but one better than the other?
    Thanks kindly for any consideration.
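    To make the comparison concrete, here is a rough sketch of the two designs being weighed, using purely illustrative names (the actual columns from the original post are not shown here):
    -- Design 1: one fact row per ID, with one key column per criterion (T, N, M)
    CREATE TABLE staging_fact_wide (
        case_id NUMBER NOT NULL,
        t_key   NUMBER NOT NULL,   -- key of the T value, e.g. 'Tis', 'T1', ...
        n_key   NUMBER NOT NULL,   -- key of the N value, e.g. 'N0', 'N3', ...
        m_key   NUMBER NOT NULL    -- key of the M value, e.g. 'M0', 'M1', ...
    );
    -- Design 2: one fact row per (ID, criterion), with a criterion dimension and a value dimension
    CREATE TABLE staging_fact_narrow (
        case_id       NUMBER NOT NULL,
        criterion_key NUMBER NOT NULL,   -- refers to T, N or M
        value_key     NUMBER NOT NULL    -- refers to Tis, N3, M0, ...
    );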

    Hello Nimesh,
    Thank you for the thoughts. Please see my replies below.
    1. Will there be any other/additional values that need to be added in future? (Except T, M, N)
    There will initially be a set of values including T, M, N (more may be added in the future depending on what new things international cancer research comes up with, but for now these are pretty static).
    2. What will the value be if ID 1234 is not related to N or M?
    There are a whole bunch of other measurements that are taken in addition to T, N, and M. Typically, one sees a fact table as narrow but long. In this case, since there are many different dimensions for staging a cancer, the fact table could be 10 columns wide AND long.
    3. What is the relation between (T and Tis) or (N and N3)? Can it be linked one to many?
    Tis is only one possible value for the measurement T. There could be T1, Tx, T2, etc. These are all just types of T measurements. Same for N and M. One can say that if a patient has a T of Tis, an N of N3 and an M of M0, then their cancer stage is III.
    4. If it's a dimensional model, you can combine small dimensions into one dimension.
    Do you have a specific example of this in mind, or one I can look at on a website?

  • How to change the fact table in backend query

    Hi
    I have a criteria where f1 is the fact table coming in the backend query. How can I change/modify it so that if I select the same criteria it comes from a different fact table, f2?
    Please suggest.

    Hi Hussain,
    I have a measure 'po amount' which is coming from two fact tables, cost_f and line_f, in the physical layer. I have an implicit fact column in the presentation layer on a column (internal row count) from the cost_f table.
    Now, when I take only 'po amount' in the criteria, the backend query should use the cost_f table, but I am seeing the line_f table.
    The reason I am checking in this direction is:
    I have a criteria with 4 columns plus the measure column 'po amount', and when I run it the result is fetched from the line_f table.
    With the same 4 columns and measure column 'po amount', when one new column 'cost center' is added to the criteria, the fact table changes to the cost_f table.
    In both cases the result should come from the cost_f table; I am not sure why line_f is coming in the backend query.
    Please suggest.

  • How to create the fact table

    Please let me know how to create the fact table by using PL/SQL packages.
    Edited by: 792988 on Sep 6, 2010 3:34 AM

    Please let us know something about your fact table.
    There's no create_fact_table() procedure that could satisfy everybody.

  • How to create solved fact table and corresponding cube

    Hello,
    I want to create a cube with a solved fact table. That means I need to feed data for the higher levels of the dimension from the fact table as well, instead of aggregating from the base level.
    I am using star schema and AWM 10g R2 for creating cube.
    If anyone knows how to do this, it would be very beneficial.
    Thanks
    Subash

    I have generated the parent-child script using cwm2_olap_pc_transform.create_script. After running this generated script, it created 3 tables/views.
    My Base Parent Child table is like this:-
    drop table PARENT_CHILD;
    create table PARENT_CHILD (PARENT varchar2(30), CHILD varchar2(30));
    insert into PARENT_CHILD values ('Eligible', 'Compliant');
    insert into PARENT_CHILD values ('Eligible', 'Non-Compliant');
    insert into PARENT_CHILD values ('All', 'Eligible');
    insert into PARENT_CHILD values ('All', 'Ineligible');
    insert into PARENT_CHILD values (null, 'All');
    After running the script generated through cwm2_olap_pc_transform.create_script, it created:
    Table - PARENT_CHILD_SOLVED
    View - PARENT_CHILD_SOLVED_view , PARENT_CHILD_view
    The script also inserted data into the above tables/views (5 records in each). The table/view structure is like this:
    SQL> desc PARENT_CHILD_view
    Name Null? Type
    GID NUMBER
    CHILD1 VARCHAR2(30)
    CHILD2 VARCHAR2(30)
    CHILD3 VARCHAR2(30)
    Data:-
    0 All Eligible Compliant
    0 All Eligible Non-Compliant
    1 All Eligible
    1 All Ineligible
    3 All
    SQL> desc PARENT_CHILD_SOLVED
    Name Null? Type
    GID NUMBER
    CHILD1 VARCHAR2(30)
    CHILD2 VARCHAR2(30)
    CHILD3 VARCHAR2(30)
    Data:-
    0 All Eligible Compliant
    0 All Eligible Non-Compliant
    1 All Eligible
    1 All Ineligible
    3 All
    SQL> desc PARENT_CHILD_SOLVED_view
    Name Null? Type
    GID NUMBER
    SHORT_DESCRIPTION VARCHAR2(30)
    LONG_DESCRIPTION VARCHAR2(30)
    CHILD1 VARCHAR2(30)
    CHILD2 VARCHAR2(30)
    CHILD3 VARCHAR2(30)
    Data:-
    0 Compliant Compliant All Eligible Compliant
    0 Non-Compliant Non-Compliant All Eligible Non-Compliant
    1 Eligible Eligible All Eligible
    1 Ineligible Ineligible All Ineligible
    3 All All All
    I tried to create a dim and cube based on this. I am not sure whether it's correct or not! Though the validate_dimension API call shows that it is valid.
    Script for Dim:-
    DECLARE
    -- variable to hold error message
    errtxt varchar(60);
    BEGIN
    -- To be on safer side just drop dimension before creating new one and catch exceptions
    BEGIN
    cwm2_olap_dimension.drop_dimension('APPS', 'HCP_DIM_PC');
    EXCEPTION
    WHEN OTHERS THEN
    dbms_output.put_line('Dimension HCP_DIM_PC not dropped');
    END;
    cwm2_olap_dimension.create_dimension(
    'APPS',
    'HCP_DIM_PC',
    'Parent Child',
    NULL,
    'Parent Child',
    'Parent Child',
    NULL);
    cwm2_olap_dimension_attribute.create_dimension_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Short Description',
    'Short Description',
    'Short Description',
    'Short Description',
    TRUE);
    cwm2_olap_dimension_attribute.create_dimension_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Long Description',
    'Long Description',
    'Long Description',
    'Long Description',
    TRUE);
    cwm2_olap_dimension_attribute.create_dimension_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Grouping ID',
    'Grouping ID',
    'Grouping ID',
    'Grouping ID',
    TRUE);
    cwm2_olap_hierarchy.create_hierarchy(
    'APPS',
    'HCP_DIM_PC',
    'HCP_DIM_PC_HIER',
    'Standard',
    'Standard',
    'Standard Parent Child Hierarchy',
    'SOLVED LEVEL-BASED');
    cwm2_olap_dimension.SET_DEFAULT_DISPLAY_HIERARCHY ('APPS', 'HCP_DIM_PC', 'HCP_DIM_PC_HIER');
    cwm2_olap_level.create_level(
    'APPS',
    'HCP_DIM_PC',
    'ALL_PARENT_LVL',
    'All Parent Child',
    'All Parent Child',
    'All Parent Child',
    'All Parent Child Level');
    cwm2_olap_level_attribute.create_level_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Short Description',
    'ALL_PARENT_LVL',
    'Short Description',
    'Short Description',
    'Short Description',
    'Short Description',
    TRUE);
    cwm2_olap_level_attribute.create_level_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Long Description',
    'ALL_PARENT_LVL',
    'Long Description',
    'Long Description',
    'Long Description',
    'Long Description',
    TRUE);
    cwm2_olap_level_attribute.create_level_attribute(
    'APPS',
    'HCP_DIM_PC',
    'Grouping ID',
    'ALL_PARENT_LVL',
    'Grouping ID',
    'Grouping ID',
    'Grouping ID',
    'HTB Grouping ID',
    TRUE);
    -- Add all levels one by one to dimension hierarchy. For top most level last parameter is null.
    cwm2_olap_level.add_level_to_hierarchy(
    'APPS',
    'HCP_DIM_PC',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL',
    NULL);
    BEGIN
    cwm2_olap_table_map.removemap_dimtbl_hierlevel(
    'APPS',
    'HCP_DIM_PC',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL');
    cwm2_olap_table_map.removemap_DimTbl_HierLevelAttr(
    'APPS', 'HCP_DIM_PC', 'Short Description', 'HCP_DIM_PC_HIER', 'ALL_PARENT_LVL', 'Short Description');
    cwm2_olap_table_map.removemap_DimTbl_HierLevelAttr(
    'APPS', 'HCP_DIM_PC', 'Long Description', 'HCP_DIM_PC_HIER', 'ALL_PARENT_LVL', 'Long Description');
    cwm2_olap_table_map.removemap_DimTbl_HierLevelAttr(
    'APPS', 'HCP_DIM_PC', 'Grouping ID', 'HCP_DIM_PC_HIER', 'ALL_PARENT_LVL', 'Grouping ID');
    EXCEPTION
    WHEN OTHERS THEN
    dbms_output.put_line('Level map for ALL_PARENT_LVL not removed');
    END;
    -- Map ALL_PARENT_LVL level to dimension table. Last parameter is null since it is top most level
    cwm2_olap_table_map.map_dimtbl_hierlevel(
    'APPS',
    'HCP_DIM_PC',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL',
    'APPS',
    'PARENT_CHILD_SOLVED_VIEW',
    'GID',
    NULL);
    -- one by one map all the level attributes to respective columns in the dimension table.
    cwm2_olap_table_map.Map_DimTbl_HierLevelAttr(
    'APPS',
    'HCP_DIM_PC',
    'Short Description',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL',
    'Short Description',
    'APPS',
    'PARENT_CHILD_SOLVED_VIEW',
    'SHORT_DESCRIPTION');
    cwm2_olap_table_map.Map_DimTbl_HierLevelAttr(
    'APPS',
    'HCP_DIM_PC',
    'Long Description',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL',
    'Long Description',
    'APPS',
    'PARENT_CHILD_SOLVED_VIEW',
    'LONG_DESCRIPTION');
    cwm2_olap_table_map.Map_DimTbl_HierLevelAttr(
    'APPS',
    'HCP_DIM_PC',
    'Grouping ID',
    'HCP_DIM_PC_HIER',
    'ALL_PARENT_LVL',
    'Grouping ID',
    'APPS',
    'PARENT_CHILD_SOLVED_VIEW',
    'GID');
    -- Use cwm2_olap_validate.validate_dimension to validate the dimension
    cwm2_olap_validate.validate_dimension('APPS', 'HCP_DIM_PC');
    cwm2_olap_metadata_refresh.mr_refresh;
    -- Rollback if any exception occurs during processing this script
    EXCEPTION
    WHEN OTHERS THEN
    cwm_utility.dump_error;
    errtxt := cwm_utility.get_last_error_description;
    dbms_output.put_line('ERROR: ' || errtxt);
    ROLLBACK;
    RAISE;
    END;
    Script for Cube:-
    declare
    HCP_time_dim number;
    errtxt varchar(60);
    begin
    cwm_utility.collect_garbage;
    begin
    cwm2_olap_cube.drop_cube('APPS', 'HCP_PC_CUBE');
    exception
    when others then
    dbms_output.put_line('No HCP_PC_CUBE to drop');
    end;
    begin
    cwm2_olap_catalog.drop_catalog('HCP_PC_CAT');
    exception
    when others then
    dbms_output.put_line('No HCP_PC_CAT to drop');
    end;
    CWM2_OLAP_CUBE.Create_Cube('APPS', 'HCP_PC_CUBE', 'Parent Child Cube', 'Parent Child Cube','Parent Child Cube');
    cwm2_olap_cube.add_dimension_to_cube('APPS', 'HCP_PC_CUBE','APPS', 'HCP_DIM_PC');
    cwm2_olap_measure.create_measure('APPS', 'HCP_PC_CUBE', 'HCP_PC_MEASURE', 'PC Measure','PC Measure', 'PC Measure Fact');
    cwm2_olap_table_map.map_facttbl_levelkey('APPS','HCP_PC_CUBE','APPS','HCP_PC_FACT','ET', 'DIM:APPS.HCP_DIM_PC/HIER:HCP_DIM_PC_HIER/GID:GID/LVL:ALL_PARENT_LVL/COL:CHILD3;');
    cwm2_olap_table_map.Map_FactTbl_Measure('APPS', 'HCP_PC_CUBE','HCP_PC_MEASURE', 'APPS', 'HCP_PC_FACT', 'MEASURE_COL', 'DIM:APPS.HCP_DIM_PC/HIER:HCP_DIM_PC_HIER/GID:GID/LVL:CHILD3/COL:SHORT_DESCRIPTION;');
    cwm2_olap_catalog.create_catalog('HCP_PC_CAT', 'Parent Child Catalog');
    cwm2_olap_catalog.add_catalog_entity('HCP_PC_CAT', 'APPS', 'HCP_PC_CUBE', 'HCP_PC_MEASURE');
    cwm2_olap_validate.validate_cube('APPS', 'HCP_PC_CUBE','DEFAULT','YES');
    cwm2_olap_metadata_refresh.mr_refresh;
    exception
    when others then
    cwm_utility.dump_error;
    errtxt := cwm_utility.get_last_error_description;
    dbms_output.put_line('ERROR: ' || errtxt);
    rollback;
    raise;
    END;
    My Fact Table is :-
    DROP TABLE HCP_PC_FACT CASCADE CONSTRAINT;
    CREATE TABLE HCP_PC_FACT (
    SHORT_DESCRIPTION VARCHAR2(30) NOT NULL,
    GID NUMBER NOT NULL,
    CHILD1 VARCHAR2(30) NOT NULL,
    CHILD2 VARCHAR2(30) ,
    CHILD3 VARCHAR2(30) ,
    MEASURE_COL NUMBER NOT NULL);
    Data in Fact Table:-
    insert into HCP_PC_FACT values('Compliant',0,'All','Eligible','Compliant', 100);
    insert into HCP_PC_FACT values('Non-Compliant',0,'All','Eligible','Non-Compliant', 200);
    insert into HCP_PC_FACT values('Eligible',1,'All','Eligible',null, 300);
    insert into HCP_PC_FACT values('Ineligible',1,'All','Ineligible',null, 400);
    insert into HCP_PC_FACT values('All',3,'All',null,null, 500);
    I am not sure how to create the level, level attr and dim attr for such a dim.
    All the CWM2 validation APIs show that all my dims and cubes are valid, but when I try to create a presentation through JDev it hangs after selecting the Parent-Child Measure.
    Any complete working example would be helpful.
    P.S. One more query I have: can we have one solved and one unsolved dim in the same cube/measure?
    regds
    Prakash

  • How to skip the fact table  /BI0/9AEDFC01 error  while import phase in Heterogeneous migration

    Hi.
    Please find below the issue with the fact table during the import phase of the OS/DB migration; the log is enclosed for reference.
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 14248
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job completed
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 14265
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    (SQL) ERROR: Invalid entry at line 5 in file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (DB) INFO: disconnected from DB
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925104442
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 23939
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    (SQL) ERROR: Invalid entry at line 5 in file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (IMP) INFO: a failed DROP attempt is not necessarily a problem
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (DB) INFO: disconnected from DB
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925104442
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925124401
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25323
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    ------------------ C-STACK ----------------------
    R3load[S](LinStackBacktrace+0x8c)[0x48d167]
    R3load[S](LinStack+0x35)[0x6ca6c5]
    R3load[S](CTrcStack2+0x48)[0x48fba1]
    R3load[S](SigIGenAction+0x212)[0x58b4fb]
    libpthread.so.0[T][0x397680f710]
    R3load[S](check_sql_list+0xab0)[0x61bbb0]
    R3load[S](DBDrop+0xbb)[0x60a8fb]
    R3load[S](import+0xde6)[0x620986]
    R3load[S](main_r3ldmain+0x1cfc)[0x6073bc]
    libc.so.6[T](__libc_start_main+0xfd)[0x397601ed5d]
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25932
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job completed
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25955
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (TSK) ERROR: file /tmp/sapinst_instdir/BS2013SR1/SCM703SR1/ORA/COPY/SYSTEM/STD/AS-ABAP/SAPDFACT_1.TSK.bck already seems to exist
                a previous run may not have been finished cleanly
                file /tmp/sapinst_instdir/BS2013SR1/SCM703SR1/ORA/COPY/SYSTEM/STD/AS-ABAP/SAPDFACT_1.TSK possibly corrupted
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925125540

    Dear Ram Nath,
    It may be late, but it will be useful to others.
    I faced a similar issue and solved it by commenting out the 5th line of the DFACT.SQL file.
    The issue was with the indexes not being generated properly. To avoid such issues, ensure that SAP note 1991576 is implemented beforehand; if not, you can comment out the 5th line.
    Below is the corrected file in my case
    # ORACLE : NATIVE SQL EXPORT GENERATED AT 20150426083248
    #ind:  - commented line (earlier it was just 'ind: ')
    ind: /BI0/E0PPM_VC1~0
    tab: /BI0/F0PPM_VC1
    Please let me know in case of queries if any.
    Regards
    Baranedharan S.
