Cache purging in Physical table

Hi Gurus,
There are "Cache never expires" and "Cache persistence time" options available on every physical table in the Physical layer of OBIEE. I wanted to know: if we set the cache persistence time to 10 minutes, how will the cache be purged automatically?
thanks

Hi Gurus,
I totally agree with you, but could you please pay attention to the points below:
Are you sure that the physical table cache purging is done by the Oracle BI Server,
and that it cannot be tracked by the Administrator?
What about "Cache never expires" -- can we purge this cache from the Cache Manager?
Please confirm.
thanks
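For anyone wondering what the "Cache persistence time" setting amounts to conceptually, here is a minimal Python sketch. This is not the actual Oracle BI Server implementation, and all names here are made up: each cache entry records when it was created, and a lookup whose entry has outlived the persistence time purges it and re-runs the query.

```python
import time

# Hypothetical sketch of per-table cache expiry, NOT the real BI Server logic.
PERSIST_SECONDS = 10 * 60  # a "Cache persistence time" of 10 minutes

cache = {}  # query text -> (result, created_at)

def get(query, run_query, now=None):
    """Return a cached result unless its persistence time has elapsed."""
    now = time.time() if now is None else now
    entry = cache.get(query)
    if entry is not None:
        result, created_at = entry
        if now - created_at < PERSIST_SECONDS:
            return result          # cache hit, still fresh
        del cache[query]           # expired: purge the stale entry
    result = run_query()           # re-run against the database
    cache[query] = (result, now)
    return result
```

So "automatic purging" here just means the entry is discarded (and the query re-run) the first time it is requested after the persistence window has passed; nothing in this sketch removes entries while they sit untouched.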

Similar Messages

  • Dynamic physical table name vs. Cache

    Hello, Experts!
    I'm facing quite an interesting problem. I have two physical tables with the same structure but different data. The requirement is to show the same reports against one table or the other. The idea is to change the physical table name dynamically using a session variable. The session variable can be changed in the UI, so this worked until caching was turned on. With caching on, the logical statements sent to the OBI back end are identical even for different values of the session variable that stores the physical table name. Once the cache is populated, every user gets values from the cache. This is a possible source of discrepancy, because some users might run reports against tableA values and some against tableB values.
    Are there any options to make OBI use the data from the proper physical table (i.e., according to the session variable value)? Cloning the model is not an option, because maintaining both would be far too hard and complex; besides, the same reports sometimes need to work with one table name and sometimes with the other...
    PS. Cache is set to be common for all users.
    Lucas

    Thank you, I've found another way to make it work. In fact there are two ways of doing it: filter the LTS and have all data filtered from a single table with a session variable, or use fragmentation content, also with a session variable.
    Now the tricky part is to set the variable from the UI. Currently I'm issuing raw SQL: call NQSSetSessionValue( 'String SV_SIGNOFF=aaa;' ), but I have to figure out how to change a non-system session variable's value without needing administrator rights.
    There is the GoURL method, but it's not working...
    2. Add In ORACLE_HOME/bifoundation/web/display/authenticationschemas.xml
    <RequestVariable source="url" type="informational" nameInSource="lang"
    biVariableName="NQ_SESSION.LOCALE" />
    inside the top <AuthenticationSchemaGroup> </AuthenticationSchemaGroup> tag

  • Cache Purging Using Event Poll Table

    Hi Experts,
    We are facing a problem when we use the event polling table to purge the cache in OBIEE 10.1.3.4.1: the Oracle BI Server goes down at the polling frequency that has been set. However, with OBIEE 10.1.3.4.0, the purging happens without any problems.
    Please give me your suggestions on this.
    Thanks & Regards
    Naresh

    HI Naresh,
    I am interested in your purging mechanism for the OBIEE cache. Can you please let me know how to do cache purging? Also, pointers on purging the cache manually would be appreciated.
    Thanks in Advance
    svr

  • How to set up automatic cache purge

    I am using the event polling table for setting up automatic cache purging.
    I have created a table 'event polling table' in the database with the following fields:
    CATALOGNAME
    DATABASENAME
    OTHER
    SCHEMANAME
    TABLENAME
    UPDATETIME
    UPDATETYPE
    LOAD_ID
    with TABLENAME, UPDATETIME, UPDATETYPE as non-nullable fields.
    Then I have imported the table to the physical layer of the rpd, set up 'Oracle BI Event tables' in the Utilities as this table and set up a polling frequency of 15min.
    I have created a report with 2 columns (DeptID, Saleamt) from 2 tables (Department, Sales) respectively. Both these tables were made 'cacheable' in the physical layer with cache never expires. Then the saved report is run again after logging in once more, so a cache entry is registered. My NQSConfig file enables caching.
    I added these 2 tables to the event polling table after the cache entry was registered. Technically, after 15 minutes this cache should be purged automatically, since the request was created from these 2 tables. But that does not happen. Even after half an hour, when I log in and run this report, a cache hit occurs because the cache still persists. My cache folder is C:\OracleBIData\cache. I also check the entries in the repository (Manage -> Cache).
    Please let me know where I am making a mistake.
    My goal is to get the cache purged automatically every time there's an entry in the EPT. (To test, when I create an entry in the EPT I give the table name and the update type (=1); the update time is obtained automatically as sysdate.)
    Thanks
    ===================================================
    For reference I have copied an earlier posting here:
    ===================================================
    Let's look at all your questions one by one.
    Q: As the event polling table gets a row added for each table whose data is updated at the end of each ETL run, when exactly does the BI Server purge the cache related to that table? A: As soon as the polling happens and it finds a new record in your table.
    Q: Say I set up a polling frequency (in Tools -> Utilities -> Oracle BI Event Tables) of every 60 minutes. At the end of every 60 minutes, does the BI Server restart by itself, or does the polling take place without a restart? A: Polling takes place whenever the server is up; there is no need to restart.
    Q: At the end of every 60 minutes, does the BI Server poll the event polling table whether or not any new rows were added to it? A: Yes.
    Q: What exactly happens when a new record is added to the event polling table in the database? A: The cache is purged.
    Q: If at the end of the ETL run the data in 3 tables has changed, would 3 records be added to the event polling table in the database? A: Yes, and they are processed after the polling happens.
    Let me know if you have more questions...
    Another way you can set it up is to create a batch file which purges a table as it is updated (use a procedure to capture the tables which have a value of 1 for update_type, or use triggers to populate a table with these table names). Once purged, you can seed your cache using iBots (use JavaScript).
    =============================================================================
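The polling cycle described in the quoted answer can be sketched as a toy simulation. The table layout and the cache bookkeeping below are deliberate simplifications and invented names, not the BI Server's real internals: one cycle reads the new rows, purges any cache entry that depends on a listed table, then deletes the processed rows.

```python
import sqlite3

# Toy event polling table with a simplified column set.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE event_polling (
    update_type  INTEGER NOT NULL,
    update_time  TEXT    NOT NULL,
    db_name      TEXT, catalog_name TEXT, schema_name TEXT,
    table_name   TEXT    NOT NULL)""")

# The ETL adds one row per changed table.
conn.execute("INSERT INTO event_polling VALUES "
             "(1, '2009-03-27 16:59:19', 'Trading', 'remedy', 'MPO_READ', 'SALES')")

# Pretend cache: each entry remembers which physical tables it depends on.
cache = {"report1": {"DEPARTMENT", "SALES"}, "report2": {"DEPARTMENT"}}

def poll_once(conn, cache):
    """One polling cycle: purge cache entries that touch the listed
    tables, then remove the processed rows from the polling table."""
    rows = conn.execute(
        "SELECT table_name, update_time FROM event_polling").fetchall()
    stale = {t for t, _ in rows}
    for key in [k for k, deps in cache.items() if deps & stale]:
        del cache[key]                       # purge dependent entries
    for _, ts in rows:                       # mark the rows as processed
        conn.execute("DELETE FROM event_polling WHERE update_time = ?", (ts,))
    conn.commit()
    return stale

poll_once(conn, cache)
```

After the cycle, only entries untouched by the changed tables survive in the cache, and the polling table is empty again, which matches the delete-by-UPDATE_TIME statements visible in the log excerpt below.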

    I seem to have all the steps running except #3 - which is clearing the cache!
    select T54927.UPDATE_TYPE as c1,
    T54927.UPDATE_TIME as c2,
    T54927.DB_NAME as c3,
    T54927.CATALOG_NAME as c4,
    T54927.SCHEMA_NAME as c5,
    T54927.TABLE_NAME as c6
    from
    MPO_OWNER.MPO_EVENT_POLLING T54927
    where ( T54927.OTHER in ('') or T54927.OTHER is null )
    minus
    select T54927.UPDATE_TYPE as c1,
    T54927.UPDATE_TIME as c2,
    T54927.DB_NAME as c3,
    T54927.CATALOG_NAME as c4,
    T54927.SCHEMA_NAME as c5,
    T54927.TABLE_NAME as c6
    from
    MPO_OWNER.MPO_EVENT_POLLING T54927
    where ( T54927.OTHER = 'cac7057v' )
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121787>>, Close Row Count = 1, Row Width = 384 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Sending query to database named Event Polling (id: <<121830>>):
    insert into
    MPO_OWNER.MPO_EVENT_POLLING("UPDATE_TYPE", "UPDATE_TIME", "DB_NAME", "CATALOG_NAME", "SCHEMA_NAME", "TABLE_NAME", "OTHER") values (1, TIMESTAMP '2009-03-27 16:59:19', 'Trading', 'remedy', 'MPO_READ', 'MPO_POSITION_DATE_CHOICES', 'cac7057v')
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121830>>, Close Row Count = 0, Row Width = 0 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121830>> DbGateway Exchange, Close Row Count = 0, Row Width = 0 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121830>> DbGateway Exchange, Close Row Count = 0, Row Width = 0 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121787>> DbGateway Exchange, Close Row Count = 1, Row Width = 384 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121787>> DbGateway Exchange, Close Row Count = 1, Row Width = 384 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Sending query to database named Event Polling (id: <<121831>>):
    select T54927.UPDATE_TIME as c1
    from
    MPO_OWNER.MPO_EVENT_POLLING T54927
    where ( T54927.OTHER = 'cac7057v' )
    group by T54927.UPDATE_TIME
    having count(T54927.UPDATE_TIME) = 1
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121831>>, Close Row Count = 1, Row Width = 16 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121831>> DbGateway Exchange, Close Row Count = 1, Row Width = 16 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Execution Node: <<121831>> DbGateway Exchange, Close Row Count = 1, Row Width = 16 bytes
    +++Administrator:fffe0000:fffe021b:----2009/03/27 17:02:14
    -------------------- Sending query to database named Event Polling (id: <<121875>>):
    delete from
    MPO_OWNER.MPO_EVENT_POLLING where MPO_OWNER.MPO_EVENT_POLLING.UPDATE_TIME = TIMESTAMP '2009-03-27 16:59:19'
    What am I missing? My NQServer.log shows this:
    The physical table Trading:remedy:MPO_READ:MPO_POSITION_DATE_CHOICES in a cache polled row does not exist.
    Any thoughts are appreciated.
    Thanks
    Mano

  • Implementing Automatic cache purging

    Hi All ,
    I want to implement automatic cache purging using an event polling table in OBIEE.
    I followed one site, which asked me to create a table in the database with the following columns:
    1. update_type
    2. update_date
    3. databasename
    4. catalogname
    5. schemaname
    6. tablename
    Here I have one doubt: in my rpd I have two tables which are used in 4 catalogs. So my question is: how do I know which particular catalog the table came from, and how should I populate the catalog names in the backend table?
    If anyone knows, please let me know.
    Thanks
    Sree
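One way to read Sree's question: if a physical table feeds several catalogs, emit one polling row per catalog it appears in. A hypothetical sketch of that bookkeeping, where all table, schema, database and catalog names are invented for illustration:

```python
# Hypothetical mapping of each physical table to the catalogs that use it.
TABLE_CATALOGS = {
    "W_SALES_F": ["Catalog_A", "Catalog_B", "Catalog_C", "Catalog_D"],
}

def polling_rows(table_name, update_time):
    """Build one event-polling row per catalog the table appears in,
    so the purge covers every catalog that uses it."""
    return [
        {"update_type": 1, "update_time": update_time,
         "databasename": "ORCL", "catalogname": cat,
         "schemaname": "DWH", "tablename": table_name}
        for cat in TABLE_CATALOGS[table_name]
    ]
```

The ETL (or a trigger) would then insert each generated row into the backend polling table at the end of the load.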

    Hi,
    The below links should help you
    http://obiee101.blogspot.com/2008/03/obiee-manage-cache-part-1.html
    and
    http://oraclebizint.wordpress.com/2007/12/18/oracle-bi-ee-101332-scheduling-cache-purging/
    To purge the cache automatically you have to set the cache persistent time in the tables present in the physical layer. There you can mention the time after which you want to purge the cache. The steps are provided below:
    1. Double click on the table in the physical layer.
    2. Select the General tab.
    3. Select the Cacheable option.
    4. Select the Cache Persistent time.
    5. Specify the time interval after which you want the cache to be refreshed.
    You have to do the same for all tables whose cache you want to purge.
    Thanks
    Deva
    Edited by: Devarasu on Sep 28, 2011 2:39 PM

  • Using multiple physical tables in a single logical dimension table

    I have two physical tables that are related on a 1 to 1 basis based on a natural key. One of these tables is already part of my RPD file (actually, it is the W_EMPLOYEE_D from the Oracle BI Applications). The second table contains additional employee attributes from a custom table added to the data warehouse. Unfortunately, I don't seem to be able to display ANY data from this newly added custom table! I'm running on OBIEE 11.1.1.6.
    Here's what I've tried to do. Lets call the original table E1 and the new one E2. E1 is part of the repository already, and has functioned perfectly for years.
    - In my physical model, I have imported E2 and defined the join between E1 and E2.
    - In my logical table for E1, I've mapped E2 to E1 (E2 appears as a source), set up an INNER JOIN in the joins section for E1 and added the attributes from E2 in the folder
    - In the SOURCES for this logical table, I've set the logical level of the content for E2 appropriately (detail level, same as E1)
    - In my presentation folder for E1, I've copied in the attributes from E2 that were included in my logical table
    Consistency check runs smoothly, no warnings or errors. Note: E2 contains hundreds of rows, all of which have matching records in E1.
    Now, when I create an analysis that includes only an attribute sourced from E2, I get a single row returned, with a NULL value. If I create an analysis that includes one attribute from E1 and one from E2, I get all the valid E1 records with data showing, but with NULL for the E2 attributes. Remember, I have an inner join, which means the query is "seeing" E2 data; it is just choosing not to show it to me!
    Additionally, I can grab the query from the NQQuery.log file. When I run this SQL in SQL*Developer, I get PERFECT results: both E1 and E2 attributes show up in the SQL, so the query engine is generating valid SQL. The log file does not indicate any errors either; it does show the correct number of rows being added to the cache.
    If I create a report that includes attributes from E1, E2 and associated fact metrics, I get similar results. The reports seem to run fine, but all my E2 attributes are NULL in Answers. I've verified the basics, like data types, etc., and when I "Query Related Objects" in the repository, everything looks consistent across all 3 layers and all objects. E2 is located in the same (Oracle) database and schema as E1, and there are no security constraints in effect.
    I've experimented with a lot of different things without success, but I expected that the above configuration should have worked. Note that I cannot set up E2 as a new separate dimension, as it does not contain the key value used to join to the facts, nor do the facts contain the natural key that is in both E1 and E2.
    Sorry for the long post - just trying to head off some of the questions you might have.
    Any ideas welcomed! Many thanks!
    Eric

    Hi Eric,
    I would like you to re-check the content level settings here, as they are the primary cause of this kind of behavior. You may notice that the same information is written in the logical plan of the query too.
    Also, regarding your description:
    "In the SOURCES for this logical table, I've set the logical level of the content for E2 appropriately (detail level, same as E1)"
    I would like you to check this point again: since you mapped E2 to E1 in the same logical table source with an inner join, you should set the content level at E1's levels, not E2's (E2 has now become part of the E1 hierarchy too). This might be the reason the BI Server is choosing to eliminate (null out) the values from E2, even though you can see them in the SQL client.
    Hope this helps.
    Thank you,
    Dhar

  • How to mark the physical tables as cacheable

    Hi All,
    Can someone please tell me how to mark physical tables as cacheable?
    Thanks a lot

    Hi,
    By default they are cacheable...
    You can see it in the repository: in the physical layer, right-click on the table, then Properties; on the General tab, check Cacheable...
    You can also specify a persistence time...
    See this article; you will find an image of the table properties:
    http://obieeblog.wordpress.com/2009/01/19/obiee-cache-is-enabled-but-why-is-the-query-not-cached/
    Regards
    Nicolae
    Edited by: Nicolae Ancuta on 26.05.2010 09:15

  • Using case when statement in the select query to create physical table

    Hello,
    I have a requirement where in I have to execute a case when statement with a session variable while creating a physical table using a select query. let me explain with an example.
    I have a physical table based on a select table with one column.
    SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this table as the NAME_PARAMETER table.
    I also have a customer table.
    In my dashboard, which has two pages, Page 1 contains a table based on the customer table, with column navigation to my second dashboard page.
    On my second dashboard page I created a report based on the NAME_PARAMETER table and a prompt based on the customer table that sets the NAME_PARAMETER request variable.
    EXECUTION
    When I click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
    Everything works as expected. Yes!!
    Now I created another table called NAME_PARAMETER1 with a small modification to the earlier table. The query is as follows:
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
    FROM DUAL
    (Note the missing opening quote before TEST_MART2, which explains the "quoted string not properly terminated" error below.)
    Now I pull this table into the second dashboard page along with the NAME_PARAMETER table report.
    Surprisingly, the NAME_PARAMETER table report executes as is, but the other report, based on the NAME_PARAMETER1 table, fails with the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
    SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
    If anyone has any explanation to this error and how we can achieve the same, please help.
    Thanks.

    Hello,
    Updates :) sorry... the error was a stupid one. I resolved it, and then got stuck at my next step.
    I am creating a physical table using a select query, but I am trying to obtain the name of the table dynamically.
    Here is what I am trying to do. The select query of the physical table is as follows:
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER
    The idea is to obtain the data from the same table in different schemas, dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know if it can be achieved by any other method in OBIEE.
    Thanks.
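What the VALUEOF(NQ_SESSION.SCHEMA_NAME) trick boils down to is string substitution into the SQL text before it reaches the database, so the variable's value deserves a whitelist. A hedged sketch of that idea (the template, the variable name and the schema list are assumptions for illustration, not an OBIEE API):

```python
# Sketch of schema-name substitution; names are illustrative only.
TEMPLATE = "SELECT CUSTOMER_ID, CUSTOMER_NAME FROM {schema}.CUSTOMER"

ALLOWED_SCHEMAS = {"TEST_MART1", "TEST_MART2"}

def build_query(session_vars):
    """Splice the session variable into the SQL text, but only after
    validating it, since this is plain string substitution."""
    schema = session_vars["SCHEMA_NAME"]
    if schema not in ALLOWED_SCHEMAS:
        raise ValueError("unknown schema: " + schema)
    return TEMPLATE.format(schema=schema)
```

The whitelist matters because anything spliced into the FROM clause as text is effectively trusted SQL; an unexpected value should fail loudly rather than run.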

  • Bridge CC. on opening message says unable to read cache, purge cache. This does not help.

    On opening, a message says "unable to read cache, purge cache". Purging the cache does not help and Bridge continues to hang. It was working fine before.

    Mac? Read this: http://forums.adobe.com/thread/1237168

  • OBIEE generated SQL differs if it's a Physical Table or Select Table...

    Hi!
    I have some tables defined in the Physical Layer, some of which are physical tables and others OBIEE "views" (tables created with a Select clause).
    My problem is that the generated SQL for the same table differs (as expected) depending on whether it is a Physical Table or a "Select Table", and this difference causes problems in the returned data. When it is a Physical Table, the final report returns the correct data, but when it is a Select Table it returns incorrect/incomplete data. The report joins this table with another table from a different database (it is a join between Sybase IQ and SQL Server).
    This is the generated SQL in the log:
    -- Physical Table generated SQL
    select T182880."sbl_cust_acct_row_id" as c1,
    T182880."sbl_cust_acct_ext_key" as c2,
    T182880."sbl_cust_source_sys" as c3
    from
    "SGC_X_KEY_ACCOUNT" T182880
    order by c2, c3
    -- "Select Table" generated SQL
    select
         sbl_cust_acct_ext_key,
         ltrim(rtrim(sbl_cust_source_sys)) as sbl_cust_source_sys,
         sbl_cust_acct_row_id,
         sbl_cust_acct_camp_contact_row_id,
         ods_date,
         ods_batch_no,
         ods_timestamp
    from dbo.SGC_X_KEY_ACCOUNT
    As you may notice, the main difference is the use of aliases (which I think has no influence on the report result) and the use of "Order By" (which I am starting to think is the main reason the correct data is returned).
    Don't forget that the OBIEE server is joining the data from this table with data from another table from a different database; the join is therefore made in memory (the OBIEE engine). Maybe in the OBIEE engine the Order By is essential to guarantee a correct join... but then again, I have some other tables in the Physical Layer that are defined as "Select" and their generated SQL uses the aliases and the Order By clause...
    In order to solve my problem, I had to transform the "Select Table" into a "Physical Table". The reason it was defined as a "Select Table" was that it had a restriction in the Where clause (which I have already eliminated, although the performance will be worse).
    I'm confused. Help!
    Thanks.
    FPG

    Hi FPG,
    Not sure if this is a potential issue for you at all, but I know it caused me all kinds of headaches before I figured it out. It had to do with the "Features" tab values in the database object's settings in the Physical Layer:
    Different SQL generated for physical table query vs. view object query?
    Mine had to do with SQL from view objects not being submitted as I would expect; yours sounds like it has more to do with "Order By". I believe I remember seeing some Order By and Group By settings in the "Features" list. You might make a copy of your RPD, experiment with setting some of those if they aren't already selected, and retest your queries with the new DB settings.
    Jeremy

  • "Select" Physical table as LTS for a Fact table

    Hi,
    I am very new to OBIEE, still in the learning phase.
    Scenario 1:
    I have a "Select" Physical table which is joined (inner join) to a Fact table in the Physical layer. I have other dimensions joined to this fact table.
    In BMM, I created a logical table for the fact table with 2 Logical Table Sources (the fact table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the fact table and the select table, I don't see any data for the select table column.
    Scenario 2:
    In this scenario, I created an inner join between "Select" physical table and a Dimension table instead of the Fact table.
    In BMM, I created a logical table for the dimension table with 2 Logical Table Sources (the dimension table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the dimension table and the select table, I see data for all the columns.
    What am I missing here? Why is it not working in first scenario?
    Any help is greatly appreciated.
    Thanks,
    SP

    Hi,
    If I understand your description correctly, your materialized view skips some (infrequently used) dimensions. However, when you reference these skipped dimensions in filters, the queries hit the materialized view and fail, as those values do not exist there. In this case, you could resolve it as follows:
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources, set the content tabs properly. (Yes, I think this is it.)
    When you skip some dimensions, the grain of the new fact source (the materialized view in this case) changes. For example:
    Say a fact is available with keys for the Product, Customer and Promotion dimensions. Its grain is Product * Customer * Promotion.
    Say another fact is available with keys for Product and Customer only. Its grain is Product * Customer (in fact, I would say it is Product * Customer * Promotion Total).
    So in the second case the grain of the table is different, and setting appropriate content levels for these sources lets the BI Server switch between them automatically.
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar
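Dhar's point about grain-based switching can be sketched roughly like this (the names are invented and the real BI Server navigator is far more involved): given the set of dimensions a query references, pick a logical table source whose content level covers all of them.

```python
# Toy model of content-level ("grain") based source selection.
SOURCES = [
    {"name": "MV_SALES",   "grain": {"Product", "Customer"}},
    {"name": "FACT_SALES", "grain": {"Product", "Customer", "Promotion"}},
]

def pick_source(query_dims):
    """Return the first source whose grain covers every dimension
    the query references; smaller sources are listed first so the
    cheaper aggregate wins when it qualifies."""
    for src in SOURCES:
        if query_dims <= src["grain"]:
            return src["name"]
    raise LookupError("no source at a compatible grain")
```

A query on Product alone can be served by the skinnier materialized view, while a query that also touches Promotion falls through to the detailed fact, which is the switching behavior the content-level settings enable.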

  • OBIEE 10g repository - Business model - logical table to physical table, column mapping is empty

    Hi, I am really new to OBIEE 10g.
    I have already set up a SQL Server 2005 database in the Physical layer and imported a view, vw_Dim_retail_branch.
    The view has 3 columns: branch_id, branch_code, branch_desc.
    Now I want to set up the Business model to map this physical table (view).
    I created a new Business model
    Added new logical table Dim_retail_branch
    In the sources, added the vw_Dim_retail_branch as source table.
    But in the Logical Table Source window, on the Column Mapping tab, it's blank. I thought it would identify all the columns from vw_Dim_retail_branch, but it does not. "Show mapped columns" is ticked.
    What should I do here? Manually type each column?

    HI,
    You can just drag and drop the columns from the physical layer to the BMM layer.
    Select the 3 columns and drag and drop them onto the created logical table in the BMM layer.
    For more reference: http://mkashu.blogspot.com
    Regards,
    VG

  • Create physical table using select in repository

    Hi Gurus,
    Can we create a physical table in the OBIEE 11.1.1.6 repository using a stored procedure and a select?
    What is the right syntax?
    Thank you so much
    JOE

    Hi,
    Yes. In the physical layer, just use a select statement like the one below, for example:
    select field1, field2, ..., field_n
    from tables
    UNION
    select field1, field2, ..., field_n
    from tables;
    http://gerardnico.com/wiki/dat/obiee/opaque_view
    http://www.clearpeaks.com/blog/oracle-bi-ee-11gusing-select_physical-in-obiee-11g
    http://allaboutobiee.blogspot.com/2012/05/obiee-11g-deployundeploy-view-in.html
    Thanks
    Deva

  • ERROR IN PHYSICAL TABLES JOIN

    The error i receive while performing global consistency check is : [38091] Physical table 'D_TIME__EVENT_TIME' joins to non-fact table 'FS_IND_SUBS_RGE_ACT' that is outside of its time dimension table source 'D_TIME__EVENT_TIME'.
    I have had this problem for some time and it is getting frustrating. The table D_TIME__EVENT_TIME is an alias of the d_time_event table. I have created a foreign key between the two tables D_TIME__EVENT_TIME (time dimension) and FS_IND_SUBS_RGE_ACT in the physical layer, but every time I check for consistency I get the error.
    I have created multiple star schemas using the time dimension table. There are a couple of other tables that have necessitated creating physical foreign keys with D_TIME__EVENT_TIME, but those produced no errors.
    As a sanity check, I created another alias (d_time_) of the parent table to validate the steps taken. I created a physical foreign key between the d_time_ table and fs_ind_subs_rge_act, and this was successful.
    I am at a loss as to what to do next. I need some help; any help.
    Edited by: 794286 on Sep 20, 2010 11:35 AM

    hi,
    Refer to Re: Time Dimension Problem; joe mentioned some good points there.
    thanks,
    saichand.v

  • Problem: 1 physical table -- multiple logical table sources

    Hi,
    I'm quite new to BIEE and setting up my repository.
    So I have a question: is the following scenario possible?
    Physical Layer: TABLE_A: COL_A, COL_B, COL_C
    TABLE_B: COL_D, COL_E, COL_F
    Join TABLE_A.COL_A = TABLE_B.COL_D
    In the Business Model I have a dimension table with TABLE_A as data source, with field DIM1 (COL_B).
    The fact table (MEASURE) would have TABLE_B twice as a data source, with different where-clauses on COL_F, and logical table columns (ATT1 and ATT2) both mapped to COL_E.
    So far I have created everything and the consistency check shows no errors or warnings, but when I create a report showing DIM1, ATT1, ATT2 I get an error in Answers: Incorrectly defined logical table source (for fact table MEASURE) does not contain mapping for [MEASURE.ATT1, MEASURE.ATT2].
    Isn't it possible to have one physical column used as multiple data sources?
    I know it works when I create the physical table twice... but maybe there's a solution in the business model.
    Thanks
    chrissy

    Hi mengesh,
    that's what I also tried, but it always returns the same error.
    I know it would work if I imported the physical table twice or more, but that's not what I want to do, because in the end I have 10 or more fields based on this one physical table. There's one field indicating what value is contained in the record, which means:
    COL_F | COL_E
    1 | customer name
    2 | customer number
    3 | customer branch
    4 | salesman
    5 | date
    6 | report number
    etc.
    I don't think it's useful to import the physical table as often as I need this field, so I want to split it in the business model.
    thanks
    chrissy
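What chrissy describes is essentially a pivot: TABLE_B stores one row per attribute, keyed by a type code in COL_F, and each code should become its own column. A rough Python sketch of that reshaping (in SQL this is the classic one-CASE-WHEN-per-code approach; the code-to-name mapping follows the list above):

```python
# Map each COL_F type code to the attribute column it represents.
CODE_NAMES = {1: "customer_name", 2: "customer_number", 3: "customer_branch",
              4: "salesman", 5: "date", 6: "report_number"}

def pivot(rows):
    """rows: (COL_D, COL_F, COL_E) triples -> {COL_D: {attribute: value}}.
    Each coded row becomes one named column on its key's record."""
    out = {}
    for key, code, value in rows:
        out.setdefault(key, {})[CODE_NAMES[code]] = value
    return out
```

In the repository, the equivalent is a single source with one expression per attribute, e.g. MAX(CASE WHEN COL_F = 1 THEN COL_E END), rather than importing the physical table once per field.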
