Repository with 2 or more fact tables
Hello everyone.
Is there any official document describing how to set up a repository with two or more fact tables? I have created a repository with three fact tables and the needed dimension tables, but there are some loops through the table links. Oracle's local representatives pointed out that these loops should be avoided, because they cause problems and the SELECT statements generated by the BI server are not logically correct. Is this setup officially supported by OBI, or should I maintain a separate BM for each fact?
Any help will be appreciated.
Stefan, thanks for your reply.
So the answer is that it is officially supported, but there are no (or there shouldn't be any) guidelines about things you should or should not do in such an approach?
I wonder if my BM design is simply wrong!
Similar Messages
-
A loop between two or more fact tables
Hi,
How does Discoverer resolve a loop between tables (a join involving more than one fact table in a star schema)?
MicroStrategy resolves this problem with separate SELECTs and a union of the results, whereas BO uses contexts.
Thank you.
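For what it's worth, the union-of-separate-selects approach can be sketched with a toy schema (all table and column names below are hypothetical, and SQLite stands in for the warehouse database): each fact is aggregated to the conformed dimension's grain in its own subquery, and the result sets are then merged, so no fact-to-fact loop ever appears in a single join tree.

```python
import sqlite3

# Toy star schema: one conformed dimension shared by two fact tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product  (prod_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales   (prod_id INTEGER, amount REAL);
CREATE TABLE fact_returns (prod_id INTEGER, amount REAL);
INSERT INTO dim_product  VALUES (1, 'A'), (2, 'B');
INSERT INTO fact_sales   VALUES (1, 100), (1, 50), (2, 70);
INSERT INTO fact_returns VALUES (1, 10);
""")

# Joining both facts to the dimension in ONE query creates a loop/fan trap
# and inflates the sums. Instead, aggregate each fact to the conformed grain
# in its own subquery, then merge the two result sets on the dimension.
rows = con.execute("""
SELECT d.name, s.sales_total, COALESCE(r.ret_total, 0) AS ret_total
FROM dim_product d
JOIN (SELECT prod_id, SUM(amount) AS sales_total
      FROM fact_sales GROUP BY prod_id) s        ON s.prod_id = d.prod_id
LEFT JOIN (SELECT prod_id, SUM(amount) AS ret_total
           FROM fact_returns GROUP BY prod_id) r ON r.prod_id = d.prod_id
ORDER BY d.name
""").fetchall()
print(rows)  # [('A', 150.0, 10.0), ('B', 70.0, 0)]
```

This is essentially what a BI server's "drill-across" SQL looks like: one aggregate query per fact, stitched together on the shared dimension keys.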
-
Best ways to create an RPD or reports if we have data in multiple fact tables
I have fact and dimensional data in one or more different tables, where each logical table source represents one data segment. Please suggest some methods or approaches, such as fragmentation, through which I can use them in creating the RPD and reports. The main problem here is that the facts are too large, containing 25 million records, and adding the tables in the BMM layer is affecting performance, so can anyone suggest other ways of doing it on the database side?
Thanks in advance
Edited by: user2989722 on Dec 3, 2009 3:09 PM
Hi,
For fragmentation you can create it on a dimension. The procedure is clearly explained in these blogs:
http://108obiee.blogspot.com/2009/01/fragmentation-in-obiee.html
http://www.rittmanmead.com/2007/06/19/obiee-data-modeling-tips-2-fragmentation/
From a performance point of view, you can create a materialized view (based on the columns your report is using) so that queries hit that particular view instead of the whole 25-million-record table. Please look into this post:
Re: Materialized views in OBIEE
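As a rough illustration of the materialized-view idea (names are hypothetical, and SQLite has no materialized views, so a plain summary table stands in for Oracle's CREATE MATERIALIZED VIEW): precompute the grouped result once, then point reports at the small summary instead of the large fact table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_big (day TEXT, store_id INTEGER, amount REAL);
INSERT INTO fact_big VALUES
  ('2009-12-01', 1, 10), ('2009-12-01', 1, 20),
  ('2009-12-01', 2,  5), ('2009-12-02', 1,  7);
""")

# Oracle would use CREATE MATERIALIZED VIEW with a refresh policy; SQLite
# has no MVs, so a plain summary table illustrates the same precomputation.
con.execute("""
CREATE TABLE mv_daily_sales AS
SELECT day, store_id, SUM(amount) AS total
FROM fact_big
GROUP BY day, store_id
""")

# Reports now scan the small summary instead of the big fact table.
total = con.execute(
    "SELECT total FROM mv_daily_sales WHERE day = '2009-12-01' AND store_id = 1"
).fetchone()[0]
print(total)  # 30.0
```

In Oracle, query rewrite can redirect report SQL to such a summary automatically; in OBIEE the equivalent is mapping the summary as an aggregate LTS.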
thanks,
saichand.v -
Problem with populating a fact table from dimension tables
My aim: there are 5 dimension tables that have been created:
Student->s_id primary key,upn(unique pupil no),name
Grade->g_id primary key,grade,exam_level,values
Subject->sb_id primary key,subjectid,subname
School->sc_id primary key,schoolno,school_name
year->y_id primary key,year(like 2008)
s_id,g_id,sb_id,sc_id,y_id are sequences
select * from student;
S_ID UPN FNAME COMMONNAME GENDER DOB
==============================
9062 1027 MELISSA ANNE f 13-OCT-81
9000 rows selected
select * from grade;
G_ID GRADE E_LEVEL VALUE
73 A a 120
74 B a 100
75 C a 80
76 D a 60
77 E a 40
78 F a 20
79 U a 0
80 X a 0
18 rows selected
These are basically the dimensional views.
Now, according to the specification given, I need to create a fact table (facts_table) which contains all the dimension tables' primary keys as foreign keys.
To illustrate the problem I'll consider a smaller example than the actual 5 dimension tables: let's say there are 2 dimension tables, student and grade, with s_id and g_id as primary keys.
create materialized view facts_table(s_id,g_id)
as
select s.s_id, g.g_id
from (select distinct s_id from student) s
   , (select distinct g_id from grade) g
This results in massive duplication, as there is no join between the two tables. But there is basically nothing in common between the two tables to join on, so how do I solve it?
Consider the amount of duplication involved when I do it for 5 tables; that's why there is not enough tablespace.
I was hoping, if there is no other way, to create a fact table with just one column initially:
create materialized view facts_table(s_id)
as
select s_id
from student;
then
alter materialized view facts_table add column g_id number;
and then populate this g_id column by fetching all the g_id values from the grade table using some sort of loop, even though we should not use PL/SQL. I don't know if this works?
Any suggestions?
Basically you are quite right to say that, without any logical common columns between the dimension tables, it will produce results implying that every student studied every subject and got every grade, which is rubbish.
I am confused as to whether the dimension tables can contain duplicated columns, i.e. a column like upn (unique pupil no) that I also copy into another table so that a join can be made when writing queries. I don't know whether that's right.
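One hedged way out of the cartesian-product trap, assuming some transactional source of exam entries exists (the exam_results table below is hypothetical): load the fact by joining that source to each dimension on its natural key to look up the surrogate keys, so a fact row only appears for combinations that actually occurred.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (s_id INTEGER PRIMARY KEY, upn INTEGER);
CREATE TABLE grade   (g_id INTEGER PRIMARY KEY, grade TEXT, exam_level TEXT);
-- Hypothetical transactional source: one row per exam entry, carrying the
-- natural keys (upn, grade letter) that the dimensions also carry.
CREATE TABLE exam_results (upn INTEGER, grade TEXT, exam_level TEXT);
INSERT INTO student VALUES (9062, 1027);
INSERT INTO grade   VALUES (73, 'A', 'a'), (74, 'B', 'a');
INSERT INTO exam_results VALUES (1027, 'A', 'a');
""")

# The fact table is loaded by joining the source to each dimension on its
# natural key to look up the surrogate keys; no cartesian product involved.
con.execute("""
CREATE TABLE facts_table AS
SELECT s.s_id, g.g_id
FROM exam_results e
JOIN student s ON s.upn = e.upn
JOIN grade   g ON g.grade = e.grade AND g.exam_level = e.exam_level
""")

rows = con.execute("SELECT * FROM facts_table").fetchall()
print(rows)  # [(9062, 73)]
```

The point is that a fact table records events, not combinations of dimension keys; without an event source there is nothing meaningful to load.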
These are the required queries from the star schema
Design a conformed star schema which will support the following queries:
a. For each year, give the actual number of students entered for A-levels in the whole country / in each LEA / in each school.
b. For each A-level subject, and for each year, give the percentage of students who gained each grade.
c. For the most recent 3 years, show the 5 most popular A-level subjects in that year over the whole country (measure popularity as the number of entries for that subject as a percentage of the total number of exam entries).
I wrote the queries earlier based on dimension tables which were highly duplicated; they were like:
student
=======
upn
school
school
======
school(this column substr gives lea,school and the whole is country)
id(id of school)
student_group
=============
upn(unique pupil no)
gid(group id)
grade
year_col
========
year
sid(subject id)
gid(group id)
exam_level
id(school id)
grades_list
===========
exam_level
grade
value
subject
========
sid
subject
compulsory
These were the dimension tables I created earlier, and as you can see many columns are duplicated in other tables (like upn). This structure effectively gets the data out of the schema, as there are common columns on which the tables can be linked.
But a colleague suggested that these dimension tables are wrong, that they should not be this way, and that they should not contain duplicated columns.
select distinct count(s.upn) as st_count
, y.year
, c.sn
from student_info s
, student_group sg
, year_col y
, subject sb
, grades_list g
, country c
where s.upn=sg.upn
and sb.sid=y.sid
and sg.gid=y.gid
and c.id=y.id
and c.id=s.school
and y.exam_lev=g.exam_level
and g.exam_level='a'
group by y.year,c.sn
order by y.year;
This is the code for the 1st query.
I am confused now about which structure is right: the earlier dimension tables I am describing here, or the new dimension tables I explained above?
If what I am describing now is right, i.e. the dimension tables and their columns are all right, then I just need to create a fact table with foreign keys to all the dimension tables. -
Use of time series functions with horizontally fragmented fact tables
Hi Guys,
in OBIEE 10g it wasn't possible to use time series functions [AGO, TO_DATE] on horizontally fragmented fact tables. This was due to be fixed in 11g.
Has this been fixed? Has anybody used this new functionality? What are the limitations?
Tkx
Emil
Hello,
Can you give us some examples of "horizontally fragmented fact tables", so we can tell you whether we can do that or not?
Thanks, -
OBIEE 10G - Repository Business Model adding Fact Table
Hi there,
I've tried to search in the forum for problems similar to this one but didn't find any thread that answered my doubts.
I'll try to explain this well (without images it's complicated). Let's assume we have a schema with the following tables:
Dim_1, Dim_2 , Dim_3, Dim_4, Dim_5, Fact_1, Fact_2
Fact_1 is connected to Dim_1, Dim_2 and Dim_3
Fact_2 is connected to Dim_2, Dim_3, Dim_4 and Dim_5
And on the business model of the repository we already have a model with the following tables:
Dim_1, Dim_2, Dim_3 and Fact_1. This was the state of the repository the last time we saved it.
Now I want to add the table Fact_2 and the dimensions that connect to Fact_2 to the business model. What should I do here?
If I drag only the missing tables from the physical layer (Dim_4, Dim_5 and Fact_2, since Dim_2 and Dim_3 already exist in the business model), then when I check Fact_2 and its direct joins in the business model diagram, I only see joins to Dim_4 and Dim_5, and not the joins to the two other dimension tables (Dim_2 and Dim_3). I don't know whether OBIEE will still act as if these joins are made, or whether I will have problems when working in Presentation Services.
If I drag the tables (Dim_2, Dim_3, Dim_4, Dim_5 and Fact_2) from the physical layer, I will see all the joins, but I will have repeated tables in the business model, something like this:
Dim_1, Dim_2, Dim_2#1, Dim_3, Dim_3#1, Dim_4, Dim_5, Fact_1 and Fact_2
So what's the solution here? Do I have to redo everything whenever I want to add something and this situation occurs? That doesn't sound smart, and there has to be a better solution. I know I can redo the direct joins manually, but is that the best solution I can hope for in this situation?
I hope i was clear :)
Thanks
Hi,
Follow these steps,
1. Drag Dim_4, Dim_5 and Fact_2 to the BMM.
2. In the BMM, select Dim_1 to Dim_5, Fact_1 and Fact_2 (press Ctrl to select multiple objects).
3. Right click -> Business Model Diagram -> Selected Objects Only.
4. Create the necessary joins: Dim_2, Dim_3, Dim_4 and Dim_5 join to Fact_2 (Dim_1, Dim_2 and Dim_3 are already joined to Fact_1).
5. Check consistency.
Rgds,
Dpka -
OBIEE10g: extending a fact table with a custom fact
Hello everyone!
I have the following problem: I created a table with, let's say, only two fields: a primary key COMMISSION_LINE_ID and a measure called PAID.
After joining my new table with the existing fact table on COMMISSION_LINE_ID, both in the physical and the logical layer, I tried to give my measure PAID an aggregation rule of SUM. If I do not set an aggregation rule, everything works fine.
My goal is to aggregate it, but the query always crashes with a message like the following:
Odbc driver returned an error (SQLExecDirectW).
Error Details
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression:
Please fix the metadata consistency warnings. (HY000)
What can I do?
Thanks in advance for your suggestions!
Edited by: 911078 on 17-gen-2013 3.52
Delete the join in the BMM layer and the column mappings; just follow what I said earlier.
I would treat the new table as a fact extension. Unlike dimension extensions (which go with the LTS), fact tables need to be mapped physically.
Just in case you want to go your own way as you've done: you need to add all those dimension keys to the new fact (i.e. F2) and treat it as a 2nd LTS; this works more or less as it does for aggregate tables.
Hope this helps, please mark if it does -
Fact tables with different granularity
We currently have 3 dimensions (Site, Well, Date) and 2 fact tables (GasEngine, GasField), both having granularity of a day.
GasEngine is linked to Site and Date
GasField is linked to Site, Well and Date
We now have a requirement to make the GasEngine fact table have granularity of an hour but keep
GasField at a day.
We therefore must include a new Time dimension, which would only be linked to GasEngine.
Is it ok to have a DW with these two fact tables having different granularity?
And would we therefore require two separate cubes for querying this data?
Hi Rajendra and Visakh16,
Based on your input to this thread, I would like to ask a question just to fine-tune my knowledge of data modelling. In Darren's case I guess his date dimension only stores records down to day-level granularity. Now the requirement is to make the "GasEngine" fact table hold data at the granularity of an hour.
Now based on Rajendra’s input
“Yes, you can have. but why you need new time dimension, I recommend, make GasEngine fact to
hour granularity.”
How could Darren display data for each hour without having a time dimension attached to the GasEngine fact table? With the existing date dimension he can ONLY display aggregated data at a minimum granularity of day level.
Anyone could modify the date dimension to hold time records, but that would complicate the date dimension enormously. Instead, why can't Darren have a separate time dimension which holds ONLY time-related data, put a time key in the GasEngine fact table, and relate those tables using that key? Doesn't Darren's data model become more readable and simplified this way? Since a time dimension simply provides another way of slicing and dicing the data, I do not think Darren's cube becomes a complex STAR schema.
I could be totally wrong; therefore, for the sake of knowledge for both Darren and me, I am asking the question of both of you.
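The separate-time-dimension suggestion can be sketched as follows (a minimal SQLite mock-up with hypothetical names): the date dimension stays at day grain, a 24-row time dimension holds the hours, and the GasEngine fact carries both keys, so hourly slicing becomes an ordinary two-dimension join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Date dimension stays at day grain; a separate time dimension holds the
-- 24 hours once, and the fact carries both keys.
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, cal_date TEXT);
CREATE TABLE dim_time (time_key INTEGER PRIMARY KEY, hour INTEGER);
CREATE TABLE fact_gas_engine (date_key INTEGER, time_key INTEGER, output REAL);
INSERT INTO dim_date VALUES (20140101, '2014-01-01');
""")
con.executemany("INSERT INTO dim_time VALUES (?, ?)",
                [(h, h) for h in range(24)])
con.executemany("INSERT INTO fact_gas_engine VALUES (?, ?, ?)",
                [(20140101, 8, 120.0), (20140101, 8, 30.0), (20140101, 9, 50.0)])

# Hourly slice: group by both the day and the hour.
rows = con.execute("""
SELECT d.cal_date, t.hour, SUM(f.output)
FROM fact_gas_engine f
JOIN dim_date d ON d.date_key = f.date_key
JOIN dim_time t ON t.time_key = f.time_key
GROUP BY d.cal_date, t.hour
ORDER BY t.hour
""").fetchall()
print(rows)  # [('2014-01-01', 8, 150.0), ('2014-01-01', 9, 50.0)]
```

The 24-row time dimension stays tiny regardless of how many years the date dimension spans, which is the usual argument for keeping the two apart.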
Best regards…
Chandima Lakmal Fonseka -
More than one fact table...
Hi.
I have tried OLAP until now with only one fact table.
But now I have more than one; to start, I added one more.
I am always using SOLVED LEVEL...LOWEST LEVEL.
I always receive the following error when creating the cube with these measures:
"exact fetch returns more than requested number of rows"
What should I look for when dealing with more than one fact table?
Thanks.
ODDS
:: ... and still have very poor performance ...
1.
Well ... I saw the global star schema, and we have two fact tables there!!!
Do I have to build a different cube for each fact table every time?
2.
I have built cubes and created a Java client and a JSP client.
Performance is much better in JSP using the AppServer (sure!).
The power of the JSP client is more limited, I presume.
I wonder if I can do things such as setCellEditing for a crosstab in both.
3.
Some aggregation questions:
Every time I create a cube using CWM2, and also an AW using the AWM wizards with that cube, I get one aggregation plan by default that processes everything online.
After that I create and deploy my own aggregation plan.
My question is: what if I don't want to aggregate anything?! I want to see, for instance, in BI Beans only the lowest-level values, with everything at the top levels empty.
I must be missing something, because I still have everything aggregated!!!
Thanks.
ODDS -
ROWNUM is indexed in the fact table - how to optimize performance with this?
Hi,
I have a scenario where there is an index on the ROWNUM.
The main fact table is partitioned on the job number (daily and monthly). As there can be multiple entries for a single job ID, the primary key is made up of the job ID and the ROWNUM.
This fact table is in turn joined with another fact table on the job number and ROWNUM; this second fact table is also partitioned on job ID.
I have a few reference tables that are joined with the first fact table via a B-tree index.
Though in a normal DW scenario we would use bitmap indexes, here we can't do that, as a lot of other applications access the data with DML queries, where bitmap indexes would be slow. So I am using the STAR_TRANSFORMATION hint to have the normal indexes used as bitmap indexes.
Up to here it is fine. The problem is that when I simply do a count for a specific partition from a reference table and a fact table, the plan uses all the required indexes as bitmaps with very low cost, but it also uses the ROWNUM index, which has a very, very high cost.
I am relatively new to Oracle tuning and cannot understand what exactly it is doing. Could you please suggest how I can get rid of this ROWNUM index to make my query perform faster? The index cannot be dropped. Is there a way to instruct, via a hint, not to use this primary key index?
Or, even using it, is there a way to make the performance faster?
I will highly appreciate any help in this regard.
Regards
...Just sending the portion having the info on the partitioning and the primary index, as the entire script is too big.
CREATE TABLE FACT_TABLE
JOBID VARCHAR2(10 BYTE) DEFAULT '00000000' NOT NULL,
RECID VARCHAR2(18 BYTE) DEFAULT '000000000000000000' NOT NULL,
REP_DATE VARCHAR2(8 BYTE) DEFAULT '00000000' NOT NULL,
LOCATION VARCHAR2(4 BYTE) DEFAULT ' ' NOT NULL,
FUNCTION VARCHAR2(6 BYTE) DEFAULT ' ' NOT NULL,
AMT.....................................................................................
TABLESPACE PSAPPOD
PCTUSED 0
PCTFREE 10
INITRANS 11
MAXTRANS 255
STORAGE (
INITIAL 32248K
LOGGING
PARTITION BY RANGE (JOBID)
PARTITION FACT_TABLE_1110500 VALUES LESS THAN ('01110600')
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACTTABLED
PCTFREE 10
INITRANS 11
MAXTRANS 255
STORAGE (
INITIAL 32248K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL DEFAULT
PARTITION FACT_TABLE_1191800 VALUES LESS THAN ('0119190000')
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACTTABLED
PCTFREE 10
INITRANS 11
MAXTRANS 255
CREATE UNIQUE INDEX "FACT_TABLE~0" ON FACT_TABLE
(JOBID, RECID)
TABLESPACE PSAPFACT_TABLEI
INITRANS 2
MAXTRANS 255
LOCAL (
PARTITION FACT_TABLE_11105
LOGGING
NOCOMPRESS
TABLESPACE PSAPFACT_TABLEI
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
BUFFER_POOL DEFAULT
...................................................... -
Multiple granular levels for fact table
My fact table has to incorporate data at both the transaction level and the accumulative level; my basic design for the transaction level is as follows:
CUSTOMER_KEY, LOAN_KEY, TIME_KEY, LOAN_AMT, TOTAL_DUE, LOAN_STATUS, TRANSACTION
9000,1000,1,200,200,Open, Advance
9000,1000,1,200,0,Close, Payment
If I aggregate the values then the query will take time to execute. How can I provide cumulative information from this fact table? Shall I go for one more fact table for the accumulative information?
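One way to avoid a second accumulative fact table, sketched here under two stated assumptions (amounts are stored signed, with advances positive and payments negative, and the database supports window functions; SQLite 3.25+ stands in for the warehouse in this mock-up): derive the running balance from the transaction grain at query time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_loan_txn (
  customer_key INTEGER, loan_key INTEGER, time_key INTEGER,
  amount REAL, transaction_type TEXT
);
INSERT INTO fact_loan_txn VALUES
  (9000, 1000, 1,  200, 'Advance'),
  (9000, 1000, 2, -200, 'Payment');
""")

# Running (accumulative) balance derived on the fly from the transaction
# grain with a window function; no second fact table is needed.
rows = con.execute("""
SELECT time_key, transaction_type,
       SUM(amount) OVER (PARTITION BY loan_key ORDER BY time_key) AS balance
FROM fact_loan_txn
ORDER BY time_key
""").fetchall()
print(rows)  # [(1, 'Advance', 200.0), (2, 'Payment', 0.0)]
```

A separate accumulating-snapshot fact is still worth considering when the transaction history is huge and the balance is queried constantly; this is a trade-off, not a rule.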
Please suggest.
Thanks,
Hesh.
Hi,
Is this a question about OLAP cube generation using your fact table design? If not, then this is the wrong forum. If yes, then it should follow straight from your fact and dimension design, and there is no need for another fact table with aggregation, because the OLAP cube aggregates this itself, which should ideally be tremendously fast.
Thanks,
DxP -
Unable to join 3 fact tables in obiee data model.
Hi,
I am unable to join 3 fact tables in the OBIEE RPD. If I join 2 fact tables I get data in the reports; once I add one more fact table and join it with the related dimensions, I get a no-data error. I am following all the join keys as per the data model, and I am able to see the data in the database. Can anyone please give me the solution? Thanks in advance.
Edited by: 1007582 on May 23, 2013 2:17 AM
Can you please give some more detail on what you have modelled in the RPD?
For example, say there are 3 fact tables F1, F2, F3 and two dimensions Confirmed_D1 and Non_Conf_D2, where Confirmed_D1 is joined to all three fact tables and Non_Conf_D2 is joined to F2.
To implement this model we have to set the logical content level in the BMM layer for Non_Conf_D2 in F1 & F3 to Total. Only after this can we report on Confirmed_D1, Non_Conf_D2, F1, F2, F3 without any error.
Sometimes a join with multiple tables can also result in no data, since the join conditions filter out the result data.
Please mark helpful or correct.
Regards,
Kashinath -
OBIEE 11g - No fact table exists at the requested level of detail
My dimension tables are snowflaked.
Table1 has Key, ProductName, ProductSize, Table2Key
Table2 has Key, ProductDepartment, Table3Key
Table3 has Key, ProductDivision
I have created 2 hierarchies (in same dimension Product). Note: ProductSize is in Table1.
ProductDivision > ProductDepartment > ProductName (shared level)
ProductSize > ProductName (shared level)
There are 2 fact tables
Fact1 is at ProductName level
Fact2 is at ProductDepartment level
When I create a request with ProductSize and some measure as columns, and filter it on ProductDepartment, the request fails with the error "No fact table exists at the requested level of detail", but the request could ideally be answered using the fact at the ProductName level.
I have properly defined the logical level keys in the hierarchies and the logical levels in the LTS (Content tab).
Can anyone point out what I am doing wrong here?
Since both fact tables are at the same granular level, I would suggest mapping them to each other (Signon_A mapping Signon_B) in the BMM layer logical fact @ source.
Considering them as facts, with a fact extension.
BTW: Did you try setting the implicit fact in the subject area properties?
Edited by: Srini VEERAVALLI on Feb 1, 2013 9:04 AM -
How to create logical fact table in BMM layer ?
Hello,
I have 3 dimension tables: 2 are in one schema and the last is in another schema. Using these 3 dimension tables, I need to create a logical fact table.
So, my question is whether we can create this fact table by joining these 3 dimension tables which are in 2 different schemas?
Thanks
Fiaz,
you are correct. We can use tables from different subject areas to create a report. However, my question was related to RPD design. Sorry, I was not very clear about the queries earlier.
Here is the whole scenario in the physical layer of the rpd
Table name | Database name | Connection pool name | Schema name
AV | AV_PXRPAM | AVAILABILITY | CRMODDEV
OUTAGE | AV_PXRPAM | AVAILABILITY | CRMODDEV
COMPANY | PXRPAM | PXRPAM_POOL | CRMODDEV
AV and OUTAGE are already joined. I want to make a join between COMPANY and OUTAGE, then include a column from each of the above tables in the logical fact table in the BMM layer, and then build a star schema linking the logical fact table to the above 3 tables in the BMM layer.
Thanks -
OBIEE Query not hitting the other fact table
Hi All,
I am trying to create a report based on two fact columns and one dimension. The dimension is connected to both fact tables. When I create a report using one column from the dimension and one column from each of the respective facts, I get two scenarios...
For example let say..
D1 is dimension and F1 and F2 are two fact tables.
First I used a column with an aggregation rule from one fact, and a column from the other fact which is also an aggregated column.
That is, a report like...
D1.c1, Agg(F1.c2), Agg(F2.c3)
When I run the report, I get data from the dimension and only from the first fact table. When I check the query, it contains only one fact table and doesn't hit the other one.
In the second scenario I used one column from the dimension, one column from the first fact which has an aggregation rule, and one column from the second fact which doesn't have any aggregation rule.
like...
D1.c1,Agg(F1.c2),F2.c3
When I run the report I got an error. It says:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14026] Unable to navigate requested expression: F1 -C2 . Please fix the metadata consistency warnings. (HY000).
But there are no warnings in the RPD.
I am amazed that it is not taking both fact columns, even though the dimension is a conformed dimension and is joined with both fact tables.
As I have just started learning OBIEE, I find it a bit difficult to understand how OBIEE selects the tables and forms the physical query.
Waiting for your help.
Regards
Suhail
Aadi-Wasi,
Rule of thumb: the OBIEE BMM layer must contain a simple star schema.
Does your BMM layer satisfy that condition? I suspect not.
My guess is that your BMM layer contains 3 logical tables, i.e. a dimension & 2 logical facts... which is not a simple star.
Thus, to make it a simple star, collapse the 2 logical fact tables into 1 logical fact table. As you mentioned the dimension is linked to both facts, collapsing the 2 logical fact tables into 1 will produce the result for your query 1.
Regarding your second error:
All aggregations must be contained within fact tables, with few exceptions.
Let us know if this resolved your issue.
Mark posts promptly...
J
-bifacts
http://www.obinotes.com