Multi-fact Query
Hi all,
Our software group is looking at creating a tool that will query a multi-fact star schema environment. What are our options?
So far, we have 7 fact tables and 6 dimensions (all shared by the facts). There are no aggregate tables.
My first thought was one large aggregate table; however, the dimensions are all normalized to their particular fact table. So a customer dimension, for instance, would repeat customers if they fall into multiple fact tables. The dimensions are more complicated than that, but you get the idea. There are only two that are truly normalized... and yes, one of them is TIME. :)
If you have data that shares all the same dimensions, it almost always makes sense to keep it together in a single fact table. The general guideline is only to split out a new fact table if the dimensionality of the data is different.
Unions and joins typically just end up slowing things down.
The only exception I've really seen to this rule is when one set of facts has orders of magnitude more data points than the others, e.g. actuals data with 100 million rows but budget data (with the same dimensionality) of only 200,000 rows. This doesn't happen often, but I suppose it could occur.
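To make that concrete, here's a minimal sketch of the single-fact-table shape with a scenario column separating actuals from budget. All table and column names are made up for illustration, and SQLite (via Python) stands in for whatever database you're on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One fact table holds both actuals and budget, since they share all
# dimensions; a scenario column replaces separate fact tables and UNIONs.
cur.executescript("""
CREATE TABLE sales_fact (
    customer_key INTEGER,
    time_key     INTEGER,
    scenario     TEXT,   -- 'ACTUAL' or 'BUDGET'
    amount       REAL
);
INSERT INTO sales_fact VALUES
    (1, 202401, 'ACTUAL', 120.0),
    (1, 202401, 'BUDGET', 100.0),
    (2, 202401, 'ACTUAL',  80.0);
""")
# Both scenarios come back from one table scan: no cross-fact joins needed.
rows = cur.execute(
    "SELECT scenario, SUM(amount) FROM sales_fact "
    "GROUP BY scenario ORDER BY scenario"
).fetchall()
print(rows)  # [('ACTUAL', 200.0), ('BUDGET', 100.0)]
```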
Scott
Similar Messages
-
NQSError 14020 When Running Cross-Fact Query
I keep getting the following error when trying to run a cross-fact query in OBIEE Answers:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14020] None of the fact tables are compatible with the query request D3.X. (HY000)
I set up a whole new RPD with only 4 tables, and made sure all the content levels were set up correctly, but I'm still getting the error. I have the following setup:
Physical
I have two fact tables and three dimensions:
F1 <- D1, D2
F2 <- D2, D3
BMM
I have one fact table and three dimension tables:
F <- D1, D2, D3
I created a dimension hierarchy for each dimension table and kept the two default logical levels: Total and Detail.
Each dimension table has a single LTS set to the "Detail" level.
The fact table has two LTS, and the levels are set up as follows:
F1: D1=Detail, D2=Detail, D3=Total
F2: D1=Total, D2=Detail, D3=Detail
Answers
I'm trying to run the following request in OBIEE Answers: D1.x, D2.x, D3.x
Could anyone point me in the right direction?
- Tim
Hi Tim,
In your query, do you just have the dimensions? How about having measure in that report and checking if you still have the same error.
Your LTS settings look fine (you have merged the individual facts into one in the BMM). Generally, a query across dimensions has to go through a fact table to resolve the relationships, and here it has to go through different facts. Could you post your query so that we can have a look?
Thank you,
Dhar -
SQL*Plus Multi-Table Query Problem
Hi all,
Any suggestions, please?
Subject: SQL*Plus multi-table query problem
Query given:
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE FROM PATIENTS_MASTER1, HAEMATOLOGY1,
DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM AND PATIENT_NUM = DLC_PATIENT_NUM AND PATIENT_NUM
= &PATIENT_NUM;
ACTUAL RESULT:
&PATIENT_NUM =1
no rows selected
&PATIENT_NUM=2
no rows selected
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
EXPECTED RESULT:
&PATIENT_NUM=1
PATIENT_NUM 1
PATIENT_NAME BBBB
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 5
HMTLY_RBC_NORMAL 4.6-6.0
&PATIENT_NUM=2
PATIENT_NUM 2
PATIENT_NAME GGGG
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 42
DLC_POLYMORPHS_NORMAL_VALUE 40-65
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
There are 4 tables for the clinical lab, used to enter data and to report only the tests made for a particular patient.
TABLE1:PATIENTS_MASTER1
COLUMNS:PATIENT_NUM, PATIENT_NAME,
VALUES:
PATIENT_NUM
1
2
3
4
PATIENT_NAME
BBBB
GGGG
KKKK
PPPP
TABLE2:TESTS_MASTER1
COLUMNS:TEST_NUM, TEST_NAME
VALUES:
TEST_NUM
1
2
TEST_NAME
HAEMATOLOGY
DIFFERENTIAL LEUCOCYTE COUNT
TABLE3:HAEMATOLOGY1
COLUMNS:
HMTLY_NUM,HMTLY_PATIENT_NUM,HMTLY_TEST_NAME,HMTLY_RBC_VALUE,HMTLY_RBC_NORMAL_VALUE
VALUES:
HMTLY_NUM
1
2
HMTLY_PATIENT_NUM
1
3
HMTLY_TEST_NAME
HAEMATOLOGY
HAEMATOLOGY
HMTLY_RBC_VALUE
5
4
HMTLY_RBC_NORMAL_VALUE
4.6-6.0
4.6-6.0
TABLE4:DIFFERENTIAL_LEUCOCYTE_COUNT1
COLUMNS:DLC_NUM,DLC_PATIENT_NUM,DLC_TEST_NAME,DLC_POLYMORPHS_VALUE,DLC_POLYMORPHS_
NORMAL_VALUE,
VALUES:
DLC_NUM
1
2
DLC_PATIENT_NUM
2
3
DLC_TEST_NAME
DIFFERENTIAL LEUCOCYTE COUNT
DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE
42
60
DLC_POLYMORPHS_NORMAL_VALUE
40-65
40-65
THANKS
RCS
E-MAIL:[email protected]
I think you want an OUTER JOIN:
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE
FROM PATIENTS_MASTER1, HAEMATOLOGY1, DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM (+)
AND PATIENT_NUM = DLC_PATIENT_NUM (+)
AND PATIENT_NUM = &PATIENT_NUM;
Edited by: shoblock on Nov 5, 2008 12:17 PM
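For what it's worth, the same fix in ANSI join syntax avoids the (+) marks (and the emoticon problem) entirely. Here is a cut-down reproduction with only the key columns, run in SQLite via Python to show that patient 1 now comes back with NULLs for the missing DLC test:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE patients_master1 (patient_num INTEGER, patient_name TEXT);
INSERT INTO patients_master1 VALUES (1,'BBBB'), (2,'GGGG'), (3,'KKKK'), (4,'PPPP');

CREATE TABLE haematology1 (hmtly_patient_num INTEGER, hmtly_rbc_value REAL);
INSERT INTO haematology1 VALUES (1, 5), (3, 4);

CREATE TABLE differential_leucocyte_count1 (dlc_patient_num INTEGER, dlc_polymorphs_value REAL);
INSERT INTO differential_leucocyte_count1 VALUES (2, 42), (3, 60);
""")
# Inner joins drop patient 1 (no DLC row) and patient 2 (no haematology row);
# LEFT JOINs keep the patient and fill the missing test columns with NULL.
row = cur.execute("""
    SELECT p.patient_num, p.patient_name,
           h.hmtly_rbc_value, d.dlc_polymorphs_value
    FROM patients_master1 p
    LEFT JOIN haematology1 h ON p.patient_num = h.hmtly_patient_num
    LEFT JOIN differential_leucocyte_count1 d ON p.patient_num = d.dlc_patient_num
    WHERE p.patient_num = 1
""").fetchone()
print(row)  # (1, 'BBBB', 5.0, None)
```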
The outer join marks became stupid emoticons or something; attempting to fix. -
Can PQO be used in DML statements to speed up a multi-dimension query?
We have
limit dim1 to var1 ne na
where dim1 is a dimension.
var1 is a variable dimensioned by dim1.
This is pretty much like an SQL table scan to find the records. Is there a PQO (parallel query option)-like option in OLAP DML to speed up a multi-dimension query?
This is one of the beauties of the OLAP Option: all the query optimisation is managed by the engine itself. It resolves the best way to get you the result set. If you have partitioned your cube, the query engine will behave like the relational query engine and employ partition elimination.
Where things can slow down is where you have used compression, since to answer a question such as "is this cell NA?" all rows in the cube need to be uncompressed before the query can be answered and the result set generated. Compression technology works best (i.e. is least intrusive in terms of query performance) where you have a very fast CPU. However, the overall benefit of compression (smaller cubes) nearly always outweighs the cost of uncompressing data to answer a specific question. The impact is usually minimal if you partition your cube, since the uncompress function only needs to work on a specific partition.
In summary, the answer to your question is "No", because the OLAP engine automatically optimises the allocation of resources to return the result-set as fast as possible.
Is there a specific problem you are experiencing with this query?
Keith Laker
Oracle EMEA Consulting
OLAP Blog: http://oracleOLAP.blogspot.com/
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
DM Blog: http://oracledmt.blogspot.com/
OWB Blog : http://blogs.oracle.com/warehousebuilder/
OWB Wiki : http://wiki.oracle.com/page/Oracle+Warehouse+Builder
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Multi Fact RPD with Compatibility Error
All,
I was wondering if you can help with this issue. Here is a picture of my RPD model in Oracle OBIEE 11g BI Administrator:
http://scottysols.files.wordpress.com/2013/09/multi_fact_model.png?w=908
It is in the format D1 <-- F1 --> D2 <-- F2 --> D3.
In the picture you can also see the Logical Level settings for one of the Logical Table Sources and one of the Measure columns.
When I try to create a report in OBIEE, OBIEE throws a SQL error. It doesn't understand how to relate D1 to D3. My requirement is to build an RPD that supports a report like this:
D1 Column | F1 Measure Aggregated | D2 Column | F2 Measure Aggregated | D3 Column
minus any cartesians. I haven't even gotten to the cartesian error (which would be a good sign that I am on the right track); I just get the error:
None of the fact tables are compatible with the query request AWARD_SUMMARY_DIM.AWARD_NUMBER
We have also tried merging F1 and F2 into a single fact in the Business Model and Mapping layer, with the same results.
Thanks,
S.
Srini,
First, thanks for your support.
Second, I went into F1, then sources, then the Logical Table Source properties General tab and added D2, then F2. After updating the Presentation layer and restarting OBIEE, my report still returns the same error.
I tried to force the join on F2 next. Same as above, but in reverse. I added D2, then F1 as sources for F2. Same error remains:
None of the fact tables are compatible with the query request AWARD_SUMMARY_DIM.AWARD_NUMBER (D1).
- S. -
Run a SQL procedure with multi database querying from Excel
I'm using SQL Server 2008 Enterprise. I created a procedure in one database. The procedure is composed of several queries to different databases and the final combined result set is being displayed.
I tried to execute it via Excel, so that the results would appear automatically in an Excel sheet, but I'm getting the error:
"The query did not run, or the database table could not be opened. Check the database server or contact your DBA. Make sure the external database is available and hasn't been moved or reorganized, then try the operation again."
I created a simpler procedure that queries only one database, and its results displayed in the Excel sheet with no issues.
I suspect that the original procedure failed because I'm querying several databases in the procedure, while the connection details of the "External Data Properties" mention only one database.
My question is: can it be solved? Can I use multiple databases in the procedure and see the results in Excel?
Thanks, Roni
Use a global temporary table (##) instead of a local temporary table (#).
The scope of a local temp table is limited to one database, and it is disposed of automatically when you jump to another database.
No, that is not correct. From where did you get that idea?
USE tempdb
go
CREATE TABLE #a(a int NOT NULL)
INSERT #a(a) VALUES(9)
go
USE master
go
SELECT a FROM #a
go
USE msdb
go
SELECT a FROM #a
go
DROP TABLE #a
And Roni's stored procedure does not even change database.
...however, the temp tables may very well be the problem, but for a completely different reason. Excel may ask SQL Server for the shape of the result set before it runs the procedure, and this does not work with temp tables. For this reason, using a table variable may save the day.
Erland Sommarskog, SQL Server MVP, [email protected] -
Transferring multi-column query result into MS Excel
Hello everybody!
An ultimate novice in Oracle, with some database concepts, is here.
The very first challenge; which I encoutered is that I want to transfer a Select query result, with multiple columns, into MS Excel sheet in a way that each field occupies a separate column in the sheet.
I hope the forum members will show me a simple way to get around the problem.
Thanks in anticipation.
Rabiwhat will happen the query returns too many rows that
Excel cant hold? I faced this problem and divided the
report into "n" numbers based on the number of rows
returned. Is there any internal logic for it in
oracle?Excel is capable of holding 65536 rows of data on a sheet.
Oracle is capable of writing data out to files.
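For illustration, here is one way that row-capping could be coded so each output file fits on an Excel sheet. This is a Python sketch with made-up names, writing to in-memory buffers instead of real files:

```python
import csv
import io

EXCEL_MAX_ROWS = 65536  # classic .xls sheet limit

def write_csv_chunks(rows, header, max_rows=EXCEL_MAX_ROWS - 1):
    """Split rows into CSV chunks of at most max_rows data rows each
    (one row per chunk is reserved for the header). Returns the chunks
    as strings; a real export would write numbered files instead."""
    chunks = []
    for start in range(0, len(rows), max_rows):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        writer.writerows(rows[start:start + max_rows])
        chunks.append(buf.getvalue())
    return chunks

# 100 rows with a 40-row cap -> three files of 40, 40, and 20 rows.
chunks = write_csv_chunks([(i, f"name{i}") for i in range(100)],
                          ["id", "name"], max_rows=40)
print(len(chunks))  # 3
```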
Oracle does not know or care whether the file it is writing is for Excel or any other product, as the file you are likely to be writing is CSV (comma-separated values), which is an open format, not specific to Excel. If you need to limit the number of rows per CSV file then you will have to code this "business logic" into the code that produces the CSV file from Oracle. -
Hi Friends,
I created a new MultiProvider which contains two InfoCubes. The first InfoCube contains Site and Material, and the second contains only Material. When I execute the query, it shows the first cube's values in some rows and then, after those, the second cube's values. I am unable to find a solution.
Plz help me.
Regards,
Rams.
Message was edited by:
Rams Rams
Hi,
Check in the MultiProvider options whether you have selected the material number to match both InfoProviders. To do this:
1) Double-click the MultiProvider
2) Click Next
3) On the Characteristics tab, click the Identification button
4) Check that the material number is selected for both InfoProviders
End
Regards
Assign points if useful
Hello all
I’m encountering a challenge which I believe has been discussed on these forums.
This relates to pixelated, blurry text within buttons and multi-state objects.
One solution to this issue is not using the objects containing text as buttons or multi-state objects. Instead, you insert shapes with no fill/stroke on top of the text graphics and then group them.
This ensures the text is sharp and clear.
However, if you wish to have a rollover state with a different fill/stroke/text colour, this poses problems, as the shape serving as the button has to have no fill/stroke or text within it.
Is there a workaround for this problem?
Many thanks
H
For clear buttons with click states, the easiest workaround is to place a dummy button over the image and change the stroke, and perhaps even add a transparency fill. There is another, more complicated workaround as well. For examples and instructions, see DPS Tips > Advanced Overlays > Sharp Buttons:
http://contentviewer.adobe.com/s/DPS%20Tips/7f80a0ffed3a4ff08734bc905aac4a29/Advanced_Overlays/12_button_crisp.html
Multi fact columns to Single fact column
Hi,
We have a fact table which stores the data in month-wise fact columns.
Ex:
Jan   Feb   Mar   Apr   May   Jun    July  Aug    Sep    Oct   Nov   Dec
100   200   150   250   223   1212   171   12123  31123  112   2113  1123
150   223   222   142   1354  1567   452   763    41733  441   1211  1213
333   222   55    256   455   445    752   4752   45214  114   8122  4555
My requirement: we have to convert all the month fact columns into a single month fact column.
How do we achieve this? Kindly let me know.
It rather seems that you don't understand his input or the concept of LTSs.
You create 12 LTS, each using the same physical table as a source.
Then you create one single column called "Monthly Value"
Then you map the "Jan" column from the first LTS into the column
Then "Feb" from the second LTS
etc.
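If reshaping in the database is an option instead of the 12-LTS mapping, the month columns can also be unpivoted into (month, value) rows with a UNION ALL. A sketch in SQLite via Python, with made-up names and only three months shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE monthly_fact (id INTEGER, jan REAL, feb REAL, mar REAL);
INSERT INTO monthly_fact VALUES (1, 100, 200, 150);
""")
# One SELECT per month column, glued together with UNION ALL, turns the
# wide table into (id, month, monthly_value) rows.
rows = cur.execute("""
    SELECT id, 'Jan' AS month, jan AS monthly_value FROM monthly_fact
    UNION ALL SELECT id, 'Feb', feb FROM monthly_fact
    UNION ALL SELECT id, 'Mar', mar FROM monthly_fact
    ORDER BY id, monthly_value
""").fetchall()
print(rows)  # [(1, 'Jan', 100.0), (1, 'Mar', 150.0), (1, 'Feb', 200.0)]
```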
Bob's your uncle -
I have a dataset with cols such as Year, FW, Defect, Area, Resp. I need to sum the defects where Year, FW, Area and Resp are the same.
For the life of me I cannot think of how to find similar data in columns like I need.
TIA, guys
Ok, thanks. Can I replace the table name with a dataset (recordset) from another query?
Yes, you can:
select sum(defect), year, fw, area, resp
from (<your_query_here>)
group by year, fw, area, resp;
as long as the inline view "(<your_query_here>)" has all the columns or column aliases (defect, year, fw, area, resp) in its SELECT list.
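A quick check of that shape against sample data (SQLite via Python; the inline view and all column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE defects (year INTEGER, fw INTEGER, defect INTEGER, area TEXT, resp TEXT);
INSERT INTO defects VALUES
    (2024, 1, 3, 'paint', 'bob'),
    (2024, 1, 2, 'paint', 'bob'),
    (2024, 2, 5, 'weld',  'ann');
""")
# The subquery stands in for "<your_query_here>"; the outer query sums
# defects over rows where year, fw, area, and resp all match.
rows = cur.execute("""
    SELECT year, fw, area, resp, SUM(defect) AS total_defects
    FROM (SELECT * FROM defects)
    GROUP BY year, fw, area, resp
    ORDER BY fw
""").fetchall()
print(rows)  # [(2024, 1, 'paint', 'bob', 5), (2024, 2, 'weld', 'ann', 5)]
```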
pratz -
According to the definition of a layer given in the Oracle Spatial documentation, every layer is located in its own table. I have a schema where multiple layers share the same coordinate system, and there are several coordinate systems. For each coordinate system I have to define a separate table with geometries. That much is clear. The question is: why do I have to separate multiple layers into different tables, rather than keep all geometry primitives in one table no matter which layer they belong to? I could add some kind of layer identification, query first by layer id, and then use the RELATE function (for instance)...
An index can be defined on this layer id to speed up the query.
On the other hand, management of multiple tables demands dynamic SQL scripts, a lot of checks, etc.
What is the conventional way to design such a system?
I need an answer ASAP.
Thank you
I don't know anything about the requirements of your application. But do you display 200 layers on your map at the same time? It seems like there would be too much data displayed to be meaningful to a user.
If, for example, you are drawing 10 of the 200 layers, it will probably be more efficient to query the 10 layers (each in their own table) than to query a single table with all 200 layers and sift out the 10 layers you are interested in by some attribute ID. The spatial index would not have to sift through the index records for the 190 layers you are not interested in.
Hope this helps. Thanks.
Dan -
Hello,
Please forgive me if this is an elementary question... I'm trying to run a query against multiple schemas, but it does not work. I've associated both schemas with my workspace, and the query runs fine in SQL Workshop. But when I create a report page and specify the query there, it tells me that the table/view does not exist. I also tried building it using the query builder utility; the second schema name does not appear in the drop-down list at the top right of the page.
Can anyone help out with this?
Thanks!!!
Hello:
Generally, if a query runs from SQL Workshop you should be able to use the query in an APEX report.
Check if making an explicit grant on the table in the other schema to the default schema of your application makes a difference.
Example: if SchemaA and SchemaB are the two schemas allowed for your workspace, and TableA exists in SchemaA and TableB exists in SchemaB:
Grant select on SchemaA.TableA to SchemaB;
Grant select on SchemaB.TableB to SchemaA;
Varad
Hi All,
Need Help:
If multi-language has been enabled in MDM, then in order to provide a different-language description for an item using the portal, does the user have to log in with a different language each time?
Waiting for revert.
Regards,
Ms. Das
Hey,
I have not come across particular API functions for multilingual handling, but you can refer to the URLs below:
http://help.sap.com/javadocs/MDM/SP04/com/sap/mdm/data/MultilingualString.html
Re: SAP MDM API 7.1, Region Codes
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b0e8aedc-cdfe-2c10-6d90-bea2994455c5?quicklink=index&overridelayout=true
Hope it helps.
Deep -
Is it possible to query two databases at the same time?
If not is there a workaround for this issue?
Thanks
Hi,
I'm trying to understand your query, but I have an archive log that Oracle reports as not transferred even though it is already on the standby site. So if I run the query (say, every 10 minutes in a script to check what happens), it always shows that this archive has not been transferred.
Is this what I should be running?
SELECT name,sequence#
FROM v$archived_log
WHERE standby_dest = 'NO'
AND sequence#
NOT IN (SELECT sequence#
FROM v$archived_log
WHERE standby_dest = 'YES')
AND sequence# >
(SELECT min(sequence#)
FROM v$archived_log
WHERE standby_dest = 'YES');
As for what I'm doing: as I told you, I run the gap check on the standby site; if there's a gap, I stop managed recovery, copy the file from primary (with rcp), apply it, and then go back to managed recovery.
Do you know a way to send a message or email if archiving fails on the standby site, so that we are aware of what's happening?