Distinctcount across star schema with boolean filter

Hi,
I have a fact table with CustomerNo, DateKey, ProductKey, SupplierKey, Sales.
This is related to a Calendar table, a Product table, and a Supplier table.
I'm trying to calculate a distinct count of customers where Sales = 0 while keeping the original filter conditions on the related tables.
I've tried:
CALCULATE(DISTINCTCOUNT(Fact[CustomerNo]), Fact[Sales]=0)
The figures are wrong. It's hard for me to evaluate which rows are actually being returned due to the size of the table. I think it may be related to DISTINCTCOUNT running a group-by first and the filter being applied second.
So I tried :
CALCULATE(DISTINCTCOUNT(Fact[CustomerNo]), FILTER(Fact, Fact[Sales]=0))
This returns the same value as the one above.
It's very weird, as the slicers on the pivot do interact with the measure, so I know the measure is keeping the existing filter context.
Any ideas what is happening? 
Lee Hawthorn ACMA, CGMA View my Performance Management blog at leehbi.com

Well, the formula you provided is correct and should return the correct result.
It's hard to say why it would return wrong results without knowing/seeing the data.
The only thing I can think of is the following:
a customer made two sales, 1) with Sales = 123 and 2) with Sales = 0.
The formula would of course count that customer as a distinct customer with Sales = 0.
Maybe that's the reason for your "wrong" results?
You may also check this link: http://www.daxpatterns.com/distinct-count/
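To illustrate the difference, here is a minimal sketch of the two ways of counting (the measure names are hypothetical; the table and column names are taken from your post):
-- Customers that have AT LEAST ONE fact row with Sales = 0
-- (this is what the CALCULATE formula above returns)
Customers With A Zero Sale :=
CALCULATE (
    DISTINCTCOUNT ( Fact[CustomerNo] ),
    Fact[Sales] = 0
)
-- Customers whose TOTAL sales in the current filter context is 0,
-- which excludes customers that also have non-zero sales rows
Customers With Zero Total Sales :=
COUNTROWS (
    FILTER (
        VALUES ( Fact[CustomerNo] ),
        CALCULATE ( SUM ( Fact[Sales] ) ) = 0
    )
)
If the second definition is what the business question actually asks for, that would explain why the first one looks too high.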
hth,
gerhard
Gerhard Brueckl
blogging @ http://blog.gbrueckl.at
working @ http://www.pmOne.com

Similar Messages

  • Resolving loops in a star schema with 5 fact tables and 6 dimension tables

    Hello
    I have a star schema, i.e. 5 fact tables and 7 dimension tables. All fact tables share the same dimension tables; some fact tables share 3 dimensions, while others share 5 dimensions.
    I did adopt the best practices and, as recommended in the book, I tried to resolve the loops using contexts, as that is the recommended alternative to aliases in a star schema setting. The contexts are resolved, but I still have loops. I also cleared the Multiple SQL Statements for each context option, but no luck. I need to get this resolved ASAP.

    Hi Patil,
    It is not clear what exactly the problem is. As a starting point you could set the contexts up so that each one only covers the joins from fact to dimension.
    Fact A, joins Dim 1, Dim 2, Dim 3, and Dim 4
    Fact B, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 5
    Fact C, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 6
    Fact D, joins Dim 1, Dim 2, Dim 3, Dim 4 and Dim 7
    Fact E, joins Dim 1, Dim 2, Dim 4 and Dim 6
    If each of these contexts is defined and covers only the joins from fact to dimension, then you should not get loops.
    If you can lay out your joins like the above, it may be possible to specify the contexts/aliases that will work.
    Regards
    Alan

  • Relational Star Schema with oWB 10g

    Hi,
    This is an interesting forum. I would like to explore the different approaches to building ETL with OWB to load a star schema built around Oracle 10g relational structures.
    How do we take advantage of the additional features of Oracle DB 10g and OWB 10gR2?
    Which features help? Which ones should we stay away from?
    Thanks in advance.
    Ram

    Hi ,
    You can use Oracle SQL Developer Data Modeler.
    SQL Developer Data Modeler provides a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), Relational (database design), Data Type and Multi-dimensional modeling, full forward and reverse engineering and DDL code generation. The Data Modeler imports from and exports to a variety of sources and targets, provides a variety of formatting options and validates the models through a predefined set of design rules.
    Oracle SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent.
    http://www.oracle.com/technology/products/database/datamodeler/index.html
    Thanks,
    Sutirtha

  • Star Schemas with designer 6.0

    How can I easily create star schemas for a Datawarehouse in Designer6? Where can I still find documentation for such release?
    Mauro

    Mauro,
    You should be able to create star schemas - search the online help for all references to data warehousing
    and star schemas. There should be a few topics.
    Regards,
    Dominic
    Designer Product Management
    Oracle Corp

  • [OBIEE 11g] Enforce star-schema without security filter?

    I have imported my first OLAP cube using the instructions at http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/olap/biee/createbieemetadata.htm and have applied the necessary security filter to force the star join between the cube and dimension views. However, the security filter does not apply to users in the BI Administrator user group. Is there any way to do this without a security filter, or to somehow apply the security filter even when the user is an administrator?

    The link in your first posting points to the old way of creating OBIEE metadata for OLAP objects.
    Starting with OBIEE 11.1.1.5, it is much simpler as Oracle-OLAP is one of the data sources in BI-Admin Tool.
    So do not use the old way.
    Start with this doc:
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/bi/bi11115/olap/olap.htm
    For your other issue, you need this troubleshooting doc:
    http://www.oracle.com/technetwork/database/options/olap/troubleshootingbieeconnections-504856.pdf
    Note that even though it says OBIEE 11.1.1.5, the above two docs are applicable to 11.1.1.6 and future releases.
    For security, you should define it in OBIEE instead of doing it in OLAP.

  • FOREIGN KEY across 2 schemas with ON DELETE CASCADE

    I found I could define
    ALTER TABLE user2.foo ADD CONSTRAINT fk_bar FOREIGN KEY (bar) REFERENCES
    user1.foobar;
    if user1 had granted SELECT and REFERENCES to user2.
    But what if I want to have
    ALTER TABLE user2.foo ADD CONSTRAINT fk_bar FOREIGN KEY (bar) REFERENCES
    user1.foobar ON DELETE CASCADE;
    I ran
    GRANT DELETE ON foo TO user1
    as user2,
    but I still get ORA-02292 (child record found) when I delete a row in user1.foobar which has a related row in user2.foo.
    Any idea what to do?

    Sorry, I made a mistake in my test. It will also work with ON DELETE CASCADE.
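    For anyone hitting the same issue, here is a minimal sketch of the working sequence (schema, table and column names follow the example above; everything else is an assumption). Note that no GRANT DELETE is needed - the cascade is driven by the child table's own constraint:
    -- As user1 (owner of the parent table): allow user2 to reference it
    GRANT REFERENCES, SELECT ON foobar TO user2;
    -- As user2 (owner of the child table): create the FK with ON DELETE CASCADE
    ALTER TABLE foo
      ADD CONSTRAINT fk_bar FOREIGN KEY (bar)
      REFERENCES user1.foobar
      ON DELETE CASCADE;
    -- Deleting a row from user1.foobar now removes the matching rows in user2.foo
    -- automatically; no extra privileges on user2.foo are required for the cascade.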

  • Normalized (3NF) VS Denormalized(Star Schema) Data warehouse :

    What are the benefits of a normalized data warehouse (3NF) over a denormalized one (star schema)?
    If the DW is in 3NF, do we need to create a separate physical database containing several data marts (star schemas) with physical tables, which feed the cube, or do we create views (an SSAS data source view) in star-schema form on top of the 3NF warehouse, which feed the cube?
    Please explain the pros and cons of a 3NF and a denormalized DW.
    thanks in advance.
    Zaim Raza.

    Hi Zaim,
    1) Normally, a 3NF schema is typical for the ODS layer, which is simply used to fetch data from the sources and to generalize, prepare and cleanse it for the upcoming load into the data warehouse.
    2) When it comes to the DW layer (data warehouse), the data modeler's general challenge is to build a historical data silo.
    A star schema with slowly changing facts and slowly changing dimensions is only partially suitable.
    Data Vault and other similar specialized methods provide, in my opinion, wider possibilities and flexibility.
    3) A star schema is perfectly suitable for data marts. SQL Server 2008 and higher contain numerous query optimizer improvements to handle such workloads efficiently. SQL Server 2012 introduced columnstore indexes, which make it possible to build robust star-model data marts with SQL query performance comparable to MS OLAP (see the sketch at the end of this reply).
    So, my suggestions are:
    1) Create a solid, consistent DW solution.
    2) Create separate data marts on top of the DW for specific business needs.
    3) Create the necessary indexes, PK and FK constraints, and statistics (on the FKs in the fact tables) to help the SQL optimizer as much as possible.
    4) Forget about the approach of defining an SSAS data source view on top of 3NF (or any other DWH modeling method), since that is the road to performance and maintenance issues in the future.
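    As a minimal illustration of point 3) in the first list above (a columnstore-backed star-model data mart on SQL Server 2012), assuming the referenced dimension tables dbo.DimDate, dbo.DimProduct and dbo.DimCustomer already exist; all object names here are made up for the example:
    -- Hypothetical fact table for a star-model data mart
    CREATE TABLE dbo.FactSales (
        DateKey     int   NOT NULL,
        ProductKey  int   NOT NULL,
        CustomerKey int   NOT NULL,
        Quantity    int   NOT NULL,
        Revenue     money NOT NULL,
        CONSTRAINT FK_FactSales_DimDate     FOREIGN KEY (DateKey)     REFERENCES dbo.DimDate (DateKey),
        CONSTRAINT FK_FactSales_DimProduct  FOREIGN KEY (ProductKey)  REFERENCES dbo.DimProduct (ProductKey),
        CONSTRAINT FK_FactSales_DimCustomer FOREIGN KEY (CustomerKey) REFERENCES dbo.DimCustomer (CustomerKey)
    );
    -- SQL Server 2012 supports a nonclustered columnstore index on the fact table
    -- (the table becomes read-only until the index is dropped or disabled)
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
        ON dbo.FactSales (DateKey, ProductKey, CustomerKey, Quantity, Revenue);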

  • What happens when i remove Dimension table in extended star schema

    How do I integrate Flex Builder, Web Dynpro and Xcelsius?
    I found some scenarios where, when the content of the table changes, the chart changes for the changed data in the table, using Flash Islands and Web Dynpro.
    My scenario is like this:
    when I change the chart with the sliders, the content of the table should automatically update to the chart level - but both the table and the chart are in the same dashboard. How can I work out this scenario using Flash Islands, Flex and Web Dynpro?
    Thanks in advance

    Hi Sateesh,
    Please go through this URL; it will give you an overall idea of the extended star schema with DIM IDs, SIDs and the data flow.
    http://help.sap.com/saphelp_nw70/helpdata/EN/8d/2b4e3cb7f4d83ee10000000a114084/frameset.htm
    I hope it will help you.
    Regards
    Suresh B.G.

  • Understanding Star Schema Operations

    Hi All,
    I have worked on Oracle Answers and have knowledge of data warehousing concepts. I have enough Oracle documentation on OLAP, OBIEE and data warehousing.
    But I'm not able to understand the star schema concepts inside and out. I'm lost in all the documents and can't find simple answers to the following topics:
    1. Is it necessary to create dimensions using CREATE DIMENSION, or can it be a simple relational table structure created with CREATE TABLE statements?
    2. What are the steps to create a very basic/simple star schema using plain SQL statements, without using any tool?
    3. Where can I find a very basic, step-by-step tutorial to create a very simple star schema with SQL -> load test data with SQL commands -> then what queries we can use to get BI reports, or use OBIEE to create a simple RPD?
    Please give me some idea on this, as I have to create a small OBIEE repository where I have to build the warehouse and the ETL process, everything.
    Please advise.
    Thanks in Advance.
    Sudipta

    No, my question was about creating the star schema. If I have to create a very simple star schema, can I create it with the following relational tables?
    create table dim_products (
      product_id number primary key,
      product_name varchar2(32),
      brand varchar2(32),
      category_name varchar2(32),
      sub_category_name varchar2(32)
    );
    create table dim_time (
      time_id number primary key,
      selling_date date,
      selling_day number(2),
      selling_month varchar2(12),
      selling_quarter varchar2(6),
      year number(4)
    );
    create table dim_region (
      region_id number primary key,
      store_name varchar2(32),
      city_name varchar2(32),
      state_name varchar2(32),
      country_name varchar2(32),
      region_name varchar2(32)
    );
    create table fact_sales (
      sale_id number primary key,
      region_id number,
      time_id number,
      product_id number,
      unit_sold number,
      constraint fact_sales1 foreign key (region_id) references dim_region (region_id),
      constraint fact_sales2 foreign key (time_id) references dim_time (time_id),
      constraint fact_sales3 foreign key (product_id) references dim_products (product_id)
    );
    Or do I have to create DIMENSION objects explicitly in order to build the hierarchy levels?
    Regards
    Sudipta
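    (For what it's worth, plain CREATE TABLE statements like the ones above are enough for a star schema; an Oracle DIMENSION object is optional metadata that mainly helps the optimizer with query rewrite against materialized views. A minimal sketch of what one could look like for the dim_products table above - the dimension, level and hierarchy names are made up:)
    create dimension products_dim
      level product      is (dim_products.product_id)
      level sub_category is (dim_products.sub_category_name)
      level category     is (dim_products.category_name)
      hierarchy prod_rollup (
        product child of sub_category child of category
      )
      attribute product determines (dim_products.product_name, dim_products.brand);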

  • Complex request with range filter and boolean search

    Hi
    I need to build and maintain complex requests to Oracle Endeca Server. These requests can include different types of attributes: string, datetime, int.
    For string attributes I have created a search interface and use Boolean search. For datetime attributes I use a range filter. Separately each works as I expected, but it fails when I combine a range filter and a Boolean search in a single request. As far as I can see from the results, the range filter is completely ignored. What could be wrong with the following request?
    <?xml version="1.0" ?>
    <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
    <S:Body>
    <ns3:Request xmlns:ns2="http://www.endeca.com/MDEX/lql_parser/types" xmlns:ns3="http://www.endeca.com/MDEX/conversation/1/0">
    <ns3:State/>
    <ns3:Operator xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns3:RecordFilterOperator">
    <ns3:RecordFilter Name="OnlyData">AND(NOT(mdex-property_Key),NOT(mdex-dimension_Key),NOT(mdex-config_Key),NOT(mdex-precedenceRule_Key),NOT(system-group_Key),NOT(system-group-membership_Key))</ns3:RecordFilter>
    </ns3:Operator>
    <ns3:Operator xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns3:SortOperator" Direction="Descending" Key="string_prop" OwnerId="Records"/>
    <ns3:Operator xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns3:RangeFilterOperator">
    <ns3:RangeFilter AttributeName="dateTime_prop">
    <ns3:LowerBound Inclusive="true">2012-07-24T19:47:01Z</ns3:LowerBound>
    </ns3:RangeFilter>
    </ns3:Operator>
    <ns3:Operator xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns3:SearchOperator">
    <ns3:SearchFilter SnippetLength="20" EnableSnippeting="true" Mode="Boolean" Key="orderSearch">multiple_prop : one</ns3:SearchFilter>
    </ns3:Operator>
    <ns3:ContentElementConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns3:RecordListConfig" HandlerFunction="RecordListHandler" HandlerNamespace="http://www.endeca.com/MDEX/conversation/1/0" Id="Records">
    <ns3:RecordsPerPage>3</ns3:RecordsPerPage>
    </ns3:ContentElementConfig>
    </ns3:Request>
    </S:Body>
    </S:Envelope>
    If my approach is incorrect, could you point me to a better way to handle such requests?
    Regards,
    Eugene.

    Hi Eugene,
    As you know, this issue has been discussed further, so I am posting here the summary of the findings, with the hope that this summary will benefit the Endeca Server user community at large.
    * Boolean search only works with range filters for the AND operator. The documentation link above confirms this.
    * Because of this restriction, you need to build another query that achieves the same result. Two suggestions that we've received are below; both options are equally good and would achieve the results you need:
    - Option 1. You could perform the key restriction of the search via a restrictive search interface rather than using the boolean match mode. This would involve creating a search interface for each field and then querying against the search interface for the field in question:
    <Operator xmlns="http://www.endeca.com/MDEX/lql_parser/types" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="SearchOperator">
    <SearchFilter SnippetLength="20" EnableSnippeting="true" Mode="All" Key="orderSearch_for_field_multiple_prop">one</SearchFilter>
    </Operator>
    For this option, you would have to create a search interface for each attribute, using the Config Web Service. Here is the topic about loading config documents into the Endeca Server: http://docs.oracle.com/cd/E29805_01/server.230/es_dev/toc.htm#Loading%20configuration%20documents
    - Option 2. You could implement the range filter through EQL row expression syntax instead and pass it in via the DataSourceFilterOperator, which acts as a prefilter on all other query logic, using the Conversation Web Service. Thus you might include:
    <Operator xmlns="http://www.endeca.com/MDEX/lql_parser/types" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="DataSourceFilterOperator">
    <DataSourceFilterString>dateTime_prop <= TO_DATETIME('2012-07-24T19:47:01Z')</DataSourceFilterString>
    </Operator>
    For this option, using the DataSourceFilter does not require you to model the rest of the query in EQL. It will perform slightly better than the filter in option 1 because of the order in which record set intersections are computed, but the difference is likely to be very small.
    Finally, if this addresses your question, please mark this question as answered, so that others can benefit from it on the forum. Big thanks to Endeca Server development team (you know who you are!), for helping to find solutions for this request.
    Julia

  • Error with creating star schema using HsvStarSchemaACM

    Hi,
    I am trying to create a star schema using the HsvStarSchemaACM API. But when calling the Create function of the API, I get the exception below: Exception from HRESULT: 0x80040251 (A general error occurred while trying to obtain a database Reader/Writer lock). Does this have something to do with the parameters passed and their connections (the connection works fine, and Create Star Schema runs fine through Workspace)?
    Any solutions to this is more welcome.
    Thanks,
    Logu

    Can you provide details on HsvStarSchemaACM API? How is it related to ODI?

  • Trouble with star schema

    Hi All,
    I have a star schema in place with 8 dimension and 1 fact table.
    But due to some specific requirement, I need to denormalize the schema. I want to copy all fields from all dimension tables to the fact table.
    I know this sounds bad, but I have to do it - please don't ask why.
    Now, the same can be done using a materialized view, but the problem with the MV is that some fields are present in 2 or more tables with the same column name, because of which I can't create the MV.
    Is there some other way to achieve this goal?
    BRK.

    I have too many records which is affecting the performance of the database.
    But now it's raising performance problems.
    Your application had a performance problem. Somebody guessed that a star schema would solve the problem, but it hasn't. So now you intend to implement some spavined variant of a star schema because somebody has suggested that complete de-normalisation might solve the problem.
    Is this a demonstrable fact (you have benchmarks and explain plans to justify it) or just a guess?
    I know how difficult this sort of thing can be, because I've been working through a similar scenario for a while now. The important thing is to get some decent metrics on your application. Use Statspack. Use the wait interface. Find out where your application is spending its time and figure out what you need to do to reduce the waits. Benchmark some alternatives. This may result in you having to re-write your code, but at least you'll be doing so in the knowledge that you are addressing the real problem.
    Cheers, APC
    Blog : http://radiofreetooting.blogspot.com/

  • Need help with an extended star schema.

    I have the following data in spreadsheet form and I need to transpose it into an extended star schema in order to produce my characteristics and key figures. However, I'm having a little trouble understanding where to include the Product Group in the star schema, and also what to do with the Sales Doc (invoice number) in the sales information spreadsheet.
    I need to summarise the data into the following also:
    1.     Total quantity and revenue for each Channel and product group
    2.     Total quantity and revenue for each Channel and product group (July 2009 to September 2009)
    3.      Total Quantity and Revenue for iphone sales
    So my question, in short, is: what would the star schema look like? I'm hoping somebody can help me over this hurdle.
    The following are the details from the spreadsheet:
    Product Details (Attributes)
    File Name:  Product_Details.csv
    Structure:
    FieldName | Length | Description
    ProdID | 5 | Uniquely identifies a product
    PG_Group | 20 | Identifies products in specific groups (navigational field)
    Sample Data
    ProdID,PG_Group
    PDS01,COMPUTER
    PDS02,COMPUTER
    PDS03,COMPUTER
    PDS04,ACCESSORIES
    PDS05,ACCESSORIES
    PDS06,ACCESSORIES
    PDS07,ACCESSORIES
    PDS08,HARDWARE
    Product Description (Text)
    File Name:  Product.csv
    Structure:
    FieldName | Length | Description
    ProdID | 5 | Uniquely identifies a product
    Prod_Desc | Medium Text | Represents the product description (navigational field)
    Language | 2 | Represents the language descriptor
    Sample Data
    ProdID,Prod_Desc,Language
    PDS01,Notebook Speedy I,EN
    PDS02,Notebook Speedy II,EN
    PDS03,PC Thinkbox I,EN
    PDS04,USB Adaptor,EN
    PDS05,iPhones PX2,EN
    PDS06, Stereo Kit,EN
    PDS07,Camera Connector,EN
    PDS08,USB Storage,EN
    Transactional Data
    Sales information
    File Name:  Sales.txt
    Structure:
    FieldName | Length | Description
    Product | 5 | Uniquely identifies a product
    Sales Doc | 4 | Sales document number (invoice number)
    Calendar Day | 8 | YYYYMMDD format
    Channel | 1 | Represents the distribution channel the product was sold by
    Quantity | 4 | Numeric value indicating the number of each product sold on a specific sales document
    Unit | 2 | Refers to the unit specified by the quantity
    Revenue | 8 | Refers to the sales amount of a specific product on a specific sales document
    Currency | 3 | Relates the revenue currency type
    Sample Data
    Product,Sales Doc,Calendar Day,Channel,Quantity,Unit,Revenue,Currency
    ZD_PROD,0DOC_NUMBER,0CALDAY,ZD_CHAN,ZD_QTY,0UNIT,ZD_REV,0CURRENCY
    PDS07,7389,20090705,A,95,EA,151905,USD
    PDS08,7389,20090705,A,115,EA,264385,USD
    PDS02,7473,20090706,B,55,EA,3580445,USD
    PDS01,7472,20090706,C,110,EA,604989,USD
    PDS02,7472,20090706,C,43,EA,2799257,USD
    PDS03,7472,20090706,C,75,EA,4507425,USD
    PDS04,7472,20090706,A,339,EA,301032,USD
    PDS06,7472,20090706,A,120,EA,47976,USD
    PDS01,7473,20090718,D,55,EA,3024945,USD
    PDS02,6845,20090806,B,30,EA,195297,USD
    PDS07,6877,20090807,C,800,EA,12792,USD
    PDS08,6877,20090807,C,250,EA,57475,USD
    PDS03,6912,20090812,C,250,EA,1502475,USD
    PDS04,6900,20090812,B,70,EA,6216,USD
    PDS01,6899,20090812,C,12,EA,659988,USD
    PDS04,6899,20090812,D,55,EA,4884,USD
    PDS04,7479,20090815,C,80,EA,7104,USD
    PDS05,7479,20090815,C,300,EA,70497,USD
    PDS05,6809,20090818,A,1000,EA,234990,USD
    PDS06,6809,20090818,A,130,EA,51974,USD
    PDS03,6831,20090822,C,15,EA,901485,USD
    PDS04,6831,20090822,C,500,EA,4440,USD
    PDS03,6830,20090822,C,125,EA,7512375,USD
    PDS06,6830,20090822,C,400,EA,15992,USD
    PDS04,6841,20090825,C,53,EA,47064,USD
    PDS05,6841,20090825,C,350,EA,822465,USD
    Distribution Channel
    Channel (text)
    File Name:  Channel.csv
    Structure:
    FieldName | Length | Description
    ChanID | 1 | Uniquely identifies a specific distribution channel
    Chan_Desc | Medium Text | Full name of each distribution channel (navigational field)
    Sample Data
    ChanID, Chan_Desc
    A,Phone
    B,Fax
    C,Internet
    D,Other
    Star Schema Model:


  • What is star schema - pls explain with example

    Hi.
    What is a star schema? Please explain with an example.
    Thanks in advance
    Giri

    Hi Giri,
    SAP BW employs an extended star schema.
    The extended star schema consists of a fact table (in two parts, E and F - F is the inbound table, E is long-term storage). Dimension tables are connected to the fact tables via the DIMID (dimension ID), which is a generated value stored in both the dimension and the fact tables. In addition, the dimension tables are connected to tables which hold master data values (or bind the dimension table to tables that hold the values), such as the S, P, Q, X and Y tables. The dimension tables hold SIDs, again generated keys which relate the values in the dimension table (the DIMIDs) to master data values. Thus, at query time, join operations ensure that the master data values can be merged with the key figure values stored in the fact tables.
    Truthfully, one does not need to understand this schema extensively in order to model in BI in SAP NetWeaver. It helps to understand master data, navigational attributes, etc. Otherwise, simply model the key figures in the fact table and the characteristics into dimensions and you're good - the application generates the star schema for you - you don't have to specify it.
    See the transaction "LISTSCHEMA" which will show you the relationship between the F fact table and the other tables of the cube's star schema.
    Also follow the link for more info:
    http://help.sap.com/saphelp_nw04/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/content.htm
    Thanks for any points you assign.
    Regards

  • Star schema design

    Hi,
    I know that in the classical star schema the dimension tables sit within the InfoCube, so we cannot use a dimension table in any other cube; we need a separate dimension table for that cube even though it might hold the same data. I also know that to overcome this redundancy the extended star schema came into the picture, where we have SID tables, keep the dimension tables out of the cube, and reuse the dimension tables across many cubes.
    Now what I don't understand is this: instead of having separate SID tables for linking the dimension and fact tables, why can't we make the DIMENSION tables generic and keep them out of the InfoCube, so that we can use the same dimension table for many InfoCubes? In that case we wouldn't need SID tables.
    Suppose I have one InfoCube with the dimensions vendor, material and customer and the key figures quantity and price, and a separate InfoCube with the dimensions material, customer and location and some other key figure. Why can't I keep the dimensions out of the InfoCube and use the dimensions material and customer for both InfoCubes?

    Your dimension tables are filled based on your transaction data - which is why dimension table design is very important: you decide how to group related data from the incoming transaction data into your dimension tables.
    The dimension tables have SIDs which in turn point to master data. In the classic star schema the dimension tables sit outside the cube but hold the master data within them, which is what the extended star schema overcomes.
    The reason dimension tables could be reused is that the DIM IDs and SIDs in the dimension table correspond to the transaction data in the cube - and unless the DIM IDs in both your cubes match, you cannot reuse the dimension tables. Matching DIM IDs would mean you have exactly the same data in both cubes, in which case you would not need two cubes with the same data.
    Example :
    Cube 1 : Fact Table
    Dim1ID | DIM2ID | KF1
    1|01|100
    2|02|200
    Dimension Table: Dim 1 (assuming there are 2 characteristics in this dimension) - here the Dim1ID field is the key
    Dim1ID | SID1 | SID2
    1|20|25
    2|30|35
    Dimension Table Dim 2 - Here the Dim2ID field is key
    Dim2ID | SID1 | SID2
    01| 30| 45
    02|45|40
    Here the DIM IDs for the cube's fact table are generated at load time from the NRIV table (read the material on number ranges) - this means that you cannot control DIM ID generation across cubes, which means that you cannot reuse dimension tables.
