BW Star Schema & Multidimensional Data Modelling

Hi BW Experts,
Can anyone please let me know where I should look on help.sap.com or service.sap.com
for detailed info on the BW star schema and multidimensional data modelling, and how they are used in BW.
Please let me know where to find this info.
Thanks

hi...
star schema..
Please check the threads below..
Differences between Star Schema and extended Star Schema
What is the difference between Fact tables F & E?
Invalid characters errors
multidimensional modelling..
http://help.sap.com/bp_biv133/documentation/Multi-dimensional_modeling_EN.doc
hope this helps...

Similar Messages

  • Universe Design approach - Dimensional Data model

    We use a dimensional data model which has about 15 different models based on subject areas, e.g. Billing, Claims, Eligibility, etc. Each model has its own fact table linked to dimensions, some of which are conformed dimensions present in multiple models. We want to build Universes on top of this model, for creating Crystal Reports and to expose it to the business users to create WebI reports through InfoView.

    The client has already built 15 Universes, one for each subject area; each has one fact table and many conformed dimensions, plus some junk dimensions. When a report needs data from more than one Universe, we have to link the different Universe queries at the report level.
    The major drawback of this approach is change management. Our data model will be expanded in future, which forces me to update multiple Universes when, say, a conformed dimension changes, since the conformed dimension table is present in multiple Universes.

    Now we are considering the approaches below to get a better architectural design and an easier user interface.

    1. Creating a master Universe for the dimension tables (there may be some effort here to modify the data model so the dimension tables can be linked together), then creating derived Universes for each fact table. These derived Universes will be linked back to the common dimension Universe.
    Maintenance will be easier in this approach, as whenever a dimension changes I need not update multiple Universes; but as I am linking Universes at the Designer level as master and derived Universes, I am concerned about report development when a report needs data from multiple Universes. Then I would be linking "multiple linked Universe" queries at the report level.

    2. The other option I have is to combine multiple dimensional models (subject areas) into one Universe. This way we create as few Universes as possible, maybe ending up with 5 or 6, but we will have a tough time maintaining security of data elements. For instance, at a high level a Universe may have Billing and Eligibility data, where I have to maintain strict security for the user groups and let only specific users see/use all data elements (objects).

    Hope I have summarized my question well. Any input on the approaches you are aware of, and their pros and cons in terms of build time and report performance (creating WebI reports through InfoView), is appreciated!
    We want to see which approach works better for creating Crystal Reports and for business users who have little patience waiting for a report and need the best possible interface.

    There is no one perfect answer to your question. Universes are more of an art than a science, IMO. I can tell you that we have many conformed dimensions joined to multiple facts in a single Universe. The key to this approach is that for each fact table you will need a context. The advantage of this approach is the ease with which your WebI users will be able to build reports. The disadvantage is that Crystal Reports cannot handle multiple contexts, so your Universe is basically useless in CR. For CR, you will need to build Business Views rather than Universes.
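    As a hedged illustration of what a context amounts to (all table and column names below are invented, not from the posts above): each fact table gets its own join path to the shared conformed dimension, and any one query only ever traverses one of those paths.

        -- Context 1: the billing fact joined to the conformed dimension
        SELECT d.member_name, SUM(f.billed_amount)
        FROM   dim_member d
               JOIN fact_billing f ON f.member_key = d.member_key
        GROUP  BY d.member_name;
        -- Context 2: the claims fact joined to the same dimension
        SELECT d.member_name, SUM(f.claim_amount)
        FROM   dim_member d
               JOIN fact_claims f ON f.member_key = d.member_key
        GROUP  BY d.member_name;

    The contexts keep the BI server from fanning a single query out across both facts at once, which is what makes the single-Universe approach workable for WebI.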

  • Download IBM Book on Star Schema Design (Dimensional Modeling)

    Hello Friends,
    You can download an IBM Book on Dimensional Modeling from the following link:
    http://www.redbooks.ibm.com/redbooks.nsf/e9abd4a2a3406a7f852569de005c909f/e235dc46161249d38525703e00036135?OpenDocument
    The book explains the material very well.

    Dear Krish,
    Please try this link and let me know if you can open it. If not, I will email you the book:
    http://www.redbooks.ibm.com/abstracts/sg247138.html?Open
    My Email is [email protected]
    Thanks
    Dave

  • Trying (and failing) to read XML multi-dimensional data array

    i.e.
    <mpp:Vehicle>
      <mpp:Vehicle_Rgtrn_Ref>?</mpp:Vehicle_Rgtrn_Ref>
      <mpp:Vehicle_Model_Code>?</mpp:Vehicle_Model_Code>
      <mpp:DrivingRestrictionCode>?</mpp:DrivingRestrictionCode>
      <mpp:Object_Mnfct_Year>?</mpp:Object_Mnfct_Year>
      <mpp:Building_Number_const>1</mpp:Building_Number_const>
      <mpp:Building_Name>The Slums</mpp:Building_Name>
      <mpp:Sub_Building_Name>Flat 1</mpp:Sub_Building_Name>
      <mpp:Postcode_Area_Ref>???? ???</mpp:Postcode_Area_Ref>
      <mpp:Covers>
        <mpp:Cover>?</mpp:Cover>
        <mpp:Cover>?</mpp:Cover>
        <mpp:Cover>?</mpp:Cover>
      </mpp:Covers>
    </mpp:Vehicle>
    <mpp:Vehicle>........
    I'm currently trying to create a web service to read a SOAP message containing such XML. When reading the message, EDQ converts the singleton nodes in each Vehicle node group to a stringarray, but only provides the last Cover node of each Vehicle in a stringarray. I'm not in control of the XML structure, so trying to get the supplier to concatenate each vehicle's covers into a delimited list within a single node may be a battle.
    Has anybody else ever encountered this? Is it possible, or am I flogging a dead horse trying to achieve this?
    Thanks in advance.
    Jon

    Hi Richard,
    Thank you for your suggestion to my colleague Jon regarding the -multi in option. As he said, what we currently get when we have a SOAP message containing <Covers><Cover>A</Cover><Cover>B</Cover><Cover>C</Cover></Covers> is an EDQ stringarray containing the value {C}.
    I've tried your suggestion and ran the wsdlizer with the -multi in option, but the wsdlizer fails with the following error:
    H:\Workspaces\svn\edqTrunk\EDQ\WebServices\WSDL>java -jar wsdlizer.jar -o lv-mpp-query-request-ws.jar -multi in MPP_Query_Request_Service.wsdl
    INFO: 10-Sep-2013 13:24:07: wsimport succeeded
    Problem encountered during annotation processing;
    see stacktrace below for more information.
    com.datanomic.director.webservices.apt.ScannerException: multi-record request element must contain single nested list
    com.datanomic.director.webservices.apt.Scanner$Processor$Servicer.makeDef(Scanner.java:747)
    etc.
    SEVERE: 10-Sep-2013 13:24:07: APT scan failed
    Unfortunately, the error message is not very helpful to me. Do you know what it is whingeing about? I've also tried running the wsdlizer on our old wsdl files, i.e. before we introduced the parent tag <Covers> around <Cover>, but this failed with the same result when running with the -multi in option.
    The wsdl file, with most of the XML tags removed for clarity and brevity, looks like this:
    <wsdl:definitions xmlns:schema="http://xxxx/MppService" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="http://xxxx/MppService" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" name="LV" targetNamespace="http://xxxx/MppService">
       <wsdl:types>
          <xsd:schema xmlns="http://xxxx/MppService" xmlns:xsd="http://www.w3.org/2001/XMLSchema" attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://xxxx/MppService">
             <xsd:element name="request">
                <xsd:complexType>
                   <xsd:sequence>
                      <xsd:element minOccurs="1" name="CorrelationUID" type="xsd:string"/>
                      <xsd:element minOccurs="1" name="RequestorName" type="xsd:string"/>
                      <xsd:element minOccurs="1" name="Brands" type="brandType"/>
                      <xsd:element minOccurs="1" name="Parties" type="partiesType"/>
                      <xsd:element minOccurs="1" name="InsuredObjects" type="insuredObjectsType"/>
                   </xsd:sequence>
                </xsd:complexType>
             </xsd:element>
             <xsd:element name="response">
                <xsd:complexType>
                   <xsd:sequence>
                      <xsd:element name="CorrelationUID" type="xsd:string"/>
                      <xsd:element name="MPPResponseUID" type="xsd:string"/>
                      <xsd:element maxOccurs="unbounded" minOccurs="1" name="Brand" type="schema:brandsType"/>
                   </xsd:sequence>
                </xsd:complexType>
             </xsd:element>
             <xsd:complexType name="brandsType">
                <xsd:sequence>
                   <xsd:element name="BrandCode" type="xsd:string"/>
                   <xsd:element name="ResponseMPD">
                      <xsd:complexType>
                         <xsd:sequence>
                            <xsd:element minOccurs="0" name="FunctionAvailable" type="xsd:string"/>
                            <xsd:element minOccurs="0" name="MessageCode" type="xsd:string"/>
                         </xsd:sequence>
                      </xsd:complexType>
                   </xsd:element>
                   <xsd:element name="ResponseCI">
                      <xsd:complexType>
                         <xsd:sequence>
                            <xsd:element minOccurs="0" name="FunctionAvailable" type="xsd:string"/>
                            <xsd:element minOccurs="0" name="MessageCode" type="xsd:string"/>
                         </xsd:sequence>
                      </xsd:complexType>
                   </xsd:element>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="brandType">
                <xsd:sequence>
                   <xsd:element maxOccurs="2" minOccurs="1" name="BrandCode" type="xsd:string"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="partiesType">
                <xsd:sequence>
                   <xsd:element maxOccurs="unbounded" minOccurs="1" name="Party" type="schema:partyType"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="partyType">
                <xsd:sequence>
                   <xsd:element minOccurs="0" name="PartyUID" type="xsd:string"/>
                   <xsd:element minOccurs="1" name="RoleCode" type="xsd:string"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="insuredObjectsType">
                <xsd:sequence>
                   <xsd:element maxOccurs="1" minOccurs="0" name="Properties" type="schema:propertiesType"/>
                   <xsd:element maxOccurs="1" minOccurs="0" name="Vehicles" type="schema:vehiclesType"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="propertiesType">
                <xsd:sequence>
                   <xsd:element maxOccurs="unbounded" minOccurs="0" name="Property" type="schema:propertyType"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="vehiclesType">
                <xsd:sequence>
                   <xsd:element maxOccurs="unbounded" minOccurs="0" name="Vehicle" type="schema:vehicleType"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="propertyType">
                <xsd:sequence>
                   <xsd:element minOccurs="0" name="BuildingNumber" type="xsd:string"/>
                   <xsd:element minOccurs="0" name="BuildingName" type="xsd:string"/>
                   <xsd:element maxOccurs="1" minOccurs="0" name="Covers" type="schema:coversType"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="coversType">
                <xsd:sequence>
                   <xsd:element minOccurs="1" name="Cover" type="xsd:string"/>
                </xsd:sequence>
             </xsd:complexType>
             <xsd:complexType name="vehicleType">
                <xsd:sequence>
                   <xsd:element minOccurs="0" name="VehicleRegistrationMark" type="xsd:string"/>
                   <xsd:element minOccurs="1" name="ABIBrokernetCode" type="xsd:string"/>
                </xsd:sequence>
             </xsd:complexType>
          </xsd:schema>
       </wsdl:types>
       <wsdl:message name="request">
          <wsdl:part name="parameters" element="schema:request"/>
       </wsdl:message>
       <wsdl:message name="response">
          <wsdl:part name="parameters" element="schema:response"/>
       </wsdl:message>
       <wsdl:portType name="LVEI">
          <wsdl:documentation>Operations</wsdl:documentation>
          <wsdl:operation name="process">
             <wsdl:documentation>Process a query request</wsdl:documentation>
             <wsdl:input message="schema:request"/>
             <wsdl:output message="schema:response"/>
          </wsdl:operation>
       </wsdl:portType>
       <wsdl:binding name="LVBinding" type="tns:LVEI">
          <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
          <wsdl:operation name="process">
             <soap:operation soapAction="http://xxxx/MppService"/>
             <wsdl:input>
                <soap:body use="encoded"/>
             </wsdl:input>
             <wsdl:output>
                <soap:body use="literal"/>
             </wsdl:output>
          </wsdl:operation>
       </wsdl:binding>
       <wsdl:service name="LV-MPPQRS-v01">
          <wsdl:port name="LVEndpoint" binding="tns:LVBinding">
             <soap:address location="https://xxxx/LV-MPPQRS-v01"/>
          </wsdl:port>
       </wsdl:service>
    </wsdl:definitions>
    A colleague created the wsdl; it's been checked a number of times, but no one can find anything dodgy that might account for the wsdlizer error we're getting. So we're a bit lost now: the data coming in to our EDQ process is incomplete, which makes further development of our EDQ processes somewhat challenging.
    Any help would be gratefully received. I'm also happy to send any more information you may require.
    Jules

  • Question on BW Star Schema

    Hello all
    Please help me understand the DIM/SID table concept.
    I was going through the BW star schema and I was curious to know the importance of the dimension table. What is wrong with putting the SID directly in the fact table instead of relating it through the DIM table?
    Please post a link to documentation on SAP Star Schema, if you have.
    This is how I understood
    Fact Table:
    DIM_CUST     | QUANTITY
    001          | 50
    002          | 100
    001          | 60
    Dimension Table:
    DIM_CUST     | SID_CUST
    001          | 011
    002          | 022
    003          | 033
    SID Table:
    SID_CUST     | CUST_ID
    011          | 123ABC
    022          | 767TYT
    033          | 989UHY
    - In the fact table we identify a unique transaction by the combination of customer ID and date/time
    - We can have the same customer ID appearing multiple times in the fact table for different transactions
    - The DIM and SID tables have a 1:1 relation
    - In the SID table, the SID has a 1:1 relation with a customer ID
    If these are all correct, I still want to know why we can't have the SID appear directly in the fact table.
    Please explain.
    Thanks
    -Sudhakar

    About your last question:
    "Can anyone give me a scenario where we use multiple characteristics in the same dimension?"
    The first consideration, the number of characteristics involved, is the most evident, but consider also the limit on the number of key fields of a DB table (no more than 16).
    Speaking about a scenario: let's say you have to report on Division, Sales Org, Sales Office and Sales Person. Putting each of these chars in a "line item" dimension makes sense only in order to eliminate the corresponding dimension table; but are you sure that the query will be faster than with all of them in the same dimension? And what about the comprehensibility of the underlying schema? Think about a complex one, with 50 or more chars...
    By the way, the easier way to resolve such doubts is to read something about multidimensional data modelling: there's a wide literature on this topic that can't be summarized in a few rows without omitting important notes.
    If you don't want to start with "heavy" books, search http://service.sap.com/bw for a doc about Multidimensional Data Modeling (see under InfoIndex / DataModeling / BW ASAP for 2.0B Phase 2: Multi Dimensional Data modeling (doc)). It's an old doc (before 2000), but a very good starting point that will answer your questions about SID IDs and DIM IDs.
    Hope it helps
    GFV
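    To make the join path concrete, here is a minimal SQL sketch using the simplified tables from the question (real BW tables are generated with names like /BIC/D* and /BIC/S*; this only shows the shape):

        -- fact -> dimension -> SID -> readable characteristic value
        SELECT s.cust_id,
               SUM(f.quantity) AS total_quantity
        FROM   fact_table f
               JOIN dim_table d ON d.dim_cust = f.dim_cust
               JOIN sid_table s ON s.sid_cust = d.sid_cust
        GROUP  BY s.cust_id;

    The dimension table earns its keep when it bundles several characteristics (say customer, sales org and division) under one DIM ID, keeping the fact table narrow. When a dimension holds a single characteristic, BW's "line item" dimension option does exactly what Sudhakar suggests: the SID is stored in the fact table directly and the dimension table is dropped.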

  • Star schema for a uploaded data sheet

    Hi All gurus,
    I am new to this technology. I have a requirement like this: I have to prepare the star schema for the data sheet below.
    REPORT_DATE     PREPARED_BY     Units On-time     Units Late     Non-Critical On-time     Non-Critical Lates     Non-Critical DK On-time     Non-Critical DK Lates
    2011-01     Team1     1
    2011-02     Team1
    2011-03     Team1
    2011-01     Team2
    2011-02     Team2                         7     1
    2011-03     Team2                              4 5
    2011-01     Team3
    2011-02     Team3
    2011-03     Team3     1                         3
    (Take blank fields as zeros)
    Note: there are 3 report dates (2011-01, -02, -03) and three teams (Team1, 2, 3) as text data; all other columns contain number data.
    I am given Time as a dimension table containing the report date, and the whole sheet as a Data table. So how do I define the relationships for this in the Physical and BMM layers?
    I am thinking of making Time a dimension table and the whole table (as Data) a fact table in the Physical layer. Then in the BMM I want to carve out a logical dimension called Group from the Data physical table, and make Group and Time the dimension tables and Data the fact table.
    Is this approach correct? Please advise, and if you have any better idea, please note down which tables should be taken as dimension and fact tables in both the Physical layer and the BMM. Your help will be appreciated, so thanks in advance. You can also advise on any change in the number of physical tables in the physical schema design.

    Your suggestions are eagerly anticipated.
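    For illustration, a hedged sketch of the physical shape described above (all names invented; the measure defaults honour the "take blank fields as zeros" note):

        CREATE TABLE dim_time (
          time_key    NUMBER(10) PRIMARY KEY,
          report_date VARCHAR2(7)            -- e.g. '2011-01'
        );
        CREATE TABLE dim_group (
          group_key NUMBER(10) PRIMARY KEY,
          team_name VARCHAR2(30)             -- Team1, Team2, Team3
        );
        CREATE TABLE fact_data (
          time_key                NUMBER(10) REFERENCES dim_time,
          group_key               NUMBER(10) REFERENCES dim_group,
          units_on_time           NUMBER DEFAULT 0,
          units_late              NUMBER DEFAULT 0,
          non_critical_on_time    NUMBER DEFAULT 0,
          non_critical_lates      NUMBER DEFAULT 0,
          non_critical_dk_on_time NUMBER DEFAULT 0,
          non_critical_dk_lates   NUMBER DEFAULT 0
        );

    In the BMM, Time and Group would then be the logical dimensions and fact_data the logical fact table, as proposed in the question.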

  • RuleAuthor : error importing XML Schemas into Data Model

    Hi,
    I have problems during import XML Schema in my Data Model.
    I'm following these steps:
    1) Click Definitions tab;
    2) Click XMLFact;
    3) Click Create
    4) I enter the path for the schema and the directory to store JAXB-generated classes. In this directory every user has all permission (777).
    In the next step, when I click on "Add Schemas", I get this error:
    java.io.IOException: Not enough space
      at java.lang.UNIXProcess.forkAndExec(Native Method)
      at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
      at java.lang.ProcessImpl.start(ProcessImpl.java:65)
      at java.lang.ProcessBuilder.start(ProcessBuilder.java:451)
      at java.lang.Runtime.exec(Runtime.java:591)
      at java.lang.Runtime.exec(Runtime.java:429)
      at java.lang.Runtime.exec(Runtime.java:326)
      at oracle.rules.sdk.datamodel.impl.DataModelUtil.compileJavaFile(DataModelUtil.java:479)
      at oracle.rules.sdk.datamodel.DataModelManager.addXMLSchemaPath(DataModelManager.java:984)
      at oracle.rules.sdk.mapper.RuleObjectHelper.addSchemapath(RuleObjectHelper.java:2759)
      at oracle.rules.ra.uix.mvc.SchemaSelectorEH.addSchema(SchemaSelectorEH.java:138)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:585)
      at oracle.rules.ra.uix.mvc.BeanEH.genericHandleEvent(BeanEH.java:869)
      at oracle.rules.ra.uix.mvc.BeanEH.handleEvent(BeanEH.java:838)
      at oracle.cabo.servlet.event.TableEventHandler.handleEvent(Unknown Source)
      at oracle.cabo.servlet.event.TableEventHandler.handleEvent(Unknown Source)
      at oracle.cabo.servlet.event.BasePageFlowEngine.handleRequest(Unknown Source)
      at oracle.cabo.servlet.AbstractPageBroker.handleRequest(Unknown Source)
      at oracle.cabo.servlet.ui.BaseUIPageBroker.handleRequest(Unknown Source)
      at oracle.cabo.servlet.PageBrokerHandler.handleRequest(Unknown Source)
      at oracle.cabo.servlet.UIXServlet.doGet(Unknown Source)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
      at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:711)
      at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
      at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
      at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
      at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:302)
      at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:190)
      at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
      at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
      at java.lang.Thread.run(Thread.java:595)
    I cannot find the solution!
    Can someone help me?
    Thanks.

    Do you still have enough disk space available on your file system to store the different xml-facts the RuleAuthor will create for you?

  • Data Modeler - Default object's schema

    Hi All!
    I'm a beginner with Data Modeler.
    I can't find a simple option: the default object owner, or default schema.
    In Data Modeler I already have tables and other objects, so when I generate the DDL SQL
    I get:
    CREATE TABLE <table name>
    How can I get:
    CREATE TABLE <schema>.<table name>
    Thanks!

    Thank you, Philip, for your answer!
    Actually, I did it.
    I opened the physical model, added a user and changed the user for the table.
    So that's OK; however, I have not found where to change the owner for indexes.
    Because I get this:
    CREATE TABLE <schema>.<table name> --- it's OK
    CREATE INDEX <index name> --- it's wrong, there is no owner
    How can I change the index owner?
    And, indeed, it's not convenient to change the owner for each object. Is there a way to change owners for all objects at once?

  • Star schema versus snowflake schema

    I have a question regarding dimensional data modelling: when would a star schema model be useful, and when would a snowflake schema model be useful?
    In a star schema, we have only the fact table connected to the dimensions. In a snowflake schema, we normalize a dimension one more level. Let us say we have the dimension Product: Product can be normalized into another table called Supplier. To take another example, the Customer dimension can be normalized into Country...
    The advantage of a star schema is that it is easy to write queries, since we have fewer tables; you do not need to join many tables when writing the query, and it can sometimes improve performance.
    The trade-off of a snowflake schema is that it is a little more complex to write queries, since we have to join more tables, yet performance can also sometimes improve when we join smaller tables...
    My question is: in what circumstances should we use a star versus a snowflake schema? I am not able to pin down the word "sometimes".
    Any help is highly appreciated...

    Hi,
    There is a trade-off between availability and complex analytics.
    A star schema is good if your functional requirements are really simple, e.g. the dimensions are not SCD Type 2 (slowly changing dimensions) and you don't need to do "AS IS" vs "AS WAS" reporting.
    In modern analytics, in any domain, dimensions are SCD Type 2 as the business keeps evolving. In a star schema structure this will cause an explosion of data if there are frequent changes at the higher levels of the dimensional hierarchy. That will hit performance anyway.
    As far as my experience goes, at the data model level it is better to have snowflaked dimensions, and while managing the metadata (in a BI reporting tool) you can consolidate the snowflaked dimensions into star schema structures. That will make ad hoc analytics much simpler for the business users.
    A lot of performance measures can be taken to improve the end-user experience.
    In short, the trend in BI analytics demands a snowflaked structure rather than a simple star schema structure.
    Hope this helps.
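    To make the product/supplier example from the question concrete, here is a minimal sketch of both shapes (invented names, Oracle syntax):

        -- Star: supplier attributes folded into the product dimension
        CREATE TABLE dim_product_star (
          product_key   NUMBER PRIMARY KEY,
          product_name  VARCHAR2(50),
          supplier_name VARCHAR2(50),   -- repeated on every product row
          supplier_city VARCHAR2(50)
        );
        -- Snowflake: supplier normalized out one level
        CREATE TABLE dim_supplier (
          supplier_key  NUMBER PRIMARY KEY,
          supplier_name VARCHAR2(50),
          supplier_city VARCHAR2(50)
        );
        CREATE TABLE dim_product_snow (
          product_key  NUMBER PRIMARY KEY,
          product_name VARCHAR2(50),
          supplier_key NUMBER REFERENCES dim_supplier
        );

    A query against the snowflake pays one extra join, but a change to a supplier attribute is a single-row update rather than an update of every product row, which is the maintenance argument made above.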

  • BMM layer creation (Star Schema Physical Layer) - What to add/not add?

    Hi All,
    I am just looking for any general feedback on the thought/question below.
    I am setting out on creating my first BMM layer and trying to determine what I need to do in this layer that will be different and add value from what I already did in the physical layer. My data model is already defined as a star schema within my data mart source. So in the physical layer I have my facts imported along with the dimensions and I have joined them together as needed. Here is what I think I will setup as I move into the BMM layer:
    1. I will add hierarchies as needed to enable drill-down within my reports
    2. I will need to add my calculations/measures to allow for any type of metric to be returned through a request in Answers
    3. I do not see a large need to create logical tables (at least not yet) based on multiple physical source tables, as my source is already a star schema and dimensionally modelled. For users who also source a star schema at the physical layer: do you find that you do a lot of logical table creation/mapping to add functionality, or does your BMM look a lot like your Physical Layer?
    Other than steps 1 and 2, I am not really sure how much additional manipulation I might do from the Physical to BMM layer since my Physical is already a star schema. Am I missing anything? Obviously everyone's data model and circumstances are different but I wasn't sure if maybe there were some good thoughts on what I might be missing (if anything)?
    One last question: I am not currently planning to use any aliases at the Physical Layer, but I do plan to rename the tables at the Presentation Layer with more business-like verbiage. Why are others using aliases?
    Thanks in advance for the help.
    K

    Alastair, thanks for the advice. I'll definitely keep that in mind as I start to build out the BMM.
    One question/issue I just ran into as I was wrapping up my Physical Layer mapping. When I check for global consistency, I am getting an error that is complaining that I have multiple joins defined between the same two tables (which I do). This is because I have the following setup:
    TBL_A_FACT
    F_ID_HIT
    F_HIT_DESC
    F_ID_MISSED
    F_MISSED_DESC
    TBL_B_DIM
    F_ID
    F_DESC
    Table A joins to Table B in two ways:
    TBL_A_FACT.F_ID_HIT = TBL_B_DIM.F_ID
    TBL_A_FACT.F_ID_MISSED = TBL_B_DIM.F_ID
    The F_IDs can be either hit or missed on any given fact record, and the total distinct set exists in the dimension.
    When I define two foreign key joins in the physical layer based on the relationship above and check Global Consistency, I get an error saying "TBL_A and TBL_B have multiple joins defined. Delete duplicate foreign keys if they exist", and it is listed as an error. I guess this makes sense, because when the two tables are used in a request OBIEE would need to know how to join them (using the hit or the missed field). What is the best approach for handling this?
    - Should I define TBL_A twice in the physical layer as:
    TBL_A (Alias TBL_A_HIT)
    F_ID_HIT
    F_ID_HIT_DESC
    TBL_A (Alias TBL_A_MISSED)
    F_ID_MISSED
    F_ID_MISSED_DESC
    Or do something like the above in the BMM layer, and then establish the relationships using these separate tables?
    Thanks for the help!
    K
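    Physically, the alias approach means the dimension plays two roles and is joined once per role. Here is a hedged sketch of the SQL that would effectively result, using the names above (note the more common OBIEE pattern is to alias the dimension TBL_B_DIM once per role, rather than the fact):

        -- One physical dimension table, two aliases, one join per role
        SELECT f.f_id_hit,
               hit.f_desc    AS hit_desc,
               f.f_id_missed,
               missed.f_desc AS missed_desc
        FROM   tbl_a_fact f
               JOIN tbl_b_dim hit    ON f.f_id_hit    = hit.f_id
               JOIN tbl_b_dim missed ON f.f_id_missed = missed.f_id;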

  • Not able to open data model

    Hi
    In the portal I want to design news items, and for that I want to use the XML Forms Builder. I clicked on Content Management and the XML Forms Builder opened; I created a new project, and when I saved it I received an error, but I clicked Yes anyway and it saved. Now when I click on the data schema to open the data model, I am unable to open it; the symbol remains as it is.
    Can anybody please let me know why I am not able to open the data schema to create the data model?
    thanks in advance.

    Hi Krishna,
    Please follow the link below; it has detailed steps to create an XML Forms Builder project.
    Also, check your JRE version as well.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ee639033-0801-0010-0883-b2c76b18583a?QuickLink=index&overridelayout=true
    Thanks.
    Sushil

  • Data modeler: Having Maintainable Query for View Creation

    Hi,
    I am using SQL Developer 4 and the Data Modeler. I typically create a view in SQL Developer (from a SQL SELECT statement). I then drag and drop the view into the Data Modeler in order to have it stored for the next time I regenerate the database schema from the Data Modeler. This works, and I can see the query for my view when I click Properties on the view in the Data Modeler. However, the query I see there, while logically equivalent (and very similar), is hard for a programmer to read: carriage returns are added, the various elements of my WHERE clause are not in the same order as I wrote them, and comments (starting with '--') were not transferred with the drag and drop.
    I feel I cannot rely only on the version of the view stored in the Data Modeler to keep the DDL code that creates the view; maintenance of that code would be too difficult. I feel I will need to keep a separate SQL script (in a separate version control tool) containing the SELECT statement of my view. I would then always need to make my modifications in this SELECT statement, update my view in the database, and then drag and drop the result back into the Data Modeler. This is cumbersome.
    Do you have any solution for me? Thanks!

    Hi Jeff, would you be able to help me in creating a data model? I'm really stuck with this one. Basically I've been asked to create a survey application in Oracle APEX that used to be Excel-based. The info I was given was in the form of an Excel sheet which looks like this:
    USER'S / VENDORS / TOPICS sheet
    Columns: NAME, E-MAIL, then vendors (TSSA, ORACLE, HP, IBM, MS, SAP, INTERGRAPH, CISCO), then topics (Relationship, Contracting, Performance, Architecture, Supplier Feedback), then Comment.
    jjjjjnn Bixxxxxff, [email protected]: Yes, Yes, Yes, Yes, x; Comment: Added as requested by Sanet Mulder
    itha CCCniah, [email protected]: Yes, Yes, Yes, x, x
    Elliot dan, [email protected]: Yes, x, x, x
    Ger que, [email protected]: Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, x, x, x
    isha Per, [email protected]: Yes, x
    John Rato, [email protected]: Yes, x, x
    Where it says Yes, those are the vendors that the person has to assess, and where there's an x, those are the topics the vendors have to be rated on. So, for example, the first person on the list, jjjjjnn Bixxxxxff, will assess TSSA, ORACLE, HP, IBM, MS and SAP on the topic of Architecture, and the second user, itha CCCniah, would rate TSSA, ORACLE and INTERGRAPH on the topics of Relationship and Performance. Any idea how I could data model this to get my table structures right, so that features like completion status could be displayed to the user through APEX (which can only be done with a correct data model)? I have tried normalization but didn't get anywhere, because there are so many variations. Any idea on how you would go about data modelling this would be greatly appreciated; thank you. If you would like a better copy of the table, my email is [email protected]
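    For what it's worth, here is a hedged sketch of one normalized shape that would support this (all names invented; IDENTITY columns need Oracle 12c or later): reference tables for users, vendors and topics; an assignment table with one row per required (user, vendor, topic) cell; and a ratings table whose row count against the assignments gives a completion status for APEX to display.

        CREATE TABLE survey_user (
          user_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
          name    VARCHAR2(100),
          email   VARCHAR2(100)
        );
        CREATE TABLE vendor (
          vendor_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
          vendor_name VARCHAR2(50)   -- TSSA, ORACLE, HP, ...
        );
        CREATE TABLE topic (
          topic_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
          topic_name VARCHAR2(50)    -- Relationship, Contracting, ...
        );
        -- One row per spreadsheet cell: this user must rate this vendor on this topic
        CREATE TABLE assessment_assignment (
          user_id   NUMBER REFERENCES survey_user,
          vendor_id NUMBER REFERENCES vendor,
          topic_id  NUMBER REFERENCES topic,
          PRIMARY KEY (user_id, vendor_id, topic_id)
        );
        -- Captured answers; completion = COUNT(rating) versus COUNT(assignment) per user
        CREATE TABLE rating (
          user_id   NUMBER,
          vendor_id NUMBER,
          topic_id  NUMBER,
          score     NUMBER(2),
          PRIMARY KEY (user_id, vendor_id, topic_id),
          FOREIGN KEY (user_id, vendor_id, topic_id)
            REFERENCES assessment_assignment (user_id, vendor_id, topic_id)
        );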

  • Normalized (3NF) VS Denormalized(Star Schema) Data warehouse :

    What are the benefits of a normalized data warehouse (3NF) over a denormalized one (star schema)?
    If the DW is in 3NF, is it necessary to create a separate physical database containing several data marts (star schemas) with physical tables, which feed the cube? Or should we create views (SSAS data source views) in a star-schema shape on top of the 3NF warehouse, which feed the cube?
    Please explain the pros and cons of a 3NF versus a denormalized DW.
    thanks in advance.
    Zaim Raza.

    Hi Zaim,
    1) Normally a 3NF schema is typical for the ODS layer, which is simply used to fetch data from the sources and to generalize, prepare and cleanse the data for the upcoming load to the data warehouse.
    2) When it comes to the DW layer (data warehouse), the data modeler's general challenge is to build a historical data silo. A star schema with slowly changing facts and slowly changing dimensions is only partially suitable. Data Vault and other similar specialized methods provide, in my opinion, wider possibilities and flexibility.
    3) A star schema is perfectly suitable for data marts. SQL Server 2008 and higher contain numerous query optimizer improvements to handle such workloads efficiently, and SQL Server 2012 introduced columnstore indexes, which make it possible to create robust star-model data marts with SQL query performance comparable to MS OLAP.
    So, your choice is:
    1) Create a solid, consistent DW solution.
    2) Create separate data marts on top of the DW for specific business needs.
    3) Create the necessary indexes, PK and FK keys, and statistics (on the FKs in fact tables) to help the SQL optimizer as much as possible.
    4) Forget the approach of defining an SSAS data source view on top of 3NF (or any other DWH modelling method), since that is the road to performance and maintenance issues in the future.
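    As a small illustration of point 3 (SQL Server syntax, hypothetical names; clustered columnstore indexes arrived with SQL Server 2014, while 2012 offers only the nonclustered variant):

        -- Fact table with proper keys, plus a columnstore index for scan-heavy queries
        CREATE TABLE dbo.FactSales (
          DateKey     INT NOT NULL REFERENCES dbo.DimDate (DateKey),
          ProductKey  INT NOT NULL REFERENCES dbo.DimProduct (ProductKey),
          SalesAmount DECIMAL(18,2) NOT NULL
        );
        CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
          ON dbo.FactSales (DateKey, ProductKey, SalesAmount);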

  • Injecting data into a star schema from a flat staging table

    I'm trying to work out the best approach for getting data from a very flat staging table and loading it into a star schema. I take a row from a table with, for example, 50 different attributes about a person, and then load these into a host of different tables, including linking tables.
    One of the attributes in the staging table will be an instruction to either insert the person and their new data, update a person and some component of their data, or maybe even terminate a person's records.
    I plan to use PL/SQL but I'm not sure of the best approach.
    The staging table data will be loaded every 10 minutes and will contain about 300 updates.
    I'm not sure if I should just select the staging records into a cursor and then insert into the various tables.
    Has anyone got any working examples based on a similar experience?
    I can provide a working example if required.

    The database has some elements that make the SQL a tad harder to write.
    For example:
    CREATE TABLE staging
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL,
    dial_number VARCHAR2(30) NULL,
    Is_Contactable     CHAR(1) NULL);
    INSERT INTO staging
    (person_id, title, initials, forename, middle_name, surname, dial_number, is_contactable)
    VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
    CREATE TABLE person
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKPerson ON Person
    (Person_ID ASC);
    ALTER TABLE Person
    ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
    CREATE TABLE person_comm
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    comm_id NUMBER(10) NOT NULL );
    CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
    (Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
    ALTER TABLE Person_Comm
    ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
    CREATE TABLE person_comm_preference
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    Is_Contactable     CHAR(1) NULL);
    CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
    (Person_ID ASC,Comm_Type_ID ASC);
    ALTER TABLE Person_Comm_Preference
    ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
    CREATE TABLE comm_type
    (comm_type_id NUMBER(10) NOT NULL ,
    NAME VARCHAR2(25) NULL ,
    description VARCHAR2(100) NULL ,
    comm_table_name VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
    (Comm_Type_ID ASC);
    ALTER TABLE Comm_Type
    ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
    insert into comm_type (comm_type_id, NAME, description, comm_table_name) values (23456, 'HOME PHONE', 'Home Phone Number', 'PHONE');
    CREATE TABLE phone
    (phone_id NUMBER(10) NOT NULL ,
    dial_number VARCHAR2(30) NULL);
    Take the record from staging, then update:
    'person'
    'person_comm_preference', based on a comm_type of 'HOME PHONE'
    'person_comm', derived from 'person' and 'person_comm_preference'
    Then update 'phone' with the number, based on a link derived from 'person_comm': 'Comm_ID' (part of that composite primary key) relates to the phone table's primary key, Phone_ID.
    Does your head hurt as much as mine?
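    For what it's worth, at roughly 300 rows per load, a set-based statement per target table is usually simpler and faster than a row-by-row cursor. Here is a hedged sketch against the person table above (the posted DDL has no instruction column, so any filter on insert/update/terminate would be hypothetical):

        -- Upsert person rows straight from staging in one statement
        MERGE INTO person p
        USING (SELECT person_id, title, initials, forename, middle_name, surname
               FROM   staging) s
        ON    (p.person_id = s.person_id)
        WHEN MATCHED THEN UPDATE SET
              p.title       = s.title,
              p.initials    = s.initials,
              p.forename    = s.forename,
              p.middle_name = s.middle_name,
              p.surname     = s.surname
        WHEN NOT MATCHED THEN INSERT
              (person_id, title, initials, forename, middle_name, surname)
        VALUES (s.person_id, s.title, s.initials, s.forename, s.middle_name, s.surname);

    A similar MERGE per child table (person_comm_preference, person_comm, phone), driven by the same staging rows, keeps the whole load to a handful of statements, and terminations could be a separate DELETE keyed on the instruction attribute.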

  • Why do we need SSIS and star schema of Data Warehouse?

    If SSAS in MOLAP mode stores the data, what is the application of SSIS, and why do we need a data warehouse and the ETL process of SSIS?
    I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from the OLTP database to a data warehouse database that contains fact and dimension tables.
    After that I want to create cubes with SSAS from the data warehouse data.
    I know that MOLAP stores data. Do I still need a data warehouse with fact and dimension tables?
    Is it not better to avoid creating a data warehouse and create cubes directly from the OLTP database?

    Another thing to note is that data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in OLTP, as the storage required is minimal, but presenting them as-is would not make any sense to users, as they would not know what each bit value represents. In such cases we apply transformations and convert the data into information that users can understand. This is also done in the warehouse, so that information in the warehouse can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would put load on the report server. In comparison, bringing the data onto a common layer (the warehouse) and prebuilding aggregates is beneficial for report performance.

    I think (not sure) we join tables in SSAS queries and calculate aggregations in it.
    I think SSAS stores these values and joined tables, and we do not need to evaluate those values again; this behavior is like a data warehouse.
    Is it not?
    So if I do not need historical data, can I avoid creating a data warehouse?

    On the backend, SSAS uses queries only to extract the data.
    Btw, I was not explaining SSAS; I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build cubes (OLAP structures) on top of the data warehouse. A star schema is easier for defining relationships and building aggregations inside SSAS, as it is simple and requires minimal lookups. Also, data is held at the lowest granularity level, which can easily be aggregated to the required levels inside the OLAP cubes. Cube processing is very resource-intensive, and using the OLTP system would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds complexity. Pre-creating a layer (the data warehouse) holding data in the required format makes cube processing easier and simpler, as it just has to join the tables and aggregate the data based on the relationships defined and the level needed inside the cube.
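    As a tiny illustration of the bit-flag point above (invented names): the warehouse load decodes the flags once, so every report downstream reads plain values.

        -- Decode OLTP bit flags into user-readable values during the warehouse load
        INSERT INTO dw.customer_dim (customer_id, status_desc)
        SELECT customer_id,
               CASE is_active WHEN 1 THEN 'Active' ELSE 'Inactive' END
        FROM   oltp.customer;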
    Visakh
