Universe Design approach - Dimensional Data model

We use a dimensional data model with about 15 models, one per subject area, e.g. Billing, Claims, Eligibility, etc. Each model has its own fact table linked to dimensions, some of which are conformed dimensions present in multiple models. We want to build Universes on top of this model for creating Crystal Reports, and to expose it to business users so they can create WebI reports through InfoView.
The client has already built 15 Universes, one for each subject area, each with one fact table and many conformed dimensions plus some junk dimensions. When a report needs data from more than one Universe, we have to link the different Universe queries at the report level.
The major drawback of this approach is change management. Our data model will expand in the future, which forces me to update multiple Universes when, say, a conformed dimension changes, since the conformed dimension table is present in multiple Universes.
We are now considering the approaches below to get a better architectural design and an easier user interface.
1. Create a master Universe for the dimension tables (this may require some effort to modify the data model so the dimension tables can be linked together), then create derived Universes for each fact table. These derived Universes will be linked back to the common dimension Universe.
Maintenance is easier with this approach, since a dimension change no longer means updating multiple Universes. But because I am linking Universes at the Designer level as master and derived Universes, I am concerned about report development when a report needs data from multiple Universes; I would then be linking "multiple linked Universe" queries at the report level.

2. The other option is to combine multiple dimensional models (subject areas) into one Universe, creating as few Universes as possible, perhaps ending up with 5 or 6. However, we would have a tough time maintaining security on data elements. For instance, at a high level a Universe may contain both Billing and Eligibility data, where I have to maintain strict security per user group and allow only specific users to see and use all data elements (objects).

I hope I have summarized my question well. Any input on approaches you are aware of, and their pros and cons in terms of build time and report performance (creating WebI reports through InfoView), is appreciated!
We want to see which approach works better for creating Crystal Reports and for business users, who have little patience waiting for a report and need the best possible interface.

There is no one perfect answer to your question. Universes are more of an art than a science, in my opinion. I can tell you that we have many conformed dimensions joined to multiple facts in a single Universe. The key to this approach is that each fact table needs its own context. The advantage is the ease with which your WebI users will be able to build reports. The disadvantage is that Crystal Reports cannot handle multiple contexts, so such a Universe is basically useless in CR. For CR, you will need to build Business Views rather than Universes.
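
To make that concrete, here is a minimal sketch of the pattern described above: two subject-area fact tables sharing one conformed dimension (all table and column names are illustrative, not taken from the poster's model). In the Universe, each fact table would get its own context containing only its own join paths, so Designer never chains Billing to Claims through the shared dimension in a single query.

-- Conformed dimension shared by the Billing and Claims subject areas
CREATE TABLE dim_member (
    member_key   INTEGER PRIMARY KEY,
    member_id    VARCHAR(20),
    member_name  VARCHAR(100)
);

-- Billing fact; its join to dim_member belongs to the "Billing" context
CREATE TABLE fact_billing (
    billing_key    INTEGER PRIMARY KEY,
    member_key     INTEGER REFERENCES dim_member (member_key),
    billed_amount  DECIMAL(12,2)
);

-- Claims fact; its join to dim_member belongs to the "Claims" context
CREATE TABLE fact_claims (
    claim_key    INTEGER PRIMARY KEY,
    member_key   INTEGER REFERENCES dim_member (member_key),
    paid_amount  DECIMAL(12,2)
);

With one context per fact table, a WebI query that mixes billed and paid measures is typically split into one SELECT per context and the result sets are synchronized on the shared dimension objects, which is what keeps a single combined Universe workable for ad hoc users.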

Similar Messages

  • BW Star Schema & Multi-dimensional Data Modelling

    Hi BW Experts,
    Can anyone please let me know where I have to look on help.sap.com or service.sap.com
    for detailed info on the BW star schema and multi-dimensional data modelling and how it is used in BW?
    Please let me know where to find this info.
    Thanks

    hi...
    star schema..
    Please check the threads below..
    Differences between Star Schema and extended Star Schema
    What is the difference between Fact tables F & E?
    Invalid characters errors
    mdm..
    http://help.sap.com/bp_biv133/documentation/Multi-dimensional_modeling_EN.doc
    hope this helps,...

  • Oracle Designer to SQL Data Modeler Migration

    Hi
    We would like to migrate our artifacts from Oracle Designer to SQL Data Modeler.
    Does anybody here have experience with that, and with anything that is possible in Designer but needs to be handled differently in Data Modeler that I should be aware of?

    You might want to try this forum as well: SQL Developer

  • 'Select a measure:' stuck on 'Loading...' in Dashboard Designer KPI Dimensional Data Source Mapping

    [using SharePoint 2013 Enterprise SP1]
    I am trying to create a KPI in Dashboard Designer, but am getting a timeout. I have been doing this for a while on my site; this is not the first. I haven't had this problem before.
    I created a new KPI and clicked on the Data Mappings column value, which is a hyperlink, to bring up the Dimensional Data Source Mapping dialog. I switched to a Data Connection in the site I just created (DC works perfectly and can retrieve sample data).
    When I click the "Select a measure:" drop-down menu, I get the message "Loading..." and after a while (a minute? two?) a dialog pops up with:
    The request took too long to complete. SharePoint is currently unavailable or experiencing heavy traffic. Try again later.
    This is a test SP server and I'm the only one on it, so there is no load. Also, as mentioned, I am able to verify the Data Connection without problem. I am not having any issue with any of my other few dozen KPIs/Data Connections. Any suggestions as to how to troubleshoot?

    Hi cgtyoder,
    According to your description, my understanding is that you got an error when you created a KPI in Dashboard Designer.
    Please try recycling the PerformancePoint Services application pool, then compare the result.
    Please go to C:\inetpub\wwwroot\wss\VirtualDirectories\<port of the web application> and adjust the httpRuntime executionTimeout for the web application by modifying web.config; PerformancePoint report stability is much better afterwards:
    <httpRuntime executionTimeout="600" maxRequestLength="51200" />
    Note: before you change the web.config file, please make a backup of the file.
    If this issue still exists, please go to the log file to find more information about this issue.
    I hope this helps.
    Thanks,
    Wendy
    Forum Support

  • Best approach for Data Modelling.

    Hello Experts
    I am building a Customer Scorecard involving SD and Marketing in BI 7.0.
    There are a couple of existing DSOs, some of which push data into InfoCubes and some of which don't. All the reporting happens from a MultiProvider sitting on top of these data targets.
    The team has an initial design which says that additional DSOs should be created to extract data from the above-mentioned DSOs, based only on the objects that are needed for Customer Scorecard reporting.
    This means I would be creating a couple of new DSOs as per the design currently in place.
    When I suggested only creating a Customer Scorecard MultiProvider on top of the already existing data targets (avoiding recreating additional DSOs and the hassle of loading and activating them and then loading the data into InfoCubes) and then creating the BEx queries on top of that, the lead expressed his concerns about the impact it could have on the existing data model and on subsequent transports once the model is complete.
    What is the best practice to handle a situation like this? I see there are 3 ways to go ahead with this:
    1. Do as the lead said: create additional DSOs (extracting data from the required existing DSOs), push this data into one InfoCube, then create a MultiProvider on top of it (note that there is another, similar data model I need to create which will also be embedded into this MultiProvider) and create BEx reports from there.
    2. Create only the InfoCubes which will extract data from the already existing DSOs (avoiding the creation of additional DSOs) and then create an MP from which BEx reports are created.
    3. Only create a MultiProvider on top of all the required, already existing DSOs and InfoCubes, after checking whether reporting needs aggregated data or not, and then create BEx reports from there (avoiding the creation of additional DSOs and InfoCubes).
    Note: We use Rev-Track to do the Transports.
    Which one do you think would be the best way to go and what could be the implications? Eventually, the reporting is done in WAD.
    Thanks for your time in advance.
    Cheers,
    Chandu

    Hi,
    Cases 1 and 2 are similar; it purely depends on the users' needs.
    You probably already know the difference between a DSO and a cube:
    DSO - holds detailed-level data
    Cube - holds aggregated data.
    Based on your needs, use only one target; there is no need to build a DSO -> cube flow for the existing flows.
    You can decide whether you want to use a DSO or a cube only.
    Case 3: if your requirement can be met with the existing DSOs, and at the reporting level you can manage to get the required output, then you can go with it. But my guess is that the existing targets alone won't be enough for your requirement.
    About transports:
    You can create one Rev-Track request and assign multiple transports to it.
    You can add and release the transports one by one rather than all at a time.
    If you release them all at once you may get an inconsistency issue and the TR won't be released.
    Thanks

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept. But having a strong database background, my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately I didn't find good information material on this, so here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is quite difficult to map a relational database to an MDM one. But again, MDM is not 'just' a database; it holds much more 'master' information than any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    Yes. Here you need to use multivalued (MV) lookup tables, or you can also try qualified tables if it gets more complex.
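    For comparison, the relational shape you describe would usually be modelled with a junction table (the names below are only illustrative, not from your repository); in MDM the junction effectively collapses into a single multivalued lookup field on the Users main table:

    -- Relational version of "a user can have accounts in several SAP systems"
    CREATE TABLE users (
        user_id    INTEGER PRIMARY KEY,
        user_name  VARCHAR(100)
    );

    CREATE TABLE sap_accounts (
        account_id  INTEGER PRIMARY KEY,
        sap_system  VARCHAR(10),
        account_no  VARCHAR(20)
    );

    -- Junction table resolving the many-to-many relationship
    CREATE TABLE user_sap_accounts (
        user_id     INTEGER REFERENCES users (user_id),
        account_id  INTEGER REFERENCES sap_accounts (account_id),
        PRIMARY KEY (user_id, account_id)
    );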
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called Display Fields (DF). A combination of DFs can also be treated as unique. For instance, while importing records, if you select these DFs as a combination you will eliminate any possible duplicates based on that combination. Auto ID is one way to have a unique ID once a record is within MDM, while you use UFs or DFs to eliminate any possible duplicates at import level.
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one. Referential integrity: what I assume you are talking about is that if you have relationships between tables, removing a record will not be possible while it is a foreign key for some other record. MDM does not work that way, as relationships within MDM are physical and not conceptual. For instance, a material can have components. If the material does not exist, then any relationship to its components is not worthwhile to maintain, hence the relationship is eliminated, while in the relational model relationships are more conceptual. So with MDM's use of lookups and the main table you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    No, it is not possible to convert the main table. There is only one main table and it cannot be changed.
    I tried the same option but it did not work. What I suggest is to go through your legacy systems one by one and see which fields can, in general, be classified as master, reference, or transactional data - you will start getting answers immediately.

  • Questions on Subviews and Import/Export in Data Modeler v3 EA1.

    I have a few questions about the capabilities of Data Modeler v3 EA1:
    1) Is it possible to rename subviews? We would like more meaningful names than Relational_x - Subview_x.
    2) Is it possible to save documents at subview level?
    3) Is it possible to import/export subsets of data?
    4) We are having problems importing an Erwin 7 .xml file; are there any known problems with this import?
    Judy

    Hi Judy,
    1) To rename a subview just right click on it in the browser tree and select "Properties". In the properties dialog change the name and click OK button.
    2) You can save a subview as new Data Modeler design - from the File menu select Export -> To Data Modeling Design. In the newly opened dialog select the subview you want to export and click OK button.
    3) After saving a subview as new design (see answer #2) it can be imported in some other design (File -> Import -> Data Modeler Design).
    4) What kind of problems do you have with import of Erwin 7.* xml file?
    Regards,
    Ivan

  • Data modeler in 2.1 - how to export metadata

    How do you do an export in the 2.1 EA for the data modeler? I.e. do a 'bottom-up' model?
    The documentation is written for the standalone datamodeler, file-import/export. How do you do it in 2.1?

    Hi,
    The standalone Data Modeler is a cost option. The Data Modeler Viewer is bundled into SQL Developer 2.1 - it's free. You can do the following with the viewer:
    1) Open designs created with Data Modeler - in read-only mode.
    2) Visualize tables and object types from the SQL Dev browser when there is no open Data Modeler design - select tables/views in the SQL Developer browser tree and drag them onto the relational model window; FKs are imported if the parent table is among the selected tables or is already imported. Object types should be dropped on the Data Types model window. This works for Oracle and SQL Server - we are going to support it for all databases supported by Data Modeler (including JDBC import). You can do some rearrangement of tables, but the rest is in read-only mode - no additional import/export.
    Philip

  • Q:  Data Modeler - Logical Model - 0 or 1 to....relationships...

    Hi,
    I’m currently at a client where we’re using Oracle SQL Developer Data Modeler to design our Logical Data Models (as part of an overall deliverable).
    When in the Oracle SQL Developer Data Modeler (in the logical view), I’m trying to figure out if the tool has the ability to create a “zero or one to” relationship between entities.
    According to the relationship options available, it only seems to show:
    One to One,
    One to Many (*),
    Many (*) to One, or
    Many (*) to Many (*)
    In the preferences – Data Modeler – Diagram – Logical Model, the notation type is set to “Barker”.
    I don’t see anywhere else where configuration can affect the variation of these options…
    Is there a way to show either a “Zero or One to…” relationship between entities?
    I’ve also loaded version 3.1 (beta) and it appears to be behaving the same as 3.0…
    Any insights are greatly appreciated…
    Thanks,
    Patrick

    Hi Kent,
    When I looked at this option, I was expecting to see a "|" and/or "O" in front of the crow's feet. I was not expecting to see a dashed line. Now I know...
    Thanks for your reply.
    Regards,
    Patrick

  • Domains usage in SQL Developer data MODELER

    Hi,
    I'm trying to understand how to use domains in Oracle SQL Developer Data Modeler. We use version 3.1.3. Before this I used Toad Modeler, where domains are just part of your main design.
    Oracle Data Modeler has a somewhat different concept.
    Let's assume I'm working on two designs, DesignA and DesignB, that include relational models.
    DesignA and DesignB should both use domains, but the list of domains in DesignA is very different from the one in DesignB.
    The default domains file is located on the C: drive where SQL Modeler is installed. That is obviously unacceptable, so I need to change the Default System Type directory in the preferences.
    And of course I want to have different domain directories for DesignA and DesignB.
    So when I open DesignA I change the Default System Type directory to, say, x:\AAA. Then I close DesignA, open DesignB, and change the Default System Type directory to x:\BBB.
    I checked folders AAA and BBB and they have the necessary XML files there: defaultdomains.xml, defaultRFDBSSites and so on.
    Now my questions:
    Can I rename defaultdomains.xml to something else like AAAdomains.xml? Domain administration can edit any domains file with any name, but how can I associate a certain domains file with my design? My wish is that when I open my design, the corresponding domains file is opened automatically. Is that possible?
    If I open two designs in SQL Modeler and switch between them, the corresponding domains files should switch automatically as well. Currently I must not forget to change the Default System Type directory every time I switch models. Is that the only way to handle it?
    Thanks
    vitaliy

    Hi Vitaliy,
    We use version 3.1.3
    I always recommend using the latest version. If you don't want to use a beta (DM 4.0 EA is out) you can use DM 3.3.
    Otherwise, Oracle SQL Developer Data Modeler supports two types of domains:
    1) DM installation domains - those in the file defaultdomains.xml
    2) Design-level domains - they are stored in the design directories and are visible to that particular design only. They can be created in the following ways:
    2.1 Manually - there is a property "Domains file"; if it's not set to "defaultdomains", the domain will become a design-level domain and will be stored in a file with the provided name (without the .xml extension).
    You can change the file for design-level domains later; however, you cannot change the file for a domain that is already in defaultdomains.xml.
    2.2 Using the types-to-domains wizard you can generate design-level domains.
    2.3 Design-level domains are created during import of DDL files (controlled in the preferences).
    2.4 You can import domains from a specific domains file using "File > Import > Domains" - you need to rename the source file if it's named defaultdomains.xml, otherwise you'll get the domains as installation domains.
    If the list of domains is too long you can define a list of preferred domains (and/or logical types) in "Preferences > Data Modeler > Model", and you can use the shorter list in the Table/Entity dialog if you check the "Preferred" check box next to the "Type:" combo box.
    If I open two designs in SQL Modeler and switch between them, the corresponding domains files should switch automatically as well
    If you open two designs in one instance of DM they will use the same default domains file, i.e. you'll lose the domains in one of the designs depending on the "system data type directory" setting. You need to go with design-level domains.
    Philip

  • Reuse Dimdate across different databases for different data models

    Hi,
    I am designing a new data model for a data mart. I need to add a dimdate dimension to this data model. I noticed that dimdate already exists in another database and is being used by another data model. Is it possible to re-use the existing dimdate table for my new data model? If so, what about the foreign key constraints? Normally we link the date columns of the fact table to the dimdate keys. How would we achieve that if we are using the same table across different databases?
    Any opinion on this will be highly appreciated.
    Thanks in Advance.
    Cheers!!

    You can create a copy of the dimdate table in your new data warehouse.
    If both data marts were in a single data warehouse, you wouldn't need to copy it; but as they are in two different databases, you should just copy it.
    Regarding the FK relationship: you can connect any fact table to your date dimension. Even if you want to use more than one instance of your date dimension, it is simply a matter of adding multiple FK columns to your fact table (a role-playing dimension).
    For the date dimension, be sure that it covers most of the attributes required. Here is an example of a date dimension:
    http://www.rad.pasfu.com/index.php?/archives/156-Script-to-Generate-and-Populate-Date-Dimension-Version-2-Adding-Multiple-Financial-Years.html
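    As a minimal sketch of the role-playing idea (table and column names here are just illustrative, not from your mart), one physical date dimension can be referenced through several FK columns on the same fact table:

    -- Single physical date dimension
    CREATE TABLE dimdate (
        date_key   INTEGER PRIMARY KEY,   -- e.g. 20130115
        full_date  DATE,
        year_no    INTEGER,
        month_no   INTEGER
    );

    -- The fact table references it twice; each FK plays a different role
    CREATE TABLE fact_orders (
        order_key       INTEGER PRIMARY KEY,
        order_date_key  INTEGER REFERENCES dimdate (date_key),  -- role: order date
        ship_date_key   INTEGER REFERENCES dimdate (date_key),  -- role: ship date
        order_amount    DECIMAL(12,2)
    );

    Note that if the copy lives in a different database, the REFERENCES constraints would point at the local copy, since cross-database foreign keys are generally not supported.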
    Regards,
    Reza
    SQL Server MVP
    Blog: http://rad.pasfu.com

  • Data Modeling Tool

    Does Oracle 9i provide a database modeling tool? If not, which tool on the market would you recommend? Thanks in advance!

    Oracle provides the Designer tool for data modelling.
    However, ERwin and ER/Studio can also be used, as both of them are very good modelling tools.
    Regards

  • 4.0 EA1 - Data Modeler does not open saved models

    Hi,
    When trying to open a Data Modeler design created by Data Modeler 3.3.1.748, the Logical Model is empty (has no objects) and the Relational Model is not opened.  Right clicking on the Relational_1 node and attempting to open the Relational Model yields an empty Relational Model (again no objects).  Since the Relational Model is empty I can't even try to open the Physical Model.
    I am able to successfully import DDL and Data Dictionary objects into a new model, however, after saving the model, If I try opening it with SQL Dev 4.0 or SQL Dev 3.2.20 with the 3.3.1.748 version of Data Modeler, none of the models are properly opened as above.
    Finally, when closing a Data Modeler design, the entire Data Modeler browser tree disappears (the browser pane remains but is empty). The only apparent way to recover the browser tree is to restart SQL Developer.

    I've started from a fresh install of EA1 without importing settings from v3.2, changed my look and feel to match my OS (Windows), enabled logging, and set the logging directory.
    Now when I start DataModeler I get the following:
    2013-07-22 15:46:52,597 [AWT-EventQueue-0] ERROR FileManager - getDataInputStream: Can not read data
    java.io.FileNotFoundException: C:\Program Files\SQLDeveloper\SQLDev-4.0.0.12.27\sqldeveloper\extensions\oracle.datamodeler.4\types\types.xml (The system cannot find the path specified)
            at java.io.FileInputStream.open(Native Method)
            at java.io.FileInputStream.<init>(FileInputStream.java:138)
            at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStream(FileManager.java:285)
            at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStreamWithoutExtension(FileManager.java:254)
            at oracle.dbtools.crest.model.persistence.XMLPersistenceManager.getInputStreamFor(XMLPersistenceManager.java:894)
            at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.getInputStreamFor(AbstractXMLReader.java:216)
            at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.recreateObject(AbstractXMLReader.java:143)
            at oracle.dbtools.crest.model.persistence.XMLPersistenceManager.readSystemInit(XMLPersistenceManager.java:564)
            at oracle.dbtools.crest.model.design.DesignSet.createElement(DesignSet.java:53)
            at oracle.dbtools.crest.swingui.ApplicationView.addDesign(ApplicationView.java:2322)
            at oracle.dbtools.crest.swingui.ApplicationView.<init>(ApplicationView.java:403)
            at oracle.dbtools.crest.swingui.ApplicationView.getInstance(ApplicationView.java:2123)
            at oracle.dbtools.crest.fcp.DataModelerAddin.initialize(DataModelerAddin.java:520)
            at oracle.ideimpl.extension.AddinManagerImpl.initializeAddin(AddinManagerImpl.java:496)
            at oracle.ideimpl.extension.AddinManagerImpl.initializeAddin(AddinManagerImpl.java:483)
            at oracle.ideimpl.extension.AddinManagerImpl.initializeAddin(AddinManagerImpl.java:520)
            at oracle.ideimpl.extension.ExtensionManagerImpl._loadExtensionHooks(ExtensionManagerImpl.java:1948)
            at oracle.ideimpl.extension.ExtensionManagerImpl.__loadExtensionHooks(ExtensionManagerImpl.java:1902)
            at oracle.ideimpl.extension.SameThreadExtensionQueueLoadStrategy.load(SameThreadExtensionQueueLoadStrategy.java:28)
            at oracle.ideimpl.extension.SameThreadAndSwingWorkerMixedStrategy.load(SameThreadAndSwingWorkerMixedStrategy.java:22)
            at oracle.ideimpl.extension.ExtensionManagerImpl.fullyLoadExtension(ExtensionManagerImpl.java:1651)
            at oracle.ide.osgi.extension.internal.ClassLoaderProxy.fullyLoadExtensionIfNeeded(ClassLoaderProxy.java:162)
            at oracle.ide.osgi.extension.internal.ClassLoaderProxy.loadMetaClass(ClassLoaderProxy.java:130)
            at javax.ide.util.MetaClass.toClass(MetaClass.java:104)
            at javax.ide.util.MetaClass.toClass(MetaClass.java:95)
            at javax.ide.util.MetaClass.newInstance(MetaClass.java:138)
            at oracle.ide.javaxide.Util.createInstance(Util.java:62)
            at oracle.ide.javaxide.Util.createInstance(Util.java:42)
            at oracle.ideimpl.controller.MetaClassController.getDelegate(MetaClassController.java:139)
            at oracle.ideimpl.controller.MetaClassController.handleEventWhenExtensionNotInitialized(MetaClassController.java:58)
            at oracle.ideimpl.controller.ControllersHook$RuleBasedController.handleEventWhenExtensionNotInitialized(ControllersHook.java:741)
            at oracle.ideimpl.controller.MetaClassController.handleEvent(MetaClassController.java:50)
            at oracle.ide.controller.IdeAction$ControllerDelegatingController.handleEvent(IdeAction.java:1482)
            at oracle.ide.controller.IdeAction.performAction(IdeAction.java:663)
            at oracle.ide.controller.IdeAction.actionPerformedImpl(IdeAction.java:1153)
            at oracle.ide.controller.IdeAction.actionPerformed(IdeAction.java:618)
            at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2018)
            at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2341)
            at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)
            at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)
            at javax.swing.AbstractButton.doClick(AbstractButton.java:376)
            at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:833)
            at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(BasicMenuItemUI.java:877)
            at java.awt.Component.processMouseEvent(Component.java:6505)
            at javax.swing.JComponent.processMouseEvent(JComponent.java:3321)
            at java.awt.Component.processEvent(Component.java:6270)
            at java.awt.Container.processEvent(Container.java:2229)
            at java.awt.Component.dispatchEventImpl(Component.java:4861)
            at java.awt.Container.dispatchEventImpl(Container.java:2287)
            at java.awt.Component.dispatchEvent(Component.java:4687)
            at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
            at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4492)
            at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
            at java.awt.Container.dispatchEventImpl(Container.java:2273)
            at java.awt.Window.dispatchEventImpl(Window.java:2719)
            at java.awt.Component.dispatchEvent(Component.java:4687)
            at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:735)
            at java.awt.EventQueue.access$200(EventQueue.java:103)
            at java.awt.EventQueue$3.run(EventQueue.java:694)
            at java.awt.EventQueue$3.run(EventQueue.java:692)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
            at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
            at java.awt.EventQueue$4.run(EventQueue.java:708)
            at java.awt.EventQueue$4.run(EventQueue.java:706)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:705)
            at oracle.javatools.internal.ui.EventQueueWrapper._dispatchEvent(EventQueueWrapper.java:169)
            at oracle.javatools.internal.ui.EventQueueWrapper.dispatchEvent(EventQueueWrapper.java:151)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
    2013-07-22 15:46:52,613 [AWT-EventQueue-0] ERROR AbstractXMLReader - Data inputstream is null (path: types name: types)
    For some reason it's trying to read C:\Program Files\SQLDeveloper\SQLDev-4.0.0.12.27\sqldeveloper\extensions\oracle.datamodeler.4\types\types.xml, however prior to running SQL Dev the oracle.datamodeler.4 directory does not exist although an oracle.datamodeler directory (without the dot4) does exist with the types\types.xml file.
    If I rename the oracle.datamodeler directory to oracle.datamodeler.4 then the data model opens partially, though all my relationships are missing and probably other stuff as well.  Further I get the following in the log:
    2013-07-22 15:56:35,902 [AWT-EventQueue-0] INFO  DataModelerAddin - Oracle SQL Developer Data Modeler 4.0.0.812
    2013-07-22 15:56:59,215 [Thread-22] ERROR XMLTransformationManager - java.lang.NoClassDefFoundError: com/adbs/querybuilder/QueryBuilder
    If instead of renaming the oracle.datamodeler directory to oracle.datamodeler.4 I copy it, then everything loads correctly, though models created with SQL Dev 3.2 have errors when opening the Physical Model, but once saved with SQL Dev 4's DataModeler, those errors do not reoccur.
    Is the fact that it's trying to read files from a nonexistent oracle.datamodeler.4 directory a bug?
    If not what should be in the oracle.datamodeler.4 directory?  Obviously not everything in the oracle.datamodeler directory, but at least some of the files are needed.

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
    Dimension1 | Dimension2 | Dimension3 | Obj1 | Obj2 | Quarter
    NULL       | NULL       | NULL       | .99  | 1.8  | 1Q13
    DIM1VAL1   | NULL       | NULL       | .99  | 2.4  | 1Q13
    DIM1VAL1   | DIM2VAL1   | NULL       | .98  | 2.41 | 1Q13
    DIM1VAL1   | DIM2VAL1   | DIM3VAL1   | .97  | 2.3  | 1Q13
    DIM1VAL1   | NULL       | DIM3VAL1   | .96  | 1.9  | 1Q13
    NULL       | DIM2VAL1   | NULL       | .97  | 2.2  | 1Q13
    NULL       | DIM2VAL1   | DIM3VAL1   | .95  | 2.0  | 1Q13
    NULL       | NULL       | DIM3VAL1   | .94  | 3.1  | 1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure, if we were to add a new dimension to the mix the possible combinations would grow dramatically (not flexible). A small SQL sketch of this lookup follows this list.
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
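    To illustrate how the lookup behaves today (assuming the objective table shown above, with NULL meaning "not set at this grain", and a hypothetical table name of objectives), the objective an author gets back is the row whose populated dimensions exactly match the dimensions in their criteria:

    -- Author filters on Dimension1 only: returns Obj1 = .99
    SELECT Obj1, Obj2
    FROM   objectives
    WHERE  Quarter    = '1Q13'
    AND    Dimension1 = 'DIM1VAL1'
    AND    Dimension2 IS NULL
    AND    Dimension3 IS NULL;

    -- Author filters on all three dimensions: returns Obj1 = .97
    SELECT Obj1, Obj2
    FROM   objectives
    WHERE  Quarter    = '1Q13'
    AND    Dimension1 = 'DIM1VAL1'
    AND    Dimension2 = 'DIM2VAL1'
    AND    Dimension3 = 'DIM3VAL1';

    Every new dimension added to the model would need its own column here and a new set of combination rows, which is why the current structure does not scale.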
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi. Yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. If there is any chance of altering the DB (I mean, any chance, by changing config files, that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user), I don't know if it can be done. My problem is that when I configure the JMS topic for the sensor actions, I see blank data coming through; for some reason or other the data is not getting posted. I have used an ESB and a routing service based on the schema which I am monitoring. Can anyone help?

  • Data Model Design

    Hi Experts,
    In my current project I need to design the data model and create the data flow strategy for the SD, MM, PP, and FI modules from R/3. The client wants to use BOBJ on top of BI InfoCubes/reports. Based on the KPIs given, I need to check data availability in R/3, do the data extraction and data model analysis, and submit the documentation for that. Please guide me on how to approach this step by step so that I can go ahead with a clear-cut strategy. If there is any documentation, please share it with me.
    Regards
    Prasad

    You can find the ASAP methodology and accelerators related to data modelling here:
    https://websmp203.sap-ag.de/roadmaps
    See the "ASAP Implementation Roadmap for SAP Exchange Infrastructure"; there is new roadmap content for SAP Business Intelligence.
    Regards,
    Sergio
